Recent research papers have revealed Apple’s efforts to bring artificial intelligence (AI) tools to the iPhone, a step toward stronger on-device capabilities. While Apple remains tight-lipped about its plans, a research paper titled “LLM in a Flash,” released on December 12, describes how an AI model can be developed and run locally on an iPhone. The approach keeps all data analysis and processing on the device itself, sparing battery consumption and eliminating the need for a connection to cloud servers.
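The paper’s central idea, as its title suggests, is to keep the model’s weights in flash storage and pull only the slices needed for the current token into RAM, exploiting the sparsity of transformer activations (the authors describe techniques they call “windowing” and “row-column bundling”). Below is a minimal sketch of the underlying load-on-demand pattern; the file name, layer shape, and 5% sparsity level are assumptions of this sketch, not details from Apple’s paper.

```python
import numpy as np

# Toy illustration of flash offloading: keep the weight matrix on
# disk and page in only the rows that are actually needed.
ROWS, COLS = 1024, 4096  # illustrative layer dimensions

# Create a dummy weight file once so the example is self-contained.
np.memmap("weights.bin", dtype=np.float16, mode="w+",
          shape=(ROWS, COLS)).flush()

# Memory-map it read-only: the data stays in flash/SSD and the OS
# loads only the pages we actually touch.
weights = np.memmap("weights.bin", dtype=np.float16,
                    mode="r", shape=(ROWS, COLS))

def sparse_matvec(x, active_rows):
    """Accumulate only the rows a sparsity predictor marked active,
    so most of the matrix is never read into RAM."""
    out = np.zeros(COLS, dtype=np.float32)
    for r in active_rows:
        out += x[r] * weights[r].astype(np.float32)  # one row read per hit
    return out

rng = np.random.default_rng(0)
active = rng.choice(ROWS, size=ROWS // 20, replace=False)  # ~5% of rows
y = sparse_matvec(rng.standard_normal(ROWS).astype(np.float32), active)
print(y.shape)  # (4096,)
```

The paper reportedly shows that this style of inference lets models up to roughly twice the size of available DRAM run on-device.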
This is not Apple’s first push to bring AI to its devices with an emphasis on on-device data processing to safeguard user privacy. A Bloomberg report from October of the previous year described Apple’s AI team working on a new version of the Siri assistant built on generative AI, with an anticipated rollout to users by 2024.
The report also hinted at internal concerns at Apple about how the technology behind the upgraded Siri should be applied, suggesting it may take longer to reach other services and applications.
Bloomberg further highlighted Apple’s creation of a generative AI model for text named Ajax, internally dubbed “Apple GPT,” comparable to OpenAI’s ChatGPT. Apple has permitted its employees to use the platform internally for text-based tasks.
In addition to the “LLM in a Flash” paper, Apple published two more research papers this month. One of them centers on a model called HUGS (Human Gaussian Splats), designed to generate animated, three-dimensional digital humans. The model works from short, single-angle videos of typically 50 to 100 frames and needs only around 30 minutes to produce a complete 3D avatar, separating the person from the recorded scene so the result can be freely animated.
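For a sense of scale, the 50-to-100-frame input mentioned above is small enough to sample from a phone clip in a few lines. Here is a minimal sketch of that preprocessing step using OpenCV; the file name and frame budget are assumptions, and the sampling itself is generic rather than HUGS’s actual pipeline.

```python
import cv2

# Generic frame sampling, not HUGS's actual pipeline: "clip.mp4"
# and the 60-frame budget are illustrative assumptions.
FRAME_BUDGET = 60  # the paper's inputs are roughly 50-100 frames

cap = cv2.VideoCapture("clip.mp4")
total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
step = max(1, total // FRAME_BUDGET)

frames = []
for i in range(0, total, step):
    cap.set(cv2.CAP_PROP_POS_FRAMES, i)  # seek to the i-th frame
    ok, frame = cap.read()
    if ok:
        frames.append(frame)
cap.release()
print(f"sampled {len(frames)} of {total} frames")
```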
With HUGS, Apple’s research team dramatically sped up the creation of complete three-dimensional models of the human body from single-camera video, reporting training and rendering roughly 100 times faster than comparable AI models.
The technique streamlines extracting people from a video and compositing them into entirely new footage, opening up applications across industries: 3D avatars that make remote meetings more lifelike, for example, or new forms of content creation using nothing more than a smartphone camera.
The second research paper focuses on new techniques for running large language models (LLMs) locally on devices with limited memory, such as smartphones and personal computers. Notably, this works without the datacenter-scale processing and model-serving infrastructure that cloud platforms like OpenAI’s ChatGPT depend on.
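A quick back-of-the-envelope calculation shows why memory, rather than compute, is the binding constraint on phones; the model size, RAM figure, and active fraction below are illustrative assumptions, not numbers from the paper.

```python
# Back-of-the-envelope memory math for on-device LLM inference.
# All figures are illustrative assumptions, not from Apple's paper.
params = 7e9                     # a 7B-parameter model
weights_gb = params * 2 / 1e9    # 2 bytes per weight at fp16
print(f"fp16 weights: {weights_gb:.0f} GB")          # ~14 GB

device_ram_gb = 8                # typical high-end phone, shared with the OS
print(f"fits in RAM? {weights_gb < device_ram_gb}")  # False

# If weights live in flash and only a small active slice is paged in
# per token, the resident working set shrinks dramatically.
active_fraction = 0.05           # e.g., ~5% of neurons fire per token
print(f"working set: {weights_gb * active_fraction:.1f} GB")  # ~0.7 GB
```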
Earlier this year, Apple’s CEO, Tim Cook, confirmed the company’s strategic focus on artificial intelligence, with substantial investments in generative AI technologies. Reports have also suggested that Apple spends millions of dollars a day to train and operate its AI models.
An October research paper, a collaboration between researchers at Apple and Columbia University, unveiled progress on a versatile multimodal model named Ferret. The project is designed to run on mobile devices such as smartphones and is promising for several reasons: it can deliver its full capability without requiring heavy processing and storage, and it has been released as open source, a notable departure for Apple in the field of artificial intelligence.