
Combining AI with the iPhone? Apple's latest paper offers a breakthrough solution

米哈伊尔叔叔

Apple GPT in your pocket? It may become a reality sooner than you think.
Apple artificial intelligence (AI) researchers recently published a paper on the preprint site arXiv describing an innovative flash-memory technique for deploying large language models (LLMs) on iPhones and other memory-limited Apple devices, which would amount to a major breakthrough.
Memory constraints
LLM-based chatbots such as ChatGPT and Claude are data- and memory-hungry: they process large amounts of data at once and typically need a great deal of memory to run.
Running an LLM is therefore a challenge on devices such as the iPhone, whose DRAM (main memory) capacity is limited.
The standard approach is to load a model's data from flash storage into DRAM and then run inference entirely in DRAM.
DRAM is far faster than flash, but its capacity is much smaller, and keeping everything in DRAM severely limits the maximum model size that can be run.
To address this, Apple's researchers developed a new technique that keeps the AI model's data in the larger-capacity flash storage and transfers it into DRAM only when it is needed for processing.
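To illustrate the general idea, here is a minimal, hypothetical sketch in Python (not Apple's implementation; the file name, matrix shape, and load_rows helper are invented for illustration). A memory-mapped weight file lets code pull only the rows it currently needs from flash-backed storage into DRAM:

```python
# Hypothetical illustration: weights live in a flash-backed file, and only
# the rows needed for the current computation are copied into DRAM.
import numpy as np

ROWS, COLS = 10_000, 1024            # assumed layer shape (illustrative)
WEIGHTS_PATH = "layer0_weights.npy"  # assumed file stored in flash

# Create a dummy weight file once so the example is self-contained.
np.lib.format.open_memmap(WEIGHTS_PATH, mode="w+",
                          dtype=np.float16, shape=(ROWS, COLS))

# mmap keeps the data on flash; the OS pages rows into DRAM on access.
weights = np.load(WEIGHTS_PATH, mmap_mode="r")

def load_rows(row_ids):
    """Copy just the requested rows into DRAM for computation."""
    return np.asarray(weights[row_ids])  # triggers reads for these rows only

active = load_rows([3, 17, 42])          # a few "needed" rows, not the whole matrix
print(active.shape)                      # (3, 1024)
```

The key point is that the full matrix never has to fit in DRAM at once; only the pieces touched by the current computation are brought in.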
Storing AI on flash memory
In a new research paper titled "LLM in a flash: Efficient Large Language Model Inference with Limited Memory," the authors note that flash storage in mobile devices is far more plentiful than the DRAM traditionally used to run LLMs.
Their method cleverly works around the capacity limit. The paper proposes two key techniques to minimize data transfer from flash and maximize flash throughput:
The first is a "windowing" technique, which works like a form of recycling: instead of loading fresh data for every new step, the model reuses data it has recently processed within a sliding window, so only the small difference needs to be fetched. This cuts down on constant memory transfers and makes the process faster and smoother.
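A minimal sketch of the idea, under the assumption that "already processed data" means the values activated for the last few steps (the window size, the fetch_from_flash stand-in, and the data structures are invented for illustration and are not the paper's code):

```python
# Hypothetical sketch of "windowing": keep the neurons used for the last
# few tokens cached in DRAM, and fetch from flash only the newly needed ones.
from collections import deque

WINDOW = 5                     # assumed: how many recent tokens to remember
recent = deque(maxlen=WINDOW)  # per-token sets of active neuron ids
cache = {}                     # neuron id -> weights already in DRAM

def fetch_from_flash(neuron_id):
    # Stand-in for an actual flash read of one neuron's weights.
    return f"weights[{neuron_id}]"

def process_token(active_neurons):
    cached = set().union(*recent) if recent else set()
    new_ids = set(active_neurons) - cached       # only the delta is loaded
    for nid in new_ids:
        cache[nid] = fetch_from_flash(nid)
    recent.append(set(active_neurons))
    # Evict anything no longer used by any token in the window.
    still_needed = set().union(*recent)
    for nid in list(cache):
        if nid not in still_needed:
            del cache[nid]
    return len(new_ids)

print(process_token({1, 2, 3}))   # 3 flash reads
print(process_token({2, 3, 4}))   # 1 flash read: only neuron 4 is new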
The second is a "row-column bundling" technique, which groups data more effectively: the order in which data chunks are stored and accessed is arranged around the characteristics of flash memory, so related pieces can be read in fewer, larger contiguous reads. That speeds up how quickly the AI can understand and generate language.
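A toy sketch of what such bundling could look like, assuming the paired data are the i-th row of one projection matrix and the i-th column of the next (the shapes and names here are illustrative, not taken from the paper's implementation):

```python
# Hypothetical sketch of "row-column bundling": store the i-th row of one
# matrix and the i-th column of the next one contiguously, so using neuron i
# costs a single larger read instead of two scattered ones.
import numpy as np

HIDDEN, FFN = 8, 32                                     # assumed tiny dimensions
up = np.random.rand(FFN, HIDDEN).astype(np.float16)     # up-projection rows
down = np.random.rand(HIDDEN, FFN).astype(np.float16)   # down-projection columns

# Bundle: one contiguous record per neuron i = [up row i | down column i].
bundled = np.concatenate([up, down.T], axis=1)          # shape (FFN, 2 * HIDDEN)

def read_neuron(i):
    """One contiguous read returns both halves for neuron i."""
    record = bundled[i]
    return record[:HIDDEN], record[HIDDEN:]             # up row, down column

up_row, down_col = read_neuron(5)
assert np.array_equal(up_row, up[5]) and np.array_equal(down_col, down[:, 5])
```

The design choice being illustrated is simply that flash rewards large sequential reads, so laying out data the way it will be consumed reduces the number of separate accesses.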
According to the paper, the combination of these methods allows models up to twice the size of the iPhone's available DRAM to be run. Compared with naive loading, this raises inference speed by 4-5x on the CPU and by an impressive 20-25x on the GPU.
The paper's authors write: "This breakthrough is particularly important for deploying advanced LLMs in resource-limited environments, thereby expanding their applicability and accessibility."
Apple's AI Strategy
This gain in AI efficiency opens new possibilities for future iPhones, such as more advanced Siri features, real-time language translation, sophisticated AI-driven photography, and augmented reality capabilities.
The techniques in the paper also lay the groundwork for iPhones to run complex AI assistants and chatbots on-device, something Apple is reportedly already developing.
Apple's work on generative AI may eventually be integrated into its voice assistant, Siri. Apple introduced its large language model work to employees at an internal AI summit in February. According to earlier media reports, Apple's goal is to launch a smarter version of Siri that is deeply integrated with AI.
There are also rumors that Apple plans to add artificial intelligence to as many Apple applications as possible.
In addition, Apple is reportedly developing its own generative AI model, "Ajax," which runs on 200 billion parameters and is intended to compete with OpenAI's GPT-4.
Known internally as "Apple GPT," Ajax aims to unify machine learning development across Apple, underscoring the company's broader strategy of weaving AI more deeply into its ecosystem.
According to the latest reports, Ajax is considered more capable than the earlier GPT-3.5, although OpenAI's newer GPT-4 may already have surpassed it.
Apple supply-chain analyst Jeff Pu has said that Apple will launch some form of generative AI feature on iPhones and iPads around the end of 2024, to be included in iOS 18. Pu also said Apple would build several hundred AI servers in 2023, with more to follow in 2024.