ChatGPT-style AI system coming to Apple iPhones, iPads? Report hints AppleGPT is in the works

Apple has published a research paper suggesting that ChatGPT-style Large Language Models (LLMs) could run on devices like iPhones and iPads. Know all about the so-called AppleGPT.

By: HT TECH
| Updated on: Dec 22 2023, 18:11 IST
Apple’s latest research paper hints at LLMs potentially running on limited-memory devices such as iPhones and iPads. (AP)

Artificial intelligence (AI) has been the buzzword of 2023, and companies are racing to incorporate the technology into their products. Earlier this year, it was reported that Apple had developed an internal ChatGPT-like service that helps employees test new features, summarize text, and answer questions based on the data it has learned. In July, Mark Gurman claimed that Apple was working on its own AI model, built on a new framework called Ajax; the ChatGPT-like app, nicknamed "Apple GPT," is just one of the many possibilities the Ajax framework could enable. Now, a research paper published by Apple hints that large language models (LLMs) could run on Apple devices, including the iPhone and iPad!

LLMs on iPhone

The research paper (first spotted by VentureBeat) is titled “LLM in a flash: Efficient Large Language Model Inference with Limited Memory”. It tackles the key challenge of running LLMs on-device, especially on hardware with limited DRAM capacity. For the unaware, LLMs contain billions of parameters, so running them on devices with restricted DRAM is difficult. To solve this, the paper proposes storing the model parameters in flash memory and bringing them into DRAM only on demand.
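The idea of keeping weights on flash and paging in only what is needed can be sketched with memory-mapped files. This is an illustrative analogy, not Apple's implementation: the file name, layer dimensions, and helper function below are all hypothetical.

```python
import numpy as np

# Hypothetical layer dimensions for illustration.
ROWS, COLS = 10_000, 512

# Simulate weights stored on "flash" (here, a file on disk).
rng = np.random.default_rng(0)
weights = rng.standard_normal((ROWS, COLS)).astype(np.float32)
np.save("weights.npy", weights)

# Memory-map the file: no weight data is read into DRAM yet.
flash_weights = np.load("weights.npy", mmap_mode="r")

def run_sparse_layer(x, active_rows):
    """Compute outputs only for the neurons predicted to be active,
    so only those rows of the weight matrix are paged into DRAM."""
    w = flash_weights[active_rows]  # only these rows are actually read
    return w @ x

x = rng.standard_normal(COLS).astype(np.float32)
out = run_sparse_layer(x, active_rows=[3, 42, 978])
print(out.shape)  # (3,)
```

The point of the sketch is that the full 10,000-row matrix never has to occupy DRAM; the OS pages in only the rows touched per step, which mirrors the paper's goal of serving models larger than available memory.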


Keivan Alizadeh, a Machine Learning Engineer at Apple and lead author of the paper, said, “Our method involves constructing an inference cost model that harmonizes with the flash memory behavior, guiding us to optimize in two critical areas: reducing the volume of data transferred from flash and reading data in larger, more contiguous chunks.”


The team used two principal techniques: windowing and row-column bundling. Windowing reuses previously activated neurons to reduce data transfer, while row-column bundling increases the size of the data chunks read from flash memory. Together, these techniques yielded a 4-5x increase in inference speed on the Apple M1 Max SoC.
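The windowing idea can be sketched as a small cache that keeps the weights of recently active neurons in DRAM and fetches from flash only neurons that are newly activated. The class name, window size, and callback below are my own illustrative choices, not details from the paper.

```python
from collections import OrderedDict

WINDOW = 3  # hypothetical: keep the last 3 tokens' active neurons cached

class NeuronCache:
    """Illustrative windowing sketch: reuse weights of neurons that were
    active for recent tokens instead of re-reading them from flash."""

    def __init__(self, fetch_from_flash):
        self.fetch = fetch_from_flash  # callback: neuron id -> weights
        self.cache = OrderedDict()     # neuron id -> weights (in DRAM)
        self.history = []              # active-neuron sets, one per token

    def load(self, active_neurons):
        fetched = 0
        for n in active_neurons:
            if n not in self.cache:
                self.cache[n] = self.fetch(n)  # slow flash read
                fetched += 1
        # Slide the window and evict neurons unused in the last WINDOW tokens.
        self.history.append(set(active_neurons))
        if len(self.history) > WINDOW:
            self.history.pop(0)
        live = set().union(*self.history)
        for n in list(self.cache):
            if n not in live:
                del self.cache[n]
        return fetched  # number of flash reads for this token

cache = NeuronCache(fetch_from_flash=lambda n: f"w{n}")
print(cache.load([1, 2, 3]))  # 3 flash reads: cache is empty
print(cache.load([2, 3, 4]))  # 1 flash read: only neuron 4 is new
```

Because consecutive tokens tend to activate overlapping sets of neurons, reusing the cached ones cuts the volume of data transferred from flash per token, which is exactly the bottleneck the quoted cost model targets.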

In theory, this context-adaptive loading could pave the way for running LLMs on devices with limited memory such as iPhones and iPads.

First Published Date: 22 Dec, 18:11 IST