Here’s How Apple Plans to Protect Your AI Data

It’s no secret that Apple is working on artificial intelligence features that will come in iOS 18 and macOS 15. When you update your iPhone, iPad, and Mac later this year, you may find a more natural-sounding Siri, or be able to create emoji based on what you’re talking about in Messages. Pretty cool, but how will Apple protect your data while the AI behind all these new features is processing it?

While reports say Apple will run many of these features on-device, at least on its newer products, rumors also say the company plans to offload much of the processing to the cloud. That would be in line with the rest of the industry: most AI processing happens in the cloud today, simply because AI workloads are so computationally intensive. It’s also why companies keep expanding the capabilities of their NPUs (neural processing units), specialized processors dedicated to AI tasks. Apple has been shipping NPUs for years, but earlier this year it put on a big show to tout the powerful NPU in its new M4 chip, while Microsoft launched a new AI-PC standard with its Copilot+ PC line.

Running AI on-device is more secure

Of course, it probably doesn’t matter to you whether your AI features run on your phone or in the cloud, as long as the features work as expected. The catch, however, is that running these features on-device makes for a more secure experience. By moving processing to the cloud, companies risk exposing user data to anyone with access, especially when the service doing the processing needs to decrypt that data first. The exposure extends to employees of the company in question, as well as to attackers who may try to break into the company’s cloud servers and steal whatever customer information they can find.

This is already an issue with services like ChatGPT, and it’s why I advise against sharing any personal information with most cloud-based AI services: your conversations are not private, and everything you type is sent to those servers for both storage and training of the AI model. Companies invested in user privacy, such as Apple, prefer on-device solutions whenever possible, because they can demonstrate that isolating user data on a phone, tablet, or computer keeps it from falling into the wrong hands.

How Apple will use Secure Enclave to protect AI data

While Apple’s newest hardware may be powerful enough to run the AI features the company is developing, older devices and especially power-hungry features may force the company to turn to cloud servers in order to offer those features at all. However, if The Information’s report cited by Android Authority is correct, the company may have found a solution: the Secure Enclave.

The Secure Enclave is already part of the hardware of most Apple products in use today. It is a region of the SoC (system on a chip) kept separate from the main processor, and its job is to store the most sensitive information, such as encryption keys and biometric data. That way, even if the main processor is compromised, the Secure Enclave ensures attackers cannot access its data.
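The Secure Enclave isn’t just a passive vault, either; apps already use it today through Apple’s Security framework. Here’s a minimal Swift sketch (standard, documented API calls, nothing specific to Apple’s AI plans; the application tag is made up) that generates a private key living entirely inside the enclave. Even the app that created the key can only ask the enclave to use it, never read it out:

```swift
import Foundation
import Security

// Ask the Security framework to generate a P-256 private key inside
// the Secure Enclave. The private key never leaves the enclave; the
// app receives an opaque reference it can use for signing/decryption.
let attributes: [String: Any] = [
    kSecAttrKeyType as String: kSecAttrKeyTypeECSECPrimeRandom,
    kSecAttrKeySizeInBits as String: 256,
    kSecAttrTokenID as String: kSecAttrTokenIDSecureEnclave,
    kSecPrivateKeyAttrs as String: [
        kSecAttrIsPermanent as String: true,
        // Hypothetical tag, used only to look the key up again later.
        kSecAttrApplicationTag as String: Data("com.example.demo-key".utf8),
    ],
]

var error: Unmanaged<CFError>?
guard let privateKey = SecKeyCreateRandomKey(attributes as CFDictionary, &error) else {
    fatalError("Key generation failed: \(error!.takeRetainedValue())")
}

// The public half can be exported and shared; the private half cannot.
let publicKey = SecKeyCopyPublicKey(privateKey)
```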

According to The Information, Apple is working on an AI-cloud solution that would send user AI data to the Secure Enclaves of M2 Ultra and M4 Macs running in its server farms. There, those Mac servers could process the request while keeping the data shielded from everything outside the enclave, then send the results back to the user. In theory, this process would keep user data safe while also giving older devices access to Apple’s latest AI features.
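If the report is accurate, the rough shape would resemble ordinary end-to-end encryption: the request is encrypted before it leaves your device and only becomes readable again inside the server’s enclave. Here’s a minimal CryptoKit sketch of that general idea. To be clear, this is an illustration under assumptions, not Apple’s actual protocol: the key exchange, the request format, and names like `sessionKey` are all invented for the example.

```swift
import CryptoKit
import Foundation

do {
    // Assumed for illustration: a symmetric key the device and the
    // server's enclave have already agreed on (e.g. via a key exchange).
    let sessionKey = SymmetricKey(size: .bits256)

    // The AI request is encrypted on-device before it is sent anywhere.
    let aiRequest = Data("Suggest an emoji for this conversation".utf8)
    let sealed = try AES.GCM.seal(aiRequest, using: sessionKey)

    // `combined` (nonce + ciphertext + auth tag) is all that travels over
    // the wire; without the key, a cloud operator sees only ciphertext.
    let wireBytes = sealed.combined!

    // Server side (inside the enclave, per the report): decrypt, process,
    // then encrypt the result the same way before sending it back.
    let received = try AES.GCM.SealedBox(combined: wireBytes)
    let plaintext = try AES.GCM.open(received, using: sessionKey)
    print(String(decoding: plaintext, as: UTF8.self))
} catch {
    print("Crypto error: \(error)")
}
```

The key design question, which the report doesn’t answer, is who holds the keys: if only the enclaves can decrypt, not even Apple’s own cloud staff could read your requests.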

We won’t know for sure whether this is Apple’s plan until the company reveals what it’s working on at WWDC, if it does at all. If Apple stays quiet about how it will protect AI user data, we may never know for sure. But given that Apple positions itself as a company that cares about user privacy, this approach (or any approach that offers end-to-end encryption for cloud data) would make a lot of sense.
