Cloud computing is pretty much the opposite of private, and yet Apple assures us that it has achieved the impossible, using the usual combination of Apple hardware and software. The result is surprisingly robust, but there are still a few dangers.
Apple made a lot of promises about Apple Intelligence privacy in iOS 18, including plenty of on-device processing, Private Cloud Compute, and warnings before you’re kicked into the wild west of ChatGPT. But can AI ever really be private?
"Based on [Apple's Private Cloud Compute whitepaper], I would say that they appear to have designed a very tightly constrained architecture at both the hardware and software levels, from the client/endpoint all the way down to the processing. Given the way the architecture is described, it seems likely that it would be extremely difficult for an attacker to steal data within the PCC space," Clyde Williamson, senior product security architect at data security firm Protegrity, told Lifewire via email. "The remaining risk is really how much we trust Apple. When we pass information to someone else, there is always an assumption of trust at the bottom of the security model."
Apple’s AI products come in three tiers, with increasing levels of trust required as you escalate. First, there are operations that happen entirely on the device—for example, Siri can use your contacts and calendar events to make suggestions. Then there’s Apple’s “Private Cloud Compute,” which sends queries to the cloud and runs them on Apple-controlled servers built on Apple Silicon chips. The third tier is where all bets are off: with your permission, your data is sent to ChatGPT (or other future partners) when you choose to use those services.
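To make that escalation concrete, here’s a minimal Swift sketch of how tiered routing like this might be modeled. None of these types or functions (ProcessingTier, AIRequest, route) are Apple APIs—they’re hypothetical, and the consent closure simply stands in for the warning shown before a query leaves for ChatGPT.

```swift
import Foundation

// Hypothetical model of the three processing tiers described above.
// These are illustrative types only, not Apple's actual APIs.
enum ProcessingTier {
    case onDevice                  // data never leaves the iPhone or Mac
    case privateCloudCompute       // Apple-run servers on Apple Silicon
    case thirdParty(name: String)  // e.g. ChatGPT; needs explicit consent
}

struct AIRequest {
    let prompt: String
    let needsLargeModel: Bool       // too big for the on-device model?
    let needsExternalService: Bool  // user asked for a partner service?
}

// Decide where a request may be processed. The third tier is reached only
// if the user explicitly agrees; otherwise the request is declined.
func route(_ request: AIRequest,
           userConsentsToThirdParty: () -> Bool) -> ProcessingTier? {
    if request.needsExternalService {
        return userConsentsToThirdParty() ? .thirdParty(name: "ChatGPT") : nil
    }
    return request.needsLargeModel ? .privateCloudCompute : .onDevice
}

// Example: a simple calendar question stays on device.
let request = AIRequest(prompt: "When is my next meeting?",
                        needsLargeModel: false,
                        needsExternalService: false)
if let tier = route(request, userConsentsToThirdParty: { false }) {
    print("Processed at tier: \(tier)")
} else {
    print("Request declined by user")
}
```

The point of the sketch is the ordering: nothing escalates past the device unless the request demands it, and nothing reaches a third party without an explicit yes from the user.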