Privacy Guides · 8 min read

Apple Intelligence Privacy: What's Processed On-Device vs in the Cloud

Hassanain

Apple Intelligence is Apple's answer to the AI race, and to their credit, they took a very different approach than most of the industry. Instead of vacuuming up your data and processing it on some distant server farm, Apple built a system that tries to keep as much as possible on your device.

I want to be fair about this. Apple genuinely did some impressive engineering work here. But after digging into the details, including research from security firms that have actually analyzed the network traffic, the picture is more complicated than Apple's marketing suggests.

Let me walk through exactly what happens with your data when you use Apple Intelligence on your Mac.

What Apple Intelligence actually is

Apple Intelligence is a suite of AI features built into macOS, iOS, and iPadOS. It powers things like text summarization, writing assistance, photo editing, notification prioritization, image generation with Image Playground and Genmoji, and a significantly upgraded Siri.

The key architectural decision Apple made is a three-tier processing model. Every request goes through a decision point: can this be handled on-device? If not, can Apple's own servers handle it? And if not, should it be routed to a third-party model like ChatGPT?

That tiered approach is genuinely thoughtful. The question is how well it works in practice.
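The decision point can be sketched in a few lines. Everything below is illustrative: the function name, the inputs, and the tier labels are my own stand-ins for how the routing is described publicly, not Apple's actual implementation.

```python
# Hypothetical sketch of Apple Intelligence's three-tier routing decision.
# Names and structure are illustrative, not Apple's real code.

from enum import Enum
from typing import Optional

class Tier(Enum):
    ON_DEVICE = "on-device model (~3B parameters)"
    PRIVATE_CLOUD_COMPUTE = "Apple's PCC servers"
    THIRD_PARTY = "external model (e.g. ChatGPT)"

def route_request(fits_on_device: bool,
                  apple_models_can_handle: bool,
                  user_approved_third_party: bool) -> Optional[Tier]:
    """Decide where a request is processed, preferring local handling."""
    if fits_on_device:
        return Tier.ON_DEVICE
    if apple_models_can_handle:
        return Tier.PRIVATE_CLOUD_COMPUTE
    # Third-party routing requires explicit per-request user consent.
    if user_approved_third_party:
        return Tier.THIRD_PARTY
    return None  # the request is declined rather than silently sent out

print(route_request(True, True, False).value)  # -> on-device model (~3B parameters)
```

The important design property is the ordering: each tier is only considered when the more private one cannot handle the request.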

Tier 1: What stays on your Mac

Apple runs an approximately 3-billion-parameter language model directly on your device. This is a real, capable model running on Apple Silicon, and it handles a surprising amount of the work.

Features processed entirely on-device:

  • Text summaries and notification previews. When Apple Intelligence summarizes an email, a message thread, or a notification, that processing happens locally. Your message content does not leave your Mac.
  • Predictive text and autocomplete. The typing suggestions you see across the system are generated on-device.
  • Photo curation and search. When you search for "beach photos from last summer" in the Photos app, that understanding happens locally using on-device models.
  • Basic Writing Tools. Simple proofreading and tone adjustments can be handled by the on-device model.
  • Notification prioritization. Apple Intelligence reads your notifications to decide which ones are urgent. This analysis stays on your device.
  • App intent understanding. When Siri figures out which app you are trying to interact with, that routing logic runs locally.

The on-device model has access to your personal context, including your calendar events, messages, frequently used apps, and contacts. This is what makes features like smart summaries and contextual Siri responses possible. All of that personal context stays on your device. Apple cannot see it.

I will give Apple genuine credit here. Running a capable language model on-device is not trivial engineering, and the fact that they prioritized this over just sending everything to the cloud is meaningful.

Tier 2: Private Cloud Compute

When a request is too complex for the on-device model, Apple Intelligence routes it to Private Cloud Compute, or PCC. This is where things get technically interesting.

PCC runs on dedicated Apple Silicon servers in Apple's data centers. These are not regular servers. They are purpose-built machines with several specific privacy constraints.

How PCC works:

  1. Your device determines that a request needs cloud processing.
  2. The PCC client on your Mac encrypts the request directly to the public keys of specific PCC server nodes.
  3. Before sending anything, your device cryptographically verifies that the target server is running publicly auditable software.
  4. The request is processed on the server.
  5. Results are returned to your device.
  6. The server deletes all data related to your request. There is no persistent storage on PCC nodes.
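The steps above can be compressed into a toy sketch. All names here are hypothetical, and this is a deliberate simplification: real PCC uses Apple's own attestation formats and encrypts each request directly to the node's public key, which I have omitted; the hash check stands in for the attestation step, and `upper()` stands in for the actual model inference.

```python
# Toy sketch of the PCC request flow; not Apple's implementation.

import hashlib

# The client trusts only software images whose hashes appear in a
# publicly auditable set (hypothetical contents, standing in for
# Apple's published PCC software images).
PUBLISHED_IMAGES = {hashlib.sha256(b"pcc-build-42").hexdigest()}

def verify_attestation(node_image: bytes) -> bool:
    """Step 3: the device checks the node runs publicly auditable software."""
    return hashlib.sha256(node_image).hexdigest() in PUBLISHED_IMAGES

def process_request(request: bytes, node_image: bytes) -> bytes:
    """Steps 1 through 6 compressed: attest, process, return, discard."""
    if not verify_attestation(node_image):
        # An unverified node never receives the request at all.
        raise PermissionError("node failed attestation; nothing sent")
    result = request.upper()   # step 4: server-side processing (stand-in)
    del request                # step 6: no persistent storage of user data
    return result              # step 5: result returned to the device

print(process_request(b"summarize this", b"pcc-build-42"))  # -> b'SUMMARIZE THIS'
```

The point the sketch makes is the failure mode: if attestation fails, the request is never transmitted, rather than transmitted and rejected.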

What goes to PCC:

  • More complex Writing Tools requests, like rewriting longer passages or advanced editing suggestions
  • Some Siri queries that need more computational power than the on-device model can handle
  • Image generation tasks that exceed on-device capabilities
  • Complex reasoning tasks

The privacy protections Apple built into PCC:

  • No data retention. PCC servers have no persistent storage for user data. Once your request is processed, the data is gone.
  • End-to-end encryption. Your request is encrypted from your Mac to the specific PCC node. Load balancers and other infrastructure between your device and the PCC node cannot decrypt the data.
  • Cryptographic attestation. Your device will only send data to PCC nodes that can prove they are running published, auditable software. This is enforced through Apple's Secure Enclave.
  • Public software images. Apple publishes the software images of every PCC production build so that security researchers can inspect them.
  • No privileged access. Apple has stated that even Apple itself cannot access the data being processed on PCC nodes.
  • Metadata limits. The only metadata Apple collects from PCC is the approximate request size, which features were used, and how long the request took. This metadata is not linked to your Apple Account.

This is genuinely impressive infrastructure. Apple built a Virtual Research Environment so external security researchers can analyze PCC. They opened a bug bounty program for it. The cryptographic attestation model means your Mac will refuse to send data to a server that cannot prove it is running the correct software.

I think PCC is probably the most privacy-respecting cloud AI system that any major tech company has built. That is a real achievement.

Tier 3: ChatGPT integration

Here is where the privacy story gets more complicated.

Apple Intelligence integrates ChatGPT, powered by OpenAI, for requests that fall outside what Apple's own models can handle. This is an entirely separate system with different privacy rules.

Without a ChatGPT account linked:

  • Your request and any attachments are sent to OpenAI's servers.
  • Your IP address is obscured by Apple.
  • OpenAI states it will not store your requests.
  • No information tied to your Apple Account is shared.

With a ChatGPT account linked:

  • Your interactions are saved in your ChatGPT chat history.
  • OpenAI's standard data-use policies apply.
  • You are essentially using ChatGPT through an Apple interface.

The ChatGPT extension is off by default, and Apple asks for your permission before routing any specific request to ChatGPT. Those are good decisions. But the moment you approve a ChatGPT request, your data is subject to OpenAI's policies, not Apple's. That is a fundamentally different privacy model.
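That double opt-in, extension enabled globally plus approval for each individual request, can be sketched as a simple gate. The function and return strings are my own illustrative stand-ins, not Apple's API.

```python
# Hypothetical sketch of the per-request consent gate for ChatGPT routing.
# Names are illustrative, not Apple's actual interfaces.

from typing import Callable

def maybe_route_to_chatgpt(request: str,
                           extension_enabled: bool,
                           ask_user: Callable[[str], bool]) -> str:
    """ChatGPT is opt-in twice: the extension must be enabled, and the
    user must approve each individual request before anything is sent."""
    if not extension_enabled:
        return "handled by Apple models only"
    if not ask_user(f"Send to ChatGPT? {request!r}"):
        return "declined; nothing left Apple's infrastructure"
    # From this point on, OpenAI's data policies apply, not Apple's.
    return "routed to OpenAI"

# A privacy-conscious default: decline unless explicitly approved.
print(maybe_route_to_chatgpt("plan a trip", True, lambda prompt: False))
```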

If you are privacy-conscious, be very deliberate about when you approve ChatGPT requests through Apple Intelligence. Every time that confirmation dialog pops up, you are making a choice about where your data goes.

The research that complicates things

Now, here is the part where I cannot just repeat Apple's marketing. Security researchers have found some real gaps.

At Black Hat USA 2025, Israeli cybersecurity firm Lumia Security presented research showing that Apple Intelligence transmits more data to Apple's servers than its privacy policies indicate. Some of the specific findings are concerning.

Location data accompanies every Siri request, regardless of whether the query has anything to do with location. Ask Siri to set a timer, and your location still gets sent along.

App scanning happens in the background. When you ask Siri a question, the system scans for related applications on your device and reports that information to Apple's servers. Ask about the weather, and Apple learns which weather apps you have installed.

Media metadata gets transmitted. What you are listening to, including song names, podcast names, and video titles, can be sent to Apple's servers during Siri interactions, even when it is not relevant to your request.

Messages dictated through Siri to encrypted messaging apps like WhatsApp are sent to Apple's servers through Private Cloud Compute. This is particularly concerning because users expect those messages to be end-to-end encrypted, and routing them through Apple's infrastructure, even temporarily, introduces a third party into that chain.

The researcher, Yoav Magid, also found that even when users explicitly disable settings that allow Siri to learn from specific apps, message transmission to Apple's servers continues.

When Lumia disclosed these findings to Apple, the company initially showed interest but reportedly dismissed most concerns as "expected behavior" described in their privacy policies.

This is the kind of thing that frustrates me about Apple's approach to privacy. Their marketing says "what happens on your iPhone stays on your iPhone." Their engineering is genuinely good. But the actual data flows are more complex than the marketing suggests, and when researchers point that out, the response is underwhelming.

How Apple Intelligence interacts with permissions

Apple Intelligence has broad access to your personal data on-device, including your messages, emails, calendar, contacts, and app usage patterns. It needs this access to be useful. You cannot get smart summaries without the system reading your messages.

This access is governed by Apple's existing permission system, but Apple Intelligence is a system-level feature, not a third-party app. It does not ask for individual permissions the way a downloaded app would. When you enable Apple Intelligence, you are granting it broad access to your personal context.

You do have some controls:

  • You can disable Apple Intelligence entirely in System Settings under Apple Intelligence and Siri.
  • You can turn off specific features individually.
  • You can disable the ChatGPT extension.
  • You can enable transparency logging to see what gets processed by Private Cloud Compute. Go to System Settings, then Privacy and Security, then Apple Intelligence Report.

That transparency logging feature is actually quite useful. If you want to see exactly what data is being sent to PCC, turn it on. Most people do not know it exists.

My honest assessment

Apple's approach to AI privacy is significantly better than what Google, Microsoft, or any other major tech company is doing. Admittedly, that is a low bar, but Apple cleared it by a meaningful margin.

Private Cloud Compute is real engineering, not marketing theater. The cryptographic attestation model, the lack of persistent storage, the public software images for researcher inspection — these are substantive technical decisions that meaningfully protect user data.

But "better than everyone else" is not the same as "perfect."

The Lumia Security research shows that Apple Intelligence sends more data to servers than most users would expect, including data that does not seem necessary for the requested task. Location data accompanying every Siri request, media metadata transmitted during unrelated queries, and messages to encrypted apps routed through Apple's infrastructure are real privacy gaps.

Apple's response of calling these findings "expected behavior" is also not reassuring. If your privacy behavior is expected but not clearly communicated to users, that is still a privacy problem.

And the ChatGPT integration, while opt-in, creates a confusing dual privacy model in which some of your AI interactions are covered by Apple's strong protections and others are governed by OpenAI's entirely different policies. Most users will not understand that distinction.

What you can do about it

If you use Apple Intelligence on your Mac and care about privacy, here is what I would recommend.

Enable transparency logging. Go to System Settings, then Privacy and Security, then Apple Intelligence Report. This shows you what data is being processed by Private Cloud Compute.

Be selective with ChatGPT requests. Every time Apple Intelligence asks if you want to route something to ChatGPT, think about what data is involved before approving.

Review your Siri and Apple Intelligence settings. Disable features you do not use. The less Apple Intelligence does, the less data it processes.

Monitor your network connections. Apple Intelligence creates network connections to Apple's servers that you may not be aware of. If you want to see what your Mac is actually sending, you need something that watches network traffic in real time. This is one of the things CoreLock does — it monitors outbound connections from your Mac, including connections to Apple's AI infrastructure, so you can see exactly what is communicating and when. If Apple Intelligence is sending data you did not expect, you will know about it.
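If you want a quick manual look without installing anything, the stock `lsof` utility on macOS can list established outbound TCP connections and which process owns each one. This is a generic sketch of that approach, not how CoreLock is implemented.

```python
# List established outbound TCP connections using the stock `lsof` tool.
# Generic approach for macOS/Linux; run as your own user.

import subprocess

def parse_lsof(output: str) -> list[tuple[str, str]]:
    """Extract (process, remote endpoint) pairs from `lsof -i` output."""
    pairs = []
    for line in output.splitlines()[1:]:      # skip the header row
        fields = line.split()
        if len(fields) >= 9:
            # Column 0 is COMMAND; column 8 is NAME (local->remote).
            pairs.append((fields[0], fields[8]))
    return pairs

def established_connections() -> list[tuple[str, str]]:
    """Return (process, endpoint) pairs for currently established TCP conns."""
    out = subprocess.run(
        ["lsof", "-nP", "-iTCP", "-sTCP:ESTABLISHED"],
        capture_output=True, text=True, check=False,
    ).stdout
    return parse_lsof(out)
```

Running `established_connections()` periodically and diffing the results gives you a crude picture of which processes, including Apple's AI daemons, are talking to which servers.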

Keep your macOS updated. Apple does respond to privacy research, even if their initial responses are sometimes dismissive. Updates often include fixes for the kinds of data leakage that researchers identify.

The bigger picture

We are in the early days of on-device AI, and the privacy norms are still being established. Apple is doing better than most, and their fundamental architecture of prioritizing on-device processing is the right approach.

But privacy is not a binary thing. It is not enough to say "your data stays on your device" when some of it does not. It is not enough to build impressive server-side protections when data flows exceed what users expect. And it is not enough to point to privacy policies when those policies are vague enough to cover behavior that users would not approve of if they understood it.

The best thing you can do is stay informed, use the controls Apple provides, and maintain your own visibility into what your Mac is doing. Do not just trust the marketing, from Apple or anyone else. Verify it yourself.

That is the approach I take with everything related to Mac security. Trust the engineering when it is good. Question the marketing when it is vague. And always have your own way to check what is actually happening on your machine.

Ready to try CoreLock?

Free to download. No credit card required.

Download CoreLock Free