AI Snitches
Apple, Anthropic, Google, Meta, Microsoft, and OpenAI
Agentic AI should protect us, not spy for authoritarian governments, data brokers, & criminals. The only trustworthy agentic AI is one that shields our Signal messages, our private lives, and our loved ones from bad actors. Tell Big Tech that we will accept nothing less!
To:
Apple, Anthropic, Google, Meta, Microsoft, and OpenAI
From:
[Your Name]
Dear Apple, Anthropic, Google, Meta, Microsoft, and OpenAI,
We have never been more aware of how our personal data is used against us and the people we care about. As ICE agents scan our faces, the US government is scraping and subpoenaing everything from our social media to our driving habits—all while building concentration camps and databases to criminalize dissent. En masse, people are prioritizing their digital security and choosing the encrypted messaging app Signal to communicate more safely. No one is fooled into thinking that all this surveillance makes us safer.
Because of this, we are writing to inform you that agentic AI, as many of you are building it, will never have a place in our digital lives, even if AI becomes environmentally responsible, stops hallucinating, and stops replacing human creativity with slop. We demand immediate changes that put safety and privacy at the core of all agentic AI.
In order to do something like book a rental for a vacation with friends, an agentic AI needs access to a lot of our private data, such as:
• Our messaging and social apps so it can read, infer, and remember everyone’s preferences and availability
• Our credit card information so it can pay
• Our calendars so it can send invites
• Any passwords for the platforms it needs to glean information and complete the booking
This level of access and data goes far beyond what other AI tools collect. We are already deeply concerned about those other tools (and about a pervasive lack of commitment to ethics in both the AI industry and the Trump administration), yet most major tech companies are pressing ahead with agentic AI that poses severe risks to everyone’s well-being, risks that will affect us all regardless of whether or not we use this technology.
For example, Microsoft Recall is already saving screenshots of whatever a person sees on their screen, which threatens not only our own end-to-end encrypted messaging protections, but also the protections of anyone whose messages we receive. Worse, Microsoft’s next agentic AI iteration will allow the AI access to all of our apps by default—and retain the data it harvests. As Signal President Meredith Whittaker warns, this regime of constant, likely cloud-processed and cloud-stored AI surveillance is “a profound issue with security and privacy that is haunting this hype around agents.”
Meanwhile, the open source personal AI assistant OpenClaw is a privacy, accountability, and security nightmare of its own. Open source AI shows promise for community auditing, decentralization, and putting control and power in the hands of users and everyday people instead of billionaires, but it must protect us while offering those benefits, too.
Unless AI leaders come together and agree on a transparent and uncompromising privacy and safety architecture for agentic AI, one that matches or exceeds the protections of end-to-end encryption, this technology will remain too dangerous for us to ever trust.
As leaders in this industry with resources that exceed the wealth of most nations, we demand that you:
• Prioritize Privacy-Preserving AI
At this time, few major players in the AI space are prioritizing private AI. For our safety, and the safety of everyone who interacts with a person using an agentic AI, we urge you to prioritize local-first processing as a default, and private cloud processing when local-first is not possible. All communication between an on-device AI and a cloud server should be end-to-end encrypted by default.
All data that a user offers to an agentic AI should be for the user alone, and accessible only to that user—it should not be subpoenable or used to train any AI other than the user’s own agent.
• Standardize Restrictions on Agentic AI Access and Enforce Transparency
The absolute right to kick AI out, just like we can kick a human out, needs to be standardized. We urge you to meet with stakeholders from the tech justice, anti-surveillance, and open source movements in order to develop standards, flags, or app signals for absolute restriction of agentic AI.
These could resemble the following:
• Human Only Mode: implement a simple, one-click option for a person to disable device-wide access for all agentic AI tools and exclude human-only sessions from later AI review.
• Private Mode: allow any participant to ban all agentic AI from accessing a private conversation, and set this as a default for all private chats and direct messages.
• Dev Ban Signal: allow app developers to hard-block agentic AI in a way users can’t override.
• No Secret Agents Signal: require all agentic AI to declare itself in chat and be as apparent as a human participant.
• AI Opt-In Standard: require users to manually opt in each app that they want the AI to access at setup, and prompt periodic review of that access.
• Backend Processing Consent Standard: require agentic AI to gain all-party consent before they extract chat data to a backend.
• Transparency Standard: implement a standardized transparency mechanism that allows anyone subject to an agentic AI to interrogate it about when it is working, what data it is accessing, how long that data will be retained, and why.
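To make these access restrictions concrete, the logic of several of the proposed standards can be sketched in a few lines of code. This is a minimal illustration only; the class name, fields, and method here are hypothetical, not any existing platform API.

```python
# Hypothetical sketch of the access standards above: Human Only Mode,
# Dev Ban Signal, and the AI Opt-In Standard combined into one policy check.
# All names are illustrative, not a real API.
from dataclasses import dataclass, field

@dataclass
class AgentAccessPolicy:
    """Per-user policy gating an agentic AI's access to apps."""
    human_only_mode: bool = False                      # device-wide kill switch
    opted_in_apps: set = field(default_factory=set)    # apps the user opted in
    dev_banned_apps: set = field(default_factory=set)  # developer hard-blocks

    def may_access(self, app: str) -> bool:
        # Human Only Mode overrides everything else
        if self.human_only_mode:
            return False
        # Dev Ban Signal: a developer block that users cannot override
        if app in self.dev_banned_apps:
            return False
        # AI Opt-In Standard: access is denied unless explicitly granted
        return app in self.opted_in_apps

policy = AgentAccessPolicy(opted_in_apps={"calendar"}, dev_banned_apps={"signal"})
print(policy.may_access("calendar"))  # True: the user opted this app in
print(policy.may_access("signal"))    # False: the developer hard-block wins
print(policy.may_access("email"))     # False: never opted in
```

The key design choice the standards imply is deny-by-default: an app is reachable only when the user has opted it in and no stronger restriction applies.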
• Limit Data Access to Only What Is Required
As agentic AI will likely have access to a wide range of data, and AI privacy safeguards are buggy at best, these tools must be designed to process only the data that is appropriate for the user’s request. Similar rigor should be brought to deciding what data to memorize versus what to discard. AI providers, not users, must bear the responsibility of implementing strong default data minimization, contextual constraints such as memory tiers, and purpose limitation settings.
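The data minimization and purpose limitation demanded above can be illustrated with a short sketch: the agent receives only the fields a declared purpose requires, and everything else is withheld by default. The purpose names and field lists are hypothetical, chosen to match the rental-booking example earlier in this letter.

```python
# Hypothetical sketch of default data minimization with purpose limitation.
# The purposes and field names are illustrative, not any real API.

PURPOSE_FIELDS = {
    "book_rental": {"travel_dates", "budget", "party_size"},
    "send_invite": {"travel_dates", "guest_emails"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Return only the fields appropriate for this request's purpose."""
    allowed = PURPOSE_FIELDS.get(purpose, set())  # unknown purpose -> nothing
    return {k: v for k, v in record.items() if k in allowed}

profile = {
    "travel_dates": "Jul 4-8",
    "budget": 1200,
    "party_size": 4,
    "chat_history": "private messages the agent never needs",
}
print(minimize(profile, "book_rental"))
# {'travel_dates': 'Jul 4-8', 'budget': 1200, 'party_size': 4}
```

Because minimization happens before the agent ever sees the data, the burden sits with the provider's defaults rather than with the user, as the demand requires.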
• Give Users Control Over All Data
If agentic AIs are going to make choices for us and absorb vast reams of personal information, they need to be accountable to their humans. The average person must be able to easily access, change, and purge any and all information their AI collects about them.
• Establish Verification as the Baseline for Trust
We should be able to trust that the privacy standards of reputable agentic AI are as protective and data-sovereign as end-to-end encryption or zero-knowledge proofs, which is why these privacy features must be verified by independent security researchers on an ongoing basis, as Apple already allows with Apple Intelligence.
It is crucial for rigorous, transparent, independent, and privacy-preserving audits to become the standard for agentic AI systems. This will require access to data as well as financial support and industry-wide goodwill for the developers and researchers who take on the crucial task of creating trust between agents and humans.
We are far past the time when industry leaders should have begun openly investing in these basic safeguards and best practices for the safety of all humans. It is our hope that your response to these demands will be swift and proactive, allowing for a future in which your products might be trustworthy and useful.