Privacy in the AI era is possible, says Proton’s CEO, but one thing keeps him up at night

ZDNET’s key takeaways
- AI and Big Tech are eroding personal privacy.
- Proton’s encrypted tools are increasingly appealing.
- Proton CEO Andy Yen worries about a future inundated by rogue agents.
As AI’s popularity continues to soar, privacy and safety concerns surrounding the technology have kept pace, especially during the last year.
AI is now a common tool for cybercriminals, making it much easier for bad actors to steal your data. The technology also enables the scaling of mass surveillance to new extremes. AI agents like OpenClaw have continued to go rogue despite being embraced by tech giants like Nvidia and Meta, leaking or deleting sensitive information.
Also: Proton just launched a Google Workspace alternative – and it’s fully encrypted
Earlier this month, I attended Semafor World Economy in DC, where 500 CEOs joined government leaders to discuss the state of global business, including AI’s impact on security and privacy. Andy Yen, CEO of VPN and private digital service provider Proton, spoke on the topic; I sat down with Yen after his panel to discuss whether privacy can coexist with AI, what its future looks like, and why he thinks Proton is well-positioned to succeed.
Privacy in the public consciousness
AI and privacy trade-offs go hand in hand: the thinking goes that the more data AI tools have access to, the better they perform, whether for enterprise or individual use. That directly pits implementation and efficacy against risk tolerance. Still, AI's popularity has skyrocketed over the last two years, especially for sensitive use cases such as healthcare.
Also: How to audit what ChatGPT knows about you – and reclaim your data privacy
Since Proton's founding in 2014, long before AI use exploded among everyday consumers, the company has offered users privacy-first alternatives to tools from Big Tech companies like Google, Microsoft, and Meta. However, Yen doesn't think the rise of AI tools has popularized data privacy concerns among the public. In his view, the issue is a generational mismatch between privacy awareness and tech adoption.
“There are more people who really care about privacy, but are not tech savvy enough and don’t know how to protect themselves,” he said. “Then there’s sort of the middle-aged people — we’re actually kind of the worst because we don’t have the privacy focus of our parents, yet we’re adopting all this tech. So we are more ignorant and more exposed.”
That said, Yen is optimistic that education can close that gap.
Also: 5 reasons you should be more tight-lipped with your chatbot (and how to fix past mistakes)
“The best way to protect somebody is to simply teach them about the risk,” he said. “If the education piece is done correctly, then everything else will kind of naturally follow.”
Beyond that solution, though, he's hopeful that closing the awareness gap is simply a matter of time.
"I think we need to take this in the context of long-term trends," he said. "When we started Proton in 2014, maybe one in 10 [people] understood the business model of Google and Facebook. Today, it's maybe 4 in 10, and when OpenAI started running ads and pushing biased suggestions for revenue, that gets seen by more people — maybe 7 in 10."
At the moment, Yen believes the next generation is best prepared for the world AI is creating, despite what appears to be apathy.
“The young people are the most aware — they know how Google makes money, how ads work, about the algorithms, but they don’t seem to care,” he said. “Given the choice between ignorance versus not caring, I sort of prefer an audience that’s aware and doesn’t care, because you can get them to care.”
Also: This privacy-first chatbot is taking off – here’s why and how to try it
Duck.ai, the chatbot from private browser company DuckDuckGo, saw an uptick in web traffic earlier this year. Despite not gaining on industry leaders like ChatGPT and Claude, the spike echoes a trend Yen said he’s seeing at Proton, and convinces him that more people will eventually turn to privacy-first options.
“Lumo is the fastest-growing product within Proton today,” Yen said of the company’s encrypted chatbot. “That sort of shows that people need AI; they use it day to day, it is very much part of life today, but fundamentally, no one trusts it. The ability to get the benefits of AI, but have a guarantee of your conversation staying private into the future, that’s quite powerful. As time goes on, more people are going to want that.”
AI’s biggest threat
But the protections Proton offers have their limits. When I asked Yen what he believed he and Proton weren’t prepared for when it comes to AI, he answered immediately: Agents.
"You could have the strongest encryption in the world, but if you as a user freely give your agent access to Proton Mail on your device, and that agent goes crazy and posts all the information online somewhere, encryption in Proton isn't going to save you," he said. "That's an inherent limitation to what we're able to do." Theoretically, he said, Proton could develop its own agent hardened against these vulnerabilities, but that's not in the works yet.
Also: The permissions behind your AI Chrome extensions deserve a closer look – they may be spying on you
Yen sees local AI as one of the best ways to address privacy concerns. (Proton’s own Scribe AI writing assistant offers users the option to run locally.) Right now, it’s hard to scale compute on personal devices, but he thinks local AI will be significantly more operational in the next few years.
“If you look at the modern iPhone and compare it with the first smartphones from 10 years ago, the amount of compute, of storage, is orders of magnitude higher, and that trend will continue,” Yen said. “But LLMs don’t necessarily get larger. In fact, we’re gonna have smaller models that are just as effective as time goes on.”
Earlier intervention
One way to protect future generations from data privacy risks is to keep them out of Big Tech’s ecosystem altogether. Yen said he is laser-focused on protecting kids, because that’s where he believes Proton can have the biggest impact. Last month, the company launched the option for parents to reserve their child’s first email address with Proton, even before they’re born.
Also: Worried about AI privacy? This new tool from Signal’s founder adds end-to-end encryption to your chats
“For a lot of people, the moment they start caring is when they have children,” he said. “You have a choice: are you going to sign them up to the Google ecosystem, with all the downsides and pitfalls that that entails, and lock them in to a lifetime of being a commodity that is abused by big tech? Or are you going to take an alternative path and set them up with a different start to life?”
For Yen, timing is critical to that decision.
“If I provide an alternative to somebody when they’re 40, after they’ve been exploited for two decades by Google, yeah, better late than never, but I think it’s much better if we can get the next generation the best possible start at the beginning,” he said.
Can privacy-first AI compete?
A future with less AI-powered data creep is perhaps only meaningful at scale. Companies like Proton face the challenge of getting individual consumers and enterprise customers to care enough about privacy to leave legacy systems and the enticing features they offer. Personalization, for example, is one of AI's most appealing upsides, and it's only possible with vast amounts of data. Does that limit what encrypted AI can do, or how successfully it can grow?
Yen noted that it’s possible to compute effectively with encrypted data, but that the biggest differentiator between privacy-first AI and leading frontier labs is cost.
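Yen's point about computing on encrypted data refers to homomorphic encryption, a family of techniques for performing operations on ciphertexts without decrypting them. As an illustration only (this is not a description of Proton's stack, and the demo primes below are toy-sized, far too small for real security), the classic Paillier scheme lets anyone add two encrypted numbers while seeing only ciphertexts:

```python
import math
import random

def paillier_keygen(p, q):
    """Generate a toy Paillier keypair from two primes (demo sizes only)."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)  # valid shortcut when the generator g = n + 1
    return (n, n + 1), (lam, mu)  # public key (n, g), private key (lam, mu)

def encrypt(pub, m):
    """Encrypt message m < n as c = g^m * r^n mod n^2, with random r."""
    n, g = pub
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pub, priv, c):
    """Recover m via the Paillier L-function: m = L(c^lam mod n^2) * mu mod n."""
    n, _ = pub
    lam, mu = priv
    n2 = n * n
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

pub, priv = paillier_keygen(17, 19)
c1, c2 = encrypt(pub, 42), encrypt(pub, 100)
# Multiplying ciphertexts adds the plaintexts -- no decryption needed:
c_sum = (c1 * c2) % (pub[0] ** 2)
print(decrypt(pub, priv, c_sum))  # 142
```

A server could sum encrypted values this way and return the result without ever seeing the inputs. The catch, as Yen's cost comparison suggests, is overhead: homomorphic operations are far more expensive than plaintext ones, which is part of why the encrypted product is "10 times harder" to build.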
“There’s Google Workspace and Proton Workspace, and they look kind of equivalent,” Yen said of his company’s recently released enterprise suite. “But actually, our job is 10 times harder, because we have encryption on top of all that. So it’s going to cost more, it’s also going to take longer. But in the end, it’s going to deliver a better product for most users, because it’s actually going to protect the data.”
Privacy may yield a better product, but who covers those additional costs? Proton’s own announcement for Workspace says it’s competitively priced, ranging from $12 per month (paid annually) to $15 (paid monthly) for the Standard tier, and from $20 per month (paid annually) to $25 (paid monthly) for the Premium tier. Proton also said it doesn’t raise prices annually or on existing customers. To clarify, a spokesperson for Proton told ZDNET that running “a more efficient shop” keeps prices lower for customers despite those higher costs Yen mentioned.
“I don’t really see any technical barriers to getting to comparable performance,” Yen added. “It’s just going to take longer.” In the big picture of the company’s business model, he said Proton’s premium offerings have proven worth the money so far.
“The fact that we have no VC investors sort of shows that, actually, this model probably is more scalable than most people think.”

