The Privacy Soapbox

OpenAI is adding ads to ChatGPT: Can they get it right?

Published January 26, 2026 by Brian Kane · 4 min read
Summary
In the Privacy Soapbox, we give privacy professionals, guest writers, and opinionated industry members the stage to share their unique points of view, stories, and insights about data privacy. Authors contribute to these articles in their personal capacity. The views expressed are their own and do not necessarily represent the views of Didomi.

Do you have something to share and want to take over the Privacy Soapbox? Get in touch at blog@didomi.io

OpenAI announced this week that ChatGPT will begin testing ads for free and lower-tier users in the US. The company made the right promises: Ads won't influence answers, user data won't be sold to advertisers, and conversations remain private. Smart protective guardrails are in place, with no ads for users under 18, no ads on political or health-related topics. 

For anyone who has been in ad tech for a while, this is not a new story. Historically, the latest ad tech innovation, from retargeting to mobile or programmatic advertising, tends to start with lofty manifesto-type promises. The promises sound right on paper, but the devil is in the details. 

The tightrope ahead

The real question isn't what advertisers can see, but what happens inside OpenAI's organization, the operational rigor, and the management of cultural components that are sure to come up. When you're managing complex relationships in which the same company is both an enterprise customer and a major advertiser, internal data flows become extraordinarily complex. 

Think about a company like P&G, which could simultaneously be an enterprise client using OpenAI's technology and a potential advertiser wanting to reach ChatGPT's audience. OpenAI will face the same multi-faceted revenue pressures Google encountered as it scaled: advertisers demanding measurement and proof of effectiveness, and enterprise customers expecting deep integration, all while it must maintain separation between these revenue streams.

I've watched this dynamic play out across the evolution of digital advertising and the development of modern privacy frameworks, and the pattern is consistent: measurement demands always escalate, advertisers want ever more proof that their spend is working, and that pressure has historically eroded the walls companies build between user data and advertising systems.

The cost of getting it wrong

The FTC's "Operation AI Comply" signals that regulators aren't waiting to see what happens; they're establishing oversight from day one. That's the right move, especially given what we've already seen in the US market, where litigation has proliferated as consent mechanisms have fallen short and technology hasn't lived up to its promise. Class action lawyers across the country are no doubt already paying attention.

The margin for uncertainty is shrinking, and companies that wait to build proper governance will find themselves explaining their architecture to regulators and plaintiffs after the fact, rather than designing it correctly upfront.

User sentiment has been overwhelmingly negative, with over 10 million views on OpenAI's announcement, dominated by skepticism. But utility has to be paid for somehow, and this is the right compensation framework. A tiered approach recognizes that users who value ad-free experiences are willing to pay for them, while users who can't or won't pay still have access through an ad-supported model.

With that in mind, the model only works with genuine transparency and meaningful user control: users need to understand what they're choosing and trust that their choice will be respected. The moment that trust breaks, when ads blur into answers or "private from advertisers" becomes ambiguous under revenue pressure, the entire model collapses.

Will the guardrails hold?

OpenAI has an opportunity to set a different standard. The challenge will be whether those well-intentioned content policies and data separation promises remain intact when growth targets aren't being met, and whether the internal governance systems are robust enough to withstand the inevitable pressure from advertisers demanding more measurement, more targeting, more proof of effectiveness.

Having watched this industry evolve through multiple cycles, I know the principles are solid. Now comes the hard part: building the operational discipline and internal architecture that keeps those principles from drifting when the revenue pressure mounts, especially when that pressure comes from companies that are simultaneously your customers and your advertisers.

The author
Brian Kane
Co-founder and Chief Operating Officer at Sourcepoint by Didomi
Data Privacy | Co-Founder & COO @ Sourcepoint | AdTech Veteran | Board/Advisory Services