The Privacy Soapbox

AI won't save a poorly instrumented marketing operation

Published April 9, 2026
by Romain Baert
6 min read
Summary
In the Privacy Soapbox, we give privacy professionals, guest writers, and opinionated industry members the stage to share their unique points of view, stories, and insights about data privacy. Authors contribute to these articles in their personal capacity. The views expressed are their own and do not necessarily represent the views of Didomi.

Do you have something to share and want to take over the Privacy Soapbox? Get in touch at blog@didomi.io

You can't escape it anymore. Artificial intelligence has planted itself in every strategic conversation: boardroom discussions, RFPs, product roadmaps. It's become a genuine opportunity, a competitive battleground, and, let's be honest, a significant source of pressure. Nobody wants to miss this wave.

And yet, the more I listen to what's being said about AI, the more I feel like we're looking at the problem backwards. The dominant question is "How do we add AI to our organizations?" when we should first be asking whether we're truly ready to make it work properly.

In most cases, the answer is no.

2026 won't just be an AI year; it'll be a tipping point

AI is often talked about as a technology. But what's unfolding in marketing goes far beyond the technical and represents a genuine paradigm shift:

  • From search engine to answer engine
  • From traffic to visibility in generated responses
  • From volume to relevance
  • From SEO to recommendation

The change is structural, and as is often the case with this kind of disruption, the first reaction is always the same: "We need to move." The second, quieter reaction is more interesting: "But are we ready?"

That's where things tend to get a little uncertain.

History repeats itself

Every technology wave tells the same story. A bold promise, rapid adoption, then a reality more complex than expected. We saw it with data, CRM, web analytics, attribution, programmatic, and the metaverse.

Each time, the tools evolve faster than the organizations, and the difficulty of absorbing them gets underestimated. There's no reason AI should be any different.

The great paradox of this moment is that AI is particularly data-hungry. It needs data to learn, understand, predict, and automate, while the quality of available data is actually deteriorating. In practice, companies' data has been incomplete for a long time. Consent reduces collectible volumes, ad blockers and browser restrictions (Safari, Firefox) truncate signals, the gradual disappearance of third-party cookies weakens measurement chains, and imperfect historical implementations have left blind spots everywhere.

As a result, many organizations are already operating with a partial picture. And in that context, they're being asked to trust AI to make better decisions.

AI doesn't make mistakes. It executes perfectly on what we provide it

This is probably the most important point, and the least understood. If your data is biased, AI amplifies the bias. If your signals are incomplete, it optimizes on incomplete data. If your tracking is degraded, it automates a degraded view at scale.

The simplest analogy is the GPS. Enter the wrong address and it takes you perfectly to the wrong place. AI doesn't compensate for anything. On the contrary, it reveals what was already broken and amplifies it.
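A toy sketch makes the point concrete. The numbers and segment names below are made up for illustration: if consent-declining and ad-blocked visitors are invisible to analytics, any system optimizing on the "measured" rate is confidently optimizing on the wrong number.

```python
# Toy illustration (hypothetical numbers): how incomplete tracking skews
# the picture an automated system optimizes on.

def true_conversion_rate(segments):
    """Conversion rate across all visitors, tracked or not."""
    visits = sum(s["visits"] for s in segments)
    conversions = sum(s["conversions"] for s in segments)
    return conversions / visits

def measured_conversion_rate(segments):
    """Conversion rate using only the visitors that tracking can see."""
    tracked_visits = sum(s["visits"] for s in segments if s["tracked"])
    tracked_conversions = sum(s["conversions"] for s in segments if s["tracked"])
    return tracked_conversions / tracked_visits

# Hypothetical traffic: visitors who decline consent or run ad blockers
# convert less often, and never appear in the analytics data.
segments = [
    {"name": "consented",        "visits": 6000, "conversions": 300, "tracked": True},
    {"name": "consent_declined", "visits": 3000, "conversions":  80, "tracked": False},
    {"name": "ad_blocked",       "visits": 1000, "conversions":  20, "tracked": False},
]

print(f"true rate:     {true_conversion_rate(segments):.1%}")      # 4.0%
print(f"measured rate: {measured_conversion_rate(segments):.1%}")  # 5.0%
```

Nothing in the pipeline flags the gap: the measured 5.0% is internally consistent, so an automated bidder or budget allocator treats it as ground truth and scales the error.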

This is also one of the main reasons so many AI projects don't get off the ground. Right now, plenty of companies are running tests: POCs, pilots, experiments. The energy is there. But scaling up keeps hitting a wall. According to several recent studies, a large majority of decision-makers have failed to industrialize their AI projects.

The main reason is simpler and less comfortable than most would like to admit: the data isn't reliable enough to feed a machine that, by nature, makes no qualitative distinctions. It takes what it's given and draws conclusions from it.

We're investing in the wrong place

The dominant narrative on AI focuses on use cases, prompts, and models. That's understandable, since it's the visible, exciting, and sellable part. But the real issue lies elsewhere.

You don't build an AI strategy; you build the conditions that make AI possible by collecting properly, guaranteeing data quality, understanding it, then automating intelligently. In that order.

The problem is that many organizations are trying to start at step four.

For years, companies have been accumulating a silent data debt: incomplete tracking, heterogeneous implementations, tool sprawl, unclear governance, dependency on third-party environments. That debt doesn't disappear with AI. It just surfaces differently, with bigger consequences, because decisions are automated and made at a speed that teams can no longer manually correct.

And yet, many companies continue to invest heavily in paid media, advanced analytics tools, attribution solutions, CDPs, and data warehouses, without having secured the quality of what actually drives their decisions. Biased data can lead to a misreading of the business and, tomorrow, to bad decisions made automatically at scale.

The real challenge: taking back control of your data

The issue is no longer simply about measurement. It's about regaining control, stepping away from dependency on third-party environments and rebuilding a first-party approach by making what actually drives decisions more reliable.

I deeply believe in AI. I even think it will durably transform marketing, and that those who master it will have a genuine competitive advantage. But I also think we need to move past a form of collective naivety. The real question for the next two or three years isn't "how do we use AI." It's:

Are we capable of becoming a company that AI can actually work with?

A high-performing company tomorrow won't be the one that "does AI." It'll be the one that knows what it collects, understands what it's losing, controls what it rebuilds, and can trust what it uses. Less hype, more rigor. Less noise, more signal. Because in the end, it won't be about who talked about AI the best. It'll be about who was actually ready.

The author

Romain Baert
Managing Director, Server-Side at Didomi

At Addingwell, we empower advertisers to embrace the latest digital marketing standards, particularly with server-side Google Tag Manager. Our mission is to help brands navigate this evolving environment with a robust, managed solution that enhances tracking accuracy, boosts performance, and ensures compliance with data privacy regulations. By doing so, we aim to transform the challenges of a cookie-less future into opportunities for innovation and growth.