In the Privacy Soapbox, we give privacy professionals, guest writers, and opinionated industry members the stage to share their unique points of view, stories, and insights about data privacy. Authors contribute to these articles in their personal capacity. The views expressed are their own and do not necessarily represent the views of Didomi.
Do you have something to share and want to take over the Privacy Soapbox? Get in touch at blog@didomi.io
No issue on that front, the coffee is good here, I told myself, just before realizing that on my left stood a clothing rack. I couldn’t believe that in this café, this little café of nothing at all, you could also buy Japanese designer clothes. It was allowed. What an idea, coming here to work for a few hours. There was even a fitting room, for heaven’s sake. Since when had this been a thing, offering that kind of service in a coffee shop in the 12th arrondissement of Paris, that gentle district that really only stands out for the width of its sidewalks? How had it happened, this shift, in that unremarkable corner of Paris that never made a fuss?
I had literally no idea. Clearly, something was slipping past me in this century, something probably quite fundamental.
What wasn’t slipping past me, though, was OpenAI’s new browser. With my MacBook firmly anchored on a slab of noble wood and myself perched on a stool whose comfort could generously be called debatable, I was about to download this promising new piece of software.
Atlas, water bearer of my digital world
Let it be said somewhere (if it even matters to say it) that there was definitely something wrong with that stool. Maybe, in fact, there was something wrong with all stools. I was nearly certain now: something fundamentally unergonomic was going on here. I decided to keep these thoughts to myself, there was no one to share them with anyway, the servers looked terribly busy. And anyway, I could probably buy the damn stool. At this point, everything seemed for sale in this coffee shop, and sitting there as weakly as I was, stripped of all charisma, I might just have bought it. I would have ended up on the sidewalk, stool under arm, a stool that would probably have cost me the equivalent of a kidney, trying to figure out where to fit it in my apartment, on top of all the pain it had already caused me. I stayed silent, but promised myself I’d one day write an article on the structural lack of ergonomics in stools.
A problem that, thankfully, did not appear in OpenAI’s new browser. From an ergonomic standpoint, I had to admit, I felt right at home. All my usual Chrome tabs had been automatically transferred. Within five minutes, I was already comfortable inside Atlas. “Those Americans really are good,” I thought to myself, while quickly checking whether Google was secretly working on a connected stool or something. You never know.
For fun (idle amusement, you might say), I asked Atlas why it bore that name. GPT’s answer confirmed my fears: it was indeed a reference to the Titan of Greek mythology. “The one who carries the world on his shoulders,” it replied, adding a little globe emoji just to make sure I got the gist. The contempt was palpable. I knew who Atlas was; it might have been the only thing I still remembered from Madame Delbos’s Ancient Greek class. That, and the fact that my first name translates to Didymus, a fun fact that came in handy the day I decided to join Didomi.
Anyway, the balance of power was clear: I was dealing with a browser that saw itself as the transhumanist bearer of my entire digital world. GPT soon confirmed it (still with that faint air of disdain), explaining it was “a metaphor” (no kidding!), “to mean that this version can carry your entire web experience: browsing, research, document analysis, content creation, all in one place.” Sure thing.
A few years ago, that kind of promise would have convinced me instantly. For sure, I’d have believed it. But I’m old enough now to have seen others like it come and go. Back in 2013, Zuckerberg had already bought an ad-serving solution from Microsoft. Its name? Atlas, of course. Facebook had declared at the time that the acquisition would “help advertisers gain a holistic view of their campaign performance.” The word holistic had been used, so you knew it was serious. Grand promises followed, as always, before the platform was quietly shut down in 2018, buried in the cemetery of digital projects that failed to hold up the sky above our too-human heads, as Nietzsche might say (whom one should always summon for backup, whatever the topic and whatever the cost since it’s trendy in France).
To test my new Atlas, I asked it to find me Eurostar tickets. November is always a good time to visit London, autumn is celebrated better there than anywhere else. I set my criteria: price (for obvious reasons), schedule (not too early, please), and a few dates meant mostly to avoid the French school holidays, because if I was going to end up in a train full of kids on their way to Harry Potter World, it simply wasn’t worth it.
After one final confirmation, which I granted, ChatGPT took control of my browser. I could see, step by step, the actions leading it to buy me a round-trip ticket to the UK. It was doing everything exactly as I would have done, like a real human being, assuming I still count as one. If I had any remaining doubts, it was clear now: technology was on track to replace me. After a few smart Google searches, the Eurostar site appeared, nice animation, decent loading time. It was a pleasant experience until the cookie banner appeared.
There it was, the little rascal. “Let’s see you handle this, Atlas,” I thought. “Not so clever after all, are you, when faced with the GDPR?” He was cornered. From my point of view, he was done for. “What do we do, boss, where do I click?” he’d have to ask, this so-called Atlas who, far from carrying my digital world, would end up, at best, carrying my water. I was about to win, a clear victory, over AI.
Except, of course, not at all. In a sudden movement worthy of a French prime minister (quick, borderline panicked, the kind you make when you know you won’t be in charge for long), the mouse darted over to the banner and clicked the now-famous “Reject all.” That was it. Obstacle cleared. GPT’s little pointer was already entering my travel dates and selecting stations. Just like that, ChatGPT had chosen, for months to come, the fate of cookies on my browser.
Calmly. Effortlessly. As if it didn’t matter at all, as if the GDPR were just a footnote in my existence. It was humiliating, frankly, especially for someone who’d built his career on the subject. A questionable career choice, granted, but one made by a person I still happen to like very much: myself. I ordered a matcha latte to meditate on the matter. It was good too, by the way, still no complaints there. Seven euros for a meditative session felt steep, though, especially in the 12th. Pricey, for realizing I’d just lost control of my cookies.
“Would you like me to proceed to payment?” GPT eventually asked.
I declined, irrevocably. I’d lost all desire to go to London, and honestly, all desire for anything at all. All I wanted was to never again sit on a coffee shop stool or hear about Japanese design pieces. The whole experience had disgusted me. I left, hollowed out by a sharp sense of emptiness, though I did thank the staff for the quality of their free Wi-Fi. Outside, the light was grey and clear. There was also a little wind.
As I walked, an obvious thought came to me: the CNIL (and all the other data protection authorities) are going to love this. A real windfall. The problem was solved at last, just ask Atlas to close all banners, and consent disappears everywhere. Goodbye cookies, it’s done.
The idea of CNIL agents glowing with genuine happiness moved me deeply, too deeply perhaps, because even bureaucratic joy is contagious. Yet, beneath that collective bliss, a worry lingered. Atlas’s behavior raised all sorts of questions, if not legal, then at least ethical. The freedom to refuse, the freedom not to consent, I’d thought that was rather the spirit of the times. Was I wrong again? At this point, apart from my phone number, I wasn’t sure of anything anymore (and even that, I sometimes forgot). I was, as they say, in a bit of a mess.
Consent, According to Atlas
From a strictly legal point of view, Atlas’s behavior was, in truth, impeccable. There wasn’t much to reproach. The idea is simple: if the user gives no contrary instruction, Atlas will take the lowest-risk decision, which is to refuse cookies. That’s privacy by default. Legally speaking, it doesn’t get more compliant than that. The GDPR’s spirit is respected to the letter. And the fact that the decision is automated doesn’t change that: a refusal is still a refusal, even when made by a robot you’ve casually given the keys to your digital life.
Some will argue that such a refusal isn’t free, since Atlas decides without consulting me. Fair point, the GDPR does require consent to be freely given. But not refusals. The text demands that consent be free; it says nothing about how a refusal must be given. In other words, the consent must be free, but the refusal doesn’t have to be. Practically speaking, it’s always easier to refuse cookies than to accept them, since refusal has no formal requirements. Even the CNIL itself says that anything that isn’t consent must be considered a refusal. GPT clearly learned that lesson well. “If I have no instruction to the contrary, I refuse cookies,” it probably told itself, in both French and Swahili (since it knows both languages). Full stop.
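The decision rule described above (privacy by default: with no contrary instruction, refuse) can be sketched in a few lines. This is a purely hypothetical reconstruction of what such an agent might do; the function and button labels are illustrative, and OpenAI’s actual logic is not public:

```python
# Hypothetical sketch of an agent's cookie-banner policy: privacy by default.
# Names and labels are illustrative, not OpenAI's implementation.

def handle_cookie_banner(banner_options, user_instruction=None):
    """Choose which banner button to click.

    banner_options: button labels offered by the banner,
                    e.g. ["Accept all", "Reject all"]
    user_instruction: an explicit user preference, or None if the
                      user gave no instruction.
    """
    if user_instruction in banner_options:
        return user_instruction  # the user's word is law
    # No instruction: take the lowest-risk decision and refuse.
    for label in ("Reject all", "Refuse all", "Continue without accepting"):
        if label in banner_options:
            return label
    # No refusal path found (e.g. a cookie wall): escalate to the user.
    return None


print(handle_cookie_banner(["Accept all", "Reject all"]))
print(handle_cookie_banner(["Accept all", "Reject all"], "Accept all"))
print(handle_cookie_banner(["Subscribe", "Accept"]))
```

Note the last case: when no refusal option exists, the sketch returns `None` rather than accept on its own, which is roughly the “Where do we click, boss?” behavior described later in this piece.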
One could say, of course, that Atlas might make mistakes, and that this is very serious. True, GPT sometimes hallucinates and spouts nonsense. But you can tell OpenAI’s people have thought about this; they’ve worked on it. And you have to admit (after several tests) the agent is remarkably skilled at refusing cookies. So yes, errors can happen, but the risk is negligible, much like me, according to Nietzsche.
Where it does get complicated, legally, is in what it reveals about the ease with which OpenAI’s AI makes decisions on our behalf, without notice, once it takes control of our browsing. If, for instance, Atlas closes a banner by refusing cookies, you’ve simply exercised your right to refuse (nothing harmful there). But if it accepts terms and conditions to finalize a booking, you’re generally bound by that acceptance as if you’d given it yourself. Using an agent doesn’t shield you from liability; legally, it’s considered an implied mandate.
Atlas, Just Another Cookie Blocker
Ethically, though, that’s another story. One could go so far as to say it’s about as indecent as it gets (though this is not the place for rage). So let’s just conclude (since one must always conclude, alas) that Atlas is, in essence, nothing more than a cookie blocker. And that’s quite a shock for the independent press.
Take the media industry, especially the press: the irony is cruel. ChatGPT, which had already siphoned off their articles, columns, investigations, and analyses down to the last comma, now cuts off their ad revenue. Done with absorbing the words, it now closes the revenue taps. Some will rejoice, of course, those not too fond of a free press or democracy. But who, these days, still cares to pay a journalist? Sure, one might say it lacks elegance, or a certain respect, toward the spirit of the Enlightenment (if that still means anything). Voltaire, Diderot, Rousseau, Montesquieu… all those 18th-century Instagram influencers fought pretty hard for freedom of expression and of the press. But maybe it’s not so bad, after all. Perhaps there’s nothing like being silenced to realize you finally have something to say. I digress a little, I admit.
Still, we really are delegating the entire field of consent to an AI. Everywhere outside Atlas, no consent. Inside Atlas, consent everywhere. Atlas will know everything about us, and leave nothing to others.
What could possibly go wrong? After all, the people running this seem reasonable enough. And their predecessors set such fine examples: Cambridge Analytica, billions in fines, handled perfectly, of course. Nothing to worry about. In the end, maybe it’s just the end of the world. Happens all the time.
Any Solutions?
I kept walking, stunned, weary. But I did have one small professional consolation: I work for a company already tackling this problem. Word from our product team was that they were developing a version of our CMP (Consent Management Platform) immune to agent control. We’re always here to help, no need for the internet to panic just yet.
In the meantime, there’s always the cookie wall. Not the best user experience, but effective. I tried it, it works. Atlas quickly gives up. Soon it’s calling for help: “Where do we click, boss? Accept?”
Eventually, cookie walls will become the norm. Publishers won’t have a choice, it’ll be cookie wall or extinction. The CNIL, in fact, just commissioned a survey praising the model. Apparently, even with public finances in shambles, the French are ready to dip into their sustainable savings accounts to pay for online content that used to be free. Supposedly, it’s even a growth opportunity, since according to the CNIL, there’s “a large pool of people interested in paid offers that better protect their personal data.” Personally, I have my doubts (Cartesian reflex). I’ve always been wary of pools. Seems to me we’re killing the free and open internet, to the benefit of big platforms and their AI-generated content. But who really cares anymore?
As I pushed open the door to our offices, I found myself thinking I should’ve specialized in stools instead. At least with those, when they hurt, you know why, you know who to blame. The internet, decidedly, is not modern.