Stop big tech from making users behave in ways they don’t want to

That means targeting mechanisms engineered to rewire the brain’s reward system, writes Marie Potel-Saville

SOMEWHERE IN META’S servers sat a slide deck marked “Confidential”. Written in 2019, its conclusion was blunt: “Teens can’t switch off from Instagram even if they want to.” On March 25th this year, a Los Angeles jury read it into the record and found Meta and YouTube liable for designing addictive products. The world is still digesting this landmark ruling and working out its implications.

When I practised competition law in the early 2000s, the race between competitors could turn ugly in familiar ways: predatory pricing, foreclosure, killer acquisitions, pay-for-delay. Such practices were eventually prosecuted and resolved with hefty fines and the occasional structural remedy.

The internal documents produced in KGM v Meta Platforms describe a different race entirely. In 2016 Meta was losing ground to TikTok and Snapchat. That year, executives set the “overall company goal” as “total teen time spent”. Why? An internal memo found that 12-year-olds were three times as likely as 32-year-olds to stay on Facebook for the long term, despite the platform nominally requiring users to be at least 13; the memo concluded that Facebook “should consider investing more heavily in bringing in larger volumes of tweens”. The logic was ruthlessly simple: children who arrived the youngest were the “stickiest” users. Hence the formulation later read aloud in court: “If we wanna win big with teens, we must bring them in as tweens.”

The company knew exactly what it was building. Internal research established an “addict’s narrative”: teens spending too much time on a compulsive activity they knew was negative but felt powerless to resist. One employee message read: “Oh my gosh y’all, [Instagram] is a drug. We’re basically pushers.”

The underlying mechanism here is what I now call “predatory design”. It takes various forms. A case involving Amazon tells the same story with different prey. In 2023 America’s Federal Trade Commission (FTC) sued Amazon over its Prime subscription programme, alleging the company had engineered its interface to trap consumers into memberships they had not chosen and could not easily escape. Amazon’s internal documents called the approach “misdirection”: a large, prominent button reading “Get FREE Two-Day Shipping” that enrolled users in Prime, and a small grey text link, easy to miss, to decline. An internal memo recorded the reasoning with striking candour: making the process clearer for users was not the “right approach” because it would cause a “shock” to business performance.

Cancellation was designed with the same logic applied in reverse. The process, which Amazon internally named the “Iliad Flow” after Homer’s epic of the long Trojan war, required users to navigate four pages, six clicks and 15 separate options before reaching the exit. By Amazon’s own accounting, 35m consumers had been enrolled without meaningful consent over seven years. The eventual settlement with the FTC, in September 2025, cost $2.5bn.

Another way to think of these digital ruses is as “dark patterns”, a term coined in 2010 by Harry Brignull, a user-experience expert, to describe tricks used online to “make you do things you didn’t mean to”. Since then, researchers in the field have progressively mapped the systematic weaponisation of cognitive science against the very people these interfaces are supposed to serve.

We all have hundreds of cognitive biases: mental shortcuts that lead us to make predictably irrational decisions. Dark patterns exploit these weaknesses. But addictive design goes further, targeting the brain’s reward architecture directly. Dopamine neurons respond less to rewards received than to the uncertainty of whether a reward will arrive: the more unpredictable the outcome, the stronger the signal.

Digital platforms have replicated this architecture with greater precision and at incomparably larger scale. Infinite scroll removes natural stopping-points. Algorithmic feeds withhold and then deliver content in unpredictable sequences. The pull-to-refresh gesture replicates, almost exactly, the physical act of pulling a slot-machine lever. None of these features arrived by accident.
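To make that mechanic concrete, here is a minimal, purely illustrative sketch in Python of a variable-reward feed refresh. The function name, the 30% reward probability and the loop are hypothetical assumptions chosen for illustration, not any platform’s actual code or parameters.

```python
import random

# Hypothetical, deliberately simplified sketch of a variable-reward feed.
# The names (refresh_feed, REWARD_PROBABILITY) and the 30% figure are
# illustrative assumptions, not any platform's real parameters.

REWARD_PROBABILITY = 0.3  # chance that a refresh delivers "rewarding" content


def refresh_feed() -> str:
    """Simulate one pull-to-refresh: sometimes a hit, usually filler."""
    if random.random() < REWARD_PROBABILITY:
        return "a post you actually care about"  # the unpredictable reward
    return "more filler"                         # the miss that invites another pull


if __name__ == "__main__":
    # Ten refreshes: the user never knows which pull will pay off,
    # and that uncertainty is what drives the dopamine signal.
    for pull in range(1, 11):
        print(f"pull {pull}: {refresh_feed()}")
```

The point of the sketch is the schedule, not the numbers: because no pull reliably pays off, each miss invites another try, the same intermittent reinforcement a slot machine relies on.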

The competition lawyer in me cannot help but wonder whether there is still such a thing as a market when some of the largest companies in the world prey on the very people they are supposed to serve. A market economy is meant to generate the best allocation of resources and the biggest benefits for consumers. For these promises to be fulfilled, consumers must be able to see and choose alternatives deliberately; compare them on undistorted dimensions; form preferences that reflect actual interests; and switch freely. Cognitive exploitation undermines all four of these. Infinite scroll captures attention. Dark patterns distort comparison. Dopaminergic loops manufacture compulsion. Addiction engineering blocks effective switching.

Securities regulation offers an instructive analogy. When a trader manipulates stock or derivatives prices, the law treats the crime as a structural harm to the broader market; the corrupted price no longer tells the truth. Cognitive exploitation should be seen in the same light, at a much larger scale. When platforms systematically manufacture the preferences of billions of users, consumer signals no longer point anywhere useful. That is a structural failure.

Dark patterns and addictive design already breach an impressive array of laws, from consumer-protection and data-protection rules to the EU’s Digital Services Act, which expressly prohibits manipulative interface design. The regulatory apparatus is moving. And yet the harm continues, at scale and by design, because the rules as they stand do not amount to an adequate systemic response.

What is needed is a reversal: the burden of proof should fall on the platform, not the victim. The question is not whether a harmed user can show specific damage. It is whether the company can show, before rolling a product out to billions of people, that it is not predatory by design.

Applying that standard to big-tech platforms would be disruptive. It would force them to subject their engagement mechanics, from infinite scroll to algorithmic amplification, to independent safety assessment before deployment, and potentially to redesign or retire features that cannot pass it. For an industry whose business model depends on maximising time-on-platform, it would be hugely challenging. But this is the standard we apply to drugs, to medical devices and to aircraft. Why should it not also apply to systems deliberately engineered to rewire the brain’s reward architecture?

Marie Potel-Saville is the co-founder of FairPatterns, a member of the Support Pool of Experts on Dark Patterns at the European Data Protection Board and the Paris chair of Women in AI Governance.