On 6 February 2026, the European Commission issued preliminary findings that TikTok had breached the Digital Services Act (DSA) — not because of illegal content, not because of data misuse, but because of the way the platform was designed to keep users scrolling. Infinite scroll, algorithmic amplification of emotionally charged content, push notifications engineered to interrupt, and the deliberate absence of stopping cues: these are the features now at the center of a landmark EU enforcement action.
For anyone working on AI governance and digital regulation, this moment demands attention. It marks the first time the EU has directly targeted what researchers call addictive design — the architecture of compulsive engagement — as a regulatory violation in its own right.
A New Kind of Digital Harm
Until now, EU digital enforcement has largely focused on what platforms contain: illegal content, privacy violations, anticompetitive behavior. The TikTok case shifts the question to what platforms do — specifically, how they engineer user behavior at scale. The Commission's investigation focuses on design features that behavioral science identifies as drivers of compulsive engagement: variable reward mechanisms, negativity-biased algorithmic curation, and the elimination of natural stopping points that create what researchers describe as a continuous "attention capture loop."
This is a conceptually significant move. The harm being addressed is not a deceptive transaction, a stolen data record, or a piece of misinformation. It is the gradual, engineered erosion of a user's capacity to disengage — what might be termed cognitive exploitation. The potential fine of up to 6% of global annual turnover signals that the Commission treats this as a structural, not symbolic, matter.
The Legal Gap the Case Exposes
The DSA does not use the phrase "addictive design." The Commission's case rests primarily on Article 34, which requires Very Large Online Platforms (VLOPs) to identify and assess systemic risks, including negative effects on mental well-being and the rights of the child, on Article 35's corresponding duty to mitigate those risks, and on Article 25's prohibition of manipulative interface design. But the framework was built with a different kind of harm in mind.
Classical consumer protection law, including the Unfair Commercial Practices Directive, is designed around discrete transactional decisions: did the platform trick a user into making a purchase they wouldn't otherwise have made? Addictive design doesn't work that way. It doesn't target a transaction. It targets presence: sustained, compulsive engagement that users struggle to terminate even when they want to.
This exposes a fundamental limitation in how EU law currently conceptualizes autonomy. If autonomy means only informed consent plus transparency — explaining the algorithm, offering opt-outs — then addictive design will remain structurally under-regulated. Disclosing that a platform uses infinite scroll does not neutralize its psychological effects. The Commission's move toward systemic risk analysis represents an implicit, and potentially transformative, shift beyond the consent paradigm.
A Transatlantic Convergence
The EU is not alone in grappling with this challenge. In early 2026, opening arguments began in a landmark social media addiction trial in California — a consolidated set of lawsuits brought by families, school districts, and state attorneys general against Meta, YouTube, and others. TikTok and Snapchat settled before jury selection. The US cases are grounded in product liability and design defect theories under state tort law, asking whether engagement-maximizing platforms constitute defective products rather than protected editorial services.
The two approaches — EU administrative risk regulation and US private tort litigation — reflect genuinely different legal philosophies. The EU model is precautionary and systemic: the Commission does not need to prove that a specific child suffered a specific psychiatric injury. It must show that platform design generates systemic risks to fundamental rights. The US model is reactive and compensatory: plaintiffs must establish individualized harm, causation, and damages.
Yet both are converging on the same underlying question: can the behavioral engineering of attention be held legally accountable? If US courts accept that addictive design is a defect rather than a protected feature, and EU regulators require concrete interface modifications as risk mitigation, addictive platform architecture may become a transnational regulatory category. The stakes are high on both sides of the Atlantic.
Where Governance Research Meets Regulatory Reality
This is precisely where AI4POL’s work becomes directly relevant. The TikTok case is not just a legal proceeding — it is a live demonstration of the governance gaps AI4POL is designed to address.
AI4POL's Work Package 2 (WP2) examines the intersection of emerging AI and data technologies with EU regulation across three areas: restricted data access, self-preferencing in digital rankings, and online consumer manipulation. The TikTok addictive design case sits squarely in that third domain. The challenge regulators now face is not identifying that a problem exists — the Commission's preliminary findings make that clear — but developing the analytical tools to define it precisely, measure it rigorously, and enforce against it durably.
What does "risk mitigation" actually require of a platform whose entire engagement model is built around maximizing time-on-screen? What design indicators distinguish legitimate personalization from manipulative attention capture? How should regulators assess whether a platform's self-reported risk assessment is substantively adequate or merely procedurally compliant? These are not questions that doctrinal legal analysis alone can answer. They require the kind of interdisciplinary, technically grounded research that AI4POL is designed to produce.
The DSA's systemic risk framework must evolve into a more robust theory of digital compulsion if it is to genuinely protect mental autonomy, particularly for children, whose rights under the EU Charter provide the strongest constitutional foundation for structural intervention.
What Comes Next
The Commission's preliminary findings against TikTok mark a watershed, but not a conclusion. Whether this moment becomes transformative depends on what regulators, courts, and researchers do with it.
If addictive design is treated as a peripheral compliance issue — a box to check in a risk assessment — the DSA's potential will remain unrealized. If, however, systemic attention capture is recognized as a genuine threat to human dignity and cognitive self-determination, particularly for vulnerable users, the EU may be laying the foundation for a new generation of rights-based digital governance.
For that to happen, regulators need more than legal authority. They need actionable frameworks for defining manipulation, measurable indicators for identifying it, and governance models robust enough to anticipate the next generation of engagement technologies before they entrench. That is the work AI4POL is here to support.
This post draws on analysis published in EU Law Live Weekend Edition No. 264 (February 2026) by Pratiksha Ashok, Post-Doctoral Researcher at Tilburg University's TILT and TILEC institutes and a contributor to AI4POL Work Package 2.
by Pratiksha Ashok & Fernanda Sauca