A closely watched California trial opened this week that could redefine how social platforms design for engagement. A Los Angeles jury will hear claims that Instagram and YouTube are “addiction machines” that harmed a young user, identified as K.G.M., by intentionally optimizing for time-on-platform. Plaintiff attorney Mark Lanier told jurors he will present internal emails and product decisions to show executives pursued growth targets with full knowledge of risks to children’s mental health.
Meta and YouTube are expected to counter that K.G.M.’s struggles stemmed from other life factors and that, under federal law, they aren’t liable for third-party content. The six-week proceeding will feature testimony from Mark Zuckerberg, Instagram head Adam Mosseri, and YouTube CEO Neal Mohan, alongside expert witnesses and former employees. Snapchat and TikTok settled with the plaintiff last month and are no longer defendants.
What’s on trial: design choices, not just content
The case is a test of a legal strategy surfacing nationwide: target the mechanics of social media – feeds, notifications, autoplay – not merely the content users see. In court, Lanier previewed a 2015 internal email in which Zuckerberg reportedly pressed for a 12% increase in “time spent” to hit business goals. For YouTube, the plaintiff argues the company steered children toward the main app rather than YouTube Kids because the main app commands higher ad rates, even as parents used it as a “digital babysitter.” Both companies reject the “addiction” framing and say they invest heavily in youth safety tools.
Europe is already calling out ‘addictive design’
Whatever the verdict, regulators abroad are moving faster. The European Commission issued preliminary findings that TikTok breached the Digital Services Act, saying features like infinite scroll and autoplay failed to mitigate harms to young users. Brussels suggested concrete changes – screen-time breaks and disabling endless feeds – and warned TikTok it faces fines up to 6% of global revenue if it does not comply. TikTok called the findings “categorically false” and plans to challenge them.
“The Digital Services Act makes platforms responsible for the effects they can have on their users,” EU tech chief Henna Virkkunen said, adding that enforcement would protect children online. Analyst Paolo Pescatore put it bluntly: the market is shifting from “maximize engagement” to “engineer responsibility.” If that stance hardens, design patterns long treated as growth engines could become regulatory liabilities.
Policy pressure expands: age gates and outright bans
Several governments are weighing blunt instruments. Spain plans to ban social media access for minors under 16 and to introduce criminal penalties for managers who ignore takedown orders, following moves in Australia and debates in the UK and Denmark. The details remain in flux, but the direction is clear: age gating will grow stricter, and enforcement teeth will sharpen. For platforms, that means investing in robust age verification, default-on content filters, and measurable risk-reduction features that satisfy auditors.
The incentives problem: ad-funded attention vs. trust
At the core of the courtroom and regulatory scrutiny is an old question with new urgency: do ad-driven business models nudge products toward engagement that overwhelms user well-being? Some AI players are choosing a different path. Anthropic said it will keep its Claude assistant ad-free, arguing ads are incompatible with sensitive, high-trust tasks. Even OpenAI’s Sam Altman recently called the combination of ads and conversational assistants “uniquely unsettling,” as his company tests banner ads in a lower-cost tier.
Step back, and a broader industry pivot is underway. Investors betting on enterprise AI point to revenue models that don’t depend on perpetual user attention – Anthropic’s growth with developer and business tooling is a case in point – suggesting a split between consumer attention platforms and enterprise value capture. For consumer platforms with ad-heavy P&Ls, that divergence raises strategic questions: how quickly can engagement-oriented UX patterns evolve if regulators and courts tie them to measurable risk?
What to watch next
- Discovery and testimony: Internal documents and executive testimony in Los Angeles could set a factual baseline for hundreds of similar cases and influence settlement math.
- Product changes: Expect more experiments with time limits, downranking, and disabling infinite scroll for teens – especially in Europe, where the DSA’s fines bite.
- Compliance costs: Age verification, safety audits, and design documentation will add real costs; ad yield may fall if youth engagement mechanics are curtailed.
- Business model risk: The more courts and regulators equate certain UX patterns with harm, the more valuable ad-light, trust-first models may look to boards and investors.
The question for platforms is not whether engagement matters – it does – but whether the industry can credibly show it can engineer for engagement without engineering dependency. Jurors in Los Angeles – and policymakers in Brussels and Madrid – are signaling they’re ready to test that claim.

