Lighting Up or Shooting Up?

Last week, two major verdicts against Facebook and Google prompted people to wonder whether this is social media’s “Big Tobacco Moment,” but there’s a better metaphor.

By Brad Berens

Last week, Meta (Facebook, Instagram) and Alphabet (Google, YouTube) suffered big legal setbacks that many are calling social media’s “Big Tobacco Moment.”

I created this image using Google’s Gemini.*

The cigarette metaphor is powerful, but it’s a misfire. We should really be calling this social media’s OxyContin Moment (or its heroin moment, or its fentanyl moment, or its cocaine moment).

What the verdicts are about

First, in New Mexico, a jury found Meta liable for failing to protect kids from online dangers and levied the maximum penalty: $375 million. That’s a big sum for an individual, but as The Wall Street Journal ($) observed, “Meta made 160 times that amount of revenue in its most recent quarter.”

Then, in Los Angeles, a jury found both Meta and Alphabet liable for creating addictive platforms (NYT $) that harmed the mental health of a plaintiff who was underage when she began using social media. Snap and TikTok settled before the case came to court. The jury awarded the plaintiff $6 million, with Meta to pay 70% and Alphabet to pay 30%.

The two big tech companies will stall and appeal, and they have the resources to do so, but thousands of other such cases are piling up.

What’s new about these cases is that they attack the AI-driven algorithms that social media platforms use to hyper-personalize what a person sees when he or she logs in. Previous attempts to hold big tech companies accountable for the harms they inflict have hit a brick wall: Section 230 of the Telecommunications Act of 1996, which freed digital companies from liability for anything users posted on their platforms. (I’ve written about this here.)

The current cases attack the big tech companies’ product design. The argument is that people can say anything they want on a platform, but that doesn’t mean the platforms have the right to use AI-driven algorithms to put those statements in front of other people who will engage with them and then (the most important thing to the companies) keep on using the platform.

As Sacha Baron Cohen and others have observed: freedom of speech is not freedom of reach.

If you pay attention to just one story about these verdicts, then I recommend that you listen to the March 26 episode of the “Your Undivided Attention” podcast from the Center for Humane Technology. It’s only 19 minutes long, but it will give you a sense of what’s at stake for our collective mental health in these cases. It also demonstrates the shocking callousness of the Meta C-Suite, who knew how dangerous their algorithms were to kids but chose profit instead.

For all that I loathe the shoddy thinking behind Jonathan Haidt’s bestselling book The Anxious Generation: How the Great Rewiring of Childhood is Causing an Epidemic of Mental Illness, I have to give him credit: I believe the book helped raise awareness about the dangers of social media. You don’t even need to read the book to get its (flawed) thesis: the title and subtitle say it all. In some ways, the book is a tweet with 385 pages of camouflage surrounding it.

Why the “Big Tobacco Moment” is a misfire

Tobacco companies engineered cigarette nicotine content to be super addictive in order to drive sales. Comparing social media to cigarettes—and then advocating that we should control minors’ access to it—frames the addictive qualities of social media as a cognitive developmental issue rather than an inherent one. “We have to protect kids from exposure to these dangerous things until they’re a little older and can handle them.”

This presumes that adults can handle social media, which is a big leap.

Alcohol is a better metaphor than cigarettes for this developmental argument: most adults who drink are not alcoholics, while most people who smoke are chemically dependent on nicotine.

The problem with the developmental argument is that social media is inherently and structurally addictive. In her book Dopamine Nation, Stanford professor and psychiatrist Anna Lembke argues that social media “can cause the release of large amounts of dopamine into our brains’ reward pathway all at once, just like heroin, or meth, or alcohol.” (Scientific American has a new overview of the research around this issue.)

Adults are vulnerable to dopamine addiction; it isn’t a developmental issue. The comparison we should be making is to OxyContin. The Sackler family has been held liable for a nationwide epidemic of opioid addiction and is paying upwards of $7 billion in penalties. Many more adults are addicted to OxyContin than adolescents or children: according to the CDC, most opioid overdose deaths occur among adults aged 35 and older.

Children are vulnerable, but so are the rest of us.

Is there hope?

Yes. In addition to the lawsuits, in 2024 California passed the “Protecting Our Kids from Social Media Addiction Act,” which requires parental consent before the big tech companies can algorithmically target minors. That same year, New York passed the SAFE for Kids Act (SAFE is short for “Stop Addictive Feeds Exploitation”). These laws don’t help adults, but they are steps in the right direction.

Better still, the New York state legislature is currently considering SAFE for All. This bill would require “a setting which allows a social media user to turn off algorithmic recommendations and other notifications; prohibits the use of dark patterns to subvert choice or inhibit users from accessing certain mechanisms.”

To my mind, this doesn’t go far enough. Algorithmic feeds should be opt-in rather than opt-out. (Although, since it’s impossible to opt out now, even that would be an improvement.) Users should have to assert that they want personalized feeds, and they should be required to opt in every time they log into the platform.

Moreover, people should be able to control what they see from other people. There should be a “degrees of separation” feature where I can agree to see things from my friends only, or my friends and their friends, or my friends and their friends and their friends, and so on.

The figure on the right shows how this collection of options would look:

The bottom-left square represents the setting where the user would see only what their friends post, in reverse chronological order. The bottom-right square would expand the degrees of separation so the user would see more, but still in reverse chronological order. The top-left square would personalize the user’s feed so that whatever the algorithm thought most relevant would come first, but still restrict the content to friends only.

The top-right square, where we get everything the algorithm thinks will keep us on the platform, is what we have today. Which is to say that we don’t have any options.
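For readers who think in code, the grid above boils down to two independent, user-controlled settings: feed ordering and degrees of separation. Here is a minimal sketch of how those settings might filter and order a feed; all the names, fields, and defaults are hypothetical, not any platform’s actual API:

```python
from dataclasses import dataclass
from enum import Enum

class Ordering(Enum):
    REVERSE_CHRONOLOGICAL = "reverse_chronological"  # bottom row of the grid
    ALGORITHMIC = "algorithmic"                      # top row of the grid

@dataclass
class FeedSettings:
    # Degrees of separation: 1 = friends only, 2 = friends of friends, etc.
    degrees_of_separation: int = 1
    # Opt-in by design: chronological unless the user asks for personalization
    ordering: Ordering = Ordering.REVERSE_CHRONOLOGICAL

def build_feed(posts, settings):
    """Filter posts by social distance, then order them per the user's choice.

    Each post is a dict with hypothetical fields: 'timestamp', 'distance'
    (hops from the user in the social graph), and 'relevance' (the
    platform's engagement score).
    """
    visible = [p for p in posts if p["distance"] <= settings.degrees_of_separation]
    if settings.ordering is Ordering.REVERSE_CHRONOLOGICAL:
        return sorted(visible, key=lambda p: p["timestamp"], reverse=True)
    return sorted(visible, key=lambda p: p["relevance"], reverse=True)
```

The point of the sketch is that the engagement-maximizing feed (top-right square) is just one of four possible combinations, and the default here is the bottom-left one: friends only, newest first.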

But that might change.

Coda

What I haven’t discussed is the impact that these possible changes to social media platforms would have on media and advertising, which is, to use a technical term, humongous. Meta and Alphabet make the majority of their vast revenues from highly targetable and effective advertising.

If any of these changes take effect, they will reduce how much people use social media, which will reduce how many ads the companies can sell. They will also reduce the effectiveness of that advertising, since with optional algorithms the companies will know less about their users than they do now. These ultra-profitable companies would then become a lot less profitable.

This is why the big tech companies will fight fang and claw to prevent any of it from happening.

As Guido the killer pimp said in Risky Business, “In a sluggish economy, never, ever fuck with another man’s livelihood.”
__________

Brad Berens is the Center’s strategic advisor and a senior research fellow. He is principal at Big Digital Idea Consulting. You can learn more about Brad at www.bradberens.com, follow him on Blue Sky and/or LinkedIn, and subscribe to his weekly newsletter (only some of his columns are syndicated here).


* Image Prompt: “Two figures in silhouette, one facing to the left lighting a cigarette, one facing to the right injecting himself with a hypodermic needle.”

See all columns from the Center.

April 1, 2026