Meta CEO Mark Zuckerberg leaves federal court in downtown Los Angeles on February 19, 2026, after defending the company in a landmark social media addiction trial.
John Putman | Anadolu | Getty Images
Ten years ago, Meta, then known as Facebook, hired social science researchers to analyze how its social networking services were affecting users. It was a way for the company and its peers to show they took seriously both the benefits and the potential risks of their innovations.
But as Meta’s court defeats this week show, that research can be turned against the company. Brian Boland, a former Facebook executive who testified in both the New Mexico and Los Angeles trials, said the damning findings of Meta’s internal research and documents appeared inconsistent with the company’s public statements. Jurors in both trials found that Meta’s services were defectively designed and endangered children.
Mark Zuckerberg’s company began scaling back its research teams several years ago, after Facebook researcher Frances Haugen became a high-profile whistleblower. Meanwhile, newer technology companies like OpenAI and Anthropic have invested heavily in researchers, tasking them with studying the effects of modern AI on users and publishing their findings.
At a time when AI is drawing so much attention for its negative effects on some users, those companies must ask themselves whether it is in their best interest to keep funding that research, or to suppress it.
“There was a time when internal teams were set up and started looking at these things. For a short period, there were some very good researchers looking at what was going on with these products with a little bit more freedom than I understand they have today,” Boland said in an interview.
The two losses Meta suffered this week stemmed from different lawsuits, but they shared a common theme: Meta did not share with the public what it knew about the harms of its products.

Jurors had to evaluate millions of corporate documents, including executive emails, presentations, and internal studies conducted by Meta staff. Those documents included internal research appearing to show an alarming rate of teenage users receiving unwanted sexual advances on Instagram. One study, which Meta ultimately discontinued, suggested that people who used Facebook less felt less depressed and anxious.
Plaintiffs’ attorneys did not rely solely on the internal research to make their case, but it helped strengthen their claims against Meta. Meta’s lawyers argued that certain studies were outdated, taken out of context, or misleading, giving a false impression of how the company operates and how it views safety.
“Both sides of the story”
“The jury was able to hear both sides and a very fair presentation of the facts, and make a decision based on what they saw,” Boland said. “And both juries, dealing with completely different cases, returned clear verdicts.”
Meta and Google’s YouTube, which is also a defendant in the Los Angeles trial, have indicated they intend to appeal.
Lisa Stroman, a psychologist and attorney who served as an expert consultant on the New Mexico case, said leaders at Meta and across the technology industry may have believed they could use internal research to their advantage and win public support.
“I think what they failed to realize is that researchers are parents and family members too,” Stroman said. “And I think what they didn’t understand was that these people weren’t going to be bought.”
Whatever public relations benefit the companies had hoped for, the strategy backfired once the research began reaching the public. The most damaging episode for Meta came in 2021, when Haugen, a former Facebook product manager turned whistleblower, leaked a trove of documents suggesting the company knew of the potential harms of its products.
Former Facebook employee Frances Haugen speaks during a hearing of the House Energy and Commerce Committee’s Communications and Technology Subcommittee on December 1, 2021, at the U.S. Capitol in Washington, D.C.
Brendan Smialowski | AFP | Getty Images
Kate Blocker, director of research and programs at the nonprofit Children and Screens: Institute of Digital Media and Child Development, said Haugen’s “disclosure marked a global turning point, not just for the companies themselves, but also for researchers, policy makers, and the broader public.”
The leak also prompted major changes at Meta and across the technology industry, as research that could be seen as damaging to the company began to be weeded out. CNBC previously reported that a number of teams studying suspected harms and related issues were cut.
Some companies also began removing tools and features from their services that third-party researchers had been using to study their platforms.
“While companies may now view internal research as a liability, independent third-party research must continue to be supported,” Blocker said.
Much of the internal research used in this week’s trials contained no new revelations, and many of the documents had previously been released by other whistleblowers, said Sasha Howarth, executive director of the Technology Oversight Project. What the trials added, Howarth said, were “the emails themselves, the words themselves, the screenshots themselves, internal marketing presentations, notes” that provided the necessary context.
As the tech industry races ahead on AI, companies like Meta, OpenAI, and Google are prioritizing products over research and safety. That is a worrying trend, Blocker said, noting that, “like social media before it, the public has limited knowledge about what AI companies are researching about their products.”
“Although AI companies seem to be primarily studying the models themselves (model behavior, model interpretability, and alignment), there is a large gap in research regarding the impact of chatbots and digital assistants on child development,” Blocker said. “AI companies have an opportunity to avoid repeating the mistakes of the past. We urgently need to establish systems of transparency and access that allow these companies to share what they know about their platforms with the public and support further independent evaluation.”