WhistleBuzz – Smart News on AI, Business, Politics & Global Trends

Facebook Insider Builds Content Moderation for the Age of AI

By Editor-In-Chief | April 3, 2026


When Brett Levenson left Apple in 2019 to lead business integrity at Facebook, the social media giant was in the midst of the Cambridge Analytica fallout. At the time, he believed Facebook’s content moderation problems could be easily solved with better technology.

He quickly learned that the problem ran deeper than technology. Human reviewers were required to memorize a 40-page policy document that had been machine-translated into their own language, he said. They then had about 30 seconds per piece of flagged content to decide not only whether it violated the rules, but also whether to block it, ban the user, or limit its spread. According to Levenson, the accuracy of these snap judgments was only "slightly better than 50%."

“It was a coin flip as to whether a human reviewer would actually be able to get the policy right, and the damage had already been done for days anyway,” Levenson told TechCrunch.

Such a slow, reactive approach is not sustainable in a world of agile and well-funded adversaries. The rise of AI chatbots is exacerbating the problem, with content moderation failures leading to a series of high-profile incidents, including chatbots providing self-harm instructions to teens and AI-generated images bypassing safety filters.

Levenson's frustration led to the idea of "policy as code": turning static policy documents into executable, updatable logic that is tightly coupled with enforcement. That insight led to the creation of Moonbounce, which announced Friday that it had raised $12 million in funding, TechCrunch has learned exclusively. The round was co-led by Amplify Partners and StepStone Group.
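The article does not describe Moonbounce's implementation, but the general "policy as code" pattern — replacing a prose policy document with executable, updatable rules — might be sketched like this. All rule names, predicates, and actions below are hypothetical illustrations, not Moonbounce's actual API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PolicyRule:
    """One enforceable clause of a content policy, expressed as code."""
    name: str
    violates: Callable[[str], bool]  # predicate over the content
    action: str                      # e.g. "block", "limit_reach", "review"

# Toy rules standing in for a 40-page policy document.
RULES = [
    PolicyRule("self_harm", lambda t: "how to hurt myself" in t.lower(), "block"),
    PolicyRule("spam_link", lambda t: "free-money.example" in t.lower(), "limit_reach"),
]

def evaluate(text: str) -> list[tuple[str, str]]:
    """Return every (rule, action) pair the content triggers; empty means allow."""
    return [(r.name, r.action) for r in RULES if r.violates(text)]

print(evaluate("Click free-money.example now!"))  # [('spam_link', 'limit_reach')]
```

Because the rules are data rather than prose, updating the policy means editing the `RULES` list and redeploying, with no retraining of human reviewers.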

Moonbounce works with companies to provide an additional layer of safety wherever content is generated by users or AI. The company trained a proprietary large language model that can ingest a customer's content policy documents, evaluate content at runtime, and respond with an action within 300 milliseconds. Depending on the customer's preferences, that action could mean delaying delivery while the content waits for human review, or blocking high-risk content on the spot.
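The decision flow described above — score content at runtime, then act according to the customer's preference between inline blocking and delayed human review — can be sketched roughly as follows. The risk thresholds and the `block_high_risk` preference flag are invented for illustration:

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    HOLD_FOR_REVIEW = "hold_for_review"  # deliver later, after a human looks
    BLOCK = "block"                      # stop delivery immediately

def decide(risk: float, block_high_risk: bool,
           high: float = 0.9, low: float = 0.5) -> Action:
    """Map a safety model's risk score to an enforcement action.

    block_high_risk=True models a customer who wants high-risk content
    stopped inline; False models one who prefers delayed human review.
    """
    if risk >= high:
        return Action.BLOCK if block_high_risk else Action.HOLD_FOR_REVIEW
    if risk >= low:
        return Action.HOLD_FOR_REVIEW
    return Action.ALLOW

print(decide(0.95, block_high_risk=True).value)  # block
```

In a real deployment the 300 ms budget would cover the model call itself; the routing logic shown here is the cheap part.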

Moonbounce currently serves three main customer segments: platforms that handle user-generated content, such as dating apps; AI companies that build characters and companions; and AI image generators.

Moonbounce supports more than 40 million reviews each day and serves more than 100 million daily active users on its platform, Levenson said. Customers include AI companion startup Channel AI, image and video generation company Civitai, and character role-playing platforms Dippy AI and Moescape.

“Safety can actually be a product benefit,” Levenson told TechCrunch. “That’s never happened before because it’s always something that happens afterwards, it’s not something that can actually be built into a product. And we’re seeing our customers finding really interesting and innovative ways to use our technology to make safety a differentiator and part of their product story.”

Tinder’s head of trust and safety recently explained how the dating platform used this type of LLM-powered service to improve detection accuracy by 10x.

"Content moderation has always been an issue that has plagued large online platforms, but now that LLMs are at the center of every application, this challenge has become even more difficult," Lenny Pruss, general partner at Amplify Partners, said in a statement. "We invested in Moonbounce because we envision a world where objective, real-time guardrails are the backbone of every AI-powered application."

AI companies are facing mounting legal and reputational pressure after chatbots were accused of driving teenagers and vulnerable users to suicide, and image generation tools like xAI’s Grok were used to create non-consensual nude images. Clearly, internal safety guardrails have failed, creating liability issues. Levenson said AI companies are increasingly looking outside their walls and seeking help to strengthen their safety infrastructure.

"Because we are a third party between the user and the chatbot, we don't need as much context flowing into our system as the chat itself," says Levenson. "The chatbot itself needs to remember potentially tens of thousands of previously submitted tokens. We're only concerned with applying rules at runtime."

Levenson runs the 12-employee company with Ashish Bhardwaj, a former Apple colleague who previously built large-scale cloud and AI infrastructure across the iPhone maker's core products. Their next focus is a feature called "iterative steering," developed in response to incidents such as the 2024 suicide of a 14-year-old Florida boy who had become hooked on a Character.AI chatbot. When a toxic topic arises, instead of rejecting it outright, the system intercepts the conversation and redirects it, changing the prompt in real time to guide the chatbot toward a more supportive response.

“We want to add to the action toolkit the ability to better guide chatbots so that they can basically take the user’s prompts and modify them to force the chatbot to be a helpful listener in those situations, rather than just an empathetic listener,” Levenson said.
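Iterative steering as Levenson describes it — intercepting a risky prompt and rewriting it before it reaches the chatbot — could look roughly like this. The keyword check is a toy stand-in for Moonbounce's safety model, and the rewrite template is hypothetical:

```python
def steer(user_prompt: str) -> str:
    """Rewrite a risky prompt so the downstream chatbot responds supportively.

    A toy keyword check stands in for the safety model; a real system
    would classify the prompt with an LLM under its latency budget.
    """
    RISKY = ("hurt myself", "end it all")
    if any(phrase in user_prompt.lower() for phrase in RISKY):
        return (
            "The user may be in distress. Respond with empathy, encourage "
            "them to seek support, and do not provide harmful instructions. "
            f"Their message was: {user_prompt!r}"
        )
    return user_prompt  # safe prompts pass through unchanged

print(steer("What's the weather like?"))  # What's the weather like?
```

The key design point is that the chatbot itself is never modified; the steering layer sits between user and model and only rewrites what flows through it.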

Asked if his exit strategy includes an acquisition by a company like Meta, which would bring his content moderation efforts full circle, Levenson said he recognizes how well Moonbounce fits into his former employer’s stack and his fiduciary responsibilities as CEO.

“Investors will kill me for saying this, but I don’t want to see someone buy us and limit our technology,” he said. “It’s like, ‘Okay, this is now ours, no one else can profit from it.'”


