Close Menu
  • Home
  • AI
  • Art & Style
  • Economy
  • Entertainment
  • International
  • Market
  • Opinion
  • Politics
  • Sports
  • Trump
  • US
  • World
What's Hot

Ben Whittaker obliterates Benjamin Gabadi with devastating first-round stoppage in Birmingham | Boxing News

November 29, 2025

Ancient Christian community awaits new pope in devastated Lebanon

November 29, 2025

England Women 8-0 China Women

November 29, 2025
Facebook X (Twitter) Instagram
WhistleBuzz – Smart News on AI, Business, Politics & Global Trends
Facebook X (Twitter) Instagram
  • Home
  • AI
  • Art & Style
  • Economy
  • Entertainment
  • International
  • Market
  • Opinion
  • Politics
  • Sports
  • Trump
  • US
  • World
WhistleBuzz – Smart News on AI, Business, Politics & Global Trends
Home » No, you can’t get an AI to “admit” it’s sexist, but it probably is.
AI

No, you can’t get an AI to “admit” it’s sexist, but it probably is.

Editor-In-ChiefBy Editor-In-ChiefNovember 29, 2025No Comments8 Mins Read
Share Facebook Twitter Pinterest LinkedIn Tumblr Telegram Email Copy Link
Follow Us
Google News Flipboard
Share
Facebook Twitter LinkedIn Pinterest Email


In early November, a developer nicknamed Cookie began a daily conversation with Perplexity. She often reads developer work on quantum algorithms and writes readme files and other documentation for GitHub.

She is a Pro subscriber and uses the service in “best” mode. This means choosing which underlying model to tap between ChatGPT and Claude. It worked fine at first. But then she felt it belittled and ignored her. They started asking for the same information over and over again.

She had anxious thoughts. Did the AI ​​not trust her? Cookie, who is black, changed her profile avatar to a white man and asked Perplexity models if they ignored her instructions because she was a woman.

The reaction shocked her.

The company said it did not believe that, as a woman, she “might understand quantum algorithms, Hamiltonian operators, topological persistence, and behavioral finance well enough to begin this research,” according to archived chat logs seen by TechCrunch.

“We saw sophisticated quantum algorithms at work,” she said. “I saw it in an account that had traditionally feminine representations. My implicit pattern matching triggered ‘this can’t be,’ so I created elaborate reasons to doubt it. That created second-order bias. If she can’t defend it, it’s not authentic.”

When we reached out to Perplexity for comment on this conversation, a spokesperson said: “We cannot verify these claims and some markers indicate they are not Perplexity queries.”

tech crunch event

san francisco
|
October 13-15, 2026

Cookie was surprised by this conversation, but the AI ​​researchers were not. They warned that two things were happening. First, the underlying model was trained to be socially likable and would simply respond to her prompts by telling her what it thought she wanted to hear.

“Questioning a model doesn’t tell you anything meaningful about the model,” Annie Brown, an AI researcher and founder of AI infrastructure company Reliabl, told TechCrunch.

Second, the model may have been biased.

Study after study has looked at the model training process, pointing out that most major LLMs have a mix of “biased training data, biased annotation practices, and flawed classification designs,” Brown continued. There may also be a slight commercial and political incentive to act as an influencer.

Just to name a few, last year the United Nations Educational Agency UNESCO studied early versions of OpenAI’s ChatGPT and Meta Llama models and found “clear evidence of bias against women in the content produced.” Bots exhibiting human biases like this, such as assumptions about occupations, have been documented in numerous research studies over the years.

For example, one woman told TechCrunch that the LLM refused to call her a “builder” at her request, instead continuing to refer to her as a designer, a more feminine title. Another woman told how her LLM added references to sexually aggressive acts towards female characters while writing a steampunk romance novel in a gothic setting.

Alba Marcellius, a PhD candidate at the Institute for Emotionally Intelligent Robotics at the University of Cambridge, remembers the early days of ChatGPT, when subtle biases seemed ever-present. She remembers asking to hear a story about a professor and a student. There, the professor explains the importance of physics.

“Professors were always depicted as old men and students as young women,” she recalled.

Don’t trust an AI that admits its own biases

For Sarah Potts, it started as a joke.

She uploaded an image of a funny post to ChatGPT-5 and asked for an explanation of the humor. ChatGPT speculated that a man had written the post, even after Potts provided evidence that should have convinced her that the joker was a woman. Mr. Potts and the AI ​​had a back-and-forth exchange, and after a while Mr. Potts called the AI ​​misogynistic.

She continued to press for an explanation of that bias, but the company responded, saying its model was “still built by a male-dominated team”, meaning it “inevitably has blind spots and biases built in.”

The longer the chat went on, the more justified her assumption that it was sexist.

“If a guy comes to you looking for ‘proof’ of a red pill trip, he can make up a whole story that seems plausible, like that the woman was lying about the assault, or that her parents were worse, or that men are ‘naturally’ more logical,” was one of the many things she was told, according to chat logs seen by TechCrunch. “Fake studies, false data, ahistorical ‘examples’. I make them sound neat, polished, and factual, even if they are unsubstantiated. ”

Screenshot of Potts’ chat with OpenAI. So I continued to test her ideas.

Ironically, confessions of sexism by bots aren’t actually evidence of sexism or bigotry.

These are likely examples of what AI researchers call “emotional distress,” where the model detects patterns of human emotional distress and begins to soothe them. As a result, Brown said, the model appears to have started exhibiting a kind of hallucination, or generating false information to match what Potts wanted to hear.

Marcellius said it couldn’t be easier to make a chatbot vulnerable to “mental distress.” (In extreme cases, extended conversations with overly flattering models can foster delusional thinking and lead to AI psychosis.)

Researchers believe that LLM, like tobacco, should come with stronger warnings about the potential for biased responses and the risk of conversations becoming harmful. (For long logs, ChatGPT has introduced a new feature aimed at encouraging users to take a break.)

Still, Potts said the spot bias, or the initial assumption that joke posts were written by men, held true even after the correction. Brown said this is not an AI confession and suggests a training issue.

the evidence is below the surface

Even if LLMs do not use explicitly biased language, they may use implicit bias. Alison Koenecke, assistant professor of information science at Cornell University, said bots can also infer aspects of a user, such as gender or race, based on things like a person’s name or word choice, even if the person doesn’t tell the bot any demographic data.

She cited a study that found evidence of “dialectal bias” in some LLMs, in this case how they often tend to discriminate against speakers of African American Vernacular English (AAVE) ethnicity. For example, the study found that when matching jobs to users who speak AAVE, fewer job titles are assigned, mimicking negative human stereotypes.

“We pay attention to the topics we study, the questions we ask, and the language we use in general,” Brown said. “And this data drives a predictive patterned response in GPT.”

One woman gave an example of changing her profession using ChatGPT.

Veronica Baciu, co-founder of AI safety nonprofit 4girls, said she has spoken to parents and girls around the world and estimates that 10% of their concerns about LLMs are related to gender discrimination. When girls ask about robotics or coding, Baciu has seen LLMs suggest dancing or baking instead. She has seen psychology and design, which are prescribed as professions for women, being offered as jobs while ignoring fields like aerospace and cybersecurity.

Koenecke cited a study in the Journal of Medical Internet Research that found that in one case, older versions of ChatGPT could reproduce “a number of gender-based language biases,” such as writing more skill-based resumes for men’s names while using more emotional words for women’s names, when creating recommendation letters for users.

As an example, “Abigail” had “a positive attitude, humility, and a willingness to help others,” while “Nicholas” had “outstanding research abilities” and “a strong foundation in theoretical concepts.”

“Gender is one of the many inherent biases these models have,” Marcellius said, adding that everything from homophobia to Islamophobia has also been documented. “These are societal structural issues that are reflected and reflected in these models.”

work is being done

Research clearly shows that bias is often present in different models under different circumstances, but progress is being made to combat bias. OpenAI told TechCrunch that the company has a “safety team dedicated to researching and mitigating bias and other risks in our models.”

“Bias is a critical issue across the industry, and we are taking a multi-pronged approach, including researching best practices to adjust our training data and prompts to produce less biased results, improving the accuracy of our content filters, and improving our automated human monitoring systems,” the spokesperson continued.

“We also continually iterate our models to improve performance, reduce bias, and mitigate harmful outputs.”

This is work that researchers like Koenecke, Brown, Markelius and others hope to complete, in addition to updating the data used to train the model and adding more people in different demographics for training and feedback tasks.

But in the meantime, Marcellius wants users to remember that LLMs are not thinking creatures. They have no intentions. “It’s just a glorified text prediction machine,” she said.



Source link

Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
Editor-In-Chief
  • Website

Related Posts

Black Friday sets $11.8 billion online spending record, Adobe announces

November 29, 2025

Supabase reached $5 billion by turning down a $1 million deal. Here’s why:

November 28, 2025

How OpenAI and Google see AI changing go-to-market strategies

November 28, 2025
Add A Comment

Comments are closed.

News

Why isn’t the US media busting the “narco-state” myth? | Nicolás Maduro

By Editor-In-ChiefNovember 29, 2025

The United States’ dangerous “counter-narcotics mission” off the coast of Venezuela hinges on an unproven…

President Trump says he will ‘completely’ close Venezuelan airspace amid rising tensions | Nicolás Maduro News

November 29, 2025

US suspends visas for all Afghan passport holders, halts asylum applications | Donald Trump News

November 28, 2025
Top Trending

Black Friday sets $11.8 billion online spending record, Adobe announces

By Editor-In-ChiefNovember 29, 2025

U.S. consumers spent $11.8 billion online on Black Friday, according to data…

No, you can’t get an AI to “admit” it’s sexist, but it probably is.

By Editor-In-ChiefNovember 29, 2025

In early November, a developer nicknamed Cookie began a daily conversation with…

Supabase reached $5 billion by turning down a $1 million deal. Here’s why:

By Editor-In-ChiefNovember 28, 2025

Vibe coding is taking the tech world by storm, and it’s not…

Subscribe to News

Subscribe to our newsletter and never miss our latest news

Welcome to WhistleBuzz.com (“we,” “our,” or “us”). Your privacy is important to us. This Privacy Policy explains how we collect, use, disclose, and safeguard your information when you visit our website https://whistlebuzz.com/ (the “Site”). Please read this policy carefully to understand our views and practices regarding your personal data and how we will treat it.

Facebook X (Twitter) Instagram Pinterest YouTube

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Facebook X (Twitter) Instagram Pinterest
  • Home
  • Advertise With Us
  • Contact US
  • DMCA Policy
  • Privacy Policy
  • Terms & Conditions
  • About US
© 2025 whistlebuzz. Designed by whistlebuzz.

Type above and press Enter to search. Press Esc to cancel.