One day in November, product strategist Michelle (not her real name) logged into her LinkedIn account and switched her gender to male. She told TechCrunch that she also changed her name to Michael.
She was participating in an experiment called #WearthePants, where women tested the hypothesis that LinkedIn’s new algorithm was biased against women.
For the past several months, some heavy LinkedIn users have complained about a drop in engagement and impressions on the career-oriented social network. The complaints follow an August statement from Tim Jurka, the company’s vice president of engineering, that the platform had “very recently” begun using large language models (LLMs) to surface content that is useful to users.
Michelle, who asked TechCrunch to withhold her identity, was skeptical of the change because she has more than 10,000 followers and ghostwrites posts on behalf of her husband, who has only about 2,000. Yet despite her larger following, she said, she and her husband tend to get about the same number of impressions per post.
“The only variable that mattered was gender,” she said.
Founder Marilyn Joyner also changed the gender on her profile. She has been posting consistently on LinkedIn for two years, but in recent months she has noticed a decline in the visibility of her posts. “When I changed my profile gender from female to male, I saw a 238% increase in impressions within one day,” she told TechCrunch.
Megan Cornish reported similar results, as did Rosie Taylor, Jessica Doyle Mekkes, Abby Nydam, Felicity Menzies, Lucy Ferguson, and others.
LinkedIn said its “algorithms and AI systems do not use demographic information, such as age, race, or gender, as signals to determine the visibility of content, profiles, or posts in a feed,” and that “a side-by-side snapshot of unique feed updates that are not fully representative or of comparable reach does not automatically imply unfair treatment or bias in a feed.”
Experts on social algorithms agree that while implicit bias may be at play, explicit sexism is probably not the culprit.
Data ethics consultant Brandeis Marshall told TechCrunch that a platform is a “complex symphony of algorithms that simultaneously and continuously pull on certain mathematical and social levers.”
“Changing your profile picture and name is just one such lever,” she said, adding that the algorithm is also influenced by, for example, how users consume and currently interact with other content.
“What we don’t know is all the other levers this algorithm uses to prioritize one person’s content over another person’s content. This is a more complex issue than people realize,” Marshall said.
Bro coded
The #WearthePants experiment started with two entrepreneurs: Cindy Gallop and Jane Evans.
They wanted to know whether gender was the reason so many women felt less engagement, so they asked two men to create and post the same content as them. Gallop and Evans both had sizable followings, with more than 150,000 followers combined, compared to about 9,400 for the two men at the time.
According to Gallop, her post reached just 801 people, while a man who posted the exact same message reached 10,408 people, more than his total follower count. Other women later joined in as well. Some, like Joyner, who uses LinkedIn to market her business, are concerned.
“I would love to see LinkedIn held accountable for any bias that may exist within its algorithms,” Joyner said.
However, LinkedIn, like other LLM-dependent search and social media platforms, provides few details about how its content selection models were trained.
Marshall said most of these platforms have an “inherently white, male, Western-centric perspective embedded in them” due to who trained the models. Researchers have found evidence of human biases, such as sexism and racism, in popular LLMs. That is because the models are trained on human-generated content, and humans are often directly involved in post-training and reinforcement learning.
Still, how individual companies implement AI systems is shrouded in algorithmic black box secrecy.
LinkedIn says the #WearthePants experiment failed to demonstrate gender bias against women. Jurka’s August statement was echoed by Sakshi Jain, LinkedIn’s head of responsible AI and governance, who said in another post in November that the company’s systems do not use demographic information as a visibility signal.
Instead, LinkedIn told TechCrunch that it tests across millions of posts to connect users with opportunities, and that it uses demographic data only for checks such as “to see if posts from different creators compete on an equal footing and the scrolling experience seen in the feed is consistent across viewers.”
LinkedIn is known for researching and adjusting its algorithms to provide a less biased experience for its users.
Marshall said an unknown variable likely explains why some women saw an increase in impressions after changing their profile gender to male. Participating in a viral trend, for example, can itself drive engagement. Some accounts had not posted in a while, and the algorithm may have been rewarding their return to posting.
Tone and writing style may also play a role. For example, Michelle said that during the week she posted as “Michael,” she changed her tone slightly and wrote in a simpler, more direct style, just as she did for her husband. In that time, the number of impressions increased by 200% and the number of engagements increased by 27%, she said.
She concluded that the system was not “obviously sexist” but seemed to treat communication styles commonly associated with women as “surrogates for devaluation.”
The stereotypical male writing style is thought to be more concise, while the stereotypical female writing style is imagined to be softer and more emotional. When LLMs are trained to promote writing that conforms to male stereotypes, that is a subtle, implicit bias. And, as we previously reported, researchers have determined that most LLMs are full of such biases.
Sarah Dean, an assistant professor of computer science at Cornell University, said platforms like LinkedIn often use a user’s entire profile, in addition to their behavior, to decide what content to boost. This includes the jobs on a user’s profile and the types of content they typically engage with.
“Someone’s demographics can affect ‘both sides’ of the algorithm: what they see and who sees their posts,” Dean said.
LinkedIn told TechCrunch that its AI system examines hundreds of signals to determine what to push to a user (including insights from that person’s profile, network, and activity).
“We conduct ongoing testing to understand what helps people find the most relevant and timely content for their careers,” the spokesperson said. “Member behavior also shapes the feed: what users click, save, and share each day, as well as which formats they like and dislike. This behavior naturally shapes what they see in the feed, along with updates from us.”
Chad Johnson, a sales professional, wrote in a LinkedIn post that the change deprioritizes likes, comments, and reposts. The LLM-based system “no longer cares about posting frequency or time of day,” Johnson wrote. “What matters is whether your writing demonstrates understanding, clarity, and value.”
All of this makes it difficult to determine the real cause of the #WearthePants results.
People just hate algorithms
Nevertheless, it seems that many people, regardless of gender, don’t like or understand whatever LinkedIn’s new algorithm is.
Data scientist Shailvi Wakhulu told TechCrunch that she has averaged at least one post a day for five years and once saw thousands of impressions per post. Now, she and her husband are lucky to see hundreds. “For content creators with large and loyal followings, this is discouraging,” she says.
One man told TechCrunch that his engagement has dropped by about 50% over the past few months. By contrast, another man said his posts’ impressions and reach increased by more than 100% over a similar period. “This is primarily because I write about specific topics for specific audiences, and that’s what the new algorithm rewards,” he told TechCrunch, adding that his clients are seeing a similar increase.
But in Marshall’s experience as a Black woman, posts about her professional expertise perform worse than posts related to her race. “If Black women only get interaction when they talk about Black women, but not when they talk about their particular expertise, that’s bias,” she says.
Dean believes the algorithm may simply be amplifying “all the signals that are already there.” A particular post may be rewarded not because of the writer’s demographics but because that kind of content has a longer history of drawing responses on the platform. Marshall may have encountered some other form of implicit bias, but her anecdotal evidence is not enough to determine that with certainty.
LinkedIn did offer some insight into what is currently working well. According to the company, its user base has expanded, resulting in a 15% increase in posts and a 24% increase in comments compared to the previous year. “This means increased competition in the feed,” the company said, adding that posts about professional insights, career lessons, industry news and analysis, and educational content about work, business, and economics all do well.
Mostly, though, people are just confused. “I want transparency,” Michelle says.
That is a big ask, however, because content-ranking algorithms have always been closely guarded company secrets, in part because transparency can invite exploitation. The secrecy is never satisfying.
