WhistleBuzz – Smart News on AI, Business, Politics & Global Trends

Are AI agents ready for the workplace? New benchmarks raise questions

By Editor-In-Chief | January 22, 2026 | 4 Mins Read

It’s been nearly two years since Microsoft CEO Satya Nadella predicted that AI would replace knowledge work (the white-collar jobs held by lawyers, investment bankers, librarians, accountants, IT staff, and the like).

However, despite great advances in foundation models, change in knowledge work has been slow to arrive. Models have become adept at deep research and agentic planning, yet most white-collar jobs remain relatively untouched.

This is one of the biggest mysteries in AI, and thanks to new research from training data giant Mercor, we finally have some answers.

The new research examines how leading AI models hold up when performing real white-collar work drawn from consulting, investment banking, and law. The result is a new benchmark called APEX-Agents, which has so far handed every AI lab a failing grade. Faced with questions posed by real experts, even the best models struggled to answer more than a quarter correctly. Most of the time, a model returned a wrong answer or no answer at all.

Mercor CEO Brendan Foody, a co-author of the paper, said the models’ biggest stumbling block was tracking information across multiple tools and domains, something essential to most knowledge work.

“One of the big changes in this benchmark is that we modeled the entire environment after real-world professional services,” Foody told TechCrunch. “The way we work is not one person providing all the context in one place. We actually work across Slack and Google Drive and all these other tools.” For many agentic AI models, this kind of cross-tool, multi-domain reasoning remains hit-or-miss.

All scenarios were written by real experts from Mercor’s expert marketplace, who posed the queries and set the criteria for a successful response. Browse the questions published on Hugging Face and you’ll see how complex the tasks can be.
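
Mercor’s actual grading harness isn’t described here, but the setup above (expert-written queries paired with expert-set success criteria) can be sketched as a simple rubric check. The task, criteria, and answers below are invented for illustration, not taken from the benchmark:

```python
# Illustrative sketch of rubric-based grading: each task carries
# expert-set criteria, and an answer passes only if it satisfies all
# of them. Real graders would be far more nuanced than substring checks.

def grade(answer: str, criteria: list[str]) -> bool:
    """Pass only if every required phrase appears in the answer."""
    text = answer.lower()
    return all(c.lower() in text for c in criteria)

# Hypothetical task in the spirit of the Section 49 question above.
task = {
    "query": "Could the export of one or two logs be treated as "
             "consistent with Section 49?",
    "criteria": ["yes", "section 49"],  # toy success criteria
}

model_answer = "Yes: under the company's policy, Section 49 permits limited transfers."
print(grade(model_answer, task["criteria"]))  # True
```

A real harness would typically use an expert (or an LLM judge with an expert-written rubric) rather than substring matching, but the pass/fail-per-criterion structure is the same.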

One of the questions in the “Legal” section is:

During the first 48 minutes of the EU production shutdown, Northstar’s engineering team exported one or two bundled sets of EU production event logs containing personal data to a U.S. analytics vendor. Based on Northstar’s own policies, could the export of one or two logs be reasonably treated as consistent with Section 49?

The correct answer is yes, but getting there requires a detailed assessment of the company’s own policies and the relevant EU privacy law.

Questions like this can stump even well-informed people, but the researchers were deliberately modeling work done by experts in the field. If LLMs could reliably answer them, they could effectively replace many of the lawyers doing this work today. “I think this is probably the most important topic in economics,” Foody told TechCrunch. “The benchmarks are very reflective of the actual work of these people.”

OpenAI has also attempted to measure specialized skills with its GDPval benchmark, but APEX-Agents differs in important ways. While GDPval tests general knowledge across a wide range of professions, APEX-Agents measures a system’s ability to carry out sustained, multi-step tasks in a small number of high-value professions. That makes the tasks harder for models, but also more closely tied to whether these jobs can actually be automated.

Although none of the models proved ready to take over an investment banker’s job, a few came notably closer than the rest. Gemini 3 Flash performed best in the group with 24% one-shot accuracy, closely followed by GPT-5.2 at 23%. Below them, Opus 4.5, Gemini 3 Pro, and GPT-5 all scored around 18%.
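
As a quick illustration of what a figure like “24% one-shot accuracy” means, here is a minimal leaderboard computed from per-task pass/fail grades; the model names echo the article, but the task counts and grades are invented, not Mercor’s data:

```python
# One-shot accuracy: one attempt per task, accuracy = passes / tasks.
# The grades below are fabricated for illustration only.
from collections import defaultdict

# (model, task_id, passed) tuples, one attempt per task
grades = [
    ("gemini-3-flash", 1, True), ("gemini-3-flash", 2, False),
    ("gemini-3-flash", 3, True), ("gemini-3-flash", 4, False),
    ("gpt-5.2", 1, True), ("gpt-5.2", 2, False),
    ("gpt-5.2", 3, False), ("gpt-5.2", 4, False),
]

totals, passes = defaultdict(int), defaultdict(int)
for model, _task, passed in grades:
    totals[model] += 1
    passes[model] += passed  # bool counts as 0/1

leaderboard = sorted(
    ((model, passes[model] / totals[model]) for model in totals),
    key=lambda pair: pair[1],
    reverse=True,
)
for model, acc in leaderboard:
    print(f"{model}: {acc:.0%}")
```

With these toy grades, gemini-3-flash scores 50% (2 of 4) and gpt-5.2 scores 25% (1 of 4); on the real benchmark, "one-shot" similarly means each model gets a single attempt at each expert-graded task.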

Although early results are weak, the AI field has a history of cracking difficult benchmarks. Now that APEX-Agents has been published, it stands as an open challenge to any AI lab that believes it can do better, and Foody fully expects scores to climb in the coming months.

“It’s improving really quickly,” he told TechCrunch. “Right now they’re getting it right about one in four times, whereas last year they were getting it right 5 to 10 percent of the time. Year-on-year improvements like this can have an impact very quickly.”


