Smart Breaking News on AI, Business, Politics & Global Trends | WhistleBuzz
AI

In Harvard University study, AI provided more accurate emergency room diagnoses than two human doctors

By Editor-In-Chief | May 3, 2026


New research investigates how large language models perform in a variety of medical situations, including real-life emergency room cases, where at least one model appeared to be more accurate than human doctors.

The study, published this week in the journal Science, is the work of a research team led by doctors and computer scientists from Harvard Medical School and Beth Israel Deaconess Medical Center. The researchers said they conducted various experiments to measure how OpenAI’s models compared to human doctors.

In one experiment, researchers focused on 76 patients who came to Beth Israel’s emergency room and compared the diagnoses provided by two attending internists with those generated by OpenAI’s o1 and 4o models. These diagnoses were then evaluated by two other primary care physicians, who were not told which diagnoses came from humans and which came from the AI models.

“At each diagnostic touchpoint, o1 performed nominally better than or equal to the two primary care physicians and 4o,” the study said, adding that the difference was “particularly pronounced at the first diagnostic touchpoint (early ER triage), when the least information is available about the patient and making the right decision is most urgent.”

In a press release from Harvard Medical School about the study, the researchers emphasized that “no data preprocessing was performed.” The AI model was presented with the same information that was available in the electronic medical record at the time of each diagnosis.

Armed with that information, the o1 model was able to provide “accurate or very close diagnoses” in 67% of triage cases. Meanwhile, one doctor was correct or very close to the diagnosis 55% of the time, and the other doctor was right 50% of the time.

“We tested our AI model against nearly every benchmark, and it outperformed both previous models and physician baselines,” Arjun Manraj, director of the AI Lab at Harvard Medical School and one of the study’s lead authors, said in a press release.


To be clear, this study does not claim that AI is ready to make real life-or-death decisions in emergency rooms. Instead, it said the findings demonstrate “an urgent need for prospective clinical trials to evaluate these technologies in real-world patient care settings.”

The researchers also noted that they only studied how the model behaves when provided with text-based information, and that “existing research suggests that current foundation models are more limited in their inferences over non-text inputs.”

Adam Rodman, a Beth Israel physician and one of the study’s lead authors, warned in the Guardian that there is “currently no formal framework for accountability” for AI diagnostics, and that patients still “want humans to guide them through life-and-death decisions and difficult treatment decisions.”

In a post about the study, emergency physician Kristen Pantagani said it was an “interesting AI study that led to some very hyped headlines,” especially because it compared AI diagnoses to those of internists rather than ER doctors.

“If you want to compare an AI tool to a doctor’s clinical capabilities, you should start by comparing it to a doctor who actually practices that specialty,” Pantagani said. “I wouldn’t be surprised if an LLM could beat a dermatologist on the neurosurgery board exam, but that’s not particularly helpful to know.”

“My main goal as an ER doctor seeing a patient for the first time is not to guess the final diagnosis. My main goal is to determine whether you have a potentially fatal disease,” she added.

This post and headline have been updated to reflect the fact that the study diagnosis came from the attending physician in internal medicine and to include comments from Kristen Pantagani.

If you buy through links in our articles, we may earn a small commission. This does not affect editorial independence.


