Following the rollout of OpenAI’s ChatGPT Health, Anthropic on Sunday announced the introduction of Claude for Healthcare, a toolset for healthcare providers, payers, and patients.
Similar to ChatGPT Health, Claude for Healthcare allows users to sync health data from their phones, smartwatches, and other platforms (both OpenAI and Anthropic say their models do not use this data for training). However, Anthropic’s product is expected to be more sophisticated than ChatGPT Health, and will likely focus more on the patient-side chat experience as it is rolled out in stages.
Although some industry experts are concerned about the role of hallucination-prone LLMs in providing medical advice to patients, Anthropic’s “agent skills” look promising.
Anthropic added what it calls “connectors,” which let Claude access platforms and databases such as the International Classification of Diseases, 10th Revision (ICD-10); the National Provider Identifier (NPI) Registry; and PubMed. These connectors speed up research and report generation for payers and healthcare providers.
Anthropic explained in a blog post that Claude for Healthcare’s connectors can speed up prior authorization reviews, a process that requires doctors to submit additional information to insurance companies to determine whether a drug or treatment will be covered.
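For context, both PubMed and the NPI Registry expose public APIs, so the kind of lookup such a connector performs can be sketched directly. The snippet below is a minimal illustration using those public endpoints; it is not Anthropic’s implementation, and the query terms are invented for the example.

```python
import requests

# Illustrative only: these are the public PubMed E-utilities and NPI Registry
# endpoints, not Anthropic's connectors. How Claude's connectors are actually
# implemented has not been published.

def search_pubmed(query: str, max_results: int = 5) -> list[str]:
    """Return PubMed IDs (PMIDs) matching a query via the E-utilities API."""
    resp = requests.get(
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
        params={"db": "pubmed", "term": query, "retmode": "json", "retmax": max_results},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]

def lookup_npi(organization_name: str, limit: int = 5) -> list[dict]:
    """Look up providers in the public NPI Registry by organization name."""
    resp = requests.get(
        "https://npiregistry.cms.hhs.gov/api/",
        params={"version": "2.1", "organization_name": organization_name, "limit": limit},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])

if __name__ == "__main__":
    # Hypothetical queries, purely for demonstration.
    print(search_pubmed("semaglutide cardiovascular outcomes"))
    print([r["number"] for r in lookup_npi("Mayo Clinic")])
```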
“Clinicians often report spending more time on paperwork than actually seeing patients,” said Mike Krieger, Anthropic’s chief product officer, in a presentation about the product.
For physicians, submitting prior authorization documentation is more of an administrative task than one that requires specialized training or expertise. That makes it a more sensible candidate for automation than the actual process of giving medical advice…although Anthropic plans to automate that as well.
People already rely on LLMs for medical advice. According to OpenAI, 230 million people use ChatGPT to talk about their health every week, and Anthropic is no doubt observing that use case as well.
Of course, both Anthropic and OpenAI caution consumers that they should consult a medical professional for more reliable and customized guidance.
