Big Tech wants to access your health records
OpenAI’s launch of ChatGPT Health this month is set to capitalise on a behavioural pattern that has already taken hold.
Millions of people use AI chatbots to make sense of symptoms, test results and medical jargon, often even before speaking to a qualified clinician.
The difference now is that the technology has been invited closer to the source data itself.
ChatGPT Health allows users to connect medical records and wellness apps such as Apple Health or MyFitnessPal, generating responses grounded in their own information.
Sam Altman’s firm says the tool is not intended for diagnosis or treatment, but rather to help users better understand their health and prepare for conversations with professionals.
Health-related questions are already one of the most common uses of ChatGPT, with the company stating that over 230 million people globally use it to ask health and wellness questions every week.
Separate research from GWI found that 26 per cent of ChatGPT users sought health advice in the past month alone, a figure that spans all age groups.
For OpenAI, then, the launch is less a pivot than a consolidation of a firmly established use case.
Existing demand
Health data is typically scattered across hospital letters, GP portals, PDFs and fitness apps. ChatGPT’s new health function is designed to bring that information together, allowing users to ask questions across those disparate sources.
The company has said that health conversations sit in a dedicated, encrypted space, separate from other chats, and that they will not be used to train any models.
Notably, users have to opt in to connect records or apps, and are able to remove access at any time.
OpenAI says the product was developed in collaboration with over 260 physicians across 60 countries, and that its responses are evaluated against clinical standards, using an internal framework.
“This is about helping people feel more informed and prepared”, the company claimed in its announcement.
Trust becomes key
The scale of adoption, however, raises questions around trust and governance. ChatGPT Health remains a consumer product, rather than a regulated medical device.
In the US, it doesn’t fall under the Health Insurance Portability and Accountability Act (HIPAA), which means users rely on company policies rather than statutory protections when sharing sensitive data.
That distinction matters, especially as Big Tech firms expand into healthcare-adjacent services.
“When a company asks hundreds of millions of people to upload medical records to a centralised platform, the question becomes why the architecture requires that level of trust in the first place”, said Eric Yang, chief executive of AI lab Gradient.
“Health data is among the most sensitive information people have”.
OpenAI has said health data is encrypted at rest and in transit, stored separately from other chats, and shared only with user consent or in limited circumstances outlined in its privacy policy.
Accuracy and accountability
Alongside privacy, accuracy remains a concern: AI systems are known to generate confident but incorrect responses, a problem that becomes far more serious in health contexts.
Alex Ruani, doctoral researcher in health misinformation at UCL, warned that ChatGPT Health is not subject to mandatory safety testing or post-market surveillance.
“There are no published studies specifically testing the safety of ChatGPT Health”, he said. “The way responses are presented can make it difficult for users to distinguish between general information and medical advice”.
And, while OpenAI has pointed to its physician collaboration and evaluation processes, it has not yet published any data on error or hallucination rates in these sensitive scenarios.
Anthropic enters the ring
OpenAI isn’t alone in the space. Anthropic this week launched its own Claude for healthcare function, a tool aimed more squarely at clinicians and healthcare organisations, with integrations designed to meet HIPAA requirements.
Google, which previously faced backlash over health data projects, has so far taken a more cautious approach.
The commercial incentive is clear: healthcare remains data-heavy, fragmented and expensive.
AI tools that reduce friction, even at the level of explanation and administration, have obvious appeal to both users and Big Tech firms.
Max Sinclair, founder of consumer AI firm Azoma, said ChatGPT Health also signals a broader shift in how platforms position themselves.
“ChatGPT is becoming a trusted intermediary,” he said. “Once users rely on it to interpret health information, that trust can extend into lifestyle and purchasing decisions as well.”
For users, ChatGPT Health is likely to be most useful in low-stakes scenarios like understanding terminology, spotting trends, or preparing questions ahead of appointments.
But the launch marks another step in Big Tech’s gradual move into domains traditionally governed by stricter rules. And as adoption grows, the balance between convenience, trust and oversight will come under increasing scrutiny.