
AI meets your medical file: Why ChatGPT’s new health feature sparks both promise and privacy fears

Whether this feature will ultimately be seen as a breakthrough in consumer health empowerment or a cautionary example of privacy overreach will depend on how well the platform implements its protections.

Unni K Chennamkulath

OpenAI has officially rolled out a new feature within ChatGPT called ChatGPT Health, designed to let users connect their own medical records and wellness apps to the AI so that responses can be tailored more closely to their personal health situation. The company says this dedicated health space will help people make better sense of lab test results, prepare for doctor visits, understand diet and workout choices, and even explore insurance options based on an integrated view of their data.

OpenAI has framed the feature as a way of making existing health-related interactions with the chatbot more meaningful, noting that hundreds of millions of people already ask ChatGPT about health questions each week and that this step should help them feel more informed and confident about their wellness decisions.

OpenAI insists that ChatGPT Health is not a diagnostic or treatment tool and is not meant to replace professional medical advice, but rather to support everyday health discussions and decision-making with context grounded in users’ own data. To address privacy, the company says it has built the experience with enhanced privacy controls such as compartmentalised storage, added encryption, and the ability for users to connect and then disconnect data or delete health records from the system. It also claims that health conversations will not be used to train its broader models.

OpenAI is initially offering the feature via a waitlist to a limited group of users, with a plan to expand access more broadly in the coming weeks. Many of the integrations for medical records and fitness data are available only in the US at first, partly because of regional data protection laws.

Potential benefits

The potential benefits of this innovation are clear to many users and health-tech proponents. By having an AI that can reference real medical context — from previous tests to ongoing conditions — people could be better equipped to understand complex medical information, reduce the time they spend researching on their own, and even feel more prepared for appointments with clinicians.

This could be particularly valuable for people managing chronic conditions, interpreting confusing lab results, or trying to make lifestyle changes with informed guidance. Some medical professionals who have advised on the feature suggest that such tools can fill gaps in understanding between doctor visits and empower patients with more clarity about their health patterns when used responsibly and with proper safeguards.

Privacy concerns

At the same time, the very notion of uploading personal health records into a commercial AI platform has drawn strong criticism from privacy advocates and data security experts. Electronic health records contain some of the most sensitive and regulated categories of personal data, and once those records are uploaded to ChatGPT Health they are bound by OpenAI’s terms rather than by established healthcare privacy laws like HIPAA in the US.

Critics point out that without strict legal protections, companies can change their data usage policies at any time and that the safeguards currently described by OpenAI may not be enforceable or fully understood by everyday users. Some privacy advocates warn that giving an AI access to medical histories could expose individuals to unwanted legal or commercial risks, especially if the data were ever accessed through a court order or a security breach, or if it were mishandled by third-party services connected to the system. Skeptics also note that generative AI tools are inherently prone to errors and “hallucinations” — confidently presented but incorrect information — which could be dangerous in health contexts where accuracy matters and where misunderstanding a result could have serious consequences.

Accuracy issues

Beyond data privacy, there are broader questions about the appropriateness of AI in personal healthcare support. Because AI responses are based on patterns in data and probability rather than clinical judgement, there is always the risk that users may over-trust the tool’s suggestions or misinterpret its limitations. OpenAI’s emphasis that the feature is not a replacement for professional care may not fully mitigate these risks if users start relying on it for decisions that should be made with qualified physicians.

Essentially, ChatGPT Health stands at the intersection of significant opportunity and real concern. For many individuals, it may represent a useful way to organise and understand scattered health information and make mundane health-related tasks less daunting. At the same time, it underscores the growing tension between personalised digital services and the imperative to protect sensitive personal data, especially when market forces and regulatory frameworks have not fully caught up with rapid advances in AI technology.

Whether this feature will ultimately be seen as a breakthrough in consumer health empowerment or a cautionary example of privacy overreach will depend on how well OpenAI implements its protections and how users, healthcare professionals, and regulators respond to the very real data privacy questions it raises.
