Balance AI and human intelligence for better healthcare

The quality of AI-guided diagnostic and therapeutic instruments depends on how extensive, representative and accurate the input data are.

"This is as big a game-changing innovation in human history as fire and electricity,” declared a speaker at a recent international conference on the future of healthcare, well-being and longevity. He was referring to artificial intelligence (AI). At that conclave, great hope was placed on technology as a transformational force for improving access, affordability and quality of healthcare delivery globally, with AI as the major propellant of that progress.

AI has already started making its presence felt in many areas of healthcare. The rush to extend its use to the whole spectrum of healthcare is spurred by several expectations: improved diagnostic and prognostic assessment; accurate risk prediction models for personalised healthcare and ‘precision’ public health interventions; better design and delivery of treatment protocols; relief from laborious documentation for doctors and nurses; increased efficiency of hospital management and health insurance systems; engaging interactions with patients; creation of ‘virtual medical assistants’; enhanced quality and pace of medical research; and faster design and development of new therapeutic agents.

The ability to predict the probability of developing cardiac rhythm abnormalities like intermittent atrial fibrillation from seemingly normal electrocardiograms has been a remarkable advance. People with such undetected arrhythmias are at higher risk of blood clots forming in the heart, which can dislodge and block the blood supply to other organs. Prescribing blood thinners to such people helps prevent debilitating strokes caused by cerebral embolism.

Automated diagnosis in radiology and organ imaging has extended from X-rays to computerised tomography and magnetic resonance imaging. AI is being used in cancer diagnostics, in the detection and treatment of life-threatening bloodstream infections such as septicaemia, and in the design of treatments for rare diseases, besides being developed as a diagnostic tool for neurological and mental health disorders.

Several concerns have emerged from the use of AI with regard to accuracy and safety. The quality of AI-guided diagnostic and therapeutic instruments depends on how extensive, representative and accurate the input data are. Confidentiality of patient data and the misuse of AI-derived instruments to deny medical insurance benefits have also become concerns. Regulation of AI is still in a foggy zone, with perspectives varying among technology developers, marketing companies, health professionals, the government, courts and the public.

The heterogeneity of data sources and diversity of population groups from where data have been gathered are critical elements in the development of algorithms designed to diagnose, prognosticate and treat people. Instruments developed for selected population groups or restricted patient groups are often not generalisable to other geographic, socio-demographic, epidemiological and health system settings. Context matters in probability-based diagnostic assessments and therapeutic decision-making. As renowned data analyst Nate Silver said, “Information becomes knowledge only when placed in context. Without it, we have no way to differentiate the signal from noise, and our search for the truth might be swamped by false positives.”

The completeness of data capture also matters. In radiological imaging, every pixel may be captured, but clinical and public health datasets are often incomplete in their capture of biomarkers and of socio-demographic, environmental and behavioural determinants. It is essential that we use accurately ascertained and extensively assembled Indian datasets to guide the development of AI-guided healthcare algorithms for use in our hospitals and population settings.

Diagnostic algorithms developed on datasets of white patients in the US have performed poorly on African Americans, not only because of race but also because of socio-economic differences that influence living conditions and behaviours. An insurance company algorithm, which used predicted healthcare costs as a proxy for patients’ need for stepped-up care, flagged a group that was only 18 per cent African-American and 82 per cent white. When independent researchers revised the algorithm to predict the risk of illness instead of cost, the proportion of eligible African-American patients rose to 46 per cent.

Dataset shifts create mismatches between the dataset an algorithm was developed on and the one it is applied to. A sepsis detection algorithm developed in the US on an urban hospital dataset began producing misdiagnoses when it was applied at a later date. Diagnostic codes in the International Classification of Diseases had changed during the interval. The new dataset also included patient data from suburban hospitals which the urban hospital had acquired or become affiliated with after the algorithm was developed, resulting in a more diverse population profile.

A study conducted by the University of Michigan and published in the Journal of the American Medical Association Internal Medicine in 2021 tested the commercially marketed Epic algorithm for sepsis against historically documented case records. It reported that the algorithm missed two-thirds of the cases and caregivers would have needed to respond to 109 alarms to identify a single case of sepsis.

Health insurance systems in the US have been using AI to advise on the length of hospital stay and determine the extent of insurance coverage. A class action suit, filed in November 2023 in Minneapolis, alleges that UnitedHealthcare deployed an AI system with a 90 per cent error rate that improperly overrode physician recommendations for medically necessary post-acute care. Indian healthcare systems must avoid commercially driven distortions of AI-guided healthcare.

AI does help in improving service efficiency in hospitals. It frees doctors from the dreary burden of detailed documentation and enables them to spend more time with patients or find a better work-life balance. It can also bring efficiency gains to hospital and health service management. It can integrate information on clinical and social determinants of health to create better diagnostic and management algorithms. However, it cannot substitute for the empathy and emotional support that a compassionate doctor or nurse can provide. It may not always perform better than clinical experience and reasoned assessment of a patient’s health condition in arriving at the right diagnosis or adopting the most beneficial therapeutic approach.

As emergency medicine physician Craig Spencer wrote, “The question is how we combine AI and clinical acumen in a way that improves patient care.” For that, we need to train a new generation of doctors to optimally combine AI and human intelligence for delivering efficient and empathetic care.

K Srinath Reddy

Author of Pulse to Planet, and distinguished professor, Public Health Foundation of India

(Views are personal)
