Artificial Intelligence (AI) promises to uplift our ability to profile, predict, promote and protect human health in many exciting ways. But eagerness in the health system to ardently embrace AI should not blind us to potential pitfalls, lest we lament, as Othello did of Desdemona, that we loved "not wisely, but too well".
Human health is configured by intricate interactions between several complex systems—biological, physical and social environments being the foremost. Alongside is the layered labyrinth of the health system that serves our health needs. Each of these has a universe of complex subsystems. Doctors base diagnostic and therapeutic decisions on only a few variables identified from this multiverse. While many decisions appear to be informed by sound scientific evidence, diagnostic dark alleys, therapeutic trapdoors and prognostic potholes waylay us when only limited knowledge is utilised.
Why do some patients not benefit from a drug shown in a large group trial to be highly beneficial? Why do some react adversely to certain food items that others can devour with impunity? What predicts a robust recovery in a seriously ill patient while another with a similar clinical profile slides downhill? Should medicine become highly ‘personalised’ as genetic profilers espouse? Should such genetic mapping be limited to the human genome or extend to the trillions of microbes that cohabit and co-regulate our bodies? What about the many external influences that alter gene expression? Will holistic healing, the philosophical underpinning of several traditional systems of medicine, come to life in the rigidly reductionist mould of modern medicine through AI that captures and collates multiple data sets, from cellular biology to social circumstances?
Can the varying standards of clinical care, resulting from differing knowledge levels of physicians, be overcome by AI-guided therapy and monitoring? Can the much faster analytic speed of AI eliminate delays in diagnosis and treatment? Can medical errors be substantially reduced to enhance patient safety?
The response to these questions is cautiously affirmative, as evidence accumulates of AI’s broad spectrum of applications for healthcare delivery, patient engagement and behavioural modification, population health, research and development, and health administration (Future Agenda, Accenture, 2018). From detailed patient profiling to diagnostic and management algorithms, individual healthcare can be improved in timeliness and quality. At the population level, infectious disease outbreaks (such as dengue) can be more accurately predicted, tracked and quelled. For non-communicable diseases (such as diabetes), clusters of risk factors that pose heightened threat to different population groups can be better identified for tailored policy and programmatic response. Doctors can provide more humane care, with empathetic communication, when AI frees them from the drudgery of collating and analysing data (Eric Topol, Deep Medicine, 2019).
AI is exciting and alarming. It has been defined as 'software writing software', with increasing levels of autonomy accompanying the computer's rapidly rising appetite for data acquisition and accelerating speed of data analysis. 'Big data', with huge volumes of multiple data sets on a vast array of variables, can be assimilated and analysed with amazing speed. As 'machine learning', extensively used by AI to write algorithms, moves to 'deep learning', 'self-learning' and 'reinforcement learning', AI's rapidly growing power has triggered debates on the promise versus the perils of its dominance over human deliberation and discretion.
A major concern is over privacy. Who will be the custodians of the extensive and incisive personal data? Another concern is about the generalisability of the developed algorithms to specific individuals and populations. Even when vast data are analysed, there may be missing elements of an individual patient’s profile that can be gleaned from elicitation of detailed medical history and careful clinical examination.
AI-driven algorithms are presently developed from extensive but selective data sets from Western populations. Predictive algorithms depend on Bayesian principles, wherein the pre-test probability and the likelihood ratio of the observed test result yield an estimate of the post-test probability. While the likelihood ratio is usually stable across populations, pre-test probability differs widely among different population groups. Predictive accuracy of Western algorithms may not apply to Indian population groups. We need to develop algorithms based on large data sets from our own population.
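The Bayesian arithmetic behind this concern can be sketched briefly. In the odds form of Bayes' theorem, post-test odds equal pre-test odds multiplied by the likelihood ratio; the prevalence figures and likelihood ratio below are illustrative assumptions, not data from any study cited here:

```python
def post_test_probability(pre_test_prob: float, likelihood_ratio: float) -> float:
    """Post-test probability via the odds form of Bayes' theorem:
    post-test odds = pre-test odds x likelihood ratio."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# The same test (illustrative positive likelihood ratio of 10)
# applied in populations with different disease prevalence:
low_prev = post_test_probability(0.02, 10)   # prevalence 2%  -> about 0.17
high_prev = post_test_probability(0.20, 10)  # prevalence 20% -> about 0.71
```

The same positive result carries very different weight in the two populations, which is why an algorithm calibrated on one population's pre-test probabilities may mislead in another.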
It is difficult to predict whether AI will lead healthcare to a utopian or dystopian future. As the French poet Valéry observed, "we enter the future backwards". We need to carefully control the course of AI in health to maximise benefits and minimise risks. While scientific advances will help resolve many of the technical issues, concerns of data misuse and unbridled technological tyranny subverting a humane profession will remain. The responsibility for defining boundaries cannot be left only to scientists who revel in their creativity, health professionals who develop dependence on new technologies and business investors who push for greater profits. It needs a broader societal engagement, with additional involvement of community representatives, patient groups and ethicists.
To those who doubt the ability of laypersons to comprehend AI’s intricacies and contribute to collective societal control, the response comes from Thomas Jefferson’s wise words: “I know no safe depository of the ultimate powers of society but the people themselves, and if we think them not enlightened enough to exercise that control with a wholesome discretion, the remedy is not to take it from them but to inform their discretion.”
Dr K. Srinath Reddy
President, Public Health Foundation of India, and author of Make Health in India: Reaching a Billion Plus. Views expressed are personal