Artificial intelligence (AI) has brought a paradigm shift to healthcare, thanks to the increasing availability of healthcare data and rapid progress in analytics. Whether it is providing early warning of coronavirus outbreaks, diagnosing breast cancer, performing robotic surgery, or repurposing drugs, the healthcare ecosystem is experiencing an AI revolution. With AI surpassing humans in several areas of detection, diagnosis, prediction and even prognosis evaluation, health insurance companies may soon offer their clients the option of being treated by either a human physician or an AI.
This does not mean less work for human physicians. Given the shortage of medical personnel relative to current global demand, there should always be work for both human physicians and AI. AI physicians will, however, help to reduce the much-protested workload of human physicians, relieve them of the more mundane aspects of their work, and offer diagnoses based on analysis of big data. Just as automobiles have not rendered horses useless, nor digital music limited human artistry, the value of human doctors will remain beyond the reach of AI technology.
Health is a critical aspect of one’s life. I am inclined to consider where (and from whom) I can get the quickest and most reliable health service. Since AI can learn from previous situations to inform and automate complex future decision-making, it may arrive at concrete conclusions more easily and quickly, drawing on data, past experience and family history. This matters when a person has a medical emergency such as an accident or a heart attack. If Watson can compress into three seconds a process that normally takes an ordinary physician several weeks, this time-saving ability could support early diagnosis and treatment intervention to save more lives (and eliminate the costs associated with such delay and with post-treatment complications). An AI physician, and indeed a human physician, may be able to “tap the brains” of thousands of human doctors all at once.
Medical AI can perform with expert-level accuracy by sifting through (and prioritising) the unprecedented amount of medical data available today, combined with advances in natural language processing and social-awareness algorithms. An AI physician could even customise treatment choices better, identifying optimal treatments for specific health needs and formulating a personalised approach to care. Machine learning (ML) can determine whether I am at risk of certain diseases faster than a human would, and long before the risk becomes critical. I believe that an AI physician is bound to provide better and faster health services than a human would. With an AI physician, patients may not have to deal with the shame and judgement they could feel with a human when talking about their sexual health. They may also be spared the “irrational exuberance” or stereotyping that some human doctors exhibit.
However, I know that AI systems will be limited by several factors, such as implicit bias, malfunction, privacy breaches and a lack of creative “common sense”. Slight changes in input signals can wreck ML models, perhaps because one of the challenges facing AI is its inability to solve the “common sense” problem, that is, to replicate situational awareness. The ability of an AI physician to take appropriate action based on situational context, and to decide without having to train on vast data pools, is perhaps not yet possible. But human doctors have similar limitations too: they have their biases and are prone to error, particularly when records are inaccurate or when they are overworked. They are also subject to fatigue, ill health and phobias. With an AI doctor, we may see a reorganisation of medical bureaucracies. For example, a patient should be able to see a doctor within a shorter waiting time, and perhaps from any location. AI will not limit people by space and time, as they could “log in” from their homes (and avoid the infection risks of entering hospitals, too).
One concern I have with AI in health diagnostics is accountability. If I am ill-advised or treated negligently, it would be difficult to determine who takes responsibility. Would I sue the AI manufacturer, the hospital, or my insurance company? Will the law apply strict liability, vicarious liability or product liability to the use of AI physicians? Will I have to sign a liability waiver even though I don’t understand how the AI physician arrives at a decision or instruction? (The ways in which ML algorithms make decisions often remain opaque to us, which raises questions about the acceptability of delegating responsibility to them.) In medicine, justification and trust are deeply linked, so this gap will be problematic not only for determining accountability but also for situating causation and remedy in law. Depending on my choice, I may also expose myself to discrimination (inequalities in health outcomes may grow between those who use human physicians and those who use AI). This may also put unnecessary pressure on human doctors, as patients will begin to compare (and rate) human services against those of AI systems. This may foster competition or trigger tensions (much like the drivers’ protests, or congressional debates on labour rights in the current gig economy). And unlike a human physician, my AI doctor is susceptible to hacking, the result of which could be terminal.
Another concern is that AI may not be able to provide the human-to-human connection that a human physician would. Caregiving is not defined only by who saves lives better or best palliates suffering; delivery also matters. Linking technical competence to caregiving, compassion and consolation is a central task of good medicine, so values of love, empathy and human kindness are invaluable to healthcare delivery. Since these values are not obtained only from physicians, however, they are not necessarily something that must be sought from AI. The majority of the caregiving that goes on in this world is administered not by doctors or nurses but by families and communities (most of it unpaid and uninsured).
As we navigate our way through this Fourth Industrial Revolution, our health will not be guaranteed by how intelligently we choose between a human physician and AI, but by our access to quality, comprehensive social medicine, which includes mental, physical and emotional health provision. To secure that access, we need both technological and human services alike. The extraordinary promise of AI medicine should not only be about options; it should reach the half of the world’s population who lack access to healthcare, whether through human physicians, AI, or a combination of both. I hope that this post will start a conversation among scholars interested in contributing to policies that improve access to healthcare for those who most need it. If you would like to collaborate, please get in touch.