The World Medical Innovation Forum from Partners HealthCare is coming to Boston this April, and as we announced earlier, the theme for this year’s annual event is AI in healthcare.
MassTLC members receive a 20% discount to attend the World Medical Innovation Forum. Follow this link to register at the discounted rate.
Like any other enabling technology, AI creates new questions for society. These questions are particularly acute in healthcare, given the vulnerability of patients and the need to protect privacy. How society responds to them, and how that response is manifested in the actions of legislators and regulators, will make a significant difference to how quickly and how well we are able to benefit from AI innovations in healthcare.
Here are four questions for us to consider as we approach the 2018 Forum.
In what way will AI change what humans do in healthcare?
The pace of development varies dramatically, both for the underlying enabling AI and for the applications across the landscape above. An interesting observation, made by Hans Moravec and others, is that lay intuition about what is easy or hard for machine intelligence is often backwards: tasks humans find difficult, such as calculation, come easily to machines, while tasks humans find effortless, such as perception and movement, remain hard.
When does it matter if intelligence is artificial?
The level of realism that AI agents can achieve in remote interactions is rapidly approaching the point where they are hard to distinguish from a human. This has far-reaching second-order implications in healthcare, for example for the patient-clinician relationship and for payment models.
The well-known Turing test assesses a machine's ability to hold a natural language conversation in a way that is indistinguishable from a human. In 2014, a software program convinced 10 of 30 judges, over five-minute text conversations, that it was a teenage boy, and there have been other successes as well.
Bots are not only getting better at parsing questions and finding the right answers; they are also getting better at exhibiting human-like qualities such as typos, delays, and idiosyncrasies.
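To make the point concrete, here is a minimal sketch of how a chatbot might fake those human-like qualities. It is purely illustrative and not drawn from any real bot framework; the function name, keyboard map, and parameters are all hypothetical.

```python
import random

# Hypothetical illustration: simulate human-like typing by injecting
# occasional adjacent-key typos (followed by a correction) and variable
# per-keystroke delays. Nothing here comes from a real product.

KEYBOARD_NEIGHBORS = {
    "a": "qs", "e": "wr", "i": "uo", "o": "ip", "s": "ad", "t": "ry",
}

def humanize_reply(text, typo_rate=0.05, base_delay=0.12, rng=None):
    """Return a list of (action, character, delay_seconds) events that a
    chat client could replay to look like a human typing the reply."""
    rng = rng or random.Random()
    events = []
    for ch in text:
        delay = base_delay * rng.uniform(0.6, 1.8)  # uneven keystroke timing
        low = ch.lower()
        if low in KEYBOARD_NEIGHBORS and rng.random() < typo_rate:
            wrong = rng.choice(KEYBOARD_NEIGHBORS[low])
            events.append(("type", wrong, delay))            # slip of the finger
            events.append(("backspace", wrong, base_delay))  # notice and correct
        events.append(("type", ch, delay))
    return events
```

A client replaying these events, with real pauses between them, produces a transcript whose rhythm and errors look human rather than machine-generated.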
Do we need to know why?
The most successful ML approaches, deep networks, can behave like black boxes: while we have a statistical sense of how inputs relate to outputs, it is often not possible to determine precisely why a particular input produces a particular output.
While this matters in other industries as well, it is particularly important in healthcare. If, for instance, a patient dies after following AI guidance, not knowing why a particular pathway was recommended will confound the ethical, legal, and regulatory questions that follow.
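The black-box point can be seen even in a toy model. The sketch below is not a clinical system; the features, weights, and "risk score" are all made up. It shows why a deep network's answer resists a short explanation: every input influences the output through all hidden units at once.

```python
import numpy as np

# Toy illustration of the black-box problem: a tiny two-layer network with
# arbitrary fixed weights. All names and numbers here are hypothetical.

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # 4 input features -> 8 hidden units
W2 = rng.normal(size=(8, 1))   # 8 hidden units   -> 1 "risk score"

def risk_score(x):
    """Compose two layers with a nonlinearity. The result is one number,
    but no individual weight corresponds to a human-readable rule."""
    hidden = np.tanh(x @ W1)
    return (hidden @ W2).item()

patient_a = np.array([0.2, 1.0, -0.3, 0.5])
patient_b = patient_a.copy()
patient_b[2] += 0.1  # a small change in one feature...

# ...shifts the score through all 8 hidden units simultaneously, so
# "why did the score change?" has no concise answer in terms of any
# single input or weight.
```

Real diagnostic networks have millions of weights rather than forty, which makes the attribution problem sketched here correspondingly harder.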
What does AI mean for privacy and legal responsibility?
Information collected by, and inferences made by, an AI embedded in a provider system will be covered by the usual patient data protections. However, similar systems may well begin to exist outside the traditional healthcare system. If a non-clinical AI can infer an individual's health status with a high degree of accuracy from non-protected information, that has significant implications for privacy.