12 October 2018
Health providers and consumers need to consider questions of privacy and confidentiality before using artificial
intelligence to assist in healthcare, an Otago law expert says.
Associate professor of law Colin Gavaghan is part of a three-year multi-disciplinary project funded by the Law
Foundation looking at applications of AI in New Zealand that raise legal, ethical and social questions.
AI has big implications for medicine, as tasks that are currently done by a human doctor or nurse could be done by a
machine.
For example, Babylon Health recently unveiled AI technology that it claims can score as well as human doctors on medical examinations and make accurate diagnoses.
Health services such as this can be delivered by companies outside New Zealand, which means health data would be sent
offshore.
Gavaghan says foreign health providers are not subject to New Zealand law, and confidentiality and privacy issues arise when there is no guarantee about what will happen to information once it leaves the country.
Health providers also need to think through what is lost when jobs are taken over by machines. Patients may visit their GP about one complaint but in fact want to discuss something else in person, something that may never be revealed to an AI system.
“These things will only come out in context when trust builds up, and I’m interested in to what extent that will be lost as we rely more and more on automated services,” he says.
However, there is also evidence that patients are more forthright when discussing some health issues with a machine
rather than a human and that certain jobs viewed as dehumanising could be done by robots.
Gavaghan says there is a lack of understanding about who is doing what in the AI space and there is a need to take stock
and potentially create a standards organisation.
A central body with oversight of technological developments in the AI space could be useful as a way of vetting overseas
companies or products for accuracy, transparency, privacy and confidentiality concerns, he says.
Angela Ballantyne, an associate professor in the University of Otago’s department of primary health care and general practice, says health is a highly regulated sector and this may present a barrier to the introduction of AI.
“The risk of underutilising AI is that we continue to rely on less effective processes for diagnosing disease and
allocating resources, and we therefore have avoidable levels of error in the system. This translates to greater patient
suffering,” she says.
“While there is lots of hype about AI being transformational in healthcare, the reality at the moment is that
diagnostics is where we are seeing the most practical applications of AI.”
Challenges yet to be overcome include the risk of AI perpetuating existing inequalities, as the datasets used to train
AI track human behaviour and so reflect the biases and inequalities already in the health system.
Also, AI relies on access to huge datasets, which raises questions around patient consent and trust in the companies
developing AI as well as the agencies controlling access to the health data, Ballantyne says.