China: COVID-19 and AI healthcare response


Philippa Jones
Contributor

COVID-19 has proven an ideal testbed for an industry still in its infancy. Diagnosing the virus requires lung CT scans to identify ground glass opacity (GGO), a key characteristic. During the crisis in China, overwhelmed doctors turned to AI-assisted diagnostic software for initial assessment of scans.

AI is also used to screen existing medicines for a possible cure or treatment while vaccine research is ongoing. And the fast sequencing of SARS-CoV-2, the virus behind COVID-19, was aided by AI.

For China’s regulators, AI’s medical applications are a conundrum. The former SFDA (State Food and Drug Administration) moved to accept AI-based diagnostics only in 2017, designating them Class-III medical devices (under maximum regulation).

SFDA’s successor, NMPA (National Medical Products Administration), recently released preliminary review guidelines, approving the very first such application on 15 Jan 2020.

The COVID crisis has accelerated AI healthcare in China

The industry is not waiting to be regulated. According to MIIT (Ministry of Industry and Information Technology), more than 20 AI-assisted diagnostic tools have been used to analyse lung CTs of suspected COVID-19 patients.

To help the global fight against the virus, Huawei and Alibaba have made their cloud-based AI diagnostic tools available free of charge; they are now in use in Ecuador and the Philippines.

Incorporating MRI and ultrasound data, AI is bound to be embedded in future surgical robots. It is much faster, more accurate, and better at quantifying the progression of abnormalities than human doctors.

AI healthcare firms are also keen to explore multi-track investigation, developing algorithms to identify lung nodules, a key indicator of lung cancer.

Ethical concerns

Major issues in China’s healthcare system – lack of qualified doctors and frequent misdiagnoses – will, it is hoped, be resolved by AI.

But ethical concerns are stubborn. Until it attains 100 per cent accuracy, there are risks in allowing AI-assisted diagnostic software to make final diagnoses. Better to use it as a safety valve and checking mechanism.

At Zhejiang University’s No.1 Hospital, AI-assisted diagnosis is limited to training student doctors and research, rather than frontline diagnosis.

AI algorithms too are notorious ‘black boxes.’ More than 50 per cent of surveyed doctors mistrust them, states a March 2019 white paper by the Chinese Innovative Alliance of Industry, Education, Research and Application of AI for Medical Imaging; some 60 per cent say they do not understand how they work. Doctors in turn often feel they cannot explain them to patients.

This weakens doctor–patient trust, already under stress (as evidenced by a stream of violent attacks in hospitals in recent years). It is feared doctors will fail to detect that an algorithm is compromised, or erroneous in its findings.

Mining for data

AI feeds on data: before software can read and assess new cases, it must be trained on existing data. But technological, institutional and ethical constraints mean many firms struggle to gather sufficient material.

Fierce competition with other hospitals and patient privacy concerns leave hospitals reluctant to share data. Not only are digital patient histories still far from mature; the lack of data standardisation across hospitals and regions further frustrates firms’ efforts.

An NMPA standardisation project launched in October 2019 may solve some of these problems.

Data protection has implications for national security. The 2016 Cybersecurity Law and subsequent National Health Commission regulations have urged data protection; foreign entities are prohibited from collecting and storing biological data.

The upcoming Biosecurity Law is expected to set out further protection measures. Since 2017, central SOEs have been building infrastructure for healthcare big data, and the landscape of data collection could be completely reshaped.

In search of a business model

As in other emerging industries, low profits plague AI healthcare. For example, Demetics Medical, service provider to No.1 Hospital affiliated to Zhejiang University, saw a loss of C¥14.33 million in 2018, almost six times its revenue.

Losses on this scale indicate the cost of attracting new customers. Free services are commonly brought into play to coax more hospitals on board.

For those exploring practical business models, selling software fails to make ends meet given the huge investment required in R&D. Firms hence usually choose to bundle software with hardware.
This limits the market to top hospitals, as those at the grassroots cannot afford hardware. Other firms are looking to sell products to insurance firms and drug companies.

A new toy in the playground

MIIT applauded AI’s role in fighting COVID-19 on 4 February and encouraged the tech firms behind it to continue their contribution to restarting the economy. COVID-19 familiarised many people with AI healthcare. This is only a first step.

So far, the vast majority of AI-assisted diagnostic tools lack approval, and hospitals err on the side of caution. Intended functions of AI-assisted diagnostic tools often clash with ethics and profitability. It will take more time for AI to establish itself in the healthcare system.

What the experts are saying

Shi Lei 石磊 | Yitu Tech deputy CEO

A former doctor, Shi works in the firm that developed the first AI-equipped system for detecting COVID-19. Launched at Shanghai’s COVID-19-designated Public Health Clinical Centre in February, it has now been deployed in over 100 hospitals in over 20 cities. AI can, proclaims Shi, efficiently analyse lung CTs to help doctors diagnose COVID-19, but he denies it could ever replace doctors.

Qian Dahong 钱大宏 | Shanghai Jiaotong University School of Biomedical Engineering professor

Qian works in the medical imaging and information discipline at Shanghai Jiaotong University, one of the institutions tasked by NMPA with drawing up standards for AI healthcare. He heads the Human–Computer Interaction Centre, part of Jiaotong’s Research Institute on Medical Robots. As data standards differ from one hospital to another, argues Qian, AI healthcare tools must undergo clinical trials to show adaptability. Demand from firms, he says, will speed up standardisation.

Philippa Jones is managing director at Beijing-based economic research group China Policy. She is a trade policy specialist at the Australian Embassy in Beijing and is a China-EU trade expert. For details of China Policy reporting contact client.services@policycn.com.
