Artificial Intelligence (AI) holds “enormous potential” for improving the health of millions around the world if ethics and human rights are at the heart of its design, deployment, and use, the head of the UN health agency has said.
“Like all new technology, artificial intelligence…can also be misused and cause harm”, warned Tedros Adhanom Ghebreyesus, Director-General of the World Health Organization (WHO).
To help regulate and govern AI, WHO has published new guidance setting out six principles to limit the risks and maximize the opportunities that AI offers for health.
WHO’s Ethics and Governance of Artificial Intelligence for Health report points out that AI can be, and in some wealthy countries already is being, used to improve the speed and accuracy of diagnosis and screening for diseases; assist with clinical care; strengthen health research and drug development; and support diverse public health interventions, including outbreak response and health systems management.
AI could also empower patients to take greater control of their own health care and enable resource-poor countries to bridge health service access gaps.
However, the report cautions against overestimating its benefits for health, especially at the expense of core investments and strategies required to achieve universal health coverage.
WHO’s new report points out that opportunities and risks are linked and cautions about the unethical collection and use of health data, biases encoded in algorithms, and risks to patient safety, cybersecurity, and the environment.
Moreover, it warns that systems trained primarily on data collected from individuals in high-income countries may not perform well for individuals in low- and middle-income settings.
Against this backdrop, WHO upholds that AI systems must be carefully designed to reflect the diversity of socio-economic and health-care settings and be accompanied by digital skills training and community engagement.
This is especially important for healthcare workers, who will need digital literacy training or retraining to contend with machines that could challenge the decision-making and autonomy of both providers and patients.
Because people must remain in control of healthcare systems and medical decisions, the first guiding principle is to protect human autonomy.
Secondly, AI designers should safeguard privacy and confidentiality, and patients should give valid informed consent through appropriate legal frameworks.
To promote human well-being and the public interest, the third principle calls for AI designers to ensure that regulatory requirements for safety, accuracy, and efficacy are met, including measures of quality control.