WHO cautions on AI use in public healthcare due to bias and misuse concerns

WHO has emphasised the need for safe and ethical practices in the use of AI for health purposes.

The World Health Organization (WHO) has advised caution in the use of artificial intelligence (AI) large language model (LLM) tools in public healthcare, citing concerns about potential bias and the misuse of data. While acknowledging AI's potential benefits, WHO raised specific concerns about its application in improving access to health information, as a decision-support tool, and in diagnostic care.

According to WHO, the data used to train AI systems may contain biases, which can result in inaccurate or misleading information, and the models themselves could be misused to spread disinformation. The organisation stressed the importance of evaluating the risks of using LLM tools such as ChatGPT in order to protect human well-being and public health.