Beware of biased, misleading information while using AI in healthcare: WHO

The WHO said it was enthusiastic about AI's potential but had concerns over how it will be used. Photo: Reuters.

Calling for caution in using artificial intelligence for public healthcare, the World Health Organization said the data used by AI to reach decisions could be biased or misused.

The WHO said it was enthusiastic about the potential of AI but had concerns over how it will be used to improve access to health information, serve as a decision-support tool and enhance diagnostic care.

The WHO said in a statement that the data used to train AI may be biased, generating misleading or inaccurate information, and that the models can be misused to produce disinformation.

It was 'imperative' to assess the risks of using large language model (LLM) tools, like ChatGPT, to protect and promote human well-being and safeguard public health, the U.N. health body said.

Its cautionary note comes as artificial intelligence applications rapidly gain popularity, spotlighting a technology that could upend the way businesses and society operate.
