
WHO releases AI ethics and governance guidance for large multi-modal models

Press releases may be edited for formatting or style | January 19, 2024 | Artificial Intelligence
The World Health Organization (WHO) is releasing new guidance on the ethics and governance of large multi-modal models (LMMs) – a fast-growing type of generative artificial intelligence (AI) technology with applications across health care.

The guidance outlines over 40 recommendations for consideration by governments, technology companies, and health care providers to ensure the appropriate use of LMMs to promote and protect the health of populations.

LMMs can accept one or more types of data input, such as text, videos, and images, and generate diverse outputs that are not limited to the type of data inputted. LMMs are unique in their mimicry of human communication and their ability to carry out tasks they were not explicitly programmed to perform. LMMs have been adopted faster than any consumer application in history, with several platforms – such as ChatGPT, Bard and Bert – entering the public consciousness in 2023.

“Generative AI technologies have the potential to improve health care but only if those who develop, regulate, and use these technologies identify and fully account for the associated risks,” said Dr Jeremy Farrar, WHO Chief Scientist. “We need transparent information and policies to manage the design, development, and use of LMMs to achieve better health outcomes and overcome persisting health inequities.”

Potential benefits and risks
The new WHO guidance outlines five broad applications of LMMs for health:

Diagnosis and clinical care, such as responding to patients’ written queries;
Patient-guided use, such as for investigating symptoms and treatment;
Clerical and administrative tasks, such as documenting and summarizing patient visits within electronic health records;
Medical and nursing education, including providing trainees with simulated patient encounters; and
Scientific research and drug development, including identifying new compounds.
While LMMs are starting to be used for specific health-related purposes, there are also documented risks of producing false, inaccurate, biased, or incomplete statements, which could harm people who rely on such information to make health decisions. Furthermore, LMMs may be trained on data that are of poor quality or biased, whether by race, ethnicity, ancestry, sex, gender identity, or age.

The guidance also details broader risks to health systems, such as the accessibility and affordability of the best-performing LMMs. LMMs can also encourage 'automation bias' among health care professionals and patients, whereby errors that would otherwise have been identified are overlooked, or difficult choices are improperly delegated to an LMM. LMMs, like other forms of AI, are also vulnerable to cybersecurity risks that could endanger patient information or the trustworthiness of these algorithms and the provision of health care more broadly.
