Dangerous chatbots: AI chatbots to be approved as medical devices?
Date:
July 3, 2023
Source:
Technische Universität Dresden
Summary:
LLM-based generative chat tools, such as ChatGPT or Google's
MedPaLM, have great medical potential, but there are inherent risks
associated with their unregulated use in healthcare. A new article
addresses one of the most pressing international issues of our
time: How to regulate Large Language Models (LLMs) in general and
specifically in health.
==========================================================================
FULL STORY
==========================================================================
"Large Language Models are neural network language models with remarkable
conversational skills. They generate human-like responses and engage
in interactive conversations. However, they often generate highly
convincing statements that are verifiably wrong or provide inappropriate
responses. Today there is no way to be certain about the quality,
evidence level, or consistency of clinical information or supporting
evidence for any response. These chatbots are unsafe tools when it
comes to medical advice and it is necessary to develop new frameworks
that ensure patient safety," said Prof. Stephen Gilbert, Professor for
Medical Device Regulatory Science at Else Kröner Fresenius Center for
Digital Health at TU Dresden.
Challenges in the regulatory approval of large language models

Most people research their symptoms online before seeking medical advice.
Search engines play a role in the decision-making process. The forthcoming
integration of LLM-chatbots into search engines may increase users'
confidence in the answers given by a chatbot that mimics conversation. It
has been demonstrated that LLMs can provide profoundly dangerous
information when prompted with medical questions.
LLMs' underlying approach has no model of medical "ground truth," which
is inherently dangerous. Chat-interfaced LLMs have already provided
harmful medical responses and have already been used unethically in
'experiments' on patients without consent. Almost every medical LLM use
case requires regulatory control in the EU and US. In the US, their lack
of explainability disqualifies them from being 'non-devices'. LLMs with
explainability, low bias, predictability, correctness, and verifiable
outputs do not currently exist, and they are not exempted from current
(or future) governance approaches.
In this paper, the authors describe the limited scenarios in which LLMs
could find application under current frameworks, how developers can seek
to create LLM-based tools that could be approved as medical devices, and
the development of new frameworks that preserve patient safety.
"Current LLM-chatbots do not meet key principles for AI in healthcare,
like bias control, explainability, systems of oversight, validation, and
transparency. To earn their place in the medical armamentarium, chatbots
must be designed for better accuracy, with safety and clinical efficacy
demonstrated and approved by regulators," concludes Prof. Gilbert.
==========================================================================
Story Source:
Materials provided by Technische Universität Dresden. Note: Content
may be edited for style and length.
==========================================================================
Journal Reference:

1. Stephen Gilbert, Hugh Harvey, Tom Melvin, Erik Vollebregt, Paul Wicks.
   Large language model AI chatbots require approval as medical devices.
   Nature Medicine, 2023; DOI: 10.1038/s41591-023-02412-6
==========================================================================
Link to news story:
https://www.sciencedaily.com/releases/2023/07/230703133029.htm