Last fall, FDA advisors discussed regulating generative AI in medicine, pointing to a report showing that a generative AI tool used by 40% of U.S. radiology practices produced errors in 1 in 21 reports. Committee chair Ami Bhatt raised concern over these findings during the meeting and stressed the need for regulation. The FDA, however, has yet to issue formal guidelines for generative AI tools in healthcare. European regulators, meanwhile, are moving faster: in April, the U.K.'s National Health Service classified popular ambient AI scribes as Class 1 medical devices, and in March, the generative AI tool "Prof. Valmed" was designated a medium-to-high-risk medical device. The contrast raises pressing questions for U.S. regulators about how medical generative AI tools should be classified and approved, and it makes clear regulatory frameworks increasingly urgent as the technology spreads through healthcare.