Artificial intelligence (AI) companies Anthropic and OpenAI are targeting health-related applications with new offerings that manage medical records and patient data. Anthropic’s “Claude for Healthcare and Life Sciences” is aimed at payers, providers, and pharmaceutical companies, while OpenAI’s “ChatGPT Health” offers tools that generate summaries and suggestions from medical histories.

Both products are marketed as “HIPAA-ready,” yet their handling of sensitive data raises significant privacy concerns. Current AI practices often sidestep traditional health privacy regulations, leaving users and healthcare providers to navigate consent and data use on their own. Even when identifiers are removed, these systems can still infer conditions from the remaining data. And as the tools integrate with global platforms, cross-border data flows add further regulatory complexity. Past incidents, including lawsuits against OpenAI over the mishandling of sensitive mental health issues, underscore the risks of relying on AI for medical guidance and advice.
