
How to Regulate Generative AI in Health Care


To safely realize the clinical benefits of LLMs, we need a regulatory approach as innovative as the technology itself.

September 04, 2024

Andrew Brookes/Getty Images



A challenge confronting the Food and Drug Administration, and other regulators around the world, is how to regulate generative AI. The approach used for new drugs and devices isn't appropriate. Instead, the FDA should conceive of large language models (LLMs) as novel forms of intelligence and regulate them with approaches similar to those it applies to clinicians.

Generative AI has arrived in medicine. Normally, when a new device or drug enters the U.S. market, the Food and Drug Administration (FDA) reviews it for safety and efficacy before it becomes widely available. This process not only protects the public from unsafe and ineffective tests and treatments but also helps health professionals decide whether and how to apply it in their practices. Unfortunately, the usual approach to protecting the public and helping doctors and hospitals manage new health care technologies won’t work for generative AI. To realize the full clinical benefits of this technology while minimizing its risks, we will need a regulatory approach as innovative as generative AI itself.


  • David Blumenthal, MD, is a professor of practice of public health and health policy at the Harvard T.H. Chan School of Public Health. He is the former president of the Commonwealth Fund and served as the National Coordinator for Health IT in the Obama Administration.


  • Bakul Patel is senior director, global digital health strategy and regulatory, at Google. He previously served as the U.S. Food and Drug Administration’s chief digital health officer and was the founding director of its Digital Health Center of Excellence.
