ChatGPT’s Use In Medicine Raises Questions Of Security, Privacy, Bias


Generative AI, the prompt-based artificial intelligence that can generate text, images, music and other media in seconds, continues to advance at breakneck speeds.

Less than six months ago, OpenAI launched its generative AI tool, ChatGPT. A mere four months later, the company released GPT-4, a massive leap forward in processing speed, power and ability.

Every industry has taken notice, healthcare in particular. Observers were mildly impressed when the original version of ChatGPT passed the U.S. medical licensing exam, though just barely. A couple of months later, Google’s Med-PaLM 2 aced the same test, scoring in the “expert” category.

Despite mounting evidence that generative AI is poised to revolutionize medical care, patients hesitate to embrace it. According to a recent Pew Research poll, 6 in 10 American adults say they’d feel “uncomfortable” with their doctor relying on artificial intelligence to diagnose disease and provide treatment recommendations.

Today’s iterations of generative AI aren’t ready for broad use in healthcare. They occasionally fail at basic math, make up sources and “hallucinate,” providing confident yet factually incorrect responses. The world is watching closely to see how quickly OpenAI, Google and Microsoft can correct these errors.

But those fixes alone won’t address the two biggest concerns patients reported in the Pew survey:

  1. Technological risks, including security, privacy and algorithmic bias.
  2. Ethical concerns about the interplay between machines and humans.

This article examines the first set of fears. The next, on May 8, will cover the ethical ones, including AI’s impact on the doctor-patient relationship.

Are patient fears valid?

Americans have long held suspicions about new technologies. Recall how bank customers in the 1970s resisted using ATMs, fearing the machines would eat their cards and mishandle their money. Indeed, cashpoint errors were common at first. But when banks made tweaks and the roots of people’s tech-driven fears stopped materializing, the fears themselves faded.