Do not use generative AI for medical advice, such as deciding whether you should go to the emergency room for chest pain, the doctors say.
“Currently, the chatbot cannot create a risk profile on an individual patient at a particular point in time, so it’s better to avoid those types of questions,” says Andrew Taylor, MD, MHS, a Yale Medicine emergency department (ED) physician, who is also leading Yale’s 2024 AI in Medicine Symposium.
Instead, here are some tips for trying generative AI:
1. Use it to provide context or education.
For example, try the prompt: “I was told to take these medications; please explain them to me.” Or “How is [insert condition] diagnosed?”
Generative AI can also explain medical terminology you find on a lab report or imaging results, Dr. Taylor adds. “From a patient education standpoint, AI has the potential to be a great tool,” he says.
2. Know that some AI platforms are not updated in real time.
Although some AI platforms reportedly offer up-to-date information to users with premium—or paid—subscriptions, for others, the data the AI relies on to answer questions may not have been updated for a few years.
Because medical information is always changing, that lag in data may mean that the AI responses are not capturing the latest medical knowledge on conditions or treatments.
3. Consider the source.
One of the advantages of doing a standard search through Google is transparency, Dr. Wilson explains. “If I see that the top link [in the search results] is from a trusted source, such as the American Medical Association, I can be sure they vetted it and that the information will be accurate,” he says. “But if I use generative AI, it might not tell me where the information is coming from.”
4. Maintain some skepticism.
AI is known for sometimes “hallucinating,” or providing information that is not true. For example, Dr. Taylor says he asked a chatbot to create a scientific paper summarizing opioid use disorder and to provide references. “It was a nice summary with information in the body of the text that was, for the most part, correct, but the references were made up,” he says. “Although the references listed names of scientists and titles that seemed plausible and they were associated with legitimate journals, a closer inspection using search engines revealed they were fictitious.”
There are other potential source issues, too. Some users report that, at times, the information AI provides is correct, but the cited sources don’t include answers to the questions they asked. Other times, users say that AI provides source links that don’t exist or that give them a “page not found” result—all of which call into question the accuracy of the answer. “These models aren’t pulling information from one particular resource or site, and they might not necessarily be evidence-based,” Dr. Taylor says.
Some platforms now allow users to customize searches by requesting that information come only from specified sources, such as the medical literature, Dr. Wilson adds.
Dr. Wilson compares the way AI gathers information to playing a video game. “Its goal is to get the highest score it can, and the score is based on how humanlike it sounds and its readability,” he says. “When it sounds so human and confident, it can be hard to distinguish between what is accurate and what is not. But this is an active area that is being refined as greater restrictions are being imposed on AI.”
Ultimately, patients should keep in mind that just because something sounds correct does not mean it is.
“It’s fun to try generative AI, but you should always be skeptical of the source,” Dr. Wilson says. “In the end, trust your doctors, as we are the ones who have the responsibility to look out for your best interests.”
However, this is the beginning of a new technological era, and people should be aware that the technology is out there, he adds.