OpenAI Would Like You to Share Your Health Data with Its AI Chatbot

Users will be able to upload their health data to ChatGPT in order to get what OpenAI has described as a more personalized experience


OpenAI wants your health data. On Wednesday, OpenAI, the company behind the wildly popular artificial intelligence chatbot ChatGPT, announced that some users will be able to feed their health information into the bot, from medical records to test results to health and wellness app data. In return, OpenAI says, users can expect ChatGPT to give them more personalized meal planning, nutrition advice and lab test insights.

In a blog post explaining ChatGPT Health on Wednesday, OpenAI said that more than 230 million people a week ask the company’s AI chatbot health-related questions.

The new feature was designed in collaboration with physicians and is meant to help people “take a more active role in understanding and managing their health and wellness” while “supporting, not replacing, care from clinicians,” according to the company.



But as Scientific American and many other outlets have previously reported, some health experts have urged caution about using ChatGPT for health care, especially mental health. The company has faced legal scrutiny in recent years after several people, including at least two teenagers, died by suicide after interacting with ChatGPT. OpenAI did not immediately respond to a request for comment.

Other experts are more positive. Peter D. Chang, an associate professor of radiological sciences and computer science at the University of California, Irvine, says the tool represents a “step in the right direction” toward more personalized medical care. But he also cautions that users should take any AI-generated medical advice with a grain of salt. “Maybe don’t do exactly what it says but use it as a starting point to learn more.”

“Absolutely there’s nothing preventing the model from going off the rails to give you a nonsensical result,” Chang says.

IF YOU NEED HELP

If you or someone you know is struggling or having thoughts of suicide, help is available. Call or text the 988 Suicide & Crisis Lifeline at 988 or use the online Lifeline Chat.


