ChatGPT can help doctors while harming patients
Since late last year, ChatGPT has surged in popularity worldwide and has been especially sought after by investors and technology professionals in China. Abroad, however, there has been considerable sober reflection: this "revolutionary" technological advance cuts both ways, and technical progress does not necessarily lead to welcome outcomes. I will gradually translate a series of articles offering this more measured perspective. The first, below, is translated mainly from an April 2023 article in Wired:
With its ability to dispense medical information, the chatbot tempts doctors to rely on it, but researchers warn against entrusting difficult ethical decisions to artificial intelligence.

Robert Pearl is a professor at Stanford's medical school and formerly served as CEO of Kaiser Permanente, a US healthcare group with more than 12 million patients. If he were still in charge, he says, he would insist that all 24,000 of its doctors start using ChatGPT in their practice immediately.
"I think it will become more important to doctors than the stethoscope was in the past," Pearl said. "No physician who practices high-quality medicine will do so without ChatGPT or other forms of generative AI."
Pearl no longer practices medicine, but he said he knows doctors who are already using ChatGPT to summarize patient care, write letters, and even, when stumped, ask it for ideas on how to diagnose patients. He suspects doctors will discover thousands of applications of chatbots that benefit human health.

As technology like OpenAI's ChatGPT challenges Google's search dominance and sparks talk of industry transformation, language models are starting to show they can take on tasks previously reserved for white-collar workers such as programmers, lawyers, and doctors. That has started conversations among physicians about how the technology can help them serve patients. Medical professionals hope language models can extract information from digital health records or summarize lengthy, technical notes for patients, but there is also concern that the models can mislead doctors or give incorrect answers, leading to a wrong diagnosis or treatment plan.
Companies developing AI have made medical school exams a benchmark in the competition to build more capable systems. Last year Microsoft introduced BioGPT, a language model that scores highly on a range of medical tasks, and a paper by OpenAI, Massachusetts General Hospital, and AnsibleHealth claimed that ChatGPT can meet or exceed the 60 percent passing score of the US medical licensing exam. Weeks later, researchers from Google and DeepMind introduced Med-PaLM, which achieved 67 percent accuracy on the same test, although they also wrote that, while encouraging, their results remained inferior to those of clinicians. Microsoft and Epic Systems, one of the world's largest healthcare software providers, have announced plans to use OpenAI's GPT-4, which underpins ChatGPT, to search for trends in electronic health records.
Heather Mattie, a lecturer in public health at Harvard University who studies the impact of AI on healthcare, was impressed the first time she used ChatGPT. She asked it for a summary of how modeling social connections has been used to study HIV, the subject of her research. Eventually the model strayed into areas she was not familiar with, and she could no longer tell whether it was accurate. She began to wonder how ChatGPT reconciles two completely different or opposing conclusions from medical papers, and who decides whether an answer is appropriate or harmful.
Mattie now describes herself as less pessimistic than after those early experiences. She believes ChatGPT can be used for tasks such as summarizing text, provided the user knows the chatbot may not be 100 percent correct and can generate biased results. She is particularly worried about how ChatGPT handles cardiovascular disease diagnostic tools and intensive care injury scoring, which have track records of race and gender bias. But she remains cautious about using ChatGPT in clinical settings, because it sometimes fabricates facts and it is unclear how current the information it draws on is.
"Medical knowledge and practices change and evolve over time, and there's no telling where in the timeline of medicine ChatGPT pulls its information from when citing a typical treatment," she said. "Is that information recent, or is it dated?"
Users also need to beware that ChatGPT-style chatbots can present fabricated, or "hallucinated," information in a seemingly fluent way, which can lead to serious errors if a person doesn't fact-check the algorithm's answers. And AI-generated text can influence humans in subtle ways. A study published in January, which has not been peer-reviewed, posed ethical dilemmas to ChatGPT and concluded that the chatbot makes for an inconsistent moral adviser that can nonetheless influence human decision-making, even when people know the advice comes from AI software.
Being a doctor is about far more than regurgitating encyclopedic medical knowledge. While many physicians are enthusiastic about using ChatGPT for low-risk tasks like text summarization, some bioethicists worry that doctors will turn to the chatbot for advice when they face a tough ethical decision, such as whether to operate on a patient with a low likelihood of survival or recovery.
"You can't outsource or automate that kind of process to a generative AI model," said Jamie Webb, a bioethicist at the Centre for Technomoral Futures at the University of Edinburgh. Last year, inspired by earlier research, Webb and his team explored what it would take to build an AI-powered "moral adviser" for use in medicine. Webb and his coauthors concluded that it would be difficult for such a system to reliably balance different ethical principles, and that doctors and other staff might suffer "moral de-skilling" if they grew overly reliant on a bot instead of thinking through complex decisions themselves.
Webb points out that doctors have been told before that AI for language processing would revolutionize their work, only to be disappointed. After IBM's Watson division won at Jeopardy! in 2011, it turned to oncology, making claims about AI's effectiveness in fighting cancer. But that solution, initially dubbed "Memorial Sloan Kettering in a box," wasn't as successful in clinical settings as the hype suggested, and in 2020 IBM shut the project down.
When hype falls flat, it can have lasting consequences. During a discussion at Harvard on AI's potential in medicine, primary care physician Trishan Panch recalled seeing a colleague post on Twitter, shortly after the chatbot's release, the results of asking ChatGPT to diagnose an illness.

Excited clinicians quickly responded with pledges to use the technology in their own practices, Panch recalled, but by around the 20th reply another doctor chimed in and said that every reference generated by the model was fake. "It only takes one or two things like that to erode trust in the whole system," said Panch, who is cofounder of the medical software startup Wellframe.
Despite AI's sometimes glaring mistakes, Robert Pearl, formerly of Kaiser Permanente, remains very optimistic about language models like ChatGPT. He believes that in the years ahead, language models in healthcare will become more like the iPhone, packed with features and capabilities that can augment doctors and help patients manage chronic disease. He even suspects that language models like ChatGPT can help reduce the more than 250,000 deaths that occur annually in the United States as a result of medical errors.
Pearl does consider some things off-limits for AI. Helping people cope with grief and loss, having end-of-life conversations with families, and discussing procedures involving a high risk of complications should not involve a bot, he said, because every patient's needs are so variable that you have to have those conversations in person to get there.
"Those are human-to-human conversations," Pearl said, predicting that what is available today represents only a small fraction of the technology's potential. "If I'm wrong, it's because I'm overestimating the pace of improvement in the technology. But every time I look, it's moving faster than even I thought."
For now, he likens ChatGPT to a medical student: able to provide care to patients and pitch in, but everything it does must be reviewed by an attending physician.