ChatGPT, an artificial intelligence (AI) chatbot, may be helpful for patients with cirrhosis or hepatocellular carcinoma (HCC) and their clinicians by generating easy-to-understand information about the disease, a new study suggests.
ChatGPT can generate correct and reproducible responses to commonly asked patient questions on cirrhosis and HCC; however, the majority of the correct responses were labeled by clinician specialists as “correct but inadequate,” according to the study findings.
The AI tool can also provide empathetic and practical advice to patients and caregivers but falls short in its ability to provide tailored recommendations, the researchers note.
“Patients with cirrhosis and/or liver cancer and their caregivers often have unmet needs and insufficient knowledge about managing and preventing complications of their disease. We found ChatGPT — while it has limitations — can help empower patients and improve health literacy for different populations,” study investigator Brennan Spiegel, MD, with Cedars-Sinai, Los Angeles, California, said in a news release.
The study was published online in Clinical and Molecular Hepatology.
Adjunctive Health Literacy Tool
ChatGPT (Chat Generative Pre-Trained Transformer), developed by OpenAI, is a natural language processing tool that allows users to have personalized conversations with an AI bot capable of providing detailed responses to any question posed.
It has already seen several potential applications in medicine, but the Cedars-Sinai study is one of the first to examine the chatbot’s ability to correctly answer clinically oriented, disease-specific questions and to compare its performance with that of physicians.
The investigators asked ChatGPT 164 questions relevant to patients with cirrhosis and/or HCC across five categories — basic knowledge, diagnosis, treatment, lifestyle, and preventive medicine. The chatbot’s answers were graded independently by two liver transplant specialists.
Overall, ChatGPT answered about 77% of the questions correctly, demonstrating a high level of accuracy in 91 questions across the categories, the researchers report.
ChatGPT demonstrated extensive knowledge of cirrhosis (79% of responses correct) and HCC (74% correct), but only a small proportion of responses were deemed by the specialists to be comprehensive (47% for cirrhosis, 41% for HCC).
The chatbot performed better in basic knowledge, lifestyle, and treatment than in the domains of diagnosis and preventive medicine.
The specialists grading ChatGPT judged 75% of its answers to questions on basic knowledge, treatment, and lifestyle to be either comprehensive or correct but inadequate. The corresponding percentages for diagnosis and preventive medicine were lower (67% and 50%, respectively). No responses from ChatGPT were graded as completely incorrect.
Responses deemed by the specialists to be “mixed with correct and incorrect/outdated data” were 22% for basic knowledge, 33% for diagnosis, 25% for treatment, 18% for lifestyle, and 50% for preventive medicine.
No Substitute for Specialists
The investigators also tested ChatGPT on cirrhosis quality measures recommended by the American Association for the Study of Liver Diseases and contained in two published questionnaires. ChatGPT answered 77% of the relevant questions correctly but failed to specify decision-making cutoffs and treatment durations.
ChatGPT also lacked knowledge of variations in regional guidelines, such as HCC screening criteria, but it did offer “practical and multifaceted” advice to patients and caregivers about next steps and adjusting to a new diagnosis.
“We believe ChatGPT to be a very useful adjunctive tool for physicians — not a replacement — but an adjunctive tool that provides access to reliable and accurate health information that is easy for many to understand,” Spiegel said in the news release. “We hope that this can help physicians to empower patients and improve health literacy for patients facing challenging conditions such as cirrhosis and liver cancer.”
ChatGPT could enhance clinician workflow by helping to draft a framework for each tailored question asked by patients and caregivers, the researchers write.
“Given the high proportion of either comprehensive or correct but inadequate responses and expected continued improvement over time, we foresee that physicians would only need to revise ChatGPT’s responses to best answer patient queries,” they write. “This may not only improve the efficiency of physicians but also decrease the overall cost and burden on the healthcare system.”
Additionally, ChatGPT could empower patients to be better informed about their care, the researchers note.
“This allows for patient-led care and facilitates efficient shared decision-making by providing patients with an additional source of information,” they add.
The study had no specific funding. The authors report no relevant financial relationships.
Clin Mol Hepatol. Published online March 21, 2023. Abstract.