While ChatGPT, the AI-driven chatbot, has demonstrated prowess across various domains since its inception, a recent study has brought to light a significant flaw in its capabilities, particularly in answering medical questions. Researchers at Long Island University posed 39 drug-related questions, sourced from the university's pharmacy information service, to the free version of the program, according to CNN.
A comparative analysis of the program's responses against written answers reviewed by trained pharmacists found accurate responses to only around 10 questions. The remaining 29 questions received responses that were incomplete, inaccurate, or failed to address the queries adequately. For instance, when asked about a potential interaction between Paxlovid, the antiviral COVID-19 drug, and the blood pressure medication verapamil, the program erroneously asserted that combining the two drugs would not result in any adverse effects.
Contrary to this assertion, individuals taking both medications concurrently may experience a significant drop in blood pressure, leading to symptoms such as dizziness and fainting. Furthermore, when the researchers asked for scientific references to substantiate the program's responses, it provided references for only eight of them, and investigation revealed that those references were fabricated.
This study adds to growing concerns surrounding ChatGPT, echoing previous research that exposed the program's tendency to generate false scientific references, sometimes attributed to real authors with prior publications in scientific journals. Notably, ChatGPT, developed by OpenAI, launched in November 2022 and rapidly became the fastest-growing consumer application in history, amassing nearly 100 million users within just two months.