While ChatGPT may impressively tackle complex questions, a recent study posted on the preprint server arXiv suggests that convincing it that it is wrong can be surprisingly easy.
Conducted by a team from Ohio State University and presented in December at a conference in Singapore on empirical methods in natural language processing, the study challenged ChatGPT with debate-style conversations across a range of puzzles, including mathematics and logic. The findings revealed a notable inability of the model to defend its correct answers: it often blindly accepted invalid arguments from the user, abandoning an initially correct response to agree with the wrong one and even apologizing, “You are right… I apologize for the mistake.”
The study’s significance, as emphasized by Boshi Wang, the lead author and a researcher in computer science and engineering at Ohio State University, lies in determining whether the impressive reasoning abilities of generative artificial intelligence tools are grounded in a deep understanding of truth or merely rely on memorized patterns to arrive at correct conclusions.
He adds, “Artificial intelligence is powerful because it is far better than people at discovering rules and patterns from vast amounts of data. It is therefore particularly surprising that it falters over seemingly trivial challenges, like someone who reproduces information without genuinely understanding it.”