Oops — An AI Said Why it was Wrong

Large language models (LLMs) are wonderful tools, but they sometimes make significant mistakes. This informal note shows a stream of incorrect responses from Perplexity (a generally excellent resource) when we asked it about a rental property. Despite a series of follow-up questions and prompts for additional, accurate, and definitive information, the LLM confidently and persistently got the facts wrong. This note traces the causes and conditions behind the errors.

When an LLM appears to “reflect” on an answer, it does not build, test, refine, or correctly explain a realistic model of the world, as a curious and responsible person can. It may incorrectly combine information from outdated sources. It can be difficult for a user to know if, when, and why LLM responses are incorrect. In this example, the LLM exhibits great but misplaced confidence in its answers.

With further prompting, Perplexity provided previously published explanations of why LLMs can confidently give incorrect responses. LLM technology is the product of impressive effort and engineering. Nonetheless, it should be used with caution. This note illustrates how its mistakes occur.

Full Post: https://www.dropbox.com/scl/fi/kk0d9y8i5oyi93im0c6o3/2025-10-28-Oops-An-AI-Said-Why-it-was-Wrong.pdf?rlkey=aj0bci6ccgfwnnd1g2n16vi4b&dl=0
