AI hallucinations aren’t a tech problem—they’re a human problem we’ve been dealing with for centuries.
Megan Morrone's excellent Axios piece reinforces my growing conviction: we aren't going to eliminate AI 'hallucinations'.
Why? Because we haven’t solved misinformation and overconfidence in humans either.
Human error and overconfidence are at the root of many serious issues today. The underlying problem isn’t new. People have confidently asserted incorrect information for millennia; AI is just following our lead.
The solution to the underlying problem is the same: critical thinking, source verification, and independent confirmation. This means, as Morrone notes, "keep[ing] a human in the loop," and also ensuring those humans keep other humans in the loop.
Fortunately, scientific research has already shown us the way. Peer review, reproducibility, and collaborative verification have protected us from error and bias for centuries (when we actually use them). We can and should apply the same practices to information that comes from AI.
We already have proven tools for managing unverified information. We just need the discipline to apply them consistently, whether the information comes from people or from AI.
https://www.axios.com/2025/06/04/fixing-ai-hallucinations