AI “hallucinations” have become a favourite headline. The idea that technology can produce false, misleading, or fabricated outputs makes for great copy. But here’s the uncomfortable truth: humans do exactly the same.
Humans: The Original Hallucinators
- Inconsistency: Managers change their minds up to 40% of the time when faced with the same problem in a different context (Harvard Business Review).
- Bias: Over 180 cognitive biases influence how decisions are made - most of them invisible until challenged.
- Memory gaps: Eyewitness accounts, long trusted in courtrooms, are accurate only around half the time.
If AI hallucinates, so do we.
AI: Fallible, but Predictable
- Repeatability: Give an AI model the same input with its randomness pinned down (temperature at zero, a fixed seed where the API allows it) and you'll get essentially the same output every time. Humans rarely offer that level of consistency (see the sketch after this list).
- Scalability: Under pressure, human error rates rise. AI can analyse millions of data points in seconds without breaking a sweat.
- Transparency: With the right governance, AI outputs can be traced back to data and prompts. Human reasoning is rarely so easy to audit.
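To make the repeatability and transparency points concrete, here is a minimal Python sketch. It is illustrative only: `call_model` is a hypothetical stand-in for whatever provider SDK you actually use, and the JSONL audit log is one simple way to make every output traceable back to the prompt that produced it.

```python
import hashlib
import json
import time


def call_model(prompt: str, temperature: float = 0.0, seed: int = 42) -> str:
    """Hypothetical stand-in for a real LLM call.

    In practice you would call your provider's SDK here, pinning
    temperature to 0 and fixing the seed where the API supports it,
    so identical inputs yield (near-)identical outputs. This toy
    version is trivially deterministic for demonstration purposes.
    """
    return f"[deterministic demo answer for: {prompt}]"


def audited_call(prompt: str, model: str = "demo-model") -> str:
    """Make a repeatable model call and append it to an audit log."""
    output = call_model(prompt, temperature=0.0, seed=42)
    record = {
        "timestamp": time.time(),
        "model": model,
        "prompt": prompt,
        "output": output,
        # Hashes let an auditor verify records haven't been altered.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }
    # Append-only JSONL log: every output traces back to its prompt.
    with open("audit_log.jsonl", "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")
    return output


# Repeatability in action: the same prompt produces the same output,
# and both calls are now recorded in audit_log.jsonl.
first = audited_call("Summarise the Q3 risk report")
second = audited_call("Summarise the Q3 risk report")
assert first == second
```

The design point is simple: repeatability and auditability are properties you engineer in, not qualities you hope for.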
But AI doesn’t bring judgment, ethics, or cultural nuance - the things leaders often lean on most heavily.
The Real Question for Leaders
This isn’t about whether AI or humans hallucinate more. The question is: where does human fallibility act as a feature, and where does AI’s consistency provide an advantage?
- Spotting patterns at scale? AI wins.
- Navigating ambiguity, values, and trade-offs? Humans must lead.
- Accountability and governance? Only people can be held responsible.
The Leadership Outcome
Forward-thinking leaders won’t waste time debating whether AI can be trusted. They’ll build decision-making systems where the risks of both human and AI hallucinations are recognised, managed, and balanced.
The winners won’t be those who trust AI blindly, or distrust it entirely. They’ll be the ones who design organisations where trust isn’t left to chance - it’s built into the system.