Human LLMs exist. I just found one.


By Vladimír Záhradník
Cliff Clavin — the original human LLM. Image credit: Paramount Television / Cheers

When people talk about AI today, two reactions are common:

  • excitement
  • fatigue from hearing about it everywhere

But there is one weakness of generative models that everyone recognizes:

hallucinations.

Large language models can produce answers that sound perfectly plausible — even when they are completely wrong.

Yesterday I realized something funny.

Human LLMs have existed for decades.

I recently started watching Cheers again, the classic sitcom set in a Boston bar. And suddenly it hit me.

One of the regulars — Cliff Clavin, the postal worker — behaves exactly like a hallucinating chatbot.

Cliff Clavin:

  • he can answer any question you ask him
  • his answers sound authoritative and detailed
  • they usually collapse quickly under fact‑checking
  • he is absolutely confident while delivering them
  • and his explanations tend to be… very long

In other words, Cliff behaves like an LLM running at high temperature.
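For the curious: "temperature" is a real sampling parameter, not just a metaphor. Here is a minimal sketch in plain Python (the function name is my own, for illustration) of how temperature reshapes a model's output distribution — the higher it goes, the flatter the distribution, and the more often unlikely (and often wrong) continuations get picked:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Turn raw model scores into a probability distribution.

    Low temperature sharpens the distribution toward the top choice;
    high temperature flattens it, so improbable continuations get
    sampled more often -- the "confident nonsense" regime.
    """
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

With scores `[1.0, 2.0, 3.0]`, a temperature of 0.1 puts nearly all probability on the best option, while a temperature of 100 makes all three options almost equally likely. Cliff, presumably, runs at the latter setting.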

And of course, he never says:

"I don't know."

Once you see it, it is hard to unsee.

Every workplace probably has at least one human LLM.
