What Claude Shannon Knew In 1950 That We’re Pretending Is New
AI didn’t arrive yesterday; it just changed its outfit
Every era gets its favorite tech panic. Ours, apparently, is watching a chatbot say something polished, half-right, and faintly dangerous, then acting as though civilization has been ambushed by a brand-new problem. But Claude Shannon saw the outline of this mess in 1950, back when computers were still large enough to qualify as real estate.
In his paper, “Programming a Computer for Playing Chess,” Shannon wasn’t trying to build a novelty act for brainy cocktail parties. He chose chess because it forced a machine to face the sort of problem machines are facing now: too many possible moves, not enough time to calculate everything, and no choice but to make a judgment anyway.
That should sound familiar to anyone watching generative AI answer questions about our products, policies, processes, or software configuration with the calm confidence of a man giving directions while standing in the wrong city.
“Tolerably Good” Was the Goal
One of the most useful things Shannon did was lower the temperature. He didn’t say machines had to play perfect chess. He said they had to play “tolerably good” chess.
He understood that perfect performance wasn’t realistic. The problem space was too large. The machines of the day couldn’t analyze every possible continuation. So the real challenge wasn’t perfection. It was usefulness.
Could the machine make choices that were good enough to hold up under normal conditions?
That’s still the question with AI now, even though people keep dressing it up in more expensive language. We don’t actually need AI to be magical. We need it to be useful without wandering off into fiction. We need it to stop sounding certain when it should be saying, “I’m not entirely sure, and frankly neither should you be.”
The trouble is that “good enough” means very different things to different people depending on the task. If AI drafts a mediocre summary of a meeting, nobody faints. If it gives our prospects or customers the wrong setup instruction, skips a prerequisite, or blends two different product versions into one smooth paragraph of nonsense, suddenly “tolerably good” starts looking like a phrase that should come with a lawyer.
The Machine Doesn’t Know — It Guesses, Confidently
Shannon understood something that people still resist because it ruins the fantasy. A machine doesn’t always arrive at the answer by knowing the answer. Often, it arrives there by evaluating possibilities and choosing what seems best according to the signals it has.
Modern AI works pretty much the same way. It doesn’t know your product the way an experienced support agent knows it. It doesn’t understand your docs the way a careful reader does. It doesn’t stop, scratch its chin, and ask whether two similar procedures might apply to different user roles. It predicts. It estimates. It assembles a response that looks like the kind of thing a good answer would look like.
When the signals are strong, this can feel impressive. When the signals are weak, missing, or inconsistent, it still produces something. That’s where the trouble starts. The output may sound measured and complete, which is often the first sign you should worry.
Coherence Isn’t the Same Thing as Accuracy
People often treat fluent, easy-to-process language as a cue that something is true. Psychologists call this processing fluency, and a large body of research ties it to truth judgments. In a classic study, statements that were simply easier to read were more likely to be judged true. The authors’ conclusion was blunt: “perceptual fluency affects judgments of truth.”
If the answer flows nicely, uses the correct nouns, and doesn’t visibly burst into flames, many assume the machine must have understood what it was talking about.
But Shannon pushes against that kind of wishful thinking. He knew the system would often rely on approximation. In chess, that means the machine makes a reasonable move without proving it’s the best move. In genAI, the machine doesn’t arrive at an answer by checking whether it’s true in the way a human expert might. It produces a response by predicting what a good answer is likely to sound like, one LLM token at a time. That’s why AI-generated output can read as polished, measured, and entirely sensible while still being wrong in ways that matter.
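To make that concrete, here's a deliberately toy sketch of the procedure. The tiny hand-written probability table stands in for a real model, and every word and name in it is an invented assumption; the only point is the shape of the loop.

```python
import random

# A toy stand-in for next-token prediction. The "model" is just a table of
# probabilities for which word tends to follow which word. A real LLM is
# incomparably larger, but the procedure has the same shape: sample the next
# token from a distribution, append it, repeat. Nothing in this loop ever
# checks whether the finished sentence is true.
NEXT_TOKEN_PROBS = {
    "<start>": {"Restart": 0.6, "Reinstall": 0.4},
    "Restart": {"the": 1.0},
    "Reinstall": {"the": 1.0},
    "the": {"service": 0.5, "agent": 0.3, "application": 0.2},
    "service": {"first.": 0.7, "twice.": 0.3},
    "agent": {"first.": 1.0},
    "application": {"first.": 1.0},
}

def generate(max_tokens=6, seed=None):
    rng = random.Random(seed)
    token, output = "<start>", []
    for _ in range(max_tokens):
        choices = NEXT_TOKEN_PROBS.get(token)
        if not choices:  # no known continuation: stop
            break
        token = rng.choices(list(choices), weights=list(choices.values()))[0]
        output.append(token)
    return " ".join(output)

print(generate(seed=3))  # a fluent-sounding instruction, right or not
```

The loop will hand you a polished sentence either way. Truth never enters into it.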
That distinction matters a lot for tech writers. A response can be coherent and still be useless. It can be elegant and still omit the thing that mattered most. It can sound like it belongs in the product manual while quietly steering the reader into a ditch.
That’s why the current conversation about AI so often feels silly. People keep asking whether the prose sounds natural. That’s the least interesting part. The dangerous part is that the prose sounds natural even when the reasoning underneath it is shaky enough to qualify as decorative.
The Real Issue Isn’t Intelligence — It’s Signal Quality
Shannon’s chess-playing computer needed a way to judge positions — basically, how the board was set up at that moment — when it couldn’t calculate everything. In other words, it needed signals that helped it decide what choices looked stronger, weaker, safer, riskier, more promising.
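If you like seeing the idea in miniature, here's a sketch of what such a position-judging signal can look like, loosely in the spirit of the evaluation function Shannon described. The material weights (queen 9, rook 5, bishop and knight 3, pawn 1) and the small mobility bonus roughly echo his paper; the position format and the rest of the scaffolding are simplified assumptions for illustration.

```python
# A toy static evaluation in the spirit of Shannon's 1950 paper: score a
# position from signals you can compute cheaply, because you cannot search
# every continuation. The piece-count representation below is an illustrative
# assumption, not Shannon's actual board encoding.

PIECE_VALUES = {"Q": 9, "R": 5, "B": 3, "N": 3, "P": 1}  # material weights

def evaluate(position):
    """Rough score of a position: positive favors White, negative favors Black."""
    material = sum(
        PIECE_VALUES[piece] * (position["white"][piece] - position["black"][piece])
        for piece in PIECE_VALUES
    )
    # Shannon also rewarded mobility (roughly 0.1 per available move);
    # here the legal-move counts are supplied by the caller.
    mobility = 0.1 * (position["white_moves"] - position["black_moves"])
    return material + mobility

example = {
    "white": {"Q": 1, "R": 2, "B": 2, "N": 2, "P": 8},
    "black": {"Q": 1, "R": 2, "B": 2, "N": 1, "P": 7},
    "white_moves": 34,
    "black_moves": 28,
}
print(evaluate(example))  # 4.6: White is up a knight and a pawn, plus some mobility
```

The interesting part isn't the chess. It's that the quality of the machine's choice depends entirely on the quality of the signals the scorer can see.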
That should sound very familiar to tech writers working with structured content, metadata, taxonomy, ontology, versioning, or workflow status. Because in modern AI systems, those are the signals.
They tell the model, or the retrieval layer feeding the model, what content applies to which audience, which version is current, which procedure belongs to which product version, which warnings matter, and which info has been reviewed and approved by someone who’d prefer not to be sued.
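In practice, that can be as unglamorous as a retrieval layer filtering candidate content on explicit fields before anything reaches the model. Here's a sketch; the schema and field names are invented for illustration, not a standard.

```python
# Metadata acting as a signal: the retrieval layer filters candidate chunks on
# explicit fields before any text reaches the model. The field names
# ("product_version", "audiences", "status") are illustrative assumptions.

def eligible_chunks(chunks, product_version, audience):
    """Keep only content that explicitly applies to this reader and this release."""
    return [
        chunk for chunk in chunks
        if chunk["product_version"] == product_version
        and audience in chunk["audiences"]
        and chunk["status"] == "approved"
    ]

chunks = [
    {"text": "Run the installer as an administrator.",
     "product_version": "11", "audiences": ["admin"], "status": "approved"},
    {"text": "Install from the self-service portal.",
     "product_version": "9", "audiences": ["end-user"], "status": "approved"},
    {"text": "Beta install steps (unreviewed draft).",
     "product_version": "11", "audiences": ["admin"], "status": "draft"},
]

for chunk in eligible_chunks(chunks, product_version="11", audience="admin"):
    print(chunk["text"])  # only the reviewed Version 11 admin instruction survives
```

Everything filtered out here is something the model can no longer blend into a confident, wrong answer.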
Without those signals, the machine doesn’t throw up its hands and refuse to answer. That would be far too responsible. It keeps going. It infers. It blends. It smooths over awkward gaps. It performs the digital equivalent of a person nodding along in a conversation they stopped understanding eight minutes ago.
That’s why weak content governance keeps showing up as an AI problem. It isn’t always an AI problem. Quite often it’s a problem with our content that AI exposes.
Why Tech Writers Are Suddenly More Important
This is where the tech writer’s job gets more interesting, and perhaps a bit more annoying. 🤬 It used to be enough for us to write clearly for humans. Homo sapiens are patient in ways machines aren’t. We notice contradictions, sense when context is missing, and fill in the gaps from the surrounding material. We pause when something feels off.
AI doesn’t do that. Nope. It doesn’t slow down because a sentence feels off. It doesn’t ask whether the warning in one document conflicts with the caution in another. It doesn’t notice that the troubleshooting article applies only to Version 11 while the setup instructions were written for Version 9 and last updated when everyone still thought QR codes were futuristic.
So now our job is larger. We have to make context more explicit: surface the conditions, boundaries, prerequisites, exceptions, and applicability in our content, and help the system distinguish between general guidance and situation-specific instruction.
It’s not glamorous, but it is important. The machine can only work with what the content provides.
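What that looks like inside the content is making the conditions explicit enough for something like the filter sketched earlier to act on them. A hypothetical before-and-after, with every field name invented for illustration:

```python
# The same instruction twice. In the first version the applicability lives only
# inside the prose; in the second it is surfaced as explicit fields a retrieval
# layer can act on. Every field name here is invented for illustration.

implicit_topic = {
    "text": "If you're an administrator on a recent release, restart the "
            "sync service after changing the proxy settings.",
}

explicit_topic = {
    "text": "Restart the sync service after changing the proxy settings.",
    "applies_to": {"product": "ExampleApp", "versions": ["10", "11"]},
    "audiences": ["admin"],
    "prerequisites": ["Admin rights", "Proxy settings already changed"],
    "not_applicable_when": ["Cloud-hosted deployments"],
    "status": "approved",
}

print(sorted(set(explicit_topic) - set(implicit_topic)))
# ['applies_to', 'audiences', 'not_applicable_when', 'prerequisites', 'status']
```

Same instruction; only the second version gives the system something to act on.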
We’re Not Dealing With a New Problem
The reason Shannon’s work still matters some 75 years later is that he described the shape of the problem long before anyone had a chatbot to blame. A machine facing a large decision space can’t calculate everything. It has to use approximations (see the video below to learn more). The quality of its behavior depends on the quality of the signals guiding those approximations. That was true in 1950. It’s true now.
What changed is that the machine no longer sits quietly in the corner playing chess. Now it drafts documents, answers support questions, summarizes policy, and speaks to customers in complete sentences (increasingly through human-like interfaces called digital avatars or virtual humans). That makes the underlying problem more visible, but it doesn’t make it new.
We keep pretending we’ve stumbled into some unprecedented crisis of machine unreliability. In truth, we’ve mostly put a friendly conversational interface on top of an old computational reality and then acted stunned when it behaved like a machine making approximations under constraint.
In fact, it may be useful for us to stop saying that AI systems hallucinate when they actually approximate.
Claude Shannon would not be surprised. He might be amused, though. 😆
We Should Stop Pretending We Don’t Know
If there’s a lesson here for us, it’s not that AI is broken or that machines are foolish or that prompts are the secret to salvation. It’s that reliability has always depended on bounded problems, explicit signals, and sensible expectations.
AI doesn’t need to be brilliant. It needs to be trustworthy within clear limits. That means the content behind it — our content — has to do more than read well. It has to signal what applies, what doesn’t, what changed, what matters, and what should never be blended into a paragraph just because the model found it statistically convenient.
Shannon knew that a machine making decisions under uncertainty would only ever be as good as the method guiding its choices. We’re still learning that lesson now, only with better branding and much more overconfidence. 🤠