Category Mistakes and Elegant Nonsense
In Human Thought and Artificial Intelligence
Gilbert Ryle introduced the notion of a category mistake to explain a peculiar kind of error, one that does not arise from ignorance, confusion, or lack of intelligence, but from misplaced competence. The error occurs when a concept is applied as though it belonged to a logical category it does not inhabit, or when a question is asked that presupposes the wrong kind of thing.
Ryle’s now-classic examples are familiar: the visitor to Oxford who, after being shown the colleges, libraries, and administrative buildings, asks where the university itself is located. Or the reader who, having been given a complete account of neural activity, perception, memory, and behavior, asks where the mind itself is hiding.
The mistake is not factual. It is grammatical, conceptual, and structural. The university is not an additional object alongside the buildings. The mind is not an extra entity alongside the processes that constitute it.
What made Ryle’s insight unsettling was his insistence that these mistakes arise from linguistic fluency, not from ignorance. People who commit them are often articulate, educated, and confident. The sentences sound right. The questions feel natural. That is precisely why the errors are so durable. Once seen, they are obvious. Before that, they are invisible.
Consider this sentence:
“When there were no clocks, it must have been quite hard to be on time.”
Nothing about the sentence sounds foolish. It is grammatically clean. It gestures toward a historical intuition. And yet it commits a category mistake.
“On time” already assumes clocks, fixed measures, and shared expectations. Without those, the phrase has no foothold. The sentence invents a problem by importing a concept into a setting whose conditions give it no purchase.
Better historical data do not correct this sort of error. It is corrected by noticing that the predicate does not belong to the subject. Many readers can see that instantly, without appeal to evidence or consequences. The correction is internal and immediate.
This example shows exactly what is at stake. Category errors can be detected purely by structural awareness. The correction comes from sensitivity to conditions of intelligibility, not external pressures or repercussions.
It is tempting to say that both humans and AI systems are prone to category mistakes. That is true in a shallow sense. But the symmetry breaks as soon as we look at how category errors are detected and corrected.
A human writer can reread a sentence and notice, silently and without drama, that something is off. Not false, but wrongly framed. The wrong kind of explanation. The wrong register. The wrong sort of question. This does not require feedback from the world. It requires ownership of the framing stance.
An AI system does not have that.
It can reproduce definitions, recognize canonical examples, and flag category mistakes when they match known patterns. What it cannot reliably do is notice that its own output has crossed explanatory registers, unless the error matches a pattern it was trained to flag or a user points it out.
Consider this AI-generated sentence:
“Once the model understands the user’s intention, it can decide how best to guide the conversation toward insight.”
The sentence is fluent and coherent. Syntax and semantics pose no problem. Nothing in it triggers an internal alarm. And yet it is pure nonsense.
The AI operates entirely by pattern continuation. To attribute understanding, intention recognition, decision-making, or pedagogical guidance to it misplaces psychological and educational categories. The model has no access to a user’s intention and no capacity to guide anyone toward insight. The sentence sounds explanatory only because grammar permits it.
This is not a defect in intelligence. It is an architectural limit. Human writers are not merely producing coherent sentences. They are policing categories as they go, often pre-reflectively. AI systems generate fluent continuations of prompts and evaluate them only when evaluation is explicitly cued. Their fluency, in this case, is a liability.
Category errors bedevil human discourse just as predictably, particularly in discussions of self, mind, awareness, and agency. Much of the confusion in contemporary discussions about these topics can be traced to category mistakes that go unnoticed because they seem like answers rather than errors.
We speak as if awareness were an agent; the self were an inner object; and experience were something that requires management. Presence is treated as a place one can enter or leave. Attention becomes a faculty that stands apart from what it attends to. Each move is grammatically licensed. Each produces sentences that seem significant. And each quietly invents problems that then call for solutions, paths, methods, or practices.
In my essays and in correspondence with readers, a pattern recurs—not universally, of course, but frequently. A description of how experience functions is misperceived as guidance. A denial of agency is heard as an instruction to relinquish control. An observation about systems and loops is treated as a metaphysical position. These mistakes are not disagreements. They are miscategorizations.
AI systems amplify this pattern. They are exceptionally good at producing language that fits an activated frame and remains coherent within it. They are less reliable at noticing when the frame itself is misplaced. Without human curation, coherence persists while categories drift, because nothing actively polices their boundaries.
This is why AI without supervision can generate elegant nonsense. Without a human responsible for conceptual hygiene, its outputs drift in the name of fluency.
Ryle’s point was never merely academic. He was warning that language, left unchecked, invents entities and tasks that feel real simply because grammar allows them. The shock is not that people fall for this. The shock is how often it occurs in serious discourse.
Category mistakes are not beginner errors. They are expert errors. They arise precisely where language is most fluent, and confidence is highest. Noticing them does not make one wiser, just more careful. And once the care is there, many problems evaporate before they can even be put into words.
A reminder: David Matt’s narration of Depending On No-thing is available as an Audible audiobook.

Thank you for restacking Category Mistakes and Elegant Nonsense, Dab Dib. I found it interesting that you also restacked the essay on cybernetics by The Living Fractal, which is a theme that runs through my work too.
Ryle’s point was that category mistakes arise when we posit an extra something where a competence already suffices.
When you say humans correct category errors through “ownership of the framing stance,” it’s unclear whether you’re naming a distinct capacity or merely redescribing norm-governed linguistic practice.
If the former, that seems to reintroduce precisely the kind of supervisory element Ryle warned against. If the latter, then the contrast with AI appears functional and contextual, not categorical.
I’m not disagreeing, just asking which category you intend and what explanatory work “ownership” is doing here.