Searle's Chinese Room at 46: Does It Still Apply to GPT?
In 1980, John Searle imagined a person locked in a room, receiving Chinese characters through a slot, consulting a massive rulebook, and passing back responses that native Chinese speakers would find fluent and correct. The person does not understand a word of Chinese. Searle's conclusion: a computer that merely manipulates symbols by rule does not understand them, no matter how convincing its output.
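To see the mechanism Searle has in mind, here is a minimal Python sketch of the room's logic. The rulebook entries and the chinese_room function are hypothetical illustrations, not anything from Searle's paper; the point is only that the operator applies rules without understanding either side of the mapping.

```python
# A toy "rulebook": a lookup from input symbols to output symbols.
# These entries are hypothetical stand-ins for Searle's vast rulebook.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天很晴朗。",  # "How's the weather?" -> "It's sunny."
}

def chinese_room(symbols_in: str) -> str:
    """Apply the rulebook to incoming symbols; no understanding is involved."""
    # The operator matches shapes, not meanings.
    return RULEBOOK.get(symbols_in, "请再说一遍。")  # fallback: "Please say that again."

print(chinese_room("你好吗？"))  # fluent-looking output, zero comprehension
```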
The Scale Objection
The most common response: surely at sufficient scale, pattern matching becomes understanding. But this confuses quantity with quality. A billion lookup tables are still lookup tables.
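One way to see why scale alone does not change the category: composing lookup tables just yields another lookup table. A small sketch, with made-up tables:

```python
# Two hypothetical lookup tables, chained one after the other.
table_a = {"in1": "mid1", "in2": "mid2"}
table_b = {"mid1": "out1", "mid2": "out2"}

# Their composition is itself nothing but a (bigger) lookup table.
composed = {key: table_b[mid] for key, mid in table_a.items()}
print(composed)  # {'in1': 'out1', 'in2': 'out2'}
```

Multiply the tables by a billion and the object you get is larger, not different in kind.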
The Grounding Problem
Modern LLMs learn statistical connections between words: they know that "doctor" appears near "hospital" far more often than near "volcano." But statistical co-occurrence is not semantic understanding. The model's tokens are defined only by their relations to other tokens; nothing in training ever connects "doctor" to an actual doctor.
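The flavor of signal an LLM actually gets can be caricatured with raw co-occurrence counts. The three-sentence corpus below is invented for illustration; notice that the counts record which words travel together, and nothing about what a doctor or a hospital is.

```python
from collections import Counter
from itertools import combinations

# A hypothetical toy corpus, purely for illustration.
corpus = [
    "the doctor examined the patient at the hospital",
    "the doctor left the hospital after her shift",
    "the volcano erupted near the coastal village",
]

# Count how often each unordered word pair shares a sentence.
pair_counts = Counter()
for sentence in corpus:
    for pair in combinations(sorted(set(sentence.split())), 2):
        pair_counts[pair] += 1

print(pair_counts[("doctor", "hospital")])  # 2
print(pair_counts[("doctor", "volcano")])   # 0
```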
New Chinese Rooms at Scale
Modern LLMs create new versions of the Chinese Room at unprecedented scale. Every chatbot interaction is a Chinese Room: symbols in, rules applied, symbols out. The room has gotten enormously larger and faster. But the person inside still does not understand Chinese.