
The Homunculus Fallacy - and Why GPT-5 Might Be Walking Right Into It

*Figure: nested boxes showing the GPT-5 system containing a router, which needs its own judge, leading to infinite regress. You cannot explain intelligence by inserting another intelligent agent - the mystery just moves inward.*

Imagine there’s a tiny CEO in your brain, sitting at a desk, watching everything you do, and making all the big calls.

But… who’s managing that CEO?

This is the homunculus fallacy - you can’t explain intelligence by inserting another intelligent agent. You just move the mystery up one level.

Now, think about GPT-5. OpenAI describes a router that decides when to think fast and when to think slow based on “conversation type, complexity, tool needs, and your explicit intent.” But if sophisticated judgment is required to route intelligence… who’s routing the router?

The regress goes like this: the router must choose between AI strategies; choosing well requires judgment; and judgment itself seems to need its own decision-maker.

Of course, routing can be a much simpler cognitive task than doing the work itself. A router might just use lightweight heuristics or pattern recognition - the AI equivalent of a quick “this looks like math, send it to the calculator” decision.
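To make that concrete, here is a minimal sketch of what such a lightweight heuristic router could look like. This is purely illustrative - OpenAI has not published GPT-5's router design, and every keyword, threshold, and function name below is an assumption for the sake of the example:

```python
import re

# Hypothetical surface-level signals that a prompt needs deeper reasoning.
# These are illustrative guesses, not anything from OpenAI's actual system.
SLOW_SIGNALS = [
    r"\bprove\b",
    r"\bderive\b",
    r"\bstep[- ]by[- ]step\b",
    r"\bdebug\b",
]

def route(prompt, explicit_intent=None):
    """Return 'fast' or 'slow' using cheap pattern matching only.

    The point: no deep judgment happens here - just string checks.
    """
    # An explicit user request ("think hard about this") overrides heuristics.
    if explicit_intent == "think hard":
        return "slow"
    text = prompt.lower()
    # Very long prompts get routed to the slower, deeper model.
    if len(text.split()) > 200:
        return "slow"
    # Reasoning-flavored keywords also trigger the slow path.
    if any(re.search(pattern, text) for pattern in SLOW_SIGNALS):
        return "slow"
    return "fast"

print(route("What's the capital of France?"))      # fast
print(route("Prove that sqrt(2) is irrational."))  # slow
```

Notice that nothing in this sketch requires a second intelligent agent: the "decision" is a handful of regex checks. The philosophical question is whether such shallow rules can make good calls when the stakes are high.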

But when the stakes are high, even that “simpler” choice can require surprisingly deep judgment. And if the router needs its own layer of intelligence, we’re back to infinite regress - routers all the way down.

Philosophers spotted this trap centuries ago. Maybe GPT-5’s router really is just simple pattern matching. Or maybe OpenAI has genuinely solved a centuries-old philosophical puzzle and just forgot to mention it in their blog post.