How Close Is ChatGPT to the Brain? (An Engineering Point of View)

A neuroscience-informed analysis of what’s missing in current AI systems like ChatGPT compared to human brain architecture, and the concrete engineering roadmap to AGI.
ai
neuroscience
AGI
Author

Jaekang Lee

Published

July 19, 2025

Why I’m Suddenly Optimistic About AGI

I recently fell down a rabbit hole of neuroscience books — especially “A Brief History of Intelligence” and “A Thousand Brains”. For years, I’ve thought of the human brain as this unknowable black box. But these books pull back the curtain.

They reframe intelligence not as some mystical property, but as a series of concrete, evolutionary “upgrades” our brains acquired over millions of years. And when you look at it that way, building AGI starts to feel less like summoning a ghost in the machine and more like a massive, fascinating engineering problem. We can actually see the components that are missing in models like ChatGPT.

Upgrade 1: The Habit Engine (The Basal Ganglia)

The story of intelligence doesn’t start with thinking; it starts with doing.

  • Plants don’t ‘do’ anything, hence no brain.
  • Cellular organisms had to ‘move forward’ and ‘rotate’ toward food, so neurons evolved to drive movement. But how do you know where and when to ‘move forward’ and ‘rotate’? So sensors evolved. Sensor-driven neurons conflicted all the time, though, so a central ‘brain’ evolved to arbitrate. (Super simplified.)

About 500 million years ago, our fish ancestors got a key upgrade: the basal ganglia. Think of this as the brain’s original reinforcement learning system.

It’s the part of your brain that learns through trial and error. When you first learned to ride a bike, you were consciously thinking about every little movement. But after a while, the basal ganglia took over. It learned which muscle movements led to a “reward” (staying upright) and automated the whole process. Now you can ride without even thinking about it.
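
To make the habit-engine analogy concrete, here's a toy sketch of trial-and-error learning in code: tabular Q-learning on a made-up "stay upright" task. The states, actions, and reward rule are all invented for illustration; only the update rule is the standard Q-learning one.

```python
import random
from collections import defaultdict

# Toy "habit engine": tabular Q-learning on an invented balancing task.
states = ["leaning_left", "upright", "leaning_right"]
actions = ["steer_left", "steer_right"]
Q = defaultdict(float)  # (state, action) -> learned value

def step(state, action):
    """Hypothetical environment: steering into the lean keeps you upright."""
    if (state, action) in [("leaning_left", "steer_left"),
                           ("leaning_right", "steer_right")]:
        return "upright", 1.0  # reward: staying upright
    return random.choice(["leaning_left", "leaning_right"]), 0.0

alpha, gamma, epsilon = 0.1, 0.9, 0.1
state = "upright"
for _ in range(10_000):  # trial and error, thousands of wobbles
    if random.random() < epsilon:
        action = random.choice(actions)                     # explore
    else:
        action = max(actions, key=lambda a: Q[(state, a)])  # exploit habit
    next_state, reward = step(state, action)
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state

# After training, the "habit" runs greedily, with no deliberation:
print({s: max(actions, key=lambda a: Q[(s, a)]) for s in states})
```

The point of the sketch: after enough repetitions, the greedy policy *is* the habit, and executing it requires no search, no simulation, no thinking.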

ChatGPT as a Super-Powered Basal Ganglia

In ML terms, ChatGPT is like a super-powered basal ganglia. It was trained on a massive dataset and, through a complex reward system, it learned the patterns of human language. It’s incredibly good at running on these learned “habits,” spitting out text that feels natural.

But here’s the catch: The basal ganglia is for automating what’s already been learned. It’s not great at figuring things out from scratch.
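
To see what "running on learned habits" looks like for language, here's a deliberately tiny sketch: a bigram model that counts which word follows which in a toy corpus, then generates by pure pattern completion. ChatGPT is vastly larger and trained very differently, but the point the analogy makes carries over: generation replays learned statistics, and nothing in the loop consults a model of the world.

```python
import random
from collections import defaultdict

# A deliberately tiny "language model": bigram counts over a toy corpus.
# Pattern completion only; no world model, no goals, no planning.
corpus = ("the cup has a handle . the cup holds coffee . "
          "drop the cup and it breaks .").split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(word, n=8):
    out = [word]
    for _ in range(n):
        followers = counts[out[-1]]
        if not followers:
            break
        # Sample the next word from learned co-occurrence statistics.
        words, weights = zip(*followers.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # fluent-looking, but nothing here *knows* what a cup is
```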

Upgrade 2: The World Simulator (The Neocortex)

[Video: Neocortex Structure]

The neocortex has six layers. Basically, all the wrinkly stuff on the outside of the brain is neocortex, and it is theorized that the whole sheet runs the same circuit everywhere (cortical columns), like a sheet of paper crumpled to fit inside the skull.

This is the game-changer that mammals got. The neocortex is the big, wrinkly outer layer of the brain. If the basal ganglia is the habit engine, the neocortex is the incredible simulation engine.

This is where the second book, “A Thousand Brains” by Jeff Hawkins, blows your mind. Hawkins argues the neocortex is made of thousands of tiny, identical computing units called cortical columns. Each one learns a model of the world. When you look at a coffee cup, you don’t just see it. Your neocortex activates a model of “cup.” You instantly know it has a handle, a bottom you can’t see, and a specific feel, all based on a “reference frame”: a sort of internal coordinate system for concepts.

This is what allows you to imagine. You can mentally rotate the cup, picture it filled with orange juice, or simulate what would happen if you dropped it. You’re running a physics and object model inside your neocortex.
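
Here's a hypothetical sketch of that difference in code: an explicit object model with its own reference frame that you can query and simulate. The Cup class, its part coordinates, and the drop rule are invented toys, not a claim about how cortical columns actually compute.

```python
from dataclasses import dataclass, field

# Hypothetical object model: parts live in the object's own "reference
# frame," so you can answer questions about parts you can't currently see
# and simulate counterfactuals. All rules here are invented.
@dataclass
class Cup:
    material: str = "ceramic"
    contents: str = "empty"
    orientation_deg: float = 0.0
    # part -> position in the cup's own reference frame (x, y, z)
    parts: dict = field(default_factory=lambda: {
        "handle": (1.0, 0.0, 0.5),
        "bottom": (0.0, 0.0, 0.0),  # known even when occluded
        "rim":    (0.0, 0.0, 1.0),
    })

    def rotate(self, degrees):
        self.orientation_deg = (self.orientation_deg + degrees) % 360

    def fill(self, liquid):
        self.contents = liquid

    def drop(self):
        # Toy physics rule: ceramic shatters, steel doesn't.
        return "shatters" if self.material == "ceramic" else "dents"

cup = Cup()
cup.rotate(90)              # mental rotation
cup.fill("orange juice")    # imagine a counterfactual
print(cup.parts["bottom"])  # query a part you can't see
print(cup.drop())           # simulate cause and effect -> "shatters"
```

A bigram model can tell you “cup” co-occurs with “handle”; this kind of model can tell you where the handle is after you rotate the cup.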

What ChatGPT Lacks

ChatGPT doesn’t have this. It knows the word “cup” is statistically associated with the words “handle,” “coffee,” and “drink.” But it has no underlying model of what a cup is. It can’t simulate, which is why it can’t reason from first principles or understand true cause and effect.

Upgrade 3: The CEO (The Prefrontal Cortex)

If we zoom into the most modern part of the neocortex, we find the prefrontal cortex (PFC). This is the brain’s CEO. It doesn’t store information; it directs it. The PFC is what lets you set goals and make plans.

Think of it like this: You’re driving. Your long-term goal, set by your PFC, is “get to the grocery store.” Your neocortex helps you simulate the route, and your basal ganglia handles the actual steering and braking habits.
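
In software terms, the driving example is a three-level control loop. The skeleton below is made up to mirror the analogy, not a brain model: a goal-setter picks the destination, a planner simulates routes against a (hypothetical) world model, and a habit layer executes each low-level step.

```python
# Made-up three-level control loop mirroring PFC / neocortex / basal ganglia.

def set_goal(needs):
    """'PFC': pick a goal from current needs (toy rule)."""
    return "grocery_store" if "food" in needs else "stay_home"

def plan_route(goal, world_model):
    """'Neocortex': simulate candidate routes, pick the shortest."""
    routes = world_model.get(goal, [])
    return min(routes, key=len) if routes else []

def execute(step):
    """'Basal ganglia': automated low-level habits (steer, brake, ...)."""
    print(f"habitually executing: {step}")

world_model = {  # hypothetical learned map: goal -> candidate routes
    "grocery_store": [["main_st", "5th_ave"], ["highway", "exit_3", "5th_ave"]],
}

goal = set_goal(needs={"food"})
for step in plan_route(goal, world_model):
    execute(step)
```

Remove `set_goal` from this loop and nothing ever runs unless someone outside the system supplies a goal. That's ChatGPT.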

ChatGPT Has No CEO

ChatGPT has no PFC. It has no goals of its own. It is a purely reactive system. It will never get bored of a conversation and suggest changing the topic. It will never decide your prompt is flawed and suggest a better one. It’s a powerful engine, but it’s sitting in park until you, the user, turn the key and press the gas.

The AGI Engineering Roadmap

So, looking at it this way, the path to AGI isn’t some fuzzy philosophical quest. It’s a series of concrete engineering problems. For those of us who build models, this is good to keep in mind. Here’s my take on the roadmap:

Problem #1: Build a World Model

We need to move beyond pure pattern-matching. The next generation of AI will need a built-in simulator — a “neocortex” — that can learn the fundamental rules of how the world works, allowing for true reasoning and imagination.
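
Concretely, a "world model" in ML usually means a learned transition function: given the current state and an action, predict the next state, and train on the prediction error. Here's a minimal numpy sketch of that training signal on fabricated linear dynamics; real systems use deep networks over pixels or tokens, but the loss has the same shape.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal learned transition model: next_state ~ W @ [state, action].
# Data is fabricated from hidden linear dynamics, purely for illustration.
state_dim, action_dim = 4, 2
true_A = rng.normal(size=(state_dim, state_dim)) * 0.3
true_B = rng.normal(size=(state_dim, action_dim)) * 0.3

W = np.zeros((state_dim, state_dim + action_dim))  # the "world model"
lr = 0.05

for _ in range(5_000):
    s = rng.normal(size=state_dim)
    a = rng.normal(size=action_dim)
    s_next = true_A @ s + true_B @ a   # what the world actually does
    x = np.concatenate([s, a])
    pred = W @ x                       # what the model expects
    err = pred - s_next                # surprise = prediction error
    W -= lr * np.outer(err, x)         # learn to be less surprised

print("final prediction error:", float(np.mean(err**2)))
```

Once trained, you can roll the model forward without touching the real world; that rollout is the "imagination" from Upgrade 2.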

Problem #2: Create the Dynamic Loop

We need to connect this new simulator to a reinforcement learning system (a “basal ganglia”). This will create a feedback loop where the AI can learn continuously from its interactions, just like we do. It needs to be able to update its own weights on the fly.
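
One existing recipe with exactly this shape is Dyna-style RL (Sutton's Dyna-Q): learn a model of the world from real interactions, and let the habit system also train on imagined rollouts from that model. Below is a runnable toy version on an invented 5-state corridor; today's deployed LLMs, by contrast, keep their weights frozen at inference time.

```python
import random
from collections import defaultdict

# Minimal Dyna-Q sketch: learn a world model from real interactions and
# also train habits on imagined rollouts. The corridor world is invented.
N_STATES, GOAL = 5, 4
actions = [-1, +1]  # move left / right

def env_step(s, a):
    s2 = min(max(s + a, 0), N_STATES - 1)
    return s2, (1.0 if s2 == GOAL else 0.0)

Q = defaultdict(float)
model = {}  # learned world model: (s, a) -> (s', r)
alpha, gamma, eps = 0.1, 0.95, 0.1

s = 0
for _ in range(2_000):
    a = (random.choice(actions) if random.random() < eps
         else max(actions, key=lambda x: Q[(s, x)]))
    s2, r = env_step(s, a)  # one real interaction
    best = max(Q[(s2, x)] for x in actions)
    Q[(s, a)] += alpha * (r + gamma * best - Q[(s, a)])  # learn from reality
    model[(s, a)] = (s2, r)                              # update world model

    # "Imagination": 5 extra updates replayed through the learned model.
    for (ms, ma) in random.sample(list(model), min(5, len(model))):
        ms2, mr = model[(ms, ma)]
        mbest = max(Q[(ms2, x)] for x in actions)
        Q[(ms, ma)] += alpha * (mr + gamma * mbest - Q[(ms, ma)])

    s = 0 if s2 == GOAL else s2  # restart the episode at the goal

print({st: max(actions, key=lambda x: Q[(st, x)]) for st in range(N_STATES)})
```

The imagined-update step is what "connect the simulator to the habit engine" buys you: most of the training happens inside the model, and reality is only sampled to keep the model honest.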

Problem #3: Engineer a Goal-Setter

I’m less sure about this one. A human with a damaged prefrontal cortex can lose the will to do anything. ChatGPT doesn’t have that problem; in fact, it has the opposite one: it will always output something, even when it’s a hallucination. Very funny.

Explore Brain Anatomy Yourself

Want to dive deeper into brain anatomy? Check out the interactive 3D Brain Explorer from BrainFacts.org. You can explore each brain region we discussed and many more in stunning 3D detail.


Video Attribution: Brain anatomy videos used with permission. Copyright © Society for Neuroscience (2017). Users may copy images and text, but must provide attribution to the Society for Neuroscience if an image and/or text is transmitted to another party, or if an image and/or text is used or cited in User’s work.

Conclusion

I believe this is what LeCun means when he says “AGI requires a world model.” He goes further, arguing that a simple simulator won’t work (which was my original thought: “why don’t we just give ChatGPT simulation tools we can program, like AlphaGo did?”). He says real AGI has to experience the actual world we live in. I’m not sure how that can scale, though; it would take a full real-world simulation. Maybe GTA 7 (?)

Seeing it laid out like this makes me genuinely optimistic. The challenge is enormous, but it’s no longer an unknowable black-box mystery. We have the blueprint. Now we just have to build it.

I’m sharing this because these books really fascinated me, and I’ve been thinking about how to apply what I learned to the real world. I’d love to hear what you have to say.