Why we’re not close to AGI and why we may never be
Imagine you meet a guy who tells you he’s built a robot which climbs trees.
His robot is really good at it. He’s got it climbing all kinds of trees, more and more quickly.
And then he tells you “so it’s only a matter of time before it can climb Mount Everest”.
Your response would be, I hope, “seriously?” — because climbing a tree and climbing a mountain are two very different things.
Ice, snow, wind, terrain, temperature, lack of oxygen … it’s not even close to the same type of task. You would need a radically different robot, ten times the size, that could do a hundred other things. And the guy hasn’t even tried a small mountain yet, just various types of tree.
This is what’s happening with AI. People have built a system which does one thing quite well, and they keep telling us, and people keep believing, that it’s Only A Matter Of Time, that Things Are Improving Every Day, that We Are Close. They think that ChatGPT is a step toward real artificial intelligence, that we’re nearly there.
We’re not.
In fact, a recent scientific paper asserts that not only are we not close, but that true artificial intelligence might just be impossible.
The only question remaining is, when the guy with the tree-climbing robot tells you he’s going to attempt Everest soon, is he lying, or just out of his mind on ketamine and hubris?
[Image: a tree]

[Image: Mount Everest]

