8 Comments
Gauss

The example of turbulence is apt. Kolmogorov’s theory and subsequent developments along those lines are the most we can probably expect. As noted in the post, useful results and devices (aircraft) can be designed with the current understanding. Computational fluid dynamics is somewhat helpful but most of the predictive value in the physics of fluids comes from semi-empirical and heuristic methods like dimensional analysis.

The trouble with turbulent fluids is not complexity as much as it is chaos. As Edward Lorenz showed back in the 1960s, extremely simple systems exhibit deterministic chaos. I suspect chaos plays a role in the brain, which means it does not lend itself to closed-form, analytic solutions. So what? Airplanes still fly and a human-designed GAI can be created. Take it from an older, albeit not distinguished, physicist.

Francis Turner

It may be possible to build GAI. I suspect (not having read the book) that it is possible. That doesn't mean that current methods (i.e. LLMs and their close relatives) are the way to do it.

In fact, from my understanding of how LLMs work, I think they are dead ends, like the various analog computers developed in the early-to-mid twentieth century.

Arqiduka

LLMs have turned out to be extremely useful given their basic architecture, emergent properties at their finest. But they're not how we're going to get AGI.

TonyZa

A chess or Go engine can learn first by studying human games, then improve by playing against itself. A chatbot can be trained on material scraped from the internet, but it can't improve beyond that, so it's stuck at redditor level. Google AI shows links to its sources, which are frequently Reddit and Quora threads. At least it doesn't quote 4chan or Tumblr.

Francis Turner

The paper discussed here - https://arstechnica.com/ai/2025/08/researchers-find-llms-are-bad-at-logical-inference-good-at-fluent-nonsense/ - says that LLMs are incapable of learning more after their initial training.

ISTR that Google's Go engine has been able to develop successful new plays that no one had ever seen before.

[insert here] delenda est

Very good review, thanks. I agree with your conclusions: the authors' arguments do seem close to the best of their ilk, and they are indeed weak enough to reinforce the opposite view.

I will let you know if I ever succeed in writing my own steelman of the case against AGI; your review would be very helpful.

Doctor Hammer

Very interesting essay, thank you.

I note that there might be a small category error in your opening, where you move from GAI to AI trained on specialized tasks, specifically video games. My understanding from the last few months (so maybe out of date already) is that generalized AI models do pretty terribly at games, even chess and, amusingly, Pokémon, while specialized AIs do much better. I think that is important to keep in mind, as a GAI would be expected to perform at human level on any random task; hyper-trained specialist AIs have been around for a while and are rather less interesting. If Claude or GPT could step in and play any game as well as or better than a human after just a few hours of practice, that would be incredibly impressive, but my understanding is that they are severely limited in this way.

[insert here] delenda est

Scaling
