Mikael Svanstrom

Will LLMs take us to AGI or any kind of Artificial Intelligence?


I know this is an impossible question unless I first define what I mean by AGI, but I addressed that at least partially in a previous article (here: https://www.linkedin.com/pulse/im-artifical-intelligence-timeline-sceptic-part-1-mikael-svanstrom-u7fec/).

So will LLMs take us there? It seems, at least in the very gossipy world of AI researchers and AI companies, that LLMs are beginning to hit a point of diminishing returns. There are other signs and rumours that the next models in training are not delivering the leaps that previous generations did. I watched a recent talk by Ilya Sutskever (source: https://www.youtube.com/watch?v=1yvBqasHLZs) in which he points out that pre-training will end. We’ve been scaling compute with great success, but we are reaching peak data. That is, we only have a finite data set to train these models on, so what happens after that? Is data itself a limit? I don’t think so, but for anything to be regarded as intelligent it would need to be able to come up with new data, not just summarise existing data.

And of course we have the Apple researchers who recently published a paper stating: “We hypothesize that this decline is because current LLMs cannot perform genuine logical reasoning; they replicate reasoning steps from their training data.” (source: https://machinelearning.apple.com/research/gsm-symbolic)

So what does this mean? An LLM is not in and of itself intelligent and never will be. I quite like the view of Meta’s chief AI scientist Yann LeCun. One of his recent posts on X summarised it quite well: “Retrieval isn’t reasoning, rote learning isn’t understanding, accumulated knowledge isn’t intelligence.”

I saw an interesting way of explaining this (a comment under this video: https://www.youtube.com/watch?v=AqwSZEQkknU&t=212s):

An LLM does not measure its success by "how correct it is"; it measures success by "how similar is this text to human text". To an LLM, "completing physics" would be considered a mistake, because humans haven't completed physics, so by completing physics it's not doing its job of copying humans properly. I have no idea why anyone acts like this is anything like how a human behaves.
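To make that comment concrete: the pre-training objective behind this behaviour is plain next-token prediction. The model is scored on how much probability it assigns to the token a human actually wrote next, not on whether the resulting text is true. Below is a minimal sketch of that objective; the model and VOCAB_SIZE are placeholders for any network that returns per-token logits, not a reference to any specific implementation.

```python
import torch.nn.functional as F

VOCAB_SIZE = 50_000  # placeholder vocabulary size

def pretraining_loss(model, tokens):
    """Standard next-token prediction loss.

    tokens: LongTensor of shape (batch, seq_len) of human-written text.
    model:  any callable mapping (batch, seq_len - 1) token ids to
            (batch, seq_len - 1, VOCAB_SIZE) logits.
    """
    inputs, targets = tokens[:, :-1], tokens[:, 1:]
    logits = model(inputs)
    # Cross-entropy rewards matching what a human wrote next --
    # "similarity to human text", not correctness.
    return F.cross_entropy(
        logits.reshape(-1, VOCAB_SIZE),
        targets.reshape(-1),
    )
```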

But maybe I’m asking the wrong question. By itself, the answer is No: LLMs will not take us to AGI. But layered with other models, LLMs could be part of a greater whole that we would indeed call intelligent; a toy sketch of what that layering might look like follows the list below. Ilya Sutskever’s talk calls out the areas where we will see development:

·       Agentic – Current systems only show a very limited version of what agents will be able to do.

·       Ability to Reason – Ability to process information, draw conclusions, and make decisions based on evidence and rational thought.

·       Ability to Understand from limited data – The cycle of more data/more compute needs to be broken.

·       Will be Self-aware – This is an interesting point and one I may need to write a completely separate post about. Ilya doesn’t delve into it in any detail. We are part of our own world models, so the argument is that an intelligent system also needs to include itself in its world model. I’m not sure this is needed, or even wanted. Self-awareness may have side effects we’d prefer not to have to deal with.
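To illustrate the layering idea mentioned above, here is a toy agent loop in which the LLM only proposes the next action, while separate components execute it and decide when the goal is met. Every name here (llm_propose, run_tool, is_done) is a hypothetical placeholder, not the API of any real framework.

```python
from dataclasses import dataclass, field

@dataclass
class AgentState:
    goal: str
    history: list = field(default_factory=list)  # (action, observation) pairs

def run_agent(llm_propose, run_tool, is_done, goal, max_steps=10):
    """Toy loop: the LLM is one layer among several.

    llm_propose(state) -> action   # the LLM suggests the next step
    run_tool(action)   -> result   # an external tool does the actual work
    is_done(state)     -> bool     # a separate check decides success
    """
    state = AgentState(goal=goal)
    for _ in range(max_steps):
        action = llm_propose(state)
        observation = run_tool(action)
        state.history.append((action, observation))
        if is_done(state):
            break
    return state
```

The point of the sketch is only that any intelligence-like behaviour would come from the whole loop, not from the LLM call alone.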

So the answer is a No. But don’t be surprised if ChatGPT 10 ends up intelligent. It just won’t be only an LLM; it will be many other things too.
