LLMs are a local maximum in the same way that ball bearings can't fly. LLM-like engines will almost certainly be components of an eventual AGI-level machine.
What is your "almost certainty" based on? What does it even mean? Every thread on LLMs is full of people insisting their beliefs are certainties.
What I am certain of is that we should not praise the inventor of ball bearings for inventing flight, nor claim that once ball bearings were invented, flight became inevitable and only a matter of time.
I say 'almost certainly' because LLMs are basically just a way to break down language into its component ideas. Any AGI-level machine will almost certainly be capable of swapping semantic 'interfaces' at will, and something like an LLM is a very convenient way to encode that interface.
I don’t think that’s necessarily true. That presumes the cobbled-together assortment of machine learning algorithms we have now will somehow get us to AGI; if we need a fundamentally different way of doing things, there’s no reason to assume it will use a language model at all.