I'll simultaneously call all current ML models "stupid" and also say that SOTA LLMs can operate at junior (software) engineer level.
This is because I use "stupidity" to mean the number of examples an intelligence needs in order to learn, while "performance" refers only to the quality of the output.
LLMs *partially* make up for being too stupid to live (literally: no living thing could survive if it needed so many examples) by going through each example faster than any living thing ever could — by as many orders of magnitude as there are between jogging and continental drift.
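To put a rough number on that gap, here's a back-of-the-envelope sketch; the 3 m/s jogging pace and 3 cm/year drift rate are my assumed figures, not anything measured here:

```python
import math

# Rough scale of the jogging-vs-continental-drift speed gap.
# Assumed figures: jogging ~3 m/s, continental drift ~3 cm/year.
jogging_m_per_s = 3.0
drift_m_per_s = 0.03 / (365.25 * 24 * 3600)  # ~9.5e-10 m/s

ratio = jogging_m_per_s / drift_m_per_s
print(f"speed ratio: {ratio:.1e}")                      # ~3.2e9
print(f"orders of magnitude: {math.log10(ratio):.1f}")  # ~9.5
```

So "as many orders of magnitude as there are between jogging and continental drift" works out to roughly nine or ten.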
If you're a shop that churns through juniors, LLMs may match that. If you retain people for more than a year, you rapidly see the difference, both in individuals and between teams that develop an LLM addiction and those that use it to turbocharge their innate advantages.
I've had the unfortunate experience of working with people who had far more than a year of experience yet were still worse than last year's LLMs, and who didn't even realise they were bad at what they did.
Data-efficiency matters, but compute-efficiency matters too.
LLMs have a reasonable learning rate at inference time (in-context learning is powerful), but a very poor learning rate in pretraining. One thing that softens this is that we have an awful lot of cheap data to pretrain those LLMs with.
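For a sense of scale, a sketch under loose assumptions: I'm assuming a human encounters on the order of hundreds of millions of words by adulthood, and that a frontier model pretrains on ~15T tokens (a figure reported for some recent open models):

```python
# Rough scale of the data-efficiency gap; both figures are assumptions.
human_words_by_adulthood = 5e8   # order of hundreds of millions of words
llm_pretraining_tokens = 1.5e13  # ~15T tokens, reported for some recent models

gap = llm_pretraining_tokens / human_words_by_adulthood
print(f"pretraining sees ~{gap:.0e}x more text than a human lifetime")  # ~3e+04
```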
We don't know how much compute the human brain uses to do what it does. And what if we could pretrain with the same data-efficiency as humans, but at the cost of 10,000× the compute?
It would be impossible to justify doing that for all but the most expensive, hard-to-come-by, gold-plated datasets - the ones actually worth squeezing every last drop of performance out of.
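Here's a sketch of that break-even; every dollar figure below is an illustrative assumption, not a measured cost. The hypothetical 10,000× method only pays off where the cost of acquiring a token of data exceeds the inflated compute cost of learning from it:

```python
# When would human-level data-efficiency at 10,000x compute pay off?
# All dollar figures are illustrative assumptions.
compute_cost_per_token = 1e-5   # assumed $ of compute per ordinary pretraining token
efficiency_multiplier = 10_000  # the hypothetical compute overhead from above
efficient_cost = compute_cost_per_token * efficiency_multiplier  # $0.10/token

corpora = {
    "scraped web text": 1e-7,            # assumed $/token to acquire
    "licensed corpus": 1e-4,
    "expert-annotated gold data": 1.0,
}
for name, acquisition_cost in corpora.items():
    verdict = "worth it" if acquisition_cost > efficient_cost else "not worth it"
    print(f"{name}: ${acquisition_cost:g}/token -> {verdict}")
```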
Energy is even weirder. Global electricity supply is about 3 TW for 8 billion people, roughly 375 W/person, versus the 100-124 W/person of our metabolism. Given how much cheaper electricity is than food, AI can be far worse in joules for the same outcome while still being economically attractive enough to claim all that electricity.
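The per-person arithmetic, plus an assumed price comparison (the $0.10/kWh and $10/day-of-food figures are my assumptions):

```python
# Per-person power arithmetic from the paragraph above.
global_electricity_w = 3e12  # ~3 TW average global electricity supply
population = 8e9
print(f"{global_electricity_w / population:.0f} W/person of electricity")  # 375 W

# Price per joule; both prices are assumed, not from the text.
electricity_usd_per_j = 0.10 / 3.6e6  # $0.10/kWh -> ~2.8e-8 $/J
food_usd_per_j = 10 / (2000 * 4184)   # $10/day for 2000 kcal -> ~1.2e-6 $/J
print(f"electricity is ~{food_usd_per_j / electricity_usd_per_j:.0f}x "
      f"cheaper per joule than food")  # ~43x
```

Under those assumed prices, an AI could burn tens of times more joules than a human for the same output and still come out cheaper on energy alone.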