> we will never be able to make something smarter than a human brain on purpose. It effectively has to happen either naturally or by pure coincidence.
Why is that? We can build machines that are much better than humans at some tasks (calculation, data crunching). How can you be certain that this is impossible in other disciplines?
That's just a tiny fraction of what a human brain can do. Sure, we can build something better in very narrow domains, but something like recognizing patterns and applying them to solve new problems is way beyond anything we can even conceive of right now.
Ok, but why does that mean we will never be able to do it? Imagine telling people 500 years ago that you would build a machine that could take them to the moon. Maybe AGI is like that, or maybe it really is impossible. But how can people be confident that AGI is something humans can't create?
What we have right now with LLMs is brute-forcing our way toward something 'smarter' than a human. Of course it can happen, but it's not something a human can deliberately 'create'. An LLM as small as 3B parameters has already performed more calculations than humans have done in their entire history.
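For a sense of scale, here's a rough back-of-the-envelope sketch of that claim in Python. It uses the common rule of thumb that a transformer forward pass costs about 2 FLOPs per parameter per token; the token count and the figure for lifetime human hand calculations are made-up illustrative assumptions, not measured data:

```python
# Back-of-the-envelope check of the "more calculations than all of
# human history" claim. All inputs are rough assumptions for
# illustration, not measured figures.

params = 3e9                   # a "3B" parameter model
flops_per_token = 2 * params   # rule of thumb: ~2 FLOPs per parameter per token (forward pass)

# Assumption: the model has processed on the order of a trillion
# tokens over its lifetime (training corpora are in this range).
tokens_processed = 1e12

model_flops = flops_per_token * tokens_processed
print(f"Model:  ~{model_flops:.1e} floating-point operations")  # ~6.0e21

# Assumption (deliberately generous): 100 billion humans ever lived,
# each doing 10 million hand calculations in a lifetime.
human_calcs = 100e9 * 10e6
print(f"Humans: ~{human_calcs:.1e} hand calculations")          # ~1.0e18

print(f"Ratio:  ~{model_flops / human_calcs:.0f}x")             # ~6000x
```

Even with generous numbers on the human side, the model comes out thousands of times ahead, which is the sense in which 'brute force' applies here.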