> They are the most impressive software achievements in decades and anyone who has a "meh" reaction will absolutely end up looking silly.
At the risk of looking silly, I declare "meh" once more, just as I "meh"-ed when GPT-3 came out.
The so-called "AI" is not fundamentally different from the AI of the '80s; we just have much better hardware now. The main problem behind the past AI winters remains: the existing algorithms rely on statistical methods, which can be quite inexact. Imagine a nuclear plant controlled by an AI, or an airplane flown by a neural network. These systems completely lack reasoning capability, so you can't trust them to adapt to unpredictable situations.
Incremental improvements are easy to dismiss, but it can be hard to tell when some critical mass of utility is reached. The first cars were electric (and steam-powered), but it took 100 years for them to supplant ICE (steam is next /s). The components for drone technology aren't new, but control systems and batteries needed time to improve; VR has incrementally improved since the '80s; even the internet was around for 20 years before incremental improvement brought us the web...
It sure looks to me like this new development is hailed as a leap, rather than incremental improvement. Although I'm judging just by reading the comments in the HN bubble.
Either way, I think the current "AI" is fundamentally limited by its statistical approach to problem solving. Without any reasoning capability, no amount of incremental improvement will change the fact that neural networks are simply making guesses based on existing data sets. Nothing magical or revolutionary; it's the same thing we've known for decades.