
> Well, you have to define what you mean by "intelligence".

The burden of defining these concepts should be on the people who wield them, not on those who object to them. But if pressed, I would describe them in the context of humans. So here goes...

Human understanding involves a complex web of connections formed in our brains, shaped by our life experiences via our senses, by our genetics and epigenetics, and by other inputs and processes we don't fully understand yet. All of this contributes to forming a semantic web of abstract concepts by which we can say we "understand" the world around us.

Human intelligence is manifested by referencing this semantic web in different ways that are also influenced by our life experiences, genetics, and so on; applying creativity, ingenuity, intuition, memory, and many other processes we don't fully understand yet; and forming thoughts and ideas that we communicate to other humans via speech and language.

Notice that there is a complex system in place before communication finally happens. That is only the last step of the entire process.

None of this is purely theoretical. It has very practical implications for how we manifest and perceive intelligence.

Elsewhere in the thread someone brought up how Ramanujan achieved brilliant things with only a basic education and a few math books. He didn't require the sum of human knowledge to advance it. It happened in ways we can't explain, and that only a few humans are capable of.

This isn't to say that this is the only way understanding and intelligence can exist. But it's the one we're most familiar with.

In stark contrast, the current generation of machines does none of this. The connections they establish aren't based on semantics or abstract concepts. They have no ingenuity or intuition, nor do they accrue experience. What we perceive as creativity depends on a random number generator. What we perceive as intelligence and understanding works by breaking language written by humans down into patterns of data, assigning numbers to specific patterns based on an incredibly large corpus pre-processed by humans, and outputting those patterns by applying statistics and probability.
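To make that concrete, here's a minimal sketch in Python (not any actual model's code; the scores below are made-up placeholders) of what "applying statistics and probability" boils down to: the model assigns scores to possible next tokens, and a random number generator, shaped by a temperature parameter, picks one.

    # Minimal sketch of next-token sampling. The "creativity" knob is
    # literally a random number generator plus a temperature parameter
    # that reshapes the probability distribution.
    import math
    import random

    def sample_next_token(logits, temperature=1.0):
        # Softmax over temperature-scaled scores: higher temperature
        # flattens the distribution, lower temperature sharpens it.
        scaled = [l / temperature for l in logits]
        m = max(scaled)
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        # The randomness: pick a token index weighted by its probability.
        return random.choices(range(len(probs)), weights=probs, k=1)[0]

    # Hypothetical scores for a 4-token vocabulary.
    print(sample_next_token([2.0, 1.0, 0.5, -1.0], temperature=0.7))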

Describing that system as anything close to human understanding and intelligence is dishonest and confusing at best. It's also dangerous, since humans can interpret it as having far greater capability and meaning than it actually does. So using accurate language to describe these systems matters; otherwise words lose all meaning. We could call them "magical thinking machines", or "god" for that matter, and it would have the same effect.

So maybe "MatMul with interspersed nonlinearities"[1] is too literal and technical to be useful, and we need new terminology to describe what these systems do.
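For reference, here's roughly what that phrase describes, as a toy sketch in Python/NumPy. The weights are random placeholders standing in for trained parameters; real models just stack many more of these layers.

    # "MatMul with interspersed nonlinearities", literally: a toy
    # forward pass through two dense layers.
    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.standard_normal((8, 16))  # placeholder weights
    W2 = rng.standard_normal((16, 4))

    def forward(x):
        h = np.maximum(0, x @ W1)  # matrix multiply, then ReLU nonlinearity
        return h @ W2              # final matrix multiply yields output scores

    print(forward(rng.standard_normal(8)))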

> I think we have to revisit the Chinese Room argument by John Searle.

I wasn't familiar with this, thanks for mentioning it. From a cursory read, I do agree with Searle. The current generation of machines don't think. Which isn't to say that they're incapable of thinking, or that we'll never be able to create machines that think, but right now they simply don't.

What the current generation does much better than previous generations is mimic how thoughts are rendered as text. These systems have decisively passed the Turing test, and can fool most humans, via text, into thinking they're human. This is a great advance, but it's not a sign of intelligence. The Turing test was never meant to be a showcase of intelligence; it's simply an Imitation Game.

> Those structures are able to manipulate symbols in ways that are extremely useful and practical for an enormously wide range of applications!

I'm not saying that these systems can't be very useful. In the right hands, absolutely. A probabilistic pattern matcher could even expose novel ideas that humans haven't thought about before. All of this is great. I simply think that using accurate language to describe these systems is very important.

> Have you seen this gem by Richard Feynman from the mid 1980s?

I haven't seen it, thanks for sharing. Feynman is insightful and captivating as usual, but also verbose as usual, so I don't think he answers any of the questions with any clarity.

It's interesting how he describes pattern matching and reinforcement learning back when those ideas were novel and promising, but we didn't have the compute available to implement them.

I agree with the point that machines don't have to mimic the exact processes of human intelligence to showcase intelligence. Planes don't fly like birds, cars don't run like cheetahs, and calculators don't solve problems like humans, yet they're still very useful. The same goes for the current generation of "AI" technology. It can have a wide array of applications that solve real-world problems better than any human could.

The difference between those examples and intelligence is that each of them is a binary state we can easily describe: something either takes off the ground and maintains altitude, or it doesn't; it either moves over the ground, or it doesn't; it either solves arithmetic problems, or it doesn't. How this is done is an implementation detail and not very important. Intelligence, by contrast, is very fuzzy to pin down, as you point out, and we don't have good definitions of it. We have some basic criteria by which we can somewhat judge whether something is intelligent, but they're far from reliable or useful.

So in the same way that it would be unclear to refer to airplanes as "magical gravity-defying machines", even though that is what they look like, we label what they do as "flight", since we have a clear mental model of what that is. Calling them something else could imply wrong ideas about their capabilities, which is far from helpful when discussing them.

And, crucially, the application of actual intelligence is responsible for every advancement throughout human history. The fact that current machines excel only at generating data, and at showing us interesting patterns in data we haven't considered yet, is a sign not only that they're not intelligent, but that this isn't the right path to Artificial General Intelligence.

Hopefully this clarifies my arguments. Thanks for coming to my TED talk :)

[1]: https://news.ycombinator.com/item?id=44484682


