We've had 300,000 years to adapt to the specific ways in which humans are fallible, even if our minds are black boxes.

Humans fail in predictable and familiar ways.

Creating a new system that fails in unpredictable and unfamiliar ways and affording it the same control as a human being is dangerous. We can't adapt overnight and we may never adapt.

This isn't an argument against the utility of LLMs, but against the promise of "fire and forget" AI.



Agreed that, given the parallels I drew to humans, there shouldn't be automatic or even rapid reliance.

My point was more that fallibility isn't the inherent show-stopper the author makes it out to be.



