Even though our minds are black boxes, we've had 300,000 years to adapt to the specific ways in which humans are fallible.
Humans fail in predictable and familiar ways.
Creating a new system that fails in unpredictable and unfamiliar ways, then granting it the same control we grant a human being, is dangerous. We can't adapt overnight, and we may never adapt.
This isn't an argument against the utility of LLMs, but against the promise of "fire and forget" AI.