
The linked study is of 29 “people” (assuming they are real).

How do we know these examples aren’t just the 0.1% of the population that is, for all intents and purposes, “out there”?

So much of “news” is just finding these corner cases that evoke emotion, but ultimately have no impact.



The Stanford Prison Experiment only had 24 participants and implementation problems that should have concerned anyone with a pulse. But it’s been taught for decades.

A lot of psych research uses small samples. It’s a problem, but funding is limited, so it’s a start. Other researchers can take this and build on it.

Anecdotally, watching people melt down over the end of ChatGPT 4o suggests this is a bigger problem than 0.1%. And business-wise, it would be odd if OpenAI kept an entire model available to serve that small a population.


The Stanford Prison Experiment is unverifiable. Another example of one of these emotion-evoking stories.

https://pubmed.ncbi.nlm.nih.gov/31380664/

See the “Criticism of methodology and its implications” section:

https://en.wikipedia.org/wiki/Stanford_prison_experiment


I think it IS 0.1% (or less, hopefully) of the population.

But it’s hard to study users having these relationships without studying the users who have these relationships I reckon.


The outcry when 4o was discontinued was such that OpenAI kept it available for paying subscribers. There are at least enough people attached to certain AI voices that it warrants a tech startup spending the resources to keep an old model around. That’s probably not an insignificant population.



