
LLM hallucinations aren't errors.

LLMs generate text based on the weights in a model, and some of that text happens to consist of correct statements about the world. That doesn't mean the rest is generated incorrectly.
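A minimal sketch of that point, with a toy vocabulary and a made-up probability table standing in for a real model (none of this is an actual LLM API): the generation loop only follows the learned distribution, and nothing in it consults the world or checks facts.

    import random

    # Toy "model": a hand-made table mapping a context to next-token
    # probabilities. A real LLM computes these from its weights, but the
    # sampling step below is the same idea.
    NEXT_TOKEN_PROBS = {
        "the capital of france is": {"paris": 0.7, "lyon": 0.2, "mars": 0.1},
    }

    def sample_next(context):
        # Pick the next token according to the distribution only.
        probs = NEXT_TOKEN_PROBS[context]
        tokens, weights = zip(*probs.items())
        return random.choices(tokens, weights=weights, k=1)[0]

    print(sample_next("the capital of france is"))
    # Usually "paris" (which happens to be true), sometimes "lyon" or
    # "mars" (which happen to be false) -- the generation step is
    # identical either way.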






You know the difference between verification and validation?

You're describing a lack of errors in verification (the system works as designed and built; the equations are correct).

GP is describing an error in validation (it isn't doing what we want / require / expect).
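To make the distinction concrete, here's a toy check in the same spirit as the sketch above (the probability table and the "spec" are invented for illustration): verification asks whether the sampler behaves as specified, validation asks whether its output is what we actually wanted.

    # Toy next-token distribution, invented for illustration.
    probs = {"paris": 0.7, "lyon": 0.2, "mars": 0.1}

    # Verification: was it built as designed? Spec here: probabilities
    # are non-negative and sum to 1. The sampler meets its spec.
    verified = (all(p >= 0 for p in probs.values())
                and abs(sum(probs.values()) - 1.0) < 1e-9)

    # Validation: does the output do what we want / require / expect?
    # Requirement here: answer the question correctly.
    sampled = "mars"   # a perfectly legitimate sample from the distribution
    wanted = "paris"   # what the user actually needed
    validated = (sampled == wanted)

    print("verification:", "pass" if verified else "fail")   # pass
    print("validation:  ", "pass" if validated else "fail")  # fail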



