
Just saying "no" is unclear. LLMs are still very sensitive to prompts. I would recommend being more precise and assuming less as a general rule. Of course you also don't want to be too precise, especially about "how" to do something, which tends to back the LLM into a corner causing bad behavior. Focus on communicating intent clearly in my experience.


> Just saying "no" is unclear.

No.



