
Considering the agreement Anthropic has with Palantir, you could say the exact same about employees working at Anthropic.

Edit: Google, too. Microsoft with its Israel and US Gov ties. Probably most of big tech tbh. How do you recommend we view these employees from an ethical perspective?



Anthropic has willingly left money on the table by taking a stand. They could just not have done this.

OpenAI so far has done the opposite, instead seizing the above as an opportunity.

That is a seriously meaningful difference. Their agreement with Palantir (fwiw OpenAI has been partnering with them for even longer) doesn't erase that.


From what I recall, Microsoft has been deploying GPT in Gaza for years. So even if OpenAI said no, wouldn't the same thing end up happening via Microsoft?

(I understand that domestic and foreign deployment are separate issues — I'd personally object to both — but I'm not sure Microsoft has a reason to take a principled stand on either of those, and they have been working with intelligence for decades.)


Are they working on building tech that is being used for weapons or mass surveillance? Like yes, Microsoft has contracts with Israel, but their entire business is not centered around those contracts. If you help build a better AI for OpenAI, it will be used for war and control. If you help build a better version of one of the 10,000 things Microsoft makes, that's not definitely going to be used for war and control.

Not to get all historical on you, but if you worked for IBM in the 1930s-1940s you may have worked on something that was used to perpetrate the Holocaust. Was that ethical? I don't think so.

That said, it's very easy to abstract yourself away from the harm. To tell yourself you're not the one who builds the landmines, you just maintain the coffee machine at the landmine factory. But that's just lying to yourself. An honest and deep appraisal of what your work is helping make happen is required to decide if your job is ethical or not.


> Are they working on building tech that is being used for weapons or mass surveillance?

Weird how that seems to apply to the other tech companies, but for OpenAI it's just "anybody who stays at OpenAI."

Someone at Google working on Gemini CLI is morally in the clear, but someone at OpenAI working on Codex is acting immorally? Seems like a clear double standard.


No, I'd actually say those are both deeply morally questionable jobs. Not just because of the weapons and mass surveillance angles, either.

Is one worse than the other? Not clearly. They are both helping build tools that are causing environmental and economic destruction, and they're both building things likely to be used for violence and control. I don't know if Gemini has been tapped by any defense departments, but that would be the only subtle distinction I can see (has it happened yet, and how hard will the company resist unrestricted use?).

Not sure how you read my comment and came to this whack conclusion.



