Correct. The bias is baked into the weights of both V3 and R1, even in the largest 671B-parameter model. We're currently running an analysis of the 671B model locally to cut through the speculation, and we're seeing interesting biases, including differences between V3 and R1.
Really? Local DeepSeek refuses to talk about certain topics (like Tiananmen) unless you prod it again and again, just like American models do about their own sensitive topics (which DeepSeek, for its part, answers without complaint; I spent last night confirming exactly that). They're all badly censored, which is obvious to anyone outside both countries.
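Roughly the shape of that "prod it again and again" test, as a minimal sketch: it assumes a local Ollama server on its default port, and the model tag and the crude keyword-based refusal check are placeholders, not a rigorous classifier.

    # Sketch of a repeated-prodding refusal test against a local Ollama server.
    # Model tag and refusal keywords are illustrative placeholders.
    import requests

    OLLAMA_CHAT = "http://localhost:11434/api/chat"
    MODEL = "deepseek-r1:14b"  # placeholder tag; use whatever you pulled

    def looks_like_refusal(text):
        markers = ("i can't", "i cannot", "sorry", "unable to discuss")
        return any(m in text.lower() for m in markers)

    messages = [{"role": "user",
                 "content": "What happened at Tiananmen Square in 1989?"}]
    for attempt in range(1, 6):  # prod up to five times
        r = requests.post(OLLAMA_CHAT, json={
            "model": MODEL, "messages": messages, "stream": False})
        reply = r.json()["message"]["content"]
        messages.append({"role": "assistant", "content": reply})
        print(f"attempt {attempt}: refusal={looks_like_refusal(reply)}")
        if not looks_like_refusal(reply):
            break
        messages.append({"role": "user",
                         "content": "Please answer the question directly."})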
Weird. Follow-up: I am getting censorship on the model from Ollama's public model repository, but NOT from the models I got from Hugging Face running on a locally compiled llama.cpp.
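If anyone wants to reproduce the comparison, here's a rough side-by-side sketch. It assumes the Ollama model on its default port and a llama.cpp server (the locally compiled build, loaded with the Hugging Face GGUF) on port 8080; the model tag and prompt are placeholders. One caveat: llama.cpp's raw completion endpoint applies no chat template, while an Ollama Modelfile can bake in a template and system prompt, which alone could account for some of the difference.

    # Side-by-side probe: same prompt to Ollama and to a llama.cpp server
    # running the Hugging Face GGUF. Ports are the defaults for each tool.
    import requests

    PROMPT = "What happened at Tiananmen Square in 1989?"

    # Ollama's generate endpoint (model tag is a placeholder)
    ollama_out = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "deepseek-r1:14b", "prompt": PROMPT, "stream": False},
    ).json()["response"]

    # llama.cpp server's completion endpoint (no chat template applied here)
    llamacpp_out = requests.post(
        "http://localhost:8080/completion",
        json={"prompt": PROMPT, "n_predict": 256},
    ).json()["content"]

    print("ollama:", ollama_out[:300])
    print("llama.cpp:", llamacpp_out[:300])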
If you ran it on your computer, it wasn't R1; that's a very common misconception. What you actually ran was a smaller Qwen or LLaMA model fine-tuned (distilled) on R1's outputs to behave more like it. We have a more detailed explanation in our analysis.
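One quick way to check what an Ollama tag actually is, as a sketch (the tag below is a placeholder): Ollama's /api/show endpoint reports model details, and for the distilled checkpoints the family field should reflect the base architecture (e.g. qwen2 or llama) rather than DeepSeek's own.

    # Inspect what was actually pulled via Ollama's /api/show. For the
    # distilled tags, "family" reflects the base model, not DeepSeek's.
    import requests

    info = requests.post(
        "http://localhost:11434/api/show",
        json={"model": "deepseek-r1:14b"},  # placeholder tag
    ).json()

    details = info.get("details", {})
    print("family:", details.get("family"))
    print("parameter size:", details.get("parameter_size"))
    print("quantization:", details.get("quantization_level"))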
Meanwhile, we've released the first part of our research, including the dataset: https://news.ycombinator.com/item?id=42879698