October 6, 2025

DeepSeek model "almost 100% successful" at avoiding controversial subjects

Meet the new DeepSeek, now with more government compliance. According to a Reuters report, the wildly popular Chinese large language model has a new version called DeepSeek-R1-Safe, specially designed to avoid politically controversial subjects. Developed by Chinese tech giant Huawei, the new model is reportedly "almost 100% successful" at preventing discussion of politically sensitive questions.

According to the report, Huawei and researchers from Zhejiang University (interestingly, DeepSeek itself was not involved in the project) took the open-source DeepSeek R1 model and trained it using 1,000 Huawei Ascend AI chips to give the model less of a stomach for controversial conversations. The new version, which Huawei says lost only about 1% of the original model's performance speed and capability, is better equipped to dodge "toxic and harmful speech, politically sensitive content, and incitement to illegal activities."

Safer though the model may be, it's still not foolproof. While the company claims a nearly 100% success rate in basic use, it also found that the model's ability to dodge dubious conversations drops to just 40% when users disguise their requests as challenges or role-playing scenarios. These AI models sure do love a hypothetical scenario that lets them slip past their guardrails.

DeepSeek-R1-Safe was designed to comply with the requirements of Chinese regulators, per Reuters, which mandate that all domestic AI models released to the public reflect the country's values and comply with speech restrictions. Chinese company Baidu's chatbot, for example, reportedly won't answer questions about China's domestic politics or the ruling Chinese Communist Party.

China, of course, is not the only country trying to make sure that AI deployed inside its borders doesn't rock the boat too much. Earlier this year, Saudi tech company Humain launched an Arabic-native chatbot that speaks the language fluently and was trained to reflect "Islamic culture, values and heritage." American-made models aren't immune either: OpenAI has explicitly stated that ChatGPT is "biased towards Western views."

And then there's America under the Trump administration. Earlier this year, Trump announced his America's AI Action Plan, which includes requirements that any AI model that interacts with government agencies be neutral and "unbiased." What exactly does that mean? Well, according to an executive order signed by Trump, models that secure government contracts must reject things like "radical climate dogma" and "diversity, equity, and inclusion," along with concepts such as "critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism." So, you know, before lobbing "Dear Leader" cracks at China, it's probably best to take a look in the mirror.

