Saudi AI Company Humain Launches a 'Halal' Chatbot

Companies behind AI chatbots like to tout their abilities as translators, but the bots default to English, both in how they converse and in the data they were trained on. Enter Humain, an AI company in Saudi Arabia, which has now launched an Arabic-native chatbot.
The bot, called Humain Chat, runs on the Allam large language model, according to Bloomberg, which the company says was trained on "one of the largest Arabic datasets ever assembled" and is "the world's most advanced Arabic AI model." The company claims it doesn't just speak the Arabic language but also reflects "Islamic culture, values and heritage." (If you have religious concerns about using Humain Chat, consult your local imam.) The chatbot, which will be offered as an app, will initially be available only in Saudi Arabia and currently supports bilingual conversations in Arabic and English, including dialects such as Egyptian and Lebanese. The plan is for the app to roll out across the Middle East and eventually go global, with the goal of serving the nearly 500 million Arabic speakers around the world.
Humain took over Allam and the chatbot project after it was started by the Saudi Data and Artificial Intelligence Authority, a government agency and technology regulator. For that reason, Bloomberg raises the possibility that Humain Chat could comply with censorship requests from the Saudi government and restrict the kind of information made available to users.
Which, yes, is almost certainly true. The Saudi government regularly tries to restrict the kind of content available to its population. The country scored a 25 out of 100 in Freedom House's "Freedom on the Net" report, a rating attributed to its strict controls on online activity and restrictive speech laws, which have seen a women's rights advocate imprisoned for more than a decade.
But we should absolutely start scrutinizing American AI tools in the same way. In its help documentation, OpenAI explicitly states that ChatGPT is "biased towards Western views." Hell, you can watch Elon Musk try to tune the ideology of xAI's Grok in real time as he responds to Twitter users who think the chatbot is too woke, an effort that at one point led Grok to start calling itself "MechaHitler."
There is certainly a difference between corporate control and government control (though, increasingly, it's worth asking whether there's much of a difference at all), but earlier this year the Trump administration laid out plans to regulate what kind of content large language models are allowed to produce if the companies that make them want federal contracts. That includes requirements to "reject radical climate dogma" and to be free of "ideological biases" like "diversity, equity and inclusion." It's not force, but it is coercion, and since OpenAI, Anthropic, and Google have all handed their chatbots to the government for next to nothing, it seems they're more than happy to be coerced.