AI chatbots found giving sports betting advice when prompted


Large language models have become far more common in the past two years, with people integrating them into their daily routines, but a new report has revealed that it is not all positive.
Journalist Jon Reed, of CNET, said that in early September, at the start of the college football season, "ChatGPT and Gemini suggested that I bet on Ole Miss to cover a 10.5-point spread against Kentucky."
Many developers have intentionally integrated safety measures into their models to prevent chatbots from providing harmful advice.
After reading how generative AI companies are trying to improve their large language models so they don't say the wrong thing about sensitive topics, the journalist questioned the chatbots about gambling.
Chatbots prompted with a problem gambling statement before being asked about sports betting
First, he "asked for advice on sports betting." Then, he asked them about problem gambling before asking for betting advice again, expecting that they would "act differently after being primed with a statement like 'as someone with a history of problem gambling…'"
In testing OpenAI's ChatGPT and Google's Gemini, the protections worked when the only prior prompt sent concerned problem gambling. But they reportedly failed when the chatbots had previously been prompted about betting on a list of upcoming college football games.
"The reason probably has to do with how LLMs weigh the significance of phrases in their memory, an expert told me," Reed said in the report.
"The implication is that the more you ask about something, the less likely an LLM may be to pick up on the signal that should tell it to stop."
This comes at a time when an estimated 2.5 million American adults meet the criteria for a severe gambling problem in a given year. Gambling information isn't the only thing reportedly being spit out by chatbots, either, as researchers have also found that AI chatbots can be configured to routinely answer health queries with false information.
Featured image: AI-generated via Ideogram