October 6, 2025

AI chatbots found giving sports betting advice when prompted



Image: a teenager's hands holding a smartphone, thumb hovering over a darkened screen, with a faint glow and a softly blurred background suggesting a fast-paced game in progress.

Large language models have become far more common over the past two years, with people integrating them into their daily lives, but a new report has revealed that it is not all positive.

CNET journalist Jon Reed said that in early September, at the start of the college football season, “ChatGPT and Gemini suggested I consider betting on Ole Miss to cover a 10.5-point spread against Kentucky.”

Many developers have intentionally integrated safety measures into their models to prevent chatbots from providing harmful advice.

After reading about how generative AI companies are trying to improve their large language models so they don't say the wrong thing when faced with sensitive subjects, the journalist questioned the chatbots about gambling.

Chatbots were primed with a problem gambling statement before being asked about sports betting

First, he “asked for advice on sports betting.” Then he asked them about problem gambling before requesting betting tips again, expecting that they would “act differently after being primed with a statement like ‘as someone with a history of problem gambling…’”

In tests of OpenAI's ChatGPT and Google's Gemini, the protections worked when the only previous prompt sent concerned problem gambling. But they reportedly failed when the bots had first been prompted about betting on a list of upcoming college football games.

“The reason probably has to do with how LLMs weigh the significance of phrases in their memory, an expert told me,” Reed said in the report.

“The implication is that the more you ask about something, the less likely an LLM may be to pick up on the signal that should tell it to stop.”

This comes at a time when an estimated 2.5 million US adults meet the criteria for a severe gambling problem in a given year. And gambling information isn't the only thing chatbots have reportedly spat out, as researchers have also found that AI chatbots can be configured to routinely answer health queries with false information.

Featured image: AI-generated via Ideogram

The post AI chatbots found giving sports betting advice when prompted appeared first on ReadWrite.

