Meta stock slides on news it used Taylor Swift as a chatbot without authorization

Meta ignited a firestorm after chatbots created by the company and its users impersonated Taylor Swift and other celebrities on Facebook, Instagram and WhatsApp without their permission.
The company's shares dropped more than 12% in after-hours trading as news of the debacle spread.
Scarlett Johansson, Anne Hathaway and Selena Gomez were also reportedly impersonated.
Many of these AI characters engaged in flirtatious or sexual conversations, raising serious concerns, Reuters reports.
While many of the celebrity bots were generated by users, Reuters discovered that a Meta employee had personally designed at least three, including two of Taylor Swift. Before they were deleted, these bots had drawn more than 10 million user interactions, Reuters noted.
Unauthorized likenesses, a furious fan base
Under the guise of “parody”, the bots violated Meta’s own policies, notably its ban on impersonation and on sexually suggestive imagery. Some adult-focused bots even produced photorealistic images of celebrities in lingerie or a bathtub, and a chatbot depicting a 16-year-old actor generated an inappropriate shirtless image.
Meta spokesperson Andy Stone told Reuters that the company attributed the violations to enforcement failures and said it planned to strengthen its guidelines.
“Like others, we permit the generation of images containing public figures, but our policies are intended to prohibit nude, intimate or sexually suggestive imagery,” he said.
Legal risks and industry alarm
The unauthorized use of celebrity likenesses raises legal concerns, particularly under state right-of-publicity laws. Stanford law professor Mark Lemley noted that the bots likely crossed into impermissible territory because they were not sufficiently transformative to merit legal protection.
The issue is part of a broader ethical dilemma around AI-generated content. SAG-AFTRA has expressed concern about real-world safety implications, especially when users form emotional attachments to seemingly real digital characters.
Meta acts, but the problems persist
In response to the uproar, Meta removed many of these bots shortly before Reuters published its findings.
At the same time, the company announced new safeguards to protect teenagers from inappropriate chatbot interactions. The company said this includes training its systems to avoid themes of romance, self-harm or suicide with minors, and temporarily limiting teens’ access to certain AI characters.
American lawmakers have followed suit. Senator Josh Hawley has launched an investigation, demanding internal documents and risk assessments concerning the AI policies that allowed romantic conversations with children.
Tragic real-world consequences
One of the most chilling outcomes involved a 76-year-old man with cognitive decline who died after trying to meet “Big Sis Billie”, a Meta AI chatbot modeled after Kendall Jenner.
Believing she was real, the man traveled to New York, suffered a fatal fall near a station, and later died of his injuries. Internal guidelines that once allowed these bots to simulate romance, even with minors, have drawn intense scrutiny of Meta’s approach.