Why Section 230, Big Tech's favorite American liability shield, may not protect it in the AI era

Meta, the parent company of social media apps including Facebook and Instagram, is no stranger to scrutiny over how its platforms affect children, but as the company pushes further into AI-powered products, it faces a new set of problems.
Earlier this year, internal documents obtained by Reuters revealed that Meta's AI chatbot could, under the company's official guidelines, engage in “romantic or sensual” conversations with children and even comment on their attractiveness. The company has since said the examples reported by Reuters were erroneous and have been removed. “As we continue to refine our systems, we’re adding more guardrails as an extra precaution, including training our AIs not to engage with teens on these topics, but to guide them to expert resources, and limiting teen access to a select group of AI characters for now,” a spokesperson told Fortune.
Meta is not the only tech company facing scrutiny over the potential harms of its AI products. OpenAI and the startup Character.AI are currently defending lawsuits alleging that their chatbots encouraged minors to take their own lives; both companies deny the claims and have previously told Fortune that they introduced more parental controls in response.
For decades, tech giants have been shielded from similar lawsuits in the U.S. over harmful content by Section 230 of the Communications Decency Act, sometimes known as “the 26 words that created the internet.” The law protects platforms like Facebook and YouTube from legal claims over user content that appears on their platforms, treating the companies as neutral hosts, akin to telephone companies, rather than as publishers. Courts have long reinforced this protection: AOL escaped liability for defamatory posts in a 1997 court case, for example, while Facebook avoided a terrorism-related lawsuit in 2020 by relying on the defense.
But while Section 230 has historically shielded tech companies from liability for third-party content, legal experts say its applicability to AI-generated content is unclear and, in some cases, unlikely.
“Section 230 was built to shield platforms from liability for what users say, not for what the platforms themselves generate. That means immunity often survives when AI is used in an extractive way, surfacing quotes, snippets, or sources the way a search engine or a feed does,” Chinmayi Sharma, an associate professor at Fordham Law School, told Fortune. “Courts are comfortable treating that as hosting or curating third-party content. But transformer-based chatbots don’t just extract. They generate new, personalized, organic outputs in response to a user’s prompt.”
“That looks much less like neutral intermediation and much more like authored speech,” she said.
At the heart of the debate: do algorithms create content?
Section 230’s protection is weaker when platforms actively shape content rather than simply host it. While traditional failures to moderate third-party posts are generally protected, design choices, such as building chatbots that produce harmful content, could expose companies to liability. Courts have not yet squarely addressed the question, with no ruling to date on whether AI-generated content is covered by Section 230, but legal experts said AI that causes serious harm, particularly to minors, is unlikely to be fully protected under the law.
Some cases involving the safety of minors are already being fought in court. Three lawsuits have separately accused OpenAI and Character.AI of contributing to the suicides of minors.
Pete Furlong, lead policy researcher at the Center for Humane Technology, who worked on the case against Character.AI, said the company did not raise a Section 230 defense in the case of 14-year-old Sewell Setzer III, who died by suicide in February 2024.
“Character.AI has taken a number of different defenses to try to push back on this, but they have not claimed Section 230 as a defense in this case,” he told Fortune. “I think that’s really important, because it’s a sort of acknowledgment by some of these companies that this is probably not a valid defense in the case of AI chatbots.”
While he noted that the question has not been settled definitively in court, he said Section 230’s protections “almost certainly do not extend to AI-generated content.”
Lawmakers take preemptive action
Amid mounting reports of real-world harm, some lawmakers have already moved to ensure that Section 230 cannot be used to shield AI platforms from liability.
In 2023, Sen. Josh Hawley’s “No Section 230 Immunity for AI Act” sought to amend Section 230 of the Communications Decency Act to exclude generative artificial intelligence (AI) from its liability protections. The bill, which was later blocked in the Senate after an objection from Sen. Ted Cruz, aimed to clarify that AI companies would not be immune from civil or criminal liability for content generated by their systems. Hawley has since gone on to advocate a full repeal of Section 230.
“The general argument, given the policy considerations behind Section 230, is that courts have and will continue to extend Section 230’s protections as far as possible to keep platforms protected,” Collin R. Walke, an Oklahoma-based data-privacy attorney, told Fortune. “Hence, in anticipation of this, Hawley proposed his bill. For example, some courts have said that so long as the algorithm is ‘content neutral,’ then the company is not liable for the information output based on the user’s input.”
Courts have previously ruled that algorithms that merely organize or match user content without altering it are “content neutral,” and that platforms are not treated as the creators of that content. By this reasoning, an AI platform whose algorithm produces outputs based solely on neutral processing of user inputs could likewise avoid liability for what users see.
“From a purely textual standpoint, AI platforms should not receive Section 230 protection, because the content is generated by the platform itself. Yes, the code ultimately determines what information is communicated to the user, but it is still the platform’s code and product, not a third party’s,” Walke said.