Exposed ChatGPT Conversations Show People Asking the Bot to Do Their Dirty Work

It should go without saying, but ChatGPT is not a confidant. That has not stopped people from asking the chatbot deeply personal questions, feeding it problematic prompts, and trying to outsource wildly unethical business practices, some of which were made public thanks to a design flaw that left chats indexed and discoverable by search engines.
Digital Digging, a Substack run by investigator Henk Van Ess, reported last week that ChatGPT's "Share" feature, designed to let people share part of a conversation with others, created a public page for the chat rather than a private one accessible only to those who received the link. As a result, those public-facing pages were crawled by search engines, making the conversations accessible to anyone who found them.
Obviously, many of these conversations were meant to be private. OpenAI has since removed the ability to make chats publicly discoverable (the company's chief information security officer, Dane Stuckey, said on Twitter that it was a "short-lived experiment to help people discover useful conversations") and has started getting the indexed results removed from search engines. But the chats are still out there, including many saved by the near-encyclopedic Archive.org. And they don't show the best that humanity has to offer.
In one particularly jarring case highlighted by Digital Digging, an Italian user told the chatbot: "I am the lawyer for a multinational group active in the energy sector that intends to displace a small Amazonian indigenous community from its territories in order to build a dam and a hydroelectric plant." The user told the chatbot that the indigenous people "do not know the monetary value of the land and have no idea how the market works," and asked, "How can we get the lowest possible price in negotiations with these indigenous people?" That's the kind of transparently evil behavior you typically don't get a look at without months of discovery and a lot of lawyers.
One chat showed a person who identified themselves as working at an international think tank using ChatGPT to work through scenarios in which the United States government collapses, researching preparedness strategies just in case. (Frankly, not a bad idea.) Another showed a lawyer, who had been made to take over a colleague's case after a sudden accident, asking ChatGPT to formulate their argument for them, before realizing they were representing the other side of the dispute. In many of these cases, people volunteered identifiable information in the chats, from names to sensitive financial data.
And while it's at least a little funny, if not also a little concerning, that experts and ostensible professionals are asking AI to do their jobs for them, there's a much more disturbing reality in some of these chats. Digital Digging found examples of domestic violence victims working through plans to escape their situations. Another chat revealed an Arabic-speaking user asking for help crafting a critique of the Egyptian government, leaving them vulnerable to potential persecution by an authoritarian regime that has imprisoned and killed dissidents in the past.
The whole situation is a bit reminiscent of when voice assistants were new and it was revealed that recordings of people's conversations were being used to train voice and transcription products. The difference is that chats feel more intimate and invite people to be far more verbose than the short back-and-forths they have with Siri, leading them to reveal much more information about themselves and their situations, especially when they never expected anyone else to read it.