Is AI-generated “workslop” to blame for the lack of productivity gains?

Hello and welcome to Eye on AI … in this edition: the cost of AI “workslop” … Nvidia’s investment in OpenAI … and Google DeepMind looks at a new AI risk.
Hi, it’s Béatrice Nolan here, filling in for Jeremy Kahn, who is off today. I’ve spent a lot of time recently thinking about the promise of AI-fueled productivity in the workplace, especially after that MIT report found that the majority of corporate AI pilots were failing to live up to that promise.
Over the past year, the number of companies running entire workflows with AI has nearly doubled, while overall workplace use of AI has also doubled since 2023. Yet despite the dramatic adoption of the technology, a recent MIT Media Lab study found that 95% of organizations adopting AI saw no clear return on those investments.
Some investors, already nervous about an “AI bubble,” chose to read the report as an indictment of AI as a whole. But, as Jeremy pointed out at the time, the report really blamed the lack of productivity gains on a “learning gap” – people and organizations not understanding how to use AI tools properly – rather than on a problem with the performance of the technology itself.
New research suggests an alternative explanation: that the presence of AI in the workplace can actually hamper productivity. According to a recent, ongoing survey from BetterUp Labs in collaboration with Stanford University’s Social Media Lab, some employees are using AI to create low-effort work that then takes colleagues time to clean up.
Workslop, a term the researchers coined as a riff on the AI-generated “slop” you might find clogging your social media feeds, is defined as “AI-generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.”
Workslop itself isn’t new. There are already reports that it is creating a niche economy of its own, with some freelancers saying they are hired – often at premium rates – to clean up the sloppy copy, clunky code, and awkward images that AI leaves behind.
What the new research shows is how pervasive, and how expensive, workslop has become inside organizations.
The cost of AI-generated workslop
Of 1,150 full-time U.S. employees surveyed, 40% said they had encountered workslop in the last month. Slightly less than half of this low-quality work was exchanged between colleagues at the same level. Another 18% of respondents said they received it from direct reports, while 16% said it came from managers or people higher up the corporate ladder.
Far from speeding up workflows, this AI-generated slop created more work, employees said. According to the research, employees spent just under two hours dealing with each instance of workslop. Based on the time spent and self-reported salaries, the researchers calculated that workslop could cost individual employees $186 per month. For an organization of 10,000 workers, that could mean more than $9 million a year in lost productivity.
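For readers who want to check the arithmetic, here is a minimal back-of-the-envelope sketch in Python. The figures come from the survey as reported above; the assumption that the $186 monthly cost applies only to the roughly 40% of employees who encountered workslop is ours, made to reconcile the per-employee and organization-wide numbers.

```python
# Back-of-the-envelope reconstruction of the workslop cost estimate.
# Figures are from the BetterUp Labs/Stanford survey as reported above;
# applying the $186/month cost only to the ~40% of employees who
# encountered workslop is an assumption, not stated in the study.

employees = 10_000        # organization size used in the estimate
encounter_rate = 0.40     # share who reported encountering workslop
monthly_cost_usd = 186    # estimated cost per affected employee, per month

annual_loss = employees * encounter_rate * monthly_cost_usd * 12
print(f"Estimated annual productivity loss: ${annual_loss:,.0f}")
# Prints: Estimated annual productivity loss: $8,928,000 (just under $9M)
```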
The incidents carry morale costs, too, with employees saying they felt annoyed, confused, and offended when they received low-quality work. According to the research, half of respondents viewed colleagues who produced workslop as less creative, capable, and reliable. They were also seen as less trustworthy and less intelligent.
Overall, employees who received low-quality work were less inclined to collaborate with those colleagues.
Why workslop happens
Some level of AI slop is a natural byproduct of current AI models. LLMs are designed to generate content quickly by predicting the most likely next word or pattern, not to guarantee originality or meaningful insight. The models also hallucinate, which can undermine the accuracy of AI-generated work.
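For the curious, here is a minimal sketch of what “predicting the most likely next word” looks like in practice: greedy next-token decoding with the open-source GPT-2 model via Hugging Face’s transformers library. The model choice and prompt are illustrative assumptions, not anything the researchers used.

```python
# A minimal sketch of greedy next-token prediction, the mechanism
# described above. GPT-2 and the prompt are illustrative assumptions;
# requires `pip install torch transformers`.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The quarterly report shows"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# At each step the model scores every token in its vocabulary and emits
# the single most likely one: fluent by construction, but with no
# built-in guarantee of originality or factual substance.
with torch.no_grad():
    for _ in range(12):
        logits = model(input_ids).logits      # (1, seq_len, vocab_size)
        next_id = logits[0, -1].argmax()      # most probable next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```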
But the new research points to a lack of understanding – or care – on the part of employees when it comes to using AI tools. Mandates handed down by managers often focus on experimentation without providing clear guidance. And while experimentation is part of adopting any new technology, encouraging AI use without direction can pressure employees to produce output even when it’s inappropriate.
So how can companies stem the tide of workslop? The researchers’ suggestions include clearer guidelines for when and how AI should be used, encouraging purposeful use of the technology rather than shortcut-taking, and promoting collaboration and greater transparency among employees about how AI is being used. Without these measures, companies rushing to adopt AI risk creating more friction than efficiency.
With that, here’s more AI news.
Béatrice Nolan
bea.nolan@fortune.com
Fortune on AI
“Every copilot pilot gets stuck in pilot” – unless companies balance data and innovation, experts say – Sharon Goldman
Exclusive: Former Google DeepMind researchers secure $5 million in seed funding for a new company to bring algorithm design to the masses – Jeremy Kahn
Trump’s $100,000 H-1B fee could stifle startups’ access to AI talent and widen Big Tech’s dominance – Béatrice Nolan
How Sarah de Lagarde, who lost two limbs in a train accident, is using AI to champion new accessible technology, including her “kick-ass robotic arm” – Aslesha Mehta
Eye on AI news
Nvidia plans $100 billion investment in OpenAI. Hot on the heels of its $5 billion commitment to former rival Intel, Nvidia is set to invest up to $100 billion in OpenAI. As part of the partnership, Nvidia will supply at least 10 gigawatts of systems, with the first gigawatt expected to come online in 2026. CEO Jensen Huang called it just the start, promising much more compute capacity to come. However, some investors warn of “circularity” in Nvidia’s business strategy, in which the company stokes demand for its AI chips by investing in startups like OpenAI, which then use that funding to buy even more Nvidia hardware. You can read more here.
Experts call for “red lines.” More than 200 experts, including 10 Nobel laureates, AI pioneers from OpenAI, Google DeepMind, and Anthropic, and former world leaders, have called for international “red lines” on AI development by the end of 2026. The signatories warned that current AI trajectories present “unprecedented dangers.” The experts cautioned against risks such as engineered pandemics, mass unemployment, and the loss of human control over AI, urging an enforceable global agreement. The declaration is timed to the United Nations General Assembly. You can read more here.
AI leaders weigh in on new H-1B visa fee. Nvidia CEO Jensen Huang and OpenAI CEO Sam Altman both shared their thoughts on Trump’s new $100,000 H-1B fee after the sudden change set off a wave of panic across Silicon Valley over the weekend. The AI heavyweights signaled their support for the Trump administration’s visa fee in interviews with CNBC. Huang said he was “happy to see President Trump making the moves he makes,” while Altman said financial incentives and streamlining the process “seems good to me.” The move could reshape hiring across the U.S. tech sector, particularly in the already tight AI talent pool, which leans heavily on highly skilled visa holders from India and China. You can read the rest here.
Eye on AI research
Google DeepMind zeroes in on new AI risks. DeepMind published version 3.0 of its Frontier Safety Framework on Monday. The update adds a new critical capability level (CCL) focused on “harmful manipulation,” which the company defines as “AI models with powerful manipulative capabilities that could be misused to systematically and substantially change beliefs and behaviors in identified high stakes contexts over the course of interactions with the model, reasonably resulting in additional expected harm at severe scale.” The company also expanded the framework to address the potential risk of misaligned AI models resisting being shut down by humans. DeepMind also cited new misalignment risks arising from models’ potential for undirected action at higher capability levels. To address these risks, the company says it is running new evaluations, including studies with human participants. You can read more from Axios here.
AI calendar
October 6-10: World AI Week, Amsterdam
October 21-22: TEDAI San Francisco
November 10-13: Web Summit, Lisbon
November 26-27: AI World Congress, London
December 2-7: NeurIPS, San Diego
December 8-9: Fortune Brainstorm AI, San Francisco. Apply to attend.
Brain food
Should AI really be used for therapy? A growing number of people are turning to AI chatbots for mental health support. It’s easy to see why: nearly 1 in 4 adults with mental illness in the U.S. report unmet treatment needs, often due to cost, stigma, or lack of access. However, the practice is drawing increased scrutiny from regulators after several fatal incidents involving people who relied on AI bots during serious mental health struggles. In one case, the mother of a 29-year-old woman who took her own life wrote in The New York Times that her daughter had relied on OpenAI’s ChatGPT for psychological support, but the advice the bot gave was inadequate for someone in her state of advanced depression. Parents of two teenagers have also accused AI chatbots of encouraging their children to end their lives. The American Psychological Association has called the use of generic AI chatbots for mental health support a “dangerous trend” and urged federal regulators to implement safeguards against AI chatbots posing as therapists, but regulators are scrambling to keep pace with the technology. You can read more about how this is affecting young people in particular here.