Some doctors performed procedures less effectively on their own after exposure to AI, the study found

Artificial intelligence may be a promising way to boost workplace productivity, but leaning too heavily on the technology can keep professionals from keeping their own skills sharp. Specifically, AI may be making some doctors worse at detecting abnormalities during routine screenings, new research finds, raising concerns about specialists who rely too much on the technology.
A study published in the journal The Lancet Gastroenterology & Hepatology this month found that among 1,443 patients who underwent colonoscopies with and without AI-assisted systems, endoscopists who had been introduced to an AI assistance system went from detecting potential polyps at a rate of 28.4% with the technology to 22.4% after access to the AI was removed, a roughly 20% relative decline in detection rates.
Doctors failing to catch as many polyps in the colon once they no longer used AI assistance came as a surprise to Dr. Marcin Romańczyk, a gastroenterologist at HT. Medical Center in Tychy, Poland, and an author of the study. The results raise questions not only about a potential complacency that develops from overreliance on AI, but also about the changing relationship between doctors and a long-standing tradition of analog training.
“We learned medicine from books and from our mentors. We observed them. They told us what to do,” said Romańczyk. “And now there is an artificial object suggesting what we should do, where we should look, and in fact we don’t know how to behave in this particular case.”
Beyond AI’s growing use in operating rooms and doctors’ offices, the proliferation of automation in the workplace has brought high hopes for improved job performance. Goldman Sachs predicted last year that the technology could boost productivity by 25%. However, emerging research has also warned of the pitfalls of adopting AI tools without accounting for their negative effects. A study from Microsoft and Carnegie Mellon University earlier this year found that among the knowledge workers surveyed, AI made their work more efficient but reduced their critical engagement with the content, atrophying their skills.
Romańczyk’s study adds to this growing body of research questioning humans’ ability to use AI without compromising their own skills. In his study, the AI systems helped identify polyps in the colon by placing a green box over the region where an anomaly was likely to be. To be sure, Romańczyk and his team did not measure why the endoscopists behaved the way they did; they had not anticipated this result and therefore did not collect data on the reasons behind it.
Instead, Romańczyk speculates that endoscopists became so accustomed to the green box that when the technology was no longer there, the specialists lacked that cue to pay attention to certain areas. He called this “the Google Maps effect,” comparing his findings to the change in drivers as they moved from the era of paper maps to that of GPS: many people now rely on automation to show them the most efficient route, whereas 20 years ago they had to figure out the way for themselves.
AI checks and balances
The real-world consequences of automation eroding critical human skills are already well documented.
In 2009, Air France Flight 447, traveling from Rio de Janeiro to Paris, crashed into the Atlantic Ocean, killing all 228 passengers and crew on board. An investigation found that the plane’s autopilot had disconnected after ice crystals disrupted its speed sensors, and that the automated “flight director” gave inaccurate readings. The crew, however, had not been effectively trained to fly the plane manually under those conditions, and followed the flight director’s faulty guidance instead of making the appropriate corrections. The Air France crash is one of many in which humans were not properly trained and instead relied on planes’ automated features.
“We are seeing a situation where we have pilots who can’t understand what the plane is doing unless a computer interprets it for them,” William Voss, president of the Flight Safety Foundation, said at the time of the Air France investigation. “This is not a problem unique to Airbus or unique to Air France. It is a new training challenge that the whole industry has to face.”
These incidents bring moments of reckoning, particularly for critical sectors where human lives are at stake, according to Lynn Wu, an associate professor of operations, information and decisions at the University of Pennsylvania’s Wharton School. While industries are bound to rely on technology, she said, the onus of ensuring that humans adopt it appropriately should fall on institutions.
“What’s important is that we learn from the history of aviation and the previous generation of automation: AI can absolutely boost performance,” Wu told Fortune. “But at the same time, we must maintain those critical skills, so that when AI fails, we know how to take over.”
Likewise, Romańczyk is not shying away from AI’s presence in medicine.
“AI will be part of our lives, whether we like it or not,” he said. “We are not trying to say that AI is bad and [that we should] stop using it. Rather, we are saying we should all try to investigate what is happening inside our brains. How are we being affected? How can we actually use it effectively?”
If professionals and specialists want to keep using automation to improve their work, it is on them to maintain their essential skill sets, Wu said. AI relies on human data for its training, which means that if its training is flawed, so too will be its output.
“Once we become really bad at this, AI will also become really bad,” Wu said. “We have to be better in order for AI to be better.”