Doctors Got Worse at Spotting Cancer After Relying on AI, Study Finds

Artificial intelligence tools have been shown to help doctors detect precancerous growths in the colon, but don't even think about taking those tools away once they've been introduced. A new study published this week in The Lancet found that doctors who are given AI tools to help spot potential signs of cancer in patients get worse at making those same observations when they have to go back to working without AI's help.
The study looked at four endoscopy centers in Poland, comparing their rates of successful colon growth detection for the three months before AI tools were introduced and the three months after. Once the AI was in place, colonoscopies were randomly assigned to be performed either with or without AI assistance. The researchers found that doctors performing colonoscopies without AI, after having had its help available, saw their detection rates drop, producing results about 20% worse than before the AI was introduced.
Making the results all the more troubling is the fact that the 19 doctors who participated in the study were all highly experienced, having performed more than 2,000 colonoscopies each. If even these doctors can see their own skills erode through reliance on AI tools, the results for less experienced doctors could be even worse.
There's no question that AI tools can be helpful in medical settings. Plenty of studies suggest AI can facilitate everything from detecting cancers to diagnosing diseases based on a patient's medical history. Analyzing information against a wealth of prior examples is pretty much AI's bread and butter (you know, as opposed to generating brain-dead slop content), and there is evidence suggesting that humans can augment their own abilities by using AI tools. Medical studies have shown that doctors who use these tools can produce better outcomes for their patients.
But no one, doctors included, is immune to the risk of switching off their brain and leaning on AI instead of their own skills. Earlier this year, Microsoft published a study finding that knowledge workers who rely on AI stop thinking critically about the work they're doing and grow confident that the AI will be good enough to get it done. MIT researchers similarly found that relying on AI chatbots led to less critical engagement with the material. In the long run, there's a real risk that dependence on AI erodes our ability to solve problems and reason, which isn't ideal given that AI still regularly generates bad information.
The American Medical Association has noted that around two in three doctors have already adopted AI to augment their abilities. Hopefully they're still able to tell when it does something like hallucinate a part of the body that doesn't exist.