October 6, 2025

AI medical tools offer worse treatment for women and under-represented groups

Historically, most clinical trials and scientific studies have focused primarily on white men as subjects, leading to a significant underrepresentation of women and people of color in medical research. You will never guess what happened when all that data was fed into AI models. It turns out, as the Financial Times notes in a recent report, that AI tools used by doctors and health professionals are producing worse health outcomes for the people who have historically been underrepresented and ignored.

The report highlights a recent paper by researchers at the Massachusetts Institute of Technology, which found that large language models, including OpenAI's GPT-4 and Meta's Llama 3, were "more likely to wrongly reduce care" for patients, and that women were more often told than men to self-manage their symptoms at home, ultimately receiving less care in a clinical setting. That's bad, of course, but one could argue that these models are general-purpose tools not designed for use in a medical setting. Unfortunately, a healthcare-focused LLM called Palmyra-Med was also studied and suffered from some of the same biases, according to the report. An analysis of Google's Gemma LLM (not its flagship Gemini) led by the London School of Economics likewise found that the model produced results that minimized women's needs compared to men's.

Previous research has found that AI models also have trouble offering the same levels of compassion to people of color dealing with mental health issues as they do to their white counterparts. A paper published last year in The Lancet found that OpenAI's GPT-4 model would "regularly stereotype certain races, ethnic groups, and sexes," producing diagnoses and recommendations driven more by demographic identifiers than by symptoms or conditions. "Assessments and plans created by the model showed significant association between demographic attributes and recommendations for more expensive procedures as well as differences in patient perception," the paper concluded.

This creates a fairly obvious problem, especially as companies like Google, Meta, and OpenAI all race to get their tools into hospitals and medical facilities. It represents a huge and profitable market, but also one where misinformation carries serious consequences. Earlier this year, Google's healthcare model Med-Gemini made headlines for making up a body part. That kind of error should be easy enough for a healthcare worker to spot. But biases are more subtle and often unconscious. Will a doctor know enough to question whether an AI model is perpetuating a long-standing medical stereotype about a person? No one should have to find out.
