AI summaries can downplay medical issues for female patients, UK research finds

The latest example of bias permeating artificial intelligence comes from the medical field. A new study examined real case notes from 617 adult social care workers in the UK and found that when large language models summarized the notes, they were more likely to omit language such as “disabled,” “unable” or “complex” when the patient was tagged as female, an omission that could lead to women receiving insufficient or inaccurate medical care.

The research, led by the London School of Economics and Political Science, ran the same case notes through two LLMs, Meta’s Llama 3 and Google’s Gemma, swapping the patient’s gender…

→ Continue reading at Engadget
