(CC) Jessica Mesman–The problem with artificial intelligence is us

Turns out, it’s not that easy at all to create a bot that diagnoses disease with more nuance and compassion than a human doctor, and the consequences for some of us may be dire. You might reply that not all human doctors are nuanced and compassionate, but this is just my point. As long as AI is trained on human behavior, it will tend to replicate our worst flaws, only more efficiently. What happens when medical racism or sexism in the training data means that even our most sophisticated bots share human doctors’ tendency to misdiagnose women and people of color?

We are finding out. A 2019 study found that a clinical algorithm used in many hospitals required Black patients to be much sicker than White patients in order to be recommended for the same level of care, because it used past healthcare spending as a proxy for medical need, and historically less money had been spent on the care of Black patients. Even when such problems are corrected and new guardrails are put in place, self-teaching AI seems able to find patterns in data that elude our own pattern-detecting capabilities, patterns we don’t even realize exist.
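
To make that proxy effect concrete, here is a minimal sketch in Python. It is not the hospital algorithm from the study; the group labels, the 30% spending gap, and the referral cutoff are hypothetical numbers chosen only to show the mechanism: if a model ranks patients by predicted spending rather than by illness, a group that generates less spending at the same level of illness must be sicker to earn the same referral.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

illness = rng.normal(size=n)          # true (unobserved) health need
group_b = rng.random(n) < 0.5         # hypothetical disadvantaged group
# Assumption for illustration: group B generates ~30% less spending
# at the same level of illness.
spending = illness * np.where(group_b, 0.7, 1.0) + rng.normal(scale=0.2, size=n)

# The "algorithm": rank patients by predicted spending, refer the top 10%.
referred = spending >= np.quantile(spending, 0.90)

for name, mask in [("group A", ~group_b), ("group B", group_b)]:
    print(name, "mean illness among referred:",
          round(float(illness[mask & referred].mean()), 2))
# Referred group-B patients come out measurably sicker than referred
# group-A patients: the proxy score encodes spending, not need.
```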

A 2022 study in The Lancet Digital Health found that AI trained on huge data sets of medical imaging could determine a patient’s race with startling accuracy from x-rays, ultrasounds, CT scans, MRIs, or mammograms, even when there was no accompanying patient information. The human researchers couldn’t figure out how the machines knew patient race even when programmed to ignore markers such as breast-tissue density in Black women. Attempts to apply strict filters and programming that controls for racism can also backfire by erasing diagnoses for minority patients altogether.
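
To picture what that kind of test looks like, here is a minimal hypothetical sketch, not the study’s code: take a classifier trained to predict self-reported race from scans, degrade the images heavily (the paper, quoted below, describes “corrupted, cropped, and noised” images), and check whether the predictions still hold up. The model and data here are random stand-ins.

```python
import torch
import torch.nn as nn

def degrade(images, crop=64, noise=0.3):
    """Center-crop each image to crop x crop, then add Gaussian noise."""
    _, _, h, w = images.shape
    top, left = (h - crop) // 2, (w - crop) // 2
    cropped = images[:, :, top:top + crop, left:left + crop]
    return cropped + noise * torch.randn_like(cropped)

# Stand-ins: a toy classifier with random weights and random "x-rays".
# In the real experiment these would be a trained model and held-out scans.
model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 3))
images = torch.randn(8, 1, 128, 128)

with torch.no_grad():
    preds = model(degrade(images)).argmax(dim=1)
print(preds)  # the study's finding: such predictions stayed far above chance
```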

“Our finding that AI can accurately predict self-reported race, even from corrupted, cropped, and noised medical images, often when clinical experts cannot, creates an enormous risk for all model deployments in medical imaging,” the authors of the Lancet study wrote. “Just as with human behavior, there’s not a simple solution to fixing bias in machine learning,” said the lead researcher, radiologist Judy W. Gichoya. As long as medical racism is in us, it will also be one of the ghosts in the machine. The self-improving algorithm will work as designed, if not necessarily as intended.

Read it all.

