Scientists warn that the unexamined use of artificial intelligence (AI) in health care could result in worse health outcomes for marginalized people.
“AI is already here, especially in radiology and even cancer treatment,” says Dr. Andrew Pinto, a family physician and a scientist at MAP Centre for Urban Health Solutions at St. Michael’s. “The problem is we don’t know if it’s creating bias because we don’t often have data on things like race, gender, identity, education and income,” he explains. “We may inadvertently be replicating biases.”
A program trained on lung scans may seem neutral, but if the training data sets include images only from patients of one sex or racial group, it may miss health conditions in more diverse populations. Experts have raised similar concerns about AI programs that diagnose skin cancer, given that decades of the clinical research that might be used to train such programs focused mostly on people with light skin.
Over the next year, Pinto will survey health providers and patients, asking providers about the problems they want AI to solve, and asking patients questions like, “How do you feel about a computer creating a risk score for you?” One of Pinto’s concerns with algorithm-based care is that doctors will spend less time listening to patients and trying to understand the complex social determinants that factor into health, and more time looking at screens.