Broussard also recently recovered from breast cancer, and after reading the fine print of her electronic medical records, she learned that AI had played a role in her diagnosis. That discovery led her to run her own experiment to learn more about how good AI was at diagnosing cancer.
We sat down to talk about her findings, as well as the problems with police use of technology, the scope of “AI fairness,” and what she sees as solutions to some of the challenges AI poses. The discussion has been edited for clarity and length.
I was struck by the personal story you share in the book about AI as part of your own cancer diagnosis. Can you tell our readers what you did and what you learned from that experience?
Early in the pandemic I was diagnosed with breast cancer. I wasn’t just stuck inside because the world was shut down; I was stuck inside because I had major surgery. As I was poking through my chart one day, I noticed that one of my scans said: This scan was read by an AI. I thought, Why did an AI read my mammogram? Nobody had told me this. It was just buried in some obscure part of my electronic medical record. I got really curious about the state of the art in AI-based cancer detection, so I devised an experiment to see if I could replicate my results. I took my own mammograms and ran them through open-source AI to see whether it would detect my cancer. What I discovered was that I had a lot of misconceptions about how AI works in cancer diagnosis, which I explore in the book.
[Once Broussard got the code working, AI did ultimately predict that her own mammogram showed cancer. Her surgeon, however, said the use of the technology was entirely unnecessary for her diagnosis, since human doctors already had a clear and precise reading of her images.]
One of the things I realized as a cancer patient is that the doctors and nurses and health-care workers who supported me through my diagnosis and recovery were so amazing and so crucial. I don’t want a sterile, computational future where you go and get your mammogram and then a little red box pops up that says This is probably cancer. That’s not actually a future anybody wants when we’re talking about a life-threatening illness, but there aren’t that many AI researchers out there who have their own mammograms.
You’ve sometimes heard the argument that if AI’s biases are “corrected” enough, the technology can become ubiquitous. You write that this argument is problematic. Why?
One of my big issues with this argument is the idea that AI is somehow going to reach its full potential, and that this is a goal everybody should strive for. AI is just math. I don’t think everything in the world ought to be governed by math. Computers are really good at solving mathematical problems. But they are not very good at solving social problems, and yet they are being applied to social problems. This kind of imagined endgame of Oh, we’re just going to use AI for everything is not a future I sign on to.
You also write about facial recognition. I’ve recently heard the argument that the movement toward facial recognition (especially in policing) is fueling efforts to make the technology more fair or accurate. What do you make of that?
I am in the camp of people who don’t support the use of facial recognition in policing. I understand that’s discouraging to people who really want to use it, but one of the things I did while researching the book was a deep dive into the history of technology in policing, and what I found was not encouraging.
I started with the excellent book Black Software by [NYU professor of Media, Culture, and Communication] Charlton McIlwain, who wrote about IBM wanting to sell its new computers during the so-called War on Poverty in the 1960s. We had people who really wanted to sell machines looking for a social problem to apply them to, but they did not understand the social problem. Fast-forward to today, and we are still living with the dire consequences of the decisions that were made back then.