The Student Life

OPINION: Unmasking bias: The human element in algorithmic decision-making

A white robot skeleton breaks out from the screen of a computer.
(Quinn Nachtrieb • The Student Life)

Open the message app on your phone and type in the name “Ayaan.” Your phone automatically changes it to “Susan,” the closest white equivalent. Even if it does not immediately change it, the name remains underlined in red: ERROR.

This is a product of machine learning — a form of artificial intelligence (AI). Algorithms like this comb through millions of pieces of data, detecting patterns and making predictions without being told what to look for or what to conclude. The appeal is immense, and AI is finding its way into every aspect of our lives, from autocorrect suggestions to facial recognition software to healthcare decisions.

Despite a common assumption that machines are impartial decision makers, working from cold, hard data, there is an inextricable link between AI and perceptions humans have of the world. 

“Algorithms are still made by human beings, and those algorithms are still pegged to basic human assumptions … if you don’t fix the bias, then you are just automating the bias,” famed American author and journalist Ta-Nehisi Coates said at the annual 2019 MLK Now event.

By examining the use of AI in the healthcare industry, it becomes apparent that algorithms perpetuate human biases and are another platform that disadvantages Black people.

AI reinforces the myth of Black inferiority, which portrays Black people as physically and genetically inferior. The “cold, hard” data that shapes algorithms is rooted in a historical context in which information and studies have predominantly been produced by and for white individuals.

For instance, a study published in the Proceedings of the National Academy of Sciences (PNAS) in 2016 found that more than half of white medical trainees still believed myths such as: “‘Black people’s nerve endings are less sensitive than white people’s.’ ‘Black people’s skin is thicker than white people’s.’ ‘Black people’s blood coagulates more quickly than white people’s.’” Notions such as these, whether intentional or not, give medical professionals a “viable” reason to dismiss Black people’s pain.

Fallacies like these also inadvertently influence algorithms and AI outputs that actively hurt Black people and, in turn, harm Black communities. For example, a widely used algorithm designed to guide medical care decisions for millions of people has deepened disparities in healthcare for Black individuals. This algorithm, designed by leading healthcare company Optum, predicts which patients will benefit from additional medical care. It substantially underestimates the health needs of the sickest Black patients, exacerbating long-standing racial disparities in medicine.

Correcting the bias in this Optum algorithm would more than double the number of Black patients flagged as at risk and get them the help they need. A study of 3.7 million patients showed that Black patients the algorithm considered to be equally in need of extra care as white patients were, in reality, much sicker. As a result, Black patients, on average, did not receive the proper care to which they were entitled.

The algorithm functioned via a seemingly race-blind metric — how much patients would cost the healthcare system in the future. Cost, however, is not a race-neutral measure of healthcare needs. Black patients incur about $1,800 less in medical costs per year than white patients with the same number of chronic conditions. This leads the algorithm to score a white patient as equally at risk of future complications as a Black patient who has many more ailments.

Thus, Black patients tend not to receive the same level of care as white patients: because they incur lower medical costs, the algorithm ranks them as a lower priority than their actual health warrants.
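The mechanism described above can be illustrated with a small sketch. The patient records and dollar figures here are invented for illustration — only the ~$1,800 cost gap comes from the reporting — but they show how ranking patients by predicted cost, rather than by actual illness, pushes a sicker but “cheaper” patient down the priority list:

```python
# Hypothetical patient records: same number of chronic conditions,
# but patient C shows a lower predicted cost (mirroring the ~$1,800
# gap the article cites), so a cost-based ranking under-prioritizes C.
patients = [
    {"id": "A", "chronic_conditions": 5, "predicted_cost": 9200},
    {"id": "B", "chronic_conditions": 3, "predicted_cost": 9200},
    {"id": "C", "chronic_conditions": 5, "predicted_cost": 7400},
]

# Cost-based ranking (the flawed proxy): highest predicted cost first.
by_cost = sorted(patients, key=lambda p: p["predicted_cost"], reverse=True)

# Need-based ranking: most chronic conditions first.
by_need = sorted(patients, key=lambda p: p["chronic_conditions"], reverse=True)

# Patient C falls to the bottom of the cost ranking despite being
# among the sickest, but ranks near the top when sorted by need.
print([p["id"] for p in by_cost])
print([p["id"] for p in by_need])
```

The two orderings disagree only on the patient whose costs understate their illness — which is exactly the group the real algorithm deprioritized.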

Sure, the Optum algorithm might not have been intentionally racist, but it worked in favor of white patients and against Black patients. As it stands, AI should never be a substitute for a doctor’s expertise and knowledge of their patient’s individual needs.

This problem is not isolated to this company or this specific algorithm, but rather reflects a broader challenge in AI. As computer systems increasingly determine critical decisions in healthcare and beyond, the potential exists to automate racism and other human biases on a larger scale than ever before.

Since attempting to eliminate AI would be fruitless, it is essential instead to understand and acknowledge that AI reflects what exists in the world, including bias. Currently, AI lacks the ability to detect biases or to make determinations about how the world should be. That responsibility falls on humans.

As AI continues to spread across all aspects of our lives, there needs to be more accountability and less reliance on AI as the main or sole decision-maker. The pervasive and often overlooked biases embedded in AI demand urgent and ongoing scrutiny, along with meaningful collaboration across disciplines, to prevent the automation of systemic racism and injustice.

Ella Francis PZ ’25 is from Oakland, California. She is currently living in a tent in Kimana, Kenya, conducting research on conservation ecology and wildlife conservation.
