Rating Scales Influence Racial Discrimination

Image courtesy of Markus Spiske via Unsplash

In today’s digital world, tech corporations are constantly working to ensure customer satisfaction. That effort has produced the now-familiar rating systems that ask users to score a service on, for instance, a five- or ten-star scale. Yet a new study in Nature suggests that the way these scales are designed could be fueling racial bias, and that a simple change could help fix it.

Everything in society is subject to evaluation, and these evaluations shape how resources are distributed: who gets the job, who gets paid what, which restaurants people visit, and which books people read. Past work has shown that subtle biases lead customers to rate workers from marginalized groups lower, even when their performance is identical. These lower scores can carry real consequences, such as reduced earnings, decreased visibility, and even job loss.

Tristan Botelho, a professor at the Yale School of Management, set out to test whether the design of these rating systems could reduce bias in evaluation. Botelho and his team analyzed data from an online labor market platform that abruptly switched from a five-star scale to a dichotomous, or yes-or-no, system. The results showed that when people rated workers with a thumbs-up or thumbs-down rather than on a scale from one to five, the racial disparity in reviews shrank, likely because a simple acceptable-or-not judgment leaves less room for the subjective gradations where bias can creep in.

“How we present individuals with an evaluation process could change the outcomes that we observe and thus the resources that are distributed. I think what this paper really brings to bear is the idea that we should be very clear about what we are asking people to do when they are evaluating,” Botelho said.

Botelho and his team are now exploring how these evaluative processes unfold across multiple stages, as in hiring; how a customer’s initial evaluation affects a worker’s ability to complete their job; and how evaluations can be made fairer and more accurate.