Back in September, the online crib-sharing platform Airbnb confessed that it had been slow to address complaints of discrimination against black and Latino would-be renters and released a slate of new policies to remedy the problem. These remedies haven’t been in place long enough to determine whether they are up to the task. However, Boston University economist Ray Fisman and Harvard University business professor Michael Luca have identified one area where Airbnb’s anti-discrimination proposals fall short: the collection and disclosure of data on the race and gender of its users.
Writing for the December issue of Harvard Business Review, Fisman and Luca say that regularly reporting this kind of demographic data is one “necessary step … toward revealing and confronting any problems” with discrimination. Luca was part of a group of researchers who investigated Airbnb’s racial bias problem by creating 20 fake Airbnb profiles, half of which carried commonly used African-American names while the other half used common white names. The fake users contacted roughly 6,400 hosts about their rental properties. Here’s how that turned out:
Requests with black-sounding names were 16% less likely than those with white-sounding names to be accepted. And the discrimination was pervasive, occurring with cheap listings and expensive ones, diverse neighborhoods and homogeneous ones, rooms in the host’s own dwelling and separate units rented out by landlords with multiple listings. Most of the hosts who declined requests from black-sounding profiles had never hosted a black guest—suggesting that some hosts are especially inclined to discriminate on the basis of race.
Airbnb’s own self-audit in September recognized this as a problem, but the company is far from the only sharing-economy business that’s guilty of this. A large part of the problem, according to Luca and Fisman, is that these businesses have become too reliant upon algorithms and big data for managing online commerce.
There once was a belief that algorithm-driven commerce could reduce the effects of racism, given that the role of humans is minimized in these kinds of transactions. Instead, Fisman and Luca say that reliance on algorithms has, in many cases, “nurtured rather than suppressed discrimination.” As they explain in their Harvard Business Review article:
In fact, algorithm-generated discrimination occurs in ways that humans would probably avoid. In an eye-opening study, computer science professor Latanya Sweeney sought to understand the role of race in Google ads. She searched for common African-American names—such as Deshawn and, well, Latanya—and recorded the ads that appeared with the results. She then searched for names, such as Geoffrey, that are more common among whites. The searches for black-sounding names were more likely to generate ads offering to investigate possible arrest records.
The math-powered applications powering the data economy were based on choices made by fallible human beings. Some of these choices were no doubt made with the best intentions. Nevertheless, many of the models encoded human prejudice, misunderstanding and bias into the software systems that increasingly managed our lives…. And they tended to punish the poor and the oppressed in our society, while making the rich richer.
It’s not that online platforms, whether Airbnb or Google, intentionally set out to reinforce racial discrimination when advertising or selling goods and services. The problem lies more with the data that these platforms depend upon, which reflect the racism that already exists in American society and institutions. Before Airbnb, people temporarily rented out rooms and houses via Craigslist, where African-American prospective renters were often explicitly filtered out, even during times of great need, like after Hurricane Katrina.
Algorithms don’t naturally launder that kind of racism out of business transactions—if data generated by a racist society is what goes into the formula, racism is what comes out. And when a company neglects to collect data on race or gender altogether, it shouldn’t be surprised when racism and sexism prove difficult to detect, let alone curtail, on the user end. Such problems can be corrected, though, write Fisman and Luca, by creating algorithms that are more attuned to potential bias and by designing websites that offer fewer opportunities for discrimination to happen.
Citing ride-hailing services as an example, they point out that the Uber app doesn’t show drivers pictures of the passengers who book rides, while Lyft’s does. Fisman and Luca say this makes it easier for Lyft drivers to reject people based on race or other factors. However, a recent study from the National Bureau of Economic Research found clearer evidence of racial discrimination among Uber drivers than among Lyft drivers, at least in Boston and Seattle. That study also recognized, though, that Lyft drivers could still discriminate by rejecting riders at the outset, inferring race from potential passengers’ faces or names. Airbnb announced in September that it would be experimenting similarly with hiding the photos of its users.
Airbnb has also proposed ramping up use of its “instant book” feature, which grants a potential renter automatic booking of a selected property, without needing pre-approval from the host. But Airbnb hosts must voluntarily opt in to that feature, giving racist hosts the option to remain racist. Luca and Fisman recommend that Airbnb make “instant book” the default feature, one that hosts must pay a price to opt out of.
Airbnb spokesman Nick Papas tells CityLab that the company is still in the early stages of its anti-discrimination efforts. “The proposal we put forward in September was just the beginning of our work to fight bias and discrimination,” he says. “We're eager to work with researchers and experts who share our commitment to building a community that's fair for everyone and we will review all of these recommendations. We've launched the new community commitment, made anti-bias training available to hosts, and are actively working on a range of experiments, but we know we have more work to do and we're going to do all we can to create a community that is fair for everyone.”
One thing that would be helpful: improving racial diversity among the employees and leadership of companies like Airbnb. Luca and Fisman don’t mention this in their article, but workforce diversity is one area where the tech industry as a whole fails miserably. Airbnb, to its credit, is one of the few companies in this arena taking steps to address its diversity dilemma. In its most recent report to the Equal Employment Opportunity Commission, Airbnb disclosed that just 2.9 percent of its staff is African American, while 10 percent of its staff identify as minorities. The company set a goal of increasing that latter figure to 11 percent in 2017.
Sadly, it seems that many of the proposed remedies for reducing discrimination, whether from Luca and Fisman or from the companies themselves, rely heavily upon muting consumers’ physical characteristics. Omitting the photo of a potential Uber passenger or Airbnb renter literally means hiding users’ race or gender to protect them from being denied service. Such erasure is needed only because the racism or sexism is treated as inevitable. It’s the 21st century, and the market still can’t depend on people to suspend their bigotry for the sake of profit.
The people subjected to racism can try “punishing” these companies by taking their business to other, more inclusive platforms—and this is happening—but in doing so they are forced to settle for limiting their own leisure, mobility, and options. Real change would probably take something like white consumers deciding en masse to stop patronizing platforms that are unable or unwilling to stop discrimination. That’s the kind of data companies would no doubt respond to.