Homeless people sleep in tents in the Skid Row area of downtown L.A., where Eubanks conducted her research on the coordinated entry system—framed as the Match.com of homeless services. Jae C. Hong/AP

Seemingly benign, even well-meaning, high-tech tools are extending the ways in which government criminalizes and punishes the poor.

Fifty years ago in March, Dr. Martin Luther King, Jr. spoke at the National Cathedral in Washington D.C., at what turned out to be his last Sunday sermon. He talked about the perils and promises of the three major changes he saw taking place around the world—a “triple revolution,” as he called it, consisting of automation, the emergence of nuclear weaponry, and the global fight for human rights. Regarding that first prong, he noted at the time:

Through our scientific and technological genius, we have made of this world a neighborhood and yet we have not had the ethical commitment to make of it a brotherhood.

It’s this speech that Virginia Eubanks, an associate professor of political science at the University at Albany, SUNY, comes back to at the end of her new book Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. In it, Eubanks takes a hard look at some of the seemingly agnostic, even well-meaning, technologies that promise to make the U.S. welfare apparatus well-oiled and efficient. Automated systems that gauge eligibility for Medicaid and food stamps, databases that match homeless folks to resources, and statistical tools that detect cases of child abuse are all considered game-changers for welfare institutions. But Eubanks demystifies these complex-sounding technologies, detailing the ways they can compromise the human rights and dignity of the very people they claim to help. King’s vision on this front, as with many others, is yet to be realized, she argues.

CityLab caught up with Eubanks to talk about some of the main themes in her book.

You start by laying out a fascinating, troublesome history of poverty management systems: from hellish “poorhouses” to the scientific charity movement, the New Deal welfare apparatus to the automation of welfare. What is the common thread?

Often when we talk about new technologies, we talk about them as “disruptors”—things that shake up the system that we're in right now. One of my big arguments in the book is that the tools that I'm talking about are more evolution than revolution. So that history really, really matters.

Why I start the book with a brick-and-mortar poorhouse is because it was the most innovative poverty regulation system of its time, in the 1800s. It rose out of a huge economic catastrophe—the 1819 depression—and the social movements organized by folks to protect themselves and their families. What’s really important about the poorhouse—and this is the thread that goes throughout all of the things I talk about in the book—is that it was based on this distinction between what at the time were called the “impotent” and the “able” poor. The “impotent” poor were folks who, by reason of physical disability or age or infirmity, just couldn't work. The “able” poor were those folks who moral regulators at the time believed were probably able to work, but might just be shirking [it].

That distinction between the impotent and the able poor, which today we would talk about as “deserving” and “undeserving” poor, created a public assistance system that was more of a moral thermometer than a floor that was under everybody protecting their basic human rights.

I think of that as the deep social programming of all of the administrative public assistance systems that serve poor working-class communities. That social programming often shows up in invisible assumptions that drive the kind of automation of inequality that I talk about in the book.

Let’s talk a bit more about the “digital poorhouse,” which you see as an extension of these previous systems of controlling the poor. When did it come up and what is it?

One of the most important historical moments that I talk about in the book is the rise of what I call the “digital poorhouse,” which is really the shift from quite sophisticated but analog systems of control to digital and integrated systems of control.

When I first started doing this [book], I actually began in the New York State Archives, looking for the technical documents of when the [poverty management] system started to be computerized. I had just assumed that that would have happened in the 1980s with the widespread uptake of personal computers, or in the 1990s when the actual policy change happened around welfare reform, which required that local welfare offices computerize some of their processes.

But where I actually found the move to computers in the administration of public assistance was in the late 1960s and early 1970s. That was really surprising to me. What I learned was that right at that moment, there was a very successful national welfare rights movement that was challenging discriminatory eligibility rules in public assistance programs. It was succeeding in opening up the welfare rolls to folks who had been unjustly barred in the past, especially women of color and never-married mothers. As a result, the rolls expanded very quickly. It's important to understand that public assistance has never reached even as many as 50 percent of people under the poverty line, but right around 1970, it got close to 50 percent. Four-fifths of children living under the poverty line were receiving public assistance of some kind. At the same time, there's a backlash against the Civil Rights movement, especially black power, and there's a recession.

That is the moment that the “digital poorhouse” arises and these new technologies come into play. And if you look at the size of the rolls, they basically start to drop off right at that moment and continue on a downward trajectory until today—with fewer than 10 percent of people under the poverty line receiving cash assistance.

You explore three specific case studies. The first is in Indiana, where lawmakers pushed to automate and privatize the welfare eligibility system—including cash assistance, food stamps, and Medicaid. What was the effect of that?

So in 2006, Indiana signed this really large contract—at the time, it was $1.16 billion—with a coalition of high-tech companies, including IBM and [Affiliated Computer Services (ACS)]. It replaced most local caseworkers with online forms and regional call centers that were staffed mostly by private employees.

The result of this was 1 million benefit denials in the first three years of the project, which was a 54 percent increase from three years prior to automation. Mostly, folks were denied for a catch-all reason—“failure to cooperate in establishing eligibility”—which basically meant a mistake was made somewhere on the application.

So, a technical issue with filling out a form.

Yes, the rules became really brittle. And because people no longer had a personal relationship with their caseworkers, it then became very difficult for them to respond in a way that meant they could retain their benefits. They were kind of on their own.

Explain the human cost of that.

These applications can be anywhere from 20 to 120 pages long. So it's very, very tricky to go back and figure out where you may have made a mistake, or where the state had made a mistake, or where the document processing center had made a mistake. All mistakes ended up being the fault of the applicant. That really resulted in some huge, life-shattering tragedies.

One of the stories I tell in the book is about Omega Young, who was a 50-year-old African-American mom from Evansville, Indiana. She missed a phone appointment to re-certify for Medicaid because she was in the hospital suffering from terminal ovarian cancer. She did call the office ahead of time, to tell them she couldn't make the time of the telephone appointment. But she was cut off anyway for “failure to cooperate in establishing eligibility.” She was unable to afford her medications, she had trouble paying rent, and she lost access to free transportation to medical appointments. And though her family wouldn't hold the state responsible for her death, she did succumb to cancer on March 1, 2009. The next day, on March 2, she won an appeal for wrongful termination of benefits and all of her benefits were restored.

So certainly, this process made her last days much more full of stress and suffering than they should have been.

You also looked at the “coordinated entry system” in L.A., which started in 2013 as sort of a Match.com for homeless services. It’s based on the effective housing-first approach, which first aims to get a roof over the heads of homeless folks, and then helps them in other ways. The coordinated entry system itself consists of a survey, which gathers information about homeless individuals and plugs it into a database. Then, an algorithm ranks the cases on a “vulnerability index” so that the most vulnerable ones can be helped first.

That seems pretty positive at first glance.

The housing-first approach is clearly a really positive approach to the housing crisis. And I think there's a definite argument to be made for prioritization. There are 58,000 unhoused people in Los Angeles County alone, and there are not currently enough housing resources for everyone. So, I understand the impulse.

But one of the things that I did in this book that might be a little different is that I started from the point of view of unhoused folks themselves, who are the targets of this system. What really stood out to me from their stories is the difficult choices they have to make in how they interact with the system. Because that survey I talked about? It asks deeply private or even intentionally criminalizing questions about personal behavior. [It] asks if you are having sex without protection, if you're trading sex for money or drugs, if you're thinking of harming yourself or others, if you're running drugs for someone else, if there's an open warrant on you. And if you answer “yes” to these questions, you potentially get a higher score on the vulnerability index, which prioritizes you for housing.
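[To make the ranking mechanics concrete, here is a minimal, purely illustrative sketch of how a score-and-rank prioritization of this kind can work. The survey questions, weights, and field names below are invented for illustration; this is not the actual survey or scoring logic used in Los Angeles.]

```python
# Hypothetical sketch of score-and-rank prioritization; questions and weights
# are invented for illustration, not drawn from the actual L.A. system.

# Hypothetical survey responses for a few unhoused individuals.
responses = {
    "person_a": {"unsheltered_six_months": True, "er_visits": 3, "self_harm_risk": False},
    "person_b": {"unsheltered_six_months": True, "er_visits": 7, "self_harm_risk": True},
    "person_c": {"unsheltered_six_months": False, "er_visits": 1, "self_harm_risk": False},
}

def vulnerability_score(answers):
    """Toy scoring rule: riskier answers add points (weights are made up)."""
    score = 0
    score += 2 if answers["unsheltered_six_months"] else 0
    score += min(answers["er_visits"], 5)        # cap the points from ER visits
    score += 3 if answers["self_harm_risk"] else 0
    return score

# Rank everyone by score and give the scarce housing slots to the highest-scored.
ranked = sorted(responses, key=lambda person: vulnerability_score(responses[person]), reverse=True)
available_units = 1
print("ranking:", ranked, "-> housed:", ranked[:available_units])
```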

Under existing federal data standards, the information that's stored in this Homeless Management Information System (HMIS) database can be accessed by law enforcement on the basis of only an oral request. So you don't need a warrant—you don't even need a written request. So to many people I spoke to, it was unclear where the line was between this system and the criminal justice system.

I want to be really fair; there were definitely some people who said, “Coordinated entry was a gift from god. It is the best thing that ever happened to me because it helped me get housed.” I will also say that even the people who had success with it had moments of reflection about it: “It's strange that I should get housed when so many other people I know who are going through similar things to me didn't get housed. That doesn’t seem right.”

But for the folks who haven't had success being housed, folks like Gary Boatwright, this idea that the unhoused community was being assessed on a spectrum of deservingness for housing really, deeply troubled them. Gary was 64 at the time I spoke to him. He had been unhoused and living on the street, off and on, for almost ten years. He said to me, “This is just another way of kicking the can down the road.” The problem is not scoring people; the problem is really that there's just not enough housing for the 58,000 people in Los Angeles.

And exactly what people were afraid of really happened to Gary. It wasn't directly attributable to the coordinated entry system, but he was on the street long enough that the everyday behaviors of being unhoused—sleeping on the sidewalk, leaving your stuff on the sidewalk, public urination—were treated as crimes and left him open to criminalization. As far as I could tell, he got really upset one day around public transportation and was arrested for attacking a bus. He spent close to nine months in jail. He is out now, and doing well.

So, it’s similar to the argument Khiara M. Bridges makes in her book about privacy rights and mothers on welfare: that it isn’t that folks choose to exchange their privacy for a benefit, but that they don’t really have a meaningful choice.

Yes, and this question of consent is important here. In Los Angeles, the folks who are given [this survey] do sign an extensive informed consent document. But it seems to me that you are stretching the boundaries of informed consent if access to a basic human need like housing is in any way contingent on you filling out this form.

Part of the consent form says, “We share that information with a lot of other agencies; if you want to know more about it, you have to request this other form.” Folks who go through the process of requesting the second form get a list of 168 agencies this information is shared with. You can ask to be expunged from the database, but the process by which you do so is really unclear—and some of your information stays in the [database]. The consent lasts for seven years, and you have to actively stop it—by writing in and saying, “I withdraw my consent.”

So, it's legitimate that people can have fears about how that information is being used and shared.

In your third case study, you delved into the predictive statistical tool in Allegheny County, Pennsylvania, that scours through a database of 29 different public programs—including law enforcement, public schools, and public housing records. It then puts a numerical value on the likelihood that a child is being abused or neglected. The workers who man the phone lines, screening calls that report abuse, are supposed to use it to complement their decision-making in these cases.

I ordered the cases so that the first one, the Indiana case, feels in some ways like the easiest one to understand. It had all the characters that we are used to—a greedy corporation, contractors, and maybe bad intentions [by certain politicians]. The story gets both more ethically and more technically complicated when we get to the Allegheny County case. The stakes feel very, very high, of course, because we're talking about the safety of children. And the Allegheny County folks that I spoke to seemed to have absolutely fantastic motives and intentions. They have done everything that those of us who talk about algorithmic justice and fairness ask designers to do: there was a participatory design process around the system; they've been almost totally transparent about what's in the model; and the model is controlled by public agencies, so there’s some sort of accountability there.

But there are also some parts of the system that are still deeply troubling. There are two things worth noting. One is—and this may seem obvious—[the system] only has access to the data it has access to. This model is built on data about access to public programs. So if you're receiving mental health services through private insurance or you access financial help through your family, you're not in the system. I argue that it's a form of poverty profiling, where poor parents are drawn into a feedback loop of very invasive surveillance.

It also means that the model is likely missing key variables because they're not included in the universe of that data that's available. It doesn't include things like geographic isolation, which researchers say is predictive of neglect and abuse. That’s not something that will show up in this database because most of the folks accessing county services in Allegheny County live in dense urban neighborhoods.

The other thing to understand about the system is that it uses proxies to stand in for actual child maltreatment. Luckily, in Allegheny County, there's only a handful of actual [child maltreatment cases] a year, which is good because that means children are pretty safe. But that means there’s not enough data to actually produce a viable model. So the folks who made this model had to choose stand-ins for actual child maltreatment. One proxy is called child re-referral, which means that the child has been called on, but the call was screened out. And then the child was called on again, within two years.

But the agency's own data shows that the majority of racial disproportionality that exists in the child welfare system in Allegheny County comes into the system through call referral. Black and biracial families are three-and-a-half times more likely to be called on by either mandatory reporters or anonymous callers. So that's a really major point at which racial injustice enters the system, and using re-referral as a proxy can make this disproportionality worse.
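[To illustrate what a proxy outcome means in practice, here is a minimal, hypothetical sketch of how a “re-referral within two years” label might be constructed for model training. The data and field names are invented; this is not the county’s actual pipeline.]

```python
# Hypothetical sketch of building a "re-referral within two years" proxy label.
# Data and field names are invented; this is not the county's actual pipeline.
from datetime import date

# Invented referral history: (child_id, call_date, screened_in)
referrals = [
    ("child_1", date(2015, 3, 1), False),    # screened out ...
    ("child_1", date(2016, 6, 10), False),   # ... called on again within two years
    ("child_2", date(2015, 5, 20), False),   # screened out, never called on again
]

def re_referral_label(child_id, index_date, window_years=2):
    """Return 1 if the child is called on again within the window after this call."""
    window_end = index_date.replace(year=index_date.year + window_years)
    later_calls = [d for cid, d, _ in referrals
                   if cid == child_id and index_date < d <= window_end]
    return int(bool(later_calls))

# A model trained on this label learns to predict future *calls*,
# not directly observed maltreatment.
print(re_referral_label("child_1", date(2015, 3, 1)))   # -> 1
print(re_referral_label("child_2", date(2015, 5, 20)))  # -> 0
```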

In all these cases, there seems to be this idea that technology can help take human biases—like racism—out of the equation; that it can make the process not only more efficient, but more equitable.

It’s really important to acknowledge that biased decision-making by frontline caseworkers has been and continues to be an issue in public service. But what I argue is that in many cases, the tools I've looked at don’t remove bias, but just move it to a different place.

Basically, many of these systems seem to be arguing that human decision-making is an unknowable black box—that we can't possibly know what drives a frontline caseworker to make a decision one way or the other. What I argue is that's a very specific way of understanding bias—that bias is either a conscious or unconscious individual attitude held by individual people, and not something that is structurally and systemically integrated into our institutions.

There is the possibility of actually putting bias into hyperdrive by removing that frontline discretion. Bias gets built into the systems at the base level of the coding—it's the data that we have. Like when I talk about poverty profiling? We're only collecting data on people who get public services. The fact that there is no information on people who receive private services to support themselves is a form of bias. Because these biases get moved from frontline humans to the invisible back ends of the systems, I believe that there is enormous potential for them to be amplified and intensified. Because as the book shows, these systems are not terribly transparent.

I am also deeply troubled by the ethical premise that human decision-making is unknowable, while machine decision-making is transparent and accountable. It seems to me to foreclose the ethical development of communities—the idea that we can talk about things and get better; that we can acknowledge bias and work through it. That seems to be the work, and this seems to me a way of avoiding doing that work.

So, talk about why these sorts of developments are such a big deal—not just for the poor and those who advocate for them, but for society as a whole.

Part of the social programming is the idea that the poor are this tiny population of probably pathological people. It's really important to understand that in the United States, 54 percent of us will be in poverty at some point in our lives between the ages of 25 and 60. Two-thirds of us will access a means-tested public program, which is just straight-up welfare. So, the reality is these systems are already a majority concern. [A study from 2015 shows that four out of five Americans will likely face “economic insecurity” at some point in their lives, which includes using a means-tested welfare program, experiencing poverty, or being unemployed.]

Even if you don't care about that, it's really important to understand that tools that are tested in places where there are low expectations that folks’ rights will be protected are then subject to mission creep. They start out potentially as limited and with strong rules around their use, but then something changes. Like, a political administration changes.

So how should we be thinking of this in the future?

Fundamentally, I think we can do better than we're doing right now. We deserve better.

Part of that is designing these tools with our values front and center. So efficiency, evidence-based policy, and maximum impact should definitely be among those values, but so should equity, justice, fairness, self-determination, and human dignity. You get a very different system if you design from a principle that says, “public service systems should be a floor underneath us all rather than a moral thermometer.” This is really about choosing our political values clearly before we design the systems, rather than designing on the assumption that the impact will be neutral and then scratching our heads about how everything went horribly wrong.

The bigger issue is the conversation that’s happening at this moment around inequality in this country—not just economic inequality but inequality writ large. What I want people to take from this book is that though we often talk about these systems as disruptors or as equalizers, at least in the cases that I researched they really act more like intensifiers or amplifiers of the system we already have. Changing the ways that these technologies operate is really incredibly deep cultural work, particularly in redefining poverty as a majority issue, not a minority issue, redefining the poor as a political identity, and helping people see each other’s shared experience across lines of difference.

That cultural change may help drive a political change that gets us away from this legacy of the poorhouse that is still in our public assistance system—away from systems that are primarily oriented towards finding out whether your poverty is your own fault rather than finding out ways that they can support people's self-determination and unleash their human capacity.

In the meantime, though, we need the system to do less harm because I really do believe that many of these systems are altering the lives of hardworking families in life-shattering ways.

Any last words?

One of the things that was common to all of the administrators and designers I spoke to across all three of these systems is that they would say, “Look, we don't have enough resources. We have to make really hard decisions and these tools help us make really hard decisions.” What I want to point out is that the decision that we don't have enough resources to help everyone and we have to triage? That we have to ration care? That is a political decision.

One of the things I most fear about these systems is they allow us the emotional distance that's necessary to make what are inhuman decisions. Like, I do not want to be the caseworker looking at the 58,000 people in Los Angeles and having just a handful of resources and deciding who gets them. That is an incredibly difficult decision to make. My fear is that sometimes these systems act as empathy overrides—that we are allowing these machines to make decisions that are too difficult for us to make as human beings. That's something that we really need to pay attention to because in the long run that means that we're giving up on the shared goal of caring for each other. I don't think that's who we are as a society.
