Laura Bliss is CityLab’s West Coast bureau chief. She also writes MapLab, a biweekly newsletter about maps. Her work has appeared in The New York Times, The Atlantic, Los Angeles magazine, and beyond.
The outward signs of income are so predictable that even robots can learn them.
An online interactive called “Penny” proves it, and raises questions about the role that machines might play in the urban future.
A collaboration between Stamen Design, DigitalGlobe, and Carnegie Mellon University, Penny is an artificial intelligence that can “read” satellite imagery of two very different cities and estimate the income brackets of the neighborhoods within them. Curious users can test her powers by moving a viewfinder over stitched aerial photographs of New York City and St. Louis; once run, Penny judges the median income level with varying degrees of confidence.
The Upper East Side of Manhattan? High median income. The St. Louis Arch? Medium-high.
More intriguing is that Penny allows a bit of SimCity-esque play. What would happen to a neighborhood’s predicted median income level if, say, a tennis court or a greenhouse appeared where that park is? What if a school bus parking lot replaced those homes, or a famous luxury hotel did? Run Penny again, and she’ll offer a new prediction.
Sometimes the results are intuitive, sometimes not. Throw a parking lot over Trump Tower in midtown, and you’ll decrease Penny’s confidence that it’s a high-wealth area. Makes sense. But shove the Plaza Hotel into certain pockets of Harlem, and you increase her prediction that it’s a low-income area. Planting trees to improve a neighborhood’s predicted socioeconomic status works in some cases, but not there.
Even when it’s not apparent to the human user, Penny has a logical explanation for every interpretation. Like every AI, she bases interpretations on previously learned patterns. Jordan Winkler, a geospatial data engineer with the satellite imagery and software company DigitalGlobe, helped “train” Penny’s neural network on Census income data for the two cities. The program then learned to correlate that data with the lines, colors, and shapes embedded in the satellite images.
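The training setup described above can be sketched in miniature. This is not Penny's actual pipeline (which pairs a deep neural network with DigitalGlobe imagery); it is a hypothetical stand-in using scikit-learn's logistic regression on synthetic "tiles," where pixel brightness loosely stands in for the visual textures the real model learns. All names and data here are illustrative assumptions.

```python
# Hypothetical sketch of the Penny idea: learn to map image tiles to
# Census-style income brackets, then report a prediction with confidence.
# NOT the real model; a logistic-regression stand-in on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_tiles(n, brightness, label):
    """Generate n fake 8x8 grayscale tiles (flattened to 64 features).
    Assumption for illustration: brighter tiles mimic 'higher-income'
    visual textures; real features would come from satellite imagery."""
    tiles = rng.normal(loc=brightness, scale=0.1, size=(n, 64))
    return tiles, np.full(n, label)

# Bracket 0 = low income, bracket 1 = high income (labels would come
# from Census data in the real system).
X_low, y_low = make_tiles(200, 0.2, 0)
X_high, y_high = make_tiles(200, 0.8, 1)
X = np.vstack([X_low, X_high])
y = np.concatenate([y_low, y_high])

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score a new tile, the way Penny scores each viewfinder position:
# a predicted bracket plus a confidence for that bracket.
tile = rng.normal(loc=0.75, scale=0.1, size=(1, 64))
bracket = int(model.predict(tile)[0])
confidence = float(model.predict_proba(tile)[0, bracket])
print(f"predicted bracket: {bracket}, confidence: {confidence:.2f}")
```

This also makes the article's later point concrete: the model has no concept of what a hotel or a park *is*; it only scores how closely a new pattern of pixels resembles patterns it has already seen for each bracket.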
“It has no subjective concept of what the Plaza Hotel is,” says Winkler. “So when you drop the Plaza into the Bronx, it’s saying that this new pattern of shapes and colors, and what’s been covered up by it, looks more like lower-income places it’s seen.”
That doesn’t mean that a real-world transfer of a luxury hotel into a high-poverty neighborhood would necessarily have an income-depressing effect. Penny merely draws correlations; she cannot establish causation. According to Winkler and Jon Christensen, a strategic advisor at Stamen Design, the mapping firm that finessed the aesthetics, the tool is meant to provoke conversation around a conclusion with scientific backing: Wealth is visible from space—and can be tracked as its contours shift.
Still, Penny brushes up against an eerie implication: A robot could theoretically make “smarter” urban planning decisions than humans themselves. Or a robot could be deployed with that belief, and miss some critical human element hidden in its data diet—or succumb to the data’s inherent bias.
Already, artificial intelligence is being used to help govern cities in certain contexts, such as predicting where traffic and crime might occur. How far into urban space should AI tread? Penny’s a place to start thinking.