Linda Poon is an assistant editor at CityLab covering science and urban technology, including smart cities and climate change. She previously covered global health and development for NPR’s Goats and Soda blog.
An ongoing project from MIT uses an algorithm to predict the safety of streets, helping researchers and urban planners better understand cities.
In 2011, a group of MIT researchers asked people on the internet to play a game. They took two geo-tagged photos of different streets from Google Street View and placed them side-by-side. Then they asked the users: Which one looks safer? Livelier? More depressing? Once a user answered, a new set of photos would appear.
The project, which is ongoing, is called Place Pulse, and as CityLab reported then, it was used to study how people perceived the safety of a street based on the appearance of buildings, the presence of trees, and how the sidewalks looked. Fast forward about five years, and more than 80,000 people from around the world have participated, resulting in more than 1.3 million clicks on more than 100,000 photos.
Another group of MIT researchers took all that data and created an algorithm that calculates a street’s “safety score” based on the colors, textures, and shapes present in each photo. Specifically, the algorithm considers four components in each photo: buildings, trees, ground, and sky. Using the algorithm, the researchers mapped the scores for New York City, Boston, Chicago, Detroit, and most recently, Philadelphia. The maps, part of a project called StreetScore, are dotted with green (perceived as safest), yellow, orange, and red (perceived as least safe).
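For readers curious how clicks become colors on a map, the first step can be sketched in a few lines. This is a deliberately simplified, hypothetical illustration: it reduces each image’s crowd votes to a win fraction scaled to a 0–10 score, whereas the actual StreetScore model goes further and trains on image features (buildings, trees, ground, sky) to predict scores for photos no one has voted on. The function name and the 0–10 scale are assumptions for the sketch.

```python
from collections import defaultdict

def safety_scores(comparisons):
    """Turn pairwise 'which looks safer?' votes into a rough 0-10
    score per image, using each image's share of wins.

    `comparisons` is a list of (winner, loser) image IDs.
    Illustrative only -- not the actual StreetScore model, which
    predicts scores from image features in a trained regressor.
    """
    wins = defaultdict(int)   # times each image was picked as safer
    seen = defaultdict(int)   # times each image appeared in a pairing
    for winner, loser in comparisons:
        wins[winner] += 1
        seen[winner] += 1
        seen[loser] += 1
    # Scale win fraction to a 0-10 score, like the map's color scale.
    return {img: 10.0 * wins[img] / seen[img] for img in seen}

# Hypothetical votes over three street images:
votes = [("street_a", "street_b"), ("street_a", "street_c"),
         ("street_b", "street_c"), ("street_a", "street_b")]
print(safety_scores(votes))
```

A street that wins every matchup lands at the green end of the scale; one that never wins lands at the red end. The real project also has to correct for how often each photo was shown and against which opponents, which is why a trained model rather than a raw win count is used.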
Like the maps of the other cities, Philadelphia’s shows that highways and waterfronts are generally perceived as least safe. The appearance of buildings matters a lot, says Nikhil Naik, a graduate student at MIT and one of the researchers behind the project. Photos with slick, modern buildings score higher than ones featuring older buildings made from bricks, for example. And unsurprisingly, tree-lined streets tend to score higher than bare roads.
So far, Naik’s team has looked at more than 25 U.S. cities, and they’re working on expanding the project to analyze not only the perception of safety but also the beauty and liveliness of cities across the globe. But a question still remains: How useful is this data, especially when the results are determined by math instead of a human who understands the cultural significance and history of a particular place?
To be clear, Naik tells CityLab, these maps aren’t meant to tell someone where they should and shouldn’t go. And they aren’t trying to predict where criminal activity is likely to take place. In fact, he says, the data should be used as a tool to test the hotly debated “broken windows” theory of policing. The ultimate goal is to use the algorithm to provide a snapshot of a city’s appearance, which can be used by urban planners to pinpoint places that could use some revitalization efforts, and by researchers to test the many effects of urban decay.
Since the project started in 2014, Naik says, both academics and policymakers have put the data sets to good use. Researchers are using the data sets to study how historic-preservation laws affect how people perceive an area, as well as how the appearance of a neighborhood correlates with the educational attainment of the students living there. On the policy side, he says, Washington, D.C., is using StreetScore’s data in combination with 311 calls to work on better maintenance of specific streets.
But there is a trade-off. “Yes, it is best to have a human who probably knows an area to make a judgement. But that actually limits the scale and scope of studies,” he says. “You either can pick very high quality data for a small area, or you can pick data that is approximately good quality but covers a much larger area.”