Laura Bliss is a staff writer at CityLab, covering transportation and the environment. She also authors MapLab, a biweekly newsletter about maps. Her work has appeared in the New York Times, The Atlantic, Los Angeles magazine, and beyond.
When you apply Google’s artificial neural networks to cartography, the hills literally have eyes.
A discovery by Google researchers studying artificial “neural networks” (computational models that learn to adapt to different inputs, loosely modeled on the neurons in our brains) disturbed the Internet’s dreams last month.
While training neural networks to interpret and identify images (hey robot, is that a frying pan or a tennis racket?), researchers found that if they ran a network in a feedback loop, asking it to amplify whatever it detected in certain elements of a picture (edges, shapes, or complex subjects like a building or a face), it would over-interpret them, conjuring figures and objects where there were none.
The results are like an LSD trip—and maybe not a good one. If you spend much time on the Internet, you’re probably familiar with the Google robot’s hallucinations. In case you aren’t, here are a few original examples:
Last week, Google released the “DeepDream” code to the public, so that anyone with some programming skills could process their own images with a psychedelic glaze. Naturally, a couple of brave mapmakers stepped in and produced some geo-visualizations—now, the hills literally have eyes.
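Under the hood, the effect is just gradient ascent on the pixels: nudge the image, step by step, so that a chosen layer of the network fires harder on whatever it thinks it sees. Here is a minimal sketch of that idea in PyTorch, not Google’s released code (which was built on Caffe and its GoogLeNet model); the network choice, layer index, step count, and file names below are illustrative assumptions:

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Any pretrained convolutional classifier will do; VGG16 stands in
# here for the GoogLeNet model Google's own release used.
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

preprocess = T.Compose([T.Resize(512), T.ToTensor()])

def deep_dream(image_path, layer_index=20, steps=30, lr=0.05):
    """Gradient ascent on the input image: repeatedly nudge the pixels
    so one layer of the network 'sees' more of whatever it detects."""
    img = preprocess(Image.open(image_path).convert("RGB"))
    img = img.unsqueeze(0).requires_grad_(True)

    for _ in range(steps):
        # Forward pass up to the chosen layer only.
        activation = img
        for i, layer in enumerate(model):
            activation = layer(activation)
            if i == layer_index:
                break
        # Maximizing the layer's mean activation is what makes the
        # network over-interpret: edges sprout eyes, blobs become dogs.
        activation.mean().backward()
        with torch.no_grad():
            img += lr * img.grad / (img.grad.abs().mean() + 1e-8)
            img.grad.zero_()
            img.clamp_(0, 1)  # keep pixels in a displayable range

    return T.ToPILImage()(img.squeeze(0).detach())

# Hypothetical usage: feed it a map tile, get back a hallucinated one.
# deep_dream("map_tile.png").save("map_tile_dreamed.png")
```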
Tim Waters, a freelance geospatial developer and consultant, rendered a couple of England’s cities using OpenStreetMap data, while Keir Clarke of Google Maps Mania gave downtown Washington, D.C., the treatment using Google Maps.
Their results are below, and I can only hope we see more of these. Developers, where’s my DeepDream map of the world?