Is London the most important city in the world? What about Singapore? Or dark horse Frankfurt? In the past year, all of these places (plus New York, Toronto and Vienna) have been named the top city on one important metric or another by a reasonably reputable organization, business consultancy or think-tank. The annual releases of such indexes are generally followed by a bit of headline puffery (“Chicago Is The 7th Most Globally Integrated City In The World,” Curbed reported last April; or “Sorry, London: New York Is the World’s Most Economically Powerful City,” as a little site called CityLab wrote this year). Readers can and should be forgiven for clicking on these rankings but failing to scrutinize them much further than that.
And yet, dear reader, beware: As the Chicago Council on Global Affairs writes in a report released last week, city rankings are not always what they seem. When done well, rankings can draw attention to certain aspects of city successes or failures, creating loose blueprints for others to follow. “Where the indexes are good are in making decision makers deal with their shaming power,” says Michele Wucker, vice president of studies at the Chicago Council. “The value of lists is in their headline value, in their ability to attract attention to something important.”
The trouble comes when rankings are done poorly, as the report warns—when they are arbitrarily or opaquely slapped together, intended as “little more than fodder for civic bragging rights.”
So what’s a savvy, index-loving urbanist to do? With help from the Chicago Council report, allow us to be your guide to the top six reasons things tend to go wrong with city rankings—ranked.
6) Different rankings measure totally different things.
This one may seem obvious, and yet comes up all the time: Indexes that sound like they measure very similar things—let’s take global consulting firm A.T. Kearney’s “Global Cities Index” and professional services network PricewaterhouseCoopers’ “Cities of Opportunity” index—actually use wildly diverging methodologies. For A.T. Kearney’s “business activity” indicator—one of many that go into its final city ranking—the fine print shows the firm looks at the locations of top business services firms, the number of international conferences hosted in a city, and the flow of goods through ports and airports (among other factors). For PwC’s “economic clout” indicator, the factors look very different: the number of Global 500 headquarters, productivity and the rate of real GDP growth contribute to that index. Comparing these two lists, then, is really like comparing apples and oranges. No wonder their results look different.
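To see how similar-sounding indexes can disagree, here is a toy sketch with two made-up cities and entirely invented indicator scores and weights (nothing here reflects either firm’s actual methodology): one composite leans on business-activity-style indicators, the other on economic-clout-style ones, and the two rank the same pair of cities in opposite orders.

```python
# Toy illustration: two composite indexes over the SAME cities, built from
# different indicators with different weights, produce different rankings.
# All city names, scores, and weights are invented for illustration.

cities = {
    # city: (biz_services, conferences, cargo_flow, hq_count, gdp_growth)
    "Alphaville": (0.9, 0.8, 0.6, 0.5, 0.4),
    "Betatown":   (0.5, 0.4, 0.9, 0.9, 0.8),
}

def business_activity(scores):
    # "Business activity"-style composite: firms, conferences, goods flow
    biz, conf, cargo, _, _ = scores
    return 0.5 * biz + 0.3 * conf + 0.2 * cargo

def economic_clout(scores):
    # "Economic clout"-style composite: headquarters count, GDP growth
    _, _, _, hq, growth = scores
    return 0.6 * hq + 0.4 * growth

rank_a = sorted(cities, key=lambda c: business_activity(cities[c]), reverse=True)
rank_b = sorted(cities, key=lambda c: economic_clout(cities[c]), reverse=True)

print(rank_a)  # ['Alphaville', 'Betatown']
print(rank_b)  # ['Betatown', 'Alphaville']
```

Neither ranking is “wrong”; they simply answer different questions, which is exactly why comparing the two headline lists side by side is misleading.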
5) Not all cities were included.
Don’t see your beloved metropolis on a list? If it’s not a big, famous one (think: New York), chances are decent it wasn’t included in the analysis at all. By some accounts, Portland, Oregon, is the greenest city in the United States. But the consultancy ARCADIS didn’t even bother to take a look at poor Stumptown when it was putting together its Sustainable Cities 2015 report. The cities included in that report, the group notes somewhat opaquely, “were selected to provide an overview of the planet’s cities, providing not only wide-ranging geographical coverage, but also a variety of levels of economic development, expectations of future growth and an assortment of sustainability challenges.” Portland never stood a chance.
4) Some rankings measure reputation—not reality.
Just because a ranking uses data doesn’t mean that data is totally impartial. PwC’s index, for example, includes a survey of 15,000 PwC staffers from around the world. These are knowledgeable and well-educated folks, of course, but they certainly give different answers to questions about the cities where they live and work than would a local policeman, or a florist, or a schoolteacher. If a smart city wants to raise its global reputation, then, it might not need to get significantly better at managing its trash problem, giving benefits to seniors, or making it easier for small businesses to navigate local bureaucracy—it just needs to target PwC employees with a sweet city branding campaign.
3) Some data is hard to find …
Particularly if a ranking is looking to evaluate a less developed city, data can be hard to track down. And even if index-writers can find the numbers, using them is more challenging: It’s not just about having city data, but city data that provides points of comparison across wildly different metros. To compensate, some rankings use metrics that seem pretty silly when you look at them up close. ARCADIS’s sustainability ranking, for example, uses World Bank data on life expectancy at birth to determine its city “health” score—but there’s more to health, of course, than how long you live. More worryingly, drawing data on an entire metro area can lead to some serious oversimplifications. Including numbers from a wealthy, low-crime suburb can, for example, quickly skew the overall picture for a city that’s actually grappling with real crime. As the Chicago Council writes, “assigning a single crime score to a city is difficult, let alone comparing that score with other cities around the world.”
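The suburb problem above is simple arithmetic. A quick sketch with invented population and crime figures shows how folding a large, low-crime suburb into a metro-wide average can mask a much higher rate in the core city:

```python
# Toy illustration (all numbers invented): a metro-wide crime rate blends
# a high-crime core city with a larger, low-crime suburb, diluting the
# core city's figure in the single "city score" a ranking reports.

core_pop, core_crimes = 500_000, 5_000        # core: 10 crimes per 1,000 residents
suburb_pop, suburb_crimes = 1_500_000, 1_500  # suburb: 1 crime per 1,000 residents

core_rate = core_crimes / core_pop * 1_000
metro_rate = (core_crimes + suburb_crimes) / (core_pop + suburb_pop) * 1_000

print(core_rate)   # 10.0
print(metro_rate)  # 3.25 -- less than a third of the core city's rate
```

A ranking that assigns the metro-wide 3.25 to “the city” is technically using real data, yet it tells you little about conditions in the core city itself.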
2) … While some data is just silly.
The Chicago Council breaks this one down best in its report:
Due to both a lack of data and the labor-intensive gathering effort, many rankings are compendiums of other rankings. The [Economist Intelligence Unit] methodology notes that a city’s financial maturity score is based on “a review of secondary reports on financial depth, including Z/Yen Group’s 2012 Global Financial Centres Index.” The PwC 2014 digital economy score, included in the “technology readiness” bucket, is based on an EIU report from 2010. And A.T. Kearney’s entire 2014 Global Cities Index is explained as “a compendium of analyses published in 2013…[which] may represent data as far back as 2010.”
1) They’re probably trying to sell you something.
Think tanks, consultancies and even non-profits don’t put these rankings together out of the kindness of their hearts. Each index serves a purpose, whether it’s to attract media attention to a social issue like climate change, or, as is often the case, to promote a group’s brand. There’s a reason the Chicago Council calls city indexes a “booming cottage industry”—many of these groups are out to make money. More specifically, publishing these indexes is a great opportunity to sell interested governments, organizations and academic institutions a group’s unique dataset or analytics services. That doesn’t mean that this data is bad—in fact, it can serve as a real incentive for an organization to make it robust. But it also means that rankings can and will be shaped by whatever it is a firm does best. For example, many of these rankings come out of groups with customer bases in the U.S. and Western Europe. It’s no wonder, then, that the cities they rank tend to be in those places, too.