Data are scarce, but a look at grants from the National Institutes of Health gives us at least a partial picture.
Cities are the engines of innovation. Though repeated so often that it has become a cliché, the wisdom behind this claim rests on the power of cities to aggregate people, enable collaboration, cross-pollinate ideas, and spread information. One of the places where this most often occurs is within the walls of the research university.
We know that cities produce an inordinate amount of scientific output. The science journal Nature even devoted a special issue to the importance of cities and their relationship to science. We know many of the outputs of science at the city level (such as papers and citations), and even some of the inputs (such as the number of students and researchers), but there is one area where data are lacking: the amount of science-related funding that a given city pulls in.
Scientific funding (in the form of grant money) is both a measure of input and a measure of research success: successful research begets more funding. But the data are both rare and incomplete.
Recently, we attempted to make a small dent in this problem. The National Science Foundation compiles federal R&D spending from all governmental agencies, but unfortunately it only reports data at the state level, which is not granular enough for our purposes. And looking at individual agencies one by one proves just as ineffectual for determining metropolitan spending.
Fortunately, the National Institutes of Health bucks this trend; the NIH in fact keeps incredibly detailed records of its grant-making. This is especially useful because, of the approximately $130 billion spent by the federal government on R&D every year, the NIH accounts for approximately $34 billion (a share of the total topped only by the Department of Defense, which keeps the least informative, least specific, and least reliable numbers of any agency). So, while we can’t get a complete picture of federal R&D spending at the sub-state level, we believe that the NIH data represent the best portrait currently available.
Since the data were only available at the state or the city level, our job consisted of aggregating the city-level records up to the county and MSA levels. After cleaning 20 years of data (from 1992-2011), and throwing out unreliable contracts data, we were left with about $7 billion in R&D spending annually.
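The aggregation step above can be sketched in a few lines. This is a minimal illustration, not the authors' actual pipeline: the city names, the `CITY_TO_MSA` crosswalk, and the toy grant records are all hypothetical stand-ins for the real NIH records and the Census city-to-MSA mapping.

```python
from collections import defaultdict

# Hypothetical crosswalk from grantee city to metropolitan statistical area.
# The real mapping would cover every grantee city in the cleaned NIH data.
CITY_TO_MSA = {
    "New York": "New York-Northern New Jersey-Long Island, NY-NJ-PA",
    "Newark": "New York-Northern New Jersey-Long Island, NY-NJ-PA",
    "Ann Arbor": "Ann Arbor, MI",
}

def aggregate_to_msa(grants):
    """Sum grant dollars up to the MSA level, skipping records whose
    city has no crosswalk entry (analogous to discarding unusable rows)."""
    totals = defaultdict(float)
    for city, amount in grants:
        msa = CITY_TO_MSA.get(city)
        if msa is not None:
            totals[msa] += amount
    return dict(totals)

# Toy records: (grantee city, award amount in dollars)
grants = [("New York", 2_000_000), ("Newark", 500_000), ("Ann Arbor", 750_000)]
totals = aggregate_to_msa(grants)
print(totals)
```

Note that both Newark and New York roll up into the same MSA total, which is exactly why city-level figures understate a metro area's true pull.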
So, what did we find? The top ten metropolitan areas from 1992 include many of the usual suspects, including New York, Boston, Chicago, Seattle, and the California regulars (San Diego, San Francisco, Los Angeles); the biggest surprise on the list might have been Philadelphia, which isn't traditionally thought of as a biomedical research powerhouse.
| Rank | 1992 | Real Spending | Change in Rank vs. 2011 |
|------|------|---------------|-------------------------|
| 1 | New York-Northern New Jersey-Long Island, NY-NJ-PA | $729m | -1 |
| 3 | Los Angeles-Long Beach-Santa Ana, CA | $326m | 0 |
| 5 | San Francisco-Oakland-Fremont, CA | $290m | -1 |
| 7 | San Diego-Carlsbad-San Marcos, CA | $249m | -44 |
| 10 | Houston-Sugar Land-Baytown, TX | $188m | +1 |
Comparing these rankings with the 2011 funding figures, the top three cities have separated from the pack in recent years, seeing significant increases in funding while the rest of the top 10 have garnered only modest gains.
As far as ranking shifts go, there are a number of very minor moves over time, but a few jumps really stand out. San Diego drops 44 spots, a shift which few other metros match. By comparison, the largest change in ranking was Jacksonville's jump of 86 places, but it only cracked the 94th spot, making its climb relatively less impressive.*
| Rank | 2011 | Real Spending | Change in Rank vs. 1992 |
|------|------|---------------|-------------------------|
| 2 | New York-Northern New Jersey-Long Island, NY-NJ-PA | $1.13b | -1 |
| 3 | Los Angeles-Long Beach-Santa Ana, CA | $1.05b | 0 |
| 4 | San Francisco-Oakland-Fremont, CA | $509m | +1 |
| 9 | Houston-Sugar Land-Baytown, TX | $324m | +1 |
| 10 | Ann Arbor, MI | $296m | +1 |
Furthermore, you can see below that these top 10 cities have steadily remained atop the list over the years examined (Washington, D.C. hangs just beyond the top ten in 1992 and 2011, but earns enough in other years to slide into the top 10 when everything is aggregated).
| 1992-2011 | Average of Real Value |
|-----------|-----------------------|
| New York-Northern New Jersey-Long Island, NY-NJ-PA | $1.02b |
| Los Angeles-Long Beach-Santa Ana, CA | $823m |
| San Francisco-Oakland-Fremont, CA | $446m |
| Houston-Sugar Land-Baytown, TX | $310m |
While the top of the list is largely unchanging, its consistency masks some churn a bit lower down. Between 1992 and 2011, the mean change in rank was 15, with a median of 4. This is largely because grants are awarded in sizable amounts, so when a smaller metro wins a new grant or finishes an old one, the impact on its total is much more significant than in larger cities with many grants.
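The gap between the mean and median rank change described above is easy to reproduce: a few big swings inflate the mean while most metros barely move. Here is a minimal sketch using invented toy ranks for four hypothetical metros, not the article's actual rankings.

```python
from statistics import mean, median

def rank_changes(ranks_start, ranks_end):
    """Absolute change in rank for every metro present in both years."""
    return {m: abs(ranks_end[m] - ranks_start[m])
            for m in ranks_start.keys() & ranks_end.keys()}

# Toy rankings (hypothetical metros A-D, not real data)
r1992 = {"A": 1, "B": 7, "C": 20, "D": 50}
r2011 = {"A": 2, "B": 51, "C": 18, "D": 49}

changes = rank_changes(r1992, r2011)
print(mean(changes.values()))    # 12   (pulled up by B's 44-spot swing)
print(median(changes.values()))  # 1.5  (most metros barely moved)
```

With changes of 1, 44, 2, and 1, one outlier drags the mean to 12 while the median stays at 1.5, mirroring the 15-versus-4 pattern in the real data.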
When we examine funding per capita, college towns (unsurprisingly) dominate the list, including such places as Ann Arbor; Iowa City; Rochester, MN; and Ithaca, NY.
Finally, although the top lists were mostly consistent with other popular measures of economic activity, we wanted to run a few regressions to see whether federal R&D funding tracks other city-level metrics across the spectrum. Surprisingly, neither the Creative Class Index, the creative class's percentage of the total workforce, nor the Milken Institute’s Tech Pole index for the medical sector correlated significantly with the NIH grant data we collected here. This suggests that although the biggest cities will continue to see the biggest gains, cities that typically struggle by other indices can still capture unexpected or outsize funds from federal sources.
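A correlation check of this kind boils down to computing Pearson's r between an index score and NIH dollars across metros. The sketch below uses invented numbers for five hypothetical metros, purely to show the mechanics; it is not the authors' regression or their data.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy data: an index score and NIH funding (arbitrary units) for
# five hypothetical metros -- illustrative only.
index_score = [1.0, 2.0, 3.0, 4.0, 5.0]
nih_dollars = [5.0, 1.0, 4.0, 2.0, 3.0]

print(round(pearson_r(index_score, nih_dollars), 3))  # -0.3
```

An r this close to zero (and a correspondingly weak significance test) is the shape of the null result described above: knowing a city's index score tells you little about its NIH haul.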
[Chart: Cumulative NIH funding by metro area, 1992-2011]
We hope that these data will provoke some thought on where federal R&D money goes; we hope too that the absence of better data is cause for even greater conversation. Ultimately, our insight into the impact of federal spending is blunted by weak data availability, and the resultant missing analysis means that outside observers have little grounds to evaluate billions of dollars of government spending. The NIH has done an admirable job in making its process and output transparent – should other agencies follow suit, more minds could apply themselves to easily overlooked inefficiencies in spending, and as we all know: all problems are shallow given enough brains. And enough data.
* Correction: An earlier version of this article incorrectly identified St. Louis, Missouri, as another city that saw a tremendous shift in its position between 1992 and 2011.