Laura Bliss is a staff writer at CityLab, covering transportation and technology. She also authors MapLab, a biweekly newsletter about maps. Her work has appeared in the New York Times, The Atlantic, Los Angeles magazine, and beyond.
As influenza rages across the U.S., scientists labor to develop better health surveillance techniques.
By the second week of January, the Centers for Disease Control and Prevention thought this flu season had already peaked. Cases of the disease were widespread around the country, but the overall numbers were not overwhelming: The hospitalization rate was about half as high as in 2014-15, the last severe flu season. CDC officials predicted there would be fewer deaths.
But the feverish masses kept growing. Now, for the third week in a row, flu activity remains widespread in 49 states, according to the latest CDC data. Some 6.6 percent of patients visiting the doctor now have flu-like symptoms, the highest rate since 2009. And the rate of pneumonia and influenza-related deaths has suddenly rocketed up. This year’s death rate is now on pace to match or exceed that of 2014-15.
“We’ll expect something around those numbers,” Daniel Jernigan, the director of the CDC’s influenza division, told reporters during a telephone news conference last week.
There are a couple of stories here. One, this is turning into a really aggressive flu season. Two, CDC data is far from a perfect predictor of how bad a flu season will be, let alone how bad it already is.
The CDC bases its “Flu View” reports and predictions on physician records that report “influenza-like illnesses” among patients. That means there’s about a six-day gap between the ground truth and the CDC’s best understanding of it, according to Jeffrey Shaman, a professor of environmental health sciences and the director of Columbia University’s Climate and Health Program. “It is not an ideal stream of data,” Shaman said.
Strong data matters for surveilling and anticipating blooms of illness. When flu viruses peak at unpredicted moments—or in unexpected places—medical centers can be caught off-guard and left short on supplies. That happened this season when flu flushed across the country at more or less the same time. A number of cities were running low on critical anti-viral medications by mid-January. “That wasn’t typical—typically flu is staggered from region to region,” Shaman said. “That can really strap resources in a big country.”
A better grasp on fevers to come can also help the public. If people can see the wave of sickness about to hit their town, they can take extra precautions—being vigilant about hand-washing, keeping the kids home from coughing classmates, or making more of an effort to get a flu shot. One can imagine highly accurate flu forecasts becoming a feature of local weather reports.
Using an algorithm similar to those used in weather prediction, Shaman and his lab publish weekly flu forecasts for 50 states, 10 CDC regions, and 108 cities nationwide, predicting how and when cases will rise and fall. They pull together multiple datasets: CDC reports, lab reports of patients who’ve tested positive for influenza (drawn from a pool of WHO data), and Google search activity. The algorithm simulates flu-transmission dynamics using data from past flu seasons and produces the real-time forecasts of future flu outcomes based on the new information. The results have been published weekly since 2013.
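The mechanics of such a system can be sketched in miniature. What follows is an illustrative toy, not the lab's actual model: a simple compartmental SIR simulation (susceptible, infectious, recovered) whose state is "nudged" toward a new surveillance observation before being run forward, in the spirit of the data-assimilation techniques borrowed from weather prediction. All parameter values and the blending weight are hypothetical.

```python
def sir_step(s, i, r, beta, gamma, dt=1.0):
    """Advance an SIR model one time step (s, i, r are population fractions)."""
    new_inf = beta * s * i * dt   # susceptibles becoming infectious
    new_rec = gamma * i * dt      # infectious people recovering
    return s - new_inf, i + new_inf - new_rec, r + new_rec

def forecast(s, i, r, beta, gamma, weeks):
    """Run the model forward and return the weekly infectious fraction."""
    trajectory = []
    for _ in range(weeks):
        s, i, r = sir_step(s, i, r, beta, gamma)
        trajectory.append(i)
    return trajectory

def assimilate(i_model, i_observed, weight=0.5):
    """Blend the model state with a fresh observation -- a crude stand-in
    for the ensemble Kalman-style filters real forecasting systems use."""
    return (1 - weight) * i_model + weight * i_observed

# Early season: the model thinks 0.1% of the population is infectious.
s, i, r = 0.999, 0.001, 0.0
# A new surveillance report implies 0.5%; pull the model state toward it,
# then forecast the next ten weeks.
i = assimilate(i, 0.005)
future = forecast(s, i, r, beta=0.6, gamma=0.25, weeks=10)
```

The key idea is the cycle: simulate, compare against the latest report, correct the state, simulate again. Richer data streams mean more frequent and more trustworthy corrections.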
Right now, according to Shaman’s system, California is clearly past the height of its season. Texas has stayed at near-peak levels for five weeks straight. The Northeast has probably just crescendoed, though some cities like Boston and Providence are still about a week away.
The model has proven accurate up to 10 weeks out, but it’s not perfect for every city and every week. This season, Shaman’s predictions of peak timing in New York City were spot-on going back to the second week of December. But for Houston, which experienced unprecedented levels of flu activity throughout this season, the predictions were shakier, since historical data was not as informative.
Until flu season is over, Shaman won’t be able to say how well his algorithms did overall. And he’s still refining the approach. “The hope is that it gets better over time with more observations and stronger models,” he said.
He is not alone in trying to develop better models of how flu spreads, waxes, and wanes. Google famously attempted to use search trends to approximate flu prevalence. Scientists at the University of Osnabrück in Germany partnered with IBM to link Twitter data with CDC reports. Harvard University and Boston Children’s Hospital developed an online map called Flu Near You, which analyzes and charts self-reported user data to show where signs of the virus are clustered. Researchers at NYU are conducting an ongoing study that asks citizen scientists to self-report symptoms online and mail in nasal swabs and saliva samples.
Pharmaceutical companies and start-ups have also tried their hands at building flu maps, often using crowdsourced data. The Weather Channel hosts a map—sponsored by Theraflu—that is populated by “SickScore” data, which draws from tweets, Facebook posts, and self-submitted user reports of flu symptoms to show (though not predict) virus activity levels. A GlaxoSmithKline-sponsored app called Flumoji, developed in part by MIT researchers, culls user-reported symptoms to track the spread of flu in real time.
Crowdsourced data can be valuable, especially in developing countries where official health data can lag by months. But on their own, these maps tend to overpredict, said Mauricio Santillana, a mathematician, assistant professor at Harvard Medical School, and faculty member of the Computational Health Informatics Program at Boston Children’s Hospital. “They don’t add the nuances when they try to predict at a hyperlocal level,” he said. Many are also commercially driven, so he questions their objectivity. “They’re produced by people who’d benefit by people rushing into pharmacies.”
Like Shaman, Santillana is working to identify mathematical methodologies to monitor and forecast flu activity. In his research, he leverages Google search activity, Twitter data, clinician search engines, electronic health records, Flu Near You, and historical flu activity information to build machine-learning algorithms that make their own predictions.
“Over the years, we have learned that relying on a single data source to track flu (say, using only Twitter microblogs) may introduce undesirable noise in our flu tracking systems,” he wrote via email. Santillana offered an example: When people hear about a potential disease outbreak, they often turn to Google to “panic search” for more information. “This, in turn, causes disease monitoring systems that are based on only Google search activity to produce unrealistically ‘high’ flu estimates,” Santillana said.
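The logic of blending sources can be illustrated with a small sketch. This is a hypothetical example, not Santillana's method: each source's estimate is weighted inversely to its recent error, so a panic-search spike in one stream gets down-weighted instead of dominating the blend. The source names and numbers are invented.

```python
def blended_estimate(estimates, recent_errors):
    """Combine per-source flu-activity estimates, weighting each source
    inversely to its recent absolute error (less error -> more trust)."""
    weights = {src: 1.0 / err for src, err in recent_errors.items()}
    total = sum(weights.values())
    return sum(estimates[src] * weights[src] for src in estimates) / total

# Suppose true ILI activity is near 4%, but news coverage has inflated
# the search-based estimate. Recent errors make the search stream the
# least trusted and the clinical stream the most trusted.
estimates = {"search": 9.0, "twitter": 4.6, "clinical": 4.1}
recent_errors = {"search": 2.0, "twitter": 0.8, "clinical": 0.4}
print(round(blended_estimate(estimates, recent_errors), 2))
```

The blend lands much closer to the clinical and Twitter figures than a naive average would, which is the point: no single stream is trusted outright, and a stream that has been wrong lately counts for less.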
Google’s flu-tracking project ended in 2015*, two years after the search giant overestimated the peak of the flu season by 140 percent. Wired called the miss an “epic failure” that showed the perils of relying on big data for human solutions. Google now provides a subset of search data to a handful of epidemiological researchers around the world; Santillana and Shaman are both among them. “You can’t treat any single dataset as the gold standard,” said Shaman. Every source has its biases and flaws.
So is the CDC sitting idly by with its single dataset of clinical records? Not quite. Since 2013, the Influenza Division has been working to improve its forecasting methodologies by collaborating with outside teams of researchers like Shaman and Santillana. On a beta website, the agency posts flu forecasts based on their work. This year, for the first time ever, the CDC is sponsoring a “State FluSight challenge,” asking states to submit public records of influenza-like illness. In the past, the CDC has only ever charted national and regional flu trends.
And it’s never really tried to make forecasts at all; historically, the CDC has focused on surveillance. That’s changing. “If we know we’ll have a season with high or moderate severity, or a peak in December versus March, those types of forecasts, if reliable, would be really helpful for communicating when vaccinations should happen,” said Matt Biggerstaff, an epidemiologist in the CDC's influenza division leading its influenza forecasting initiative. “You’d really want to push a vaccination message in October or November if the peak is in December.”
The CDC might also be capable of making better recommendations for controlling pandemics, or mass outbreaks in localized areas. “If you close a school after the peak, you haven’t really done much to stop the spread of flu,” Biggerstaff said. “But if you close on the uptick, you can have a much bigger impact.” Next year, he expects that these forecasts will be integrated into the public-facing flu reports on the CDC’s official website.
For now, based on his models, Santillana expects that flu cases are still rising this year. Individually, some of his datasets suggested the national peak had already occurred. But taken overall, the worst may indeed be yet to come. The one way to know the real “ground truth”? “Only if we were measuring every person in every place,” he said.
*CORRECTION: A previous version of this article stated that Google Flu Trends stopped publishing results in 2013.