A pilot program in Los Angeles is testing a more scientific way to reach “influencers” who persuade and educate their peers.
While out with friends in Los Angeles last spring, Cody Woods found himself sharing what he had learned about ways to avoid contracting HIV. A few weeks earlier, he had been immersed in a similar discussion about sexually transmitted diseases on Facebook, correcting something a friend had posted about hepatitis B transmission.
“I remember one of my friends posting [on Facebook] about Hep B and that you can get it through making out with someone,” says Woods, 22. “I commented that ‘actually it can’t be transmitted that way,’ and I explained a bit more about what I had learned.”
Woods was a participant in an HIV and sexually transmitted infections prevention program focused on homeless youths. Unlike most HIV prevention programs targeted to the homeless population—about 46,000 people in Los Angeles—the project uses artificial intelligence to identify public health “influencers,” who are more likely to share information through peer-to-peer education.
The AI tool, called HEALER (Hierarchical Ensembling based Agent which plans for Effective Reduction in HIV spread), uses an algorithm based on “influence maximization”: given a map of who talks to whom, it selects the small set of individuals best positioned to spread information through the whole network.
Identifying and empowering community members like Woods worked: In a series of pilot studies presented and published over the last few months, 70 percent of the homeless youths who were identified through the algorithm received HIV information, compared to 25 percent in a control group. Thirty-seven percent changed their behavior and got tested for HIV and other STIs, whereas no change was seen in the control group.
The project suggests the model could have a large impact on public health awareness, particularly in cities and among vulnerable groups.
Existing health promotion programs often choose peer educators based on perceived popularity and influence within a group. Yet these individuals tend to stay within their (largely static) social group, not spreading crucial information to people with whom they lack social ties.
Milind Tambe, the founding co-director of the USC Center for Artificial Intelligence in Society (CAIS), which collaborated on the project, emphasizes that HEALER works differently, and incorporates the nuances that underlie social groups.
“What HEALER recognizes is that we can’t just reach the most ‘popular’ youth in the community. We need to reach different social networks,” Tambe says.
Homeless youths (people under the age of 25) like Woods took part in small-group HIV/STI education sessions run by USC social workers. They were then surveyed, rating their peers on closeness and on willingness to share information about sexual behavior. Based on these answers, HEALER identified new youths to complete the next cycle of training, with the goal of reaching as many young people as possible across Los Angeles.
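HEALER's actual planner must cope with uncertain network data and youths who may not show up, but the core influence-maximization idea it builds on can be sketched with the standard greedy seed-selection heuristic over a toy friendship graph. Everything here — the names, the friendship graph, and the 0.3 spread probability — is a hypothetical illustration, not the project's data or method:

```python
import random

def simulate_spread(graph, seeds, p=0.3, trials=200):
    """Monte Carlo estimate of how many people are eventually informed
    when `seeds` start sharing, under a simple independent-cascade model:
    each newly informed person passes the message to each friend with
    probability p."""
    rng = random.Random(0)  # fixed seed so comparisons are repeatable
    total = 0
    for _ in range(trials):
        informed = set(seeds)
        frontier = list(seeds)
        while frontier:
            next_frontier = []
            for person in frontier:
                for friend in graph.get(person, []):
                    if friend not in informed and rng.random() < p:
                        informed.add(friend)
                        next_frontier.append(friend)
            frontier = next_frontier
        total += len(informed)
    return total / trials

def greedy_influencers(graph, k, p=0.3):
    """Pick k peer educators one at a time, each time choosing the person
    who adds the most expected reach given those already chosen. This is
    the classic greedy heuristic for influence maximization; it naturally
    spreads picks across separate friend groups instead of stacking them
    all in the single most 'popular' clique."""
    seeds = []
    for _ in range(k):
        best, best_gain = None, float("-inf")
        for person in sorted(graph):
            if person in seeds:
                continue
            gain = simulate_spread(graph, seeds + [person], p)
            if gain > best_gain:
                best, best_gain = person, gain
        seeds.append(best)
    return seeds

# Toy network: two friend groups with no ties between them.
friends = {
    "A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"],
    "X": ["Y", "Z"], "Y": ["X", "Z"], "Z": ["X", "Y"],
}
print(greedy_influencers(friends, 2))  # one pick lands in each group
```

Because a second educator in the same clique adds little marginal reach, the greedy step picks one person from each group — the "reach different social networks" behavior Tambe describes.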
The project, a collaboration between CAIS and A Safe Place for Youth (SPY) drop-in center, has now received a new grant from UCLA to expand the study to include up to 900 youths in L.A. The larger study will help inform how generalizable the findings are. The group will also be releasing a textbook next year on AI and social work that will include lessons about how AI could be used to dispense sexual health information, stop intimate partner violence, and judge where kiosks about health education should be placed in Los Angeles.
The term “influencer” is now commonly used to describe people with large social media followings who partner with brands to drive product sales.
If cities used the same concept for other public health campaigns, messages could potentially spread faster and be self-propagating, especially if AI like HEALER can gauge a person’s influence by the social circles they belong to rather than by their sheer number of contacts.
HEALER was developed in large part by graduate student Amulya Yadav, who left his job at Amazon a few years ago to start his PhD program at the University of Southern California. He had little idea, when he started in the tech industry in 2012, that he would one day become a staunch public health advocate, particularly in the area of HIV prevention.
“I wanted to give back to society, so I decided that pursuing this project could be one way to do it,” Yadav says, explaining that he’s excited to see computer science being used in this way.
According to Tambe, the study group is currently exploring how HEALER could be used to address other public health challenges, such as anti-vaccination sentiment gaining traction among Somali immigrants in Minnesota and in multiple communities in California. The principle is similar: by identifying a few key individuals and equipping them with accurate, easily digestible information, vaccination facts could potentially spread rapidly and sustainably; through influence maximization, no one is left out.
They also hope to make the case for global applications, such as HIV and tuberculosis prevention. The first discussions were held at Microsoft’s “AI for Social Good” conference in Bangalore, India.
The USC group is not the first to apply AI to public health, but according to Yadav, it is the first to use AI to identify influencers for the purpose of health promotion and disease prevention.
Susan Krenn, the Executive Director of the Johns Hopkins Center for Communication Programs (CCP), believes the study is promising, and builds upon previous work around social networks and global health.
“Social network analysis has been used for a long time to help public health educators decipher who the influencers (which we think of as “nodes”) are in the community ... Though [AI] is at an early stage, I’m optimistic that it could be a tool to build upon existing knowledge and improve public health awareness, particularly in difficult-to-target and underserved groups,” Krenn says.
AI expert Suresh Venkatasubramanian, a professor in the School of Computing at the University of Utah who also blogs at Algorithmic Fairness, is more cautious. Though he applauds the potential of influence maximization in the USC study, his own research has found that bias can still make its way into automated decision-making systems, an effect other studies have also reported. This could limit who is reached and stymie the public health impact.
“For example, suppose I’m trying to hire someone, and I only reach out within my (social) network to spread the word about the job. Then people not ‘in the know,’ who might be qualified, will not have the opportunity to apply. This is a version of the ‘it’s not what you know, but who you know’ issue in hiring, but with more impact because of the power and reach of social networks,” he explains.
There are other drawbacks to using AI in this way, especially with vulnerable groups. Tambe recalls a surprising ethical wrinkle: some of the youths HEALER selected experienced a spike in self-esteem from being part of the project and responsible for public health messaging. Many became more empowered to seek jobs and other opportunities, for instance.
“They no longer felt invisible…some told us that the machine ‘saw’ them as people,” Tambe says. On the other hand, those youth who were not chosen did not seem as empowered.
If the bias in AI systems like HEALER can be managed, the potential to more effectively target public health messaging is clear.
“These kids who are on the street need help and don’t have a lot of options…sometimes they don’t have a lot to offer…so bringing [them] into a project like this…it’s a very big deal,” says Woods.
This article is part of our project, “The Diagnosis,” which is supported by a grant from the Robert Wood Johnson Foundation.