How would you feel if a cop enticed you into accepting a fake Facebook friend request, then ran your posts through a machine learning program to “detect” your emotions? That’s what Boston’s police department wants to be able to do. And it makes the social media monitoring operations in the Chicago area that we covered on Wednesday seem like child’s play.
Boston police are facing pushback from community groups and city council members for their quiet plans to acquire $1.4 million worth of social media monitoring software that would have surveillance capabilities far beyond the tools used by other police departments around the country.
The department’s request for proposal calls for a program that uses machine learning and natural language processing to determine “sentiment” and “hostile verbiage” in social media posts. The tool would also help police operate and expand the number of covert social media accounts, or “virtual identities,” used in their social media monitoring. The RFP also discusses numerous ways to collect and map users’ posts, associates, and locations through sophisticated network and GIS mapping techniques.
The surveillance program has attracted opposition from civil liberties groups and at city council meetings, but, for now, Police Commissioner William Evans and Mayor Martin J. Walsh seem set on seeing it through, citing terrorism and other threats to public safety. “We’re not going after ordinary people,” Evans told WGBH Boston. “It’s a necessary tool of law enforcement and helps in keeping our neighborhoods safe from violence, as well as terrorism, human trafficking, and young kids who might be the victim of a pedophile.”
The program will be operated by the department’s Boston Regional Intelligence Center and personnel from the Metro Boston Homeland Security Region, according to The Boston Globe. Some police social media programs around the country have taken a less invasive approach than what is currently being proposed in Boston, and at far lower cost. In Arlington, Virginia, for example, police use a system called Social Sentinel that alerts law enforcement to threats via text, e-mail, and daily reports by scanning posts for key terms rather than monitoring individual accounts.
Kade Crockford, Director of the Technology for Liberty Program at the ACLU of Massachusetts, worries that the program’s automated data processing capabilities will vastly increase social media surveillance on innocent people.
“Right now, if they want to create a fake Twitter profile, an individual analyst has to go through all the work of maintaining their profile information and making sure to route their activities with the right IP addresses,” says Crockford. “But with this, they’ll have an automated system to do that work. That means exponential growth in the number of users they can target.”
Some city council members and civil liberties groups have expressed concerns about who the targets of this enhanced surveillance will be. In 2012, documents obtained by the ACLU of Massachusetts and the National Lawyers Guild showed that the Boston Regional Intelligence Center monitored the internal activities of political groups and filmed protests associated with the Occupy Wall Street movement, according to WBUR. The operations labeled peace groups such as Veterans for Peace, United for Justice with Peace, and Stop the Wars Coalition as “extremist.”
In another case, from March, the Boston Regional Intelligence Center provided intelligence for a “gang raid” at the Lenox Street Housing Development in Boston’s South End that nabbed 27 people. All were arrested on non-violent drug and firearms charges. The indictment cites evidence from a music video posted to YouTube in July 2014, involving residents of the housing project who “appear to be openly smoking marijuana.”
Police officials defend the program by pointing out that the data they would be monitoring is open source. “The technology will be used in accordance to strict policies and procedures and within the parameters of state and federal laws,” police spokesman Lieutenant Detective Michael McCarthy said in a statement to The Boston Globe. “The information looked at is only what is already publicly available.”
Thomas Nolan, an associate professor of criminology at Merrimack College and a former lieutenant in the Boston Police Department, argues that this claim is misleading. “They have access to mountains of data that none of us could ever retain and sift through, so it’s not just as simple as looking at publicly available data,” he says. “Thirty years ago, to establish these kinds of criminal links and charts they’d have to get a warrant to get data from phone companies … But the tech has evolved rapidly, and the law is lagging behind.”
Nolan believes that images from publicly posted social media could be taken out of context and used unfairly to arrest and prosecute people, especially young, poor people from black and Latino communities. “This is subject to anyone’s interpretation, particularly if the data is taken from a community that uses language in a way different from the mainstream dominant culture,” he says. “The meanings of gestures, words, and pictures in their communications are different… so if you are basing the foundation of your investigation on a fiction, and using that to establish probable cause, that would be troubling.”
As of now, police have not publicly named a vendor for the program, and many other questions about how it would work remain unanswered. “The Boston Police Department is not the NSA, but it seems to think it needs to be, for reasons that are unknown to anybody in Boston,” says the ACLU’s Crockford. “The LAPD paid $70,000 for social media monitoring over a three year period, and L.A. is way bigger. What could BPD possibly need to do with $1.4 million worth of software?”