How Search Engines Amplify Hate — in Parkland and Beyond

March 9, 2018
Dr. Safiya U. Noble is the author of Algorithms of Oppression: How Search Engines Reinforce Racism and is an assistant professor of communication at the University of Southern California, Annenberg School of Communication & Journalism. She is a partner in Stratelligence and co-founder of the Information Ethics & Equity Institute.


Tech companies have been slow to respond to the ways their platforms are used to amplify hate. Anonymity on social media often makes it difficult to identify the right-wing radicalization happening to some Americans online, even as those platforms expose users to violent and often racist disinformation.

We need new business practices and policies that address the public harm propagated on media-technology platforms, particularly as bad actors use these platforms to enact violence on others. Important developments are under way in commercial content moderation, which allows humans to flag threats and other forms of dangerous content, but these efforts have yet to reach the level of impact needed, given the volume of media that moves through these platforms. Algorithms and automated decision-making technologies are not yet sophisticated enough to recognize certain types of online threats before mass violence occurs.

We know that anti-government, anti-Black, anti-Muslim, anti-gay and anti-immigrant hate crime is a massive presence. It is also important to note how white nationalist trolls attempted to take credit for Nikolas Cruz in the immediate aftermath of the mass shooting he carried out at Marjory Stoneman Douglas High School in Parkland, Florida, on Feb. 14, as part of their desire to amplify and enact hate.

Read more at …