In the 1960s, reporters became attuned to the power they had over the public’s attention, and some tried to use it judiciously. While white supremacists, especially members of the Ku Klux Klan, offered privileged insider access to reporters who provided favorable coverage, the black press chose to ignore the Klan unless it was to highlight the group’s decreasing power. Jewish civil-rights organizations suggested that journalists practice “quarantine” and actively choose not to cover the American Nazi Party. The Klan and the Nazis wanted attention. In each of these situations, media outlets acted as gatekeepers that could strategically silence those seeking to use the press as a megaphone.
Social media have fundamentally changed who controls the volume on certain social issues. Facebook, Google, and other platform companies want to believe they have created a circumvention technology that connects people directly to one another without any gates, walls, or barriers. Yet this connectivity has also allowed some of the worst people in this world to find one another, get organized, and use these same platforms to harass and silence others. The platform companies do not know how to fix, or perhaps do not understand, what they have built. In the meantime, previously localized phenomena spread around the globe, so much so that the culture of American-style white supremacy turned up in a terrorist attack on Muslims in New Zealand.
As a sociologist at Harvard Kennedy School’s Shorenstein Center, I study how technology is used by social movements, including groups on the far left and the far right. Since the uprisings in the Middle East and elsewhere in 2011, we have witnessed thousands of protests and events inspired by and organized through social media. Progressive social movements routinely use networking technologies to grow their ranks and publicize their ideas. White supremacists have their own ways of deploying the same technology.
In the aftermath of outbursts of violence such as the one in New Zealand, traditional news outlets draw heavily on social-media postings for insights into the perpetrator’s motives and mine them for details that make stories sound more authoritative and vivid. Certain oddball phrases, internet memes, and obscure message boards garner mainstream attention for the first time. Inevitably, people Google them.
The extra attention that these ideas gain in the aftermath of a violent attack isn’t just an unfortunate side effect of news coverage. It’s the sound system by which extremist movements transmit their ideas to a broader public, and they are using it with more and more skill.
One variable remains consistent across all networked movements: The moderation policies of different platforms directly affect how groups amplify political ideologies online. White supremacists and other extremists tend to use anonymous message boards to plan manipulation campaigns. These places traffic in racist, sexist, and transphobic content and link to obscure podcasts and blogs. Moderation is rare and tends to occur only when too much attention is drawn to a certain post. In some forums, posts self-delete and leave few traces behind.
Far more useful in reaching a new audience are places such as Twitter, YouTube, and Facebook, which remove objectionable content—but may not do so before it spreads virally.
Taking advantage of that dynamic, the murderer in New Zealand posted a full press kit on an anonymous message board prior to live-streaming his terrifying acts on Facebook. Many have labeled it a manifesto, but it reads more like a collection of copy-and-pasted white-supremacist conspiracy theories and memes. It would never have been notable on its own. This individual did not have the power or influence to boost these worn-out tropes. This manifesto could probably have existed in perpetuity on obscure document-hosting sites, and no one would have noticed. For platforms, this kind of content is simply white noise.
Explosive violence was the signal necessary to call attention to these posts. The New Zealand attacker used the live-streaming feature of Facebook to control the narrative, even to the point of saying “Subscribe to PewDiePie,” a meme referencing a popular YouTube personality, during his broadcast. He succeeded in linking his deeds to PewDiePie’s fame. As of today, Google-search results for “PewDiePie” include references to the Christchurch attack.
The New Zealand attacker also knew that others would be recording and archiving the video for further amplification. By choosing to publish on an anonymous forum first, he ensured that a group of sympathetic trolls would re-upload the content in the wake of takedowns by the major platforms. We’ve seen this tactic many times before. Sometimes it’s used in playful ways. When Scientology tried to get a leaked promotional video featuring Tom Cruise removed from the internet, users made a point of reposting it in a variety of places, making it impossible to stamp out. Other instances are darker: Some users attempted to keep videos of a misogynistic murderer from Santa Barbara, California, on YouTube. The scale of these efforts can be startling. In the first 24 hours after the Christchurch attack, Facebook alone removed 1.5 million postings of the video. In a statement late Saturday, the company said it was still working around the clock to “remove violating content using a combination of technology and people.”
Weeks before Friday’s attack, the New Zealand shooter littered other social-media platforms with memes and articles about immigrants and Muslims to ensure that journalists would have plenty of material to scour. These sorts of cryptic trails are becoming an increasingly common tactic of media manipulators, who anticipate how journalists will cover them. The perpetrator clearly hoped that would-be white supremacists would hear a siren song in his words and deeds.
The sophistication of these manipulators presents a challenge for the media. In describing these dynamics, I’m not mentioning the New Zealand killer’s name. Other than PewDiePie, I’m not citing any of the personalities and tropes he tried to publicize. Withholding details runs counter to the usual rules of storytelling—show, don’t tell—but it also helps slow down the spread of white-supremacist keywords. Journalists and regular internet users need to be cognizant of their role in spreading these ideas, especially because the platform companies haven’t recognized theirs.
Just as journalists of the past learned to cover white supremacists differently from other groups, platform companies must address the role their technology plays as the megaphone for white supremacists. In designing, deploying, and scaling up their broadcast technologies, internet companies need to understand that white supremacists and other extremists will find and exploit the weak points. While Facebook, Google, Twitter, and others have resisted calls for accountability, there is no longer any doubt about how these platforms—and the media environment now growing up around them—are used to amplify hate.