The internet has become a fertile landscape for extremism

Members of the Ku Klux Klan face counter-protesters as they rally in support of Confederate monuments in Charlottesville, Virginia, U.S., July 8, 2017. Reuters/Jonathan Ernst


Extremism has always been with us, but the internet has allowed
ideas that advocate hate and violence to reach more and more
people.

Whether it’s the deadly “Unite the Right” rally in Charlottesville or the 2015 Charleston church massacre, it’s important to understand the role the internet and social media play in spreading extremism – and what can be done to prevent these views from leading to actual violence.

For six years, I’ve been director of the Center for Peace Studies
and Violence Prevention at Virginia Tech, which researches the
causes and consequences of violence in society.

While I’ve been studying extremist ideologies for over a decade,
I’ve focused on their online forms since 2013. From our research,
we’ve been able to track the growth of these views on the
internet – how they’re spread, who’s being exposed to them and
how they’re reinforced.

The internet’s fertile landscape

The First Amendment allows us to express any ideas, no matter how
extreme. So how should we define extremism? In one sense, it’s like Supreme Court Justice Potter Stewart’s famous line about pornography: “I know it when I see it.”

Extremism is generally used to describe ideologies that support
terrorism, racism, xenophobia, left- or right-wing political
radicalism and religious intolerance. In a way, it’s a political
term describing beliefs that don’t reflect dominant social norms
and that reject – either formally or informally – tolerance and
the existing social order.

Extremist groups went online almost as soon as the internet was
developed, and their numbers increased dramatically after 2000,
reaching over 1,000 by 2010. But the data on
organized groups don’t include the sheer number of individuals
who maintain websites or make extremist comments on social media
platforms.

As the number of sites spewing hate has grown, so has the audience
for their messages, with younger people particularly vulnerable. The
percentage of people between the ages of 15 and 21 who saw online extremist messages increased from
58.3 percent in 2013 to 70.2 percent in 2016.

While extremism comes in many forms, the growth of racist
propaganda has been especially pronounced since 2008: Nearly
two-thirds of those who saw extremist messages online said they
involved attacking or demeaning a racial minority.

Bubbles of hate

In recent years, the proliferation of social media – which gives
users the ability to reach millions instantaneously – has made it easier to spread extreme views.

But our online experiences may amplify extremism in subtler ways.
It’s now common practice for social networking
sites to collect the personal information of users,
with search engines and news sites using algorithms to learn
about our interests, wants, desires and needs – all of which
influences what we see on our screens.

This process can create filter bubbles that reinforce our preexisting beliefs, while information that challenges our assumptions or points to alternative perspectives rarely appears.

Every time someone opens a hate group’s website, reads its blogs,
adds its members as Facebook friends or views its videos, the
individual becomes enmeshed in a network of like-minded people
espousing an extreme ideology. In the end, this process can harden extreme worldviews – and make people comfortable spreading them.

Unfortunately, this seems to be happening. When we began our research in 2013, only 7 percent of respondents admitted to producing online material that others would likely interpret as hateful or extreme. Now, nearly 16 percent of respondents report producing such materials.

While most people who express extremist ideas do not call for
violence, many do. In 2015, about 20 percent of the messages people saw
online openly called for violence against the targeted group;
this number nearly doubled by 2016. Granted, not everyone
who sees these messages will be affected by them.

But given that the radicalization process often begins with simply being exposed to extremism, government authorities in the U.S. and around the world have been understandably concerned.

The role of social control


Facebook and other companies are banning accounts associated with hate groups. Facebook

While all of this seems bleak, there is hope.

First, companies such as GoDaddy, Facebook and Reddit are banning accounts associated with hate groups.

Perhaps more importantly – as we saw during and after
Charlottesville – people are defending diversity and tolerance.

Over two-thirds of our respondents report that
when they see someone advocating hate online, they tell the
person to stop or defend the attacked group.

Similarly, people are using social media to expose the identities of extremists, which is what happened to some of those involved in the Charlottesville rally.

Perhaps these acts of online and offline social control can
convince extremists that, somewhat ironically, a tolerant society
doesn’t tolerate extremist ideologies. This may create a more
tolerant virtual world, and, with luck, disrupt the
radicalization of the next perpetrator of hate-based violence.


