
MADRID, Feb 17 (IPS) – In this world of war; of massive weapons production, sale and use; of sharpening inequalities and deadly climate emergencies, hate speech and its inhumane impact are being amplified on an “unprecedented scale” by new technologies.
Hate speech has now reached dangerous highs, fueling discrimination, racism, xenophobia and staggering human rights abuses.
It is primarily aimed at those who are not “like us”: ethnic minorities; Black, “coloured” and Asian peoples; Muslims worldwide, through widespread Islamophobia; millions of migrants; and billions of poor people. In short, the most vulnerable, not least the world’s girls and women.
The UN reports that new communication technologies are among the most common ways to spread divisive rhetoric on a global scale, threatening peace around the world.
A new UN Podcasts series, Unite against hate, explains how this dangerous phenomenon is being tackled worldwide.
Online hate speech is on the rise
According to a leading international human rights organization, Minority Rights Group, one analysis recorded a 400-fold increase in the use of online hate speech in Pakistan between 2011 and 2021.
Being able to monitor hate speech can provide valuable information for authorities to predict future crimes or take action after the fact.
There is concern among human rights experts and activists that hate speech is becoming more common, with views once perceived as extreme moving into the mainstream.
An episode of Unite against hate features Tendayi Achiume, the outgoing UN Special Rapporteur on contemporary forms of racism, and Jaroslav Valůch, Project Manager for Fact-Checking and News Competence at the Prague-based media development organization Transitions.
“Hate speech is profitable”
For Tendayi Achiume, a former independent UN expert on human rights, more attention needs to be paid to the business models of social media companies.
“Many times people want to talk about content moderation, what should be allowed on these platforms, without paying much attention to the political economy of these social media platforms. And it turns out that hate speech is profitable.”
Hate speech and misinformation, closely related
Chris Tucker, CEO of The Sentinel Project, warns that hate speech and disinformation are closely related: “Hate speech loads the gun, disinformation pulls the trigger.”
“And that’s the kind of relationship that we’ve come to understand over the years.”
“It is now theoretically possible for any person with access to an Internet connection to become a producer of that kind of content. And it really changes things, and with a global reach,” adds Chris Tucker.
The Sentinel Project is a Canadian non-profit organization whose Hatebase initiative monitors trigger words that appear on various platforms and are at risk of turning into actual violence.
Tucker describes it as an “early warning indicator that can help us identify an increased risk of violence.”
It works by monitoring online spaces, particularly Twitter, looking for certain keywords in several different languages, and then applying contextual rules to determine whether a given post is likely to be genuinely hateful content.
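The keyword-plus-context approach described above can be sketched in a few lines of Python. This is an illustrative toy, not Hatebase's actual system: the term lists and rule names here are invented for the example, whereas real monitors maintain curated multilingual lexicons and far richer context models.

```python
import re

# Hypothetical trigger-term lexicon (illustrative only; real systems
# curate thousands of terms across dozens of languages).
TRIGGER_TERMS = {"vermin", "cockroaches"}

# Contextual cues: a raw keyword hit is not enough on its own.
TARGET_MARKERS = {"they", "them", "immigrants"}   # term aimed at a group
NEGATING_MARKERS = {"report", "research", "condemns", "quote"}  # discussing, not using

def classify(post: str) -> str:
    """Label one post 'likely_hateful', 'needs_review', or 'benign'."""
    words = set(re.findall(r"[a-z']+", post.lower()))
    if not words & TRIGGER_TERMS:
        return "benign"
    # Rule 1: reporting on or condemning a slur is not deploying it.
    if words & NEGATING_MARKERS:
        return "needs_review"
    # Rule 2: a trigger word aimed at a target group raises the risk level.
    if words & TARGET_MARKERS:
        return "likely_hateful"
    return "needs_review"
```

The point of the contextual rules is exactly what the article describes: keyword matching alone flags journalism and research about hate speech alongside hate speech itself, so the context step decides which hits feed the early-warning signal.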
In the Balkans
Another organization that does a similar type of hate speech mapping is the Balkan Investigative Reporting Network.
The network monitors every trial related to war crimes in Bosnia and Herzegovina, amounting to some 700 open cases.
When mapping hate, the network looks for four different aspects: “hateful narratives from politicians, discriminatory language, denial of atrocities and actual incidents on the ground where minority groups have been attacked.”
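The four aspects above amount to a small taxonomy for incident records. As a sketch only (the type and field names below are illustrative, not the network's actual schema), they could be modelled like this:

```python
from dataclasses import dataclass
from enum import Enum, auto

class HateAspect(Enum):
    """The four aspects tracked when mapping hate (names are illustrative)."""
    POLITICAL_NARRATIVE = auto()      # hateful narratives from politicians
    DISCRIMINATORY_LANGUAGE = auto()  # discriminatory language
    ATROCITY_DENIAL = auto()          # denial of atrocities
    PHYSICAL_INCIDENT = auto()        # on-the-ground attacks on minority groups

@dataclass
class MappedIncident:
    aspect: HateAspect
    source: str    # e.g. a speech transcript or a court record
    location: str
    summary: str
```

Tagging each record with one of the four aspects is what lets the network correlate political rhetoric with subsequent on-the-ground incidents, as described below.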
Politicians fuel hatred
According to Dennis Gillick, managing director and editor of the network’s Bosnia and Herzegovina branch, the main drivers of hateful narratives in the country are populist, ethno-nationalist politicians.
“The idea behind the whole mapping process is to prove the connection between political statements and political drivers of hate and the actual atrocities that are taking place.”
The network also wants to prove that “there is a lack of systematic prosecution of hate crimes and that hate speech allows this continuing cycle of violence, with more discriminatory language by politicians and fewer prosecutions.”
As a result of hate speech, we have seen an increasing number of far-right groups mobilize, explains Gillick.
Fake humanitarian groups spreading hate speech
“We see fake NGOs or fake humanitarian groups being mobilized to spread hateful or discriminatory language, to widen the gap between the three different ethnic and religious groups in this country.”
The real-world consequences reported by the network have included mosques or churches being destroyed or vandalized, depending on where a particular faith group is in the minority, and open calls for violence.
According to Gillick, this fuels the agenda of ethno-nationalist parties that want to create division.
Need to create counter narratives
The way to combat this toxic environment, according to Gillick, is to create counter-narratives, to spread accurate, factual information and stories that promote unity rather than division.
However, he admits that this is a tall order.
“It is difficult to counter public broadcasters – large media outlets with several hundred journalists and reporters publishing thousands of stories a day – with a group of 10 to 15 journalists trying to write about very specific topics in a different way, and to do the analytical and investigative reporting.”
Minorities under attack
Another organization that tries to create counter-narratives is Kirkuk Now, an independent media outlet in Iraq, which seeks to produce objective, quality content about minority groups and share it on social media platforms.
“Our focus is on minorities, internally displaced persons, women and children and, of course, freedom of expression,” said Kirkuk Now editor-in-chief Salaam Omer.
“We see very little content [about minorities] in the mainstream Iraqi media. And if they are actually portrayed, they are portrayed as problems.”
Social media moguls are being urged to change
The heads of many of the world’s largest social media platforms were urged to change their business models and become more responsible in the fight against increasing hate speech online.
In a detailed statement, more than two dozen UN-appointed independent human rights experts – including representatives from three different working groups and several special rapporteurs – called out chief executives by name.
They said the companies they lead “must urgently address posts and activities that advocate hatred and constitute incitement to discrimination, in line with international standards for freedom of expression.”
They also said that the new tech billionaire owner of Twitter, Elon Musk; Meta’s Mark Zuckerberg; Sundar Pichai, who heads Google’s parent company Alphabet; Apple’s Tim Cook; “and CEOs of other social media platforms” should “focus on human rights, racial justice, accountability, transparency, corporate social responsibility and ethics, in their business model.”
They further reminded the companies that responsibility for racial justice and human rights “is a central social responsibility”, advising that “respecting human rights is in the long-term interests of these companies and their shareholders.”
The human rights experts underlined that the International Convention on the Elimination of All Forms of Racial Discrimination, the International Covenant on Civil and Political Rights, and the UN Guiding Principles on Business and Human Rights provide a clear way forward for how this can be done.
Business failure
“We call on all CEOs and social media leaders to fully embrace their responsibility to respect human rights and tackle racial hatred.”
As evidence of the companies’ failure to get a handle on hate speech, the Human Rights Council-appointed independent experts pointed to a “sharp increase in the use of the racist ‘N’ word on Twitter” after it was recently acquired by Tesla CEO Elon Musk.
This showed the urgent need for social media companies to be more accountable “over the expression of hatred against people of African descent,” they argued.
Shortly after Mr. Musk took over, the Network Contagion Research Institute at Rutgers University in the USA highlighted that use of the N-word on the platform “increased by almost 500 percent within a 12-hour period” compared with the previous average, the human rights experts said.
© Inter Press Service (2023) — All rights reserved. Original source: Inter Press Service