Social media Toxicity Barometer reveals brand risk from online negativity – report

Image by Gerd Altmann from Pixabay

A new Toxicity Barometer survey has revealed the extent to which online negativity could place global brands at risk.

Almost any user of social media will have witnessed hostile or inappropriate comments or reactions at some point online. 

These can range from the relatively mild, “This is a load of bulls*it!”, to vile racist, sexist, or violent language designed to inflame.

Online Toxicity Barometer

That’s according to real-time social media moderation platform, Bodyguard.ai, which has launched its inaugural “Online Toxicity Barometer”, a white paper analysing more than 170 million comments made across 1,200 brand social media and communications channels in six languages over 12 months (July 2021 – July 2022). 

Critical time for social media

The report comes at a critical time in the social media world, especially given the recent acquisition of Twitter by billionaire Elon Musk, whose intentions over freedom of speech on the site remain uncertain.

Image by Photo Mix from Pixabay
Twitter time: The social media world is waiting to see how Twitter might change under Elon Musk’s stewardship.

Of all the comments analysed in the Bodyguard.ai report, 5.24% – approximately nine million – are toxic content: 3.28% of all comments are defined as hateful and 1.96% as junk, such as spam, scams, fraud or trolling.
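For scale, 5.24% of 170 million comments works out at roughly 8.9 million – hence the figure of approximately nine million.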

The hateful comments fall into nine sub-categories, with almost half (47%) rated as high or very high severity.

Categories of Hateful Comments (3.28% of total):

  • Insult – negativity towards a person or group
  • Hatred – an aggressive, personal attack on a specific target
  • Body shaming – mocking personal appearance
  • Sexual harassment – sexual comments targeted at an individual
  • LGBTQ+ phobia – discrimination based on sexual orientation or identity
  • Racism – discrimination based on race or colour
  • Moral harassment – incitement to physical violence or enjoyment of others’ misfortune
  • Threats – comments showing an intention or desire to inflict pain, injury or other hostile action
  • Misogyny – remarks that perpetuate stereotypes of women as inferior or weak compared to men

 

Hate list: The Bodyguard.ai report highlights the current forms of abuse found online.

Bodyguard.ai also detects junk, i.e. comments that pollute a community space, such as spam, scams, frauds and trolls.

Content that incites or justifies violence, whether physical or moral, represents just 6% of total toxic comments, yet still amounts to more than 300,000 direct attacks on an individual or group.

Discrimination accounts for over 200,000 comments, picked up by the platform’s intelligent moderation software. 

In many cases, the targets of this abuse are the employees responsible for moderating, interacting with, and responding to customers on a brand’s social media or communications platforms, such as customer forums.

This can have a direct commercial impact on brands and businesses, both through staff turnover and mental health issues, and through lost revenue: 40% of users leave a brand platform after their first exposure to toxic content.

Fine line for brands

The study also highlights the fine line brands need to tread between moderation and censorship in order to maintain their valuable customer feedback channels, while protecting their staff and customers from online toxicity.

Troll targets: Everyone is at risk from online abuse, even global brands.

Matthieu Boutard, President and co-founder of Bodyguard.ai, said: “95% of potential UK brand customers we spoke to mentioned maintaining freedom of speech as a concern. 

“Brands want customers to speak freely about poor service, or good! However, they also recognise that the line between personal opinion and personal attacks must be protected. 

“In any shop or office you walk into you will see a sign at the door saying that violence and harassment will not be tolerated.

“Yet too often, what would get us thrown out of a bar, shop or public building, is tolerated online. The human cost should not be overlooked!”

Freedom of speech vs safety: Bodyguard.ai President and co-founder Matthieu Boutard says the human cost of online abuse should not be ignored.

As society increasingly chooses to transact and interact online, the barometer provides a useful insight into the potential scale of the problem facing brands that do not have appropriate moderation in place. 

Microcosm of online interactions

These 170 million comments represent just a microcosm of online interactions, yet a trained human moderator needs around ten seconds to assess a single comment – meaning the white paper would have taken almost 54 years to produce had each comment been analysed manually. 
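The arithmetic behind that figure is straightforward: 170 million comments at ten seconds each is roughly 1.7 billion seconds, or about 470,000 hours of non-stop reading – just under 54 years.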

Today’s social media platform algorithms deployed to moderate content have an error rate of around 40%, largely because of their inability to detect nuance, recognise comments made between friends in a specific context, or overcome other shortcomings in their machine learning.

Hate and abuse: Discrimination is one of the most common problems facing social media today.

With the Online Safety Bill due to be introduced imminently, Bodyguard.ai is keen to see protections extended to business platforms as well as individual users.

Boutard added: “Behind every toxic comment there is a real person that is targeted on the basis of gender, sexual orientation, culture or just for doing their jobs.

“Ahead of the introduction of the Online Safety Bill, brands have an opportunity to get ahead of the issue and consider, plan and roll out moderation to protect their teams and their customers. 

“We would encourage digital leaders to be proactive in their approach and would love to see an industry standard or kitemark showing their commitment to this. 

“We’d also like to see B2B communications channels part of the consideration in the Online Safety Bill as part of making the internet a better, safer, freer place to connect and communicate.”