Marketing alliance plans to tackle harmful online content


The Global Alliance for Responsible Media (GARM), a cross-industry marketing alliance, has announced a three-pronged plan to tackle harmful content online.

Marketing alliance aims

The aim is to create a more sustainable and responsible digital environment that protects consumers, the media industry and society.

The announcement was made from the sidelines of the World Economic Forum meeting currently underway in Davos, Switzerland.

An estimated 620 million pieces of harmful content were removed by YouTube, Facebook and Instagram between July and September 2019.

GARMed and ready: Global alliance vowed at WEForum meeting in Davos to tackle harmful online content.

Most of this content was removed before consumers actually saw it, thanks to the platforms’ investments in teams and tools.

However, approximately 9.2 million pieces of harmful content still reached consumers during that 3-month period. This equates to roughly one piece of harmful content viewed per second.
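As a rough sanity check of the "one piece per second" figure (assuming a 92-day July–September quarter, which is not stated in the announcement):

```python
# Back-of-envelope check: 9.2 million harmful items over a 92-day quarter.
seconds_in_quarter = 92 * 24 * 60 * 60  # 7,948,800 seconds
harmful_items = 9_200_000

rate_per_second = harmful_items / seconds_in_quarter
print(round(rate_per_second, 2))  # prints 1.16 — roughly one per second
```

The result, about 1.16 pieces per second, supports the article's rounded claim.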

The GARM said it is taking a collaborative approach to protecting the four billion consumers online today. The goal is to eliminate harmful online content and ensure that bad actors have no access to advertiser funding.

The marketing alliance has partnered with the World Economic Forum to advance this goal of improving the safety of digital environments.

GARM has joined the Forum’s Platform for Shaping the Future of Media, Entertainment and Culture as a flagship project.

The Alliance was launched in June 2019 by the World Federation of Advertisers (WFA) in partnership with its US member, the Association of National Advertisers (ANA).

The initiative is driven by WFA and brings together an unprecedented coalition representing $97 billion in global advertising spending through 39 advertisers, six agency holding companies, seven leading media platforms and seven industry associations.

The coalition will accelerate progress by means of a three-pronged action plan, starting in June:

Shared definitions 

The Alliance has developed and will adopt common definitions to ensure that the advertising industry is categorising harmful content in the same way.

The 11 key definitions, covering areas such as explicit content, drugs, spam and terrorism, will give platforms, agencies and advertisers a shared understanding of what constitutes harmful content and how to protect vulnerable audiences, such as children.

Establishing these standards is the first step needed to stop harmful content from being monetised through advertising.

Common tools and systems

The Alliance will develop and adopt common tools that create better links between advertiser controls, media agencies’ tools and the platforms’ efforts to categorise content.

Creating these linkages will improve transparency and accuracy in how media investments are steered towards safer consumer experiences. This applies across images, videos and editorial comments.

Independent oversight

The Alliance will establish shared measurement standards so that the industry and platforms can fairly assess their ability to block, demonetise, and take down harmful content.

Transparency via common measures and methodologies for advertisers, agencies and platforms is key to guiding actions that enhance safety for consumers.

Adopting key measures and agreeing to independent verification will be key to driving improvement for all parties, GARM said. It added that it will look to track these annually.

A special working group from the GARM will be activating this strategy starting in April.

This new strategy is a major step towards accelerating and integrating important efforts to improve safety across the media supply chain.

The long-term vision is to drive growth and connectivity for society on ad-supported media platforms, which foster and enable civil dialogue.

GARM is taking further steps which represent the industry’s contribution to the challenge of eliminating harmful content. The plan complements approaches that have been taken by governments and online platforms.