Misinformation across media & social platforms putting brands at risk


A new study by IPG Mediabrands has revealed a dramatic rise in both disinformation and misinformation; identified which social media platforms are making strides to eliminate misleading content and which are not; and outlined what brands can do to ensure a more brand-safe environment.

The research examined the accelerating amount of inaccurate and misleading content appearing across online news outlets and social media platforms.

Misinformation report

The Dis/Misinformation Challenge for Marketers report cites a Pew Research Center study showing two-thirds of US news consumers believe social media has a negative effect on how information and news are shared, with one-third of that group citing the spread of misinformation as the major cause of that negative opinion. 

In the UK, 85% of consumers surveyed by the Trustworthy Accountability Group and Brand Safety Institute said they would reduce or stop buying from brands that advertised near COVID-19 misinformation.

In addition to brand advertising appearing alongside misleading and deliberately false content, in some cases, brands themselves are the topics of misleading content. 

A 2020 conspiracy campaign linked online furniture retailer Wayfair to child trafficking, while the out-of-the-blue theory that 5G caused the coronavirus harmed the entire telecom industry.

Cancel culture: brands that get embroiled in fake news and misinformation on social media are at risk.

Dis/misinformation creates real implications for brands, including cancel-culture backlash, which is leading to defence actions by marketers and agencies.

Within the report, Mediabrands compared platforms’ policies, as well as highlighting the proactive measures the platforms are, and are not taking. 

Twitch and Snapchat are highlighted for accounting for off-platform behaviour in their policies, Reddit and Twitch for their community moderation approach, and Facebook and YouTube for their redirects to authoritative sources of information across multiple key topics. 

The key finding is that while some good practices are emerging in the fight against misinformation and disinformation, platforms should look across the fence at one another to combine and align their efforts.

“The report details how social sharing of misinformation and the deliberate creation of disinformation proliferates,” said study author Harrison Boys, Director of Standards & Investment Product EMEA at Mediabrands unit MAGNA. 

“Marketers are right to be concerned when they find their advertising near misleading content as, unchecked, it could harm their reputations and the communities they serve. 

“The industry, which joined forces against online hate speech and supported online privacy, needs to take a stand against misinformation and disinformation today.”

Mediabrands calls on platforms to take immediate steps to remove disinformation and misinformation for the sake of consumers and brand safety. 

The report suggests that capabilities such as information centres, while encouraging, should not be relied on as the most effective way for platforms to combat the issue. Instead, Mediabrands recommends that platforms explicitly ban disinformation within their policies and report on enforcement, much like the changes seen with hate speech in 2020, in combination with comprehensive proactive measures.

Brand safety

As marketers become more involved in brand safety across platforms, understanding the separation between media responsibility and brand safety adjacency is important. 

An overall view of the “health” of a platform, as aligned to a brand’s values, is the crux of media responsibility and the first step of an overall platform safety strategy.

Typically, newsfeed environments offer little adjacency control, so brands should look to the strength of a platform’s ability to moderate for brand safety.

Responsible platforms need to provide brands with the tools and processes needed to build a safe and suitable environment on their sites.

Pushers of fake: Mediabrands’ report shows leading social channels allow for disinformation.

“While some platforms have policies on disinformation and misinformation, they are often vague or inconsistent, opening the door to bad actors exploiting platforms in a way that causes real-world harm to society and brands,” said Joshua Lowcock, Global Chief Brand Safety Officer and US Chief Digital Officer at Mediabrands network agency UM Worldwide. 

“With many platforms embracing or pivoting to the creator economy, platforms need to hold the organic reach of individual user content to the same standards of accountability as paid reach on the platform.”

Understanding platforms’ efforts to combat misleading content

The report details prevalent dis/misinformation topics around elections, health and vaccines, environment, and media manipulation.

It also breaks down steps the platforms are taking (or not) to create a safer environment for advertisers and consumers. 

Key findings include:  

  • Pinterest stood out for making a “U-turn on handling misinformation.” In 2019, it established guidelines to prevent vaccine misinformation, which were strengthened when COVID-19 hit. Mediabrands’ ongoing Media Responsibility Index reports have tracked how Pinterest now suspends accounts that continually spread misinformation and fact-checks accounts with large followings.
  • Facebook, Instagram, YouTube and Twitter create leeway for misinformation and disinformation campaigners to push fake news. On YouTube, advertisers have seen their spots run alongside misleading videos, in effect monetising them.
  • Facebook’s testing of a “satire” tag addresses how “quite literally, people do not know if something is factual news or a joke,” noted the report. 
  • In a similar throttling measure, Twitter recently started asking posters if they read a news story before sharing it. 
  • Reddit is at a turning point, where the freewheeling atmosphere on some subreddits is balanced by robust community moderation. Twitter’s “Birdwatch” community-moderation program takes a page from Reddit’s.

“Responsible brands want to ensure their messages are seen and shared alongside the right content on platforms,” said Elijah Harris, MAGNA’s EVP, Global Digital Partnerships and Media Responsibility.

“Consumers have also been watching this space to see how platforms adapt their moderation and enforcement techniques to curb content looking to spread false information in a coordinated way.

“Dis/misinformation will remain an integral part of our research and upcoming Media Responsibility Index as we work with clients and platforms to share best practices around rooting out harm and upholding suitability standards for advertisers.”