
YouTube, the world’s largest video platform, is home to billions of videos, but it is also prone to misinformation, which has moved from the marginal to the mainstream. In a recent blog post, YouTube shed light on how it tackles the problem. Let’s go deeper and see how YouTube tackles misinformation.

Neal Mohan, Chief Product Officer of YouTube, says: “We’ve seen misinformation spin up in the midst of breaking news. Following tragic events like violent attacks, theories emerge by the second on everything from a shooter’s identity to motive. In these moments, what happens in the world also happens on YouTube. We reflect the world around us, but know we can also help shape it. And that’s why we’ve made stopping the spread of misinformation one of our deepest commitments.”

YouTube removes nearly 10 million videos a quarter, the majority of which don’t even reach 10 views. Speedy removals will always be important, but YouTube’s larger goal is to increase the good and decrease the bad. That is why YouTube is raising up information from trusted sources and reducing the spread of videos with harmful misinformation. When people search for information, they now get results optimized for quality, not for how sensational the content is. YouTube explained why this is its core approach:

First, if we only focus on what we remove, we’re missing the massive amount of content that people actually see. Bad content represents only a tiny percentage of the billions of videos on YouTube (about 0.16–0.18% of total views turn out to be content that violates our policies). And our policies center on the removal of any videos that can directly lead to egregious real-world harm. For example, since February of 2020 we’ve removed over 1M videos related to dangerous coronavirus information, like false cures or claims of a hoax. In the midst of a global pandemic, everyone should be armed with absolutely the best information available to keep themselves and their families safe.

But identifying clearly bad content requires a clear set of facts. For COVID in particular, YouTube relied on expert consensus from health organizations like the CDC and WHO to track the science as it developed. In situations like the aftermath of an attack, by contrast, unverified crowdsourced tips have wrongly identified culprits or victims, to devastating effect.

In the absence of certainty, should tech companies decide when and where to set boundaries in the murky territory of misinformation? Mohan’s strong conviction is no.

YouTube saw this play out in the days that followed the 2020 U.S. presidential election. Even without an official election certification to point to immediately, YouTube allowed voices from across the spectrum to remain up. At the same time, it began removing content with false claims that “widespread fraud changed the outcome” of any past U.S. presidential election.

YouTube further says that an aggressive approach to content removals would have a chilling effect on free speech. The company considers removals a blunt instrument: used too widely, they can send the message that controversial ideas are unacceptable. “We’re seeing disturbing new momentum around governments ordering the takedown of content for political purposes. And I personally believe we’re better off as a society when we can have an open debate. One person’s misinfo is often another person’s deeply held belief, including perspectives that are provocative, potentially offensive, or even in some cases, include information that may not pass a fact checker’s scrutiny. Yet, our support of an open platform means an even greater accountability to connect people with quality information. And we will continue investing in and innovating across all our products to strike a sensible balance between freedom of speech and freedom of reach,” Neal Mohan added.

Some people may disagree with YouTube’s approach, but for the YouTube team, responsibility is good for business.