Algorithmic Radicalization — The Making of a New York Times Myth

Mark Ledwich
Dec 28, 2019


In 2019, Kevin Roose published a piece in the New York Times in which Caleb Cain, a liberal college dropout, described his experience of being radicalized by “a vortex of far-right politics on YouTube.” Caleb was a victim — as the theory goes — of the YouTube recommendation algorithm, which guides users down an “alt-right rabbit hole” of increasingly extreme far-right political content in order to maximize watch time and keep people glued to the site. This story is one of many penned by elitists in a battle over how we consume media, and we now have evidence that they are spectacularly wrong.

The idea that users are manipulated by an impersonal algorithm into a world of conspiracy theorists, provocateurs and racists is a story that the mainstream media has been eager to promote. An opinion piece for the New York Times went so far as to call YouTube “one of the most powerful radicalizing instruments of the 21st century.” What makes this story easy to believe is our own experience of YouTube’s recommendations. We are shown a wider spectrum of content than on traditional media, and the wildest of it is what we are most likely to remember and mention.

A study I recently conducted with Anna Zaitsev (a postdoctoral researcher at UC Berkeley) uses a rigorous methodology to classify channels into ideological groupings, together with a large dataset of recommendations, to shed light on the most widely held claims about algorithmic radicalization.

Three reviewers manually selected 760+ political channels and watched hours of content to classify each as left, center or right and to assign soft tags (e.g. MRA, libertarian, anti-SJW). Our system has collected 23M+ recommendations since November for 657K videos created since 2018.
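To make the roll-up concrete, here is a minimal sketch in Python of how recommendations could be joined to those hand-assigned labels and aggregated into group-to-group counts. The channel ids, field names and data shapes are assumptions for illustration, not the study’s actual pipeline.

```python
from collections import Counter

# Hand-labelled channels: id -> left/center/right plus soft tags (assumed schema, made-up ids)
channel_labels = {
    "UC_fox_news": {"lr": "right",  "tags": ["Partisan Right", "MSM"]},
    "UC_lwt":      {"lr": "left",   "tags": ["Partisan Left", "MSM"]},
    "UC_liberty":  {"lr": "center", "tags": ["Libertarian"]},
}

# Scraped recommendations as (channel recommended from, channel recommended to) pairs (assumed shape)
recommendations = [
    ("UC_fox_news", "UC_liberty"),
    ("UC_liberty", "UC_fox_news"),
    ("UC_lwt", "UC_fox_news"),
]

def group_of(channel_id: str) -> str:
    """Treat the first soft tag as the channel's group (a simplification)."""
    return channel_labels[channel_id]["tags"][0]

# Roll channel-level recommendations up into group-to-group counts.
flows = Counter(
    (group_of(src), group_of(dst))
    for src, dst in recommendations
    if src in channel_labels and dst in channel_labels
)
print(flows)  # e.g. Counter({('Partisan Right', 'Libertarian'): 1, ...})
```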

We also built recfluence.net, which collects recommendations daily and lets you explore the recommendation flows across YouTube’s political landscape.

By looking at recommendation flows between various political orientations and subcultures, we show that YouTube’s late 2019 algorithm is not a radicalization pipeline. In fact, it:

  • Removes almost all recommendations for conspiracy theorists, provocateurs and white Identitarians
  • Benefits mainstream partisan channels such as Fox News and Last Week Tonight
  • Disadvantages almost everyone else

This is clear to see by comparing an estimate of the recommendations presented (grey) to the recommendations received (green) for each group. A recommendation impression is an estimate of the number of times a video was shown as a recommendation alongside another video; the recommended video “receives” that impression, while the video it was shown next to “presents” it.
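Here is a minimal sketch of that comparison, assuming a simple edge list of estimated impressions; the group names and numbers below are illustrative, not figures from the study. For each edge, the source group “presents” the impressions and the destination group “receives” them.

```python
from collections import defaultdict

# (source group, destination group, estimated daily impressions); illustrative numbers only
impression_edges = [
    ("Conspiracy", "Partisan Right", 9_300),
    ("Partisan Right", "Conspiracy", 1_000),
    ("Center/Left MSM", "Partisan Left", 20_000),
    ("Partisan Left", "Center/Left MSM", 5_400),
]

presented = defaultdict(int)  # impressions shown on a group's own videos (grey)
received = defaultdict(int)   # impressions pointing at a group's videos (green)

for src, dst, n in impression_edges:
    presented[src] += n
    received[dst] += n

for group in sorted(set(presented) | set(received)):
    print(f"{group:18} presented={presented[group]:>7,}  received={received[group]:>7,}")
```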

The next chart shows the relative proportion of recommendations between groups. This matches my intuition about the real-world ideological closeness of these groups. It also shows how the algorithm encourages filter bubbles, especially for partisans.

The chart below compares the daily balance of recommendation impressions between every pair of groups. For example, if there were 4K recommendations from MRA videos to Libertarian videos and 3K the other way, the chart would show 1K with an arrow pointing towards the Libertarian group. It is the best chart for understanding YouTube’s influence on people’s views.
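That balance can be computed directly from the pairwise impression counts, as in this minimal sketch using the hypothetical 4K/3K figures from the example above:

```python
# Daily recommendation impressions between a pair of groups (the 4K/3K example from the text)
flows = {
    ("MRA", "Libertarian"): 4_000,
    ("Libertarian", "MRA"): 3_000,
}

def net_flow(a: str, b: str) -> int:
    """Positive means the net flow of recommendations points from group a towards group b."""
    return flows.get((a, b), 0) - flows.get((b, a), 0)

print(net_flow("MRA", "Libertarian"))  # 1000 -> the chart draws a 1K arrow towards Libertarian
```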

Some groups are clearly advantaged; 14.6M more recommendations flowed from Center/Left MSM videos towards Partisan Left than the other way. When it comes to more fringe groups, the recommendations always flow away from them. For viewers of Conspiracy videos, there are 8.3M more recommendations to Partisan Right videos than vice versa.

The following chart is a simplification of the one above, showing only the most out-of-balance recommendation flows.

There is a net flow of recommendations towards Partisan Left from Social Justice (5.9M per day) and Center/Left MSM channels (5.2M). There is also a “pipeline” from most groups via Center/Left MSM. The Partisan Right benefits from a net flow of recommendations from groups on the right (16.9M).

We also categorized channels by media type. The algorithm significantly advantages mainstream media and cable news content at the expense of independent YouTubers and Missing Link Media (e.g. The Young Turks), with a slant towards partisan channels.

Contrary to the narrative promoted by the New York Times, the data suggests that YouTube’s recommendation algorithm actively discourages viewers from visiting content that one could categorize as radicalizing or otherwise questionable.

This chimes with what YouTube themselves have said. In a recent blog post, the company said that over the last two years they’ve “been working to raise authoritative voices on YouTube and reduce the spread of borderline content and harmful misinformation.” Researchers and journalists like Kevin Roose, who squarely blame the algorithm for creating “YouTube radicals” or sending people down a “rabbit hole,” should not be trusted.

Curiously, just as Caleb Cain was radicalized by far-right videos, he was then de-radicalized by left-wing ones. Over time, he had “successfully climbed out of a right-wing YouTube rabbit hole, only to jump into a left-wing YouTube rabbit hole.” Kevin Roose remarks:

What is most surprising about Mr. Cain’s new life, on the surface, is how similar it feels to his old one. He still watches dozens of YouTube videos every day and hangs on the words of his favorite creators. It is still difficult, at times, to tell where the YouTube algorithm stops and his personality begins.

In his own way, Caleb embodies what the left-leaning legacy media sees as the archetypal actor of our mediatized, post-truth era: someone who lacks all critical thinking, consumes an endless stream of online information, and dogmatically believes whatever political position they are told. It’s hard not to notice how this meme mirrors the NPC meme created by the online right. NPC stands for “non-player character”: someone with no agency who blindly believes left-wing media propaganda.

Penn State political scientists Joseph Phillips and Kevin Munger describe this as the “Zombie Bite” model of YouTube radicalization: it treats users who watch radical content as “infected,” and assumes that this infection spreads. As they see it, the only reason this theory has any weight is that “it implies an obvious policy solution, one which is flattering to the journalists and academics studying the phenomenon.” Rather than look for faults in the algorithm, Phillips and Munger propose a “supply and demand” model of YouTube radicalization. If there is a demand for radical right-wing or left-wing content, that demand will be met with supply, regardless of what the algorithm recommends. YouTube, with its low barrier to entry and reliance on video, provides radical political communities with the perfect platform to meet a pre-existing demand.

Writers in old media frequently misrepresent YouTube’s algorithm and fail to acknowledge that recommendations are only one of many factors determining what people watch and how they wrestle with the new information they consume. I believe their fixation with algorithms and tech comes from subconsciously self-serving motives, a mechanical understanding of radicalization and a condescending attitude towards the public.

It works like this: if only YouTube would change their recommendation algorithm, the alternative media (the racists, cranks and conspiracy theorists) would diminish in power, and we would regain our place as the authoritative gatekeepers of knowledge.

Old media’s war on decentralized media is not limited to misinformation about YouTube’s algorithm. I believe this motivation partially explains why this wild piece against free speech and this hit piece on Cenk Uygur found their way into the paper of record.

Cenk Uygur is the outspoken creator of The Young Turks — the third-largest YouTube-native political channel in our dataset. Bernie Sanders retracted his endorsement of Uygur for a California congressional seat, after it emerged that he had a history of making offensive comments about women. On his show, Uygur has hosted white supremacist figures such as David Duke. In one clip that circulated on Twitter, Duke ends an interview by saying, “I am not what you call a racist,” to which Uygur sarcastically replies, “No, of course not.” This clip was cut off from its context and repurposed to show Uygur in a bad light, as if he actually agreed with David Duke.

For these kinds of scenarios, we might rely on “authoritative sources” to set the record straight, dispel falsehoods and explain the broader context for those who aren’t aware. But an NYT political correspondent, in a now-corrected article, failed to do any of these things, making no mention of the context or of the fact that Uygur’s reply was sarcastic.

These events, along with the promotion of the now-debunked YouTube “rabbit hole” theory, reveal what many suspect — that old media titans, presenting themselves as non-partisan and authoritative, are in fact trapped in echo chambers of their own creation, and are no more incentivized to report the truth than YouTube grifters.

Update (31 Dec 2019)

Anna and I have responded to criticism of the study here. I should have included data showing the recommendation-advantage chart for each month since Nov 2018. The algorithm has changed over this period, but a recommendation rabbit hole, if it ever existed, would have to have operated before then.

