Please see the disclaimer.

I just published a post about a talk that Tom Scott gave at the Royal Institution, and I have more comments. This time, it is about a question from the audience and Tom's answer.

The question and Tom's answer begin at 7:51 in the video below.

The correct answer to the question “What do you think is more likely to lead to radicalization: an echo chamber, or a place where there’s no regulations on free speech?” is, in fact, an echo chamber, as Tom said.

But the rest of the truth gets muddled in Tom's answer. He says that a free platform with a recommendation engine tends to become an attractor toward radicalization. That is also correct, but it wasn't quite the other half of the question.

The other half of the question had an implicit bent to it: an assumption that allowing free speech tends to radicalize people. So while Tom's answer was correct, it could have left people thinking that free speech could also lead to radicalization.

But if you think about it, that doesn't make much sense. After all, echo chambers, the number one answer, are the very epitome of a lack of free speech. Why are they so radicalizing when they have the opposite of free speech? The answer is that members of echo chambers only ever hear one argument; it is precisely the absence of free speech that makes echo chambers radicalizing. Radicalization is not inherent in free speech; if anything, free speech could have a moderating effect by exposing radicalized people to other arguments and showing that those arguments are not as bad as the echo chamber made them out to be.

Of course, if there is free speech, then some people will be exposed to ideas they hadn’t heard before and might end up going down a rabbit hole towards radicalization, though there is no guarantee that they will.

So what makes the platforms of today attractors toward radicalization? Tom already gave us the answer: recommendation engines.

Recommendation engines are designed to make users spend as much time as possible on the platform. One of the most effective ways to do that is to recommend more things just like the one the user just clicked on. Often, it also pays to recommend things that are similar but slightly more extreme. After each click, rinse and repeat, drifting further toward the extreme with every step.
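To make that feedback loop concrete, here is a minimal toy simulation in Python. It is not any real platform's algorithm; the `recommend` function, the "intensity" scale, and every number in it are invented purely to illustrate the ratchet effect.

```python
import random

def recommend(current_intensity):
    """Toy recommender: suggest items similar to the last click, biased
    slightly toward higher "intensity" (a stand-in for how extreme the
    content is). All numbers here are invented for illustration."""
    # Most candidates cluster around what the user just clicked on...
    candidates = [current_intensity + random.uniform(-0.1, 0.1) for _ in range(4)]
    # ...plus one candidate that is like it, "but even more so".
    candidates.append(current_intensity + random.uniform(0.1, 0.3))
    # A greedy engagement-maximizing engine surfaces the most intense
    # candidate, since that is what keeps this simulated user clicking.
    return max(candidates)

intensity = 0.1  # the user starts on mild content (0 = mild, 1 = extreme)
for step in range(10):
    intensity = min(recommend(intensity), 1.0)
    print(f"step {step + 1}: intensity = {intensity:.2f}")
# With each click the recommended intensity ratchets upward: the
# "rinse and repeat" drift described above.
```

Run it and the intensity climbs with nearly every step. A greedy engagement-maximizing engine, left to itself, keeps nudging the simulated user toward more extreme content.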

Thus, recommendation engines push people toward radicalizing rabbit holes. It is not free speech that creates radicalization; it is free speech plus a recommendation engine that creates an attractor toward radicalization. And that is an important distinction, because if social media companies start policing speech, they will only create more echo chambers, exacerbating the problem rather than reducing it.

Update, 04 Jan 2020: Recently, researchers have called into question whether YouTube’s recommendation engine radicalizes at all. If that is true, then it strengthens my main point even further: free speech does not cause radicalization.

If we want to reduce radicalization, we should not be wary of free speech; we should be wary of echo chambers and the recommendation engines that lead to them.