Henry Farrell and Cosma Shalizi (2024), “Bias, Skew and Search Engines Suffice to Explain Online Toxicity,” Communications of the ACM 67(4): 25-28 (preprint).
U.S. political discourse seems to have fissioned into discrete bubbles, each reflecting its own
distorted image of the world. Many blame machine-learning algorithms that purportedly maximize
“engagement” — serving up content that keeps YouTube or Facebook users watching videos or
scrolling through their feeds — for radicalizing users or strengthening their partisanship. Sociologist
Shoshana Zuboff [15] even argues that “surveillance capitalism” uses optimized algorithmic feedback
for “automated behavioral modification” at scale, writing the “music” that users then “dance” to.
There is debate over whether such algorithms in fact maximize engagement (their objective
functions also typically contain other desiderata). More recent research [3] offers an alternative
explanation, suggesting that people consume this content because they want it, independent of
the algorithm. It is impossible to tell which is right, because we cannot readily distinguish the
consequences of machine learning from users’ pre-existing proclivities. How much demand comes
from algorithms that maximize engagement or some other commercially valuable objective
function, and how much would persist if people got information some other way?
Even if we cannot answer this question definitively, we need to do the best we can. There
are many possible interface technologies that can help organize vast distributed repositories of
knowledge and culture like the Web.