Those sorts of edge cases are worrying, but at least in theory, they can be solved by tweaking the algorithms. In some ways, the harder question is what it means for kids’ experiences and development when the algorithm works correctly. Is it all basically fine?
The way YouTube treats videos for grown-ups gives some reason to worry. Guillaume Chaslot, a former Google engineer, recently conducted an experiment tracing YouTube’s recommendation algorithm. He found that, during the 2016 presidential campaign in the United States, users who viewed political videos were routinely steered toward ideologically extreme content and conspiracy theories.
Zeynep Tufekci, a University of North Carolina researcher, wrote in a New York Times Op-Ed article that Mr. Chaslot’s research suggested that YouTube could be “one of the most powerful radicalizing instruments of the 21st century.”
Is that true? We don’t have empirical proof that it is, but we can’t rule it out either, in part because companies like YouTube and Facebook tend to be guarded with the data that could be used to estimate their platforms’ impact. Facebook, to its credit, is bringing on social scientists and sponsoring research.
Still, it’s telling that companies like Facebook are only beginning to understand, much less manage, any harm caused by their decision to divert an ever-growing share of human social relations through algorithms. Whether they set out to or not, these companies are conducting what is arguably the largest social re-engineering experiment in human history, and no one has the slightest clue what the consequences are.