Personalized algorithms can quietly limit what people explore while making them feel more certain they understand a topic.
Personalized recommendation systems are designed to show people online content based on their past behavior, but new research suggests these same systems may interfere with learning. According to the study, when algorithms determined which information people saw, learning outcomes suffered.
The researchers found that when participants relied on algorithm-selected information to study a topic they knew nothing about, they explored only a narrow slice of the available material rather than examining its full range. As a result, participants often answered test questions incorrectly. Even so, they expressed strong confidence in their wrong answers.
The findings are troubling, said Giwon Bahg, who led the research as part of his doctoral dissertation in psychology at The Ohio State University.
Bias Can Form Even Without Prior Knowledge
Previous studies of personalized algorithms have largely examined how they influence opinions about political or social topics that people already understand to some degree.
“But our study shows that even when you know nothing about a topic, these algorithms can start building biases immediately and can lead to a distorted view of reality,” said Bahg, now a postdoctoral scholar at Pennsylvania State University.