We live in an era of ambient information. Amidst the daily flood of digital news, memes, opinion, advertising, and propaganda, there is rising concern about how popular platforms, and the algorithms they increasingly employ, may influence our lives, deepen divisions in society, and foment polarization, extremism, and distrust. For the past decade, Project Information Literacy (PIL) has conducted large-scale studies on how college students interact with information for school, for life, for work, and, most recently, for engaging with the news. The latest report from PIL stands at the intersection of these critical information practices and asks: How much do students know about how internet giants such as Google, YouTube, Instagram, and Facebook work, and, in turn, how they work on society?

This pivotal generation, born before the constant connectivity of social media, has come of age aware, cautious, and curious about the implications of the current information landscape. Deeply skeptical, many of these students are conditioned to do research for themselves rather than defer to experts or major news outlets. They understand that “free” platforms are convenient, but they also recognize that these platforms harvest massive amounts of personal data to target ads and influence the content they see. While many students worry about how the next generation will fare in terms of disinformation, privacy, and personal well-being, they do not fully understand how big data and artificial intelligence (AI) are being used in educational technology and society. Neither do their professors. While faculty are alarmed about the social impact of the internet giants and the loss of a common culture, they have little idea how to relate their concerns to the information tasks intrinsic to the courses they teach.

When librarians and educators first adopted information literacy and critical thinking as essential educational outcomes, the algorithm-driven platforms many of us turn to — Google, YouTube, Facebook, Instagram, and Amazon — did not exist. Though information literacy has grown to include news literacy in the wake of the “fake news” crisis, there is little consideration of how colossal sites like these influence what we see and learn, what we think, and ultimately, who we are. If we believe that information literacy educates students for life as free human beings who have the capacity to influence the world, then information literacy needs to incorporate an understanding of the ways that news and information flows are shaped by algorithms. To do this, we need to know more about how students interact with algorithm-driven platforms. We must also consider courses of action for educators preparing students to understand the technological and social forces shaping the circulation of news and information in society today.
In the growing research literature about students and algorithms, two recent studies help inform these efforts. A much-discussed 2018 survey of more than 4,500 Americans revealed widespread concerns about computer algorithms making automated decisions with real-life consequences, such as who gets a job or a loan. A 2017 survey of college students found that most were unaware of whether the news they got through Google and Facebook was filtered by algorithms. Many questions remain, however, about what students already know, and need to know, about the individual and social effects of algorithmic filters.