Clickbait Bias and Propaganda in Information Networks (Fister et al.)

Based on Mike Caulfield’s Web Literacy for Student Fact-Checkers, this is a short handbook for understanding and evaluating information in a networked environment that bombards us with misinformation, opinion, news, satire, memes, and all the feels.

What If I’m Not Sure About a Source’s Reliability?

Authority and reliability are tricky to evaluate. Whether we admit it or not, most of us would like to ascribe authority to sites and authors who seem to support our viewpoints and approach publications that disagree with our worldview with skepticism.

How do we escape our own prejudices? Try applying Wikipedia’s guidelines for determining the reliability of publications. These guidelines were developed to help people with diametrically opposed positions argue in dispassionate ways about the reliability of sources using common criteria. For Wikipedians, reliable sources are defined by process, expertise, and aim.

Process

Above all, a reliable source for facts should have a process in place for encouraging accuracy, verifying facts, and correcting mistakes. Note that reputation and process can be separate from issues of bias. The editorial pages of the New York Times have a center-left bias, while those of the Wall Street Journal have a center-right bias. The stories they choose to cover are also influenced by editors deciding what is important for their readership and role. Yet fact-checkers of all political stripes are happy to be able to track a fact down to one of these publications, since both have reputations for a high degree of accuracy and issue corrections when they get facts wrong.

The same thing applies to peer-reviewed publications. While there is much debate about the inherent flaws of peer review, peer review does mean there are many eyes on data and results. This process helps to keep many obviously flawed results out of publication. If a peer-reviewed journal has a large following of experts, that provides even more eyes on the article, and more chances to spot flaws. Since one’s reputation for research is on the line in front of one’s peers, it also provides incentives to be precise in claims and careful in analysis in a way that other forms of communication might not.

Expertise

According to Wikipedians, researchers and certain classes of professionals have expertise, and their usefulness is defined by that expertise. For example, we would expect a marine biologist to have a more informed opinion about the impact of global warming on marine life than the average person, particularly if the biologist has done research in that area. Professional knowledge matters too: we’d expect a health inspector to have a reasonably good knowledge of health code violations, even if they are not a published scholar in the area. And while we often think researchers are more knowledgeable than professionals, this is not always the case. For a range of issues, professionals in a given area might have more nuanced and up-to-date insight than many researchers, especially where questions deal with common practice.

Reporters, on the other hand, often have no domain expertise, but may strive to accurately summarize and convey the views of experts, professionals, and event participants. Reporters who write in a niche area (their “beat”) over many years (e.g. science or education policy) may acquire expertise themselves. Nevertheless, they will seek out experts for information when working on a story.

Attribution

“Clickbait Bias and Propaganda in Information Networks (Fister et al.)” by LibreTexts is licensed under CC BY.
