How to Decode Scientific Data

Carl Bergstrom, PhD ’98, has become an interpreter of fallacies.

September 2020

Illustration: Guillem Casasús

Before COVID-19 reached his home state of Washington in late January, Carl Bergstrom was busy fighting another contagion: the spread of misinformation.

A professor of biology at the University of Washington with a background in epidemiology, Bergstrom, PhD ’98, parlayed an interest in scientific communication into the undergraduate course Calling Bullshit, which teaches students to recognize and refute misleading data and poorly conducted research.

Focusing on such topics as causality, data visualization and publication bias, the course identifies areas in which science is susceptible to misinformation and challenges what Bergstrom says is a common misconception that quantitative data is objective and irrefutable.

“Numbers are ideal vehicles for promulgating bullshit,” Bergstrom writes in Calling Bullshit: The Art of Skepticism in a Data-Driven World, the book he published with co-instructor Jevin West in August. “They feel objective, but are easily manipulated to tell whatever story one desires.”
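To see how little manipulation that takes, consider a minimal sketch in Python (a hypothetical illustration, not an example from the book or the course): it plots the same two values twice, and simply truncating the y-axis turns a 3 percent difference into what looks like a blowout.

```python
import matplotlib.pyplot as plt

# Hypothetical data: two nearly identical values, 3 percent apart.
labels = ["Group A", "Group B"]
values = [50.0, 51.5]

fig, (honest, misleading) = plt.subplots(1, 2, figsize=(8, 3))

# Honest version: the y-axis starts at zero, so the difference
# looks like what it is, small.
honest.bar(labels, values)
honest.set_ylim(0, 60)
honest.set_title("Axis starts at zero")

# Misleading version: same numbers, but the truncated y-axis makes
# the 3 percent gap fill most of the chart.
misleading.bar(labels, values)
misleading.set_ylim(49.5, 52)
misleading.set_title("Truncated axis")

plt.tight_layout()
plt.show()
```

Both panels are accurate in a narrow sense; only one tells the story honestly.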

Since its pilot in spring of 2017, the course has become one of the most popular at UW, quickly reaching its enrollment cap of 160 every time it has been offered. When the current pandemic ushered in a fresh wave of potent disinformation, Bergstrom felt compelled to put his teaching into practice.

On January 20, the Centers for Disease Control and Prevention announced the diagnosis of the first U.S. COVID-19 patient, in Washington state. Bergstrom soon recognized early signs of organized disinformation and took to Twitter to help debunk rumors and nascent conspiracy theories surrounding the virus.

In March, a Bay Area technologist named Aaron Ginn published an article on Medium that purported to demonstrate how mass media accounts of COVID-19 were overblown. “You don’t need a special degree to understand what the data says and doesn’t say,” Ginn wrote, and claimed to be an authority on the matter because of his experience in driving the “viral adoption” of products. After the article had been retweeted thousands of times—“gaining too much traction,” according to Bergstrom—the professor posted a thread of 31 tweets dissecting the article’s many flaws, including Ginn’s tenuous claims of epidemiological credibility.

“Imagine Shakespeare run through Google Translate into Japanese, then translated back into English by someone who’d never heard of Shakespeare,” Bergstrom wrote. “So much depth would be missing. Same here.”

Trained in biological communication, Bergstrom was well equipped to sift through swarms of false or misleading virus claims, but he encountered a new challenge specific to COVID-19: the speed at which the science needs to be done, and the risks that pace brings with it.

Indeed, one product of the scientific community’s singular focus on COVID-19 is the popularization of preprints: papers that have not yet undergone formal peer review or publication. Primarily intended to enable rapid mobilization and communication among researchers, these raw scientific articles are then relied on by the public with a certainty they haven’t yet earned, and they have fueled a deepening partisan rift in which the scientific canon varies widely across party lines.

“People are doing this radically open science right now, so all of these preprints are going up and are being discussed on Twitter and many other forums,” Bergstrom says. “The problem is that the pandemic has been so heavily politicized. When people post a result, it supports one camp or another.”

By mid-February, popular media was already so saturated with conflicting reports about COVID-19 that the director-general of the World Health Organization declared an “infodemic.”

Bergstrom (Photo: Kris Tsujikawa/CC BY-SA/Creative Commons)

The next month, Bergstrom redoubled his efforts, interrogating the COVID-19 impact model from the Institute for Health Metrics and Evaluation at UW Medicine—one of the early models influential in U.S. policy-setting—in another long Twitter thread. In particular, he highlighted several assumptions that contributed to the model’s optimistic outlook, expressing concerns that the data could easily be taken out of context.

“I’ve already seen claims that this study proves we need fewer than 40,000 ventilators,” he tweeted at 3 a.m. from Seattle. “True, I guess, IF the curve fitting approach works and IF the death count data are right and IF we attain Wuhan-scale lockdown and IF we maintain it and IF there’s no second wave.”
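His string of IFs is easy to reproduce in miniature. The Python sketch below is a toy illustration, not the IHME model, and every number in it is invented: two standard curve families are fitted to the same hypothetical first 10 days of an outbreak, match the early data about equally well, and then disagree by orders of magnitude when projected out to day 30.

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy epidemic: cumulative deaths follow a logistic curve that will
# level off near 500, but we only "observe" the first 10 days.
def logistic(t, k, r, t0):
    return k / (1.0 + np.exp(-r * (t - t0)))

def exponential(t, a, b):
    return a * np.exp(b * t)

days = np.arange(10, dtype=float)
observed = logistic(days, 500.0, 0.45, 14.0)  # invented "data"

# Both curve families fit the first 10 days closely...
exp_params, _ = curve_fit(exponential, days, observed, p0=(1.0, 0.4))
log_params, _ = curve_fit(logistic, days, observed, p0=(300.0, 0.3, 10.0))

# ...but their day-30 projections differ by orders of magnitude.
print("Exponential projection, day 30:",
      round(float(exponential(30, *exp_params))))
print("Logistic projection,    day 30:",
      round(float(logistic(30, *log_params))))
```

Which projection a model produces depends on assumptions the early data cannot settle, which is exactly the kind of context Bergstrom worried would be stripped away.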

Bergstrom’s tweets might appear trivial in the greater scope of an international pandemic, but without online voices like his, other scholars point out, disinformation spreads rapidly. Such was the case for a Facebook post by Kerri Rivera, a former Chicago real estate agent, which promoted chlorine dioxide, a highly toxic industrial bleach, as a COVID-19 cure. According to Sam Wineburg, PhD ’89, the Margaret Jacks Professor of Education, the post was shared more than 200,000 times before it was taken down by Facebook, revealing a fundamental problem in the way people process information online.

“In an age where a 9-year-old has a smartphone, we are driving on the information highway without so much as a driver’s license,” he says.

The high volume of COVID-19 misinformation can even influence the policies of federal agencies. On June 30, the CDC released a list of COVID-19 testing considerations for colleges and universities planning to bring students back to campus in the fall. In the notice, the CDC stated that it “does not recommend entry testing of all returning students, faculty and staff.”

Bergstrom was incredulous. In an op-ed for the Chronicle of Higher Education, he called the decision “inexplicable and irresponsible,” defending testing paired with isolation as a “proven means of disease control.” In a follow-up Twitter post, he argued that the agency’s decision was based primarily on “agnotology,” a term coined by Stanford professor of history Robert Proctor to describe culturally induced ignorance.

Taken together, the imprint of politics on scientific discovery and public policy, the reach of bad actors and disinformation, and the need for better media literacy education make for a complicated decision-making climate.

“We don’t really understand how having this kind of media environment affects the way that people get information and make decisions and respond to crises like COVID,” Bergstrom says. “I think we’re seeing a lot of the vulnerabilities associated with that right now and in failures of our pandemic response.”

As a silver lining, Bergstrom hopes that the infodemic will spur a greater interest in teaching media literacy and digital citizenship, for which his Calling Bullshit course can be a preliminary model. Already, several high schools in Washington state—where a law supporting media literacy and digital citizenship instruction in K–12 schools was passed in 2016—have adopted such curricula.

Still, Bergstrom and co-instructor West understand that improving online public awareness will take time and that the current disinformation crisis demands persistent action.

“Media literacy is a long-term solution,” West says. “It’s something that’s going to take a generation to really see the effects.”

Back on the Farm, researchers have also taken an interest in combating online disinformation, launching the Cyber Policy Center at the Freeman Spogli Institute for International Studies last year. In collaboration with the Center for an Informed Public, headed by West at UW, and several other centers at Stanford, Kelly Born, MA ’09, executive director of the Cyber Policy Center, is leading an initiative to address online electoral disinformation in real time ahead of the 2020 elections.

“We have seen a paradigm shift away from thinking about moderating content based on what is being shared—so much of the concerning content isn’t categorically true or false—toward moderating the behavior of bad actors by limiting their ability to create fake accounts, deploy bot networks, micro-target in a predatory way, etc.,” she says.

Changing the way people think and behave on social media is no small task, yet this is exactly what Bergstrom strives to do with the book Calling Bullshit, which he saw as essential to the media consumer even before COVID-19. Now the book is more relevant than ever.

“You could rewrite the book replacing every example with an example from COVID,” Bergstrom says.


Andrew Tan, ’22, is an editorial intern at Stanford. Email him at stanford.magazine@stanford.edu.
