[Illustration: a robot with a moving human head]

Deceit Gets Smarter.
Can Truth Keep Up?


Artificial intelligence is remaking the news. Those who control it are reshaping society.

by Deni Ellis Béchard
Illustrations by Brian Stauffer
July 2019

When Tom Van de Weghe explains why the Chinese government favors the use of artificial intelligence to control journalists, he begins with the story of an assault. It was 2008, only months after the Beijing Olympics, and Van de Weghe, a Belgian investigative journalist, was in China’s Henan Province with an Australian cameraman and a Belgian-Chinese fixer. They were preparing a report for World AIDS Day on the HIV crisis in Henan, where in the 1990s local governments and businesses bought plasma from farmers. The get-rich-quick scheme—geared toward selling plasma globally—had infected thousands through contaminated equipment. Van de Weghe heard rumors of villages wiped out, leaving only orphans. He arranged to interview an orphanage director, but when he arrived, the director had just been arrested. Van de Weghe and his team interviewed the director’s wife in an alley, but afterward the police stopped them and beat them on the roadside so violently that Van de Weghe feared for his life. His camera equipment—including the tape with the recording—was confiscated.

The attack resulted in a PR disaster for China. The Olympics had been the nation’s opportunity to celebrate its opening to the world, and the central government had lifted restrictions on foreign journalists. The Guardian and the New York Times, among other papers, picked up Van de Weghe’s story. The Chinese government eventually issued an apology and returned the camera equipment, though the tape had been erased.

The following year brought even more bad publicity for China, with rallies in Hong Kong for Tiananmen’s 20th anniversary and the Ürümqi riots—clashes between police, Uighurs (a Muslim minority) and Han (China’s dominant ethnic group) in the Uighur autonomous region. The government rolled back press freedoms, but this time, rather than relying on physical intimidation to control journalists, it used artificial intelligence—automated surveillance systems that tracked journalists, reported on them to authorities and exercised strict censorship online, removing articles and social media posts.

“Camera surveillance was already present,” Van de Weghe says of the eight years he spent covering China, “but there wasn’t a significant system behind it. AI became the unifying element.”

As a 2019 John S. Knight journalism fellow at Stanford, Van de Weghe has been investigating the ways that AI can be a subtly powerful tool to silence journalists and shape the news—one that requires relatively little manpower and is less likely to generate the sort of bad publicity arising from a physical attack.

In recent years, AI has become a catchall term referring to many types of automated computer systems and machine learning software that perform activities traditionally thought to require human intelligence—such as interpreting data, finding patterns in it and extrapolating from those patterns to accomplish tasks. As research in AI has expanded, its uses have proliferated: self-driving cars, medical diagnostics, safeguards against fraudulent financial transactions and automated weapon systems. Its impact on news media in particular has been profound and immediate. Aside from monitoring journalists, as in China, it can also direct internet users to certain types of news, thereby skewing public opinion, consumer habits or election results. By controlling people’s access to information, AI can transform cultures without revealing that it is guiding billions of human lives. People click on news links and consume media that influences their beliefs and behavior, and yet they know little or nothing about who designed the AI or why, or even how the software is affecting them.

But just as AI can harm the free press, it can support it. Computer scientists and journalists are increasingly trying to democratize AI—to make sure its use isn’t limited to the powerful. In fact, dozens of scholars at Stanford are developing AI that can analyze data for investigative journalism or help newsrooms prevent bias and misinformation. They see access to AI as crucial to sustaining a free press and preventing the media—and its ability to shape cultural values—from falling under the exclusive control of governments and powerful interests.

[Illustration: a man holding a sword with a robot hand]

Man vs. Machine

Van de Weghe has continued to study Chinese AI—how it tracks people with ever-improving facial recognition software. He describes the new “social credit” programs that use AI to combine data from numerous sources, assign scores to people’s behavior and allocate privileges accordingly. In 2013, when Liu Hu, a Chinese journalist, exposed a government official’s corruption, he lost his social credit and could no longer buy plane tickets or property, take out loans, or travel on certain train lines.

“With the AI that’s being developed,” Van de Weghe explains, “we would never be able to get to Henan. They would have been able to stop us from being able to board the plane.”

Jennifer Pan, an assistant professor of communication, explains why Chinese citizens accept social credit programs. “People think others spit in the street or don’t take care of shared, public facilities. They imagine that social credit could lead to a better, more modern China. This is an appealing idea. Political dissent is already so highly suppressed and marginalized that the addition of AI is unlikely to have anything more than an incremental effect.”

The result for journalists is that actual prisons (where many are currently held) are replaced by virtual prisons—less visible and therefore more difficult to report on. In the face of this, Van de Weghe says, many journalists he knows have quit or self-censored. And while reporters outside China can critique the general practice of censorship, thousands of individual cases go unnoticed. Government computers scan the internet for all types of dissidence, from unauthorized journalism to pro-democracy writing to photos of Winnie-the-Pooh posted by citizens to critique President Xi Jinping, who is thought to bear a resemblance. AI news anchors—simulations that resemble humans on-screen—deliver news 24/7. The government calls this media control “harmonization.” The Communist Party’s goal for sustaining its rule, according to Pan, “is to indoctrinate people to agree. Authoritarian regimes don’t want fear.”

Van de Weghe came to Stanford last fall, after four years as the D.C. bureau chief for the Belgian public broadcaster VRT and a stint as a geopolitical analyst. Since 1966, the JSK program has, in the words of its director, Dawn Garcia, MA ’08, “been helping train leaders in journalism and journalism innovation.” In recent years, the program has pivoted toward addressing how technology has disrupted journalism and transformed society, and how solutions can be found through collaboration. While developing individual projects, the fellows engage with scholars across the university. “Perhaps the only place where you don’t see fellows is brain surgery,” Garcia says.

Van de Weghe’s project is to create instructional material that newsrooms can use to identify and prevent the spread of deepfakes—videos in which AI software has seamlessly integrated misleading alterations. The name is a portmanteau of “fake” and “deep learning,” a type of machine learning built with neural networks. In 2017, deepfakes first appeared as revenge or celebrity porn with the face of the targeted individual—often Michelle Obama, Ivanka Trump or Scarlett Johansson—grafted onto the body of a porn actress. New technology can even generate deepfakes in real time, as if a filter has been applied to the original.

“Imagine China creating a deepfake of a journalist or a dissident saying anything,” Van de Weghe says, emphasizing the vastness of China’s surveillance video archives.

At Stanford, he has found a community debating AI’s applications—how they both harm the media and can be used to support it. Even on the subject of deepfakes, people are divided; some prepare newsrooms for them, while others are less concerned.

‘Fake news tends to be more interesting than real news. “Hillary Clinton had a pedophilia group in the basement of a pizza parlor!” Wow. That’s interesting.’

“As a journalist who looks for a balanced view,” Van de Weghe says, “it was helpful to have the different opinions—to learn that some people also saw synthetic media created by AI as a positive thing for news storytelling and content creation.”

Since March 2019, Stanford’s Institute for Human-Centered Artificial Intelligence (HAI) has supported interdisciplinary collaboration with the goal of creating AI to augment people’s lives and not disrupt them. To address AI’s impact on journalism, it is sponsoring two JSK-HAI fellows who will work alongside the many experts on ethics and engineering that HAI is bringing together to find solutions to the challenges of AI.

Speaking at HAI’s launch, Tristan Harris, ’06, co-founder of the Center for Humane Technology, identified a significant problem for the free press: the automated recommendation engines that detect the preferences of social media users and serve up news that matches their views and that is most likely to hold their attention.

At the heart of the problem, Harris explains, is the attention economy: the idea that people have limited attention and companies compete for a share of it. Recommendation AIs—souped up for the ever-more-crowded media marketplace—are fine-tuned for drawing attention. “The click is the source of authority in the entire internet,” Harris says. The more people click on a link, the more the AIs direct others to it. Spotlighting content that gets more attention and holds it longer is crucial to the profit model of online advertising. Given that roughly 68 percent of Americans get some news via social media, the concern is that fake news, heavily biased opinion pieces and conspiracy theories attract more attention and receive more clicks.
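The dynamic Harris describes can be reduced to a toy feedback loop, sketched below: clicks raise a link's score, higher scores earn more recommendations, and more recommendations attract more clicks. The article names and numbers are invented, and real recommendation engines weigh far more signals; this only illustrates why attention-grabbing content compounds its own advantage.

```python
# Toy sketch of a click-driven recommender (illustrative only).
from collections import Counter

click_counts = Counter()

def record_click(article_id):
    """Each click nudges the article's score upward."""
    click_counts[article_id] += 1

def recommend(candidates, top_k=3):
    """Rank candidates purely by accumulated clicks: attention begets attention."""
    return sorted(candidates, key=lambda a: click_counts[a], reverse=True)[:top_k]

# A sensational story that earns a burst of early clicks keeps getting surfaced.
for _ in range(5):
    record_click("outrageous-conspiracy-theory")
record_click("city-budget-hearing")

print(recommend(["outrageous-conspiracy-theory", "city-budget-hearing", "school-board-vote"]))
```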

As AIs compete for attention with extreme content, people are drawn away from balanced news about what is happening in their countries at the local and national levels.

“More extreme things are often more interesting,” says philosophy professor John Etchemendy, PhD ’82, who co-directs HAI. “Fake news tends to be more interesting than real news. ‘Hillary Clinton had a pedophilia group in the basement of a pizza parlor!’ Wow. That’s interesting.”

Man Plus Machine

In recent years, an increasing number of journalists and engineers have been developing AI to augment the quality and quantity of journalism and reach larger audiences. When Marina Walker Guevara, deputy director of the International Consortium of Investigative Journalists (ICIJ), came to Stanford as a JSK fellow in 2018, harnessing AI for journalism was precisely her goal.

Walker Guevara began reporting in Argentina in 1998, and though she spent five years investigating crime, the state of prisons and corruption, she felt that she was having little impact. Argentina cycles through economic crises, and during a low point she couldn’t support herself and her mother. Having long admired the American tradition of investigative journalism and having read about its new methods for analyzing data, she moved to the United States to improve her skills and earn more.

“My mother died in the middle of that dream,” she says. “I decided that I wanted to become an investigative reporter in this country and focus on global issues.”

After a fellowship at the Philadelphia Inquirer and a master’s at the Missouri School of Journalism, she joined ICIJ, which was training journalists to find stories in large data sets. In 2015, the Panama Papers were leaked from the Panamanian law firm Mossack Fonseca. They comprised 11.5 million documents containing financial information on more than 200,000 offshore shell corporations, many of which were used for tax evasion by corporations and wealthy individuals. The German paper Süddeutsche Zeitung received the trove but, lacking technical capacity, sent it to ICIJ.

Processing the documents required advanced optical character recognition, a form of AI capable of extracting information from many types of documents—in this case, millions of emails, text files, database entries, PDFs and images. Additional algorithms then organized the information. One powerful tool was the software program Linkurious, designed to crack large datasets and find patterns of criminal activity. Its prototype was developed at Stanford’s Center for Spatial and Textual Analysis for “Mapping the Republic of Letters,” a project in which thousands of Enlightenment-era letters among luminaries such as Voltaire and Benjamin Franklin were analyzed for personal and geographical connections. Applied to the Panama Papers, the algorithm showed webs of corruption among dozens of countries, implicating banks, corporations, heads of government and other powerful individuals. Once processed, the data could be searched by hundreds of journalists worldwide as if they were using Google.
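A rough sense of how such graph analysis works is sketched below, assuming the OCR stage has already produced structured records; the field names and sample data are invented, not the ICIJ's actual schema, and the real pipeline, along with tools like Linkurious, does far more.

```python
# Sketch: link officers to shell companies and surface connected "webs"
# worth a reporter's attention. Data and fields are illustrative only.
import networkx as nx

records = [
    {"officer": "Person A", "entity": "Shell Co 1", "jurisdiction": "Panama"},
    {"officer": "Person A", "entity": "Shell Co 2", "jurisdiction": "BVI"},
    {"officer": "Person B", "entity": "Shell Co 2", "jurisdiction": "BVI"},
]

graph = nx.Graph()
for r in records:
    graph.add_node(r["officer"], kind="officer")
    graph.add_node(r["entity"], kind="entity", jurisdiction=r["jurisdiction"])
    graph.add_edge(r["officer"], r["entity"])

# Every connected component is a candidate network of people and companies.
for component in nx.connected_components(graph):
    if len(component) > 2:
        print(sorted(component))
```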

In the weeks after journalists began publishing stories about the Panama Papers, protests erupted around the world, resulting in the resignation of many government officials. Walker Guevara recalls her overwhelming emotion, her sense of finally having an impact, as she sat in the small windowless newsroom at the ICIJ and watched the protests on TV—“people out in the street,” she says, “in the old ways, protesting in the public squares and saying, ‘Enough!’”

Globally, the work of ICIJ and its collaborators has so far resulted in more than $1.2 billion recovered in taxes and numerous corporate reforms. In 2017, the Paradise Papers followed, comprising 13.4 million documents. Again, the ICIJ and its collaborators identified politicians and public figures for their roles in tax evasion.

When Walker Guevara came to Stanford as a JSK fellow, she wanted to “democratize AI.” Until recently, algorithmic tools for analyzing data have been affordable largely for governments and wealthy corporations. Her goal is to adapt the technology for newsrooms to improve the efficiency of computational journalism. “If you give the computer a more intelligent role,” Walker Guevara says, “teach it what money laundering looks like—loans with very low interest rates that bounce from jurisdiction to jurisdiction, in multiple countries—then, in 12 million documents, it might find 50 cases that match your definition of money laundering.”

At Stanford, she partners with the lab of Chris Ré, an associate professor of computer science and a MacArthur fellow. He focuses on “weak supervision,” a subset of machine learning in which AI systems learn from human experts. Rather than hand-labeling every example, the expert supplies rules and sample annotations that establish what elements in a body of documents mean. Once the AI has learned those rules, it can rapidly analyze and label significantly more documents.
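In the spirit of that approach, the sketch below shows what expert-written rules might look like in code. The rules, thresholds and vote-combining step are invented for illustration; they are not the ICIJ's criteria or the actual output of Ré's lab, which models rule accuracy far more carefully.

```python
# Hypothetical labeling rules for "what money laundering looks like."
SUSPICIOUS, NOT_SUSPICIOUS, ABSTAIN = 1, 0, -1

def lf_low_interest_loan(doc):
    """Loans priced far below market rates are a classic warning sign."""
    if doc.get("type") == "loan" and doc.get("interest_rate", 99.0) < 0.5:
        return SUSPICIOUS
    return ABSTAIN

def lf_many_jurisdictions(doc):
    """Money that bounces across several jurisdictions deserves a closer look."""
    if len(doc.get("jurisdictions", [])) >= 3:
        return SUSPICIOUS
    return ABSTAIN

def lf_domestic_payroll(doc):
    """Routine single-country payroll records are almost certainly benign."""
    if doc.get("type") == "payroll" and len(doc.get("jurisdictions", [])) == 1:
        return NOT_SUSPICIOUS
    return ABSTAIN

LABELING_FUNCTIONS = [lf_low_interest_loan, lf_many_jurisdictions, lf_domestic_payroll]

def weak_label(doc):
    """Majority vote over non-abstaining rules; the result is a noisy training label."""
    votes = [v for v in (lf(doc) for lf in LABELING_FUNCTIONS) if v != ABSTAIN]
    if not votes:
        return ABSTAIN
    return SUSPICIOUS if sum(votes) > len(votes) / 2 else NOT_SUSPICIOUS
```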

“This solves problems in very particular kinds of stories,” Walker Guevara says, “what I call impossible stories, because of the vastness of the data.”

[Illustration: a pencil]

The benefits of AI to journalism become clearer in light of the budgetary challenges facing newsrooms. Ever since advertising went online and classified pages were replaced by websites like Craigslist, newspaper profits have declined. From 2008 to 2017, budget cuts caused reductions of 45 percent in U.S. newspaper jobs and 23 percent in all newsroom jobs, including radio, broadcast television and cable. This trend has been matched abroad even as the global population expands. News sources increasingly rely on repurposing news gathered elsewhere, with the result that less local news is created.

James Hamilton, a communication professor and director of the journalism master’s program, is co-founder of the Journalism and Democracy Initiative, which develops algorithmic tools for journalists, who often lack the resources to do so. Hamilton teaches students to tell “stories by, through and about algorithms.” In the first case, he gives the example of how journalists can deploy AI systems that gather data online, whether from the weather service or corporate quarterly reports, and then automatically arrange that material into short texts. When the Associated Press began using AI software, it went from writing several hundred stories on business reports to thousands, which increased trading in previously underreported stocks. Using similar software, Travis Shafer, MA ’16, a data news developer at Bloomberg and former student in the journalism program, “writes” thousands of stories each day using automated systems that analyze financial reports and reorganize their content into articles.
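A bare-bones version of this kind of automated story generation looks roughly like the sketch below: structured data is poured into a template to yield a short brief. The company, fields and wording are invented; the AP's and Bloomberg's systems are far more sophisticated.

```python
# Minimal template-based story generator (illustrative only).
def earnings_brief(report):
    change = report["eps"] - report["eps_prior_year"]
    if change > 0:
        direction = "up from"
    elif change < 0:
        direction = "down from"
    else:
        direction = "unchanged from"
    return (
        f"{report['company']} reported earnings of ${report['eps']:.2f} per share "
        f"for {report['quarter']}, {direction} ${report['eps_prior_year']:.2f} a year earlier, "
        f"on revenue of ${report['revenue_millions']:.0f} million."
    )

print(earnings_brief({
    "company": "Example Corp",
    "quarter": "Q2 2019",
    "eps": 1.32,
    "eps_prior_year": 1.10,
    "revenue_millions": 845,
}))
```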

As for stories about algorithms, they include news that fosters the technology literacy necessary to navigate a future in which AI will touch on every aspect of our lives. Such stories range from how online recommendation algorithms are designed to hold people’s attention, to ProPublica’s 2016 exposés revealing that Facebook ads for jobs and housing targeted people by race, gender and age, which resulted in a lawsuit by the U.S. Department of Housing and Urban Development. Bias is increasingly becoming a focus of journalism about AI, since algorithms used to determine who gets parole or loans often favor white applicants over people of color.

Stories through algorithms—like those on the Panama Papers—have explored similar biases. The Stanford Open Policing Project, created through a partnership between the Computational Journalism Lab and Computational Policy Lab, has standardized and analyzed data on 100 million traffic stops from dozens of cities and states. The results have shown that black drivers are more likely than white drivers to be stopped during the day—but equally likely to be stopped after dark, when they can’t be identified as black. Black and Hispanic drivers are also more likely to be searched than white drivers. The Open Policing Project has been used by more than 100 journalists, researchers and policymakers. Some police departments have even sent their data and requested that it be analyzed so they can address disparities.
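The comparison behind that daylight finding can be expressed very compactly, as in the sketch below; the tiny sample data and column names are invented, not the Open Policing Project's actual schema, and the published analysis controls for time of day, location and other factors.

```python
# Sketch of a "veil of darkness" check: does the black share of stops
# fall after dark, when a driver's race is harder to see? (Toy data.)
import pandas as pd

stops = pd.DataFrame({
    "driver_race": ["black", "white", "black", "white", "black", "white"],
    "is_dark":     [False,   False,   False,   True,    True,    True],
})

share_black = (
    stops.assign(is_black=stops["driver_race"] == "black")
         .groupby("is_dark")["is_black"]
         .mean()
)
print(share_black)  # black share of stops in daylight vs. after dark
```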

In Bharat’s vision for the future, AIs will do background research, suggest questions for interviews, transcribe responses and fact-check articles. ‘Instead of speaking with 10 people for a story, a journalist might speak with a million.’

An offshoot of the Open Policing Project, Big Local News collects, standardizes and shares data that is inaccessible to small regional newsrooms that lack financial resources. Cheryl Phillips, a lecturer in communication and the initiative’s founder, leads a team of journalists and engineers to create AI software that can mine many types of data: property records, voter information, forest fire records, local civil asset forfeiture, and audits of local governments and nonprofits. The data can then be used to write stories on housing, health, education, criminal justice, local governance and the environment. “If an organization, county or public official knows they are being watched,” Phillips says, “that might make a difference in their behavior.”

The technologies developed for projects such as these may also have a long-term impact on newsrooms themselves, says Sharad Goel, executive director of the Computational Policy Lab and assistant professor of management science and engineering. He sees the possibility of a new generation of journalists doing the AI development that is currently happening in universities. “That’s an argument for these tools actually making newsrooms larger,” he says, “because now they have more impact.”

The Future of News—and Society

The ways that AI might someday influence journalism seem as numerous and varied as the minds that shape it to do tasks—whether assisting reporters, uncovering bias, detecting deepfakes or creating them.

Krishna Bharat, founder of Google News, is on the JSK board and teaches Exploring Computational Journalism, a course that pairs journalists with coders so that they can co-write software. He sees AI improving how stories are discovered, composed, distributed and evaluated. In his vision for the future, AIs will do background research, suggest questions for interviews, transcribe responses and fact-check articles.

“Instead of speaking with 10 people for a story,” Bharat says, “a journalist might speak with a million. The AI can do a survey for you. But for the survey to be effective, it can’t be a static set of questions. To be effective, the AI assistant has to adapt the interview to individual cases and gather data at scale that can be statistically significant.”

Just as AI might someday help manage the workloads of journalists, it can also monitor journalistic output for bias and uneven reporting. One such tool is being developed at Stanford’s Brown Institute for Media Innovation, housed in the School of Engineering. Will Crichton, a third-year PhD student in computer science, works with the past 10 years of TV news videos, which add up to “1.5 million hours of video—petabytes of data.” Using computer vision, the automated software scans faces and organizes the data. The results show that men have twice as much screen time as women do on every news program. This holds true for male and female guests even when a woman is the host. Though the technology could also find the footage used for deepfakes to debunk them, the likely application is the real-time surveying of newsrooms for bias—from topic to race and gender. “For instance, in the 2016 election,” Crichton says, “Trump doubled Clinton’s screen time every week.”
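The aggregation step of such an analysis is simple once a computer-vision model has labeled each detected face, as the sketch below suggests; the detection format, frame rate and labels are invented stand-ins for the Brown Institute's actual pipeline.

```python
# Sketch: sum on-screen seconds per presented gender from per-frame detections.
from collections import defaultdict

FRAME_SECONDS = 1 / 30  # assuming video sampled at 30 frames per second

detections = [
    {"frame": 0, "gender": "male"},
    {"frame": 0, "gender": "female"},
    {"frame": 1, "gender": "male"},
    # ...millions more rows in a real corpus
]

screen_time = defaultdict(float)
for det in detections:
    screen_time[det["gender"]] += FRAME_SECONDS

for gender, seconds in sorted(screen_time.items()):
    print(f"{gender}: {seconds:.2f} seconds on screen")
```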

Crichton has been working with 2019 JSK fellow Geraldine Moriba, who has more than two decades of newsroom experience from CBC to CNN and whose goal as a fellow is to rapidly distribute AI to newsrooms to help combat the bias they themselves create. “Can we change ourselves,” Moriba asks, “and make ourselves more fair using tools that count who is anchoring our stories, who is reporting our stories, what types of stories are told, what are the political biases in our reporting, who are the experts that we use, how often do we use mugshots (and when we use mugshots, who is in the mugshot), what are the examples of crime that we choose to report on—white-collar crime versus other crime?”

Crichton’s doctoral adviser, Maneesh Agrawala, ’94, PhD ’02, a professor of computer science and the Brown Institute’s director, has been developing automated software to help journalists with editing. “Something that isn’t appreciated,” he says, “is how much editing of the audio and speech there is before it goes out in a finished program.” Editors in broadcast media have to work with limited airtime, if not limited attention spans. They remove “uh” and “um” and edit video segments into shorter clips. With his software, an editor could alter a video transcript, and the AI would modify the video itself so that it seamlessly matches the altered transcript. He acknowledges that this could be misused. “You can make people say things. You can insert pauses that can change the meaning of things that were said.”
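One piece of that idea, turning transcript deletions into video cuts, can be sketched as below; the word timings are invented, and Agrawala's system goes much further, producing seamless transitions rather than a simple cut list.

```python
# Sketch: given word-level timestamps, deleting transcript words yields
# the (start, end) video segments to keep. Timings are illustrative.
words = [
    {"text": "The",    "start": 0.0, "end": 0.2},
    {"text": "um",     "start": 0.2, "end": 0.6},
    {"text": "mayor",  "start": 0.6, "end": 1.0},
    {"text": "agreed", "start": 1.0, "end": 1.5},
]

def keep_segments(words, deleted_indices):
    """Merge the retained words into contiguous segments of the original video."""
    segments = []
    for i, w in enumerate(words):
        if i in deleted_indices:
            continue
        if segments and abs(segments[-1][1] - w["start"]) < 1e-6:
            segments[-1] = (segments[-1][0], w["end"])  # extend the current segment
        else:
            segments.append((w["start"], w["end"]))
    return segments

print(keep_segments(words, deleted_indices={1}))  # cut the "um": [(0.0, 0.2), (0.6, 1.5)]
```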

‘How are we going to deal when people can generate photo and video that resembles people? How does a newsroom deal with it once deception becomes common and easily done?’

Though in the wrong hands Agrawala’s work could create deepfakes, he sees them as a relatively minor threat. “The problem,” he says, “is fundamentally a person lying. Most of the propaganda and lies that are spread are using real videos and images and interpreting them in incorrect ways.” He explains that software will be created to identify digital signatures in deepfakes and that even as people find new ways to deceive, others will improve detection methods. “It’s an arms race,” he says.

In 2018, Heather Bryant, technology manager for the Justice and Democracy Initiative and a former JSK fellow, presented a more urgent view in Nieman Lab, Harvard’s forum on the future of journalism. Her article “The Universe of People Trying to Deceive Journalists Keeps Expanding, and Newsrooms Aren’t Ready” documents the increasing precision and speed with which deepfakes can be generated. “Every year,” she writes, “[journalists] level up into a new class of challenges, with more antagonists, more complicated storylines and an adversarial machine that seems to know their next moves.”

Bryant is concerned that neither journalists nor legal systems are prepared. She references the case of Courtney Allen. The 2017 Wired article “How One Woman’s Digital Life Was Weaponized Against Her” describes how Allen’s online harasser tormented her, nearly destroying her personal and professional lives. “The court system and the police were fundamentally unable to deal with this,” Bryant says. “How are we going to deal when people can generate photo and video that resembles people? How does a newsroom deal with it once deception becomes common and easily done?”

The larger question is then how countries will respond to deception when it is implemented in an aggressive and strategic way. Larry Diamond, ’73, MA ’78, PhD ’80, a professor of political science and founding co-editor of the Journal of Democracy, expects deepfakes in the 2020 American elections. “I think we need to be prepared for the Kremlin to roll out levels of disinformation on a scale of sophistication that we didn’t even begin to see in 2016 and can’t imagine.”

As video editing tools are quickly coming onto the market—such as Adobe’s nefarious-sounding “Project Cloak,” an After Effects software update that allows users to easily make aspects of videos vanish—the greatest risk might be that deceptive AI will “muddy the water,” in the words of Jeff Hancock, a professor of communication and founding director of the Social Media Lab. The lasting harm, when public trust in journalism is low, would be increasing skepticism in all forms of media. “This plays right into the handbook rule No. 1 of authoritarianism,” he says: to discredit journalism.

Hancock works with companies and the U.S. government on tools for detecting online deceit. Of the video alterations he has seen, “the most worrisome” was the clip showing Donald Trump and CNN White House correspondent Jim Acosta arguing. An altered version distributed by Infowars sped up the motion of Acosta’s arm, making him appear to strike a White House aide as she reached for his microphone. A “shallow-fake” of this sort does not require AI, only basic editing skills. Citing the video, the White House revoked Acosta’s press privileges—a decision that highlighted how damaging even the smallest changes to a video can be and how disruptive fake videos might become once they are more complex and commonplace. Both Obama and Trump have been the subject of deepfakes, and recently a Belgian political party, the Socialistische Partij Anders, commissioned a deepfake in which Trump tells Belgians to withdraw from the Paris Climate Accords. Though the video was intended to provoke discussion and not be taken as truth—with Trump saying at the end that he didn’t actually say these things—many viewers believed it and responded with outrage.

In April 2019, China’s Standing Committee of the National People’s Congress deliberated whether to forbid the distortion of a person’s picture or voice through technology and drafted a law that might make deepfake technology illegal, a step that, were it taken in the United States, would likely raise questions about freedom of expression. The China Daily quoted Shen Chunyao, a senior legislator of the NPC’s Constitution and Law Committee: “We added the prohibitions because some authorities pointed out that the improper use of AI technology not only damages people’s portrait rights, but also harms national security and the public interest.” Given that so much of China’s AI is based around enforcing the social order, the ban on deepfakes could protect the government not only from misinformation created to weaken it, but also, as Van de Weghe points out, from truthful information meant to contest it. “A sensitive video report based on true facts,” he explains, “could be labeled a deepfake by a regime, and producers of the video—journalists or human rights defenders—could then be prosecuted for violating these laws.”

China’s New Generation Artificial Intelligence Development Plan—announced in July 2017—would, if successful, increase the government’s control domestically and its influence internationally. The plan foresees AI transforming virtually every aspect of human life, reinventing industries and creating new ones. China’s leadership has clearly stated its goal of outclassing American AI by 2030 in ways that would affect far more than journalism. In 2018, when Google and Apple sponsored a contest in which algorithms had to correctly interpret camera images taken under a variety of weather conditions, China’s National University of Defense Technology won. A few months later, an executive at one of the largest defense firms in China spoke about plans to develop autonomous weapons, saying, “In future battlegrounds, there will be no people fighting.” Earlier this year, investor and philanthropist George Soros, whose Open Society Foundations support independent media, addressed the World Economic Forum, calling China’s advances in AI a “mortal danger facing open societies.”

Jerry Kaplan, a research affiliate at the Center on Democracy, Development and the Rule of Law and a fellow at the Center for Legal Informatics at Stanford Law School, explains that China’s AI is designed not only for its domestic needs but also for sale in the competitive overseas market. Just as China strives to make its 5G technology the global standard, it is doing the same with AI, positioning itself to be the dominant economic and cultural player globally by exporting automated systems that allow other countries to govern and do business like China. In recent months, the New York Times reported on China’s use of AI facial recognition to track Uighurs and its exports of automated policing technology overseas.

Though used domestically to prevent dissent, terrorism and crime, China’s AI systems, once exported, spread its model of governance. “Inevitably, in subtle and unintentional ways,” Kaplan says, “the use of complex technologies embodies values and cultures and establishes economic and social ties.”

And yet, as the debate around AI and journalism illustrates, AI’s dangers are not inherent to it. Fei-Fei Li, a co-director of the Institute for Human-Centered Artificial Intelligence and professor of computer science, argues that if AI is to support people, it must be democratized—put into the hands of the very people it will serve. She compares AI to any other technology that can benefit or harm society depending on who shapes it.

“When I talk to students,” she says, “I make sure they understand that AI is technology. It is a tool. There are no independent machine values. Machine values are human values.”

Deni Ellis Béchard is a senior writer at STANFORD. Email him at dbechard@stanford.edu.
