In their book, System Error: Where Big Tech Went Wrong and How We Can Reboot (Harper), the authors, a trio of Stanford professors, cite a 2009 study highlighting an infamous example of misapplied goals: the Ford Pinto. In its obsessive drive to meet executive Lee Iacocca’s demand for a sub-$2,000 car weighing no more than 2,000 pounds, Ford bypassed important safety checks. When it became clear that the resulting car tended to burst into flames upon rear impact, the company quietly calculated that the cost of any lawsuits would be less than the cost of making the lifesaving fix. The Pinto stayed on the market.
In a sense, System Error argues that we are living in a societal equivalent of the Pinto: combustible, dangerous, and resulting from the choices of powerful companies—this time in tech—that significantly undervalued the public interest. The book is both an autopsy of the decisions that have led us to this point and a plea to pull over and take a long look at the road ahead.
It’s time that citizens engage in a vigorous debate about the values we want technology to promote, as opposed to settling for the values that technology and the small group of people who produce it impose on us.
System Error is a collaboration among philosopher and political science professor Rob Reich, computer science professor Mehran Sahami and political science professor Jeremy Weinstein. The three co-teach a multidisciplinary undergraduate course called Ethics, Public Policy, and Technological Change, established to fill what they saw as a gap in the education of budding Silicon Valley entrepreneurs.
Companies that set forth on missions of technological change and disruption have succeeded spectacularly at building powerful tools that solve problems. Yet “too rarely,” the authors write, “do people stop and ask: whose problem are you solving? Is it a problem actually worth solving? And is the solution proposed one that would be good for human beings and for society?”
The book chronicles several crises—worsening inequality, the global erosion of democracy, the loss of purpose and agency that comes with automation—arising from the tech industry’s failure to sufficiently engage with these questions. The path ahead is worrying, the authors acknowledge. But it is also not set in stone. Both their course and System Error exist because it is possible to choose a different future, one in which technology furthers the public interest rather than undermining it. Abdicating that responsibility risks rendering us as myopic as the machines we’ve created.
“Humans, perhaps uniquely among all living creatures, can reflect upon and revise their most fundamental aims in life,” the authors write. “Until machines are capable of defining their own goals, the choices of the problems we want to solve with these technologies—what goals are worthy to pursue—are still ours.”
Corinne Purtill, ’02, is a writer in Los Angeles. Email her at email@example.com.