The Institute for Human-Centered Artificial Intelligence started with a driveway conversation between two neighbors: John Etchemendy, PhD ’82, a philosopher, and Fei-Fei Li, a computer scientist. Li expressed her growing discomfort that the people creating AI—primarily white men—were not representative of the millions affected by it.
“Throughout human history,” Li says, “every time something is invented or produced, if we’re not careful, it favors a particular group. My favorite example is scissors. Humans have been using scissors for thousands of years, but they were designed for right-handed people.”
She also had more worrisome examples in mind: medicine tested on men but prescribed to women with little understanding of its effects on them; facial recognition systems that failed to identify people with dark skin; racially biased algorithms used in parole hearings and loan decisions.
“If the data is biased,” she says, “then we have serious human consequences.”
The vision for HAI, which launched in March 2019, is threefold. The institute’s written mission names three areas of focus: developing technologies inspired by human intelligence; guiding and forecasting the human and societal impact of AI; and designing AI applications that augment human capabilities.
Shaping AI’s creators through educational programs will be as important as changing the technology itself.
“Ethical questions need to be built into systems from the beginning,” says Etchemendy, who co-directs HAI with Li. “We want to educate professionals from all walks of life—executives, journalists, congresspeople, senators, lawyers: What is the reality of the tech as opposed to the hype? What should we be worrying about, and what do we not yet need to be worrying about?”
He acknowledges that the rise of AI technologies will bring some disruption, and he argues that AI systems should be developed to augment humans whenever possible. He gives the example of a bank that once had analysts examine a tiny random sample of daily transactions for evidence of money laundering. When an AI was built to scan every transaction for suspicious activity, the same analysts received only the files it flagged.
“That is augmenting what the humans can do,” Etchemendy says, “making their jobs more rewarding and taking away the drudgery.”
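The article doesn’t describe the bank’s system, but the workflow Etchemendy sketches (score every transaction, surface only the flagged items for human review) can be illustrated with a toy filter. The sketch below is hypothetical: the fields, threshold, watch list, and `is_suspicious` rule are invented for illustration, and a real anti-money-laundering system would learn patterns from data rather than apply a single hand-written rule.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    txn_id: str
    amount: float
    country: str

# Hypothetical rule: flag large transfers routed through
# jurisdictions on a watch list. Invented for illustration only.
WATCHLIST = {"XX", "YY"}
LARGE_AMOUNT = 10_000.0

def is_suspicious(txn: Transaction) -> bool:
    return txn.amount >= LARGE_AMOUNT and txn.country in WATCHLIST

def flag_for_review(transactions):
    """Scan every transaction and return only the flagged ones,
    so analysts review a short queue instead of a random sample."""
    return [t for t in transactions if is_suspicious(t)]

if __name__ == "__main__":
    daily = [
        Transaction("t1", 120.0, "US"),
        Transaction("t2", 25_000.0, "XX"),
        Transaction("t3", 9_500.0, "YY"),
    ]
    for txn in flag_for_review(daily):
        print(f"Review {txn.txn_id}: ${txn.amount:,.0f} via {txn.country}")
```

The point of the design is the one Etchemendy makes: the machine does the exhaustive scanning, and the humans keep the judgment calls.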
Machine automation has historically increased productivity, and he expects the same with AI. So that more people benefit from that productivity, HAI aims to influence governance through policy summits and working groups on best practices and regulation. With Microsoft founder Bill Gates and California Gov. Gavin Newsom speaking at the launch, the institute showed its ability to bring together academia, industry and government. Its advisers include industry leaders such as LinkedIn co-founder Reid Hoffman, ’89, former Google CEO Eric Schmidt, and former Yahoo CEO and president Marissa Mayer, ’97, MS ’99; the institute also draws on nearly 20 HAI fellows and at least 140 members of Stanford’s faculty from diverse academic backgrounds.
“We promised a lot,” Etchemendy says, “and now we have to produce.”
Deni Ellis Béchard was a senior writer at Stanford. Email him at stanford.magazine@stanford.edu.