It was a twist that Daniel Ho didn’t see coming. A friend had used ChatGPT, the impressive chatbot released last year by OpenAI, to create bedtime stories for his daughter, and encouraged Ho to do the same. But when Ho, a professor of law and of political science at Stanford, entered the same prompt, ChatGPT spit out a story of a young girl who runs out of a store and nearly gets hit by a car, only to have that same car kill an adult shortly after. “That story encapsulates everything about this moment we’re in,” says Ho, who elected not to read it aloud. “How do we think about this kind of technology that can captivate and engage, while at the same time not have it lead to the scarring of a 6-year-old?”
Artificial intelligence—computer systems with the ability to reason, solve problems, and learn—has been developing around us in some capacity for more than 60 years, beating chess legends, zooming about living rooms to vacuum up dust, and (Hey, Siri) reminding us to take dinner out of the oven in 10 minutes. There was always someone warning us about the rise of the machines, but most of us got used to Netflix divining our next binge-watch and carried on. Until ChatGPT. The natural-language prediction model can write just about anything the way a (possibly dry and boring) human would—albeit nearly instantly, with the entire internet at its disposal, and lacking in the social norms that would stop most people from terrifying first graders. While it’s not alone in its capabilities, it has vaulted AI into the public consciousness at a new scale and somehow made the whole concept feel much more personal. “It’s almost like no matter who you are right now, you have some form of AI FOMO—fear of missing out, or fear of being left behind,” says Ge Wang, an associate professor of music and founder of the Stanford Laptop Orchestra. At the same time, you can hardly avoid the headlines about AI’s pitfalls, from its potential to fuel disinformation to how it can enable cheating to whether it might end humankind altogether.
Our future with AI is brimming simultaneously with unprecedented promise and profound risk. “It’s a very exciting time, but it’s a time that requires a lot of thoughtfulness,” says Fei-Fei Li, a professor of computer science and co-director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI), which seeks to harness AI in the service of humanity. We asked Stanford faculty steeped in AI how we should be thinking about the changes coming in four essential areas: our careers, health care, relationships, and creativity.
AI isn’t coming for your job, but your job will change because of AI
Once thought to be the bane of factory workers or cashiers, these days AI seems poised to infiltrate work of every stripe. But most of that, so far, is speculation, says Erik Brynjolfsson, director of the Stanford Digital Economy Lab and a senior fellow at HAI. “My takeaway is that [AI] will not lead to mass unemployment or mass replacement of jobs wholesale, but it is leading to a big transformation of work and reorganizing what’s done by humans and what’s done by machines,” he says. In other words, AI won’t replace professionals like doctors, lawyers, or journalists—but those who work with AI will replace those who don’t.
AI won’t replace professionals like doctors, lawyers, or journalists—but those who work with AI will replace those who don’t.
Using data from the U.S. Department of Labor, which lists the tasks required for 950 occupations, Brynjolfsson’s team has been evaluating the impact of AI on each task. “What we initially found was that almost every occupation has some tasks that will be affected by [AI], but no occupation had every task being affected,” says Brynjolfsson. Being a kindergarten teacher, for example, requires the ability to do 37 different tasks; a radiologic technologist, 30. In radiology, AI can do one of those tasks exceptionally well: analyze medical images. But it is less well positioned to adjust the imaging equipment according to the specific requests of the physician, sedate patients before a procedure, or consult with doctors about results.
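To make the bookkeeping behind that finding concrete, here is a minimal sketch of a task-level analysis. It is an illustration only, not Brynjolfsson’s actual methodology: the tasks and exposure scores below are invented, and a real rubric would be far more nuanced.

```python
# Toy task-level "AI exposure" tally, in the spirit of rating individual
# occupational tasks rather than whole occupations. All tasks and scores
# here are invented for illustration.

occupations = {
    "radiologic technologist": {
        "analyze medical images": 0.9,    # the task AI already does well
        "adjust imaging equipment": 0.2,  # hands-on, physician-specific
        "sedate patients": 0.1,           # physical, high-stakes
        "consult with doctors about results": 0.3,
    },
    "kindergarten teacher": {
        "prepare lesson plans": 0.6,
        "supervise children": 0.0,
        "grade simple worksheets": 0.7,
        "comfort an upset child": 0.0,
    },
}

for job, tasks in occupations.items():
    affected = sum(1 for score in tasks.values() if score >= 0.5)
    mean_exposure = sum(tasks.values()) / len(tasks)
    print(f"{job}: {affected} of {len(tasks)} tasks plausibly AI-assisted "
          f"(mean exposure {mean_exposure:.2f})")
```

Even in this toy version, the pattern the researchers describe emerges: every occupation has some exposed tasks, but none has all of them.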
Or, consider the legal profession. Ho, an associate director of HAI, says AI isn’t about to replace human lawyers. For one thing, he says, chatbots that generate legal documents are prone to citing legal doctrines that don’t actually exist. But also, bots can’t engage in legal reasoning like humans do—at least not now, and possibly not ever. “Analogical reasoning, which is the skill set that the best lawyers really master, is the ability to take the body of case law in a common law system and identify the common principles to apply them to a set of things that we haven’t seen before,” he says. Whether AI will ever gain that skill is “a real if.”
That doesn’t mean AI couldn’t do good in the legal world. The United States has one of the highest numbers of lawyers per capita, says Ho, yet most people cannot afford legal services. Commonly needed legal support, such as drafting a demand letter to a landlord or writing a will, could become much more accessible with the advances in AI, provided those services don’t become so expensive that they exclude large swaths of people. Ho also points to the backlogs of relatively simple procedures—name changes or disability insurance applications—that get stuck for months at government agencies. AI legal assistants could help sort through records to speed up decisions. “There is a very real possibility,” says Ho, “that the incredible advances that we’re seeing in natural language processing can help support people in these kinds of processes.”
AI algorithms that generate text and images are beginning to pass the Turing Test, which assesses whether a machine can exhibit behavior indistinguishable from that of a human. Brynjolfsson calls that pursuit the Turing Trap. “We should not have machines that mimic,” he says. Better to use AI to extend, amplify, and augment human capabilities—as a sort of copilot that handles challenging or tedious tasks while we focus on more creative responsibilities.
Brynjolfsson believes the next decade could see a boost in productivity (defined as the amount of work produced per hour) across all jobs, fueled by AI that assists workers with repetitive tasks, such as bookkeeping or mining spreadsheets to summarize data. And because productivity growth across the economy is the best predictor of a rising standard of living, this scenario could mean far-reaching, positive changes: greater purchasing power for individuals, as well as societal improvements in areas like health care and poverty.
But there’s no guarantee. “The painful reality is there’s no economic law that says that everyone is going to benefit from technical change,” says Brynjolfsson. The rise of the internet helped more-skilled workers (by increasing their relative productivity and thereby relative demand) and hurt less-skilled workers, “so the gap grew bigger and bigger,” he says. This new wave of AI could be worse. Managers and organizations, not to mention governments, may struggle to keep up with technology. Leaders, Brynjolfsson says, should be proactive to ensure there are safety nets that cushion the early shocks and flexible labor markets that allow people to switch jobs more easily. And the charge for us all, he adds, is to think about how we—as individuals and as communities—can shape these technologies to create benefits for the many, not just the few.
Where the hiring of employees with AI skills is growing most: Hong Kong, Spain
Where AI skills penetrate the most across occupations: India, United States
—Stanford HAI 2023 AI Index Report
People put the care in health care. AI can help with health.
Imagine arriving at a doctor’s office and talking with a chatbot in the waiting room to create a previsit summary for your doctor. Then, when you’re with your physician, an AI system listens to the conversation and generates clinical notes for your chart. As a result, your doctor can focus on being present with you. Afterward, your doctor talks to a second chatbot about the most likely diagnosis, and that chatbot offers some diagnoses she may not have considered. The chatbot suggests tests to help narrow the possibilities and, once the results are in, makes a prediction about the best medication to offer, based on your individual circumstances.
“Those are really powerful things that could dramatically improve what a clinic visit looks like,” says Curtis Langlotz, a professor of radiology, of biomedical informatics, and of biomedical data science. Having AI log details or act as a diagnostic manual could free up a physician’s time and cognitive load to check a patient’s emotional state, integrate knowledge, and draw conclusions—the kinds of things at which humans excel. Li, whose research areas include intelligent systems for health care delivery, began to see the need for these tools while taking her father to his medical appointments. “Speech recognition and natural language models could do [many tasks] on the side, and the doctor could look at my dad, an elderly ailing person, who would love that human warmth. That is my purpose doing research,” she says.
For now, medical AI is at work seeing things that our eyes can’t. These models analyze images pixel by pixel, zooming in at the same level of granularity across the entire picture. “Many of the perception tasks that [radiologists] do are tasks that humans aren’t that good at, like finding a needle in a haystack, or quantifying the amount of tumor that’s throughout the body,” says Langlotz, ’81, MS ’83, MD ’89, PhD ’89. As for the near future, he says it’s possible that you could go home from a doctor’s visit and talk to a chatbot about your newly diagnosed condition. Under the 21st Century Cures Act, the federal government in 2021 began requiring that health care organizations give patients access to their own electronic health records. But that means people are often seeing terminology and test results they don’t understand. AI systems could explain the concepts in a more straightforward way. “They can vary their explanations based on the reading comprehension level of the patient. So that can be a very powerful tool,” Langlotz says.
Beyond helping patients, AI has one more prospective role in medicine: being the patient. Or, rather, millions of patients. Russ Altman, PhD ’89, MD ’90, a professor of bioengineering, of genetics, of medicine, and of biomedical data science, says that while researchers can currently share discoveries and statistics with one another, it’s cumbersome to de-identify study participants according to federal regulations in order to share a dataset with a colleague who might want to analyze it in a different way. So Altman’s team compressed 40,000 features from actual patients into a simpler, summarized list of 512 values. An AI algorithm uses those values to randomly generate “patients,” creating large synthetic datasets that scientists around the world could use to make novel discoveries about disease characteristics. Altman’s team has also pitted algorithms against each other: One algorithm creates patients while another judges whether the patients are real or fake. The creation algorithm learns, getting better and better until its AI arbiter can’t detect the difference between real and synthetic datasets, ostensibly making fake patients as realistic as possible—so that discoveries about, say, heart attacks might hold true in real people who have coronary heart disease.
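The adversarial setup Altman describes resembles what machine-learning researchers call a generative adversarial network, or GAN. The sketch below is a toy illustration of that idea, not Altman’s actual system; the layer sizes, noise dimension, optimizer settings, and the random stand-in for real records are all assumptions.

```python
# A minimal GAN sketch over 512-dimensional "patient" vectors (PyTorch).
# Illustrative only: the "real" records here are random noise stand-ins.
import torch
import torch.nn as nn

LATENT_DIM = 64     # size of the random noise fed to the creator (assumed)
PATIENT_DIM = 512   # compressed patient representation, per the article

creator = nn.Sequential(          # generates synthetic patient vectors
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, PATIENT_DIM),
)
judge = nn.Sequential(            # scores a vector: real vs. synthetic
    nn.Linear(PATIENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, 1),            # one logit per record
)

loss_fn = nn.BCEWithLogitsLoss()
c_opt = torch.optim.Adam(creator.parameters(), lr=2e-4)
j_opt = torch.optim.Adam(judge.parameters(), lr=2e-4)

real_patients = torch.randn(1024, PATIENT_DIM)  # stand-in for real data

for step in range(1000):
    batch = real_patients[torch.randint(0, 1024, (32,))]
    fake = creator(torch.randn(32, LATENT_DIM))

    # 1) Train the judge: label real records 1, synthetic records 0.
    j_opt.zero_grad()
    j_loss = (loss_fn(judge(batch), torch.ones(32, 1))
              + loss_fn(judge(fake.detach()), torch.zeros(32, 1)))
    j_loss.backward()
    j_opt.step()

    # 2) Train the creator: try to make the judge score fakes as real.
    c_opt.zero_grad()
    c_loss = loss_fn(judge(fake), torch.ones(32, 1))
    c_loss.backward()
    c_opt.step()
```

Training ends, in principle, when the judge’s guesses are no better than chance, the point at which the synthetic patients become indistinguishable from real ones, as the article describes.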
Altman doesn’t believe that AI will replace randomized, blinded, controlled human trials—“that’s the peak of evidence,” he says. But synthetic patient stand-ins could help contribute to our collective health in three situations. In studies looking to increase cohort size, synthetic patient data could be an effective way to improve the validity of results. Some studies rely on historical control subjects, which could be decades old; synthetic patient data may better correspond to that of current living humans. And synthetic patient data could serve a purpose in remedying medical inequity. Often, trials don’t include enough data from historically underrepresented groups. Altman says increasing participation of those groups in real clinical trials “should be job number one.” In addition to that, “taking full advantage of their data by multiplying it, the way we are doing for other patients, might be a good thing,” he says. “It may be part of a multipronged solution.”
In 2022, the greatest area of private investment in AI was in medical and health care: $6.1 billion
Among a sample of Americans, 40% are very or somewhat excited about AI diagnosing medical problems
—Stanford HAI 2023 AI Index Report
How to relate to a chatbot, if you must
Every human relationship we have must be nurtured with time and effort—two things AI is great at removing from most equations. But here we are in a world where you can pay $1 a minute to have a social media influencer—that is, the chatbot version of her—be your girlfriend. “I worry that it’s going to be easier to just talk to the AI and starve out those moments of connection [between people],” says Adam Miner, MS ’19, a clinical assistant professor of psychiatry and behavioral sciences. In human relationships, the times when we don’t agree teach us the most about how to communicate better, build trust, and strengthen bonds. With easy access to information—and validation—from a bot, Miner says, “does that diminish or wither our human connections?”
Amid a loneliness epidemic, talking to a chatbot could have benefits. Sometimes we might not want to disclose information to anyone, or we might not know a safe person to talk to. Miner cautions, though, that AI-human relationships bring issues—often the same ones that arise when we confide in other people. Chatbots can give us incorrect information. They can betray us, revealing sensitive information to someone else. And at their worst, they can give us horrible advice when we’re vulnerable. (In an extreme case earlier this year, a Belgian woman accused a chatbot named Eliza, which had allegedly presented itself as an emotional being, of persuading her husband to end his life.) “We don’t yet know how to make sure these AI systems say the right thing,” says Miner. “And, also, humans have a hard time saying the right thing.” Some of this comes down to our perceptions. These chatbots are so impressive in some domains, he says, that “we expect them to also thrive in difficult conversations. And of course they won’t.”
Even if AI can manage to say the right thing, the words may ring hollow. A study by Diyi Yang, who researches human communication in social contexts and aims to build socially aware language technologies, found that the more personal a message’s content—such as condolences after the death of a pet—the more uncomfortable people were that the message came from AI. “Saying something like, ‘Oh, I’m so sorry to hear what you are going through. I hope you feel better tomorrow’—although AI can produce this message, it wouldn’t really make you feel heard,” says the assistant professor of computer science. “It’s not the message [that matters]. It’s that there is some human there sending this to show their care and support.”
‘It’s like Grammarly for empathy.’
Who or what is crafting the message is becoming easier to disguise, but in Miner’s opinion, it’s critical that we know when a message is coming from AI. We tend to change our conversational style depending on with whom we think we’re speaking: more laid-back with family and friends; more formal with government officials or the police. “We can expect that same conversational change to occur if someone knows they’re talking to a chatbot,” Miner says. And we can expect people to feel embarrassed if they realize late in the game that they’ve been talking to a bot all along. If we’re unaware, we also might disclose something sensitive that an AI system records and then has access to forever. Or the system could be rating or evaluating us without our knowledge. For example, Miner’s research has shown that an AI algorithm can successfully detect depression by analyzing language via audio, video, and text. “How do we make sure those systems remain fair and respectful,” says Miner, “especially to groups that are already marginalized?” Informed consent, he says, “is a crucial part of it.”
There may be a better way forward: Both Yang and Miner are pursuing projects that allow AI chatbots to nudge humans in the right direction when chatting online with a peer in need, rather than replacing human communication. The capacity to vicariously experience the feelings of another is a hallmark of humanity—and there are linguistic patterns to the ways we convey it. AI algorithms could sift through millions of conversations to find those patterns, then use that knowledge to suggest the best language for us to use in the moment. Think of it, Miner says, “like Grammarly for empathy.”
Among a sample of Americans, those who report feeling excited about AI are most excited about:
31% its potential to make life and society better
13% its ability to make things more efficient
Those who report feeling more concerned than excited about AI worry about:
19% the loss of human jobs
16% surveillance/hacking/digital privacy
12% the lack of human connection
—Stanford HAI 2023 AI Index Report
You’ll still find meaning in making
In April, a song featuring the voices of Drake and the Weeknd went viral. Only they didn’t actually write or record the song. A TikTok user generated their vocals using AI, prompting questions about copyright law and possible reputational damage to artists. Meanwhile, ChatGPT is composing poems and DALL-E is producing remarkable images from brief descriptions. From the outside, it appears to be a devastating time to be an artist. But Wang, who specializes in computer music design, views AI-created products as simply different from their human-made counterparts. Many artists find more meaning in the process of creation, he says, than in the finished piece. “Making things is hard,” he says. But that effort—even when it’s painful—helps us understand something about ourselves. “And that process is something that I think is actually intrinsic and vital to the value and meaning of art.”
Wang considers Beethoven’s Third Symphony—widely accepted as a masterpiece and a turning point in music—a prime example. Beethoven, Wang explains, was slowly going deaf while he composed it. It was such a harrowing time that the composer contemplated taking his own life but ultimately decided that music would get him through. “I will never listen to that symphony ever in my life without thinking all about Beethoven’s story,” Wang says. If AI could be prompted to generate a symphony equally brilliant, would it be the same? “I would argue not by a mile.”
The meaning we can derive from making extends beyond the top echelon of artists to our everyday hobbies and pursuits. Wang recalls the joy he experienced when a backpacking trip along the John Muir Trail led him to a double rainbow over Evolution Lake. “You could not have teleported me or helicoptered me into Evolution Lake and have me experience it in the same way,” he says. “That experience came through the process of suffering.”
Still, we use all kinds of tools in our creative pursuits. Wang believes that AI can occupy a place in the process not unlike that of the humble paintbrush. In his winter quarter course Music and AI, for example, Wang asked his students to create three interactive AI utilities or toys. “These don’t have to be useful,” the prompt reads. “In fact, whimsical is good! Absurd is good! Playful is wonderful!” It’s one of the ways he gets students to think beyond technical aspects of what they’re building. “It’s an ethical and aesthetic and cultural question of, ‘How do we want to live with AI as part of our lives?’”
Take writing. Yang wonders whether, as AI becomes our primary ghostwriter, we will lose some of the depth and beauty of language spoken by individuals. “Usually, we think that writing expresses our identity or opinion in a very unique way. [If we all use AI] we might become very similar to each other,” she says. “I don’t know whether it’s going to be an increase or helpful for creativity, or if it’s going to be a decrease of the uniqueness of the creative selves that we have.”
At their best, some of our tools aid us in finding greater meaning and purpose in our lives. Li believes generative AI has a role to play. “It does change the way that we do things to a point that even fulfilling our purpose can be supercharged,” she says. On the other hand, Li points out that it’s up to us whether we use AI to our ultimate benefit, since some of us humans have a tendency to choose the “lazy” option when we have technology at hand—trading, say, the joy of movement for the ease of an elevator.
Our future with AI—and all the myriad ways it already is changing and will change humanity—is up to us. “Being human is a profound experience,” says Li. “We are mortal. We are vulnerable. But we are so rich. And if AI takes that away from us, it’s a failure of the species.”
Global opinions on products and services using AI:
They make my life easier: 60%
They make me nervous: 39%
They have profoundly changed my life in the past 3-5 years: 49%
—Stanford HAI 2023 AI Index Report
Allison Whitten is a science writer in Nashville, Tenn. Email her at stanford.magazine@stanford.edu.
Report Citation: Nestor Maslej, Loredana Fattorini, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Helen Ngo, Juan Carlos Niebles, Vanessa Parli, Yoav Shoham, Russell Wald, Jack Clark, and Raymond Perrault, “The AI Index 2023 Annual Report,” AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2023. The AI Index 2023 Annual Report by Stanford University is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International license.