
They're watching. How can that be a good thing?

Robots will intrude on privacy, and spur better laws as a result.

January/February 2014

What is it about robots? Our fascination with these machines dates back centuries. The ancient Greeks built them. Robots haunted the Industrial Revolution. For a time in the 1980s, the decade that brought us Short Circuit, The Terminator and RoboCop, it seemed that the United States had caught robot fever.

Recently a number of factors—cheaper sensors, advances in software—have conspired to bring robotics as close as ever to the mainstream. Sophisticated investors bet heavily on robotics startups. Amazon purchased a robotics company for $775 million to organize its warehouses. Hardly a week goes by without a major publication taking up the issue of cars that drive themselves or the day U.S. skies will darken with private drones.

Robots occupy a special place in our imagination; their impending ubiquity will influence American society in ways prosaic and profound. An obvious starting point for this impact is privacy. Robots have the potential to compromise privacy in new ways, but also to drag our privacy law finally into the 21st century.

Built to sense and navigate the world, robots affect privacy practically by definition. They can go places people cannot go, see things people cannot see; indeed, one of the principal uses to which we put robots is surveillance. Privacy has defined the debate about the domestic use of drones, but robots of all kinds have observation as their primary purpose. Why else build a robot that can climb the side of a building or squeeze under a door? To what other purpose do we put a robot that looks like a bug or a hummingbird?

It is not just the capability to engage in cheaper, more pervasive surveillance that sets robots apart from previous technologies; their very presence can affect us. Stanford professor Clifford Nass was a man "married to amazement," to paraphrase the poet Mary Oliver. His death last November saddened so many within and beyond the Stanford community. Among other contributions, Professor Nass helped pioneer an area of study known as CASA—Computers As Social Actors. What CASA shows is that people are hardwired to react to anthropomorphic technology as though a person were present. This includes the feeling of being observed.

Robots, meanwhile, far more than any previous technology, are designed to engage people socially. Author Pamela McCorduck tells a story about the Nursebot, a robotic assistant for the elderly developed at Carnegie Mellon. The first version was apparently very machine-like; it had a boxy appearance and an unnatural voice. Nursebot reminded patients to take their pills, and patients usually ignored it. But when the team made the next version of Nursebot—nicknamed "Pearl"—more humanlike, suddenly patients responded to requests almost as though a human nurse had made them. Indeed, up to a point, the more humanlike a robot appears, the more people like and respond to it.

As a consequence, robots feel different to us than other appliances or objects. Following a series of studies in which children and others refused to characterize robots as either objects or living beings, my colleague Peter Kahn at the University of Washington proposes that robots may belong in an entirely new ontological category. "We do not, for example, talk to a brick wall and expect it to talk back," Kahn and his colleagues observe in a recent article, "nor do we attribute to it mental capabilities or think of it as a possible friend. But robots appear different."

SPY BOT: The nano hummingbird, developed by AeroVironment for DARPA, can hover up to 8 minutes and dart through doorways. Its onboard camera delivers real-time video to a remote operator. (Photo: Courtesy AeroVironment, Inc.)

That we confuse robots for people at some hardwired level implicates privacy in new ways. I had occasion recently to tour Microsoft's Home of the Future—a full-scale demo of domestic technologies from the not-so-distant future. If, in the kitchen, you take out a bag of flour and place it on the counter, the light in the room suddenly changes. A disembodied voice addresses you: "Hello. I'm Grace. It looks like you're baking a cake. Would you like a recipe?" Which is great . . . if you're in fact trying to bake a cake. Less welcome, perhaps, if you're listening to music and dancing in your underwear. The fact that we are "wired for speech," to borrow again from Professor Nass, means that the introduction of anthropomorphic design into private spaces like the home must be done carefully to avoid interrupting dwindling opportunities for solitude.

There is also the related issue of intimacy. No one much cares how you use your washing machine. But the way you engage with a technology that feels social could be far more revealing. Robot & Frank is a lovely 2012 movie about an older ex-jewel thief who receives a robot helper as a gift from his son. The relationship starts off rocky but blossoms into what can only be characterized as a friendship. The two appear to share trust, even plan and commit a heist together (the law is not a part of the robot's programming). Indeed, the film's central conflict turns on whether the robot should "sacrifice" his memory so as not to incriminate Frank. But the evidence the robot carries—the video of the actual theft—pales in comparison to the true record of events.

A jury would see what the audience has seen: a vulnerable and lonely old man developing a connection over time. Truly the stuff of intimacy.

I write at length about the privacy implications of anthropomorphic design like Grace, and Siri of iPhone fame, in an article entitled "People Can Be So Fake." I write more generally about how robots will affect privacy in an MIT Press book chapter (creatively titled "Robots & Privacy"). A couple of years ago, however, in an essay in Stanford Law Review Online, I argued something else entirely: The fact that we can easily visualize the problem of robot surveillance—literally picture it—means that robots have the potential to spark a necessary debate around privacy. This, coupled with recent revelations about National Security Agency spying, may mean we finally see an update to our privacy laws that reflects the contemporary state of surveillance technology.

Think of how much attention has been paid to the use of drones by U.S. law enforcement. I live in Seattle, where the police department owns two drones it cannot use. When the department announced its intention to use drones, people protested so vociferously that the mayor quickly grounded the program. Similarly, when I and others testified before the Senate Judiciary Committee last year about the domestic use of drones, protesters from Code Pink, a self-described social justice group, heckled the industry representative. A community in Colorado went so far as to propose issuing hunting licenses to shoot down drones overhead. You can't make this stuff up.

A rendering of a robotic spider. (Image: iStock)

People really get the problem with drone surveillance. And for good reason: Drones will indeed make it easier and cheaper to watch the populace, with all the attendant worries about dragnet or ubiquitous surveillance. But they are hardly the first technology to do so. Where is the widespread, visceral indignation over loyalty cards, centralized databases or online cookies? Privacy violations in the digital age tend to be hard to visualize. Maybe somewhere, in some distant server farm, the government correlates two pieces of disparate information. Perhaps a shopper's purchase of an organic product increases the likelihood that she is a Democrat just enough that her identity gets sold to a campaign.

One consequence is that privacy law tends to lag behind surveillance technology. For example: The federal law governing electronic eavesdropping dates back to 1986, a decade before the commercialization of the Internet. Even the revelations that the NSA monitors Internet traffic and collects phone records in bulk have resulted in no change in the law to date, whereas more than a dozen states have passed or considered laws regulating the use of drones.

I also have explored how the very design of technology can convey a kind of notice to the consumer or citizen that they are being observed—far more effectively than the written documents we rely on today. Given that robots actually do collect, process and store data, perhaps it is a good thing that, for once, they look like what they are. As I noted earlier, a central problem with digital privacy is that it does not feel as if anyone is tracking you, when in fact many are. As interfaces become more anthropomorphic, the reality of tracking finally realigns with the user's experience of the technology—a phenomenon I have called "visceral notice." You do not need to read a long, legalistic privacy policy to realize that the thing staring at you can see you. It is of course possible that we will acclimate to anthropomorphic technology in our midst; I doubt it, given how hardwired the reaction seems to be. Either way, robots have the potential, at least for a time, to focus our attention on the effects of living among sophisticated surveillance technologies, and to help us strike a better balance than we have managed to date.

The robots are coming. You will see more and more evidence, if you are not convinced already. Robots, once they leave the factory and battlefield and enter everyday life, will have complex implications for privacy and other values, owing largely to their special place among objects. But this does not translate straightforwardly into even less privacy. Rather, we should be taking advantage of the opening of what political scientist Priscilla Regan calls a "policy window." Robotic technology reintroduces a long-missing feeling into human affairs: being chronically observed. And that's not necessarily a bad thing.


Ryan Calo is a law professor at the University of Washington and an affiliate scholar at Stanford Law School's Center for Internet and Society.
