
Roommate upgrade: Alexis 2.0

I’ve been thinking about Ramona a lot recently, as my roommate of three years, Alexis, prepares to jet off to Spain for the spring semester, forcing me to seriously consider who will fill her hot-pink stiletto shoes in 2009. Who else will mock other people with me at 3 a.m., entertain me by breaking a new cell phone every month, or sing along loudly and off-key to Miley Cyrus songs while I’m trying to finish problem sets?

It’s this last concern that made me think of Ramona, and the more I learned about her, the more I discovered how much she and Alexis have in common: They’re both bad singers, they both have brown hair, and they both like tight clothes, cannoli and Australian sheepdogs. Bingo!

So, I was all set to replace Alexis with Ramona when it occurred to me: What happens when Alexis comes back? What if it turns out that Ramona, who, after all, isn’t likely to steal my hair gel or reserve a corner of our room for her giant stuffed dog, is a better roommate? For all my considerable charm and tact, would I be able to tell Alexis I’d chosen a different roommate for our senior year without hurting her feelings?

I started thinking more seriously about the consequences of replacing people with computers. I don’t just mean roommates; I mean the surgical scrub nurses, taxi drivers and foreign-language translators whose roles, as I’ve argued in previous AI columns, might someday be better performed by robots — if they’re not already.

This transition from a human workforce to a largely robotic one will bring many direct benefits, including better healthcare and fewer car accidents, but one has to wonder how appreciative the 43,000 out-of-work New York City cab drivers will be.

Still, such changes may prove beneficial even for human employment, said computer science professor Ed Felten, director of the Center for Information Technology Policy. “Some jobs will be lost, but new jobs will be created,” he explained. “The new jobs will generally be better and more interesting than the old ones.”

Certainly, a computer-run world would require more engineers and programmers to build, maintain and manage robots. The more manpower we devote to these tasks, however, the more progress we are likely to make in developing even better robots, and the more useless we, as humans, are likely to become.

Marvin Minsky GS ’54, co-founder of MIT’s AI laboratory, said he thinks this may simply be the natural order of things.

“Will robots inherit the earth? Yes, but they will be our children,” Minsky wrote in a 1994 Scientific American article. “We owe our minds to the deaths and lives of all the creatures that were ever engaged in the struggle called Evolution. Our job is to see that all this work shall not end up in meaningless waste.”

We are still a ways off from this next stage of evolution, if, indeed, it is in our future. We don’t yet have robots we treat as children or consider our moral and intellectual equals, much less superiors, but as computers assume more and more responsibility in the modern world, we will be forced to consider what they can and can’t be held accountable for. When a patient dies on the operating table during a surgery performed by a robot, whom does the grieving family sue? When two autonomous vehicles crash into each other, killing six human passengers, who’s to blame?

These questions don’t have clear answers — not yet, anyway. As more AI technology penetrates our daily lives, though, policymakers — many of whom are likely to have only a feeble understanding of the underlying science — will be forced to define legal guidelines for a robotic world.

“Technology designers will have some responsibility for what their products do, but the users of technology will also have some responsibility to oversee what the machines are doing,” Felten said. “It will be difficult to draw lines in specific cases. Courts and policymakers will struggle to understand how their decisions will apply to complex technological systems.”

Ron Brachman ’71, who directed AT&T’s AI Principles Research Department and headed the cognitive computation arm of the Defense Advanced Research Projects Agency (DARPA), said in an e-mail that the complexity of these technological systems may pose the greatest challenge to those involved in developing new policies.

“One thing that does concern me is the potential for policy- or law-making to be driven by a lack of understanding of what machines can and cannot do,” he said. “The potential for some concerned lawmaker to oversimplify machine intelligence and appeal to fear and potential panic by constituents is significant, and it’s conceivable that research and development could be stopped prematurely by people who don’t understand the science.”

AI technology policy will undoubtedly be greatly affected by how well we can manipulate robots to resemble ourselves. Achieving human-level intelligence in computers, for instance, could very possibly mean holding them to human-level legal standards.

“Much depends on whether we succeed in building consciousness and emotion into robots,” psychology professor Daniel Osherson said in an e-mail. “If so, we may end up considering them as (legal and moral) persons — thus, as loci of responsibility, merit and blame. Constructing robots of this kind, however, will require not just technical prowess but also conceptual breakthroughs in understanding the nature of experience.”

Even before we make these kinds of breakthroughs, though, computers will be capable of filling many roles today occupied primarily by humans. After all, you don’t really need emotion in your cab driver. Just because you don’t need chatty cab drivers, however, doesn’t mean you won’t miss them when they’re gone.

“In addition to a loss of jobs, I also worry about a loss of human contact with technological advances,” computer science professor Robert Schapire said. “When shopping at Home Depot or certain supermarkets, I find it sad to see the check-out clerks being replaced by self-serve check-out stations. I always try to go to a lane with a human operator just for those few seconds of human contact and to try to hold back the day when there are no humans at all in the store.”

I, too, find myself worrying about a loss of human contact as I watch Alexis scrambling to make visa appointments and send in study abroad forms a mere month or two past their deadlines. After all, even if Ramona’s fun, she’ll probably never think to buy me birthday cards one-and-a-half years late, update me on the rugby team drama or abandon homework at 5 a.m. to hunt down a pint of Ben & Jerry’s Half-Baked ice cream in the frigid New Jersey winter. Of course, I could program Ramona to do any of the things Alexis has done in the past, but how could I program her to do whatever crazy thing Alexis thinks of next? And, what’s more, if there’s one robot in the room, wouldn’t she eventually want to trade me in for an upgrade? Surely someone out there is already building a better technocrat.

This is the sixth and final article in a series examining current and emerging artificial intelligence technologies and their impact on today’s world.