Well, your laptop would have instant access to billions of pieces of useful information — the dates of every Civil War battle, the conjugation of the German verb for “to bleach,” the most stable conformational isomer of trans-1-ethyl-2-methylcyclohexane — but only you would be able to walk to the exam room, turn over the test when it’s time to begin, read the questions and hold a pencil to write down the answers. Ideally, of course, you would also know many of the answers — that the Battle of Chickamauga happened in 1863, or that acetylene is the common name of ethyne — but, for most of us, that’s the hardest part. For our laptops, it’s the easiest.
This paradox has permeated artificial intelligence (AI) research for decades: The things that are most challenging for humans are often easy to teach machines, but the tasks that every toddler is capable of performing are surprisingly difficult for machines to learn.
As Harvard psychology professor Steven Pinker put it in his 1994 book “The Language Instinct,” “The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard.”
The early breakthroughs in artificial intelligence technology seemed to imitate, and at times even surpass, the very highest levels of human intelligence: what we expected to be the hard problems. These “expert” machines were very good at performing very specific tasks such as assisting doctors with medical diagnoses, troubleshooting broken mechanical systems or, perhaps most famously, playing chess.
Sometimes, it seemed, they were better than the very best of us. In 1997, for example, IBM's Deep Blue computer beat then-reigning world champion Garry Kasparov in a chess match.
“It was generally believed that you had to be really smart and skillful and have a good memory and a lot of practice in order to be a doctor or a champion chess player,” said Ron Brachman ’71, a vice president at Yahoo! who has worked on AI research throughout his career. “So when people originally saw these computers that could diagnose illness or play chess, they thought that this was the ultimate kind of intelligence. But then we started realizing that each of those systems’ capabilities were very, very narrow. The computers were sort of idiot savants.”
Deep Blue, for instance, could play chess very well but couldn’t do anything else. In fact, Deep Blue couldn’t even really “play chess”: It was completely incapable of doing anything other than deciding how to move the chess pieces.
“We tend to assume that the hard part of chess is deciding what moves to make,” said computer science professor Robert Schapire, who researches machine learning. “What about actually playing the game? What about looking at the board, or picking up the pieces and physically moving them? Deep Blue couldn’t do any of that.”
The easiest elements of chess for humans — moving pawns forward or observing opponents’ moves — were too hard for Deep Blue, despite its superhuman mastery of chess.
“It always turns out to be more difficult to implement every-day common-sense reasoning than it does to implement expert reasoning [in machines],” Brachman said. “It’s really hard, for instance, for machines to learn that when you’re wearing clothes, and you go from one room to another, your clothes go with you.”
Researchers in the field of machine learning, who study how machines can acquire and store information, say that the key to computers acquiring more diverse and extensive intelligence lies in developing a greater variety of learning techniques for them. Computers, like humans, can learn in many different ways, including through observation, experimentation, past experience, reading and manipulation of the physical world, Schapire said.
For instance, when you buy two books on amazon.com, the website learns that people who are interested in reading one of them may also be interested in the other. If 100 people all purchase that same pair of books, the site may learn from experience and observation that when someone buys one of those books, it should recommend the other to that customer.
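To make that concrete, here is a deliberately simplified sketch of co-purchase counting, roughly the idea behind "customers who bought this also bought," though certainly not Amazon's actual system. The order history and the recommend function are invented for illustration.

```python
# A toy sketch of co-purchase counting -- not Amazon's actual system --
# that learns "people who bought X also bought Y" from past orders.
from collections import Counter
from itertools import combinations

# Invented order history: each set is the books from one purchase.
orders = [
    {"Norby, the Mixed-Up Robot", "Fantastic Voyage II"},
    {"Norby, the Mixed-Up Robot", "Fantastic Voyage II"},
    {"Norby, the Mixed-Up Robot", "I, Robot"},
]

# Count how often each pair of books appears in the same order.
pair_counts = Counter()
for order in orders:
    for a, b in combinations(sorted(order), 2):
        pair_counts[(a, b)] += 1

def recommend(book, top=3):
    """Suggest the titles most often bought alongside `book`."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a == book:
            scores[b] += n
        elif b == book:
            scores[a] += n
    return [title for title, _ in scores.most_common(top)]

print(recommend("Norby, the Mixed-Up Robot"))
# ['Fantastic Voyage II', 'I, Robot']
```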

Alternatively, when you select a book to purchase online, say Janet and Isaac Asimov’s “Norby, the Mixed-Up Robot,” the website can search for other books with matching key terms and recommend books that share an author, such as “Fantastic Voyage II: Destination Brain,” or have similar titles, like Allen Dale Anderson’s genealogical study “The Ancestors and Descendants of Pernille and Halvor Norby.”
By experimenting with these different types of recommendations and studying which are most successful, websites can learn about what products might interest customers.
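One way to picture that experimentation, in a minimal sketch with invented numbers: show customers recommendations from each strategy, record how often they click, and usually favor whichever strategy has worked best so far while occasionally trying the alternatives (a simple "epsilon-greedy" rule, used here purely as an illustration).

```python
import random

# Invented results from showing two kinds of recommendations to customers.
results = {
    "bought-together": {"shown": 1000, "clicked": 87},
    "keyword-match":   {"shown": 1000, "clicked": 52},
}

def click_rate(stats):
    return stats["clicked"] / stats["shown"]

def pick_strategy(results, explore=0.1):
    """Mostly exploit the best strategy so far, but keep experimenting."""
    if random.random() < explore:
        return random.choice(list(results))                      # explore
    return max(results, key=lambda s: click_rate(results[s]))    # exploit

print(pick_strategy(results))  # usually "bought-together"
```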
Though they learn and reason in many similar ways, people and machines are often successful in very different areas of knowledge. Humans, for instance, have excellent language skills: practically every person on the planet learns to speak and understand at least one language from a relatively young age. Computers, on the other hand, excel at searching and sorting through huge amounts of data.
These differences mean that computer and human intelligence may complement each other especially well in the future. The ability of machines to process vast quantities of information, for instance, will be important for fields like science and finance, in which people are often looking for patterns and trends in enormous data sets, Schapire said.
“One of machine learning’s next success stories is going to be in becoming a tool for scientists to help make sense of their data and inform new hypotheses,” computer science professor David Blei said. “Besides just confirming hypotheses with data analysis, these new tools will guide us to find structures and patterns in the data that we wouldn’t otherwise see.”
In the past, many visions of AI have focused on a machine’s ability to imitate human intelligence, but the question of how machine intelligence might instead supplement human knowledge and open our eyes to new ways of learning is equally important. Our computers learn a lot from us — what kinds of e-mail to classify as spam or which of thousands of search-engine results are most useful and should be listed first — but there’s also a lot we can learn from computers.
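The spam example hints at how that learning from us works: the mail program watches which messages we mark as junk and which we keep, and builds a statistical picture of the difference. The sketch below is a bare-bones, invented illustration of that idea, scoring words by how often they appear in messages the user has already labeled; real filters are far more sophisticated.

```python
# A toy spam scorer learned from user-labeled messages (illustration only).
from collections import Counter

labeled = [
    ("win a free prize now", "spam"),
    ("free money click now", "spam"),
    ("lunch meeting moved to noon", "ham"),
    ("draft of the column attached", "ham"),
]

spam_words, ham_words = Counter(), Counter()
for text, label in labeled:
    (spam_words if label == "spam" else ham_words).update(text.split())

def looks_like_spam(text):
    """Count each word as evidence for or against spam."""
    score = sum(spam_words[w] - ham_words[w] for w in text.split())
    return score > 0

print(looks_like_spam("claim your free prize"))     # True
print(looks_like_spam("meeting about the column"))  # False
```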
On my own, I almost certainly won't ace my midterms, and I suspect that my MacBook wouldn't either. Together, though, we'd make one hell of a team. As we learn more about how machines can supplement our own intelligence, I can even imagine a future of exams that actually test our ability to work together, though I wonder whether my contribution would turn out to be some combination of human seeing-eye dog and autonomous wheelchair.
For now, I’m in control. I wrote this column with the assistance of my laptop and not the other way around. Perhaps, some day, our roles will be reversed and, who knows, the readers (browsers?) may like it better that way.
This is the second in a series of articles examining current and emerging artificial intelligence technologies and their impact on today’s world.