Jeopardy!: IBM Challenge (2011)
Community Grade: B+
Well, that was… fun? Instructive? Terrifying? Throughout the three-day, two-game competition between IBM’s Jeopardy-playing supercomputer “Watson” and former champs Ken Jennings and Brad Rutter, the Internet has been buzzing with blog-essays and social media posts about the show, along with a lot of debate over whether the success of Watson is an example of how far we’ve come in computer technology or whether this is just an amusing—and perhaps unfair—parlor trick. Myself, I’m split, though I lean more to the “this is pretty amazing” side. I watched the Nova episode last week about Watson, so I know that the breakthroughs that IBM has made in developing this machine could have real-world applications to everything from search-engine technology to robotics. Getting a computer to understand and respond to the various quirks of human communication isn’t easy. But at the same time, given that the Watson team knew the kinds of questions and answers that Jeopardy requires, they had the advantage of building a machine that wouldn’t have to answer anything and everything, just trivia questions. So as a demonstration of what we can get a computer to do, the Watson project is both awe-inspiring and—speaking strictly as a novice in these matters, mind you—unsurprising.
As television though, these three days of Jeopardy have been pretty thrilling. (And here I speak not as a novice, but as a confirmed game show fanatic.)
Granted, Watson’s ability to process information quickly and buzz in with a precision no human can match made for stretches of both games where the computer dominated to an exhausting degree. But the producers made a few smart choices. First, by spreading the first game across two days and enhancing it with explanations for how Watson came to be, Jeopardy made the computer’s performance look all the more impressive (especially to viewers who missed Nova). Seeing a computer answering questions quickly is cool enough, but it’s better once you see all the practice sessions where the computer made dumb mistakes. Watson had to come a long way to achieve its current rate of success.
Second, I thought the box at the bottom of the screen displaying Watson’s three top potential answers was a nifty touch, even if it made playing along at home a little harder. The idea was to explain the calculations behind Watson’s answers, because seeing its thought processes helps us to understand how it reads the questions. (Yes, yes, I know that on Jeopardy, the answers come first, and the contestants provide the questions. It just makes more sense to describe the show the other way ‘round.)
Still, I can’t deny that I was pulling for the humans here—and Jennings in particular—which meant that the most exciting moments for me in these episodes were the parts where Watson whiffed. Particularly fascinating was seeing just how the computer made mistakes. On day one, for example, Watson didn’t seem to understand that all the answers in a certain category would be decades, and so it kept trying to calculate exact years. Watson also can’t “hear,” so when Ken Jennings gave a wrong answer on day one, Watson didn’t know what Jennings had said, and it repeated the same wrong answer. (I wonder if it made Jennings feel better that the computer had come to the same incorrect conclusion.) Sometimes Watson would seize on the wrong words in a question and would deliver an answer that didn’t satisfy all (or any) of the question’s requirements. And on day three especially, Watson seemed far more tentative, answering fewer questions, perhaps because the questions either had too many data points or not enough.
Of course Watson still won handily, even though the second game was closer than I would’ve expected after the shellacking of the first game. I don’t know whether we’ll look back at this moment as a landmark for computing the way we do with Deep Blue’s defeat of Garry Kasparov—a match so famous that my chess-crazy 9-year-old can recount it almost move-by-move—but I do know that these weren’t any ordinary Jeopardy episodes and that the conversations they’ve started about everything from artificial intelligence to “fairness” may end up being an even greater contribution to society than any voice-recognition software this project spawns.
And now, one more thing: Usually we avoid ending reviews with the phrase “What did you think?” because we trust that you all know by now that you’re free to comment in the section below. But this time I am going to ask, because I know that’s one question that—at the moment at least—Watson can’t answer.
So what did you think?