In an interview that sounds suspiciously like that scene in a movie where one misguided person tries to convince everyone else that the monsters are harmless, the head of Facebook’s AI research division has suggested that we should stop worrying about Terminators trying to wipe out mankind—or, at the very least, that we should stop putting pictures of Terminators on every news article about artificial intelligence (a request The A.V. Club vehemently denies). The aforementioned AI expert is Yann LeCun, who runs Facebook’s FAIR lab. He recently spoke with The Verge about how people perceive AI research, and though he doesn’t specifically list any examples, he does call out the “complete misrepresentation” in stories like, say, this one about a Facebook chatbot that almost became evil.
That being said, LeCun does think people in general are becoming “more aware” of what’s really happening with AI. “It used to be that you could not see an article in the press without the picture being Terminator. It was always Terminator, 100 percent. And you see less of that now, and that’s a good thing.” He’s definitely right about that, as some articles about AI now use images from Blade Runner 2049 or The Matrix instead, but his larger argument is that we’re “very far” from building anything close to a Terminator, so it’s silly to worry about someone developing an AI that will wipe us all out.
LeCun even brings up AlphaGo, the AI that mastered a famously complex board game and was referred to as “the latest salvo in the ongoing war between man and machine” by some dumb website. He says teaching an AI to be really good at a game is “completely separate” from making “intelligent robots running round the streets,” adding that “in terms of general intelligence, we’re not even close to a rat.” He also admits that there are “real dangers” that go along with developing artificial intelligence, but that “there’s no danger in the immediate or even medium term.”
Of course, this is all under the assumption that LeCun himself has not been replaced by a malevolent AI, but surely there’s “no danger” of that happening, right?