It might not be making front page news, but I’ve been following a game of Go that’s happening over in South Korea at the moment. What’s at stake? Google’s DeepMind artificial intelligence is matched up against one of the best players ever, Lee Se-dol. Ahead of the matches (they’re playing 5 total), the human was totally confident, predicting a 5-0 victory or maybe (if he was feeling generous) a 4-1 win.
DeepMind took the first game, stunning the Go world.
Why do I think this is particularly noteworthy? For a couple of reasons. First, I loved reading about the way DeepMind won the second match. For much of the early game, viewers didn’t understand what the computer was up to. It seemed to be making random misplays, and there was some question whether something was wrong with its programming. Fast forward to the end of the game, and those “misplays” turned out to be key strategic decisions that none of the Go fans (or even the experts) understood at the time. In retrospect, the experts are seeing why those moves were brilliant, and contemplating how they will change strategies in the future.
In other words, the computer was playing beyond the ability of our experts to understand it.
Second, there’s the way DeepMind has been programmed. Or, rather, not programmed. I’m not well-versed enough in the field to fully understand this, but they aren’t programming the computer with strategies for winning. They’re letting it learn from watching games and through trial and error, in a process that much more closely resembles how humans learn.
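To make the "trial and error" idea concrete, here's a minimal toy sketch of that style of learning. This is my own illustration, not DeepMind's actual training method (which is far more sophisticated): the program is never told which move is best, it just plays many games and nudges its estimates toward whatever wins. The game, moves, and win rates below are all made up for the example.

```python
import random

random.seed(0)

# Hidden from the learner: each move's true chance of winning a game.
WIN_PROB = {"a": 0.2, "b": 0.5, "c": 0.8}

def play(move):
    """Simulate one game: 1 for a win, 0 for a loss."""
    return 1 if random.random() < WIN_PROB[move] else 0

values = {m: 0.0 for m in WIN_PROB}  # learned win-rate estimate per move
counts = {m: 0 for m in WIN_PROB}    # how often each move has been tried

for game in range(5000):
    # Trial and error: explore a random move sometimes,
    # otherwise exploit the best-known move so far.
    if random.random() < 0.2:
        move = random.choice(list(values))
    else:
        move = max(values, key=values.get)
    reward = play(move)
    counts[move] += 1
    # Incremental average: nudge the estimate toward the observed result.
    values[move] += (reward - values[move]) / counts[move]

best = max(values, key=values.get)
print("learned best move:", best)
```

After enough games the learner settles on the move with the highest win rate, even though nobody ever wrote "move c is good" into the code. Scale the idea up enormously (deep networks instead of a lookup table, Go positions instead of three moves) and you get the flavor of what's happening here.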
Think about that for a bit. A computer is teaching itself to beat humans at an incredibly complex game, and it’s doing it so well that it’s now outthinking the best human players.
Welcome to the world of tomorrow, friends. Next stop? Skynet.