Google's DeepMind has already produced systems that can beat the world's best Go players, as well as systems that can compete with humans in StarCraft II. But according to IEEE Spectrum, DeepMind's AlphaZero system is in a league of its own. AlphaZero can allegedly "crack any game that provides all the information that's relevant to decision-making," given a little time to train. For example, the system taught itself to beat Stockfish, one of the best and most highly developed chess programs in the world, in a mere 24 hours. While the AI has only been trained on Go, chess and shogi, the company claims that more complicated games like poker or even Minecraft are within reach. For what it's worth, the system appears to be very good at Go: it seriously stressed out a high-ranking Go player in the video here.

Games that hide relevant information from players, so-called games of "imperfect" information, remain a tougher challenge. Poker is the classic example: players can hold their cards close to their chests. Other examples include many multiplayer games, such as StarCraft II, Dota and Minecraft. But they may not pose a worthy challenge for long. "Those multiplayer games are harder than Go, but not that much harder," Murray Campbell tells IEEE Spectrum. "A group has already beaten the best players at Dota 2, though it was a restricted version of the game; Starcraft may be a little harder. I think both games are within 2 to 3 years of solution."