AlphaZero Can Definitely Beat You At Chess, and is Coming to Other Games Soon

AlphaAtlas

Google's DeepMind has already produced systems that can beat the world's best Go players, as well as systems that can compete with humans in StarCraft II. But according to IEEE Spectrum, DeepMind's AlphaZero system is in a league of its own. AlphaZero can allegedly "crack any game that provides all the information that's relevant to decision-making," given a little time to train. For example, the system taught itself to beat Stockfish, one of the best and most highly developed chess programs in the world, in a mere 24 hours. While the AI has only been trained on Go, chess, and shogi, the company claims that more complicated games like poker or even Minecraft are within reach.

For what it's worth, the system appears to be very good at Go, as it seriously stressed out a high ranking Go player in the video here.

Poker furnishes a good example of such games of "imperfect" information: players can hold their cards close to their chests. Other examples include many multiplayer games, such as StarCraft II, Dota, and Minecraft. But they may not pose a worthy challenge for long. "Those multiplayer games are harder than Go, but not that much harder," Campbell tells IEEE Spectrum. "A group has already beaten the best players at Dota 2, though it was a restricted version of the game; StarCraft may be a little harder. I think both games are within 2 to 3 years of solution."
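
To make that "imperfect information" distinction concrete, here is a minimal sketch in Python (hypothetical classes, not anything from DeepMind's code): in chess both players observe the entire board, while under fog of war each player only gets a masked view of the true state.

```python
# Minimal sketch of perfect vs. imperfect information (hypothetical classes,
# not DeepMind code). In a perfect-information game every player observes the
# full state; in an imperfect-information game each player sees a masked view.

class ChessState:
    """Perfect information: the whole board is visible to both players."""
    def __init__(self, board):
        self.board = board

    def observation(self, player):
        return self.board  # identical for every player


class FogOfWarState:
    """Imperfect information: tiles outside a player's vision are hidden."""
    def __init__(self, tiles, vision):
        self.tiles = tiles    # {position: contents}
        self.vision = vision  # {player: set of visible positions}

    def observation(self, player):
        # An agent handed the full self.tiles dict here would be "cheating"
        # in the sense debated further down the thread.
        return {pos: contents for pos, contents in self.tiles.items()
                if pos in self.vision[player]}
```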
 
While this could eventually lead to better game AIs, it can be very frustrating to play against something that always wins. So they need ways of introducing mistakes so the AI can play at lower levels. I'd much prefer that to the big advantages the AI gets in games like Civilization. Maybe one way to do this is to copy the AI at various points in its learning process.
 

One possibility is to keep multiple snapshots of the AI from along its learning: Easy is what it learned after a few hours, Medium after ten, Impossible after a full day.
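
Something like this rough sketch, maybe (hypothetical file names and a generic PyTorch-style loop, not anything DeepMind has published): snapshot the network at fixed training milestones, then load whichever snapshot matches the chosen difficulty.

```python
# Rough sketch of checkpoint-based difficulty levels (hypothetical names and a
# generic PyTorch-style loop; not DeepMind's actual training code).
import torch

CHECKPOINT_HOURS = {"easy": 2, "medium": 10, "impossible": 24}

def train_and_snapshot(model, train_one_hour):
    """Train and save a copy of the weights at each difficulty milestone."""
    for hour in range(1, max(CHECKPOINT_HOURS.values()) + 1):
        train_one_hour(model)  # one hour of self-play training
        for name, cutoff in CHECKPOINT_HOURS.items():
            if hour == cutoff:
                torch.save(model.state_dict(), f"{name}.pt")

def load_opponent(model, difficulty):
    """Restore the snapshot that matches the requested difficulty level."""
    model.load_state_dict(torch.load(f"{difficulty}.pt"))
    return model
```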

I'd like to see how big, memory-wise, these AIs get. Do they stay relatively static in size, grow linearly, or grow exponentially?
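
For what it's worth, the network DeepMind describes for AlphaZero is a fixed-size residual tower, so the parameter count is set by the architecture up front and stays constant; extra training hours only change the weight values, not the number of weights. A back-of-the-envelope count (approximate block and filter numbers, convolution weights only, so treat it as an order-of-magnitude estimate):

```python
# Back-of-the-envelope parameter count for a fixed residual tower
# (simplified: conv weights only, no batch norm or policy/value heads;
# the real AlphaZero architecture differs in detail).
FILTERS = 256  # channels per convolution, as reported for AlphaZero
KERNEL = 3     # 3x3 convolutions
BLOCKS = 20    # residual blocks (approximate)

params_per_conv = KERNEL * KERNEL * FILTERS * FILTERS
params_per_block = 2 * params_per_conv  # two convolutions per residual block
total = BLOCKS * params_per_block

print(f"~{total / 1e6:.0f}M parameters, ~{total * 4 / 1e6:.0f} MB as float32")
# The count is fixed by the architecture: training for longer changes the
# weight values, not how many weights there are.
```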
 
Yeah, good luck with poker. I doubt there will be an AI capable of that for at least another decade.
 
This is it. The day it happened. The day they gave every machine the ability to learn to KILL US ALL!
 
What about 3D chess?

 
"crack any game that provides all the information that's relevant to decision-making,"
AKA knowing things that are not displayed to the human player, like the positions of all bases and mines and how much of everything is in them prior to the start.
I call this cheating.
 
Perfect information games are nothing like StarCraft. Blizzard opened their API more than a year ago, and so far no one has shown an AI player that works. I'll give it a few more years.

"crack any game that provides all the information that's relevant to decision-making,"
AKA knowing things that are not displayed to the human player, like the positions of all bases and mines and how much of everything is in them prior to the start.
I call this cheating.

How is having a look at the chess board cheating?
 
Perfect information games are nothing like StarCraft. Blizzard opened their API more than a year ago, and so far no one has shown an AI player that works. I'll give it a few more years.



How is having a look at the chess board cheating?
It's the StarCraft 2 case: the AI seeing the whole map and all the information on it while fog of war is still active for the human player.
 