Unity Partners With Google for Game Playing AI Challenge

AlphaAtlas

Google has pushed the field of game-playing AI forward by leaps and bounds recently. Last week, their DeepMind AI put up a remarkable fight against two fast-clicking StarCraft 2 pros, beating them 10-1. And today, in what appears to be a partnership with Google, Unity announced the "Obstacle Tower Challenge." On February 11, 2019, game-playing AI developers will start competing for a $25,000 first-place prize, as well as $75,000 in additional prizes. Apparently, Unity thinks developer-trained neural networks that can run on client hardware are an integral part of future games, and they've been heavily promoting the tech this past year. I, for one, can't wait to see better AI in games, and a clip from a recent developer conference highlights just how close it may be.

Unity devs got about 50 AI "agents" running on an iPhone at a reasonable framerate, which you can see in the video here.

The Obstacle Tower Challenge combines platforming-style gameplay with puzzles and planning problems, all in a tower with an endless number of floors for agents to learn to solve. Critically, the floors become progressively more difficult as the agent advances.
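
In practice, entries will be scored on how many of those progressively harder floors an agent can clear across differently seeded towers. Here's a rough, generic sketch of that kind of evaluation loop; the environment factory and the "one full point per cleared floor" convention are assumptions of mine, not the challenge's actual starter code.

[CODE]
# Generic evaluation loop for a floor-based tower environment (Python).
# `make_tower_env`, its seed argument, and the reward convention are
# hypothetical stand-ins, not the challenge's real starter kit.
def evaluate(agent, make_tower_env, seeds=(1, 2, 3, 4, 5)):
    floors_reached = []
    for seed in seeds:                      # each seed generates a different tower
        env = make_tower_env(seed=seed)
        obs, done, floor = env.reset(), False, 0
        while not done:
            obs, reward, done, info = env.step(agent.act(obs))
            if reward >= 1.0:               # assume a full point marks a cleared floor
                floor += 1
        floors_reached.append(floor)
        env.close()
    return sum(floors_reached) / len(floors_reached)   # average highest floor reached
[/CODE]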
 
What's the endgame here? Will people enjoy watching AI vs AI? I expect the giant if-then to eventually beat real players. BUT CAN IT PAINT A BEAUTIFUL PAINTING OR COMPOSE A SYMPHONY
 
More realistic NPC behavior, less hard-coded tactics, AI that tailors the gameplay to you, better interaction, faster decisions, etc.
 
What's the endgame here? Will people enjoy watching AI vs AI? I expect the giant if-then to eventually beat real players. BUT CAN IT PAINT A BEAUTIFUL PAINTING OR COMPOSE A SYMPHONY

A better AI means better play against humans too, and the whole point of the competition is to further develop the toolset.

If you look at high-level player-versus-AI play, in anything from SC2 to Civ to FPS games and so on, the current strategy is almost always "find a bug in the AI and exploit it." If developers get to the point where they can train an AI as players play, that could close those kinds of loopholes and make AIs more fun to play against (or with, if we're talking about in-game allies).
 
How the hell can you call it machine learning "from scratch"? That makes no sense.

From everything I've read, the whole pitch is that ML uses data and is only as good as the data provided. I was just reading about what machine learning is a few hours ago... super misleading name, just a phrase to toss around to get investment for algorithms, from what I can tell. Every single example was used to show something similar to analytics.

Saying it can learn to play a game with zero data provided sure sounds like AI and not ML.
 
Not sure who you were talking to about "machine learning," but this is definitely a mix of both. The agent is run through multiple seeded generations of the tower to learn, then tested against five unseen ones. Vision, understanding, planning, movement, reward system, goals, the trade-off of the direct route vs. timing... etc.

https://storage.googleapis.com/obst...6f0214c49a65313d157c09583&elqaid=2003&elqat=2

The Unity toolkit is pretty snazzy, though.

The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source Unity plugin that enables games and simulations to serve as environments for training intelligent agents. Agents can be trained using reinforcement learning, imitation learning, neuroevolution, or other machine learning methods through a simple-to-use Python API. We also provide implementations (based on TensorFlow) of state-of-the-art algorithms to enable game developers and hobbyists to easily train intelligent agents for 2D, 3D and VR/AR games. These trained agents can be used for multiple purposes, including controlling NPC behavior (in a variety of settings such as multi-agent and adversarial), automated testing of game builds and evaluating different game design decisions pre-release.
https://github.com/Unity-Technologies/ml-agents
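
If you're curious what that "simple-to-use Python API" looks like, here's a rough sketch of stepping an exported Unity build from Python. The import path, BrainInfo fields, and UnityEnvironment signature are what the ~0.x toolkit docs describe, but treat the exact names as assumptions since they shift between releases.

[CODE]
# Rough sketch of driving a Unity build through the ML-Agents Python API.
# Names follow the ~0.x docs and may differ in newer releases.
import numpy as np
from mlagents.envs import UnityEnvironment

env = UnityEnvironment(file_name="MyUnityBuild")   # path to the exported game build
brain_name = env.brain_names[0]                    # the "brain" controlling the agents
brain = env.brains[brain_name]

env_info = env.reset(train_mode=True)[brain_name]
for step in range(1000):
    n_agents = len(env_info.agents)
    # random continuous actions just to show the loop; a real trainer would
    # sample actions from its policy here
    actions = np.random.randn(n_agents, brain.vector_action_space_size[0])
    env_info = env.step(actions)[brain_name]
    rewards = env_info.rewards             # one reward per agent
    if all(env_info.local_done):           # episode finished for every agent
        env_info = env.reset(train_mode=True)[brain_name]
env.close()
[/CODE]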
 
Not sure who you were talking to about "machine learning," but this is definitely a mix of both. The agent is run through multiple seeded generations of the tower to learn, then tested against five unseen ones. Vision, understanding, planning, movement, reward system, goals, the trade-off of the direct route vs. timing... etc.

https://storage.googleapis.com/obst...6f0214c49a65313d157c09583&elqaid=2003&elqat=2

The Unity toolkit is pretty snazzy, though.


https://github.com/Unity-Technologies/ml-agents

"Essentially it learned to walk from scratch with ML agents"

and "It learned to fetch the stick and bring it back to the user"
How would ML learn that with zero data/user input? The way ML is described, it uses data to learn. So you provide zero data to it and it starts learning to play a game?

That sounds like AI programmed to get the user to provide input.

If not, they should just drop this code off in a fully automated facility, since it will learn to do everything without any AI or input, and voila, Skynet :D
 
"Essentially it learned to walk from scratch with ML agents"

and "It learned to fetch the stick and bring it back to the user"
How would ML learn that with zero data/user input? The way ML is described, it uses data to learn. So you provide zero data to it and it starts learning to play a game?

That sounds like AI programmed to get the user to provide input.

If not, they should just drop this code off in a fully automated facility, since it will learn to do everything without any AI or input, and voila, Skynet :D

I think you're misconstruing what's being said. The system is using machine learning to figure out how to do things. It does receive data, but it starts with no pre-loaded data (thus "from scratch"). It then gets fed data and starts learning from it. As it gets more data, it continues to learn and adjust. What the machine is learning is how to build and adjust algorithms to accomplish a task, or to do the task more efficiently.
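
To make the "no pre-loaded data" point concrete, here's a tiny, self-contained sketch (a toy corridor I made up, nothing to do with Unity's or DeepMind's code): the Q-table starts as all zeros, and every number that ends up in it comes from transitions the agent generated by acting.

[CODE]
# Minimal tabular Q-learning on a made-up 1-D corridor: the agent starts
# knowing nothing (a Q-table of zeros) and improves only from the
# (state, action, reward) data its own actions generate.
import random

N_STATES, GOAL = 6, 5          # states 0..5, reward only at state 5
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # actions: 0 = left, 1 = right

for episode in range(500):
    state = 0
    while state != GOAL:
        # explore sometimes, otherwise act on what has been learned so far
        action = random.randint(0, 1) if random.random() < EPSILON \
                 else max((0, 1), key=lambda a: Q[state][a])
        next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # the "data" is this transition, produced by the agent itself
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
        state = next_state

print("Learned preference for 'right' in each state:",
      [round(Q[s][1] - Q[s][0], 2) for s in range(N_STATES)])
[/CODE]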

For instance, in the SC2 case, they had a ton of AI agents playing matches against each other and learning strategy from those matches. They then pitted it against humans and it learned from human strategies. Then they had the AIs compete against each other to continue learning and developing. Rinse, repeat.

What they showed last week was the result of all that work: they could load the AI onto a regular system, limit it to constraints similar to a human player's, and it still won 10-1.
 
You can adblock Google, but you can't block their aimbots and heat-sensing robots from killing you.
 
It would be interesting to have an AI which can monitor my emotions and make a game just challenging enough to be exciting without being so challenging that it becomes frustrating.

I remember playing Prince of Persia: The Sands of Time, and being surprised that the difficulty level was just perfect for me. I have yet to repeat that experience.
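
You don't even need emotion sensing to start down that road; a hypothetical little controller that nudges difficulty toward a target success rate already captures the basic idea (all the names and constants below are mine, not from any shipping engine).

[CODE]
# Hypothetical dynamic-difficulty tuner: keep the player's recent success
# rate near a target "fun zone". Purely illustrative, not engine code.
from collections import deque

class DifficultyTuner:
    def __init__(self, target_success=0.7, window=20, step=0.05):
        self.target = target_success          # fraction of recent attempts won
        self.recent = deque(maxlen=window)    # 1 = player succeeded, 0 = failed
        self.step = step
        self.difficulty = 0.5                 # 0 = trivial, 1 = brutal

    def record_attempt(self, succeeded):
        self.recent.append(1 if succeeded else 0)
        if len(self.recent) < self.recent.maxlen:
            return self.difficulty            # not enough data to adjust yet
        success_rate = sum(self.recent) / len(self.recent)
        if success_rate > self.target:         # winning too often -> turn it up
            self.difficulty = min(1.0, self.difficulty + self.step)
        elif success_rate < self.target - 0.1: # losing too often -> ease off
            self.difficulty = max(0.0, self.difficulty - self.step)
        return self.difficulty

tuner = DifficultyTuner()
for outcome in [True] * 25 + [False] * 10:
    level = tuner.record_attempt(outcome)
print("difficulty after a hot streak and then some losses:", round(level, 2))
[/CODE]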
 
One of the big companies did patent a biometric chair of sorts... though I would wager that MS Cognitive Services could do a decent job scanning your face for emotion.

Not quite ready for The Mind Game, but getting there.
 
Maybe the endgame is to move AI from the CPU to the GPU?


The biggest issue with that is that it would obviously kill frame rates in any GPU-intensive game.

As I have stated in other threads, I would like to see AI as a cloud service. That could be neat.
 
katanaD - What would you be expecting as the result of running a game's AI in the cloud?


I'd like to see AI get enough resources that it doesn't have to "cheat" like it does now.

Games like GalCiv3, with AI routines that run while the player takes their time on their move, are one thing, but we simply don't have the CPU resources to run in-depth AI checks on how to proceed.

Cloud computing would actually be great for that, since you would then have massive clusters of resources that could "play" the game much more deeply before choosing a course of action. It could be used in turn-based games, and even real-time games, as seen in some examples.

Even on the kick-ass system I have at home, the game AI can only play through so much before people start to get frustrated with turn wait times. Moving the AI to the cloud could help alleviate that.

IMO.
 
I'd like to see AI get enough resources that it doesn't have to "cheat" like it does now.

Games like GalCiv3, with AI routines that run while the player takes their time on their move, are one thing, but we simply don't have the CPU resources to run in-depth AI checks on how to proceed.

Cloud computing would actually be great for that, since you would then have massive clusters of resources that could "play" the game much more deeply before choosing a course of action. It could be used in turn-based games, and even real-time games, as seen in some examples.

Even on the kick-ass system I have at home, the game AI can only play through so much before people start to get frustrated with turn wait times. Moving the AI to the cloud could help alleviate that.

IMO.

I think that is part of the benefit of ML. DeepMind can use ML to learn and improve an AI, which can then be run on normal computers. That is what they did for the SC2 competition: it wasn't human players vs. a massive supercomputer, it was human players vs. an AI running on a desktop computer. There's no real need to have the human player play in the cloud against some massive AI; just use the cloud resources for ML and develop a better AI that can run on normal systems.
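
That split is easy to picture: the heavy learning happens offline on whatever cluster you like, and the shipped game just loads the resulting weights and does a cheap forward pass per decision. Here's a deliberately simple NumPy sketch of the idea; the file name, shapes, and the random stand-in for training are all illustrative.

[CODE]
# Sketch of the "train big elsewhere, run small locally" split.
# The offline/cloud side saves trained weights; the game client only loads
# them and runs a cheap forward pass per decision. Everything is a stand-in.
import numpy as np

# --- offline / cloud side: a training run ends by writing a weight file ---
def save_trained_policy(path="policy_weights.npz", obs_dim=32, n_actions=8):
    w1 = np.random.randn(obs_dim, 64) * 0.1     # placeholder for real trained weights
    w2 = np.random.randn(64, n_actions) * 0.1
    np.savez(path, w1=w1, w2=w2)

# --- game client side: cheap inference on ordinary hardware ---
def load_policy(path="policy_weights.npz"):
    weights = np.load(path)
    def act(observation):
        hidden = np.maximum(0, observation @ weights["w1"])   # ReLU layer
        return int(np.argmax(hidden @ weights["w2"]))         # best action id
    return act

save_trained_policy()
policy = load_policy()
print("chosen action:", policy(np.random.randn(32)))
[/CODE]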
 