Google’s DeepMind AI Beat ‘StarCraft’ Pros At Their Own Game

As advances in automation and artificial intelligence continue to roll out, some Americans have become fearful of having their jobs replaced by machines and robots. This is not out of the realm of possibility, considering the recent media attention on self-driving vehicles.

But it seems that professional gamers may soon be targeted by the rise of advanced computing if a recent exhibition match is anything to go by.

As reported by The Verge, a new system, developed as part of Google’s DeepMind AI, has beaten a team of professionals at their own game. The new tech, dubbed AlphaStar, competed in 11 rounds of StarCraft II against some of the world’s best players. In a stunning development, the AI managed to beat the human players 10 times in a row. In the eleventh — and final — match, professional gamer Grzegorz “MaNa” Komincz was able to clinch a victory for the human team.

This is not the first time that artificial intelligence has managed to beat humans in recreational games. As previously reported by IEEE Spectrum, AlphaZero — another AI system developed by Google-owned DeepMind — has mastered chess and Go at a superhuman level, defeating the strongest existing players and programs in both games and marking a significant leap forward for the development of artificial intelligence.

As impressive as previous attempts have been, crafting a system to take on the world’s best professional gamers presents an entirely new challenge. Unlike board games such as chess and Go, video games such as StarCraft II are incredibly complex, as moves must be executed in real time, with virtually no downtime to step back and assess the battlefield. In a blog post on DeepMind’s website, the development team explains how the system is trained.

“AlphaStar’s behaviour is generated by a deep neural network that receives input data from the raw game interface (a list of units and their properties), and outputs a sequence of instructions that constitute an action within the game. More specifically, the neural network architecture applies a transformer torso to the units, combined with a deep LSTM core, an auto-regressive policy head with a pointer network, and a centralised value baseline. We believe that this advanced model will help with many other challenges in machine learning research that involve long-term sequence modelling and large output spaces such as translation, language modelling and visual representations,” the blog post reads.

It’s worth noting that DeepMind has limited the AlphaStar system, preventing it from performing more actions per minute — essentially, clicks — than a human player could.
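One straightforward way to enforce such a cap is a sliding-window rate limiter: an action is allowed only while the number of actions issued in the last 60 seconds stays under a limit. The class and numbers below are a hypothetical illustration of the idea, not DeepMind’s actual mechanism.

```python
# Hypothetical sketch of an actions-per-minute (APM) cap using a
# sliding 60-second window. Illustrative only.

from collections import deque

class APMLimiter:
    def __init__(self, max_apm):
        self.max_apm = max_apm
        self.times = deque()  # timestamps (seconds) of recent actions

    def try_act(self, now):
        # Forget actions that fell outside the 60-second window.
        while self.times and now - self.times[0] >= 60:
            self.times.popleft()
        if len(self.times) < self.max_apm:
            self.times.append(now)
            return True   # action allowed
        return False      # over the cap: the agent must wait

limiter = APMLimiter(max_apm=3)
allowed = [limiter.try_act(t) for t in [0, 1, 2, 3, 61]]
print(allowed)  # [True, True, True, False, True]
```

The fourth action is rejected because three actions already occurred within the preceding minute; by second 61 the earliest ones have expired and the agent may act again.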

Aside from an overview video (embedded above), which explains the basics of AlphaStar, DeepMind has also posted an archived live stream on YouTube showing AlphaStar facing off against human players.