Quick answer

AI Summary: Presents AlphaStar, an AI system that reaches Grandmaster level in the complex, partially observable real-time strategy game StarCraft II, trained with multi-agent reinforcement learning and league-based self-play.

Claim

Grandmaster level in StarCraft II using multi-agent reinforcement learning

Oriol Vinyals·
Igor Babuschkin·
Wojciech M. Czarnecki·
Michaël Mathieu·
Andrew Dudzik·
David Silver

ABSTRACT

The game of StarCraft II has emerged as a grand challenge for artificial intelligence research owing to its complex, multi-agent, and partially observable environment. Here we introduce AlphaStar, an artificial intelligence that plays StarCraft II at a Grandmaster level using a multi-agent reinforcement learning algorithm. AlphaStar uses a novel neural network architecture that incorporates a transformer torso and a deep LSTM core, trained via supervised learning from human replays followed by a multi-agent league of self-play. AlphaStar reached Grandmaster level for all three StarCraft races (Protoss, Terran, and Zerg), demonstrating that general-purpose learning algorithms can scale to highly complex, imperfect-information environments.
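The "multi-agent league" the abstract mentions trains a learner against a population of past and exploiter agents rather than only its latest self. One ingredient described in the paper is prioritized fictitious self-play (PFSP), which samples opponents the learner still struggles against more often. The sketch below illustrates that sampling idea in plain Python; the function names, the exact weighting exponent, and the fallback behavior are illustrative assumptions, not the paper's full matchmaking scheme.

```python
import random

def pfsp_weights(win_rates, p=2.0):
    """Weight each opponent by how hard it is for the learner:
    an opponent beaten with probability w gets weight (1 - w)**p,
    so near-unbeaten opponents are sampled most often.
    (Hypothetical simplification of the paper's PFSP weighting.)"""
    return [(1.0 - w) ** p for w in win_rates]

def sample_opponent(league, win_rates, rng=random):
    """Pick one opponent from the league, weighted by PFSP."""
    weights = pfsp_weights(win_rates)
    if sum(weights) == 0:  # learner beats everyone: fall back to uniform
        return rng.choice(league)
    return rng.choices(league, weights=weights, k=1)[0]

# Illustrative league snapshot (names are made up for this sketch).
league = ["main_agent_v1", "main_exploiter", "league_exploiter"]
win_vs = [0.9, 0.5, 0.2]  # learner's current win rate vs. each opponent
opponent = sample_opponent(league, win_vs)
```

With these numbers the learner mostly draws the `league_exploiter` it wins only 20% against, which is the point of the scheme: training signal concentrates on the weaknesses the league has found.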

Review Snapshot

Average rating: 4.4 / 5 (5 ratings)
5 star: 60% · 4 star: 20% · 3 star: 20% · 2 star: 0% · 1 star: 0%
100% of reviewers recommend this content.


Author Inquiries

Public questions about this content. Attendemia routes each question to the author; vote to surface the most important ones. A response is not guaranteed.