
Quick answer

AI Summary: Introduces the Asynchronous Advantage Actor-Critic (A3C) algorithm, which trains deep reinforcement learning agents with multiple parallel actor-learners. Running many exploring agents in parallel stabilizes training without relying on experience replay, and A3C surpasses the prior state of the art on Atari in half the training time on a single multi-core CPU.


Asynchronous Methods for Deep Reinforcement Learning

Volodymyr Mnih · Adrià Puigdomènech Badia · Mehdi Mirza · Alex Graves · Timothy P. Lillicrap · Tim Harley · David Silver · Koray Kavukcuoglu

ABSTRACT

We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training, allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic (A3C), surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that A3C succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
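To make the asynchronous training loop concrete, below is a toy sketch of the advantage actor-critic variant: several worker threads roll out a softmax policy in their own copy of a tiny corridor environment, compute n-step advantage estimates, and apply gradient updates directly to shared parameters without locks. Everything here is illustrative rather than the paper's setup: the environment, the tabular parameterization, the learning rate, and helper names such as `step` and `worker` are assumptions; the paper trains deep networks with shared RMSProp, and CPython threads serialize under the GIL, so this shows the structure of the method, not its speedup.

```python
import threading
import numpy as np

# Toy environment (an assumption for illustration): a 5-state corridor;
# action 1 moves right, action 0 moves left; reward 1 for reaching state 4.
N_STATES, N_ACTIONS, GOAL = 5, 2, 4

def step(s, a):
    s2 = max(0, min(GOAL, s + (1 if a == 1 else -1)))
    done = (s2 == GOAL)
    return s2, (1.0 if done else 0.0), done

# Shared (global) parameters, updated lock-free in the asynchronous style.
theta = np.zeros((N_STATES, N_ACTIONS))  # policy logits per state
V = np.zeros(N_STATES)                   # state-value estimates
GAMMA, LR, T_MAX, ROLLOUTS = 0.99, 0.1, 5, 3000  # illustrative settings

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def worker(seed):
    rng = np.random.default_rng(seed)
    s = 0
    for _ in range(ROLLOUTS):
        # Sync local copies of the shared parameters before each rollout.
        th, v = theta.copy(), V.copy()
        traj = []
        for _ in range(T_MAX):  # act for up to t_max steps
            p = softmax(th[s])
            a = rng.choice(N_ACTIONS, p=p)
            s2, r, done = step(s, a)
            traj.append((s, a, r))
            s = 0 if done else s2
            if done:
                break
        # n-step return, bootstrapped from the local value estimate
        # unless the episode terminated.
        R = 0.0 if done else v[s]
        for (si, ai, ri) in reversed(traj):
            R = ri + GAMMA * R
            adv = R - v[si]                  # advantage estimate
            p = softmax(th[si])
            g = -p
            g[ai] += 1.0                     # grad of log pi(a|s) w.r.t. logits
            # Apply updates to the SHARED parameters without locking.
            theta[si] += LR * adv * g
            V[si] += LR * adv

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print("learned P[right] per state:",
      np.round([softmax(theta[s])[1] for s in range(GOAL)], 2))
```

After training, the printed probabilities of moving right should approach 1, showing that lock-free updates from several decorrelated workers still converge on this toy task.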

Review Snapshot

4.2 ★ average from 5 ratings (5★ 40% · 4★ 40% · 3★ 20% · 2★ 0% · 1★ 0%).
100% of reviewers recommend this content.

