

Trust Region Policy Optimization

John Schulman · Sergey Levine · Pieter Abbeel · Michael Jordan · Philipp Moritz

ABSTRACT

We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits, as well as playing Atari games using images for input. TRPO's key innovation is a KL-divergence constraint that prevents policy updates from changing the network's behavior too drastically in a single step.
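As a rough illustration of the constraint described above, the sketch below computes TRPO's two central quantities for a toy discrete-action setting: the importance-weighted surrogate objective and the mean KL divergence between the old and candidate policies. This is a minimal, hypothetical example (the array shapes, state count, and `delta` value are assumptions for illustration), not the paper's actual implementation, which optimizes the surrogate with a conjugate-gradient step over neural network parameters.

```python
import numpy as np

def surrogate_and_kl(old_probs, new_probs, advantages, actions):
    """Surrogate objective and mean KL for a candidate policy update.

    old_probs, new_probs: (N, A) action distributions of the old and
        candidate policies at N sampled states (toy, hypothetical setup).
    advantages: (N,) advantage estimates for the sampled actions.
    actions: (N,) indices of the sampled actions.
    """
    idx = np.arange(len(actions))
    # Importance-weighted surrogate: mean of pi_new(a|s)/pi_old(a|s) * A(s,a)
    ratio = new_probs[idx, actions] / old_probs[idx, actions]
    surrogate = np.mean(ratio * advantages)
    # Mean KL(pi_old || pi_new) over the sampled states
    kl = np.mean(np.sum(old_probs * np.log(old_probs / new_probs), axis=1))
    return surrogate, kl

# Toy check: two states, two actions each.
old = np.array([[0.5, 0.5], [0.7, 0.3]])
new = np.array([[0.6, 0.4], [0.7, 0.3]])
adv = np.array([1.0, -0.5])
acts = np.array([0, 1])
surr, kl = surrogate_and_kl(old, new, adv, acts)

# A trust-region method accepts the candidate only if kl <= delta;
# otherwise the step is shrunk (delta here is an arbitrary example value).
delta = 0.01
accept = kl <= delta
```

The point of the KL term is that a step can look attractive under the surrogate (here `surr` is positive) while still moving the policy distribution far from the old one; the trust-region check rejects or shrinks exactly those steps.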
