Reinforcement learning (RL) practitioners have produced a number of excellent tutorials. Most, however, describe RL in terms of mathematical equations and abstract diagrams. We like to think of the field from a different perspective. RL itself is inspired by how animals learn, so why not translate the underlying RL machinery back into the natural phenomena it's designed to mimic? Humans learn best through stories.
This is a story about the Advantage Actor-Critic (A2C) model. Actor-Critic models are a popular form of Policy Gradient method, one of the foundational families of RL algorithms. If you understand the A2C, you understand deep RL.
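To make the idea concrete before the story begins, here is a minimal sketch of the A2C update for a single transition. This is an illustrative toy, not the authors' implementation: the critic estimates a state's value V(s), and the actor is nudged in proportion to the advantage A = r + γ·V(s′) − V(s), i.e. how much better the outcome was than the critic expected. The function names and signatures below are our own, invented for illustration.

```python
# Illustrative A2C update quantities for one transition (hypothetical helpers,
# not from the article's codebase).

def advantage(reward, value_s, value_next, gamma=0.99, done=False):
    """TD advantage: how much better the outcome was than the critic's estimate."""
    target = reward if done else reward + gamma * value_next
    return target - value_s

def a2c_losses(log_prob_action, adv, value_s, reward, value_next,
               gamma=0.99, done=False):
    """Per-transition actor and critic losses (scalars).

    actor_loss:  policy gradient term, log-probability weighted by the advantage
                 (negated because optimizers minimize).
    critic_loss: squared TD error, pushing V(s) toward the bootstrapped target.
    """
    target = reward if done else reward + gamma * value_next
    actor_loss = -log_prob_action * adv
    critic_loss = (target - value_s) ** 2
    return actor_loss, critic_loss

# Example: a terminal transition that paid off more than the critic predicted,
# so the advantage is positive and the taken action is reinforced.
adv = advantage(reward=1.0, value_s=0.5, value_next=0.0, done=True)
print(adv)  # 0.5
```

In a full implementation these per-transition scalars are averaged over a batch of rollouts and backpropagated through the actor and critic networks; the sketch only isolates the arithmetic at the heart of the update.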
After you’ve gained an intuition for the A2C, check out:
- Our simple code implementation of the A2C (for learning) or our industrial-strength PyTorch version based on OpenAI’s TensorFlow Baselines model
- Sutton & Barto’s Reinforcement Learning: An Introduction, David Silver’s canonical course, Yuxi Li’s overview and Denny Britz’s GitHub repo for a deep dive into RL
- fast.ai’s awesome course for intuitive and practical coverage of deep learning in general, implemented in PyTorch
- Arthur Juliani’s tutorials on RL, implemented in TensorFlow.
Illustrations by @embermarke
Intuitive RL: Intro to Advantage-Actor-Critic (A2C) was originally published in Hacker Noon on Medium.