
Exploring OpenAI Gym: A Platform for Reinforcement Learning Algorithms

Vipul Vaibhaw

Artificial Intelligence / Machine Learning

Introduction 

According to the OpenAI Gym GitHub repository, “OpenAI Gym is a toolkit for developing and comparing reinforcement learning algorithms. This is the gym open-source library, which gives you access to a standardized set of environments.”

OpenAI Gym follows an environment-agent arrangement: Gym gives you access to an “agent” that can perform specific actions in an “environment”. In return, the agent receives an observation and a reward as a consequence of performing that action in the environment.

OpenAI Gym Architecture

There are four values returned by the environment for every “step” taken by the agent (see the sketch after this list):

  1. Observation (object): an environment-specific object representing your observation of the environment, for example, the board state in a board game.
  2. Reward (float): the amount of reward/score achieved by the previous action. The scale varies between environments, but the goal is always to increase your total reward/score.
  3. Done (boolean): whether it’s time to reset the environment again, e.g., you lost your last life in the game.
  4. Info (dict): diagnostic information useful for debugging. However, official evaluations of your agent are not allowed to use this for learning.
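
To make this concrete, here is a minimal sketch of a single step, assuming the classic gym API (CartPole-v0, one of the classic control environments, is chosen purely for illustration):

import gym

# Create an environment; CartPole-v0 is one of the classic control tasks.
env = gym.make("CartPole-v0")
observation = env.reset()                             # initial observation of the environment

action = env.action_space.sample()                    # sample a random action
observation, reward, done, info = env.step(action)    # the four values described above

print(reward, done, info)                             # e.g. 1.0 False {}
env.close()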

The following environments are available in Gym:

  1. Classic control and toy text
  2. Algorithmic
  3. Atari
  4. 2D and 3D robots

A full list of environments can be found in the Gym documentation.

Cart-Pole Problem

Here we will try to solve a classic control problem from the Reinforcement Learning literature: the “Cart-Pole Problem”.

The Cart-pole problem is defined as follows:
“A pole is attached by an un-actuated joint to a cart, which moves along a frictionless track. The system is controlled by applying a force of +1 or -1 to the cart. The pendulum starts upright, and the goal is to prevent it from falling over. A reward of +1 is provided for every timestep that the pole remains upright. The episode ends when the pole is more than 15 degrees from vertical, or the cart moves more than 2.4 units from the center.”

The following code will quickly let you see what the problem looks like on your computer.

CODE: https://gist.github.com/velotiotech/09274149d911d927b0762aba2a10d189.js
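
In case the embedded gist does not render, the demo boils down to a random agent with rendering enabled. The following sketch reproduces that idea (an approximation, not necessarily the gist’s exact contents):

import gym

env = gym.make("CartPole-v0")

for episode in range(5):
    observation = env.reset()
    for t in range(200):
        env.render()                        # draw the cart and the pole
        action = env.action_space.sample()  # push the cart left or right at random
        observation, reward, done, info = env.step(action)
        if done:                            # pole fell over or cart moved too far
            print("Episode {} finished after {} timesteps".format(episode + 1, t + 1))
            break

env.close()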

This is what the output will look like:

(Rendering of the Cart-Pole environment)

Coding the Neural Network

CODE: https://gist.github.com/velotiotech/34666213c056eb940339dcfe381ec440.js
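
The gist trains a plain fully connected network with tflearn on data gathered from random games. If the embed does not load, a sketch along these lines captures the approach (the layer sizes, score threshold, and data-collection pipeline here are illustrative assumptions, not the gist’s verbatim contents):

import gym
import numpy as np
import tflearn
from tflearn.layers.core import input_data, fully_connected, dropout
from tflearn.layers.estimator import regression

env = gym.make("CartPole-v0")

def collect_training_data(games=1000, min_score=50):
    # Play random games and keep (observation, action) pairs from the good ones.
    data = []
    for _ in range(games):
        observation = env.reset()
        game_memory, score = [], 0
        for _ in range(200):
            action = env.action_space.sample()
            game_memory.append((observation, action))
            observation, reward, done, _ = env.step(action)
            score += reward
            if done:
                break
        if score >= min_score:
            for obs, action in game_memory:
                # One-hot encode the action: 0 -> push left, 1 -> push right.
                data.append((obs, [0, 1] if action == 1 else [1, 0]))
    return data

data = collect_training_data()
X = np.array([obs for obs, _ in data])
y = np.array([label for _, label in data])

# A plain fully connected network: 4 inputs (cart position/velocity,
# pole angle/velocity) -> two hidden layers -> 2 outputs (left/right).
net = input_data(shape=[None, 4])
net = fully_connected(net, 128, activation="relu")
net = dropout(net, 0.8)
net = fully_connected(net, 128, activation="relu")
net = fully_connected(net, 2, activation="softmax")
net = regression(net, optimizer="adam", learning_rate=1e-3,
                 loss="categorical_crossentropy")

model = tflearn.DNN(net)
model.fit(X, y, n_epoch=5, show_metric=True)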

This is what the result will look like:

(Screenshot of the training output)

Conclusion

Though we haven’t used a Reinforcement Learning model in this blog, a plain fully connected neural network gave us a satisfactory accuracy of 60%. We used tflearn, a higher-level API on top of TensorFlow, to speed up experimentation. We hope this blog gives you a head start in using OpenAI Gym.

We look forward to seeing exciting implementations using Gym and Reinforcement Learning. Happy coding!

Did you like the blog? If so, we're sure you'll also like to work with the people who write them - our best-in-class engineering team.

We're looking for talented developers who are passionate about emerging technologies. If that's you, get in touch with us.

Explore current openings
