Integrating with OpenAI Gym

OpenAI Gym is a recently released reinforcement learning toolkit that contains a wide range of environments and an online scoreboard. rllab now provides a wrapper for running rllab algorithms on OpenAI Gym environments, as well as for submitting the results to the scoreboard. The example script in examples/trpo_gym.py shows how to train an agent on the Pendulum-v0 environment.
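A script along these lines looks roughly like the following sketch (the hyperparameters shown, such as batch_size, hidden_sizes, and n_itr, are illustrative and may differ from the shipped example):

```python
from rllab.algos.trpo import TRPO
from rllab.baselines.linear_feature_baseline import LinearFeatureBaseline
from rllab.envs.gym_env import GymEnv
from rllab.envs.normalized_env import normalize
from rllab.policies.gaussian_mlp_policy import GaussianMLPPolicy

# GymEnv wraps an OpenAI Gym environment so rllab algorithms can use it;
# normalize() rescales actions to the environment's bounds.
env = normalize(GymEnv("Pendulum-v0"))

# A Gaussian MLP policy, suitable for Pendulum's continuous action space.
policy = GaussianMLPPolicy(env_spec=env.spec, hidden_sizes=(32, 32))

# A linear baseline to reduce the variance of the policy gradient estimate.
baseline = LinearFeatureBaseline(env_spec=env.spec)

algo = TRPO(
    env=env,
    policy=policy,
    baseline=baseline,
    batch_size=4000,      # illustrative values; tune for your setup
    max_path_length=env.horizon,
    n_itr=50,
    discount=0.99,
    step_size=0.01,
)
algo.train()
```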

Running the script will automatically record the episodic total reward and periodically record videos. When the script finishes running, it prints instructions for uploading the results to the online scoreboard, similar to the following text (you will first need to register for an account at https://gym.openai.com and set the environment variable OPENAI_GYM_API_KEY to your API key):

***************************

Training finished! You can upload results to OpenAI Gym by running the following command:

python scripts/submit_gym.py data/local/experiment/experiment_2016_04_27_18_32_31_0001/gym_log

***************************
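Concretely, the upload flow looks like the following (the gym_log path is the one printed at the end of your own run; the key below is a placeholder):

```shell
# Register at https://gym.openai.com and copy your API key, then:
export OPENAI_GYM_API_KEY=your_api_key_here

# Upload the recorded results, substituting the path printed by your run:
python scripts/submit_gym.py data/local/experiment/<your_experiment>/gym_log
```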

Comparison between rllab and OpenAI Gym

Both rllab and OpenAI Gym set out to be frameworks for developing and evaluating reinforcement learning algorithms.

OpenAI Gym has a wider range of supported environments, as well as an online scoreboard for sharing training results. It makes no assumptions about how the agent should be implemented.

rllab offers a set of built-in implementations of RL algorithms. These implementations are agnostic to how the environment or the policy is laid out, and rllab also provides fine-grained components for developing and experimenting with new reinforcement learning algorithms. rllab is fully compatible with OpenAI Gym: its reference implementations of a wide range of RL algorithms enable faster experimentation, and results can be uploaded seamlessly to Gym's scoreboard.