Top 5 open-source reinforcement learning frameworks

The trial-and-error, reward-and-penalty approach of reinforcement learning has come a long way from its early days. While the technique took time to mature and is not the simplest to apply, it is behind some of the most important advances in AI, from leading self-driving software to agents racking up wins in games like poker. Reinforcement learning systems such as AlphaGo and AlphaZero were able to excel at Go largely by playing the game against themselves. Despite the challenges involved, reinforcement learning is the method closest to human cognitive learning. Fortunately, beyond the competitive, cutting-edge gaming domain, a growing number of reinforcement learning frameworks are now publicly available.

DeepMind’s OpenSpiel

DeepMind is one of the most active contributors to open-source deep learning software. Back in 2019, Alphabet’s DeepMind introduced a game-oriented reinforcement learning framework called OpenSpiel. The framework bundles a collection of environments and algorithms that support research in general reinforcement learning, especially in the context of games. OpenSpiel provides tools for search and planning in games, as well as for analysing learning dynamics and other common evaluation metrics.

The framework supports more than 20 single- and multi-agent game types, including cooperative, zero-sum, one-shot and sequential games, as well as strictly turn-taking games, auction games, matrix games and simultaneous-move games. It covers both perfect information games (where players are fully informed of all events that have previously occurred when making a decision) and imperfect information games (where some information is hidden from players, for example because decisions are made simultaneously).

The developers kept simplicity and minimalism as the main ethos while building OpenSpiel, which is why it uses reference implementations rather than fully optimised, high-performance code. The framework also has minimal dependencies and keeps its install footprint small, reducing the chance of compatibility issues. As a result, it is easy to install, understand and extend.

Figure: Game implementations in OpenSpiel (source: research paper).

Games in OpenSpiel are written in C++, while some custom RL environments are available in Python.
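
To illustrate how OpenSpiel's Python bindings are typically used, here is a minimal sketch that plays random moves through one of the bundled games via the pyspiel API. The game name and method calls follow OpenSpiel's documented interface, but treat the snippet as an illustrative example rather than canonical usage.

```python
import random

import pyspiel  # OpenSpiel's Python bindings

# Load one of the bundled games by name (Kuhn poker is used here as an example).
game = pyspiel.load_game("kuhn_poker")
state = game.new_initial_state()

# Play the game to the end with uniformly random choices.
while not state.is_terminal():
    if state.is_chance_node():
        # Chance nodes (e.g. card deals) expose explicit outcome probabilities.
        outcomes, probs = zip(*state.chance_outcomes())
        state.apply_action(random.choices(outcomes, probs)[0])
    else:
        state.apply_action(random.choice(state.legal_actions()))

print("Returns for each player:", state.returns())
```

The same loop works for any game in the library, since every game exposes the same state interface regardless of whether it is turn-taking, simultaneous-move or chance-driven.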

OpenAI’s Gym

Figure: Sample of a Gym environment for a game (source: OpenAI Gym).

OpenAI created Gym as a toolkit for developing and comparing reinforcement learning algorithms. It is a Python library containing a large number of test environments that expose a shared interface, so users can write general-purpose algorithms and evaluate them across environments. Gym follows the standard agent-environment loop: the framework gives the user an environment in which an agent can perform actions, and after each action the agent receives an observation and a reward in response.
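
The snippet below is a minimal sketch of that loop using the classic Gym API. The environment name and the four-value step signature reflect older Gym releases; newer versions of Gym (and Gymnasium) return separate terminated and truncated flags, so adjust accordingly.

```python
import gym

# Create a classic-control environment from the Gym registry.
env = gym.make("CartPole-v1")

obs = env.reset()
total_reward = 0.0
done = False

while not done:
    # A real agent would pick actions from its policy; here we sample randomly.
    action = env.action_space.sample()
    # The environment responds with the next observation, a reward, a done flag
    # and diagnostic info (older Gym API; newer versions return five values).
    obs, reward, done, info = env.step(action)
    total_reward += reward

env.close()
print("Episode return:", total_reward)
```

Because every Gym environment shares this interface, the same agent code can be pointed at a different task simply by changing the string passed to gym.make.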

Gym offers several environment families: algorithmic tasks, Atari games, classic control, toy text, and 2D and 3D robotics. Gym was created to fill a gap in the standardisation of environments used across publications: a small tweak in the definition of a problem, such as its rewards or actions, can sharply change the difficulty of the task. There was also a need for better benchmarks, as the pre-existing open-source RL environment collections were not diverse enough.
