PyTorch implementation of MPO (papers cited below), written with the help of other repositories (also cited below).
Policy evaluation is done using Retrace.
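For intuition, below is a minimal sketch of the backward Retrace recursion over a single trajectory; the function name, tensor layout, and `gamma`/`lam` defaults are illustrative assumptions, not this repository's actual code:

```python
import torch

def retrace_targets(rewards, q_taken, values, rho, v_bootstrap,
                    gamma=0.99, lam=0.95):
    """Retrace targets for one trajectory of length T (illustrative sketch).

    rewards:     r_t,                                shape (T,)
    q_taken:     Q(s_t, a_t) from the target critic, shape (T,)
    values:      E_{a~pi}[Q(s_t, a)],                shape (T,)
    rho:         pi(a_t | s_t) / mu(a_t | s_t),      shape (T,)
    v_bootstrap: E_{a~pi}[Q(s_T, a)], or 0 if s_T is terminal
    """
    c = lam * torch.clamp(rho, max=1.0)  # truncated importance weights c_t
    targets = torch.empty_like(rewards)
    q_ret = v_bootstrap
    for t in reversed(range(rewards.shape[0])):
        q_ret = rewards[t] + gamma * q_ret
        targets[t] = q_ret
        # damp the off-policy correction before it propagates to step t-1
        q_ret = c[t] * (q_ret - q_taken[t]) + values[t]
    return targets
```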
Currently, only discrete Gym environments are supported.
Look at main.py for examples of using MPO.
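As a rough illustration of the intended workflow (the real entry points are in main.py; the import, class, and method names below are placeholders, not the repository's actual API):

```python
import gym

from mpo import MPO  # placeholder import; see main.py for the real entry point

# Placeholder sketch: build an agent for a discrete Gym task and train it.
env = gym.make("CartPole-v1")
agent = MPO(env)           # placeholder constructor
agent.train(episodes=500)  # placeholder training call
```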
The architectures for Actor and Critic can be changed in mpo_net.py.
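For reference, a minimal sketch of what discrete-action Actor and Critic modules typically look like (the layer sizes and class layout are illustrative assumptions, not necessarily what mpo_net.py defines):

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Categorical policy over a discrete action set."""
    def __init__(self, obs_dim, n_actions, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs):
        # probability of each discrete action
        return torch.softmax(self.net(obs), dim=-1)

class Critic(nn.Module):
    """Q(s, .) with one output per discrete action."""
    def __init__(self, obs_dim, n_actions, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs):
        return self.net(obs)
```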
- Maximum a Posteriori Policy Optimisation (original MPO algorithm): https://arxiv.org/abs/1806.06920
- Relative Entropy Regularized Policy Iteration (improved MPO algorithm): https://arxiv.org/abs/1812.02256
- daisatojp's mpo GitHub repository (MPO implementation used as reference): https://github.com/daisatojp/mpo
- OpenAI's ACER GitHub repository (replay buffer implementation used as reference): https://github.com/openai/baselines/tree/master/baselines/acer
- 5 parallel environments