This repository contains:
- Policy gradient methods (TRPO, PPO, A2C)
- Generative Adversarial Imitation Learning (GAIL)
- The code now works for PyTorch 0.4. For PyTorch 0.3, please check out the 0.3 branch.
- To run MuJoCo environments, first install mujoco-py and my modified version of gym, which supports MuJoCo 1.50.
- If you have a GPU, I recommend setting `OMP_NUM_THREADS` to 1, because PyTorch creates additional threads during computation that can hurt multiprocessing performance. The problem is most severe on Linux, where multiprocessing can end up even slower than a single thread:

  ```bash
  export OMP_NUM_THREADS=1
  ```
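  As an alternative to the shell export, a minimal sketch of the same setting from inside Python (not code from this repository):

  ```python
  import os
  os.environ["OMP_NUM_THREADS"] = "1"  # must be set before torch initializes its thread pool

  import torch
  torch.set_num_threads(1)             # also caps the intra-op CPU threads PyTorch uses
  ```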
- Supports CUDA (about 10x faster than the CPU implementation).
- Supports discrete and continuous action spaces.
- Supports multiprocessing so that the agent can collect samples in multiple environments simultaneously (about 8x faster than a single thread); a minimal sampler sketch appears after this list.
- Fast Fisher-vector product calculation. For this part, Ankur kindly wrote a blog post explaining the implementation details; a sketch of the standard double-backprop computation follows this list.
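The Fisher-vector product needed by TRPO's conjugate-gradient step is commonly computed as a Hessian-vector product of the policy's KL divergence with a detached copy of itself (the double-backprop / Pearlmutter trick). The sketch below illustrates that idea only; `policy` and its `mean_kl(states)` method are placeholders, not this repository's API.

```python
import torch

def fisher_vector_product(policy, states, v, damping=1e-2):
    """Compute (F + damping * I) v via double backprop through the mean KL."""
    kl = policy.mean_kl(states)  # scalar KL between the current policy and a detached copy (placeholder method)
    grads = torch.autograd.grad(kl, list(policy.parameters()), create_graph=True)
    flat_grad_kl = torch.cat([g.contiguous().view(-1) for g in grads])
    grad_v = (flat_grad_kl * v).sum()                      # (dKL/dtheta)^T v
    hvp = torch.autograd.grad(grad_v, list(policy.parameters()))
    flat_hvp = torch.cat([h.contiguous().view(-1) for h in hvp])
    return flat_hvp + damping * v                          # damping keeps conjugate gradient well-conditioned
```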
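Similarly, a minimal sketch of parallel sample collection with Python's multiprocessing (not the repository's actual sampler; the environment name, worker count, and the random placeholder policy are illustrative only):

```python
import multiprocessing as mp
import gym

def collect(env_name, seed, num_steps, queue):
    """Worker: step one environment copy and send its transitions back."""
    env = gym.make(env_name)
    env.seed(seed)
    obs = env.reset()
    samples = []
    for _ in range(num_steps):
        action = env.action_space.sample()          # stand-in for the learned policy
        next_obs, reward, done, _ = env.step(action)
        samples.append((obs, action, reward, done))
        obs = env.reset() if done else next_obs
    queue.put(samples)

if __name__ == "__main__":
    num_workers, steps_per_worker = 4, 500
    queue = mp.Queue()
    workers = [mp.Process(target=collect, args=("Hopper-v1", i, steps_per_worker, queue))
               for i in range(num_workers)]
    for w in workers:
        w.start()
    batch = [t for _ in workers for t in queue.get()]   # gather transitions from every worker
    for w in workers:
        w.join()
    print(len(batch), "transitions collected")
```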
- Trust Region Policy Optimization (TRPO) -> `examples/trpo_gym.py`
- Proximal Policy Optimization (PPO) -> `examples/ppo_gym.py`
- Synchronous A3C (A2C) -> `examples/a2c_gym.py`
- To train PPO on a gym environment, for example:

  ```bash
  python examples/ppo_gym.py --env-name Hopper-v1
  ```

- To save an expert trajectory from a trained policy:

  ```bash
  python gail/save_expert_traj.py --model-path assets/expert_traj/Hopper-v1_ppo.p
  ```

- To run GAIL against the saved expert trajectory:

  ```bash
  python gail/gail_gym.py --env-name Hopper-v1 --expert-traj-path assets/expert_traj/Hopper-v1_expert_traj.p
  ```