RoboSkate-RL
README.md

This repository is not the original repository. It exists for presentation purposes and contains only code and documentation authored by Finn Süberkrüb. Potentially copyright-protected content, datasets, and trained models have been removed.

G1 RoboSkate

Most subfolders contain README files with further explanations. The documentation folder also holds the final report, important terminal commands, and more information about the RoboSkate interface.

OpenAI Gym

This repository contains the scripts, trained models, datasets, and documentation needed to use RoboSkate as an OpenAI Gym environment and to train it easily with the tools of rl-baselines3-zoo. As of 07/15/2021, the OpenAI Gym environment still lives in a submodule hosted on GitHub.com.

Under trained_agents you can find trained agents (plus Tensorboard logs) as well as trained VAE models for extracting features from RoboSkate's image data, which can then be fed into training.

Under scripts are tools such as a remote control for the RoboSkate agent, a VAE training script, and a script to collect image data. There are also older scripts that have not yet been adapted to the current environments but will be interesting again once their topics come up, e.g. behavior cloning.

submodule rl-baselines3-zoo is a fork of the original and contains training parameters for RoboSkate.

submodule gym is a fork of the original and contains the OpenAI RoboSkate environments.

submodule stable-baselines3 is a fork of the original and contains a multi-input policy for the associated Gym environment.

Under expert_data you can find, for example, labeled images.

Quick start: OpenAI Gym

When cloning this repository, make sure the submodules are also available. The submodules are hosted on GitHub.com. In addition, the RoboSkate branch must be checked out in each submodule so that the relevant code parts are available.

e.g. /gym/gym/envs/RoboSkate/ must be available

https://github.com/Finn-Sueberkrueb/rl-baselines3-zoo.git

https://github.com/Finn-Sueberkrueb/stable-baselines3.git

https://github.com/Finn-Sueberkrueb/gym.git
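The submodule setup described above might look like the following, assuming the standard git submodule workflow (the repository URL placeholder and the assumption that every submodule has a RoboSkate branch are not confirmed by this README):

# Clone the repository together with its submodules
git clone --recurse-submodules <this-repository-url>
cd RoboSkate-RL

# Make sure all submodules are initialized and up to date
git submodule update --init --recursive

# Check out the RoboSkate branch in each submodule
git submodule foreach 'git checkout RoboSkate'

Afterwards, verify that e.g. gym/gym/envs/RoboSkate/ exists.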

gym and stable-baselines3 must be installed before use:

cd ./gym
pip install -e .

cd ../stable-baselines3
pip install -e .

Training

Training can be started directly from the rl-baselines3-zoo folder (the RoboSkate game should be running):

python train.py --algo sac --env RoboSkateNumerical-v0

algo: reinforcement learning algorithm. More details in rl-baselines3-zoo/README.md
env: gym environment. All environments and more details are in gym/gym/envs/RoboSkate

The trained agent is saved in rl-baselines3-zoo/logs/#algo#/ (replace "#algo#" with the RL algorithm you used).
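As an illustration, rl-baselines3-zoo typically appends the environment name and experiment id to that folder (the exact pattern logs/<algo>/<env>_<exp-id>/ is an assumption and may differ between zoo versions); the save path for the training run above can be sketched as:

ALGO=sac
ENV=RoboSkateNumerical-v0
EXP_ID=1
# Construct the log directory the zoo writes the trained agent to
echo "rl-baselines3-zoo/logs/${ALGO}/${ENV}_${EXP_ID}/"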

Running agents

A pretrained agent can be run directly from the rl-baselines3-zoo folder:

simple:
python enjoy.py --algo sac --env RoboSkateNumerical-v0

advanced:
python enjoy.py --algo sac --env RoboSkateNumerical-v0 --folder logs/ --load-best --env-kwargs headlessMode:False random_start_level:False startLevel:0 --exp-id 1

folder: More details in rl-baselines3-zoo/README.md
load-best: More details in rl-baselines3-zoo/README.md
env-kwargs: More details in rl-baselines3-zoo/README.md and in the used environment under gym/gym/envs/RoboSkate
exp-id: More details in rl-baselines3-zoo/README.md

Running best agents

The best models are saved in trained_models/. They can be run directly from the rl-baselines3-zoo folder:

python enjoy.py --algo sac --env RoboSkateNumerical-v0 --folder ../trained_models/RoboSkate --load-best --env-kwargs headlessMode:False random_start_level:False startLevel:0 --exp-id 1

List of available agents:
RoboSkateNumerical-v0 with indexes 1 to 4
RoboSkateSegmentation-v0 with indexes 1 to 3

More details in trained_models/README.md