PPO-Jumpstart

About

This repo provides the bare minimum to jumpstart a PPO (Proximal Policy Optimization) project. It includes a model setup with a simple feed-forward neural network for the actor and the critic, and a template for a custom environment using Gymnasium.
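As a rough illustration of what that structure typically looks like (this is not the repo's actual code; the class names, layer sizes, and observation/action spaces below are assumptions), an actor/critic MLP and a Gymnasium environment skeleton might be set up like this:

```python
import gymnasium as gym
import torch
import torch.nn as nn


class FeedForwardNN(nn.Module):
    """Simple MLP usable for both the actor and the critic."""

    def __init__(self, in_dim: int, out_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


class CustomEnv(gym.Env):
    """Skeleton for a Gymnasium custom environment."""

    def __init__(self):
        super().__init__()
        # Placeholder spaces; replace with your project's observations/actions.
        self.observation_space = gym.spaces.Box(low=-1.0, high=1.0, shape=(4,))
        self.action_space = gym.spaces.Discrete(2)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        obs = self.observation_space.sample()
        return obs, {}  # observation, info

    def step(self, action):
        # Replace with real transition dynamics and reward logic.
        obs = self.observation_space.sample()
        reward, terminated, truncated = 0.0, False, False
        return obs, reward, terminated, truncated, {}
```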

How to use

  • Clone the repo onto your computer (or download the latest release) and create a virtual environment using the create_env script for your operating system (.bat for Windows, .sh for Linux)
  • Activate the virtual environment and install the required libraries from requirements.txt
  • Adjust the environment and model for your project
  • Run main.py (see the example commands below)
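For reference, a typical session on Linux might look like the following. The repository URL is a placeholder, and the activation path assumes the create_env.sh script creates a local `venv` folder; adjust both to match your setup.

```bash
# Clone the repo and enter it (replace the URL with the actual repository URL)
git clone https://github.com/<user>/PPO-Jumpstart.git
cd PPO-Jumpstart

# Create the virtual environment with the provided script (.bat on Windows)
./create_env.sh

# Activate it and install the dependencies
source venv/bin/activate        # assumes the script creates a "venv" folder
pip install -r requirements.txt

# Edit the environment/model files as needed, then start training
python main.py
```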