MARLware is a comprehensive framework for Multi-Agent Reinforcement Learning (MARL) based on pymarl2, designed to integrate seamlessly with the Ray engine and Hydra for efficient, scalable distributed task management. This robust platform aims to facilitate implementation of, and experimentation with, a variety of MARL algorithms.
- Ray Engine Integration: Enhanced with Ray for distributed task management, ensuring scalability and efficiency in complex MARL scenarios.
- Hydra Configuration: Utilizes Hydra for dynamic and flexible configuration, streamlining the adaptation and tuning of MARL algorithms.
- Modular Design: Built with a focus on modularity, allowing for easy integration and experimentation with different MARL algorithms.
- Python Compatibility: Supports Python versions >= 3.6, making it accessible to a broad range of developers and researchers.
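To make the modularity claim concrete, a plugin-style registry of trainables might be sketched as follows. This is an illustrative sketch only: the names `register_trainable`, `TRAINABLE_REGISTRY`, `build_qmix`, and `build_qmix_large` are assumptions for the example, not MARLware's actual API.

```python
# Hypothetical sketch of a modular trainable registry; not MARLware's real API.
from typing import Callable, Dict

TRAINABLE_REGISTRY: Dict[str, Callable[..., dict]] = {}

def register_trainable(name: str):
    """Decorator that registers a trainable factory under a string key."""
    def decorator(factory: Callable[..., dict]):
        TRAINABLE_REGISTRY[name] = factory
        return factory
    return decorator

@register_trainable("qmix")
def build_qmix(**cfg):
    # In a real framework this would construct learners, runners, buffers, etc.
    return {"algo": "qmix", **cfg}

# A new algorithm variant plugs in without touching existing code:
@register_trainable("qmix_large")
def build_qmix_large(**cfg):
    return build_qmix(mixer_hidden_dim=256, **cfg)
```

With such a registry, selecting an algorithm by name (as the CLI examples below do) reduces to a dictionary lookup, which is one common way modular MARL frameworks stay extensible.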
Set up MARLware by following the installation instructions for Ray, Hydra, and other necessary dependencies:
Important
pysc2 must be installed before using this repository; please refer to the pysc2 installation guide.
source activate_env.sh
source install_docker.sh
Effortlessly conduct advanced experiments in MARL with MARLware.
python3 src/tune.py
Or specify a custom configuration:
python3 src/tune.py --config-name="<custom_config>.yaml"
Or override individual components directly from the command line:
python3 src/tune.py trainable=qmix_large
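Conceptually, a Hydra-style `key=value` override replaces one node of the loaded configuration before the run starts. The sketch below mimics that merge with plain dictionaries to show the idea; it is not Hydra's actual implementation, and the config keys used are hypothetical.

```python
# Conceptual sketch of Hydra-style dotted-key overrides using plain dicts.
# Hydra itself does much more (config groups, composition, interpolation).
from copy import deepcopy

def apply_overrides(config, overrides):
    """Apply `a.b.c=value` style overrides to a nested dict config."""
    cfg = deepcopy(config)
    for item in overrides:
        dotted_key, value = item.split("=", 1)
        *path, leaf = dotted_key.split(".")
        node = cfg
        for key in path:
            node = node.setdefault(key, {})
        node[leaf] = value
    return cfg

base = {"trainable": "qmix", "env": {"name": "sc2", "map": "3m"}}
merged = apply_overrides(base, ["trainable=qmix_large", "env.map=8m"])
```

This is why `trainable=qmix_large` on the command line is enough to swap the algorithm: the override is merged into the composed config, leaving every other setting untouched.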
MARLware is adept at handling sophisticated coordination tasks among multiple agents. Its flexibility and scalability make it suitable for strategic games, collaborative robotics, and complex multi-agent simulations.
Contribute to and collaborate on MARLware as it evolves with cutting-edge technologies in Multi-Agent Reinforcement Learning.
Inspired by existing frameworks in the field:
For referencing MARLware in academic and research work:
@misc{chojancki2023marl-engineering,
title={MARLware: Modular Multi-Agent Reinforcement Learning},
author={James Chojnacki},
year={2023},
publisher={GitHub},
howpublished={\url{https://github.com/marl-engineering/marlware}}
}