diff --git a/README.md b/README.md
index be19718..ed09540 100755
--- a/README.md
+++ b/README.md
@@ -1,6 +1,41 @@
-# SimBa: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning
+# SimBa: Simplicity Bias for Scaling Up Parameters in Deep RL
 
-This is a repository of an official implementation of Simba: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning.
+This repository provides the official implementation of
+
+SimBa: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning by
+
+Hojoon Lee,
+Dongyoon Hwang,
+Donghu Kim,
+Hyunseung Kim,
+Jun Jet Tai,
+Kaushik Subramanian,
+Peter R. Wurman,
+Jaegul Choo,
+Peter Stone,
+Takuma Seno.
+
+[[Website]](https://sonyresearch.github.io/simba) [[Paper]](https://arxiv.org/abs/2410.09754)
+
+## Overview
+
+### TL;DR
+
+Stop worrying about algorithms; just switch the network architecture to SimBa.
+
+### Method
+
+SimBa is a network architecture designed for RL that avoids overfitting by embedding a simplicity bias.
+
+Image description
+
+### Results
+
+When integrated with Soft Actor-Critic (SAC), SimBa matches the performance of state-of-the-art RL algorithms.
+
+Image description
 
 ## Getting started
@@ -105,8 +140,9 @@ If you find our work useful, please consider citing our paper as follows:
 
 ```
 @article{lee2024simba,
-  title={Simba: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning},
-  author={Hojoon Lee and Dongyoon Hwang and Donghu Kim and Hyunseung Kim and Jun Jet Tai and Kaushik Subramanian and Peter R.Wurman and Jaegul Choo and Peter Stone and Takuma Seno},
+  title={SimBa: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning},
+  author={Hojoon Lee and Dongyoon Hwang and Donghu Kim and Hyunseung Kim and Jun Jet Tai and Kaushik Subramanian and Peter R. Wurman and Jaegul Choo and Peter Stone and Takuma Seno},
+  journal={arXiv preprint arXiv:2410.09754},
   year={2024}
 }
 ```
diff --git a/deps/environment.yaml b/deps/environment.yaml
index 1dcc17b..6766b86 100644
--- a/deps/environment.yaml
+++ b/deps/environment.yaml
@@ -31,3 +31,5 @@ dependencies:
   - termcolor==2.4.0
   - tqdm==4.66.1
   - wandb==0.16.6
+  - moviepy==1.0.3
+  - imageio==2.33.1
\ No newline at end of file
diff --git a/deps/requirements.txt b/deps/requirements.txt
index 442a649..f27a2ff 100644
--- a/deps/requirements.txt
+++ b/deps/requirements.txt
@@ -20,3 +20,5 @@ tensorflow-probability==0.24.0
 termcolor==2.4.0
 tqdm==4.66.1
 wandb==0.16.6
+moviepy==1.0.3
+imageio==2.33.1
\ No newline at end of file
diff --git a/docs/index.html b/docs/index.html
index 5b0ce16..7060306 100644
--- a/docs/index.html
+++ b/docs/index.html
@@ -17,7 +17,7 @@

   SimBa
   Simplicity Bias for Scaling Parameters in Deep Reinforcement Learning
-  Under review
+  Preprint
@@ -49,7 +49,7 @@

@@ -313,7 +313,7 @@
   Paper
   Hojoon Lee*, Dongyoon Hwang*, Donghu Kim,
   Hyunseung Kim, Jun Jet Tai, Kaushik Subramanian, Peter R. Wurman,
   Jaegul Choo, Peter Stone, Takuma Seno
-  arXiv preprint
+  arXiv preprint
@@ -324,9 +324,6 @@

   Paper
-
-  View on arXiv
-
@@ -335,8 +332,14 @@

Citation

If you find our work useful, please consider citing the paper as follows:

-
-
+@article{lee2024simba,
+  title={SimBa: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning},
+  author={Hojoon Lee and Dongyoon Hwang and Donghu Kim and Hyunseung Kim and Jun Jet Tai and
+          Kaushik Subramanian and Peter R. Wurman and Jaegul Choo and Peter Stone and Takuma Seno},
+  journal={arXiv preprint arXiv:2410.09754},
+  year={2024}
+}
+
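
For reviewers who want a concrete picture of the architecture the README above describes, here is a minimal NumPy sketch of a SimBa-style forward pass: observations normalized with running statistics, a linear embedding, pre-layernorm residual MLP blocks, and a final layernorm, following the structure described in the SimBa paper. All names, layer sizes, and the 4x hidden expansion are illustrative assumptions, not code from this repository's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer_norm(x, eps=1e-5):
    # Normalize each row to zero mean, unit variance (no learned scale/shift here).
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

def linear(x, w, b):
    return x @ w + b

def init_params(obs_dim, dim=64, n_blocks=2, expand=4):
    # Hypothetical sizes: embedding width `dim`, `expand`x hidden expansion per block.
    def w(i, o):
        return rng.normal(0.0, 0.02, (i, o)), np.zeros(o)
    return {
        "embed": w(obs_dim, dim),
        "blocks": [{"fc1": w(dim, dim * expand), "fc2": w(dim * expand, dim)}
                   for _ in range(n_blocks)],
    }

def simba_forward(obs, params, obs_mean, obs_var):
    # RSNorm-style input step: standardize observations with running statistics.
    x = (obs - obs_mean) / np.sqrt(obs_var + 1e-5)
    x = linear(x, *params["embed"])
    for blk in params["blocks"]:
        # Pre-LN residual block: LayerNorm -> Linear -> ReLU -> Linear, then skip add.
        h = layer_norm(x)
        h = np.maximum(linear(h, *blk["fc1"]), 0.0)
        h = linear(h, *blk["fc2"])
        x = x + h
    return layer_norm(x)  # post-layernorm output feeding the actor/critic heads

obs = rng.normal(size=(5, 10))          # batch of 5 ten-dimensional observations
params = init_params(obs_dim=10)
out = simba_forward(obs, params, obs.mean(0), obs.var(0))
print(out.shape)  # (5, 64)
```

Because only the architecture changes, a sketch like this can drop into an existing SAC agent in place of a plain MLP, which is the point the README's TL;DR makes.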