From 178ede4fe6945c8ea962812773c5ebe606385b4c Mon Sep 17 00:00:00 2001
From: hojoonlee-sony
Date: Tue, 15 Oct 2024 14:08:24 +0900
Subject: [PATCH 1/5] fix links in project page

---
 README.md             | 36 ++++++++++++++++++++++++++++++++----
 deps/environment.yaml |  2 ++
 deps/requirements.txt |  2 ++
 docs/index.html       | 17 ++++++++++-------
 4 files changed, 46 insertions(+), 11 deletions(-)

diff --git a/README.md b/README.md
index be19718..51c5c0c 100755
--- a/README.md
+++ b/README.md
@@ -1,6 +1,33 @@
-# SimBa: Simplicity Bias for Scaling Up Parameters in Deep RL
-This is a repository of an official implementation of Simba: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning.
+This repository contains the official implementation of
+
+Simba: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning by
+
+Hojoon Lee,
+Dongyoon Hwang,
+Donghu Kim,
+Hyunseung Kim,
+Jun Jet Tai,
+Kaushik Subramanian,
+
+Peter R. Wurman,
+Jaegul Choo,
+Peter Stone,
+Takuma Seno.
+
+
+[[Website]](https://sonyresearch.github.io/simba) [[Paper]](https://arxiv.org/abs/2410.09754)
+
+## Overview
+
+SimBa is a network architecture designed for RL that avoids overfitting by embedding a simplicity bias.
+
+Image description
+
+When integrated with Soft Actor-Critic (SAC), SAC + SimBa matches the performance of state-of-the-art off-policy algorithms while changing only the network architecture.
+
+Image description

 ## Getting started

@@ -105,8 +132,9 @@
 If you find our work useful, please consider citing our paper as follows:
 ```
 @article{lee2024simba,
-  title={Simba: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning},
-  author={Hojoon Lee and Dongyoon Hwang and Donghu Kim and Hyunseung Kim and Jun Jet Tai and Kaushik Subramanian and Peter R.Wurman and Jaegul Choo and Peter Stone and Takuma Seno},
+  title={SimBa: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning},
+  author={Hojoon Lee and Dongyoon Hwang and Donghu Kim and Hyunseung Kim and Jun Jet Tai and Kaushik Subramanian and Peter R. Wurman and Jaegul Choo and Peter Stone and Takuma Seno},
+  journal={arXiv preprint arXiv:2410.09754},
   year={2024}
 }
 ```

diff --git a/deps/environment.yaml b/deps/environment.yaml
index 1dcc17b..6766b86 100644
--- a/deps/environment.yaml
+++ b/deps/environment.yaml
@@ -31,3 +31,5 @@ dependencies:
   - termcolor==2.4.0
   - tqdm==4.66.1
   - wandb==0.16.6
+  - moviepy==1.0.3
+  - imageio==2.33.1
\ No newline at end of file
diff --git a/deps/requirements.txt b/deps/requirements.txt
index 442a649..f27a2ff 100644
--- a/deps/requirements.txt
+++ b/deps/requirements.txt
@@ -20,3 +20,5 @@ tensorflow-probability==0.24.0
 termcolor==2.4.0
 tqdm==4.66.1
 wandb==0.16.6
+moviepy==1.0.3
+imageio==2.33.1
\ No newline at end of file
diff --git a/docs/index.html b/docs/index.html
index 5b0ce16..394e680 100644
--- a/docs/index.html
+++ b/docs/index.html
@@ -49,7 +49,7 @@

@@ -313,7 +313,7 @@

Paper

Hojoon Lee*, Dongyoon Hwang*, Donghu Kim,
Hyunseung Kim, Jun Jet Tai, Kaushik Subramanian, Peter R. Wurman,
Jaegul Choo, Peter Stone, Takuma Seno


- arXiv preprint

+ arXiv preprint

@@ -324,9 +324,6 @@

Paper

-
- View on arXiv -
@@ -335,8 +332,14 @@

Citation

If you find our work useful, please consider citing the paper as follows:

- -
+@article{lee2024simba,
+  title={SimBa: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning},
+  author={Hojoon Lee and Dongyoon Hwang and Donghu Kim and Hyunseung Kim and Jun Jet Tai and Kaushik Subramanian and Peter R. Wurman and Jaegul Choo and Peter Stone and Takuma Seno},
+  journal={arXiv preprint arXiv:2410.09754},
+  year={2024}
+}
+
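Note for reviewers: the overview text this patch adds to the README ("a network architecture that avoids overfitting by embedding a simplicity bias") can be made concrete with a small sketch. The following is a minimal NumPy illustration of the kind of block such an architecture stacks — running-statistics observation normalization followed by a pre-LayerNorm residual MLP block. All names, layer sizes, and initializations here are illustrative assumptions, not the repository's actual code.

```python
import numpy as np

def rsnorm(obs, running_mean, running_var, eps=1e-8):
    """Normalize raw observations with running statistics (illustrative)."""
    return (obs - running_mean) / np.sqrt(running_var + eps)

def layer_norm(x, eps=1e-5):
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def residual_block(x, w1, b1, w2, b2):
    """Pre-LayerNorm MLP block with an identity skip connection.

    The untouched skip path is what biases the network toward
    simple, near-linear solutions as parameters scale up.
    """
    h = layer_norm(x)
    h = np.maximum(h @ w1 + b1, 0.0)  # ReLU
    h = h @ w2 + b2
    return x + h

rng = np.random.default_rng(0)
dim, hidden = 64, 256
obs = rng.normal(size=(8, dim))                       # batch of raw observations
x = rsnorm(obs, obs.mean(axis=0), obs.var(axis=0))    # normalized inputs
w1 = rng.normal(scale=0.01, size=(dim, hidden)); b1 = np.zeros(hidden)
w2 = rng.normal(scale=0.01, size=(hidden, dim)); b2 = np.zeros(dim)
out = residual_block(x, w1, b1, w2, b2)
print(out.shape)  # → (8, 64)
```

With near-zero weights the block stays close to the identity map, which is the "simple function first" behavior the README overview describes.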