
Hojoonlee/update project page #2

Merged · 5 commits · Oct 15, 2024
44 changes: 40 additions & 4 deletions README.md
@@ -1,6 +1,41 @@
# SimBa: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning
# SimBa: Simplicity Bias for Scaling Up Parameters in Deep RL

This is a repository of an official implementation of Simba: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning.
This repository is the official implementation of

<i>SimBa: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning</i> by

<a href="https://joonleesky.github.io">Hojoon Lee</a>,
<a href="https://godnpeter.github.io">Dongyoon Hwang</a>,
<a href="https://i-am-proto.github.io">Donghu Kim</a>,
<a href="https://mynsng.github.io">Hyunseung Kim</a>,
<a href="https://taijunjet.com">Jun Jet Tai</a>,
<a href="https://kausubbu.github.io">Kaushik Subramanian</a>,

<a href="https://www.pwurman.org">Peter R. Wurman</a>,
<a href="https://sites.google.com/site/jaegulchoo">Jaegul Choo</a>,
<a href="https://www.cs.utexas.edu/~pstone/">Peter Stone</a>,
<a href="https://takuseno.github.io/">Takuma Seno</a>.


[[Website]](https://sonyresearch.github.io/simba) [[Paper]](https://arxiv.org/abs/2410.09754)

## Overview

### TL;DR

Stop worrying about algorithms; just change the network architecture to SimBa.

### Method

SimBa is a network architecture designed for RL that avoids overfitting by embedding a simplicity bias.

<img src="docs/images/simba_architecture.png" alt="SimBa architecture" width="800">
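As a rough illustration of the architectural pattern (not the repository's actual code, which is JAX-based), the core idea is a residual feedforward block with pre-layer normalization. The NumPy sketch below uses illustrative layer sizes and initialization scales that are assumptions, not values from the paper:

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # Normalize each feature vector to zero mean and unit variance.
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def simba_block(x, W1, b1, W2, b2):
    # Pre-layernorm residual MLP block: y = x + MLP(LN(x)).
    # The residual path keeps the block close to the identity map,
    # which is one way an architecture can encode a simplicity bias.
    h = layer_norm(x)
    h = np.maximum(h @ W1 + b1, 0.0)  # ReLU
    return x + (h @ W2 + b2)

rng = np.random.default_rng(0)
d, hidden = 64, 256  # illustrative sizes
x = rng.standard_normal((1, d))
W1 = rng.standard_normal((d, hidden)) * 0.02
b1 = np.zeros(hidden)
W2 = rng.standard_normal((hidden, d)) * 0.02
b2 = np.zeros(d)
y = simba_block(x, W1, b1, W2, b2)  # same shape as x
```

With the MLP weights set to zero, the block reduces exactly to the identity, which is the sense in which the residual design biases the network toward simple functions.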

### Results

When integrated with Soft Actor-Critic (SAC), SimBa matches the performance of state-of-the-art RL algorithms.

<img src="docs/images/overview.png" alt="Benchmark results overview" width="800">


## Getting Started
@@ -105,8 +140,9 @@ If you find our work useful, please consider citing our paper as follows:

```
@article{lee2024simba,
  title={Simba: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning},
  author={Hojoon Lee and Dongyoon Hwang and Donghu Kim and Hyunseung Kim and Jun Jet Tai and Kaushik Subramanian and Peter R.Wurman and Jaegul Choo and Peter Stone and Takuma Seno},
  title={SimBa: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning},
  author={Hojoon Lee and Dongyoon Hwang and Donghu Kim and Hyunseung Kim and Jun Jet Tai and Kaushik Subramanian and Peter R. Wurman and Jaegul Choo and Peter Stone and Takuma Seno},
  journal={arXiv preprint arXiv:2410.09754},
  year={2024}
}
```
2 changes: 2 additions & 0 deletions deps/environment.yaml
@@ -31,3 +31,5 @@ dependencies:
- termcolor==2.4.0
- tqdm==4.66.1
- wandb==0.16.6
- moviepy==1.0.3
- imageio==2.33.1
2 changes: 2 additions & 0 deletions deps/requirements.txt
@@ -20,3 +20,5 @@ tensorflow-probability==0.24.0
termcolor==2.4.0
tqdm==4.66.1
wandb==0.16.6
moviepy==1.0.3
imageio==2.33.1
19 changes: 11 additions & 8 deletions docs/index.html
@@ -17,7 +17,7 @@
</head>
<div class="header" id="top">
<h1><span class="bold simba">SimBa </span><br/>Simplicity Bias for Scaling Parameters in Deep Reinforcement Learning</h1>
<h3><a class="bold default-color">Under review</span><br/></h3>
<h3><span class="bold default-color">Preprint</span><br/></h3>
<table class="authors">
<tbody>
<tr>
@@ -49,7 +49,7 @@ <h4>
</tbody>
</table>
<div class="links">
<a href="https://arxiv.org" class="btn"><i class="fa">&#xf1c1;</i>&ensp;Paper</a><a href="https://github.com/joonleesky/scale_rl" class="btn"><i class="fa fa-github"></i>&ensp;Code</a>
<a href="http://arxiv.org/abs/2410.09754" class="btn"><i class="fa">&#xf1c1;</i>&ensp;Paper</a><a href="https://github.com/SonyResearch/simba" class="btn"><i class="fa fa-github"></i>&ensp;Code</a>
</div>
</div>
<div class="content">
@@ -313,7 +313,7 @@ <h2>Paper</h2>
<span class="italic">Hojoon Lee&ast;, Dongyoon Hwang&ast;, Donghu Kim,<br/>
Hyunseung Kim, Jun Jet Tai, Kaushik Subramanian, Peter R. Wurman,<br/>
Jaegul Choo, Peter Stone, Takuma Seno</span><br/><br/>
<a href="https://arxiv.org">arXiv preprint</a><br/><br/>
<a href="https://arxiv.org/abs/2410.09754">arXiv preprint</a><br/><br/>
<div class="page" style="background-image: url(thumbnails/0.png);"></div>
<div class="page" style="background-image: url(thumbnails/1.png);"></div>
<div class="page" style="background-image: url(thumbnails/2.png);"></div>
@@ -324,9 +324,6 @@ <h2>Paper</h2>
<div class="page" style="background-image: url(thumbnails/7.png);"></div>
<div class="page" style="background-image: url(thumbnails/8.png);"></div>
<div class="page" style="background-image: url(thumbnails/9.png);"></div>
<div style="margin: auto; margin-top: 32px;">
<a href="https://arxiv.org/abs/2310.16828">View on arXiv</a>
</div>
</div>
<div class="hr"></div>
<div style="padding-bottom: 64px; text-align: center;">
@@ -335,8 +332,14 @@ <h2>Citation</h2>
If you find our work useful, please consider citing the paper as follows:
</p>
<div id="bibtex-text" class="bibtexsection" onClick="window.getSelection().selectAllChildren(document.getElementById('bibtex-text'));">

</div>
@article{lee2024simba,
title={SimBa: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning},
author={Hojoon Lee and Dongyoon Hwang and Donghu Kim and Hyunseung Kim and Jun Jet Tai and
Kaushik Subramanian and Peter R. Wurman and Jaegul Choo and Peter Stone and Takuma Seno},
journal={arXiv preprint arXiv:2410.09754},
year={2024}
}
</div>
</div>
</div>
<footer>