Prep for release
warner-benjamin committed Oct 16, 2023
1 parent 23c07b0 commit 826a155
Showing 2 changed files with 71 additions and 113 deletions.
116 changes: 39 additions & 77 deletions README.md
@@ -1,20 +1,11 @@
fastxtend
================
# fastxtend

<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! -->

<div>

### Train fastai models faster (and other useful tools)

</div>

<div>

![fastxtend accelerates fastai](nbs/images/imagenette_benchmark.png)

</div>

Train fastai models faster with fastxtend’s [fused
optimizers](optimizer.fused.html), [Progressive
Resizing](callback.progresize.html) callback, and integrated [FFCV
@@ -39,7 +30,10 @@ DataLoader](ffcv.tutorial.html).
**General Features**

- Fused implementations of modern optimizers, such as
[Adan](optimizer.adan.html) and [Lion](optimizer.lion.html).
[Adan](optimizer.adan.html), [Lion](optimizer.lion.html), &
[StableAdam](optimizer.stableadam.html).
- Hugging Face [Transformers compatibility](text.huggingface.html) with
fastai
- Flexible [metrics](metrics.html) which can log on train, valid, or
both. Backwards compatible with fastai metrics.
- Easily use [multiple losses](multiloss.html) and log each individual
@@ -66,48 +60,13 @@ DataLoader](ffcv.tutorial.html).
- A flexible implementation of fastai’s
[`XResNet`](https://fastxtend.benjaminwarner.dev/vision.models.xresnet.html#xresnet).

**Audio**

- [`TensorAudio`](https://fastxtend.benjaminwarner.dev/audio.01_core.html#tensoraudio),
[`TensorSpec`](https://fastxtend.benjaminwarner.dev/audio.01_core.html#tensorspec),
[`TensorMelSpec`](https://fastxtend.benjaminwarner.dev/audio.01_core.html#tensormelspec)
objects which maintain metadata and support plotting themselves using
librosa.
- A selection of performant [audio augmentations](audio.augment.html)
inspired by fastaudio and torch-audiomentations.
- Uses TorchAudio to quickly convert
[`TensorAudio`](https://fastxtend.benjaminwarner.dev/audio.01_core.html#tensoraudio)
waveforms into
[`TensorSpec`](https://fastxtend.benjaminwarner.dev/audio.01_core.html#tensorspec)
spectrograms or
[`TensorMelSpec`](https://fastxtend.benjaminwarner.dev/audio.01_core.html#tensormelspec)
mel spectrograms using the GPU.
- Out of the box support for converting one
[`TensorAudio`](https://fastxtend.benjaminwarner.dev/audio.01_core.html#tensoraudio)
to one or multiple
[`TensorSpec`](https://fastxtend.benjaminwarner.dev/audio.01_core.html#tensorspec)
or
[`TensorMelSpec`](https://fastxtend.benjaminwarner.dev/audio.01_core.html#tensormelspec)
objects from the Datablock api.
- Audio [MixUp and CutMix](audio.mixup.html) Callbacks.
- [`audio_learner`](https://fastxtend.benjaminwarner.dev/audio.04_learner.html#audio_learner)
which merges multiple
[`TensorSpec`](https://fastxtend.benjaminwarner.dev/audio.01_core.html#tensorspec)
or
[`TensorMelSpec`](https://fastxtend.benjaminwarner.dev/audio.01_core.html#tensormelspec)
objects before passing to the model.

Check out the documentation for additional splitters, callbacks,
schedulers, utilities, and more.

<div>

## Documentation

<https://fastxtend.benjaminwarner.dev>

</div>

## Install

fastxtend is available on PyPI:
@@ -116,49 +75,38 @@ fastxtend is available on PyPI:
pip install fastxtend
```

To install with dependencies for vision, FFCV, audio, or all tasks run
one of:
fastxtend can be installed with task-specific dependencies for `vision`,
`ffcv`, `text`, `audio`, or `all`:

``` bash
pip install fastxtend[vision]

pip install fastxtend[ffcv]

pip install fastxtend[audio]

pip install fastxtend[all]
pip install "fastxtend[all]"
```

Or to create an editable development install:

``` bash
git clone https://github.com/warner-benjamin/fastxtend.git
cd fastxtend
pip install -e ".[dev]"
```

To easily install prerequisites for all fastxtend features, use
To easily install most prerequisites for all fastxtend features, use
[Conda](https://docs.conda.io/en/latest) or
[Miniconda](https://docs.conda.io/en/latest/miniconda.html):

``` bash
conda create -n fastxtend python=3.10 "pytorch>=2.0.0" \
torchvision torchaudio pytorch-cuda=11.8 cuda fastai nbdev \
pkg-config libjpeg-turbo opencv tqdm terminaltables psutil \
numpy numba librosa=0.9.2 timm kornia rich typer wandb \
-c pytorch -c nvidia/label/cuda-11.8.0 -c fastai \
-c huggingface -c conda-forge
conda create -n fastxtend python=3.11 "pytorch>=2.1" torchvision torchaudio \
pytorch-cuda=12.1 fastai nbdev pkg-config libjpeg-turbo opencv tqdm psutil \
terminaltables numpy "numba>=0.57" librosa timm kornia rich typer wandb \
"transformers>=4.34" "tokenizers>=0.14" "datasets>=2.14" ipykernel ipywidgets \
"matplotlib<3.8" -c pytorch -c nvidia -c fastai -c huggingface -c conda-forge

conda activate fastxtend

pip install "fastxtend[all]"
```

replacing `pytorch-cuda=11.8` and `nvidia/label/cuda-11.8.0` with your
preferred [supported version of
Cuda](https://pytorch.org/get-started/locally). Then install fastxtend
using `pip`:
replacing `pytorch-cuda=12.1` with your preferred [supported version of
Cuda](https://pytorch.org/get-started/locally).

To create an editable development install:

``` bash
pip install fastxtend[all]
git clone https://github.com/warner-benjamin/fastxtend.git
cd fastxtend
pip install -e ".[dev]"
```

## Usage
@@ -184,6 +132,18 @@ Use a fused ForEach optimizer:
Learner(..., opt_func=adam(foreach=True))
```
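
The other fused optimizers listed above follow the same pattern; a minimal sketch, assuming fastxtend’s `adan` exposes the same `foreach` flag as `adam`:

``` python
# Hypothetical: swaps in fastxtend's fused ForEach Adan, assuming adan()
# accepts the same foreach flag shown for adam() above.
Learner(..., opt_func=adan(foreach=True))
```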

Or a bitsandbytes 8-bit optimizer:

``` python
Learner(..., opt_func=adam(eightbit=True))
```

Speed up image training using Progressive Resizing:

``` python
Learner(..., cbs=ProgressiveResize())
```

Log an accuracy metric on the training set as a smoothed metric and
validation set like normal:
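
For example, a minimal sketch, assuming fastxtend’s `Accuracy` metric with the `LogMetric.Train` and `MetricType.Smooth` options:

``` python
# Hypothetical sketch: logs a smoothed accuracy on the training set plus the
# standard accuracy on the validation set, assuming these fastxtend names.
Learner(..., metrics=[Accuracy(log_metric=LogMetric.Train, metric_type=MetricType.Smooth),
                      Accuracy()])
```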

@@ -201,10 +161,12 @@ mloss = MultiLoss(loss_funcs=[nn.MSELoss, nn.L1Loss],
Learner(..., loss_func=mloss, metrics=RMSE(), cbs=MultiLossCallback)
```

Apply MixUp, CutMix, or Augmentation while training:
Compile a model with `torch.compile`:

``` python
Learner(..., cbs=CutMixUpAugment)
from fastxtend.callback import compiler

learn = Learner(...).compile()
```

Profile a fastai training loop:
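
A minimal sketch of typical usage, assuming fastxtend patches a `profile` convenience method onto `Learner` (the import path here is an assumption):

``` python
# Hypothetical sketch: the simple profiler import and the Learner.profile()
# method are assumptions about fastxtend's API.
from fastxtend.callback import simpleprofiler

learn = Learner(...).profile()
learn.fit_one_cycle(2, 3e-3)
```
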
68 changes: 32 additions & 36 deletions nbs/index.ipynb
@@ -52,7 +52,8 @@
"\n",
"**General Features**\n",
"\n",
"* Fused implementations of modern optimizers, such as [Adan](optimizer.adan.html) and [Lion](optimizer.lion.html).\n",
"* Fused implementations of modern optimizers, such as [Adan](optimizer.adan.html), [Lion](optimizer.lion.html), & [StableAdam](optimizer.stableadam.html).\n",
"* Hugging Face [Transformers compatibility](text.huggingface.html) with fastai\n",
"* Flexible [metrics](metrics.html) which can log on train, valid, or both. Backwards compatible with fastai metrics.\n",
"* Easily use [multiple losses](multiloss.html) and log each individual loss on train and valid.\n",
"* [Multiple profilers](callback.profiler.html) for profiling training and identifying bottlenecks.\n",
@@ -66,15 +67,6 @@
"* More [attention](vision.models.attention_modules.html) and [pooling](vision.models.pooling.html) modules\n",
"* A flexible implementation of fastai’s `XResNet`.\n",
"\n",
"**Audio**\n",
"\n",
"* `TensorAudio`, `TensorSpec`, `TensorMelSpec` objects which maintain metadata and support plotting themselves using librosa.\n",
"* A selection of performant [audio augmentations](audio.augment.html) inspired by fastaudio and torch-audiomentations.\n",
"* Uses TorchAudio to quickly convert `TensorAudio` waveforms into `TensorSpec` spectrograms or `TensorMelSpec` mel spectrograms using the GPU.\n",
"* Out of the box support for converting one `TensorAudio` to one or multiple `TensorSpec` or `TensorMelSpec` objects from the Datablock api.\n",
"* Audio [MixUp and CutMix](audio.mixup.html) Callbacks.\n",
"* `audio_learner` which merges multiple `TensorSpec` or `TensorMelSpec` objects before passing to the model.\n",
"\n",
"Check out the documentation for additional splitters, callbacks, schedulers, utilities, and more."
]
},
@@ -101,40 +93,31 @@
"pip install fastxtend\n",
"```\n",
"\n",
"To install with dependencies for vision, FFCV, audio, or all tasks run one of:\n",
"```bash\n",
"pip install fastxtend[vision]\n",
"\n",
"pip install fastxtend[ffcv]\n",
"\n",
"pip install fastxtend[audio]\n",
"\n",
"pip install fastxtend[all]\n",
"```\n",
"\n",
"Or to create an editable development install:\n",
"fastxtend can be installed with task-specific dependencies for `vision`, `ffcv`, `text`, `audio`, or `all`:\n",
"```bash\n",
"git clone https://github.com/warner-benjamin/fastxtend.git\n",
"cd fastxtend\n",
"pip install -e \".[dev]\"\n",
"pip install \"fastxtend[all]\"\n",
"```\n",
"\n",
"To easily install prerequisites for all fastxtend features, use [Conda](https://docs.conda.io/en/latest) or [Miniconda](https://docs.conda.io/en/latest/miniconda.html):\n",
"To easily install most prerequisites for all fastxtend features, use [Conda](https://docs.conda.io/en/latest) or [Miniconda](https://docs.conda.io/en/latest/miniconda.html):\n",
"\n",
"```bash\n",
"conda create -n fastxtend python=3.10 \"pytorch>=2.0.0\" \\\n",
"torchvision torchaudio pytorch-cuda=11.8 cuda fastai nbdev \\\n",
"pkg-config libjpeg-turbo opencv tqdm terminaltables psutil \\\n",
"numpy numba librosa=0.9.2 timm kornia rich typer wandb \\\n",
"-c pytorch -c nvidia/label/cuda-11.8.0 -c fastai \\\n",
"-c huggingface -c conda-forge\n",
"conda create -n fastxtend python=3.11 \"pytorch>=2.1\" torchvision torchaudio \\\n",
"pytorch-cuda=12.1 fastai nbdev pkg-config libjpeg-turbo opencv tqdm psutil \\\n",
"terminaltables numpy \"numba>=0.57\" librosa timm kornia rich typer wandb \\\n",
"\"transformers>=4.34\" \"tokenizers>=0.14\" \"datasets>=2.14\" ipykernel ipywidgets \\\n",
"\"matplotlib<3.8\" -c pytorch -c nvidia -c fastai -c huggingface -c conda-forge\n",
"\n",
"conda activate fastxtend\n",
"\n",
"pip install \"fastxtend[all]\"\n",
"```\n",
"replacing `pytorch-cuda=11.8` and `nvidia/label/cuda-11.8.0` with your preferred [supported version of Cuda](https://pytorch.org/get-started/locally). Then install fastxtend using `pip`:\n",
"replacing `pytorch-cuda=12.1` with your preferred [supported version of Cuda](https://pytorch.org/get-started/locally).\n",
"\n",
"To create an editable development install:\n",
"```bash\n",
"pip install fastxtend[all]\n",
"git clone https://github.com/warner-benjamin/fastxtend.git\n",
"cd fastxtend\n",
"pip install -e \".[dev]\"\n",
"```"
]
},
@@ -165,6 +148,17 @@
"Learner(..., opt_func=adam(foreach=True))\n",
"```\n",
"\n",
"Or a bitsandbytes 8-bit optimizer:\n",
"```python\n",
"Learner(..., opt_func=adam(eightbit=True))\n",
"```\n",
"\n",
"Speed up image training using Progressive Resizing:\n",
"\n",
"```python\n",
"Learner(... cbs=ProgressiveResize())\n",
"```\n",
"\n",
"Log an accuracy metric on the training set as a smoothed metric and validation set like normal:\n",
"```python\n",
"Learner(..., metrics=[Accuracy(log_metric=LogMetric.Train, metric_type=MetricType.Smooth),\n",
@@ -179,9 +173,11 @@
"Learner(..., loss_func=mloss, metrics=RMSE(), cbs=MultiLossCallback)\n",
"```\n",
"\n",
"Apply MixUp, CutMix, or Augmentation while training:\n",
"Compile a model with `torch.compile`:\n",
"```python\n",
"Learner(..., cbs=CutMixUpAugment)\n",
"from fastxtend.callback import compiler\n",
"\n",
"learn = Learner(...).compile()\n",
"```\n",
"\n",
"Profile a fastai training loop:\n",
