Commit e0359fa
Merge branch 'sync/v4.48.x' of https://github.com/TimoImhof/adapters into sync/v4.48.x

TimoImhof committed Feb 23, 2025
2 parents 4c22adf + ed6a9f2
Showing 157 changed files with 2,505 additions and 2,150 deletions.
19 changes: 15 additions & 4 deletions Makefile
@@ -28,18 +28,29 @@ style:
isort $(check_dirs)
${MAKE} extra_style_checks

# Run tests for the library
# Library Tests

# run all tests in the library
test:
python -m pytest -n auto --dist=loadfile -s -v ./tests/
python -c "import transformers; print(transformers.__version__)"

# run all tests for the adapter methods for all adapter models
test-adapter-methods:
python -m pytest --ignore ./tests/models -n auto --dist=loadfile -s -v ./tests/
python -m pytest -n auto --dist=loadfile -s -v ./tests/test_methods/

# run a subset of the adapter method tests for all adapter models
# list of all subsets: [core, heads, embeddings, composition, prefix_tuning, prompt_tuning, reft, unipelt, compacter, bottleneck, ia3, lora, config_union]
subset ?=
test-adapter-method-subset:
@echo "Running subset $(subset)"
python -m pytest -n auto --dist=loadfile -s -v ./tests/test_methods/ -m $(subset)


# run the Hugging Face test suite for all adapter models
test-adapter-models:
python -m pytest -n auto --dist=loadfile -s -v ./tests/models
python -m pytest -n auto --dist=loadfile -s -v ./tests/test_models/

# Run tests for examples

test-examples:
python -m pytest -n auto --dist=loadfile -s -v ./examples/pytorch/
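The new `test-adapter-method-subset` target above filters `./tests/test_methods/` by pytest marker: `make test-adapter-method-subset subset=lora` expands to `python -m pytest ... -m lora` and collects only tests tagged with that marker. The sketch below is illustrative only (the real marker registration lives in the adapters test suite, not in this commit); it shows how such a marker is typically declared and attached to a test so that the `-m` filter picks it up.

```python
import pytest


# In a conftest.py: register the marker so `pytest -m lora` can select it
# without raising an unknown-marker warning (the registration shown here is
# an assumption, only the marker name comes from the Makefile comment).
def pytest_configure(config):
    config.addinivalue_line("markers", "lora: tests for the LoRA adapter method")


# In a test module: any test tagged with the marker is collected by
# `pytest -m lora`; untagged tests are filtered out of the run.
@pytest.mark.lora
def test_lora_smoke():
    assert True
```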
6 changes: 3 additions & 3 deletions README.md
@@ -32,7 +32,7 @@ A Unified Library for Parameter-Efficient and Modular Transfer Learning
<a href="https://arxiv.org/abs/2311.11077">Paper</a>
</h3>

![Tests](https://github.com/Adapter-Hub/adapters/workflows/Tests/badge.svg?branch=adapters)
![Tests](https://github.com/Adapter-Hub/adapters/workflows/Tests/badge.svg)
[![GitHub](https://img.shields.io/github/license/adapter-hub/adapters.svg?color=blue)](https://github.com/adapter-hub/adapters/blob/main/LICENSE)
[![PyPI](https://img.shields.io/pypi/v/adapters)](https://pypi.org/project/adapters/)

@@ -45,7 +45,7 @@ _Adapters_ provides a unified interface for efficient fine-tuning and modular tr

## Installation

`adapters` currently supports **Python 3.8+** and **PyTorch 1.10+**.
`adapters` currently supports **Python 3.9+** and **PyTorch 2.0+**.
After [installing PyTorch](https://pytorch.org/get-started/locally/), you can install `adapters` from PyPI ...

```
@@ -147,7 +147,7 @@ Currently, adapters integrates all architectures and methods listed below:

| Method | Paper(s) | Quick Links |
| --- | --- | --- |
| Bottleneck adapters | [Houlsby et al. (2019)](https://arxiv.org/pdf/1902.00751.pdf)<br> [Bapna and Firat (2019)](https://arxiv.org/pdf/1909.08478.pdf) | [Quickstart](https://docs.adapterhub.ml/quickstart.html), [Notebook](https://colab.research.google.com/github/Adapter-Hub/adapters/blob/main/notebooks/01_Adapter_Training.ipynb) |
| Bottleneck adapters | [Houlsby et al. (2019)](https://arxiv.org/pdf/1902.00751.pdf)<br> [Bapna and Firat (2019)](https://arxiv.org/pdf/1909.08478.pdf)<br> [Steitz and Roth (2024)](https://openaccess.thecvf.com/content/CVPR2024/papers/Steitz_Adapters_Strike_Back_CVPR_2024_paper.pdf) | [Quickstart](https://docs.adapterhub.ml/quickstart.html), [Notebook](https://colab.research.google.com/github/Adapter-Hub/adapters/blob/main/notebooks/01_Adapter_Training.ipynb) |
| AdapterFusion | [Pfeiffer et al. (2021)](https://aclanthology.org/2021.eacl-main.39.pdf) | [Docs: Training](https://docs.adapterhub.ml/training.html#train-adapterfusion), [Notebook](https://colab.research.google.com/github/Adapter-Hub/adapters/blob/main/notebooks/03_Adapter_Fusion.ipynb) |
| MAD-X,<br> Invertible adapters | [Pfeiffer et al. (2020)](https://aclanthology.org/2020.emnlp-main.617/) | [Notebook](https://colab.research.google.com/github/Adapter-Hub/adapters/blob/main/notebooks/04_Cross_Lingual_Transfer.ipynb) |
| AdapterDrop | [Rücklé et al. (2021)](https://arxiv.org/pdf/2010.11918.pdf) | [Notebook](https://colab.research.google.com/github/Adapter-Hub/adapters/blob/main/notebooks/05_Adapter_Drop_Training.ipynb) |
5 changes: 5 additions & 0 deletions conftest.py
@@ -87,3 +87,8 @@ def check_output(self, want, got, optionflags):


doctest.OutputChecker = CustomOutputChecker


def pytest_collection_modifyitems(items):
# Exclude the 'test_class' group from the test collection since it is not a real test class but a byproduct of the generic test class generation.
items[:] = [item for item in items if 'test_class' not in item.nodeid]
2 changes: 1 addition & 1 deletion docs/installation.md
@@ -1,7 +1,7 @@
# Installation

The `adapters` package is designed as an add-on for Hugging Face's Transformers library.
It currently supports Python 3.8+ and PyTorch 1.10+. You will have to [install PyTorch](https://pytorch.org/get-started/locally/) first.
It currently supports Python 3.9+ and PyTorch 2.0+. You will have to [install PyTorch](https://pytorch.org/get-started/locally/) first.

```{eval-rst}
.. important::
4 changes: 4 additions & 0 deletions docs/training.md
@@ -223,3 +223,7 @@ trainer = AdapterTrainer(
_Adapters_ supports fine-tuning of quantized language models similar to [QLoRA (Dettmers et al., 2023)](https://arxiv.org/pdf/2305.14314.pdf) via the `bitsandbytes` library integrated into Transformers.
Quantized training is supported for LoRA-based adapters as well as bottleneck adapters and prefix tuning.
Please refer to [this notebook](https://colab.research.google.com/github/Adapter-Hub/adapters/blob/main/notebooks/QLoRA_Llama_Finetuning.ipynb) for a hands-on guide.
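
A minimal sketch of that workflow, assuming a bitsandbytes-capable GPU setup; the checkpoint name, adapter name, and LoRA hyperparameters below are illustrative and not taken from this commit or the linked notebook:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

import adapters
from adapters import LoRAConfig

# Quantize the frozen base weights to 4-bit (QLoRA-style); values are illustrative.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # example checkpoint, assumed for illustration
    quantization_config=bnb_config,
    device_map="auto",
)

adapters.init(model)  # add adapter support to the plain Transformers model
model.add_adapter("qlora_adapter", config=LoRAConfig(r=8, alpha=16))
model.train_adapter("qlora_adapter")  # freeze the base model, train only the adapter
```

`train_adapter` freezes the quantized base weights, so only the non-quantized adapter parameters receive gradients during training.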

## Gradient Checkpointing
Gradient checkpointing is supported for all models (e.g. Llama 1/2/3) except for models whose gradient checkpointing is not supported by Hugging Face Transformers (like ALBERT). Please refer to [this notebook](https://colab.research.google.com/github/Adapter-Hub/adapters/blob/main/notebooks/Gradient_Checkpointing_Llama.ipynb) for a hands-on guide.
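
A rough sketch of how gradient checkpointing is typically switched on, reusing a `model` prepared as in the QLoRA sketch above; `train_dataset` is a placeholder and the argument values are illustrative:

```python
from transformers import TrainingArguments
from adapters import AdapterTrainer

# `model` must already have an adapter activated for training (train_adapter(...))
# and `train_dataset` must be a prepared dataset; both are placeholders here.
training_args = TrainingArguments(
    output_dir="./checkpoints",
    per_device_train_batch_size=1,
    gradient_checkpointing=True,  # recompute activations in the backward pass to save memory
)

trainer = AdapterTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
)
trainer.train()
```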

2 changes: 1 addition & 1 deletion examples/pytorch/language-modeling/run_clm.py
@@ -442,7 +442,7 @@ def main():
else:
model = AutoModelForCausalLM.from_config(config, trust_remote_code=model_args.trust_remote_code)
n_params = sum({p.data_ptr(): p.numel() for p in model.parameters()}.values())
logger.info(f"Training new model from scratch - Total size={n_params/2**20:.2f}M params")
logger.info(f"Training new model from scratch - Total size={n_params / 2**20:.2f}M params")

# Convert the model into an adapter model
adapters.init(model)
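
The `n_params` line above counts parameters by keying a dict on `p.data_ptr()`, which deduplicates tensors that share storage (e.g. tied embedding / LM-head weights) before summing. A standalone illustration of that idiom, independent of run_clm.py:

```python
import torch

# Two modules sharing one weight tensor, as with tied input embeddings / LM heads.
emb = torch.nn.Embedding(10, 4)
lm_head = torch.nn.Linear(4, 10, bias=False)
lm_head.weight = emb.weight  # tie the weights: both modules now hold the same tensor

params = list(emb.parameters()) + list(lm_head.parameters())
naive = sum(p.numel() for p in params)  # counts the shared tensor twice -> 80
deduped = sum({p.data_ptr(): p.numel() for p in params}.values())  # counted once -> 40
print(naive, deduped)
```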
