adding quickstart section to README.md

SkalskiP committed Sep 11, 2024
1 parent d614d25 commit 5aba660
Showing 2 changed files with 41 additions and 8 deletions.
45 changes: 37 additions & 8 deletions README.md
@@ -1,4 +1,3 @@

<div align="center">

<h1>maestro</h1>
@@ -23,13 +22,43 @@ Pip install the supervision package in a
pip install maestro
```

## 🚀 example
## 🔥 quickstart

### CLI

VLMs can be fine-tuned on downstream tasks directly from the command line with the
`maestro` command:

```bash
maestro florence2 train --dataset='<DATASET_PATH>' --epochs=10 --batch-size=8
```

Documentation and Florence-2 fine-tuning examples for object detection and VQA are
coming soon.

### SDK

Alternatively, you can fine-tune VLMs using the Python SDK, which accepts the same
arguments as the CLI example above:

```python
from maestro.trainer.models.florence_2 import (
    train,
    TrainingConfiguration,
    MeanAveragePrecisionMetric
)

config = TrainingConfiguration(
    dataset='<DATASET_PATH>',
    epochs=10,
    batch_size=8,
    metrics=[MeanAveragePrecisionMetric()]
)

train(config)
```
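
Not part of this commit, but as a hedged follow-up sketch: after `train(config)` finishes,
the validation metrics appear to be written to `<output_dir>/metrics/validation.json`,
mirroring the path constructed in the `core.py` hunk further down. The `<OUTPUT_DIR>`
placeholder below stands for whatever output directory the configuration used; it is an
assumption, not a documented default.

```python
import json
import os

# Assumption: training writes validation metrics to
# <output_dir>/metrics/validation.json, as built in
# maestro/trainer/models/florence_2/core.py shown below.
output_dir = "<OUTPUT_DIR>"  # placeholder for the configured output directory

metrics_path = os.path.join(output_dir, "metrics", "validation.json")
with open(metrics_path) as file:
    validation_metrics = json.load(file)

print(validation_metrics)
```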

## 🚧 roadmap
## 🦸 contribution

- [ ] Release a CLI for predefined fine-tuning recipes.
- [ ] Multi-GPU fine-tuning support.
- [ ] Allow multi-dataset fine-tuning and support multiple tasks at the same time.
We would love your help in making this repository even better! We are especially
looking for contributors with experience in fine-tuning vision-language models (VLMs).
If you notice any bugs or have suggestions for improvement, feel free to open an
[issue](https://github.com/roboflow/multimodal-maestro/issues) or submit a
[pull request](https://github.com/roboflow/multimodal-maestro/pulls).
4 changes: 4 additions & 0 deletions maestro/trainer/models/florence_2/core.py
@@ -144,6 +144,10 @@ def train(config: TrainingConfiguration) -> None:
    validation_metrics_tracker.as_json(
        output_dir=os.path.join(config.output_dir, "metrics"),
        filename="validation.json")

    # Log out paths for latest and best checkpoints
    print(f"Latest checkpoint saved at: {checkpoint_manager.get_latest_checkpoint_path()}")
    print(f"Best checkpoint saved at: {checkpoint_manager.get_best_checkpoint_path()}")


def prepare_peft_model(
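
For illustration only, and not part of this commit: a minimal sketch of what a checkpoint
manager exposing the two getters printed above might look like. Only the
`get_latest_checkpoint_path()` / `get_best_checkpoint_path()` method names come from the
diff; the class name, fields, and update logic here are assumptions.

```python
import os
from dataclasses import dataclass
from typing import Optional


@dataclass
class HypotheticalCheckpointManager:
    """Illustrative only: tracks the latest checkpoint and the best one
    seen so far, matching the two getters logged at the end of train()."""

    output_dir: str
    latest_path: Optional[str] = None
    best_path: Optional[str] = None
    best_metric: float = float("-inf")

    def register(self, checkpoint_dir: str, metric_value: float) -> None:
        # Assumed behaviour: every save updates "latest"; only an improved
        # validation metric updates "best".
        self.latest_path = checkpoint_dir
        if metric_value > self.best_metric:
            self.best_metric = metric_value
            self.best_path = checkpoint_dir

    def get_latest_checkpoint_path(self) -> Optional[str]:
        return self.latest_path

    def get_best_checkpoint_path(self) -> Optional[str]:
        return self.best_path


# Hypothetical usage with a placeholder output directory.
manager = HypotheticalCheckpointManager(output_dir="<OUTPUT_DIR>")
manager.register(os.path.join(manager.output_dir, "epoch_1"), metric_value=0.42)
print(manager.get_latest_checkpoint_path())
print(manager.get_best_checkpoint_path())
```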
