- We proposed a Weighted-distribution Calibration (WC) that alleviates the biased feature distributions of novel classes: it transfers statistics from all base classes to calibrate each novel class's distribution and then generates additional data from the calibrated distribution (a minimal sketch of this idea follows the list).
- With a ResNet-12 backbone and a logistic regression classifier, WC improves accuracy from a 57.33% baseline to 63.87%.
- We also experimented with a Vision Transformer (ViT) based backbone as the feature extractor, expecting ViT to attend more closely to the target objects and thus reduce redundancy in the extracted features. Unfortunately, it did not perform well in our experiments, possibly due to the loss of local receptive fields.
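The snippet below is a minimal sketch of the calibration idea described above, not the code in this repository: the function names, the distance-based weighting, the hyperparameters (`k`, `alpha`, the number of sampled features), and the use of scikit-learn's `LogisticRegression` are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def calibrate_and_sample(support_feat, base_means, base_covs,
                         k=2, alpha=0.21, n_samples=750, rng=None):
    """Calibrate a Gaussian for one novel support feature and sample from it.

    base_means: (n_base, d) array of base-class feature means.
    base_covs:  (n_base, d, d) array of base-class feature covariances.
    The weighting scheme and hyperparameter values here are illustrative.
    """
    rng = np.random.default_rng() if rng is None else rng
    dists = np.linalg.norm(base_means - support_feat, axis=1)
    nearest = np.argsort(dists)[:k]                 # k closest base classes
    w = np.exp(-dists[nearest])
    w /= w.sum()                                    # distance-based weights (assumption)

    mean = (w[:, None] * base_means[nearest]).sum(axis=0)
    mean = (mean + support_feat) / 2.0              # mix in the support feature itself
    cov = (w[:, None, None] * base_covs[nearest]).sum(axis=0) + alpha
    return rng.multivariate_normal(mean, cov, size=n_samples)


def wc_classify(support_x, support_y, query_x, base_means, base_covs):
    """Fit a logistic regression on support features plus calibrated samples."""
    feats, labels = [support_x], [support_y]
    for x, y in zip(support_x, support_y):
        sampled = calibrate_and_sample(x, base_means, base_covs)
        feats.append(sampled)
        labels.append(np.full(len(sampled), y))
    clf = LogisticRegression(max_iter=1000)
    clf.fit(np.concatenate(feats), np.concatenate(labels))
    return clf.predict(query_x)
```

The intent is only to show the flow: estimate base-class statistics offline, calibrate a Gaussian per novel support sample, draw synthetic features, and fit the logistic regression classifier on the enlarged support set.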
The implementations of backbone networks are adapted from here.
Note: due to the size limit, the pretrained model and the miniImageNet dataset are not included, but they can be downloaded from the following links:
- The `config` folder contains all `.yaml` files to configure models, loggers, and miscellaneous parameters.
- The `core` folder contains all the code for initializing models, training, and testing them. In particular, `train.py` and `test.py` are used for training and testing the ResNet-12 backbone; `ViTtrainer.py` and `ViTtest.py` are used for training and testing the ViT backbone.
- Download the miniImageNet dataset and extract it.
- Move the dataset to a designated location and set `data_root` in `/config/headers/data.yaml` to that location (a small verification sketch follows this list).
- Install all dependencies in `requirements.txt`.
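As a quick sanity check for the `data_root` step, the sketch below loads `/config/headers/data.yaml` and confirms the path exists. It assumes the repository root is the working directory and that the file is plain YAML with a top-level `data_root` key; anything beyond that key is an assumption.

```python
import os
import yaml  # pip install pyyaml

CONFIG_PATH = "config/headers/data.yaml"  # relative to the repository root (assumption)

with open(CONFIG_PATH) as f:
    cfg = yaml.safe_load(f)

data_root = cfg["data_root"]  # key name taken from the setup instructions above
if not os.path.isdir(data_root):
    raise FileNotFoundError(f"data_root does not point to a directory: {data_root}")
print(f"Found dataset directory {data_root} with {len(os.listdir(data_root))} entries")
```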
- Download the Pretrained ResNet-12 Backbone.
- Modify the `PATH` variable in `run_test.py` to point to the downloaded backbone (a hypothetical illustration of this step follows the list).
- Run `python run_test.py`.
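Purely as a hypothetical illustration of the `PATH` step (the real `run_test.py` may load the checkpoint differently), the snippet below assumes a PyTorch checkpoint and an arbitrary filename:

```python
import torch

# Hypothetical path to the downloaded backbone checkpoint; adjust to where you saved it.
PATH = "./checkpoints/resnet12_pretrained.pth"

# Load on CPU to avoid device mismatches; whether the file is a raw state_dict
# or a dict containing one under "state_dict" is an assumption.
checkpoint = torch.load(PATH, map_location="cpu")
state_dict = checkpoint.get("state_dict", checkpoint)
print(f"Loaded {len(state_dict)} parameter tensors from {PATH}")
```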
- Modify the configuration in the corresponding `[backbone_name].yaml` file.
- Start training: `python run_trainer.py`
- Modify the test configuration in the `config.yaml` file at the root path.
- Start testing: `python run_test.py`
- Modify the configuration in `vitconfig.yaml`.
- Start training: `python run_vit_trainer.py`
- Modify the test configuration in the `vitconfig.yaml` file at the root path.
- Start testing: `python run_vit_test.py`