Adjusting LandCoverSeg example for 2 instead of 5 classes #188
Replies: 5 comments 2 replies
-
The weights correspond to the percentage of pixels for each class, as some classes will be underrepresented - for example, there will be (in general) far fewer pixels corresponding to roads than woods in a typical image. The weights make sure those pesky neural networks don't take the shortcut and actually make an effort to account for these underrepresented classes in the final segmentation result. Not sure what your classes are, but you can change it to a vector of two even numbers for starters, and it should work:
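The snippet that followed isn't shown here, but the idea can be sketched in pure NumPy (the `class_weights` helper and the toy mask below are illustrative, not part of the repo — the actual code wraps the result in a `torch.tensor`):

```python
import numpy as np

# Hypothetical helper: derive per-class weights from pixel frequencies.
# `masks` is assumed to be a list of 2-D label maps holding integer class ids.
def class_weights(masks, num_classes):
    counts = np.bincount(np.concatenate([m.ravel() for m in masks]),
                         minlength=num_classes).astype(np.float64)
    freqs = counts / counts.sum()
    # Inverse-frequency weighting: rare classes get larger weights.
    weights = 1.0 / np.maximum(freqs, 1e-8)
    return weights / weights.sum() * num_classes  # normalise so the mean weight is 1

# Toy 2-class example (0 = background, 1 = buildings):
mask = np.zeros((4, 4), dtype=np.uint8)
mask[0, 0] = 1  # one building pixel out of 16
w = class_weights([mask], 2)
```

The rare buildings class ends up with a much larger weight than the dominant background, which is exactly the effect the hard-coded five-class tensor achieves.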
-
As you can see above I got this workflow to successfully train the first time.
But it eventually just stops, even though it's running on an Nvidia T4 on Colab, with no errors:
How do I fine-tune an existing model with new imagery/labels?
-
Hi @jducnuigeen,
-
An update after two months of struggle: while I got the first run above to work in Google Colab (which uses Tesla T4 GPUs with CUDA 12.2 and the Turing architecture), I tried the same workflow on a local machine with a newer Nvidia RTX 2000 Ada GPU, which uses CUDA 12.4 and the Ada Lovelace architecture. These GPUs use different architectures, and that matters for the PyTorch build: my trials on the newer GPU fail with a string of problems suggesting the torch kernel I used above would have to be compiled for this different GPU. I tried updating torch to 2.6.0 and pytorch-lightning to 2.5.0, but now I get more errors like
So the tutorial I wanted to write is on hold until I can work through this rabbit hole of issues. Note: I stopped using Google Colab because it has a 12-hour runtime limit.
-
I've been trying to train a model on my own data using your LandCoverSeg example, with the aim of training a more pure segmentation model.
I've structured my custom input dataset to match the folder layout and image types for the images/masks across train/val/test.
My dataset has only two classes: background and buildings. Your original code running on the LandCoverSeg data had 5 classes.
I've also made some changes to hopefully adapt the code for my two classes:
In src->datamodules->datasets I modified landcoverseg_dataset.py
changing
mask = np.zeros((*label.shape[:2], 5), dtype=np.uint8)
to
mask = np.zeros((*label.shape[:2], 2), dtype=np.uint8)
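For reference, the 2-channel one-hot construction that line feeds into could look like this sketch (assuming `label` is a 2-D array of class ids, 0 = background and 1 = buildings; the actual `landcoverseg_dataset.py` may derive the channels from the label image differently):

```python
import numpy as np

# Toy 2x2 label map of class ids.
label = np.array([[0, 1],
                  [1, 0]], dtype=np.uint8)

# One channel per class, same spatial shape as the label.
mask = np.zeros((*label.shape[:2], 2), dtype=np.uint8)
for class_id in range(2):
    # Channel k is 1 wherever the label equals class k.
    mask[..., class_id] = (label == class_id).astype(np.uint8)
```

Each pixel ends up with exactly one channel set to 1, which is the invariant the rest of the pipeline expects.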
In src->models I modified segmenter_visualisate_utils.py
changing the palette from
palette = [
    0, 0, 0,
    255, 0, 0,
    0, 255, 0,
    0, 0, 255,
    255, 255, 0,
]
to
palette = [
    0, 0, 0,
    1, 0, 0,
]
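One caveat with that change: in 8-bit RGB, `1, 0, 0` renders as near-black and is visually almost indistinguishable from the background, so `255, 0, 0` may be what you want for the buildings class. A sketch of how such a flat palette maps a class-id mask to an RGB preview (plain NumPy lookup; the repo's visualisation code may apply the palette differently, e.g. via PIL):

```python
import numpy as np

# Flat palette: one (R, G, B) triple per class.
palette = [
    0, 0, 0,      # class 0: background -> black
    255, 0, 0,    # class 1: buildings  -> red (visible, unlike 1, 0, 0)
]
lut = np.array(palette, dtype=np.uint8).reshape(-1, 3)

# Toy 2x2 predicted class-id mask.
mask = np.array([[0, 1],
                 [1, 0]], dtype=np.uint8)

# Index the lookup table with the class ids to get an (H, W, 3) RGB image.
rgb = lut[mask]
```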
and of course changed the config, with mainly just class names and paths.
It tries to run, but it gets to /LandCoverSeg/src/losses/focal_dice_loss.py
and gives this error:
and I see that there are five class weights hard-coded in a torch tensor on line 15 of that file, but I don't understand where they came from, or how to modify them for just two classes?
self.weights = torch.tensor([[0.34523039, 23.35764161, 0.60372157, 3.09578992, 12.32148783]]).cuda()
Are there other places I need to adjust to get this to work with 2 classes?
Thoughts?