diff --git a/docs/source/creators/creators_description_classes.rst b/docs/source/creators/creators_description_classes.rst
index c9e84ad..95c2ab8 100644
--- a/docs/source/creators/creators_description_classes.rst
+++ b/docs/source/creators/creators_description_classes.rst
@@ -34,6 +34,9 @@
 
 For each output class, a separate vector layer can be created.
 Output report contains information about percentage coverage of each class.
+The model should have at least two output classes: one for the background and one (or more) for the objects of interest. The background class should be the first class in the model output.
+Model outputs should sum to 1.0 for each pixel, so that the output forms a probability map. To achieve this, the output should be passed through a softmax function.
+
 ===============
 Detection Model
 
diff --git a/src/deepness/processing/models/segmentor.py b/src/deepness/processing/models/segmentor.py
index 9ce8252..091bba8 100644
--- a/src/deepness/processing/models/segmentor.py
+++ b/src/deepness/processing/models/segmentor.py
@@ -37,7 +37,8 @@ def postprocessing(self, model_output: List) -> np.ndarray:
         np.ndarray
             Batch of postprocessed masks (N,H,W,C), 0-1
         """
-        labels = np.clip(model_output[0], 0, 1)
+        # labels = np.clip(model_output[0], 0, 1)
+        labels = model_output[0]  # clipping should not be needed - softmax outputs are already in [0, 1] (see #149)
         return labels
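
Note: below is a minimal, illustrative sketch (not part of the diff) of the per-pixel softmax normalization the docs change describes. The (N, H, W, C) layout follows the docstring in `postprocessing`; the `logits` input and the `softmax_over_classes` name are assumptions for illustration only.

    import numpy as np

    def softmax_over_classes(logits: np.ndarray) -> np.ndarray:
        """Turn raw per-class scores (N, H, W, C) into per-pixel probabilities.

        After this step each pixel's class scores sum to 1.0, so the
        postprocessing step no longer needs to clip values to [0, 1].
        Adapt the axis if the model uses a different output layout.
        """
        shifted = logits - logits.max(axis=-1, keepdims=True)  # subtract max for numerical stability
        exp = np.exp(shifted)
        return exp / exp.sum(axis=-1, keepdims=True)

    # Example: batch of 1, background class + 1 object class, 2x2 pixels
    logits = np.random.randn(1, 2, 2, 2)
    probs = softmax_over_classes(logits)
    assert np.allclose(probs.sum(axis=-1), 1.0)  # every pixel sums to 1.0

A model exported with such a softmax as its final layer already emits values in [0, 1], which is why the `np.clip` call in `segmentor.py` can be dropped.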