The affine transform maps coordinates from the output space back to the input space to fetch input values, following the scipy ndimage convention: https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.affine_transform.html
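For intuition, here is a minimal sketch of that convention using scipy.ndimage directly (the test image and scale factors are illustrative): the matrix maps each output pixel index to the input position it samples, so a scale of 2 makes the content look smaller, and 0.5 makes it look larger.

```python
import numpy as np
from scipy.ndimage import affine_transform

# 8x8 test image with a bright 4x4 square (illustrative data).
image = np.zeros((8, 8))
image[2:6, 2:6] = 1.0

# Output pixel o samples the input at position matrix @ o + offset,
# so a scale of 2 "zooms out" (the square looks smaller) ...
shrunk = affine_transform(image, np.diag([2.0, 2.0]))
# ... and a scale of 0.5 "zooms in" (the square looks larger, partly cropped).
enlarged = affine_transform(image, np.diag([0.5, 0.5]))

print(shrunk.round(1))
print(enlarged.round(1))
```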
Hi,
I have a question regarding the Affine transform, which I'm using for a project on medical image registration. I am using the MedNIST dataset: I load an image (the fixed image) and then create an affinely transformed version of it (the moving image). For a simple affine transform I just try scaling, but the Affine method doesn't give the expected result.
```python
train_transforms = Compose(
    [
        LoadImageD(keys=["fixed_hand", "moving_hand"]),
        EnsureChannelFirstD(keys=["fixed_hand", "moving_hand"]),
        ScaleIntensityRanged(
            keys=["fixed_hand", "moving_hand"],
            a_min=0.0, a_max=255.0, b_min=0.0, b_max=1.0, clip=True,
        ),
        AffineD(keys=["moving_hand"], scale_params=(2, 2)),
        ToTensorD(keys=["fixed_hand", "moving_hand"]),
    ]
)
```
With the above snippet, I expect the moving image to be the fixed image upscaled by a factor of 2, but instead I get the opposite: the content of the moving image appears scaled down.
I also noticed this pattern with other parameters. For example, with a translation of 10 in both the x and y directions, the image moves 10 pixels left and up rather than right and up. I'm not sure why this is happening. Could someone please help me with this? Thank you so much!
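A minimal sketch of a possible fix, assuming the goal is a moving image whose content looks twice as large: since the affine maps output coordinates to input coordinates (see the reply above), passing the inverse scale factor of 0.5 to scale_params should give the visually upscaled result. The keys and intensity range below are copied from the snippet above; the choice of 0.5 follows from that output-to-input assumption.

```python
# A hedged sketch, not the original poster's code: under the output-to-input
# convention, scale_params=(0.5, 0.5) samples the input on a half-size grid,
# so the moving image content should appear upscaled by a factor of 2.
from monai.transforms import (
    AffineD,
    Compose,
    EnsureChannelFirstD,
    LoadImageD,
    ScaleIntensityRanged,
    ToTensorD,
)

train_transforms = Compose(
    [
        LoadImageD(keys=["fixed_hand", "moving_hand"]),
        EnsureChannelFirstD(keys=["fixed_hand", "moving_hand"]),
        ScaleIntensityRanged(
            keys=["fixed_hand", "moving_hand"],
            a_min=0.0, a_max=255.0, b_min=0.0, b_max=1.0, clip=True,
        ),
        # Inverse of the desired zoom: 0.5 here -> content looks 2x larger.
        AffineD(keys=["moving_hand"], scale_params=(0.5, 0.5)),
        ToTensorD(keys=["fixed_hand", "moving_hand"]),
    ]
)
```

Read this way, scale_params=(2, 2) means "each output pixel looks 2 pixels further out in the input", which is why the original snippet shrinks the content rather than enlarging it.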