Replies: 4 comments
-
I don't believe you need to retrain, but it's possible you need to tweak how the model is loaded. @mu40 will know more, so tagging him here. Can you post some example images by any chance?
-
While you should not have to retrain this model to use a different image shape, it is hard to predict what went wrong without seeing your code. If the image shape is
-
Thanks for the responses! I've attached some example images here, along with the notebook I'm using; only the last cell contains my code, which lists the shape of both inputs as
My goal is to map EM (fixed) and MS (moving) datasets which have been warped, as seen in the examples. You can see that the edges don't perfectly align; the EM data is actually manually cropped in these to roughly contain the MS data. Any tips on how best to tweak training would be appreciated! As mentioned, I'm thinking of shifting / cropping either the fixed or moving images in the label_maps.
-
Thanks for sharing your code and images. The model weights are independent of the input image size. However, you need to specify the image shape you want when constructing the model, by passing
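To see why the weights are independent of the input size: convolution kernels slide over the input, so the same trained weights apply at any resolution; only the model's declared input shape has to change. Here is a minimal numpy sketch of that idea (the `conv2d` helper below is illustrative only, not part of voxelmorph):

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2D convolution. The kernel's weights are fixed and
    independent of the input size -- only the output shape changes."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# The same 3x3 "trained" kernel works on inputs of any size.
kernel = np.random.default_rng(0).normal(size=(3, 3))
small = conv2d(np.ones((256, 256)), kernel)
large = conv2d(np.ones((512, 512)), kernel)
print(small.shape, large.shape)  # (254, 254) (510, 510)
```

In voxelmorph terms, this would mean rebuilding the network with your target shape (e.g. something along the lines of `vxm.networks.VxmDense((2048, 2048), ...)`, depending on which model class the tutorial uses) and then calling `load_weights` with the pretrained weights file, rather than reusing a model constructed for 256x256 inputs.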
-
Hi,
I'm looking to use the SynthMorph shapes model to do image registration for multiple images taken at an EM / intracellular scale. I'm new to using DL models, so forgive me if these are basic questions.
Following the tutorial notebook, I can use the shapes model with the pretrained weights to unwarp my images, but only when they are scaled down to 256x256 resolution. When I try to use the original resolutions (or (2048, 2048), etc.), I get the error:

```
ConcatOp : Dimension 2 in both shapes must be equal: shape[0] = [1,256,32,32] vs. shape[1] = [1,256,256,256]
```

Do I need to train a new model from scratch with input images at my desired size instead of 256x256? Or is there some way to use resolutions different from the training data?

Also, we're scanning small sections of tissue samples, which don't have defined boundaries like MRI data and don't always fully overlap. I see that the VoxelMorph tutorial adds blank padding to the edges of the MRI data, but how would you suggest modifying the training data to allow the moving and fixed images to not be fully overlapping? I could randomly shift or crop the generated warped and/or fixed training images, but wanted your thoughts on whether that is a valid strategy.
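Concretely, the shifting idea I have in mind looks something like this (a pure-numpy sketch; the function names are my own, not from voxelmorph):

```python
import numpy as np

def shift2d(img, dy, dx):
    """Translate a 2D image by (dy, dx) pixels, zero-filling the
    exposed edges, so part of the content falls outside the frame."""
    H, W = img.shape
    out = np.zeros_like(img)
    ys, yd = (0, dy) if dy >= 0 else (-dy, 0)   # source / destination rows
    xs, xd = (0, dx) if dx >= 0 else (-dx, 0)   # source / destination cols
    h, w = H - abs(dy), W - abs(dx)
    if h > 0 and w > 0:
        out[yd:yd + h, xd:xd + w] = img[ys:ys + h, xs:xs + w]
    return out

def random_offset_pair(fixed, moving, max_shift=32, rng=None):
    """Randomly translate the moving image relative to the fixed one,
    so the training pair no longer fully overlaps."""
    rng = np.random.default_rng(rng)
    dy, dx = (int(v) for v in rng.integers(-max_shift, max_shift + 1, size=2))
    return fixed, shift2d(moving, dy, dx)
```

The idea would be to apply `random_offset_pair` to the generated warped/fixed pairs during training so the model sees partially overlapping inputs.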
Thanks!