How to change normalization strategy and weight map in fine tuning? #40
Comments
The newest version of the plugin allows you to select the normalization mode in "U-Net->Utilities->Create New Model" (there you can also change the weights). If you want to use custom pre-normalization, select "No normalization"; "Zero-Mean, unit standard deviation" is also available as a normalization strategy.

Since v_bal comes into play in two places, once for tile selection and a second time in pixel weighting, I'd suggest not to decrease it below the square root of the foreground/background ratio. In your case this would be sqrt(0.003) (around 1/18). There is a good reason to choose it closer to 1: the lower the balancing term, the more biased the network will be towards foreground. If you had an arbitrary amount of training time, I would suggest avoiding re-balancing entirely, so that the network learns the correct foreground/background bias.
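To make the suggested bound concrete, here is a small Python sketch (the 0.003 foreground/background ratio is the value reported in this thread; everything else is illustrative):

```python
import math

# Foreground/background pixel ratio measured on the training data
# (the value reported in this thread).
fg_bg_ratio = 0.003

# Suggested lower bound for v_bal: sqrt(fg/bg). Since v_bal acts twice
# (tile selection and pixel weighting), going below this bound
# over-weights the foreground.
v_bal_min = math.sqrt(fg_bg_ratio)
print(f"keep v_bal >= {v_bal_min:.4f} (about 1/{1 / v_bal_min:.0f})")
```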
Thanks a lot. With a different normalization strategy, how can I use the pre-trained weights?
The modeldef.h5 file only contains the architecture and the hyperparameters for pre-processing and augmentation. The actual model weights are stored in a corresponding file ending in .caffemodel.h5. You have two options: keep using the pre-trained .caffemodel.h5 weights together with your new model definition and fine-tune from there, or train a new model from scratch.
Maybe the model is indeed not 100% compatible then, but training from scratch makes sense in your case anyway.
There are two different modes of segmentation: semantic segmentation, which simply classifies each pixel as belonging to (any) object or to background, and instance segmentation, in which the goal is to additionally tell different instances of foreground objects apart. IoU measures semantic segmentation quality, so an increase in IoU means the segmentations become finer and more accurate. F1 measures the ability to separate instances; it is the harmonic mean of precision (how many detections are true positives) and recall (how many objects are detected at all). Since your IoU is still increasing and your validation loss is still decreasing, I would continue training, although both scores already indicate a rather good model.
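As a quick illustration of the F1 definition above, a minimal Python sketch (the precision/recall values are made up):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)

# Example: 90% of detections are true positives, 80% of all
# objects are detected at all.
print(f1_score(0.9, 0.8))  # ~0.847
```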
Thank you!
I found foregroundBackgroundRatio in the .modeldef.h5 file (['unet_param']['pixelwise_loss_weights']['foregroundBackgroundRatio']). v_bal = foregroundBackgroundRatio, am I right? If I change the value of foregroundBackgroundRatio, will v_bal change too?
Correct.
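For anyone who prefers editing the file directly, a minimal h5py sketch, assuming the value is stored as a scalar dataset at the path quoted above (the filename is a placeholder; back up the file first):

```python
import h5py

with h5py.File("my_model.modeldef.h5", "r+") as f:
    ds = f["unet_param/pixelwise_loss_weights/foregroundBackgroundRatio"]
    print("current v_bal:", ds[()])
    ds[()] = 0.055  # e.g. sqrt(0.003), the suggested lower bound
```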
Hi,
Because the features of my images are quite diverse, I would like to subtract the mean and divide by the standard deviation of each image. How can I do this in fine-tuning?
For my images, the foreground/background ratio is about 0.003. To decrease the weight of the background, what is the best value for v_bal? And how can I change it?
Thank you very much!
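A minimal sketch of the per-image pre-normalization described in the question, assuming images are available as NumPy arrays:

```python
import numpy as np

def normalize_per_image(img: np.ndarray) -> np.ndarray:
    """Subtract the per-image mean and divide by the per-image
    standard deviation (zero-mean, unit-sd normalization)."""
    img = img.astype(np.float32)
    std = img.std()
    return (img - img.mean()) / (std if std > 0 else 1.0)
```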