
Problem using different value of num_classes #19

Open
darshats opened this issue Feb 10, 2022 · 15 comments

@darshats
Contributor

Hi,
I think I encountered an error when I tried to change the predefined UNet to my own, which does binary segmentation. From what I can gather, during train_ae the input image is compared with itself as the prediction. Since the input image (X) is RGB, it expects a 3-channel output (prediction) as well. If I change the UNet to a two-class output, I get an error here:

loss1 = criterion(prediction, X)
  File "/media/App/anaconda3/envs/NN/lib/python3.9/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/media/App/anaconda3/envs/NN/lib/python3.9/site-packages/torch/nn/modules/loss.py", line 528, in forward
    return F.mse_loss(input, target, reduction=self.reduction)
  File "/media/App/anaconda3/envs/NN/lib/python3.9/site-packages/torch/nn/functional.py", line 2928, in mse_loss
    expanded_input, expanded_target = torch.broadcast_tensors(input, target)
  File "/media/App/anaconda3/envs/NN/lib/python3.9/site-packages/torch/functional.py", line 74, in broadcast_tensors
    return _VF.broadcast_tensors(tensors)  # type: ignore
RuntimeError: The size of tensor a (2) must match the size of tensor b (3) at non-singleton dimension 1
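For illustration, here is a torch-free sketch of the broadcast check that raises this error; `check_broadcastable` is a hypothetical helper, and shape tuples stand in for the (N, C, H, W) tensors:

```python
def check_broadcastable(shape_a, shape_b):
    """Mimic torch.broadcast_tensors' rule on two shape tuples:
    trailing dimensions must be equal, or one of them must be 1."""
    for i in range(min(len(shape_a), len(shape_b))):
        # compare from the trailing end, as broadcasting does
        a = shape_a[-1 - i]
        b = shape_b[-1 - i]
        if a != b and a != 1 and b != 1:
            raise RuntimeError(
                f"The size of tensor a ({a}) must match the size of tensor b "
                f"({b}) at non-singleton dimension {len(shape_a) - 1 - i}"
            )

check_broadcastable((4, 3, 256, 256), (4, 3, 256, 256))      # 3-channel AE output vs RGB X: fine
try:
    check_broadcastable((4, 2, 256, 256), (4, 3, 256, 256))  # 2-class head vs RGB input X
except RuntimeError as e:
    print(e)  # mismatch is at dimension 1, the channel axis
```

Because train_ae is an autoencoder (the target is X itself), the decoder must output as many channels as the input image, independent of the number of segmentation classes.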

Is there a better way to plug in a custom unet where n_classes != 3?

Thanks
Darshat

@choosehappy
Owner

choosehappy commented Feb 11, 2022 via email

@darshats
Contributor Author

Hi,
Thanks for your reply. To give better context, I'm replacing the entire model with a custom resnet50-based encoder UNet, so I can't use the model change you mentioned above.

Ideally, I don't want to have to run the train_ae script at all. The problem I've run into is that the model created by the train_ae script in folder 0 is needed for the retrain_dl script to go forward. I think some other internal DB structures are also updated by that script.

So for now I retained train_ae as is, with the model that comes with the app. In retrain_dl I changed it to ignore the output of train_ae completely (i.e., to stop looking at the folder 0 model).

That got me past this issue. It would have been nice to be able to do this better, so that train_ae can be skipped cleanly.

Thanks for the reply!
Darshat

@choosehappy
Owner

choosehappy commented Feb 11, 2022 via email

@darshats
Contributor Author

Gutting the script didn't quite work: if I trigger the training "from base" option, the API call http://localhost:5555/api/resnet2/retrain_dl?frommodelid=0 fails with:
{"error":"Deep learning model 0 doesn't exist"}
It also causes the superpixel algorithm to error out, though that just clutters the logs, since I don't use it.
To get past this, I had to copy a dummy model into the 0 location.
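As a rough sketch of that workaround (the `models/0` layout and the file names are assumptions for illustration, not the project's actual directory structure):

```python
import os
import shutil
import tempfile

def install_dummy_model(src_checkpoint, project_root):
    """Copy a checkpoint into the slot that retrain_dl reads as "model 0".
    The "models/0" subfolder and "best_model.pth" name are hypothetical;
    adjust them to match the actual project layout."""
    dst_dir = os.path.join(project_root, "models", "0")
    os.makedirs(dst_dir, exist_ok=True)
    dst = os.path.join(dst_dir, "best_model.pth")
    shutil.copyfile(src_checkpoint, dst)
    return dst

# Demonstration with throwaway files:
root = tempfile.mkdtemp()
src = os.path.join(root, "custom_unet.pth")
open(src, "wb").close()                                  # stand-in checkpoint
print(os.path.exists(install_dummy_model(src, root)))    # prints True
```

With a file in that slot, retrain_dl no longer fails its "model 0 exists" check, even though the checkpoint itself is never loaded in this workflow.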

The other critical functional issue is that I'd prefer to make annotations while zoomed in, to get the boundaries right. At the moment zoom works on the original image, not on the annotation window. How difficult would it be to change that? (Cropping is not an option, since I don't want to create ROIs of differing sizes.)

Thanks,
Darshat

@choosehappy
Owner

choosehappy commented Feb 13, 2022 via email

@darshats
Contributor Author

(let me know if I should start a different thread!)
Wrt the superpixel algorithm: yes, it's restored by copying the custom model over to the 0 folder. And I think I will start using it soon - that, plus the embedding-based selection, is a very thoughtful feature 👍

Wrt zooming the annotation window, to explain better, below is a screenshot at 125% browser zoom:
[screenshot]
Zoom is needed to mark the nuclei boundaries in this image, since they can be very close.
Using browser zoom is not a bad suggestion, but then everything gets bigger. Having just the annotation window magnified would help mark finer details, especially since you've coded up edge weights in the model :)

@choosehappy
Owner

Thanks for sending this over

I'll admit, I'm a bit confused here, because it looks like your "regular" window is actually somehow bigger than your annotation window, which seems weird to me.

This is what I expected it to look like:

[screenshot]

where you can see that the annotation window is a much higher magnification version of the input image, and also would allow for accurate cell level segmentation/annotation

I'm wondering why this wouldn't be true in your case... what are the sizes of the images you're uploading?

@darshats
Contributor Author

All the images I use are 256x256, and the patch size is also 256. I keep them the same because I use the import-annotation script, and that assigns ROIs to train/test. If I update an annotation, it will also assign an ROI to train/test. To keep image sizes uniform for training, I set size=256 everywhere.

@choosehappy
Owner

choosehappy commented Feb 23, 2022 via email

@darshats
Contributor Author

That's a good option in the import script. The issue I run into is that if I update an imported annotation in the tool, the UI doesn't show whether it already belongs to train or test. On saving, I have to assign it again. This probably overrides the earlier assignment and skews the ratio.

In any case, the solution I'd like is to have a magnified view in the tool. I'm using a Wacom pad for annotation to get the boundaries right.

@choosehappy
Owner

I think we should be able to pretty easily give an indication (e.g., change the border color) of the annotated ROIs, so that it is clear whether they are in the training or testing set... I'll make a new issue for this.

My JavaScript is terrible, so I've asked the development team to look into it : )

> In any case, the solution I'd like to have is to be able to have a magnified view in the tool.

I've also asked them to look into this.

If you're looking for a super quick hack, you can simply resize your 256 x 256 images to, say, 2x the size and work on those, keeping the same "annotation box area". This way the 256 x 256 image (now 512 x 512) would appear twice as large in the annotation window.

I appreciate it isn't ideal, but it is something you can do immediately.
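The resize hack can be sketched in pure Python with nearest-neighbour doubling on a nested-list image (for real files you would resize with an image library before importing; `upscale2x` is a hypothetical helper):

```python
def upscale2x(img):
    """Nearest-neighbour 2x upscale of an image given as a list of rows:
    every pixel is duplicated horizontally and every row vertically."""
    out = []
    for row in img:
        wide = [px for px in row for _ in (0, 1)]  # duplicate each column
        out.append(wide)
        out.append(list(wide))                     # duplicate each row
    return out

tiny = [[1, 2], [3, 4]]
print(upscale2x(tiny))
# → [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

Applied to a 256 x 256 image this yields 512 x 512, so the same content fills twice the screen area in the annotation window.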

@choosehappy
Owner

Coming back to this: I just merged #22, which will give you the ability to zoom : )

Can you please take a look and provide feedback?

@darshats
Contributor Author

darshats commented Apr 7, 2022

Sure, I will try and get back. Thanks!

@darshats
Contributor Author

Hi, the zoom works, but only partially. It would be preferable to have scrollbars on the annotation window, because it quickly extends beyond the browser boundary; see screenshot:
[screenshot]

@choosehappy
Owner

choosehappy commented Apr 11, 2022 via email
