Script: U-Net integration into python pipeline. #23
Comments
Hi Jan,

Yes, the plugin prepares files containing data, labels and weights datasets in caffe-compatible hdf5 format. The off-the-shelf models expect data to be 4D blobs (n,c,y,x) for 2D data or 5D blobs (n,c,z,y,x) for 3D data.

Segmentation/Detection: this is easily implementable in python pipelines.

The finetuning case is a little trickier. Edge weight computation requires the distance of every background pixel to its second-nearest foreground instance. This is expensive, both in Java and python, because it requires iterating over all object instances and computing a distance transform for each object. One could restrict the computation to narrow bands around object instances to speed it up. Then model.prototxt and solver.prototxt are extracted from the modeldef.h5 file, adapted to match the number of input channels and output classes, and given file names; a text file is generated enumerating all training files. If validation is requested, additional layers for forward passes are added to compute IoU and F1, the validation images are tiled into overlapping tiles matching the network input shape, and the corresponding validation files are enumerated in a second text file. Finally, caffe is called with the solver.prototxt. At least data augmentation is then performed by caffe, so you don't have to think about that.

You can of course use a tensorflow or pytorch implementation of U-Net, which lifts the burden of generating all these book-keeping files for caffe, but then you must additionally implement data augmentation in python (or use a ready-made library), which is not particularly hard but still some effort.
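The data preparation described above can be sketched in a few lines of python. This is a minimal illustration, not the plugin's exact implementation: the dataset names (data, labels, weights) and the (n,c,y,x) layout come from the comment above, the edge-weight formula and its defaults (w0=10, sigma=5) follow the U-Net paper, and the class-balancing term here is a simple inverse-frequency assumption.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt


def unet_weight_map(labels, w0=10.0, sigma=5.0):
    """Per-pixel loss weights for finetuning (illustrative sketch).

    labels: 2D int array, 0 = background, >0 = instance id.
    For every background pixel, computes the distances d1, d2 to the
    nearest and second-nearest foreground instance and applies
        w = w_c + w0 * exp(-(d1 + d2)^2 / (2 * sigma^2)).
    As noted above, this needs one distance transform per instance,
    which is what makes it expensive.
    """
    labels = np.asarray(labels)
    ids = np.unique(labels)
    ids = ids[ids > 0]
    fg = labels > 0
    # class-balancing term w_c (inverse-frequency assumption)
    w = np.ones(labels.shape, dtype=np.float64)
    if fg.any() and (~fg).any():
        w[fg] = 0.5 / fg.mean()
        w[~fg] = 0.5 / (~fg).mean()
    if len(ids) >= 2:
        # one distance transform per object instance
        dists = np.stack([distance_transform_edt(labels != i) for i in ids])
        dists.sort(axis=0)  # per pixel: smallest two are d1, d2
        d1, d2 = dists[0], dists[1]
        w = w + w0 * np.exp(-((d1 + d2) ** 2) / (2.0 * sigma**2)) * (~fg)
    return w


def write_caffe_h5(path, image, labels, weights):
    """Write single-channel 2D data as 4D (n, c, y, x) blobs in the
    dataset layout described above (data, labels, weights)."""
    import h5py

    with h5py.File(path, "w") as f:
        f.create_dataset("data", data=image[np.newaxis, np.newaxis].astype(np.float32))
        f.create_dataset("labels", data=labels[np.newaxis, np.newaxis].astype(np.float32))
        f.create_dataset("weights", data=weights[np.newaxis, np.newaxis].astype(np.float32))
```

Restricting the computation to narrow bands, as suggested above, would amount to cropping each `labels != i` mask to a dilated bounding box of instance `i` before the distance transform.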
Hey Thorsten,

Based on your advice, I wrote a little python function that runs 2D segmentation tasks using your caffe_unet and the model/weight files created by your ImageJ plug-in. Maybe it's useful to someone else as well; I'll add the file below (if there are better ways to make it available, let me know). I'll also look into the finetuning case during the next weeks.

Cheers,

As file:
Great, this may be interesting for others as well!
Happy to hear that ;)
Hello, thank you for this. I would like to know whether this means I can run the analysis in the cloud, for instance on Colab. This might help those who don't have access to a GPU.
Hey all,
I currently have my image-processing pipeline for cell detection in python and I've been looking for ways to integrate your U-Net implementation into it. More specifically, I have a datajoint pipeline which holds images and ground-truth outlines/center locations, then trains models based on those, makes predictions for all models and images, and finally computes quality metrics. Now I want to try to add your model to it.
Thus, I'm looking for a GUI-free way to start and control finetuning/segmentation/detection jobs from a python environment. Is there a "command line-like" level that I could access?
As far as I understood from a brief look through your code, you prepare the model definition and input files in a certain way (creating blobs/h5 files within the Java part of the code) and then submit these to the respective caffe functions, which return results to the Java plugin.
I'm not sure whether this is correct, or where in that sequence it would be wise to start with a python integration (or whether that is a bad idea to begin with and implementing U-Net myself would be easier). If you have any thoughts on this, or if you could point me to some documentation of your Java/caffe interface, I'd be very grateful!
Cheers,
Jan.
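Regarding the "command line-like" level: caffe itself is driven entirely from the command line, so once the book-keeping files exist, training can be launched as a plain subprocess. A minimal sketch assuming a standard caffe binary on PATH; the file names below are placeholders:

```python
import subprocess


def build_caffe_train_command(solver, weights=None, gpu=None):
    """Assemble a `caffe train` invocation using caffe's standard CLI flags.

    solver:  path to a solver.prototxt (e.g. one extracted from modeldef.h5)
    weights: optional .caffemodel to finetune from
    gpu:     optional GPU id
    """
    cmd = ["caffe", "train", f"--solver={solver}"]
    if weights is not None:
        cmd.append(f"--weights={weights}")
    if gpu is not None:
        cmd.append(f"--gpu={gpu}")
    return cmd


def run_finetuning(solver, weights=None, gpu=None):
    # blocks until caffe exits; raises CalledProcessError on failure
    subprocess.run(build_caffe_train_command(solver, weights, gpu), check=True)
```

This only covers the training call; the file generation (prototxts, train/validation lists, hdf5 blobs) described earlier in the thread would still need to be reproduced in python.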