This repository is a fork of an app developed by the ULiège Cytomine research team: Cytomine-ULiege/S_Segment-ML-MultiOutputET-Pred-BI
A Cytomine (https://cytomine.org) app for applying a segmentation (ML) model. It differs from the original ULiège implementation in that prediction is performed exclusively inside regions of interest (ROIs).
For the associated training software, see:
This implementation follows Cytomine (> v3.0) external app conventions based on container technology.
Summary: It applies a segmentation model (binary or multi-class) based on subwindows and multiple-output extra-trees to segment a set of regions of interest.
Typical application: Within chosen regions of interest, predict the areas corresponding to a certain term (e.g. tumor regions in histology slides).
Based on: a pixel classification model to detect regions of interest; the methodology is presented in the associated publication.
Parameters: cytomine_host, cytomine_public_key, cytomine_private_key, cytomine_id_project and cytomine_id_software
These parameters are required by every Cytomine external app. They let the app reach the Cytomine instance (identified by its host) and authenticate to it (with the key pair). An app always runs within a project (cytomine_id_project) and, before it can be run, must have been declared to the platform as a software (cytomine_id_software).
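For illustration, a minimal connection sketch using the Cytomine Python client (host and keys below are placeholders; in a deployed app these values are injected by the platform):

```python
from cytomine import Cytomine

# Placeholders: real values are provided when the app is launched.
with Cytomine(host="https://demo.cytomine.org",
              public_key="<cytomine_public_key>",
              private_key="<cytomine_private_key>") as conn:
    # Once authenticated, the app can read and write resources in the
    # project identified by cytomine_id_project.
    print(conn.current_user)
```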
Parameters: cytomine_id_images, cytomine_id_roi_term, cytomine_zoom_level
The algorithm only segments the provided images (cytomine_id_images), or all the images of the project if the parameter is omitted or empty. Images are opened at the provided zoom level (cytomine_zoom_level, 0 being the highest magnification). In the selected images, only pixels inside the provided regions of interest (ROIs) receive a prediction. The ROIs are the annotations carrying the given term (cytomine_id_roi_term).
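As a sketch, the ROIs of one image could be retrieved with the Cytomine Python client along these lines (identifiers are placeholders, and an authenticated connection such as the one above is assumed):

```python
from cytomine.models import AnnotationCollection

cytomine_id_project = 1234   # placeholder identifiers
image_id = 5678              # one entry of cytomine_id_images
cytomine_id_roi_term = 42

rois = AnnotationCollection()
rois.project = cytomine_id_project
rois.image = image_id
rois.term = cytomine_id_roi_term
rois.showWKT = True          # also return the ROI geometries
rois.fetch()
```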
Parameters: cytomine_id_job
The machine learning model to use must have been generated by one of the training apps listed in the introduction. These apps attach the model to the training job (cytomine_id_job) as an attached file.
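A hedged sketch of fetching that model with the Cytomine Python client (the single-file assumption and the destination path are illustrative, not guaranteed by the training apps):

```python
from cytomine.models import Job, AttachedFileCollection

cytomine_id_job = 9999                     # placeholder identifier
train_job = Job().fetch(cytomine_id_job)
attached_files = AttachedFileCollection(train_job).fetch()
model_file = attached_files[0]             # assuming a single attached model
model_file.download("/tmp/model.pkl")      # illustrative destination path
```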
Parameters: cytomine_tile_size, cytomine_tile_overlap
The algorithm consumes each image tile by tile, each tile being a square of the provided size in pixels (cytomine_tile_size). Consecutive tiles can overlap if requested (cytomine_tile_overlap).
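A minimal sketch of such a tiling (plain Python, independent of the actual implementation; border tiles are not clamped here, whereas the app handles image borders in its own way):

```python
def tile_origins(width, height, tile_size, overlap):
    # Top-left corners of the square tiles covering a width x height image,
    # with a fixed overlap between consecutive tiles.
    step = tile_size - overlap
    xs = range(0, max(width - tile_size, 0) + 1, step)
    ys = range(0, max(height - tile_size, 0) + 1, step)
    return [(x, y) for y in ys for x in xs]

# Example: a 10000 x 8000 image, 512-px tiles overlapping by 32 px.
origins = tile_origins(10000, 8000, tile_size=512, overlap=32)
```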
Parameters: pyxit_predictions_per_pixel
The approximate number of predictions to produce per pixel (pyxit_predictions_per_pixel). Increasing this value can improve segmentation quality, at the expense of computation time.
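As a back-of-the-envelope illustration (an assumed relation, not the app's exact internal formula): if each extracted subwindow covers a fixed number of pixels, the number of subwindows per tile scales linearly with this parameter:

```python
# Assumed relation: about n_subwindows * sw_area / tile_area predictions
# land on each pixel of a tile.
tile_area = 512 * 512          # cytomine_tile_size squared
sw_area = 24 * 24              # hypothetical subwindow footprint
predictions_per_pixel = 4      # pyxit_predictions_per_pixel
n_subwindows = round(predictions_per_pixel * tile_area / sw_area)  # ~1820
```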
Parameters: cytomine_id_predict_term, min_annotation_area, max_annotation_area, tile_filter_min_stddev, tile_filter_max_mean
If the segmentation model used for the prediction is binary, the segmented areas will be associated with the provided term (cytomine_id_predict_term). For multi-class segmentation, this parameter can be omitted and each class will be assigned the term it was associated with during training.
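A hypothetical sketch of this dispatch (model_classes is an illustrative name for the class/term list stored with the model at training time, not the app's actual variable):

```python
cytomine_id_predict_term = 42    # placeholder identifier
model_classes = [0, 1]           # e.g. background/foreground for a binary model

if len(model_classes) == 2:      # binary model: foreground gets the given term
    term_for_class = {1: cytomine_id_predict_term}
else:                            # multi-class: terms recorded at training time
    term_for_class = {i: term for i, term in enumerate(model_classes)}
```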
During prediction, tiles whose content is considered background are ignored. A tile is considered background if, for at least one of its channels, the standard deviation of the pixel intensities is smaller than the provided value (tile_filter_min_stddev) or the average of the pixel intensities is larger than the provided value (tile_filter_max_mean).
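The test described above can be written with NumPy as follows (a sketch assuming tiles are (H, W, C) arrays; names are illustrative):

```python
import numpy as np

def is_background(tile, min_stddev, max_mean):
    # Per-channel statistics over all pixels of the tile.
    channels = tile.reshape(-1, tile.shape[-1]).astype(float)
    return bool((channels.std(axis=0) < min_stddev).any()
                or (channels.mean(axis=0) > max_mean).any())
```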
Resulting segmented areas are removed if they are smaller than the provided minimum area (min_annotation_area) or larger than the provided maximum area (max_annotation_area), both expressed in pixels at zoom level 0.
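A sketch of this filter with Shapely, assuming the usual pyramid convention where each zoom level halves the resolution (so a polygon measured at zoom level z covers 4**z times fewer pixels than at zoom level 0):

```python
from shapely.geometry import Polygon

def keep_annotation(polygon: Polygon, zoom_level, min_area, max_area):
    area_at_zoom0 = polygon.area * (4 ** zoom_level)
    return min_area <= area_at_zoom0 <= max_area
```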
Parameters: n_jobs
Number of CPUs to use for executing the computations (n_jobs, -1 for all CPUs).
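The value follows the scikit-learn convention, as in this sketch:

```python
from sklearn.ensemble import ExtraTreesClassifier

clf = ExtraTreesClassifier(n_estimators=10, n_jobs=-1)  # -1: use all CPUs
```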