- Image encoders are now imported only from timm models.
- Add `enc_out_indices` to model classes to enable selecting which layers to use as the encoder outputs.
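A minimal sketch of how the selection might look when building a model; the `CellPoseUnet` class and all keyword arguments besides `enc_out_indices` are illustrative assumptions, not verified API:

```python
# Hypothetical sketch: choose which timm encoder stages feed the decoder.
# The class name and all kwargs except `enc_out_indices` are assumptions.
from cellseg_models_pytorch.models import CellPoseUnet

model = CellPoseUnet(
    decoders=("cellpose", "type"),  # assumed decoder spec
    heads={"cellpose": {"cellpose": 2}, "type": {"type": 5}},
    enc_name="resnet50",            # any timm encoder name
    enc_out_indices=(1, 2, 3, 4),   # the new layer-selection argument
)
```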
- Removed the original SAM and DINOv2 image-encoder implementations from this repo. These are available from timm models these days.
- Removed the `cellseg_models_pytorch.training` module, which was left unused after the example notebooks were updated.
- Updated example notebooks.
- Added new example notebooks utilizing the UNI foundation model from MahmoodLab.
- Added new example notebooks utilizing the Prov-GigaPath foundation model from Microsoft Research.
- NOTE: These examples use the huggingface model hub to load the weights. Permission to use the model weights is required to run these examples.
- Update the timm version requirement to above 1.0.0.
- Drop support for Python 3.9
- The `self.encoder` in each model is new; thus, models with trained weights from previous versions of the package will not work with this version.
- Update the `Inferer.infer()` method API to accept arguments related to saving the model outputs.
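A rough sketch of the updated call, assuming a sliding-window style inferer; the class name and argument names below are illustrative assumptions, not the exact released API:

```python
# Hypothetical sketch: saving-related arguments are now passed to
# infer() itself. All names here are assumptions for illustration.
from cellseg_models_pytorch.inference import SlidingWindowInferer

# `model` is assumed to be any cellseg_models_pytorch model instance.
inferer = SlidingWindowInferer(model=model, input_path="/data/images")
inferer.infer(
    save_dir="/results",     # where to write the segmentation outputs
    save_format=".geojson",  # e.g. ".geojson", ".feather", ".parquet"
)
```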
- Add `CPP-Net`. https://arxiv.org/abs/2102.06867
- Add option for mixed precision inference
- Add an option to all of the segmentation models to interpolate model outputs to a given size.
- Add DINOv2 backbone
- Add support for `.geojson`, `.feather`, and `.parquet` file formats when running inference.
- Add `CPP-Net` example training with the Pannuke dataset.
- Fix resize transformation bug.
- Add a stem-skip module (a long skip for the input-image-resolution feature map)
- Add UnetTR transformer encoder wrapper class
- Add a new Encoder wrapper for timm- and UnetTR-based encoders
- Add stem-skip support and upsampling block options to all current model architectures
- Add a masking option to all the criterions
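A sketch of the masking idea; `CELoss` and the `mask` keyword are assumptions about the API:

```python
# Hypothetical sketch: criterions accept an optional mask so that
# masked-out pixels are excluded from the loss computation.
import torch
from cellseg_models_pytorch.losses import CELoss

criterion = CELoss()
yhat = torch.rand(2, 5, 256, 256)            # (B, C, H, W) logits
target = torch.randint(0, 5, (2, 256, 256))  # (B, H, W) labels
mask = (target > 0).long()                   # e.g. mask out background

loss = criterion(yhat, target, mask=mask)
```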
- Add `MAELoss`
- Add `BCELoss`
- Add base class for transformer-based backbones
- Add SAM-VitDet image encoder with support for loading pre-trained SAM weights
- Add `CellVIT-SAM` model.
- Add notebook example on training Hover-Net with lightning from scratch.
- Add notebook example on training StarDist with lightning from scratch.
- Add notebook example on training CellPose with accelerate from scratch.
- Add notebook example on training OmniPose with accelerate from scratch.
- Add notebook example on finetuning CellVIT-SAM with accelerate.
- Fix the current TimmEncoder to store feature info
- Fix the Up block to support transconv and bilinear upsampling and fix data-flow issues.
- Fix the StardistUnet class to output all the decoder features.
- Fix the Decoder, DecoderStage and long-skip modules to work with upscale factors instead of output dimensions.
- Add mps (Mac) support for inference
- Add cell class probabilities to saved geojson files
- Add StrongAugment data augmentation to data-loading pipeline: https://arxiv.org/abs/2206.15274
- Enable writing folder & hdf5 datasets with only images (previously image-mask pairs were needed)
- Enable writing datasets without patching.
- Add the long-missing h5 reading utility function to `FileHandler`
- Add hdf5 input file reading to `Inferer` classes.
- Add option to write the Pannuke dataset to an h5 db in `PannukeDataModule` and `LizardDataModule`.
- Add a generic model builder function `get_model` to `models.__init__.py`
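A minimal sketch of the builder; the argument names besides the function itself are assumptions:

```python
# Hypothetical sketch of the generic model builder; args are assumed.
from cellseg_models_pytorch.models import get_model

model = get_model(name="cellpose", type="base", ntypes=5)
```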
- Rewrite the segmentation benchmarker. Now it can take in hdf5 datasets.
- Add pytorch-lightning in-built `auto_lr_finder` option to `SegmentationExperiment`
- Add Multi-Scale Convolutional Attention (MSCA) module (SegNeXt).
- Add TokenMixer & MetaFormer modules.
- Add transformer modules
- Add exact, slice, and memory-efficient (xformers) self-attention computations
- Add transformer modules to `Decoder` modules
- Add common transformer MLP activation functions: star-relu, geglu, approximate-gelu.
- Add Linformer self-attention mechanism.
- Add support for model initialization from a yaml-file in `MultiTaskUnet`.
- Add a new cross-attention long-skip module. Works with `long_skip='cross-attn'`.
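A minimal sketch of selecting the new long skip; the builder function and its other arguments are assumptions here, only the `long_skip` value comes from this changelog:

```python
# Hypothetical sketch: enable the cross-attention long skip.
from cellseg_models_pytorch.models import cellpose_base

model = cellpose_base(type_classes=5, long_skip="cross-attn")
```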
- Added more verbose error messages for the abstract wrapper-modules in `modules.base_modules`
- Added more verbose error catching for `xformers.ops.memory_efficient_attention`.
- Bump old versions of numpy & scipy
- Use the inferer class as input to the segmentation benchmarker class
- Remove unnecessary parts of the cellpose post-processing pipeline that only added overhead.
- Refactor the whole cellpose post-processing pipeline for readability.
- Refactored multiprocessing code to be reusable and moved it under `utils`.
- Add exact Euler integration (on CPU) for cellpose post-processing.
- Added more pathos.Pool options for parallel processing: `ThreadPool`, `ProcessPool` & `SerialPool`
- Add all the mapping methods for each Pool object, i.e. `amap`, `imap`, `uimap` and `map`
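For reference, the underlying pathos calls look roughly like this (real pathos API; the `utils` wrappers around it are not shown):

```python
# Plain pathos usage showing the four mapping methods.
from pathos.pools import ProcessPool

def square(x):
    return x * x

pool = ProcessPool(nodes=4)
print(pool.map(square, range(8)))          # blocking, ordered
print(list(pool.imap(square, range(8))))   # lazy iterator, ordered
print(list(pool.uimap(square, range(8))))  # lazy iterator, unordered
result = pool.amap(square, range(8))       # asynchronous; returns a handle
print(result.get())                        # collect the results later
```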
- Add option to return encoder and decoder features along with the outputs in the forward pass of any model.
- Reverse-engineered the `stardist` post-processing pipeline to Python. Accelerated it with Numba and optimized it even further. Now it runs almost 2x faster than the original C++ version.
- Removed the unnecessary torchvision dependency
- Removed torch-optimizer from the optional dependency list; it started to cause headaches.
- Moved saving utilities to `FileHandler` and updated tests.
- Added geojson saving support for inference
- Support returning all of the feature maps from each decoder stage.
- Add multi-GPU inference via DataParallel
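The wrapping itself is standard PyTorch; a minimal sketch (how the inferer applies it internally is simplified away):

```python
import torch

# A placeholder model; any cellseg_models_pytorch model works the same way.
model = torch.nn.Conv2d(3, 8, kernel_size=3)

# Standard DataParallel wrapping over all visible GPUs.
if torch.cuda.device_count() > 1:
    model = torch.nn.DataParallel(model)
model = model.to("cuda" if torch.cuda.is_available() else "cpu")
```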
- Add a Wandb artifact table callback for logging a table of test data metrics and insights to wandb.
- Fix the symmetric CE loss.
- Add option to return both binary and instance-labelled masks from the dataloader. Previously the binary mask was returned with the `return_inst` flag, which was confusing.
- Fix `SegmentationExperiment` to return preds and masks at test time.
- Update loss tests
- Add a conv block `BasicConvOld` to enable `Dippa`-to-cellseg conversion of models.
- Fix the `inst_key`, `aux_key` bug in `MultiTaskUnet`
- Add a `type_map > 0` masking for the `inst_map`s in post-processing
- Modify the optimizer adjustment utility function to adjust any optim/weight params.
- Modify the lit `SegmentationExperiment` according to the new changes.
- Add optional spectral decoupling to all losses
- Add optional label smoothing to all losses
- Add optional spatially varying label smoothing to all losses
- Add MSE, SSIM and IQI torchmetrics for metric logging.
- Add a wandb per-class metric callback for logging.
- Add a `from_yaml` init classmethod to initialize models from yaml files.
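A minimal sketch of the classmethod; the config path and the YAML schema are hypothetical:

```python
# Hypothetical sketch: initialize a model from a YAML config file.
from cellseg_models_pytorch.models import MultiTaskUnet

model = MultiTaskUnet.from_yaml("configs/multitask_unet.yaml")
```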
- Update tests for Inferers and mask utils.
- Add tests for the benchmarkers.
- Init and typing fixes
- Typo fixes in docs
- Add numba-parallelized median filter and majority voting for post-processing
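A generic sketch of the technique, not the library's exact implementation:

```python
# Generic numba-parallelized 2D median filter.
import numba
import numpy as np

@numba.njit(parallel=True, cache=True)
def median_filter2d(img, k=3):
    pad = k // 2
    H, W = img.shape
    out = np.zeros_like(img)
    for i in numba.prange(pad, H - pad):  # rows processed in parallel
        for j in range(pad, W - pad):
            window = img[i - pad:i + pad + 1, j - pad:j + pad + 1]
            out[i, j] = np.median(window.copy())  # copy to a contiguous buffer
    return out
```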
- Add support for custom semantic and type-seg post-processing functions in Inferers
- Add segmentation performance benchmarking helper class.
- Add segmentation latency benchmarking helper class.
- Update `save2db` & `save2folder` for optional `type_map` and `sem_map` args.
- Add a pre-processing (`pre-proc`) callable arg for the `_get_tiles` method. This enables the Lizard datamodule.
- Fix padding bug with sliding window inference.
- Add the Lizard datamodule (https://arxiv.org/abs/2108.11195)
- Add a universal multi-task U-Net model builder (experimental)
- Update dataset tests.
- Update tests for the multi-task U-Net
- Fix incorrect type hints.
- Add cellpose training with Lizard dataset notebook.