Awesome Domain Generalization for Medical Image Analysis

🔥 This is a repository for organizing papers, code, and other resources related to Domain Generalization for Medical Image Analysis (DG for MedIA).

💗 Medical Image Analysis (MedIA) plays a critical role in computer-aided diagnosis systems, enabling accurate diagnosis and assessment of various diseases. Over the last decade, deep learning (DL) has achieved great success in automating MedIA tasks such as disease diagnosis, lesion segmentation, and prognosis prediction. Despite this success, in many real-world healthcare scenarios, differences in image acquisition, such as device manufacturer, scanning protocol, image sequence, and modality, introduce domain shifts that cause a significant performance drop when a well-trained model is deployed to clinical sites with different data distributions. Moreover, because medical data involves privacy concerns, is subject to data-sharing restrictions, and requires manual annotation by medical experts, collecting data from all possible domains to train DL models is expensive and often simply impossible. Therefore, enhancing the generalization ability of DL models in MedIA is crucial in both clinical practice and academic research.

🎯 We hope that this repository can provide assistance to researchers and practitioners in medical image analysis and domain generalization.

Table of Contents

Papers (ongoing)

Data Manipulation Level

Data Augmentation

Augmentation is widely employed in vision tasks to mitigate overfitting and improve generalization, using operations such as flipping, cropping, color jittering, and noise addition. For domain generalization in medical image analysis, augmentation methods can be broadly categorized as randomization-based, adversarial-based, and normalization-based.
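Below is a minimal, illustrative randomization-style pipeline built with torchvision; the specific operations and magnitudes are arbitrary examples rather than the recipe of any listed paper.

```python
import torch
from torchvision import transforms

# Illustrative augmentation pipeline: flipping, cropping, color jittering,
# and additive Gaussian noise (magnitudes are arbitrary examples).
train_augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    # additive Gaussian noise on the normalized tensor
    transforms.Lambda(lambda x: x + 0.01 * torch.randn_like(x)),
])
```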

Normalization-based

Normalization-based methods aim to normalize raw intensity values or intensity statistics in order to reduce the impact of intensity variations across domains. These methods are usually employed for specific tasks, such as pathological image analysis.
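As a toy illustration of statistics-based normalization, the sketch below matches the per-channel intensity statistics of an image to a reference image. This is a simplified, Reinhard-style matching in RGB space; the function name and the choice of color space are illustrative, and dedicated stain-normalization tools work differently.

```python
import numpy as np

def match_channel_statistics(image: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Shift/scale each channel of `image` to match the channel-wise
    mean and std of `reference` (both float arrays of shape H x W x C)."""
    out = np.empty_like(image, dtype=np.float64)
    for c in range(image.shape[-1]):
        mu_s, sigma_s = image[..., c].mean(), image[..., c].std() + 1e-8
        mu_r, sigma_r = reference[..., c].mean(), reference[..., c].std()
        out[..., c] = (image[..., c] - mu_s) / sigma_s * sigma_r + mu_r
    return out
```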

Randomization-based

The goal of randomization-based methods is to generate novel input data by applying random transformations in image space, frequency space, or feature space.

Image-space

Frequency-space

  • Title: Fourier-based augmentation with applications to domain generalization
  • Publication: Pattern Recognition 2023
  • Summary: Propose a Fourier-based data augmentation strategy, AmpMix, which linearly interpolates the amplitude spectra of two images while keeping their phases unchanged to simulate domain shift (a minimal sketch appears below this entry). Additionally, consistency training between different augmented views is incorporated to learn invariant representations.
  • [Code]
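The sketch below illustrates the amplitude-interpolation idea in plain NumPy; the function name, the fixed mixing weight, and the absence of a low-frequency mask are simplifications and may differ from the paper's exact formulation.

```python
import numpy as np

def amplitude_mix(img_a: np.ndarray, img_b: np.ndarray, lam: float = 0.5) -> np.ndarray:
    """Linearly interpolate the Fourier amplitudes of two same-sized
    grayscale images while keeping the phase of `img_a` unchanged."""
    fft_a = np.fft.fft2(img_a)
    fft_b = np.fft.fft2(img_b)
    amp = (1 - lam) * np.abs(fft_a) + lam * np.abs(fft_b)  # mixed amplitude
    phase = np.angle(fft_a)                                # phase kept from img_a
    mixed = amp * np.exp(1j * phase)
    return np.real(np.fft.ifft2(mixed))
```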

Feature-space

Adversarial-based

Adversarial-based data augmentation methods are driven by adversarial training: they aim to maximize the diversity of the augmented data while constraining its reliability.
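A minimal sketch of this idea, assuming a differentiable task model: perturb the input along the gradient of the task loss (FGSM-style), with a small epsilon keeping the augmented image close to, and hence a reliable variant of, the original. The function name and step size are illustrative.

```python
import torch
import torch.nn.functional as F

def adversarial_augment(model, images, labels, epsilon=0.01):
    """Create augmented inputs by one gradient-ascent step on the task loss
    w.r.t. the input; epsilon bounds the perturbation magnitude."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    grad = torch.autograd.grad(loss, images)[0]
    augmented = images + epsilon * grad.sign()
    return augmented.detach()
```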

Data Generation

Data generation uses generative models such as Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), and diffusion models to synthesize fictional yet novel samples. As the source-domain data becomes more complex, diverse, and informative, the generalization ability of the trained model can increase.

  • Title: GH-DDM: the generalized hybrid denoising diffusion model for medical image generation
  • Publication: Multimedia Systems 2023
  • Summary: Introduce a generalized hybrid denoising diffusion model to enhance generalization ability by generating new cross-domain medical images; it brings the strong abilities of transformers into diffusion models to capture long-range interactions and spatial relationships between anatomical structures.
  • Title: Generative Adversarial Domain Generalization via Cross-Task Feature Attention Learning for Prostate Segmentation
  • Publication: ICONIP 2021
  • Summary: Propose a new Generative Adversarial Domain Generalization (GADG) network, which achieves domain generalization through generative adversarial learning on multi-site prostate MRI images. Additionally, so that the prostate segmentation network learned from the source domains still performs well in the target domain, a Cross-Task Attention Module (CTAM) is designed to transfer the main domain-generalized features from the generation branch to the segmentation branch.
  • Title: Multimodal Self-supervised Learning for Medical Image Analysis
  • Publication: IPMI 2021
  • Summary: Propose a novel approach that leverages self-supervised learning through multimodal jigsaw puzzles for cross-modal medical image synthesis tasks. Additionally, to increase the quantity of multimodal data, they design a cross-modal generation step to create synthetic images from one modality to another using a CycleGAN-based translation model.

Feature Level Generalization

Invariant Feature Representation

For medical image analysis, a well-generalized model focuses more on task-related semantic features while disregarding task-unrelated style features. In this regard, three types of methods have been extensively investigated: feature normalization, explicit feature alignment, and domain adversarial learning.

Feature normalization

This line of methods aims to enhance generalization by centering, scaling, decorrelating, standardizing, and whitening extracted feature distributions. This helps accelerate convergence and prevents features with larger scales from dominating those with smaller ones. Common techniques include traditional scaling methods such as min-max and z-score normalization, as well as deep learning layers such as batch, layer, and instance normalization.
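A minimal sketch of how such a normalization layer is typically used inside a network; the block structure and names are illustrative, and instance normalization is chosen here because it standardizes per-sample feature statistics, which are often style-related.

```python
import torch
import torch.nn as nn

class NormalizedConvBlock(nn.Module):
    """Conv block with instance normalization, which standardizes each
    sample's feature maps and thereby suppresses per-domain style statistics."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.norm = nn.InstanceNorm2d(out_ch, affine=True)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.norm(self.conv(x)))
```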

Explicit feature alignment

Explicit feature alignment methods attempt to remove domain shifts by reducing the discrepancies in feature distributions across multiple source domains, thereby facilitating the learning of domain-invariant feature representations.
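One simple way to express such alignment as a training loss is to penalize differences in per-domain feature moments, as in the sketch below. This is a simplified moment-matching penalty with an illustrative name; actual methods often use MMD, CORAL, or adversarial alignment instead.

```python
import torch

def moment_alignment_loss(features_by_domain):
    """Penalize disagreement of per-domain feature means and variances.
    `features_by_domain` is a list of (N_d, D) tensors, one per source
    domain (at least two domains assumed)."""
    means = torch.stack([f.mean(dim=0) for f in features_by_domain])  # (K, D)
    vars_ = torch.stack([f.var(dim=0) for f in features_by_domain])   # (K, D)
    # variance across domains of the per-domain statistics
    return means.var(dim=0).sum() + vars_.var(dim=0).sum()
```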

  • Title: Measuring Domain Shift for Deep Learning in Histopathology
  • Publication: JBHI 2020
  • Summary: Design a dual-normalization module to estimate domain distribution information. At test time, the model selects the nearest feature statistics according to the style embeddings in the dual-normalization module to normalize target-domain features for generalization.
  • [Code]

Domain adversarial learning

Domain-adversarial training methods are widely used to learn domain-invariant representations by placing a domain discriminator in an adversarial relationship with the feature extractor.
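The standard building block for this is a gradient reversal layer (as in DANN); the minimal PyTorch sketch below is generic and not tied to any specific paper listed here.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda in
    the backward pass, so the feature extractor is trained to fool the
    domain discriminator while the discriminator is trained normally."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

def grad_reverse(x, lamb: float = 1.0):
    return GradReverse.apply(x, lamb)

# usage sketch: domain_logits = domain_discriminator(grad_reverse(features, 1.0))
```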

  • Title: Localized adversarial domain generalization
  • Publication: CVPR 2022
  • Summary: Propose a general-purpose framework for Adversarially-Regularized Mixed Effects Deep learning (ARMED). ARMED employs an adversarial classifier to regularize the model into learning cluster-invariant fixed effects (domain-invariant features): the classifier attempts to predict cluster membership from the learned features, while the feature extractor is penalized for enabling this prediction.
  • [Code]

Feature disentanglement

Feature disentanglement methods aim to decompose the features of input samples into domain-invariant (task-related) and domain-specific (task-unrelated) components, i.e., $\mathbf{z} = [\mathbf{z}_{\text{invariant}}, \mathbf{z}_{\text{specific}}] \in \mathcal{Z}$. The objective of robust generalization models is to concentrate exclusively on the task-related components $\mathbf{z}_{\text{invariant}}$ while disregarding the task-unrelated ones $\mathbf{z}_{\text{specific}}$. The mainstream methods of feature disentanglement mainly include multi-component learning and generative modeling.
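A minimal sketch of the multi-component idea described in the next subsection: a shared backbone followed by two heads producing $\mathbf{z}_{\text{invariant}}$ and $\mathbf{z}_{\text{specific}}$, with only the invariant part feeding the task classifier. All names and dimensions are illustrative.

```python
import torch
import torch.nn as nn

class DisentangledEncoder(nn.Module):
    """Toy two-branch encoder: a shared backbone (outputting N x feat_dim
    features) followed by separate heads producing z_invariant (used by the
    task classifier) and z_specific (used only by auxiliary objectives such
    as a domain classifier)."""
    def __init__(self, backbone: nn.Module, feat_dim: int, z_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone
        self.invariant_head = nn.Linear(feat_dim, z_dim)
        self.specific_head = nn.Linear(feat_dim, z_dim)
        self.classifier = nn.Linear(z_dim, num_classes)

    def forward(self, x: torch.Tensor):
        h = self.backbone(x)
        z_inv, z_spec = self.invariant_head(h), self.specific_head(h)
        return self.classifier(z_inv), z_inv, z_spec
```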

Multi-component learning

Multi-component learning achieves feature disentanglement by designing different components to separately extract domain-invariant features and domain-specific features, thereby achieving feature decoupling.

  • Title: Towards principled disentanglement for domain generalization
  • Publication: CVPR 2022
  • Summary: Introduce disentanglement-constrained domain generalization (DDG) for cross-center tumor detection, which simultaneously learns a semantic encoder and a variation encoder for feature disentanglement, and further constrains the learned representations to be invariant to inter-class variation.
  • Title: Contrastive Domain Disentanglement for Generalizable Medical Image Segmentation
  • Publication: Arxiv 2022
  • Summary: Propose Contrastive Domain Disentanglement and Style Augmentation (CDDSA) for image segmentation of fundus and MR images. The method introduces a disentanglement network that decomposes medical images into an anatomical representation and a modality representation, and a style contrastive loss ensures that style representations from the same domain are similar while those from different domains diverge significantly.
  • Title: DoFE: Domain-Oriented Feature Embedding for Generalizable Fundus Image Segmentation on Unseen Datasets
  • Publication: IEEE TMI 2020
  • Summary: Propose Domain-oriented Feature Embedding (DoFE) for fundus image segmentation. The DoFE framework incorporates a domain knowledge pool to learn and store the domain prior information (domain-specific) extracted from the multi-source domains. The image features are then dynamically enriched with this domain prior knowledge to make the semantic features more discriminative, improving the generalization ability of the segmentation network on unseen target domains.

Generative Learning

Generative models, such as InfoGAN and $\beta$-VAE, are also effective techniques for traditional feature disentanglement. For domain generalization, generative-learning-based disentanglement methods attempt to elucidate the sample generation mechanism from the perspectives of domain, sample, and label, thereby achieving feature decomposition.

  • Title: Learning domain-agnostic representation for disease diagnosis
  • Publication: ICLR 2023
  • Summary: Leverage structural causal modeling to explicitly model disease-related features and center effects. Guided by this, propose a novel Domain Agnostic Representation Model (DarMo) based on a variational autoencoder, and design domain-agnostic and domain-aware encoders to respectively capture disease-related features and varied center effects by incorporating a domain-aware batch normalization layer.
  • Title: DIVA: Domain Invariant Variational Autoencoders
  • Publication: PMLR 2022
  • Summary: Propose the Domain Invariant Variational Autoencoder (DIVA) for malaria cell image classification, which disentangles the features into domain information, category information, and other information, all learned within the VAE framework.
  • [Code]
  • Title: Variational Disentanglement for Domain Generalization
  • Publication: TMLR 2022
  • Summary: Propose a Variational Disentanglement Network (VDN) to classify breast cancer metastases. VDN disentangles domain-invariant and domain-specific features by estimating the information gain and maximizing the posterior probability.
  • [Code]

Model Training Level

Learning Strategy

Learning strategies have gained significant attention in tackling domain generalization challenges across various fields. They leverage generic learning paradigms to improve model generalization performance and can be grouped into three main categories: ensemble learning, meta-learning, and self-supervised learning.

Ensemble Learning

Ensemble learning is a machine learning technique in which multiple models are trained to solve the same problem. For domain generalization, different models can capture different domain-specific patterns and representations, so combining them can lead to more robust predictions.
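A minimal sketch of prediction-level ensembling, assuming a list of trained classifiers (for example, one per source domain); names are illustrative.

```python
import torch

@torch.no_grad()
def ensemble_predict(models, x):
    """Average the softmax outputs of several (e.g. per-source-domain) models;
    the averaged probabilities are typically more robust under domain shift."""
    probs = torch.stack([m(x).softmax(dim=-1) for m in models])  # (M, N, C)
    return probs.mean(dim=0)                                     # (N, C)
```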

Meta Learning

Meta-learning, also known as learning to learn, focuses on designing algorithms that can generalize knowledge across diverse tasks. In medical domain generalization, it helps address the challenge of expensive data collection and annotation by dividing the source domain(s) into meta-train and meta-test sets to simulate domain shift.
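The sketch below shows one simplified, first-order MLDG-style episode, assuming at least two source domains and PyTorch >= 2.0 (for torch.func.functional_call); all names are illustrative and this is not the exact procedure of any listed paper.

```python
import random
import torch
import torch.nn.functional as F

def meta_episode_loss(model, domain_batches, inner_lr=1e-3):
    """One simplified, first-order MLDG-style episode. `domain_batches` maps a
    domain name to one (images, labels) batch."""
    domains = list(domain_batches)
    meta_test = random.choice(domains)                 # held-out "unseen" domain
    meta_train = [d for d in domains if d != meta_test]

    # loss on the meta-train domains
    train_loss = sum(
        F.cross_entropy(model(domain_batches[d][0]), domain_batches[d][1])
        for d in meta_train
    )

    # one virtual inner update (first-order: no second-order gradients)
    grads = torch.autograd.grad(train_loss, model.parameters(), create_graph=False)
    adapted = {
        name: p - inner_lr * g
        for (name, p), g in zip(model.named_parameters(), grads)
    }

    # evaluate the adapted parameters on the held-out domain
    x_te, y_te = domain_batches[meta_test]
    test_loss = F.cross_entropy(
        torch.func.functional_call(model, adapted, (x_te,)), y_te
    )
    return train_loss + test_loss
```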

Self-supervised Learning

Self-supervised learning is a machine learning method where a model learns general representations from input data without explicit supervision. These representations enhance the model's generalization capability, enabling it to mitigate domain-specific biases. This approach is particularly valuable in scenarios where labeled data is scarce or costly to obtain and annotate, such as in medical imaging.
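A common instantiation is contrastive pretraining; the sketch below is a generic SimCLR-style NT-Xent loss between two augmented views of the same batch (illustrative, not the exact objective of the papers below).

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature: float = 0.1):
    """SimCLR-style contrastive loss between two augmented views z1, z2 of the
    same batch (each of shape N x D): matching views are pulled together,
    all other pairs in the batch are pushed apart."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D)
    sim = z @ z.t() / temperature                        # (2N, 2N)
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float("-inf"))                # remove self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```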

  • Title: Robust and data-efficient generalization of self-supervised machine learning for diagnostic imaging
  • Publication: Nature Biomedical Engineering 2023
  • Summary: Propose Robust and Efficient Medical Imaging with Self-supervision (REMEDIS) to handle technology, demographic, and behavioral domain shifts; it combines large-scale supervised transfer learning on natural images with intermediate contrastive self-supervised learning on medical images and requires minimal task-specific customization.

Optimization Strategy

Optimization strategies play a crucial role in minimizing overfitting to specific domains, which is achieved by adjusting hyperparameters, selecting appropriate loss functions, regularization techniques, and optimization algorithms.

  • Title: Model-Based Domain Generalization
  • Publication: NeurIPS 2021
  • Summary: Present a model-based domain generalization framework that rigorously reformulates domain generalization as a semi-infinite constrained optimization problem. Group distributionally robust optimization (GDRO) is employed for the skin lesion classification model; this optimization uses more aggressive regularization, implemented through a hyperparameter that favors fitting smaller groups, together with early stopping to improve generalization (a minimal GDRO sketch appears after this list).
  • [Code]
  • Title: DOMINO++: Domain-Aware Loss Regularization for Deep Learning Generalizability
  • Publication: MICCAI 2023
  • Summary: Introduce an adaptable regularization framework to calibrate intracranial MRI segmentation models based on expert-guided and data-guided knowledge. The strength of this regularization lies in exploiting both the semantic confusability derived from domain knowledge and the data distribution.
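For the GDRO idea mentioned above, the sketch below shows a simplified online group DRO update that up-weights the worst-performing groups (here, domains); `group_weights` is assumed to be a persistent tensor that does not require gradients, and all names are illustrative.

```python
import torch

def group_dro_loss(per_sample_loss, group_ids, num_groups, group_weights, eta=0.01):
    """Simplified online group DRO step: up-weight the groups with the highest
    current loss, then return the weighted loss. `group_weights` is a
    persistent tensor of shape (num_groups,) summing to 1 (no grad)."""
    group_losses = []
    for g in range(num_groups):
        mask = group_ids == g
        group_losses.append(per_sample_loss[mask].mean() if mask.any()
                            else per_sample_loss.new_zeros(()))
    group_losses = torch.stack(group_losses)

    # exponentiated-gradient update of the group weights (no gradient through them)
    with torch.no_grad():
        group_weights *= torch.exp(eta * group_losses)
        group_weights /= group_weights.sum()

    return (group_weights * group_losses).sum()
```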

Datasets

We list widely used benchmark datasets for domain generalization, covering classification and segmentation tasks.

| Dataset | Task | #Domains | #Classes | Description |
| --- | --- | --- | --- | --- |
| Fundus OC/OD | Segmentation | 4 | 2 | Retinal fundus RGB images from three public datasets, including REFUGE, Drishti-GS, and RIM-ONE-r |
| Prostate MRI | Segmentation | 6 | 1 | T2-weighted MRI data collected from three public datasets, including NCI-ISBI13, I2CVB, and PROMISE12 |
| Abdominal CT & MRI | Segmentation | 2 | 4 | 30 computed tomography (CT) volumes and 20 T2 spectral presaturation with inversion recovery (SPIR) MRI volumes |
| Cardiac | Segmentation | 2 | 3 | 45 volumes of balanced steady-state free precession (bSSFP) MRI and late gadolinium enhanced (LGE) MRI |
| BraTS | Segmentation | 4 | 1 | Multi-contrast MR scans from glioma patients with four contrasts: T1, T1ce, T2, and FLAIR |
| M&Ms | Segmentation | 4 | 3 | Multi-centre, multi-vendor, and multi-disease cardiac image segmentation dataset containing 320 subjects |
| SCGM | Segmentation | 4 | 1 | Single-channel spinal cord gray matter MRI from four different centers |
| Camelyon17 | Detection & Classification | 5 | 2 | Whole-slide images (WSI) of hematoxylin and eosin (H&E) stained lymph node sections from 100 patients |
| Chest X-rays | Classification | 3 | 2 | Chest X-rays for detecting pneumonia, drawn from three datasets: NIH, CheXpert, and RSNA |

Libraries

We list libraries for domain generalization.

Other Resources

  • A collection of domain generalization papers organized by amber0309.
  • A collection of domain generalization papers organized by jindongwang.
  • A collection of papers on domain generalization, domain adaptation, causality, robustness, prompt, optimization, generative model, etc, organized by yfzhang114.
  • A collection of awesome things about domain generalization organized by junkunyuan.

Contact

  • If you would like to add/update the latest publications / datasets / libraries, please directly add them to this README.md.
  • If you would like to correct mistakes/provide advice, please contact us by email ([email protected]).
  • You are welcome to update anything helpful.

Acknowledgements

Contributors

naturalknight
Ziwei-Niu
zerone-fg
