This study explores the vulnerabilities of the Pathology Language-Image Pretraining (PLIP) model, a vision-language foundation model for histopathology, under targeted adversarial attacks such as Projected Gradient Descent (PGD).
pytorch
attention-mechanism
clip
vulnerability-detection
pathology
trustworthiness
adversarial-attacks
attention-visualization
pathology-image
histopathology-images
pgd-adversarial-attacks
contrastive-learning
trustworthy-machine-learning
vision-transformer
trustworthy-ai
plip-model
histopathology-image-classification
vision-language-model
Updated May 18, 2024 - Jupyter Notebook
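The description above mentions probing PLIP with the PGD adversarial attack. As a rough illustration of that technique (not the repository's actual code), the sketch below implements a standard untargeted L-infinity PGD loop in PyTorch against a generic classifier; the model, epsilon, and step sizes here are placeholder assumptions.

```python
import torch
import torch.nn as nn

def pgd_attack(model, images, labels, eps=8 / 255, alpha=2 / 255, steps=10):
    """Untargeted L-inf PGD: iteratively nudge the input along the sign of the
    loss gradient, projecting back into the eps-ball around the original image."""
    images = images.clone().detach()
    # Random start inside the eps-ball (standard PGD initialization).
    adv = images + torch.empty_like(images).uniform_(-eps, eps)
    adv = adv.clamp(0, 1).detach()
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = loss_fn(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        # Ascend the loss, then project back into the eps-ball and valid range.
        adv = adv.detach() + alpha * grad.sign()
        adv = images + (adv - images).clamp(-eps, eps)
        adv = adv.clamp(0, 1).detach()
    return adv

if __name__ == "__main__":
    # Tiny stand-in classifier; a real experiment would attack PLIP's image encoder.
    torch.manual_seed(0)
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 2))
    x = torch.rand(4, 3, 8, 8)
    y = torch.tensor([0, 1, 0, 1])
    adv = pgd_attack(model, x, y)
    print(adv.shape, (adv - x).abs().max().item())
```

For a vision-language model such as PLIP, the cross-entropy loss would be computed over image-text similarity logits rather than a fixed classifier head, but the perturbation loop is the same.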