A classical-quantum or hybrid neural network with adversarial defense protection
A hybrid classical-quantum neural network model protected against adversarial attacks using either adversarial training or randomization defense techniques; a sketch of the randomization idea follows below.
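To make the randomization defense concrete, here is a minimal sketch (illustrative only, not this repository's code), assuming a PyTorch image classifier `model` with inputs in [0, 1]: predictions are averaged over randomly padded and resized copies of each input, which disrupts perturbations tuned to one fixed preprocessing pipeline.

```python
import torch
import torch.nn.functional as F

def randomized_predict(model, x, n_samples=8, max_pad=4):
    """Inference-time randomization defense: average softmax outputs over
    randomly padded-and-resized copies of the input batch x [N, C, H, W]."""
    probs = 0.0
    for _ in range(n_samples):
        # Random zero-padding on each side, then resize back to the original size
        pads = torch.randint(0, max_pad + 1, (4,)).tolist()  # left, right, top, bottom
        x_t = F.pad(x, pads)
        x_t = F.interpolate(x_t, size=x.shape[-2:], mode="bilinear", align_corners=False)
        probs = probs + F.softmax(model(x_t), dim=1)
    return probs / n_samples
```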
Beacon Object File (BOF) launcher - library for executing BOF files in C/C++/Zig applications
A reading list for large model safety, security, and privacy.
Adversarial attacks on LLMs for influencing the outputs of hidden-layer linear probes and steering generations.
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
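For orientation, a minimal evasion example using ART's documented `PyTorchClassifier` wrapper and `FastGradientMethod` attack; the toy model and random inputs below are placeholders for a real trained network and dataset.

```python
import numpy as np
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Toy stand-in model; substitute your own trained network
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Craft evasion examples from clean inputs (np.ndarray, shape [N, 1, 28, 28])
x = np.random.rand(8, 1, 28, 28).astype(np.float32)
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x)
preds = np.argmax(classifier.predict(x_adv), axis=1)
```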
A list of recent papers about adversarial learning
A novel physical adversarial attack tackling the Digital-to-Physical Visual Inconsistency problem.
Adversary Emulation Framework
PyTorch implementation of adversarial attacks [torchattacks].
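Typical usage follows the library's README pattern; the `model`, `images`, and `labels` below are assumed to come from your own pipeline, with pixel values in [0, 1]:

```python
import torchattacks

# PGD with common CIFAR-style hyperparameters
atk = torchattacks.PGD(model, eps=8/255, alpha=2/255, steps=10)
adv_images = atk(images, labels)  # labels as integer class indices
```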
A collection of anomaly detection methods (i.i.d./point-based, graph, and time series), including active learning for anomaly detection/discovery, Bayesian rule mining, and description for diversity/explanation/interpretability. Analysis of incorporating label feedback with ensemble and tree-based detectors. Includes adversarial attacks with Graph Convolutional Networks.
A classical or convolutional neural network model with adversarial defense protection
The Fast Gradient Sign Method (FGSM) is a white-box attack with a misclassification goal: it perturbs an input along the sign of the loss gradient to trick a neural network into making wrong predictions. We use this technique to anonymize images.
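A minimal PyTorch sketch of the FGSM step (a generic illustration, not necessarily this repository's exact implementation):

```python
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    """One-step FGSM: x_adv = x + eps * sign(grad_x CE(model(x), y))."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
```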
The Security Automation Toolkit
Generate adversarial patches against YOLOv5 🚀
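Patch attacks optimize a printable image region rather than per-pixel noise. The heavily simplified sketch below is generic: `objectness_scores` is a hypothetical stand-in for whatever confidence output the detector exposes, not YOLOv5's actual API.

```python
import torch

def train_patch(objectness_scores, images, patch_size=50, steps=200, lr=0.01):
    """Optimize a square patch that suppresses detection confidence when
    pasted at a fixed location on each image in `images` [N, 3, H, W]."""
    patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        x = images.clone()
        x[:, :, :patch_size, :patch_size] = patch.clamp(0, 1)  # paste patch
        loss = objectness_scores(x).mean()  # minimize detector confidence
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1)
```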
In the dynamic landscape of medical artificial intelligence, this study explores the vulnerabilities of the Pathology Language-Image Pretraining (PLIP) model, a vision-language foundation model, under targeted attacks such as PGD.
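For reference, a generic targeted PGD loop (a sketch assuming a standard PyTorch classifier `model`; not the paper's exact setup for PLIP):

```python
import torch
import torch.nn.functional as F

def targeted_pgd(model, x, target, eps=8/255, alpha=2/255, steps=10):
    """Projected gradient descent toward a chosen target class."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), target)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Step toward the target class, then project back into the eps-ball
        x_adv = x_adv.detach() - alpha * grad.sign()
        x_adv = x.clone().detach() + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv
```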
WACV 2024 Papers: Discover cutting-edge research from WACV 2024, the leading computer vision conference. Stay updated on the latest in computer vision and deep learning, with code included. ⭐ support visual intelligence development!
Learning-Based Atk: a black-box adversarial attack method for time series forecasting (TSF).
A Python API that facilitates training, creating, and transferring attacks with quantized DNNs
An attack to induce hallucinations in LLMs.