2022 Spring Semester, Personal Project Research
This study explores the vulnerabilities of the Pathology Language-Image Pretraining (PLIP) model, a vision-language foundation model for medical AI, under targeted adversarial attacks such as Projected Gradient Descent (PGD).
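For reference, here is a minimal sketch of an L-infinity PGD attack on a generic PyTorch image classifier (not the PLIP model itself); the `model` argument, step sizes, and iteration count are illustrative assumptions, not taken from the repository above.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8/255, alpha=2/255, steps=10):
    """Iterated signed-gradient steps, projected back into an
    epsilon-ball around the clean inputs x (labels y)."""
    # Random start inside the epsilon-ball, clamped to valid pixel range.
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the epsilon-ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0, 1)
    return x_adv.detach()
```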
My MA thesis (code, paper & presentation) on adversarial out-of-distribution detection.
Adversarially-robust Image Classifier
Evaluating CNN robustness against various adversarial attacks, including FGSM and PGD.
Adversarial Training of Autoencoders for Unsupervised Anomaly Segmentation
A university project for the AI4Cybersecurity class.
Learning adversarial robustness in machine learning, in both theory and practice.
The Fast Gradient Sign Method (FGSM) combines a white-box approach with a misclassification goal: it perturbs inputs so that a neural network makes wrong predictions. We use this technique to anonymize images.
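A minimal FGSM sketch in PyTorch, for illustration only; the `model` argument and the `epsilon` value are assumptions, not details of the repository above.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb inputs x by one signed-gradient step of size epsilon."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to valid pixels.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```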
Adversarial Sample Generation
A WideResNet implementation on the MNIST dataset. FGSM and PGD adversarial attacks against standard training, PGD adversarial training, and Feature Scattering adversarial training; a sketch of the adversarial-training idea follows below.
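The following is a minimal sketch of a PGD adversarial-training loop, reusing the `pgd_attack` helper shown earlier; the `model`, `loader`, and `optimizer` names and the epoch count are hypothetical, and this is not the repository's actual training code.

```python
import torch.nn.functional as F

def train_adversarial(model, loader, optimizer, epochs=10):
    """Train on adversarial examples crafted fresh for each batch."""
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            # Craft an adversarial batch against the current weights.
            x_adv = pgd_attack(model, x, y)
            optimizer.zero_grad()
            loss = F.cross_entropy(model(x_adv), y)
            loss.backward()
            optimizer.step()
```

Because `pgd_attack` returns a detached tensor, the backward pass here updates only the model weights, not the perturbation.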
An ASR (Automatic Speech Recognition) adversarial attack repository.
[ICCV'19] Improving Adversarial Robustness via Guided Complement Entropy