Adversarial Training against Location-Optimized Adversarial Patches

Deep neural networks have been shown to be susceptible to adversarial examples -- small, imperceptible changes constructed to cause misclassification in otherwise highly accurate image classifiers. As a practical alternative, recent work proposed …

Adversarial Patch Training

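To make the setting concrete, here is a minimal PyTorch sketch of the general idea behind adversarial patch training: a small square patch is optimized by signed gradient ascent on the classifier loss, a handful of candidate locations are tried and the most damaging one kept, and the classifier is then updated on the patched images. This is an illustrative sketch under these assumptions, not the implementation in this repository; the function names and hyperparameters (apply_patch, patch_attack, patch_size, step_size, location_tries) are made up for the example.

```python
import torch
import torch.nn.functional as F


def apply_patch(images, patch, locations):
    # Paste the square patch into each image at its (top, left) corner.
    patched = images.clone()
    size = patch.shape[-1]
    for i, (top, left) in enumerate(locations):
        patched[i, :, top:top + size, left:left + size] = patch
    return patched


def patch_attack(model, images, labels, patch_size=8, steps=10,
                 step_size=0.05, location_tries=3):
    # For a few candidate locations, optimize the patch contents by signed
    # gradient ascent on the classification loss and keep the strongest result.
    n, c, h, w = images.shape
    best_loss, best_images = None, images
    for _ in range(location_tries):
        # Sample one random top-left corner per image.
        locations = [(torch.randint(0, h - patch_size + 1, (1,)).item(),
                      torch.randint(0, w - patch_size + 1, (1,)).item())
                     for _ in range(n)]
        patch = torch.rand(c, patch_size, patch_size,
                           device=images.device, requires_grad=True)
        for _ in range(steps):
            loss = F.cross_entropy(model(apply_patch(images, patch, locations)), labels)
            grad, = torch.autograd.grad(loss, patch)
            with torch.no_grad():
                patch += step_size * grad.sign()  # ascend the loss
                patch.clamp_(0.0, 1.0)            # keep pixel values valid
        with torch.no_grad():
            adv_images = apply_patch(images, patch, locations)
            loss = F.cross_entropy(model(adv_images), labels)
        if best_loss is None or loss > best_loss:
            best_loss, best_images = loss, adv_images
    return best_images.detach()


def adversarial_training_step(model, optimizer, images, labels):
    # One adversarial training step: attack the current model, then update
    # the model on the resulting patched images.
    model.eval()
    adv_images = patch_attack(model, images, labels)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(adv_images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The random location restarts in this sketch only stand in for the location optimization referred to in the title; the paper optimizes the patch location rather than merely sampling it at random.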