Part 3 – Projected Gradient Descent (PGD)
Welcome to Part 3 of our blog series on Adversarial Machine Learning! In this installment, we examine the Projected Gradient Descent (PGD) method, a powerful iterative technique for crafting adversarial examples. We will see how PGD strengthens the Fast Gradient Sign Method (FGSM) by taking many small gradient-sign steps and, after each step, projecting the perturbation back into an ε-ball around the original input, yielding stronger and more reliable attacks. I will also provide a step-by-step implementation of PGD in PyTorch and compare the attack success rates of FGSM and PGD.
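Before the full PyTorch walkthrough, the core loop of PGD can be sketched in a few lines. This is a minimal, framework-free sketch using NumPy: the function names (`pgd_attack`, `grad_fn`) and the toy linear model below are hypothetical illustrations, not code from the series, and it assumes inputs normalized to the range [0, 1].

```python
import numpy as np

def pgd_attack(x, y, grad_fn, eps=0.1, alpha=0.02, steps=10):
    """Iterated FGSM: repeatedly take a small gradient-sign step,
    then project the perturbation back into the L-infinity ball
    of radius eps around the original input x."""
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv, y)                      # gradient of loss w.r.t. input
        x_adv = x_adv + alpha * np.sign(g)         # FGSM-style ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)   # projection onto the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)           # keep a valid pixel range
    return x_adv

# Hypothetical toy "model": a linear score w.x with squared-error loss,
# so the input gradient can be written analytically.
w = np.array([1.0, -2.0, 0.5])

def grad_fn(x, y):
    # d/dx of (w.x - y)^2
    return 2.0 * (w @ x - y) * w

x = np.array([0.5, 0.5, 0.5])
x_adv = pgd_attack(x, y=1.0, grad_fn=grad_fn, eps=0.1)
```

In a real attack, `grad_fn` would come from backpropagating the classifier's loss to the input (e.g. `x.grad` in PyTorch); the two `clip` calls are what make this *projected* gradient descent rather than plain iterated FGSM.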