ADAN

Mar, 2021 → Feb, 2022

Partner: armasuisse
Partner contact: Gérôme Bovet
EPFL laboratory: Signal Processing Laboratory (LTS4)
EPFL contact: Prof. Pascal Frossard

State-of-the-art architectures for modulation recognition are based on deep learning models borrowed from computer vision. However, these models have recently been shown to be quite vulnerable to very small, carefully crafted perturbations, which raises serious questions about safety, security, and performance guarantees at large. Several defenses have been developed in computer vision to make models more robust against these perturbations, the most effective being adversarial training, in which the model is fine-tuned on adversarially perturbed samples.
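
To make the idea concrete, the sketch below shows one common form of adversarial training (PGD-based, in PyTorch); the model, optimizer, and attack hyper-parameters are illustrative placeholders and not the configuration used in this project.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.03, alpha=0.01, steps=7):
    """Craft an L-inf bounded adversarial perturbation with projected gradient descent."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)   # random start inside the eps-ball
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()                # ascend the loss
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)      # project back into the eps-ball
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """One adversarial training step: update the model on crafted adversaries instead of clean data."""
    model.eval()
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```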

However, adversarial training has several drawbacks. First, it requires generating strong adversarial attacks at every training iteration, which is computationally expensive. Some works try to address this by using weaker attacks, but while this helps at the start of training, it leads to the second drawback: "catastrophic overfitting", in which the model loses its robustness to strong attacks during training. This instability has yet to be explained. Third, adversarial training only makes the model robust to the type of perturbation it is fine-tuned on. This drawback is especially important in modulation recognition, where several types of perturbation are typical but not well tested: impulse noise, changes in signal energy, and phase and frequency offsets. Finally, adversarial training assumes that the adversarial examples share the same label as the samples they perturb. While this generally holds for perturbations of very small energy, it may not hold for modulations with a high number of states, where even smaller perturbations may be required to avoid changing the true label.
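
For illustration, the snippet below sketches how the signal-domain perturbations mentioned above could be applied to a complex baseband signal in NumPy; the perturbation magnitudes and the toy QPSK signal are arbitrary examples, not values studied in the project.

```python
import numpy as np

rng = np.random.default_rng(0)

def impulse_noise(x, prob=0.01, amplitude=5.0):
    """Replace a random fraction of samples with high-amplitude complex impulses."""
    mask = rng.random(x.shape) < prob
    impulses = amplitude * (rng.standard_normal(x.shape) + 1j * rng.standard_normal(x.shape))
    return np.where(mask, impulses, x)

def scale_energy(x, gain_db=3.0):
    """Change the signal energy by a fixed gain in dB."""
    return x * 10 ** (gain_db / 20)

def phase_offset(x, phi=np.pi / 8):
    """Rotate the constellation by a constant phase."""
    return x * np.exp(1j * phi)

def frequency_offset(x, delta_f=1e-4):
    """Apply a normalized carrier frequency offset (in cycles per sample)."""
    n = np.arange(len(x))
    return x * np.exp(2j * np.pi * delta_f * n)

# Example: a toy QPSK signal passed through each perturbation in turn.
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=1024) / np.sqrt(2)
perturbed = frequency_offset(phase_offset(scale_energy(impulse_noise(symbols))))
```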

Building on our recent results, which show that adversarial training is effective for modulation recognition, we tackle each of these drawbacks in turn by developing several novel adaptive algorithms. In this project we achieve the following:

  • We incorporate information from the adversarial training optimization through adaptive importance sampling to reduce robust overfitting (a simplified sketch follows this list).
  • We analyze the loss landscape with respect to the model weights, showing that convergence only happens in the last layers of the network and that catastrophic overfitting is related to a sharp increase in gradient variance around the natural training points.
  • We show with a simple synthetic example that a model's susceptibility to adversarial examples depends strongly on the quality of the training data: even a small amount of Gaussian noise can cause the network to learn completely different classification boundaries (illustrated by the toy experiment below).
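
As a rough sketch of the importance-sampling idea in the first bullet, the code below re-weights how often each training example is drawn according to its most recent adversarial loss; this is a simplified illustration of one way such sampling could be set up in PyTorch, not the exact algorithm developed in the project.

```python
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

def build_adaptive_loader(dataset, per_sample_loss, batch_size=128, temperature=1.0):
    """Oversample training examples whose latest adversarial loss is high.

    per_sample_loss: 1-D tensor holding the most recent adversarial loss of each
    example in `dataset`; the temperature controls how aggressively hard examples
    are favored over easy ones.
    """
    weights = torch.softmax(per_sample_loss / temperature, dim=0)
    sampler = WeightedRandomSampler(weights, num_samples=len(dataset), replacement=True)
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)

# Hypothetical outer loop: start from uniform losses, rebuild the loader every
# epoch, and refresh per_sample_loss with the adversarial losses observed while
# training, so that hard examples are revisited more often.
```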
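
The observation in the last bullet can be illustrated in spirit with a toy experiment like the one below: the same small MLP is trained on clean and on slightly noisy two-moons data, and the fraction of a dense grid on which the two classifiers disagree indicates how much the learned boundary moved. The dataset, model, and noise level are hypothetical stand-ins, not the project's actual synthetic example.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Clean synthetic data vs. the same data with a small amount of Gaussian noise.
X_clean, y = make_moons(n_samples=2000, random_state=0)
X_noisy = X_clean + 0.15 * rng.standard_normal(X_clean.shape)

clf_clean = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0).fit(X_clean, y)
clf_noisy = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0).fit(X_noisy, y)

# Compare the two decision boundaries on a dense grid: the disagreement rate
# measures how far the boundary shifted due to the noise in the training data.
xx, yy = np.meshgrid(np.linspace(-2, 3, 300), np.linspace(-1.5, 2, 300))
grid = np.c_[xx.ravel(), yy.ravel()]
disagreement = np.mean(clf_clean.predict(grid) != clf_noisy.predict(grid))
print(f"Boundary disagreement on the grid: {disagreement:.1%}")
```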

Paper: Javier Maroto, Gérôme Bovet, and Pascal Frossard, “SafeAMC: Adversarial training for robust modulation recognition models” in EUSIPCO 2022.