Imperceptible adversarial attacks against traffic scene recognition

2021
Adversarial examples have begun to receive widespread attention owing to their potential to compromise the most popular DNNs. They are crafted from original images by embedding carefully calculated perturbations. In some cases the perturbations are so slight that neither human eyes nor detection algorithms can notice them, and this imperceptibility makes the attacks more covert and dangerous. To investigate these invisible dangers in applications of traffic DNNs, we focus on imperceptible adversarial attacks against different traffic vision tasks, including traffic sign classification, lane detection, and street scene recognition. We propose a universal logits-map-based attack architecture against image semantic segmentation and design two targeted attack approaches on top of it. All the attack algorithms generate micro-noise adversarial examples through iterative C&W optimization and achieve a 100% attack success rate with very low distortion: our experimental results indicate that the MAE (mean absolute error) of the perturbation noise is as low as 0.562 for the traffic sign classifier attack, and only 1.503 and 1.574 for the two semantic-segmentation-based attacks. We believe that our research on imperceptible adversarial attacks offers a useful reference for the security of DNN applications.
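
To make the attack objective concrete, the following is a minimal sketch of a C&W-style targeted attack on an image classifier in PyTorch. The function name `cw_targeted_attack`, the hyperparameters, and the simple pixel clamping (in place of C&W's tanh change of variables) are illustrative assumptions, not the paper's exact implementation.

```python
# A minimal sketch of a C&W-style targeted attack, assuming a PyTorch
# classifier that maps a batch of images to logits of shape (1, num_classes).
import torch
import torch.nn.functional as F

def cw_targeted_attack(model, x, target, c=1.0, steps=1000, lr=0.01, kappa=0.0):
    """Optimize a perturbation delta so that model(x + delta) predicts
    `target` while keeping delta small, in the spirit of Carlini & Wagner."""
    delta = torch.zeros_like(x, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = torch.clamp(x + delta, 0.0, 1.0)       # keep pixels in [0, 1]
        logits = model(adv)
        target_logit = logits[0, target]
        # largest logit among all non-target classes
        other_logit = logits[0].masked_fill(
            F.one_hot(torch.tensor(target), logits.size(1)).bool(), float("-inf")
        ).max()
        # margin loss: push the target logit above every other logit by kappa
        margin = torch.clamp(other_logit - target_logit + kappa, min=0.0)
        loss = delta.pow(2).sum() + c * margin       # distortion + attack objective
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    adv = torch.clamp(x + delta.detach(), 0.0, 1.0)
    # MAE of the noise; multiply by 255 if, as the reported 0.562 suggests,
    # distortion is measured in 8-bit pixel units (an assumption)
    mae = (adv - x).abs().mean()
    return adv, mae
```

In practice the trade-off constant `c` is often found by binary search so that the attack succeeds with the smallest possible distortion.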
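The logits-map-based attack on semantic segmentation can be sketched the same way: instead of a margin on a single logit vector, the objective drives every pixel of the per-pixel logits map toward an attacker-chosen label map. The per-pixel cross-entropy objective and all names below are illustrative assumptions, not the paper's exact design.

```python
# A minimal sketch of a logits-map-based targeted attack on a semantic
# segmentation network, assuming a PyTorch model whose output is a logits
# map of shape (1, num_classes, H, W) and a target label map of shape (1, H, W).
import torch
import torch.nn.functional as F

def segmentation_targeted_attack(model, x, target_map, c=1.0, steps=500, lr=0.01):
    """Drive every pixel's prediction toward `target_map` while keeping the
    perturbation small, C&W-style."""
    delta = torch.zeros_like(x, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = torch.clamp(x + delta, 0.0, 1.0)
        logits = model(adv)                          # (1, C, H, W) logits map
        # per-pixel cross-entropy toward the attacker-chosen label map
        attack_loss = F.cross_entropy(logits, target_map)
        loss = delta.pow(2).sum() + c * attack_loss  # distortion + attack objective
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return torch.clamp(x + delta.detach(), 0.0, 1.0)
```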