Counteracting Adversarial Attacks in Autonomous Driving

2022
This article studies robust deep stereo vision in autonomous driving systems and countermeasures against adversarial attacks. Autonomous system operation requires real-time processing of measurement data that often contain significant uncertainties and noise, and adversarial attacks have been widely studied in recent years as a way to simulate these perturbations. To counteract practical attacks on autonomous systems, this article proposes novel methods based on simulated attacks. Univariate and multivariate functions are adopted to represent the relationships between the left and right input images and the deep stereo model. A stereo regularizer is proposed to guide the model to learn the implicit relationship between the images and to characterize the local smoothness of the loss function. Attacks are generated by maximizing the regularizer term to break the linearity and smoothness; the model then defends against these attacks by minimizing the loss and regularization terms. Two techniques are developed in this article. The first, SmoothStereo, exploits basic knowledge of the physical world and smoothness, while the second, SmoothStereoV2, improves SmoothStereo by leveraging smooth activation functions during the defense. SmoothStereoV2 can learn and utilize gradient information concerning the attacks, and the gradients of the smooth activation functions help handle attacks and improve model robustness. Numerical experiments on the KITTI datasets demonstrate that the proposed methods offer superior performance.
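
The attack-and-defend loop described in the abstract can be summarized in a short sketch. The following is a minimal, hypothetical PyTorch illustration, assuming a generic stereo network `model(left, right)` that predicts a disparity map, a random-perturbation surrogate for the stereo regularizer, and a small PGD-style inner loop; the function names (`stereo_regularizer`, `generate_attack`, `defense_step`) and all hyperparameters are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch: regularizer-driven adversarial training for a stereo model.
# `model(left, right)` is assumed to return a disparity map; the regularizer below
# is a simple local-smoothness surrogate, not the paper's exact formulation.
import torch
import torch.nn.functional as F

def stereo_regularizer(model, left, right):
    """Penalize the change in prediction under a small random input perturbation
    (a stand-in for the paper's local-smoothness regularizer)."""
    delta_l = 0.01 * torch.randn_like(left)
    delta_r = 0.01 * torch.randn_like(right)
    pred = model(left, right)
    pred_perturbed = model(left + delta_l, right + delta_r)
    return F.l1_loss(pred_perturbed, pred)

def generate_attack(model, left, right, eps=0.03, alpha=0.01, steps=3):
    """Craft perturbations by *maximizing* the regularizer, i.e. breaking smoothness."""
    adv_l, adv_r = left.clone().detach(), right.clone().detach()
    for _ in range(steps):
        adv_l.requires_grad_(True)
        adv_r.requires_grad_(True)
        reg = stereo_regularizer(model, adv_l, adv_r)
        grad_l, grad_r = torch.autograd.grad(reg, [adv_l, adv_r])
        with torch.no_grad():
            # Gradient-ascent step on the regularizer, projected into an eps-ball.
            adv_l = (adv_l + alpha * grad_l.sign()).clamp(left - eps, left + eps)
            adv_r = (adv_r + alpha * grad_r.sign()).clamp(right - eps, right + eps)
    return adv_l.detach(), adv_r.detach()

def defense_step(model, optimizer, left, right, disparity, lam=0.1):
    """Defend by minimizing the task loss plus the regularization term on attacked inputs."""
    adv_l, adv_r = generate_attack(model, left, right)
    pred = model(adv_l, adv_r)
    loss = F.smooth_l1_loss(pred, disparity) + lam * stereo_regularizer(model, adv_l, adv_r)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The inner loop maximizes the regularizer to produce smoothness-breaking perturbations, while the outer step minimizes the task loss plus the regularization term, mirroring the min-max structure described in the abstract.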