Dynamic Video Mix-up for Cross-Domain Action Recognition

2021
Abstract In recent years, action recognition has been extensively studied. On general action datasets such as UCF101 [1], in-domain recognition accuracy can reach 95%. However, due to the domain-wise discrepancy, model performance drops significantly when deployed in realistic scenes. Therefore, to support the generalization of action recognition models to practical scenes, the cross-domain problem urgently needs to be addressed. In this paper, we propose a cross-domain video data fusion mechanism to reduce the discrepancy between domains. Our method differs from existing methods in two respects: (1) Instead of performing mix-up at the feature level, we execute mix-up directly at the input level, which preserves more of the original information beyond intermediate features; in addition, a progressive learning scheme is introduced for adaptive cross-domain fusion. (2) To make full use of action class knowledge from the source domain, we also propose pseudo-label-guided mix-up data learning, in which only top-ranking confident pseudo labels are selected to ensure stable similarity between the source and target domains. We evaluate the proposed method on two widely used cross-domain benchmarks, UCF101-HMDB51full and UCF-Olympic. Extensive experimental results show that the proposed method is effective and achieves state-of-the-art performance. In the HMDB51 (source domain) → UCF101 (target domain) direction, our method reaches 98.60% accuracy, a 9.54% improvement over the existing state of the art.
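The two components named in the abstract, input-level mix-up of video clips and confidence-filtered pseudo labels, can be sketched minimally as follows. This is an illustrative sketch only, not the paper's implementation: the function names, the use of a max-softmax confidence score, and the `top_ratio` parameter are assumptions, and the actual method additionally involves progressive adjustment of the mixing coefficient, which is omitted here.

```python
import numpy as np

def video_mixup(src_clip, tgt_clip, lam):
    """Blend a source-domain and a target-domain clip at the input (pixel)
    level, rather than mixing intermediate features.
    Clips are arrays of shape (frames, H, W, channels); lam is in [0, 1]."""
    return lam * src_clip + (1.0 - lam) * tgt_clip

def select_confident_pseudo_labels(probs, top_ratio=0.2):
    """Keep only the top-ranking confident pseudo labels for target samples.
    probs: (num_samples, num_classes) softmax outputs of the source model.
    Returns the indices of the retained samples and their pseudo labels.
    (Max-softmax confidence and the top_ratio cutoff are assumptions.)"""
    conf = probs.max(axis=1)        # confidence score per target sample
    labels = probs.argmax(axis=1)   # pseudo label per target sample
    k = max(1, int(top_ratio * len(probs)))
    keep = np.argsort(-conf)[:k]    # k most confident samples
    return keep, labels[keep]
```

A training loop under this sketch would mix each selected target clip with a source clip of the same (pseudo) class and train on the blended input with correspondingly blended labels.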