iEnhancer-CLA: Self-attention-based interpretable model for enhancers and their strength prediction

2021
Enhancers are a class of non-coding DNA cis-acting elements that play a crucial role in transcription during eukaryotic development. Computational methods for predicting enhancers have been developed and achieve satisfactory performance. However, existing computational methods rely on experience-based feature engineering and lack interpretability, which not only limits the representational ability of the models to some extent but also makes it difficult to provide an interpretable analysis of their predictions.

In this paper, we propose a novel deep-learning-based model, iEnhancer-CLA, for identifying enhancers and their strengths. Specifically, iEnhancer-CLA automatically learns one-dimensional sequence features through multiscale convolutional neural networks (CNNs) and employs a self-attention mechanism to represent global features formed by multiple elements (multibody effects). In particular, the model can provide an interpretable analysis of enhancer motifs and key base signals by decoupling the CNN modules and inspecting the generated self-attention weights. To avoid the bias of manually set hyperparameters, we use Bayesian optimization to obtain globally optimized hyperparameters for the model. The results demonstrate that our method outperforms existing predictors in accuracy for identifying enhancers and their strengths. Importantly, our analyses found that the distribution of bases in enhancers is uneven, with the base G more enriched, whereas the distribution of bases in non-enhancers is relatively even. This result contributes to improved prediction performance and thus helps reveal a deeper understanding of the potential functional mechanisms of enhancers.

Author summary

Enhancers comprise many subspecies, and the accuracy of existing models is difficult to improve because the data sets are small. Motivated by the need for accurate and efficient methods to predict enhancer types, we developed iEnhancer-CLA, a self-attention deep learning model that aims to distinguish quickly and effectively between enhancer subspecies and between enhancers and non-enhancers. The model learns sequence features effectively through a combination of multiscale CNN blocks, BLSTM layers, and self-attention mechanisms, thereby improving accuracy. Encouragingly, decoupling the CNN layers showed that they effectively learn sequence motifs, which, combined with the self-attention weights, lends the model interpretability. We further performed sequence analysis in conjunction with the model-generated weights and discovered differences between enhancer and non-enhancer sequence characteristics. This phenomenon can serve as a guide for constructing subsequent models that identify enhancer sequences.
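The abstract describes the architecture only at a high level. Below is a minimal sketch of how multiscale CNN blocks, a BLSTM layer, and self-attention might be combined for one-hot-encoded DNA sequences; the PyTorch framing, layer sizes, and kernel widths are illustrative assumptions, not the authors' published configuration.

```python
import torch
import torch.nn as nn

class EnhancerNetSketch(nn.Module):
    """Illustrative multiscale-CNN + BiLSTM + self-attention classifier
    for one-hot DNA of shape (batch, 4, seq_len). All sizes are
    assumptions, not the published iEnhancer-CLA hyperparameters."""

    def __init__(self, channels=32, kernel_sizes=(3, 5, 7)):
        super().__init__()
        # Multiscale CNN: parallel branches with different kernel widths,
        # each intended to capture motifs of a different length.
        self.branches = nn.ModuleList(
            nn.Conv1d(4, channels, k, padding=k // 2) for k in kernel_sizes
        )
        d_model = channels * len(kernel_sizes)
        # BiLSTM models longer-range dependencies along the sequence.
        self.bilstm = nn.LSTM(d_model, d_model // 2, batch_first=True,
                              bidirectional=True)
        # Self-attention produces per-position weights that can later be
        # inspected for interpretability.
        self.attn = nn.MultiheadAttention(d_model, num_heads=4,
                                          batch_first=True)
        self.classifier = nn.Linear(d_model, 1)

    def forward(self, x):                      # x: (batch, 4, seq_len)
        feats = torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)
        feats = feats.transpose(1, 2)          # (batch, seq_len, d_model)
        feats, _ = self.bilstm(feats)
        attended, weights = self.attn(feats, feats, feats)
        pooled = attended.mean(dim=1)          # average over positions
        return torch.sigmoid(self.classifier(pooled)), weights

model = EnhancerNetSketch()
probs, attn_weights = model(torch.randn(8, 4, 200))  # dummy batch
print(probs.shape, attn_weights.shape)
```

Returning the attention weights alongside the prediction mirrors the interpretability analysis the abstract describes: the per-position weights can be aligned with the input sequence to highlight which bases drive the prediction.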
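The abstract also mentions Bayesian optimization for hyperparameter selection without giving details. A minimal sketch of how such a search could be set up with scikit-optimize's Gaussian-process-based gp_minimize is shown below; the search space, parameter ranges, and the train_and_validate helper are hypothetical placeholders, not the authors' actual procedure.

```python
from skopt import gp_minimize
from skopt.space import Integer, Real

# Hypothetical search space; parameter names and ranges are illustrative.
space = [
    Integer(16, 128, name="cnn_channels"),
    Integer(32, 256, name="lstm_units"),
    Real(1e-4, 1e-2, prior="log-uniform", name="learning_rate"),
]

def objective(params):
    cnn_channels, lstm_units, learning_rate = params
    # Train with these hyperparameters and return a quantity to minimize,
    # e.g. validation loss. train_and_validate is a hypothetical helper.
    return train_and_validate(cnn_channels, lstm_units, learning_rate)

# GP-based Bayesian optimization: each call proposes hyperparameters
# informed by all previous evaluations, avoiding manual tuning bias.
result = gp_minimize(objective, space, n_calls=30, random_state=0)
print("best hyperparameters:", result.x, "best val loss:", result.fun)
```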