A multi-componential analysis of emotions during complex learning with an intelligent multi-agent system

2015 
    • Novel approach for measuring and synchronizing emotion data from three modalities.
    • Modalities: facial expression, self-report, and electrodermal activity.
    • Examined whether the modalities identified the same emotions or provided complementary information.
    • High level of agreement between the self-report and facial expression modalities.
    • Low level of agreement between electrodermal activity and the other two modalities.

This paper evaluates the synchronization of three emotion measurement methods (automatic facial expression recognition, self-report, and electrodermal activity) and their agreement regarding learners' emotions. Data were collected from 67 undergraduates enrolled at a North American university who learned about a complex science topic while interacting with MetaTutor, a multi-agent computerized learning environment. Videos of learners' facial expressions captured with a webcam were analyzed with automatic facial expression recognition software (FaceReader 5.0). Learners' physiological arousal was recorded with Affectiva's Q-Sensor 2.0 electrodermal activity bracelet. Learners self-reported their experience of 19 different emotional states on five occasions during the learning session, and these occasions were used as markers to synchronize the FaceReader and Q-Sensor data. We found high agreement between the facial and self-report data (75.6%), but low agreement between each of these and the Q-Sensor data, suggesting that a tightly coupled relationship does not always exist among emotional response components.
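The reported agreement figure (75.6% between the facial and self-report channels) implies a per-occasion comparison of the dominant emotion label produced by each modality. The sketch below is not the authors' code; it is a minimal illustration, assuming each modality has already been reduced to one emotion label per self-report marker, of how such percent agreement could be computed. The label values and the `percent_agreement` helper are hypothetical.

```python
def percent_agreement(labels_a, labels_b):
    """Proportion of aligned time points at which two modalities report the same emotion."""
    assert len(labels_a) == len(labels_b), "modalities must be aligned to the same markers"
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)

# Hypothetical dominant-emotion labels for one learner at the five
# self-report markers, derived (in this sketch) from FaceReader output
# and from the self-report questionnaire, respectively.
facial      = ["neutral", "happy", "neutral", "frustrated", "neutral"]
self_report = ["neutral", "happy", "bored",   "frustrated", "neutral"]

print(f"Facial vs. self-report agreement: {percent_agreement(facial, self_report):.1%}")
```

A chance-corrected statistic such as Cohen's kappa could be substituted for raw percent agreement if the distribution of emotion labels is heavily skewed (e.g., mostly neutral).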