DrawInAir: A Lightweight Gestural Interface Based on Fingertip Regression

2018
Hand gestures form a natural way of interaction on Head-Mounted Devices (HMDs) and smartphones. HMDs such as the Microsoft HoloLens, as well as smartphones running ARCore/ARKit, are expensive and are equipped with powerful processors and sensors, such as multiple cameras and depth and IR sensors, to process hand gestures. To enable mass-market reach via inexpensive Augmented Reality (AR) headsets without built-in depth or IR sensors, we propose DrawInAir, a real-time, in-air gestural framework that works on monocular RGB input. DrawInAir uses the fingertip for writing in air, analogous to a pen on paper. The major challenge in training egocentric gesture recognition models is obtaining sufficient labeled data for end-to-end learning. We therefore design a cascade of networks: a CNN with a Differentiable Spatial to Numerical Transform (DSNT) layer for fingertip regression, followed by a Bidirectional Long Short-Term Memory (Bi-LSTM) network for real-time pointing hand-gesture classification. We highlight how a model trained separately to regress the fingertip, used in conjunction with a classifier trained on limited classification data, performs better than end-to-end models. We also propose a dataset of 10 egocentric pointing gestures designed for AR applications to test our model. We show that the framework takes 1.73 s to run end-to-end, has a low memory footprint of 14 MB, and achieves an accuracy of 88.0% on our egocentric video dataset.
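The cascade described in the abstract can be illustrated with a minimal PyTorch sketch: a per-frame CNN that emits a heatmap, a DSNT layer that converts the heatmap into expected (x, y) fingertip coordinates, and a Bi-LSTM that classifies the resulting coordinate trajectory. The layer sizes, module names (FingertipRegressor, GestureClassifier), and input resolution below are illustrative assumptions, not the paper's exact architecture; only the overall CNN + DSNT -> Bi-LSTM cascade follows the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DSNT(nn.Module):
    """Differentiable Spatial to Numerical Transform: converts a heatmap
    into expected (x, y) coordinates in [-1, 1] via a spatial softmax."""
    def forward(self, heatmap):                       # (B, 1, H, W)
        b, _, h, w = heatmap.shape
        probs = F.softmax(heatmap.view(b, -1), dim=-1).view(b, h, w)
        xs = torch.linspace(-1.0, 1.0, w, device=heatmap.device)
        ys = torch.linspace(-1.0, 1.0, h, device=heatmap.device)
        x = (probs.sum(dim=1) * xs).sum(dim=-1)       # expectation over columns
        y = (probs.sum(dim=2) * ys).sum(dim=-1)       # expectation over rows
        return torch.stack([x, y], dim=-1)            # (B, 2)

class FingertipRegressor(nn.Module):
    """CNN backbone -> single-channel heatmap -> DSNT coordinates, per frame.
    The backbone here is a toy stand-in for the paper's fingertip network."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),                      # 1-channel heatmap
        )
        self.dsnt = DSNT()

    def forward(self, frames):                        # (B, 3, H, W)
        return self.dsnt(self.backbone(frames))      # (B, 2)

class GestureClassifier(nn.Module):
    """Bi-LSTM over the per-frame fingertip trajectory; 10 classes matches
    the pointing-gesture dataset proposed in the paper."""
    def __init__(self, num_classes=10, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(2, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, coords):                        # (B, T, 2)
        out, _ = self.lstm(coords)
        return self.head(out[:, -1])                  # logits: (B, num_classes)

# Usage sketch: regress the fingertip frame by frame, then classify the
# trajectory. The clip here is random dummy data.
regressor, classifier = FingertipRegressor(), GestureClassifier()
video = torch.randn(1, 30, 3, 128, 128)               # (B, T, C, H, W)
coords = torch.stack(
    [regressor(video[:, t]) for t in range(video.shape[1])], dim=1
)                                                     # (B, T, 2)
logits = classifier(coords)                           # (1, 10)
```

Decoupling the two stages in this way mirrors the abstract's motivation: the regressor can be trained on plentiful fingertip-location labels, while the lightweight Bi-LSTM needs only the limited gesture-level classification data.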