Recognizing Hand Gestures using Temporally Blended Image Data and Deep Learning
Arsekar, Shubharaj Pradeep
Hand gestures allow for a natural approach to human-computer interaction. A novel low-computation Hand Gesture Recognition System (HGRS) using temporally blended image data with a convolutional neural network (CNN) is presented. The goal of HGRS is to recognize hand gestures in an optimized and efficient way. We created a dataset using Kinect depth and body data stream frames. The dataset comprised eight different hand gestures; each gesture was performed with the right hand within a duration of three seconds. The data is first processed by segmenting the hand from the background using body data joints mapped onto the depth data. The computational cost of the HGRS was reduced by blending the temporal depth data frames into a single frame. This blending is defined as the addition of consecutive frames into a single frame, with the intensity of each successive frame increased. The resolution of the depth data frames was reduced to an empirically evaluated frame size of 50 × 50, which further improved the computational efficiency of HGRS. We trained and validated a CNN model for hand gesture classification consisting of three convolutional layers, each followed by a max pooling layer, and two fully connected layers at the end. We tested the performance of the model and observed a test accuracy of 98.45%. We performed a quantitative analysis to measure the overall performance of the model.
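The temporal blending step described above, adding consecutive depth frames into a single image while increasing the intensity of each successive frame, can be sketched as follows. This is a minimal illustration, not the thesis's exact implementation: the linearly increasing weights, the frame values, and the `blend_frames` helper are assumptions made for the example.

```python
import numpy as np

def blend_frames(frames):
    """Blend a sequence of depth frames into one image.

    Each consecutive frame receives a larger weight, so later
    motion appears brighter in the blended result (a hypothetical
    weighting; the thesis only states that intensity increases
    per consecutive frame).
    """
    n = len(frames)
    weights = np.linspace(1.0 / n, 1.0, n)  # increasing intensity per frame
    blended = np.zeros(frames[0].shape, dtype=np.float64)
    for w, frame in zip(weights, frames):
        blended += w * frame
    # keep the result in the 8-bit range of a displayable depth image
    return np.clip(blended, 0, 255).astype(np.uint8)

# toy example: three constant 4x4 "depth" frames standing in for
# segmented hand frames from the Kinect depth stream
frames = [np.full((4, 4), v, dtype=np.uint8) for v in (10, 20, 30)]
out = blend_frames(frames)  # single blended frame, same 4x4 shape
```

Because the full temporal sequence is collapsed into one frame, the CNN only has to process a single image per gesture, which is the source of the computational saving the abstract describes.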
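To see why the 50 × 50 input keeps the network small, one can trace the spatial size through three convolution + max-pooling stages. The kernel sizes below (3 × 3 convolutions without padding, 2 × 2 pooling with stride 2) are assumptions for illustration; the abstract specifies only the number and type of layers.

```python
def out_size(size, kernel, stride=1, pad=0):
    """Standard output-size formula for a conv or pooling layer."""
    return (size + 2 * pad - kernel) // stride + 1

# assumed architecture: three (conv 3x3, pool 2x2/2) stages on a 50x50 frame
size = 50
for _ in range(3):
    size = out_size(size, kernel=3)            # convolution layer
    size = out_size(size, kernel=2, stride=2)  # max pooling layer
# 'size' is the per-side spatial extent fed into the fully connected layers
```

Under these assumed kernel sizes the feature maps shrink from 50 × 50 down to a few pixels per side before the two fully connected layers, so the flattened input to the classifier stays small.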