Recognizing Hand Gestures using Temporally Blended Image Data and Deep Learning

dc.contributor.advisorKing, Scott A.
dc.contributor.advisorSheta, Alaa A.
dc.contributor.authorArsekar, Shubharaj Pradeep
dc.contributor.committeeMemberLee, Byung Cheol Bruce
dc.date.accessioned2020-04-18T03:39:12Z
dc.date.available2020-04-18T03:39:12Z
dc.date.issued2018-12
dc.description.abstractHand gestures can allow for a natural approach to human-computer interaction. A novel low-computation Hand Gesture Recognition System (HGRS) using temporally blended image data with a convolutional neural network (CNN) is presented. The goal of HGRS is to recognize hand gestures in an optimized and efficient way. We created a dataset using Kinect depth and body data stream frames. The dataset comprised eight different hand gestures; each gesture was performed with the right hand within a duration of three seconds. The data is first processed by segmenting the hand from the background using body data joints mapped onto depth data. A reduction in the computation of HGRS was achieved by blending the temporal depth data frames into a single frame. The blending of temporal depth data frames is defined as the addition of the frames into a single frame, with the intensity of each consecutive frame increased. The resolution of the depth data frames was reduced to an empirically evaluated frame size of 50 × 50, which further improved the computational efficiency of HGRS. We trained and validated a CNN model for hand gesture classification consisting of three convolutional layers, each followed by a max pooling layer, and two fully connected layers at the end. We tested the performance of the model and observed a test accuracy of 98.45%. We performed a quantitative analysis to measure the overall performance of the model.en_US
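The abstract describes blending a temporal stack of depth frames into a single frame by summing them with increasing intensity per consecutive frame, then reducing resolution to 50 × 50. The following is a minimal NumPy sketch of that idea; the linear weighting scheme, the normalization to 8-bit range, and the block-averaging downsample are assumptions for illustration, not the thesis's exact implementation:

```python
import numpy as np

def blend_frames(frames):
    """Blend a temporal stack of depth frames into one image.

    Sums the frames with linearly increasing weights so that later
    frames appear brighter (assumed weighting; the thesis only states
    that each consecutive frame's intensity is increased).
    """
    frames = np.asarray(frames, dtype=np.float64)   # shape (n, h, w)
    n = frames.shape[0]
    weights = np.arange(1, n + 1, dtype=np.float64)  # 1, 2, ..., n
    blended = np.tensordot(weights, frames, axes=1)  # weighted sum -> (h, w)
    # Rescale to an 8-bit image (assumed normalization step).
    blended -= blended.min()
    if blended.max() > 0:
        blended *= 255.0 / blended.max()
    return blended.astype(np.uint8)

def downsample(img, size=50):
    """Reduce resolution by block averaging (assumes `size` divides both dims)."""
    h, w = img.shape
    return img.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
```

For example, a 90-frame, 100 × 100 depth clip would be collapsed by `blend_frames` into one 100 × 100 image and then shrunk with `downsample(img)` to the 50 × 50 input size the abstract reports.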
dc.description.collegeCollege of Science and Engineeringen_US
dc.description.departmentComputing Sciencesen_US
dc.format.extent83 pagesen_US
dc.identifier.urihttps://hdl.handle.net/1969.6/87826
dc.language.isoen_USen_US
dc.rightsThis material is made available for use in research, teaching, and private study, pursuant to U.S. Copyright law. The user assumes full responsibility for any use of the materials, including but not limited to, infringement of copyright and publication rights of reproduced materials. Any materials used should be fully credited with its source. All rights are reserved and retained regardless of current or future development or laws that may apply to fair use standards. Permission for publication of this material, in part or in full, must be secured with the author and/or publisher.en_US
dc.subjectcomputer scienceen_US
dc.subjectcomputer visionen_US
dc.subjectdeep learningen_US
dc.subjectImage Processingen_US
dc.subjectmachine learningen_US
dc.subjectVisualizationen_US
dc.titleRecognizing Hand Gestures using Temporally Blended Image Data and Deep Learningen_US
dc.typeTexten_US
dc.type.genreThesisen_US
thesis.degree.disciplineComputer Scienceen_US
thesis.degree.grantorTexas A & M University--Corpus Christien_US
thesis.degree.levelMastersen_US
thesis.degree.nameMaster of Scienceen_US

Files

Original bundle
Name: Arsekar_Shubharaj_Thesis.pdf
Size: 11.85 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 1.72 KB
Format: Item-specific license agreed upon to submission