
Deep learned Visual Model for Human Computer Interaction (HCI)

dc.contributor.author Kumarika, B.M.T.
dc.contributor.author Dias, N.G.J.
dc.date.accessioned 2016-07-15T08:26:46Z
dc.date.available 2016-07-15T08:26:46Z
dc.date.issued 2015
dc.identifier.citation Kumarika, B.M.T. and Dias, N.G.J. (2015) Deep learned Visual Model for Human Computer Interaction (HCI). In: Research Forum E Proceeding, Staff Development Centre Research Forum, Cycle 15-2015, University of Kelaniya, Kelaniya. en_US
dc.identifier.issn 2448-9743
dc.identifier.uri http://repository.kln.ac.lk/handle/123456789/13829
dc.description.abstract Background and rationale: Modern hand gesture recognition approaches can be classified as ‘contact’-based and ‘vision’-based. Contact-based approaches, such as the data glove, require physical contact, which can cause health issues and is uncomfortable for some users. In contrast, users wear nothing in vision-based approaches, where camera(s) capture images of the hands interacting with the computer (Dan & Mohod, 2014). The vision-based approach is therefore simple, natural and convenient. However, the challenges to be addressed in visual pattern identification include illumination changes, varying hand gesture sizes and background clutter (Symonidis, 2000). Aim: The practical application of computer vision-based hand gesture recognition systems therefore necessitates an efficient algorithm capable of handling these challenges. Theoretical underpinning / Conceptual framework: As a solution to the complexity of the problem, this research proposes a Deep Neural Network (DNN) as a robust, deep learned visual model. Deep learning attempts to model high-level abstractions (features) in data using a biologically inspired model. The visual cortex of the brain, which is well studied, shows a sequence of areas, each containing a representation of the input, with signals flowing from one area to the next. Each level of this feature hierarchy thus represents the input at a different level of abstraction, with more abstract features, defined in terms of the lower-level ones, further up the hierarchy, where classification becomes easier. Proposed methodology: A purpose-built database of hand gesture images is used for training and testing. Greedy layer-wise training is used to avoid the problems of training a deep network in a purely supervised fashion, such as slow training, overfitting and unlabelled data. Results will be evaluated on a test set comprising 15% of the data, and the performance of traditional networks and the deep network will also be compared. Expected outcomes: A robust Deep Neural Network serving as an efficient visual pattern recognition algorithm for real-time hand gesture recognition. en_US
dc.language.iso en en_US
dc.publisher Staff Development Center, University of Kelaniya, Sri Lanka en_US
dc.subject Visual Pattern Recognition en_US
dc.subject Deep Neural Network en_US
dc.subject Human Gesture Recognition en_US
dc.title Deep learned Visual Model for Human Computer Interaction (HCI) en_US
dc.type Article en_US
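
The abstract above outlines a concrete training strategy: greedy layer-wise pre-training of a deep network on a database of hand gesture images, with 15% of the data held back for testing and a comparison against traditional networks. The sketch below shows one minimal way to realise that strategy with stacked autoencoders in Keras; the synthetic stand-in data, image resolution, layer widths, class count and training settings are illustrative assumptions, not details taken from the record.

import numpy as np
from tensorflow.keras import layers, models

# Stand-in for the hand gesture image database (shapes are hypothetical).
num_samples, img_dim, num_classes = 1000, 32 * 32, 10
X = np.random.rand(num_samples, img_dim).astype("float32")
y = np.random.randint(0, num_classes, size=num_samples)

# Hold out 15% of the data for testing, as stated in the abstract.
split = int(0.85 * num_samples)
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]

# Greedy layer-wise pre-training with stacked autoencoders: each hidden
# layer is first trained unsupervised to reconstruct its own input.
layer_sizes = [512, 256, 128]          # illustrative hidden-layer widths
pretrained_layers = []
features = X_train
for size in layer_sizes:
    encoder = layers.Dense(size, activation="relu")
    decoder = layers.Dense(features.shape[1])   # linear reconstruction
    autoencoder = models.Sequential([encoder, decoder])
    autoencoder.compile(optimizer="adam", loss="mse")
    autoencoder.fit(features, features, epochs=5, batch_size=64, verbose=0)
    pretrained_layers.append(encoder)
    # The learned features become the input for the next layer's training.
    features = encoder(features).numpy()

# Supervised fine-tuning of the full deep network on the labelled data.
dnn = models.Sequential(
    pretrained_layers + [layers.Dense(num_classes, activation="softmax")]
)
dnn.compile(optimizer="adam",
            loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])
dnn.fit(X_train, y_train, epochs=10, batch_size=64, verbose=0)
print("test accuracy:", dnn.evaluate(X_test, y_test, verbose=0)[1])

Pre-training each layer unsupervised before fine-tuning is one way to address the slow training, overfitting and unlabelled-data issues that the abstract attributes to purely supervised training of deep networks.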


Files in this item

There are no files associated with this item.
