Abstract: Sign language is a language that uses visually exhibited sign patterns, simultaneously combining hand shapes, the orientation and movement of the hands, arms or body, and facial expressions to express one's thoughts and to communicate with others; it is used mainly by hearing- and speech-impaired people. An automatic sign language recognition system needs fast and accurate methods for identifying static signs, or a sequence of produced signs, so that their meaning can be interpreted. Hand gestures are the major component of sign languages. In this paper, a robust approach for the recognition of static, bare-handed signs is presented, using a novel combination of features: Local Binary Pattern histogram features based on colour and depth information, together with geometric features of the hand. Linear binary Support Vector Machine classifiers are used for recognition, coupled with template matching when multiple matches occur. The research targets hand gesture recognition for sign language interpretation as a Human Computer Interaction application.

 

Keywords: Indian Sign Language, Support Vector Machine, Linear Discriminant Analysis, Local Binary Pattern.


 

I. INTRODUCTION

Sign language is a language used by hearing- and speech-impaired persons. It uses hand gestures to convey meaning, as opposed to the acoustically conveyed sound patterns of spoken language. It is analogous to spoken languages, which is why linguists consider it one of the natural languages, but there are also some notable variations from spoken languages. Although sign language is used all over the globe, it is not universal. Several hundred sign languages are in use, which vary from place to place and are at the core of local deaf cultures. Some sign languages have achieved legal recognition, while others have no legal status. American Sign Language, German Sign Language, French Sign Language, British Sign Language, Indian Sign Language, etc. have evolved regionally.

Indian Sign Language is one of the oldest known sign languages and is considered extremely important in its history, but it is rarely used nowadays. In linguistic terms, despite the common misconception that they are not real languages, sign languages are as rich and complex as any spoken language. Studies of these languages by professional linguists have found that many sign languages exhibit the basic properties of all spoken languages. The elements of a sign are Handshape, Orientation, Location, Movement and facial Expression, summarized in the acronym HOLME. The core concept behind the proposed method is to exploit a novel combination of colour, depth and geometric information about the hand sign to increase recognition performance, whereas most approaches attempt to use a combination of only two or fewer of these cues. This makes it possible to recognise a wide range of signs even when they appear very similar.
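As an illustration, the sketch below shows one way such a combined descriptor could be assembled in Python, using scikit-image's uniform LBP. The function names, the choice of geometric measurements and the parameter values are illustrative assumptions, not the paper's exact pipeline.

import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(image, points=8, radius=1):
    # Uniform-LBP histogram of a single-channel image.
    lbp = local_binary_pattern(image, points, radius, method="uniform")
    n_bins = points + 2  # uniform codes 0..points, plus one non-uniform bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def combined_features(colour_gray, depth_map, geometric):
    # Concatenate colour-LBP, depth-LBP and geometric descriptors
    # (e.g. hand area, perimeter, convexity) into one feature vector.
    return np.concatenate([
        lbp_histogram(colour_gray),          # texture of the greyscale colour image
        lbp_histogram(depth_map),            # texture of the depth image
        np.asarray(geometric, dtype=float),  # hand-geometry measurements
    ])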

 

Fig. 1. Overview of the proposed hand pose recognition system.

II. LITERATURE SURVEY

Researchers face the very challenging problem of devising a vision-based human computer interaction system for interpreting sign languages. This survey provides the theoretical and literature foundation; research on sign languages and the challenges involved is reviewed. One of the problems is that the spoken and written language of one country differs from that of other countries. The syntax and semantics of a language vary from one region to another even when the same language is used by several countries. For instance, English is the official language of many nations, including the UK and the USA, yet its usage differs at the country level. In the same way, sign language also varies from one country to another.

The focus of this survey is on the interpretation of sign languages at a global level. Earlier, data gloves and accelerometers were used to obtain data for Sign Language Interpretation (SLI) and to specify the hand. Orientation and velocity, in addition to location, were measured using trackers and/or data gloves. These methods gave exact positions, but they had the disadvantages of high cost and restricted movement, which altered the signs. These disadvantages brought vision-based systems into the picture and made them gain popularity. Sequences of images captured by a combination of cameras form the input of vision-based systems; monocular, stereo and/or orthogonal cameras are used to capture them. Feris and team used external light sources to illuminate the scene and multi-view geometry to construct a depth image.

Xiaolong Zhu and team proposed advances in hybrid classification architectures that consider both hand gesture and face recognition. They built the hybrid architecture from an ensemble of connectionist networks (radial basis functions) and inductive decision trees, which combines the merits of holistic template matching with abstractive matching based on discrete properties and is subject to both positive and negative learning. C. Huang and team investigated effective body gestures in video sequences beyond facial reactions; they proposed fusing body gestures and facial expressions at the feature level using Canonical Correlation Analysis. Z. Ren and team proposed an integration of hand gesture and face recognition, arguing that the face recognition rate could be improved by recognising hand gestures. They described a security lift scenario and made it clear that the combination of the two search engines they proposed is generic and is not restricted to face and hand gesture recognition alone.
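For readers unfamiliar with feature-level fusion via Canonical Correlation Analysis, the minimal Python sketch below illustrates the general idea with scikit-learn's CCA. The random feature matrices and the chosen dimensions are placeholders, not the data or settings used by Huang and team.

import numpy as np
from sklearn.cross_decomposition import CCA

# Placeholder feature matrices: one row per video sample.
rng = np.random.default_rng(0)
body_feats = rng.normal(size=(100, 40))   # body-gesture descriptors
face_feats = rng.normal(size=(100, 30))   # facial-expression descriptors

# Project both views into a shared, maximally correlated subspace,
# then concatenate the projections as the fused representation.
cca = CCA(n_components=10)
body_proj, face_proj = cca.fit_transform(body_feats, face_feats)
fused = np.hstack([body_proj, face_proj])  # feature-level fusion, shape (100, 20)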

 

III. HAND GESTURE RECOGNITION

In a sign language, a sign consists of three main parts: manual features, non-manual features and finger spelling. To interpret the meaning of a sign, all of these parameters have to be analysed simultaneously, so sign language poses the important challenge of being multichannel. Every channel in the system is built and analysed separately, and the corresponding outputs are combined at the final level to reach a conclusion.
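A minimal sketch of such decision-level combination is given below. The per-channel probability vectors and the equal weighting are illustrative assumptions rather than the actual fusion rule of any particular system.

import numpy as np

def late_fusion(channel_probs, weights=None):
    # Average the per-channel class-probability vectors (one per channel:
    # manual features, non-manual features, finger spelling) and return
    # the index of the winning sign class.
    probs = np.asarray(channel_probs, dtype=float)       # shape (channels, classes)
    weights = np.ones(len(probs)) if weights is None else np.asarray(weights, dtype=float)
    fused = np.average(probs, axis=0, weights=weights)   # decision-level fusion
    return int(np.argmax(fused))

# Toy example: three channels voting over four candidate signs.
manual       = [0.60, 0.20, 0.10, 0.10]
non_manual   = [0.30, 0.40, 0.20, 0.10]
finger_spell = [0.50, 0.10, 0.30, 0.10]
print(late_fusion([manual, non_manual, finger_spell]))   # -> 0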

Research on Sign Language Interpretation started with Hand Gesture Recognition (HGR). Hand gestures are the form of non-verbal communication most commonly used by hearing-impaired and speech-impaired persons, and sometimes hearing people use sign languages for communication as well. Still, sign language is not universal; sign languages exist wherever hearing-impaired people live. To make communication between them and hearing people simple and effective, it is essential that this process be automated. A number of methodologies have been developed for automating HGR. The overall process of a Hand Gesture Recognition system is shown as a block diagram in figure 2. There are three main steps in HGR, illustrated by the sketch after the following list:

1. Hand acquisition, which deals with extracting the hand from a given static image, or with tracking and extracting the hand from a video.

2. Feature extraction, which produces a compressed representation of the data that enables recognition of the hand gesture.

3. Classification/recognition of the hand gesture according to some rules.
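The sketch below strings the three steps together for static, dark-background images such as those described later in this paper, using a uniform-LBP histogram and a linear SVM from scikit-learn. The thresholding segmentation and the parameter values are simplifying assumptions, and the template-matching tie-break mentioned in the abstract is omitted.

import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import LinearSVC

def segment_hand(gray, threshold=40):
    # Step 1 (hand acquisition): crude segmentation of a bright hand
    # against a dark background by simple intensity thresholding.
    return gray * (gray > threshold)

def extract_features(gray, points=8, radius=1):
    # Step 2 (feature extraction): uniform-LBP histogram of the hand region.
    lbp = local_binary_pattern(gray, points, radius, method="uniform")
    n_bins = points + 2
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def train_recognizer(images, labels):
    # Step 3 (classification): a linear SVM over the extracted features.
    X = np.array([extract_features(segment_hand(img)) for img in images])
    return LinearSVC().fit(X, labels)

def predict_sign(model, image):
    return model.predict([extract_features(segment_hand(image))])[0]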

Fig. 2. Block diagram of the Hand Gesture Recognition process.

 

 

 

IV. DATA SET ACQUISITION

 

Two different data sets are used in the ISL recognition system in this survey: ISL digits (0-9) and single-handed ISL alphabets (A-Z). For data set acquisition, a dark background is preferred for uniformity and for ease of manipulating the images during feature extraction and segmentation. A digital camera, a Cyber-shot H70, is used to capture the images. All images are captured with flash in intelligent auto mode and stored in the usual JPEG file format. Each original image is 4608×3456 pixels and requires roughly 5.5 MB of storage space. To create an efficient data set of reasonable size, the images are cropped to 200×300 RGB pixels, so that barely 25 KB of memory is required per image. The data set is collected from 100 signers, of whom 69 are male and 31 are female, with an average age of 27. The average height of a signer is about 66 inches. The data set contains isolated ISL numerical signs (0-9); five images per ISL digit sign are captured from each signer, so a total of 5,000 images are available in the data set. Sample images from the data set are shown in figure 3.
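As a rough illustration of the preprocessing described above, the sketch below reduces raw camera JPEGs to the 200×300 RGB resolution of the data set using Pillow. The folder names are hypothetical, the width-by-height ordering is assumed from the stated 200×300 figure, and a real pipeline would first crop to the hand region rather than simply resize.

from pathlib import Path
from PIL import Image

RAW_DIR = Path("raw_signs")        # hypothetical folder of 4608x3456 camera JPEGs
OUT_DIR = Path("dataset_200x300")  # hypothetical output folder for the reduced images
OUT_DIR.mkdir(exist_ok=True)

for jpeg in RAW_DIR.glob("*.jpg"):
    with Image.open(jpeg) as img:
        rgb = img.convert("RGB")
        # Reduce to the target resolution used in the data set.
        small = rgb.resize((200, 300))
        small.save(OUT_DIR / jpeg.name, "JPEG", quality=90)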