Vision-Based American Sign Language Classification Approach via Deep Learning
DOI: https://doi.org/10.32473/flairs.v35i.130616
Keywords: American Sign Language, Deep Learning, Convolutional Neural Network, Gesture Classification
Abstract
Hearing impairment is the partial or total loss of hearing, and it creates significant barriers to communication with other people in society. American Sign Language (ASL) is one of the sign languages most commonly used by hearing-impaired communities to communicate with each other. In this paper, we propose a simple deep learning model that aims to classify the American Sign Language letters as a step toward removing communication barriers related to disabilities.
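The abstract does not specify the model's architecture, so the following is only an illustrative sketch of the kind of pipeline a convolutional classifier for static ASL letters typically uses: convolution, ReLU, max pooling, and a dense softmax layer. All sizes here are assumptions (a 28x28 grayscale hand image and 24 output classes, since the letters J and Z involve motion and are usually excluded from static-image datasets); the weights are random, so this is a forward pass only, not the trained model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, k):
    """Valid cross-correlation of a single-channel image x with kernel k."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool(x, s=2):
    """Non-overlapping s-by-s max pooling."""
    h, w = x.shape
    return x[:h // s * s, :w // s * s].reshape(h // s, s, w // s, s).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical input: a 28x28 grayscale hand image.
image = rng.random((28, 28))

# One conv layer with 8 random 3x3 filters, ReLU, then 2x2 max pooling.
kernels = rng.standard_normal((8, 3, 3)) * 0.1
feats = np.stack([max_pool(np.maximum(conv2d(image, k), 0)) for k in kernels])

# Dense layer mapping flattened features to 24 static-letter logits.
W = rng.standard_normal((24, feats.size)) * 0.01
probs = softmax(W @ feats.ravel())
pred = int(np.argmax(probs))
print("predicted class index:", pred)
```

In a real system the kernels and dense weights would be learned by minimizing cross-entropy over a labeled ASL letter dataset; the sketch only shows how an image flows through the layers to a class probability vector.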
Copyright (c) 2022 Nelly Elsayed
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.