Vision-Based American Sign Language Classification Approach via Deep Learning
DOI:
https://doi.org/10.32473/flairs.v35i.130616
Keywords:
American Sign Language, Deep Learning, Convolutional Neural Network, Gesture Classification
Abstract
Hearing impairment is a partial or total loss of hearing that creates significant barriers to communication with other people in society. American Sign Language (ASL) is one of the sign languages most commonly used by hearing-impaired communities to communicate with each other. In this paper, we propose a simple deep learning model that aims to classify American Sign Language letters as a step toward removing communication barriers related to disabilities.
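The abstract describes a convolutional neural network for classifying ASL letters from images. As an illustrative sketch only (the architecture details are not given on this page), the following assumes 28×28 grayscale inputs, as in the common Sign Language MNIST benchmark, 24 static letter classes (J and Z require motion and are often excluded), and PyTorch as the framework; none of these choices are confirmed by the paper.

```python
import torch
import torch.nn as nn

class ASLLetterCNN(nn.Module):
    """Hypothetical small CNN: two conv blocks followed by a classifier head."""

    def __init__(self, num_classes: int = 24):  # 24 static ASL letters (assumption)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),  # 28x28 -> 28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 14x14
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 7x7
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),                 # one logit per letter
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = ASLLetterCNN()
logits = model(torch.randn(8, 1, 28, 28))  # batch of 8 grayscale frames
```

Training such a model would follow the usual supervised recipe: cross-entropy loss over the letter labels with a stochastic gradient optimizer.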
License
Copyright (c) 2022 Nelly Elsayed
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.