Vision-Based American Sign Language Classification Approach via Deep Learning

Authors

  • Nelly Elsayed, University of Cincinnati

DOI:

https://doi.org/10.32473/flairs.v35i.130616

Keywords:

American Sign Language, Deep Learning, Convolutional Neural Network, Gesture Classification

Abstract

Hearing impairment is a partial or total loss of hearing that creates significant barriers to communication with other people in society. American Sign Language (ASL) is one of the sign languages most commonly used by hearing-impaired communities to communicate with each other. In this paper, we propose a simple deep learning model that classifies the American Sign Language letters as a step toward removing communication barriers related to disability.
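The abstract does not specify the model's architecture. As a rough illustration only, the sketch below shows what a simple convolutional classifier for static ASL letters could look like in Keras; the input size (28x28 grayscale) and class count (24 static letters, since J and Z involve motion and are often excluded in image-based datasets) are assumptions, not details taken from the paper.

# Minimal sketch of a CNN for static ASL letter classification.
# Assumptions: 28x28 grayscale hand images, 24 static letter classes,
# integer class labels; layer sizes are illustrative, not the paper's model.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 24  # assumption: static letters only (J and Z excluded)

model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),          # grayscale hand image
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),                      # regularization
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Typical usage (x_train: images scaled to [0, 1], y_train: integer labels):
# model.fit(x_train, y_train, epochs=10, validation_split=0.1)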

Published

04-05-2022

How to Cite

Elsayed, N. (2022). Vision-Based American Sign Language Classification Approach via Deep Learning. The International FLAIRS Conference Proceedings, 35. https://doi.org/10.32473/flairs.v35i.130616