Belal Chaudary on Using Transfer Learning/CNN to translate American Sign Language Alphabets from video to text.
Drinks and snacks kindly provided by SAP.
5-10% of the world's population is deaf or hard of hearing, and many rely on sign language as their primary form of communication. In this project, I set out to prototype a real-time system that translates the American Sign Language fingerspelled alphabet from video into text. I will walk you through the current pipeline - which uses convolutional neural networks, transfer learning, and a webcam, built with Keras and OpenCV - and share some of what I learned implementing deep learning for real-time classification.
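To give a flavour of what such a pipeline involves, here is a minimal sketch of the per-frame preprocessing step that typically sits between the webcam and the CNN. This is an illustration only, not the speaker's actual code: the region-of-interest coordinates, the 224x224 input size, and the `preprocess_frame` helper are all assumptions, and a NumPy nearest-neighbour resize stands in for `cv2.resize` so the snippet runs without OpenCV installed.

```python
import numpy as np

def preprocess_frame(frame, roi=(0, 0, 200, 200), size=224):
    """Crop a region of interest from a webcam frame, resize it, and
    scale pixel values to [0, 1] for a Keras CNN.

    The ROI and 224x224 size are illustrative assumptions; 224x224 is a
    common input size for ImageNet-pretrained backbones used in
    transfer learning.
    """
    x, y, w, h = roi
    crop = frame[y:y + h, x:x + w]
    # Nearest-neighbour resize (a stand-in for cv2.resize in a real pipeline).
    rows = np.arange(size) * crop.shape[0] // size
    cols = np.arange(size) * crop.shape[1] // size
    resized = crop[rows][:, cols]
    # Scale to [0, 1] and add the batch dimension model.predict expects.
    return resized.astype(np.float32)[np.newaxis] / 255.0

# In the real system a frame would come from cv2.VideoCapture(0).read()
# and the batch would be passed to a transfer-learned Keras model via
# model.predict(batch); here we use a dummy frame to show the shapes.
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
batch = preprocess_frame(frame)
print(batch.shape)  # (1, 224, 224, 3)
```

In a live loop this function would run on every captured frame, with the model's highest-probability class mapped back to a letter of the alphabet.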