Dialnet
Machine learning for bidirectional translation between different sign and oral languages

  • Authors: Muhammad Imran Saleem
  • Thesis supervisors: Miguel Angel Luque Nieto (supervisor), Pablo Otero Roth (tutor)
  • Defended: at the Universidad de Málaga (Spain) in 2023
  • Language: Spanish
  • Thesis examination committee: Alfonso Ariza Quintana (chair), Andrés Roldán Aranda (secretary), Muhammad Yousuf Irfan Zia (member)
  • Doctoral programme: Programa de Doctorado en Ingeniería de Telecomunicación por la Universidad de Málaga
  • Abstract
    • Deaf and mute (D-M) people are an integral part of society, and it is particularly important to provide them with a platform to communicate without the need for any training or learning. D-M individuals rely on sign language, but effective communication requires that others understand it, and learning sign language is a challenge for those with no impairment. In practice, D-M people face communication difficulties mainly because others, who generally do not know sign language, are unable to communicate with them. This thesis addresses this problem through (i) a system that enables non-deaf and mute (ND-M) people to communicate with D-M individuals without learning sign language, and (ii) support for hand gestures from different sign languages. The hand gestures of D-M people are acquired and processed using deep learning (DL), and multi-language support is achieved using supervised machine learning (ML). D-M people are provided with a video interface where hand gestures are acquired and an audio interface that converts the gestures into speech; speech from ND-M people is acquired and converted into text and hand-gesture images. The system is easy to use, low cost, reliable, and modular, and is based on a commercial off-the-shelf (COTS) Leap Motion Device (LMD). A supervised ML dataset enabling multi-language communication between D-M and ND-M people is created from three sign language datasets: American Sign Language (ASL), Pakistani Sign Language (PSL), and Spanish Sign Language (SSL). The proposed system has been validated through a series of experiments: hand gesture detection accuracy exceeds 90% in most scenarios, and lies between 80% and 90% in certain scenarios due to variations in hand gestures between D-M people.
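The supervised gesture-recognition stage summarized above can be sketched minimally as follows. This is an illustrative assumption, not the thesis implementation: the sensor input is simulated in place of the Leap Motion Device, the feature layout (flattened landmark coordinates), the labels, and the nearest-centroid rule are all hypothetical stand-ins for the DL/ML pipeline the abstract describes, chosen only to show how labelled multi-language samples (ASL/PSL/SSL) map a gesture vector to text.

```python
# Hedged sketch: classify a hand-gesture feature vector against a labelled
# multi-language dataset with a simple supervised nearest-centroid rule.
# All feature values and labels below are made-up toy data.
import math
from collections import defaultdict


def centroid(vectors):
    """Component-wise mean of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]


def train(samples):
    """samples: list of (label, feature_vector) pairs -> label-to-centroid model."""
    by_label = defaultdict(list)
    for label, vec in samples:
        by_label[label].append(vec)
    return {label: centroid(vecs) for label, vecs in by_label.items()}


def classify(model, vec):
    """Return the label whose centroid is nearest to vec (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(model, key=lambda label: dist(model[label], vec))


# Toy landmark features (e.g. fingertip coordinates flattened into a vector);
# each label tags both the sign and its language dataset, as in the thesis.
training = [
    ("ASL:hello", [0.9, 0.1, 0.8]),
    ("ASL:hello", [0.8, 0.2, 0.9]),
    ("PSL:hello", [0.1, 0.9, 0.2]),
    ("PSL:hello", [0.2, 0.8, 0.1]),
]
model = train(training)
print(classify(model, [0.85, 0.15, 0.85]))  # → ASL:hello
```

The recognized label would then feed the audio interface (text-to-speech) on the D-M side; the reverse ND-M path (speech to text to gesture images) would invert this lookup.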

