Abstract of Case study: Empirical study on indoor environmental sound classification to support hearing-impaired persons

M. Nakaya, T. Asakura

Hearing-impaired persons often find it extremely difficult to catch and identify environmental sounds, which carry a wide variety of information that would help them function more effectively in their surroundings. It would therefore be very helpful if such persons could be supported by presenting various kinds of event information to them in visual form. The ultimate purpose of this research is to classify various types of environmental sounds that provide important cues for safe living, and then to apply machine learning techniques to display that information visually, in real time, on smartglasses. Specifically, our method extracts the acoustic features of environmental sounds from their time-, frequency-, and time-frequency-domain characteristics and then classifies those sounds based on the extracted features. Ultimately, this information will be displayed visually on smartglasses, allowing hearing-impaired persons to perceive environmental sounds while continuing to observe the world around them in a normal manner. In this paper, the validity of our empirical and academic research into machine learning-based environmental sound classification is discussed.
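
The abstract describes a pipeline of multi-domain acoustic feature extraction followed by supervised classification. The following is a minimal sketch of such a pipeline, assuming librosa and scikit-learn; the file names, labels, and classifier choice are illustrative assumptions, not the authors' implementation.

# Minimal sketch of the pipeline described in the abstract: time-, frequency-,
# and time-frequency-domain features per clip, then a supervised classifier.
# File paths, labels, and the librosa/scikit-learn choices are assumptions.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def extract_features(path, sr=16000):
    """Summarize one recording with features from the three signal domains."""
    y, sr = librosa.load(path, sr=sr, mono=True)

    # Time-domain descriptors: frame energy and zero-crossing rate.
    rms = librosa.feature.rms(y=y)
    zcr = librosa.feature.zero_crossing_rate(y)

    # Frequency-domain descriptors: spectral centroid and bandwidth.
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
    bandwidth = librosa.feature.spectral_bandwidth(y=y, sr=sr)

    # Time-frequency descriptors: MFCCs computed frame by frame.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

    # Pool frame-wise features into one fixed-length vector (mean and std each).
    frames = np.vstack([rms, zcr, centroid, bandwidth, mfcc])
    return np.concatenate([frames.mean(axis=1), frames.std(axis=1)])

# Hypothetical labeled clips of indoor environmental sounds.
train_files = ["doorbell_01.wav", "alarm_01.wav", "faucet_01.wav"]
train_labels = ["doorbell", "alarm", "running_water"]

X = np.array([extract_features(f) for f in train_files])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, train_labels)

# At run time, each incoming sound would be classified and the predicted
# label pushed to the smartglasses display.
print(clf.predict([extract_features("unknown_clip.wav")]))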

