Project Description

Head-mounted displays can provide private, glanceable feedback on speech and sound to deaf and hard of hearing people, yet prior systems have largely focused on speech transcription alone. We introduce HoloSound, a HoloLens-based augmented reality (AR) prototype that uses deep learning to classify and visualize sound identity and location in addition to providing speech transcription.
Sound Sensing and Feedback Techniques for Deaf and Hard of Hearing People

Dhruv Jain

UW CS PhD Dissertation

HoloSound: Combining Speech and Sound Identification for Deaf or Hard of Hearing Users on a Head-mounted Display

Greg Guo, Robin Yiru Yang, Johnson Kuang, Xue Bin, Dhruv Jain, Steven Goodman, Leah Findlater, Jon E. Froehlich

Poster Proceedings of ASSETS 2020