Project Description

2019–Present
Automated sound recognition tools can be a useful complement to d/Deaf and hard of hearing (DHH) people's typical communication and environmental awareness strategies. Pre-trained sound recognition models, however, may not meet the diverse needs of individual DHH users. While approaches from human-centered machine learning can enable non-expert users to build their own automated systems, end-user ML solutions that augment human sensory abilities present a unique challenge for users who have sensory disabilities: how can a DHH user, who may have difficulty hearing a sound themselves, effectively record samples to train an ML system to recognize that sound? In this project, we are conducting formative inquiries into interactive machine learning (IML) approaches for DHH users and their ideas for personalizable sound recognition tools, and we are drawing on our findings to design, implement, and evaluate custom IML tools for personalized sound recognition.

Publications

Sound Sensing and Feedback Techniques for Deaf and Hard of Hearing People

Dhruv Jain

UW CS PhD Dissertation 2022

ProtoSound: A Personalized and Scalable Sound Recognition System for Deaf and Hard-of-Hearing Users

Dhruv Jain, Khoa Nguyen, Steven Goodman, Rachel Grossman-Kahn, Hung Ngo, Aditya Kusupati, Ruofei Du, Alex Olwal, Leah Findlater, Jon E. Froehlich

Proceedings of CHI 2022 | Acceptance Rate: 24.7% (637 / 2579)

Toward User-Driven Sound Recognizer Personalization with People who are d/Deaf or Hard of Hearing

Steven Goodman, Ping Liu, Dhruv Jain, Emma McDonnell, Jon E. Froehlich, Leah Findlater

IMWUT 2021