Automated sound recognition tools can usefully complement the communication and environmental-awareness strategies that d/Deaf and hard of hearing (DHH) people already use. Pre-trained sound recognition models, however, may not meet the diverse needs of individual DHH users. Approaches from human-centered machine learning can enable non-expert users to build their own automated systems, but end-user ML solutions that augment human sensory abilities pose a unique challenge for users with sensory disabilities: how can a DHH user, who may have difficulty hearing a sound themselves, effectively record samples to train an ML system to recognize that sound? In this project, we are conducting formative inquiries to learn how DHH users engage with interactive machine learning (IML) approaches and what they want from personalizable sound recognition tools; we will then draw on these findings to design, implement, and evaluate custom IML tools for personalized sound recognition.