We’re going to Galway, Ireland! The Makeability Lab has two full papers and three posters at ASSETS’18.
UMD PhD student Lee Stearns led a project entitled Design of an Augmented Reality Magnification Aid for Low Vision Users, which explores augmented reality solutions for magnifying text for low-vision users. Lee also had a poster paper accepted entitled Applying Transfer Learning to Recognize Clothing Patterns Using a Finger-Mounted Camera. Both projects are in collaboration with UW HCDE professor Leah Findlater.
UW CSE PhD student Dhruv Jain led a project entitled Towards Accessible Conversations in a Mobile Context for People Who Are Deaf and Hard of Hearing, which examines real-time captioning solutions for people who are DHH and on the move. This work is a collaboration among UW CSE (Dhruv Jain, Jon Froehlich), UW HCDE (Rachel Franz, Leah Findlater), and Gallaudet University in Washington, DC (Raja Kushalnagar).
The Project Sidewalk team also had two posters accepted: A Feasibility Study of Using Google Street View and Computer Vision to Track the Evolution of Urban Accessibility, which was based on Ladan Najafizadeh's MS thesis work, and Interactively Modeling and Visualizing Neighborhood Accessibility at Scale: An Initial Study of Washington, DC, which was led by undergrad extraordinaire Anthony Li along with Manaswi Saha.
Our UW CSE PhD student Dhruv Jain presented a poster at ACM DIS 2018 in Hong Kong!
The poster, entitled Exploring Augmented Reality Approaches to Real-Time Captioning: A Preliminary Autoethnographic Study, examines Dhruv’s experiences using real-time captioning on the HoloLens in lectures and group meetings. This work is a collaboration among UW CSE (Dhruv Jain, Jon Froehlich), UW HCDE (Bonnie Chinh, Leah Findlater), and Gallaudet University (Raja Kushalnagar).
The GlassEar sound awareness feedback prototype
GlassEar is a real-time sound awareness display using an HMD and a non-wearable microphone array. In the above image sequence, a DHH user is engaged in conversation with three oral conversation partners. Arrows direct attention towards active speakers.
Persons with hearing loss use visual signals such as gestures and lip movement to interpret speech. While hearing aids and cochlear implants can improve sound recognition, they generally do not help the wearer localize sound, which is necessary to leverage these visual cues. In this paper, we design and evaluate visualizations for spatially locating sound on a head-mounted display (HMD). To investigate this design space, we developed eight high-level visual sound feedback dimensions. For each dimension, we created 3–12 example visualizations and evaluated them as a design probe with 24 deaf and hard of hearing participants (Study 1). We then implemented a real-time proof-of-concept HMD prototype and solicited feedback from 4 new participants (Study 2). Study 1 findings reaffirm past work on the challenges faced by persons with hearing loss in group conversations, provide support for the general idea of sound awareness visualizations on HMDs, and reveal preferences for specific design options. Although preliminary, Study 2 further contextualizes the design probe and uncovers directions for future work.
This project is part of a larger research agenda exploring sound awareness support for people who are deaf or hard of hearing.
Head-Mounted Display Visualizations to Support Sound Awareness for the Deaf and Hard of Hearing