Project Description
Learning a new language is an exciting and important, yet often challenging, journey. To support foreign language acquisition, we introduce EARLL, an embodied and context-aware language learning application for AR glasses. EARLL leverages real-time computer vision and depth sensing to continuously segment and localize objects in users' surroundings, detect hand-object manipulations, and then subtly trigger foreign vocabulary prompts relevant to the manipulated object. In this demo paper, we present our initial EARLL prototype and highlight current challenges and future opportunities for always-available, wearable, embodied AR language learning.
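The pipeline described above (segment and localize objects, detect a hand-object manipulation, then surface a vocabulary prompt) could be sketched roughly as follows. This is a minimal illustration, not the actual EARLL implementation: the `DetectedObject` structure, the depth-based `hand_is_touching` heuristic, and the `VOCAB` lookup are all hypothetical stand-ins for the real segmentation, depth-sensing, and prompt components.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str          # object class from a segmentation model (assumed)
    distance_m: float   # depth-sensor estimate of the object's distance

# Toy English -> Spanish vocabulary lookup, assumed for illustration.
VOCAB = {"cup": "la taza", "book": "el libro", "phone": "el teléfono"}

def hand_is_touching(obj: DetectedObject, hand_distance_m: float,
                     threshold_m: float = 0.05) -> bool:
    """Crude manipulation check: hand depth coincides with object depth."""
    return abs(obj.distance_m - hand_distance_m) < threshold_m

def vocabulary_prompt(objects, hand_distance_m):
    """Return a prompt for the first manipulated object with a known word."""
    for obj in objects:
        if hand_is_touching(obj, hand_distance_m) and obj.label in VOCAB:
            return f"{obj.label} -> {VOCAB[obj.label]}"
    return None  # no manipulation detected: stay silent, keeping prompts subtle

if __name__ == "__main__":
    scene = [DetectedObject("book", 0.62), DetectedObject("cup", 0.31)]
    print(vocabulary_prompt(scene, hand_distance_m=0.30))  # hand near the cup
```

In this sketch, returning `None` when no manipulation is detected mirrors the design goal of subtlety: prompts appear only when the user is actively handling an object, rather than annotating everything in view.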
Publications
Embodied AR Language Learning Through Everyday Object Interactions: A Demonstration of EARLL
In Extended Abstracts of UIST 2024