On-body interaction, which employs the user’s own body as an interactive surface, offers several advantages over existing touchscreen devices: always-available control, an expanded input space, and additional proprioceptive and tactile cues that support non-visual use. While past work has explored a variety of approaches such as wearable depth cameras, bio-acoustics, and infrared (IR) reflectance sensors, these systems do not instrument the gesturing finger, do not easily support multiple body locations, and have not been evaluated with visually impaired users (our target population). In our work, we have been exploring computer vision-based approaches that use a suite of sensors mounted on the finger. In early work, we established the feasibility of localizing small images of the skin surface to locations on the user's body (ICPR 2016). Later, we built real-time prototypes and evaluated them with both sighted and blind users (ASSETS 2017 and IMWUT 2017).
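
To make the localization idea concrete, the sketch below shows one plausible way to classify which body location a small skin-surface image came from, using local binary pattern (LBP) texture features and an SVM. This is an illustrative pipeline under assumed design choices, not the exact feature set or classifier used in the ICPR 2016 system, and the function names (lbp_histogram, train_location_classifier) are hypothetical.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def lbp_histogram(gray_patch, p=8, r=1.0):
    """Compute a uniform-LBP histogram as a compact texture descriptor
    for a small grayscale skin patch (assumed feature choice)."""
    lbp = local_binary_pattern(gray_patch, P=p, R=r, method="uniform")
    n_bins = p + 2  # uniform patterns plus one catch-all bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist


def train_location_classifier(skin_patches, location_labels):
    """skin_patches: list of 2-D grayscale arrays captured by the finger-worn camera;
    location_labels: body-location names such as 'palm' or 'wrist'.
    Returns a fitted classifier that predicts the body location of a new patch."""
    features = np.array([lbp_histogram(patch) for patch in skin_patches])
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    model.fit(features, location_labels)
    return model
```

At inference time, the same lbp_histogram descriptor would be computed for the incoming frame and passed to the trained model's predict (or predict_proba) method to obtain the estimated body location.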