Public transportation provides vital connectivity for people with disabilities, facilitating access to work, education, and health services. While modern navigation applications offer a suite of information about transit options—including real-time updates—they lack data about the accessibility of the transit stops themselves. Bus stop features such as seating, shelters, and landing areas are critical to accessibility, yet few cities provide this information. In this demo paper, we introduce BusStopCV, a crowd+AI web prototype for scalably collecting data on bus stop features using real-time computer vision and human labeling. We describe BusStopCV's design, custom training with the YOLOv8 model, and an evaluation of 100 randomly selected bus stops in Seattle, WA. Our findings demonstrate the potential of BusStopCV and highlight opportunities for expanding detection to additional bus stop features.