Assistive-Technology Aided Manual Accessibility Testing in Mobile Apps, Powered by Record-and-Replay

Billions of people use smartphones on a daily basis, including the 15% of the world's population with disabilities. Mobile platforms encourage developers to manually assess their apps' accessibility the way users with disabilities interact with phones, i.e., through Assistive Technologies (AT) such as screen readers. However, most developers test their apps only with touch gestures and lack the knowledge to use AT properly. Moreover, automated accessibility testing tools typically do not consider AT. This paper introduces A11yPuppetry, a record-and-replay technique that records a developer's touch interactions, replays the same actions with an AT, and generates a visualized report of the various ways of interacting with the app through ATs. An empirical evaluation on real-world apps revealed that, while user studies remain the most reliable way of assessing accessibility, our technique can aid developers in detecting complex accessibility issues at different stages of development, and it detects 20% to 62% more user-confirmed issues than existing tools.

How does A11yPuppetry work?

A11yPuppetry consists of four main phases: (1) Record, (2) Action Translation, (3) Replay, and (4) Report. Illustrative sketches of each phase follow this overview.

The process starts with the Record phase, in which the developer interacts with a device running the Recorder service. The Recorder service listens to UI change events and adds a transparent GUI overlay on top of the screen to capture the developer's touch gestures. After receiving a touch gesture on the overlay, the Recorder replicates the gesture on the underlying app and sends the recorded information to the server as an Action Execution Report, which the server stores in the database.

In the second phase, Action Translation, the Action Translator component receives the Action Execution Report from the Recorder (containing the UI hierarchy, a screenshot, and the performed gesture) and translates it into its equivalent TalkBack Action.

In the Replay phase, the TalkBack Action is sent to several replayer devices, each of which performs the action. Each replayer device runs a TB Replayer service that receives the TalkBack Action from the server, creates and maintains a TalkBack Element Navigation Graph of the app, and performs the received action with a navigation mode. Once an action is performed, a TalkBack Execution Report is stored in the database. This report consists of the actions executed with TalkBack, along with screenshots and UI hierarchy files capturing the app's state before, during, and after execution.

In the final phase, Report, the A11y Analyzer component reads the stored information in the database, i.e., the Action and TalkBack Execution Reports, and produces an Aggregated Report of the recording, the replaying, and the detected accessibility issues. The user can access this report through a web application.
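As a concrete illustration of the Record phase, the sketch below shows how an Android AccessibilityService can place a transparent overlay over the screen, intercept a tap, replicate it on the app underneath via dispatchGesture, and hand the recorded action off for reporting. This is a minimal sketch, not A11yPuppetry's actual implementation; the class name and the uploadActionExecutionReport helper are hypothetical.

```kotlin
import android.accessibilityservice.AccessibilityService
import android.accessibilityservice.GestureDescription
import android.graphics.Path
import android.graphics.PixelFormat
import android.view.MotionEvent
import android.view.View
import android.view.WindowManager
import android.view.accessibility.AccessibilityEvent

class RecorderService : AccessibilityService() {

    override fun onServiceConnected() {
        // Full-screen, transparent overlay that intercepts the developer's touches.
        val overlay = View(this)
        val params = WindowManager.LayoutParams(
            WindowManager.LayoutParams.MATCH_PARENT,
            WindowManager.LayoutParams.MATCH_PARENT,
            WindowManager.LayoutParams.TYPE_ACCESSIBILITY_OVERLAY,
            WindowManager.LayoutParams.FLAG_NOT_FOCUSABLE,
            PixelFormat.TRANSLUCENT
        )
        overlay.setOnTouchListener { _, event ->
            if (event.action == MotionEvent.ACTION_UP) {
                replicateTap(event.rawX, event.rawY)   // forward the gesture to the app below
                uploadActionExecutionReport(event)     // hypothetical: ship the report to the server
            }
            true  // consume the touch; the app only sees the replicated gesture
        }
        (getSystemService(WINDOW_SERVICE) as WindowManager).addView(overlay, params)
    }

    // Replays the intercepted tap on the underlying app through the accessibility API.
    private fun replicateTap(x: Float, y: Float) {
        val path = Path().apply { moveTo(x, y) }
        val stroke = GestureDescription.StrokeDescription(path, 0L, 50L)
        dispatchGesture(GestureDescription.Builder().addStroke(stroke).build(), null, null)
    }

    private fun uploadActionExecutionReport(event: MotionEvent) {
        // Hypothetical stub: serialize the gesture, current UI hierarchy, and a
        // screenshot into an Action Execution Report and send it to the server.
    }

    override fun onAccessibilityEvent(event: AccessibilityEvent) {
        // UI change events signal when to snapshot the app's state.
    }

    override fun onInterrupt() {}
}
```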
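The Action Translation step can be pictured as a pure function over the recorded UI hierarchy: find the deepest element whose bounds contain the tap, and emit a TalkBack-level action that focuses and then activates it (activation being TalkBack's double-tap). The data classes below are illustrative shapes for this mapping, not the project's actual schema.

```kotlin
// Illustrative types for a parsed UI hierarchy; names are assumptions.
data class Rect(val left: Int, val top: Int, val right: Int, val bottom: Int) {
    fun contains(x: Int, y: Int) = x in left..right && y in top..bottom
}
data class UiNode(val resourceId: String, val bounds: Rect, val children: List<UiNode>)

// A TalkBack Action: move accessibility focus to the target, then activate it.
data class TalkBackAction(val targetId: String, val kind: Kind) {
    enum class Kind { FOCUS_AND_ACTIVATE, SCROLL_FORWARD, SCROLL_BACKWARD }
}

// Map a recorded tap to the deepest element under the touch point.
fun translateTap(root: UiNode, x: Int, y: Int): TalkBackAction? {
    fun deepestAt(node: UiNode): UiNode? {
        if (!node.bounds.contains(x, y)) return null
        return node.children.firstNotNullOfOrNull { deepestAt(it) } ?: node
    }
    return deepestAt(root)?.let {
        TalkBackAction(it.resourceId, TalkBackAction.Kind.FOCUS_AND_ACTIVATE)
    }
}
```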
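For the Replay phase, one plausible way to drive TalkBack in linear-navigation mode is to inject the gestures TalkBack itself listens for: a swipe right to move focus to the next element, and a double-tap to activate the focused one. The sketch below loops until accessibility focus lands on the target element; the coordinates, timings, and 500 ms settle delay are assumptions, and a real replayer would also handle scrolling and failure reporting.

```kotlin
import android.accessibilityservice.AccessibilityService
import android.accessibilityservice.GestureDescription
import android.graphics.Path
import android.os.Handler
import android.view.accessibility.AccessibilityEvent
import android.view.accessibility.AccessibilityNodeInfo

class TbReplayerService : AccessibilityService() {

    // Navigate linearly until the target element holds accessibility focus,
    // then activate it; give up after maxSteps swipes.
    fun perform(targetId: String, maxSteps: Int = 50) {
        val focused = findFocus(AccessibilityNodeInfo.FOCUS_ACCESSIBILITY)
        when {
            focused?.viewIdResourceName == targetId -> doubleTap()
            maxSteps > 0 -> swipeNext {
                // Give TalkBack a moment to move focus, then re-check.
                Handler(mainLooper).postDelayed({ perform(targetId, maxSteps - 1) }, 500)
            }
            else -> { /* record a navigation failure in the TalkBack Execution Report */ }
        }
    }

    // A short left-to-right swipe, which TalkBack interprets as "next item".
    private fun swipeNext(then: () -> Unit) {
        val path = Path().apply { moveTo(200f, 800f); lineTo(800f, 800f) }
        val gesture = GestureDescription.Builder()
            .addStroke(GestureDescription.StrokeDescription(path, 0L, 150L))
            .build()
        dispatchGesture(gesture, object : GestureResultCallback() {
            override fun onCompleted(g: GestureDescription?) = then()
        }, null)
    }

    // Two quick taps, which TalkBack interprets as "activate the focused element".
    private fun doubleTap() {
        val tap = { start: Long ->
            GestureDescription.StrokeDescription(Path().apply { moveTo(500f, 800f) }, start, 50L)
        }
        val gesture = GestureDescription.Builder().addStroke(tap(0L)).addStroke(tap(150L)).build()
        dispatchGesture(gesture, null, null)
    }

    override fun onAccessibilityEvent(event: AccessibilityEvent) {}
    override fun onInterrupt() {}
}
```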
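Finally, the reports exchanged between the components can be summarized with shapes like the following; the field names are assumptions based on the description above, not the project's actual database schema.

```kotlin
// A snapshot of the app's state: a screenshot plus the dumped UI hierarchy.
data class AppState(val screenshotPath: String, val uiHierarchyPath: String)

// Produced by the Recorder for each touch gesture the developer performs.
data class ActionExecutionReport(
    val gesture: String,        // e.g. "TAP(540, 1200)"
    val state: AppState
)

// Produced by each TB Replayer after executing the translated action.
data class TalkBackExecutionReport(
    val talkBackActions: List<String>,  // e.g. ["SWIPE_RIGHT", ..., "DOUBLE_TAP"]
    val before: AppState,
    val during: AppState,
    val after: AppState
)

// Produced by the A11y Analyzer and served to the web application.
data class AggregatedReport(
    val recorded: List<ActionExecutionReport>,
    val replayed: Map<String, List<TalkBackExecutionReport>>,  // keyed by navigation mode
    val detectedIssues: List<String>
)
```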

[A11yPuppetry picture]

Artifacts

The artifacts are publicly available here.

Publications

More details can be found in our publication below:


[SEAL's logo]
[UCI's logo]