The Best Paper Award of the 19th International Conference on Universal Access in Human-Computer Interaction
has been conferred upon
Rianne Marie Azzopardi, Peter Albert Xuereb, May Agius, Sharon Borg Schembri and Dylan Seychell
(University of Malta, Malta)
for the paper entitled
"Enhancing Communication for Individuals with Complex Communication Needs (CCN) through AI and Visual Scene Display Technology"

Rianne Marie Azzopardi
(presenter)

Best Paper Award for the 19th International Conference on Universal Access in Human-Computer Interaction, in the context of HCI International 2025, Gothenburg, Sweden, 22 - 27 June 2025

Certificate for the Best Paper Award of the 19th International Conference on Universal Access in Human-Computer Interaction
Paper Abstract
Individuals with Complex Communication Needs (CCN) often face significant challenges in communication, requiring innovative and accessible solutions to bridge these gaps. This paper presents Snap-n-Tell, an Android-based Augmentative and Alternative Communication (AAC) application that uses Artificial Intelligence (AI) and Visual Scene Display (VSD) technology to enhance communication. Snap-n-Tell applies advanced object detection to automate the creation of interactive hotspots in photographs, reducing caregiver effort and promoting user independence. Its AI-driven process also addresses challenges such as overlapping objects and dynamic scenes, and AI further supports users' progression from early-stage communication with VSD to grid-based systems. Developed through a co-creation methodology involving Speech-Language Pathologists (SLPs), Snap-n-Tell prioritizes user-centric design, accessibility, and scalability, particularly for underserved Android platforms.
Usability evaluations highlight the transformative potential of AI in AAC, demonstrating improved user engagement and significant reductions in caregiver programming time. The application addresses critical gaps in existing AAC technologies by combining the context-rich, intuitive interface of VSDs with the automation and intelligence of AI. By empowering individuals with CCN to communicate effectively and independently, Snap-n-Tell represents a pivotal step forward in applying AI to assistive communication technologies.
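The abstract describes converting object-detection output into tappable VSD hotspots automatically. As a purely illustrative sketch of that idea (the data structures, function names, and confidence threshold below are hypothetical and not taken from the Snap-n-Tell implementation):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    """One object found by a detector in a photograph (hypothetical shape)."""
    label: str                        # object class, e.g. "cup"
    box: Tuple[int, int, int, int]    # (x, y, width, height) in image pixels
    confidence: float                 # detector confidence score, 0..1

@dataclass
class Hotspot:
    """A tappable region in a Visual Scene Display."""
    phrase: str                       # text spoken when the hotspot is tapped
    region: Tuple[int, int, int, int] # tappable area, same coordinates as the box

def detections_to_hotspots(detections: List[Detection],
                           min_confidence: float = 0.5) -> List[Hotspot]:
    """Turn raw detector output into VSD hotspots, dropping
    low-confidence detections so caregivers need not prune them by hand."""
    return [Hotspot(phrase=d.label, region=d.box)
            for d in detections
            if d.confidence >= min_confidence]

# Example: simulated detector output for a photo of a kitchen scene.
detections = [
    Detection("cup",   (40, 60, 30, 30),  0.92),
    Detection("table", (0, 100, 200, 80), 0.88),
    Detection("spoon", (70, 65, 10, 20),  0.31),  # below threshold, dropped
]
hotspots = detections_to_hotspots(detections)
# → hotspots for "cup" and "table" only
```

A real pipeline would also need to resolve overlapping boxes and attach caregiver-edited phrases, which the paper notes as challenges its AI-driven process addresses.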
The full paper is available through SpringerLink, provided you have the appropriate access rights.