The focus of this paper is Graphic Representation Systems (GRS) within each user’s Augmentative and Alternative Communication (AAC) technology. The rapid advancement of AI – particularly Large Language Models (LLMs) – offers exciting possibilities for enhancing communication for AAC users.
The inclusive design project described in this paper, titled Baby Bliss Bot (BBB), is led by a group of AI programmers collaborating with AAC users as co-designers in an interdisciplinary initiative. Its first phase uses Blissymbolics to explore how LLMs can be leveraged to support AAC users in learning language and in communicating meaningfully across contexts – at home, in school and at work – and across dialogue types, from inquiry and discovery to humour, creativity and beyond. Blissymbolics was chosen as the GRS for Phase One because of its comprehensive language capabilities and affordances, which facilitated a multi-tiered approach. Through collaborative experimentation, the project investigates not only how to make AI systems more inclusive for outliers and minorities, but also how to design alternative training methods and interfaces. The aim is to support context-aware, personalized expression that respects the individuality of each AAC user.
Blissymbols were first introduced to children who lacked functional speech by a clinical interdisciplinary team in Toronto in 1971. Because Blissymbolics had been intended by its creator, Charles K. Bliss, as a graphic, meaning-based (semantic) international language not relying on phonology, those learning it for AAC have found many ways to apply its rich language capabilities to their language, communication and pre-literacy development. The innovation and creativity of Bliss users guided the AAC application of Blissymbolics and contributed to the use of many grammatical and semantic rules and strategies. The combinatorial features of Blissymbolics – arising from its discrete units at both the Bliss-word and Bliss-sentence level – together with the rules and strategies that can be used to express ideas and develop literacy, are being applied to develop an LLM that supports both efficient and nuanced expression.
An adaptive palette is being developed that combines Blissymbolics with AI and an LLM on an onscreen keyboard, allowing users to choose Bliss-characters and Bliss-words. The palette implements the Fundamental Rules (BCI, 2020) so that Bliss-words can be composed by the user, and it uses scalable vector graphics to render the symbols on the display. The palette editor lets individuals assemble their own palettes/vocabularies as well as define a personalized means of navigation. To help create personal palettes, a search function quickly finds Bliss-words in the authorized vocabulary (BCI, 2025). With respect to AI, the user can provide, using Bliss-words, the gist or main ideas of what they are trying to say – sometimes termed “telegraphic input” – and then query the LLM to suggest full-sentence completions.
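The telegraphic-input step might look like the following sketch, which assembles selected Bliss-word glosses into a prompt and delegates to a language-model callable. The function and stub names (`expand_telegraphic`, `stub_model`) and the prompt wording are our assumptions for illustration, not the project's actual interface.

```python
def expand_telegraphic(glosses, complete):
    """Build a prompt from Bliss-word glosses ("telegraphic input")
    and ask a language model for full-sentence suggestions.

    `complete` is any callable mapping a prompt string to a list of
    candidate sentences; in the project this role would be played by
    the on-device LLM (an assumed interface, not the project's API).
    """
    prompt = (
        "A user selected these Bliss-word meanings, in order: "
        + ", ".join(glosses)
        + ". Suggest complete English sentences expressing this idea."
    )
    return complete(prompt)

# Stand-in model for illustration only.
def stub_model(prompt):
    return ["I want to go to school today."]

suggestions = expand_telegraphic(["I", "want", "go", "school", "today"], stub_model)
```

Abstracting the model behind a callable keeps the palette logic independent of which on-device LLM is ultimately used, which matters for the privacy goal described below.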
The BBB project applies inclusive co-design, language development and AAC strategies to design a palette that accommodates the developmental level and the wishes of each user, providing an individualized, on-device LLM that supports privacy and offers a broad range of communication possibilities.