“Artificial Intelligence,” or AI, is a hot topic broadly and in AAC specifically – but “AI” implementations vary widely. We identify different types of “AI,” their uses in AAC, and considerations around these uses. Machine learning techniques for brain-computer interfaces, personal voices, and large language models (LLMs) differ – but all fall under “AI” and all have potential applications to AAC. By addressing these cases separately, we identify four key considerations.
First, we consider authenticity. Professionals express concerns about authorship: LLMs may alter or invent content. AAC users have concerns about message style and tone (Valencia et al., 2023) but also about undue judgment regarding authorship (Holyfield & Williams, 2025). Also in the area of authenticity, some AAC users and their families are excited by the idea of having a voice that sounds like their own (or a best guess at what their voice would sound like). Others question whether this should be the goal at all, or worry about it becoming an expectation rather than an option (Preece et al., 2024).
Second, we consider privacy. “AI” models often involve cloud processing in their creation, adaptation, and/or use. This has trade-offs with authenticity: models tuned with specific user data (neural signals, audio, or text output) can be more effective and authentic, but they then include that data and may reveal it in unintended ways (Valencia et al., 2023).
Third, we consider barriers to learning. A literate user can check LLM-generated text and make decisions about speed, effort, and tone. However, just as calculator use can obscure the math skills needed to use a calculator effectively, LLM use can obscure the language skills needed to check and edit LLM output.
Finally, we consider availability and accessibility – some applications of “AI” are becoming ubiquitous. Others may be touted as the future of AAC while remaining inaccessible (e.g., voice banking; Preece et al., 2024) or rarely available (e.g., brain-computer interfaces; Sellwood et al., 2024).
Addressing “AI” in AAC effectively involves considering each of its use cases on its own merits. It also involves addressing these considerations – not as issues unique to the intersection of AAC and AI, but as applications of broader issues to this intersection.
References
Holyfield, C., & Williams, K. (2025, March 15). Predictive Text: Who Controls the Conversation? ASHA LeaderLive. https://leader.pubs.asha.org/do/10.1044/leader.FTR1.30032025.FAAC-predictive-text.36/full/
Preece, J., Sullivan, E., Tams-Gray, F., & Pullin, G. (2024). Making my voice and owning its future. Medical Humanities, 50(4), 624–634.
Sellwood, D., McLeod, L., Williams, K., Brown, K., & Pullin, G. (2024). Imagining alternative futures with augmentative and alternative communication: A manifesto. Medical Humanities, 50(4), 620–623.
Valencia, S., Cave, R., Kallarackal, K., Seaver, K., Terry, M., & Kane, S. K. (2023, April). “The less I type, the better”: How AI language models can enhance or impede communication for AAC users. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (pp. 1–14).