What Is Augmentative Communication and How Does It Work?

Augmentative communication is any method, tool, or strategy that supplements or replaces speech for people who have difficulty communicating verbally. It’s formally known as augmentative and alternative communication (AAC), and it encompasses everything from simple hand gestures to sophisticated tablet apps that speak aloud on a person’s behalf. More than 2 million children and adults in the United States use some form of AAC to meet their daily communication needs.

How AAC Works

The term “augmentative” refers to tools and strategies that add to whatever speech a person already has. “Alternative” refers to methods that replace speech entirely. In practice, most people use a combination of both, which is why the field groups them together under AAC.

A child who can say a few words but struggles with full sentences might use a picture board to fill in gaps during conversation. An adult who has lost the ability to speak after a stroke might rely entirely on a device that produces speech for them. The specific setup depends on what each person needs, and it often changes over time as their abilities or circumstances shift.

Types of AAC: No-Tech to High-Tech

AAC systems fall along a spectrum from no-tech to high-tech, and many people use tools from more than one category throughout their day.

No-tech and low-tech options require little or no equipment. These include:

  • Facial expressions, gestures, and body language
  • Manual sign language
  • Writing or drawing on paper
  • Pointing to letters to spell out words
  • Pointing to photos, pictures, or printed words on a board

High-tech options use electronic devices that can generate speech. These include tablet apps (like those on an iPad) with symbol-based vocabulary systems, and dedicated speech-generating devices with built-in speakers. Some high-tech systems use synthesized voices, while others play back pre-recorded natural speech. The line between low-tech and high-tech has blurred in recent years as affordable tablets have become powerful enough to run sophisticated communication software.

Who Uses AAC

People of all ages use augmentative communication. Children with autism, cerebral palsy, Down syndrome, or childhood apraxia of speech are among the most common younger users. Adults may need AAC after a stroke or traumatic brain injury, or because of progressive conditions like ALS (Lou Gehrig’s disease) or Parkinson’s disease. Some people use AAC temporarily during recovery, while others rely on it for life.

The range of users is broader than most people realize. Someone with a mild speech impairment might only use a low-tech backup for difficult conversations, while someone with no voluntary movement below the neck might control a communication device using only their eyes.

AAC Does Not Prevent Speech Development

One of the most persistent concerns parents and caregivers have is that giving a child a communication device will discourage them from learning to talk. Research consistently shows the opposite. Evidence from multiple studies indicates that providing AAC may actually improve speech outcomes in children, including children on the autism spectrum with minimal speech. Introducing aided communication within developmental therapy does not negatively impact speech development and may even facilitate spoken language growth.

This makes sense when you consider that AAC gives a child a reason and a way to engage in communication. Practicing the back-and-forth of conversation, even through pictures or a device, builds the same foundational language skills that support spoken words later.

How Someone Gets Evaluated for AAC

The process typically starts with a speech-language pathologist who evaluates a person’s current communication abilities, cognitive skills, and physical capabilities. The evaluation looks at how severe the speech impairment is, whether it’s expected to improve or worsen, and what the person’s daily communication needs look like. Can they get by with gestures in some situations? Do they need a device that works in a noisy classroom or workplace?

When choosing a specific AAC system, professionals weigh several factors. For children, expressive and receptive language abilities tend to be the most important considerations, followed by cognitive ability, diagnosis, and age. A child’s motivation to communicate through AAC and their anticipated progress also heavily influence which system gets recommended. Physical abilities and device characteristics like durability or cost, while relevant, tend to rank lower in the decision-making process.

Support from communication partners (parents, teachers, caregivers) plays a significant role too. A sophisticated device won’t help much if the people around the user don’t know how to support its use in daily life.

Insurance Coverage for Devices

Speech-generating devices can be expensive, but insurance often covers them when specific criteria are met. Medicare, for example, covers these devices when a speech-language pathologist has completed a formal evaluation documenting that the person has a severe expressive speech impairment, that natural communication methods can’t meet their needs, and that other treatment options were considered first. The evaluation must also lay out functional communication goals and explain why that particular device was selected.

One important requirement: the speech-language pathologist who performs the evaluation cannot be employed by or have a financial relationship with the company supplying the device. This rule exists to prevent conflicts of interest. If any of the required criteria aren’t met, coverage gets denied as not medically necessary. Many private insurers and state Medicaid programs have similar requirements, though the specifics vary.

Common Barriers to Successful Use

Despite the clear benefits of AAC, between 30% and 50% of users eventually abandon or underuse their systems. Understanding why helps explain what successful AAC use actually requires.

The most frequently cited barrier across parents, educators, and clinicians is a lack of knowledge. Nearly all educator groups and over half of clinicians in one multi-stakeholder study ranked insufficient AAC knowledge among their top three barriers. Many clinicians reported that their university training covered AAC only in theory, with little hands-on practice selecting devices or building vocabulary systems. “We never touched a device, never done a disability practical,” one clinician noted. This knowledge gap means that the people responsible for supporting AAC users often feel underprepared.

Financial and time constraints create additional pressure. Families, schools, and therapy teams all report that competing demands limit how many opportunities they can create for AAC practice. There are also practical anxieties around being responsible for expensive equipment, particularly fears about what happens if a device is damaged or lost.

Poor coordination between stakeholders is another recurring problem. Educators have described situations where a device was selected by a clinician without consulting the school where the child spends most of their day, resulting in a system that doesn’t work well in the classroom. Successful AAC use depends on everyone in a person’s life being involved in the process, from selection through daily practice.

Advances in Eye-Tracking and Brain-Computer Interfaces

For people with severe motor impairments who can’t use their hands or reliably point, eye-tracking technology has become a practical option. These systems use a small camera mounted on a computer screen to follow eye movements, allowing the user to select letters, words, or symbols simply by looking at them. Devices like the Tobii Dynavox PCEye let users click, type, scroll, and navigate a computer entirely with their gaze.

Brain-computer interfaces represent the next frontier. These systems translate brain signals directly into digital commands, bypassing the need for any physical movement at all. Clinical trials are now underway comparing the performance of eye-tracking devices and implantable brain-computer interfaces (including the Neuralink N1 implant) for tasks like typing and navigating digital devices. The goal is to develop standardized ways to measure how much these tools actually improve a person’s ability to independently use computers, phones, and other digital systems in real life, not just in a lab setting.

These technologies are still most relevant for people with the most severe physical limitations, but they point toward a future where the physical ability to move is less and less of a barrier to full communication.