Avoid These Five Live Captioning Pitfalls
Successful live captioning isn’t only about supporting remote attendees. Done well, it can be an accessibility boon for in-person attendees who are hard of hearing, whose first language is not the language of the presenter, or who are easily distracted.
In other words, it helps most of us.
“Live captioning boosts accessibility and inclusivity, allowing meeting planners to ensure a seamless experience for all attendees, regardless of their hearing abilities,” says Ryan Sheehan, digital solutions product manager at Encore, an event experience organization that includes live captioning among its services.
Consider it a variant of the “curb cut effect,” the idea that accommodations designed for one group end up benefiting everyone. If live captioning makes your meetings more accessible for some attendees, it improves the experience for all of them.
Tech companies such as XanderGlasses and XRAI Glass are developing live captioning wearables that could someday shift the entire captioning landscape. Until then, though, following the protocols set forth by DCMP, the Described and Captioned Media Program, can go some way toward ensuring you’re making your event accessible. Take it a step further by learning about these five live captioning mistakes—and how to avoid them.
Live Captioning Pitfall #1: Letting the Visuals Speak for Themselves
Accessibility isn’t the only reason to use live captioning, but it’s a big one. Yet if presenters rely on visuals to “speak for themselves,” the presentation becomes less accessible overall.
Instead, invite presenters to follow DCMP’s guidelines for describing visual features within a presentation. Clear, contextualized, intent-oriented descriptions of onscreen content support accessibility, and they also free attendees without accessibility needs to take in visual content without it competing with spoken content.
Live Captioning Pitfall #2: Skimping on Context
This tip isn’t solely about live captioning, but rather about how our brains retain information, which live captioning supports. Beginning sessions with a brief anchor text, even a scripted one, helps root attendees in what they’re about to take in. Primed with an overview of what’s to come, attendees are more likely to draw meaning from the session instead of being overwhelmed by simultaneously reading and listening to a presentation.
Live Captioning Pitfall #3: Relying on the Machines
Automatic speech recognition (ASR) programs have come a long way in recent years. But the goal of captioning is errorless consistency—something that ASR programs haven’t yet mastered.
Adding a professional captioner to the equation costs more, but for big-ticket or high-profile sessions or keynotes, the investment in quality pays off.
“While both solutions offer captioning, their costs and accuracy vary greatly,” Sheehan says. “Each solution has specific scenarios where it is more applicable than the other.” He suggests that planners work with professional captioners for critical events or those with legal implications, while ASR programs may be more efficient for routine meetings or webinars.
Professionals often work in tandem with ASR technology to ensure real-time, error-free captioning (including when the human captioner has to, say, sneeze). Their superior skill in distinguishing between voices, choosing the right word from context cues, and making captions more readable makes for a more accessible experience that can last well after the event.
Also note that ASR programs do not yet meet the standards of the Americans With Disabilities Act. As Samantha Evans, certification manager at the International Association of Accessibility Professionals, told Associations Now in 2022, “They’re a great starting point, but should not be the goal. You wouldn’t want to get your prescription instructions read to you with captions that were right maybe 80% of the time.”
Live Captioning Pitfall #4: Letting Captions Overtake Visuals
As noted above, visuals add key context to a presentation. If an in-room (or remote) screen has live captioning plastered over those visuals, it diminishes the value of both.
“Graphical content should leave 20% of screen real estate for the display of any onscreen captioning if applicable,” Sheehan says.
Work with your presenters ahead of time to find out when they’ll be using visuals that captioning would obscure, and work with an in-room technician to reposition the visuals during those periods or temporarily adjust their size.
Live Captioning Pitfall #5: Ignoring Specialized Vocabulary
Even skilled professional captioners will produce more reliable transcriptions if they know what to expect. Giving captioners or advanced ASR programs a word list before the session increases the chances that they’ll successfully distinguish between terms that may be bandied about in a session but aren’t part of everyday vocabulary.
Technical or specialized terms are the clearest example—some ASR programs wouldn’t be able to handle, say, a gathering of otorhinolaryngologists. But a word list can also help distinguish between similar-sounding words that may be used in a session. A psychology conference with a presentation about the relationship between interpersonal intelligence and intrapersonal intelligence, for example, would benefit from a word list containing both interpersonal and intrapersonal.
Sheehan also suggests breaking down long paragraphs into shorter phrases, omitting non-essential details, and simplifying complex sentences. He also recommends ensuring that presenters are aware of the importance of clear pronunciation and advising them to pause when introducing new terms.
Whatever option you choose for live captioning, know that you’re fulfilling attendee expectations. When TikTok added automatic captioning to its videos in 2023, it sent a clear signal: People want information shared in ways that help them understand what they’re experiencing, and as a meeting planner, that’s what you want too.