Video captioning is a crucial aid for many people, especially those with certain disabilities. Earlier this year we produced a news story explaining how to add captions to all your videos. However, we think it is also important to understand what captions are and why we use them.

Captions or subtitles?

Before we begin, it’s important to define what we are discussing. Some people use the terms captions, closed captions and subtitles interchangeably, but they have different meanings.

Subtitles are pieces of text that relay the spoken word – dialogue, narration, commentary – onto the screen. However, there is much more to a soundscape than the spoken word.

Captions represent a larger range of the sounds in a film or video. As well as the voices on screen, captions can convey the sound of the weather, expressions of emotion (sighs, screams, whimpers of delight), the woof-woof and meow of dogs and cats, the noise of vehicles or weapons, and so on.

Closed captions are captions that are optional – you can switch them on or off. This article discusses closed captions, which, for the sake of brevity, we will refer to simply as captions or captioning.

Open captions are always in view and, unlike their closed cousins, cannot be switched off. The best example is a film in which characters switch to another language – whether you want it or not, the producers have added a translation that is displayed on screen.
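To make the distinction concrete, here is a sketch of what a short closed-caption file might look like. It uses WebVTT, one common caption format on the web; the cue text and timings here are our own illustration, not taken from any particular video. Note how it carries non-speech sounds as well as dialogue:

```
WEBVTT

00:00:01.000 --> 00:00:04.000
[Thunder rumbling; rain against the window]

00:00:04.500 --> 00:00:07.000
<v Narrator>The storm showed no sign of stopping.

00:00:07.500 --> 00:00:09.000
[Dog barking in the distance]
```

Because these cues live in a separate text track rather than being burned into the picture, a video player can let viewers switch them on or off – exactly what makes such captions "closed" rather than "open".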

Sound reasoning: why we use captions

The main reason we employ closed captioning is to give people with hearing disabilities access to the content – factual information, for instance, or the storyline. This applies whether your video is aimed at recruiting, retaining, informing or educating students at Carleton, or whether you are using third-party content (e.g., showing a clip from a documentary in class). In educational settings especially, it is crucial that all information is conveyed and accessible.

But it is not only people who are deaf or have an acute hearing disability who are assisted by captioning. Viewers with ADHD also find it useful, as do the many individuals with undiagnosed auditory processing disorder (APD), which makes it harder to process information taken in aurally. Captions not only help keep viewers engaged but also present information in a form that is easier to process when read, since some learners with ADHD find it harder to process or retain what they hear.

Emotions described in captions also help some viewers on the autism spectrum. Where the emotions of characters or the mood of a scene might not immediately register, seeing them written out on screen (e.g., “Nervous whispering”) lets these viewers understand immediately what the character of the scene is.

Many disabilities operate on a spectrum of severity. There is a difference between being legally blind and needing reading glasses, but the ability to enlarge text on a screen is useful to anyone who has forgotten their reading glasses. In the same way, people’s hearing can decline slowly over time and make captions necessary without the loss ever becoming a severe disability. There can also be a period after prolonged exposure to loud noise (the days after attending a rock concert or a loud night club) when hearing is impaired to the point that an ordinary video cannot be made out.

But while people with disabilities form a major portion of those who use closed captions, research shows that most people use the feature at least some of the time, and that the majority of those who do use it do not have an auditory disability.

Research in the USA (full report here) showed that in a surveyed population of university students across the country, 54% of respondents used captioning all (35%) or some (19%) of the time. In such a population (the average age of respondents was 24), the gradual hearing loss associated with an aging demographic is not a factor. Meanwhile, a Netflix survey found that 80% of users employ captions at some point every month. And a 2021 BBC survey found that while fewer than a quarter of viewers aged 56-75 used captions, a whopping 80% of viewers in the 18-24 age bracket switch them on.

In the US study cited above, 90% of students with disabilities found captions moderately to extremely helpful, which might be expected; the stunning statistic is that 87.5% of students without disabilities said the same. Why is this the case?

Here’s a caption for you: Accessibility is for everyone

We are like a stuck record with this: accessibility helps everybody. Therefore, it makes sense that as with so many features designed to aid accessibility, closed captioning is going to help anyone who chooses to use it.

To start with, there are many environmental situations in which captions make a video easier to access. If someone in the same room has to take a phone call, or you do not want to wake someone sleeping in the next room, you can mute the video and still follow exactly what is going on. The rise of captions can also be linked to COVID and enforced working and studying from home in close confines with roommates or family: learning, as well as TV viewing, with closed captioning switched on suddenly became socially necessary as well as acceptable.

Captions are also much better suited to watching video on public transit, where many people can consume an additional two hours of media a day. In a survey by Preply, three-quarters of Generation Z respondents watched video content, including material for school, in public, with nearly half doing so on mass transit. Anyone who can read captions can consume it in silence. (Unless you take the number 10 bus from Carleton, where someone is always watching “The Masked Singer” unmuted and at full blast.)

Hearing or reading the information is one thing; focus is another factor when consuming on-screen content. Viewers find it much easier to concentrate on video with captions, allowing them to take in more of the material. Even more crucially, rates of comprehension and retention of the material are much higher.

Often people find it hard to decipher what is being said – because a narrator’s accent is unfamiliar to them, for instance, or because faculty may not speak clearly due to a disability or out of habit. It is therefore much easier if what is said is also written out.

Additionally, if you are studying subject matter in English but it is not your first language, you can be thrown by homophones (in fact, this can happen even if it is your first language, and not only in English). Closed captioning lets you know whether someone just said their, there or they’re. In general, listening with the written words underneath is great practice for improving your skills in a language.

What’s in it for us?

As a creator of content, how does this benefit you, and how does it benefit the university? Clearly, it is a big boon to student retention if students can access course materials, and as word spreads of Carleton’s commitment to a culture and practice of accessibility, it will hopefully also encourage recruitment.

But captioning videos correctly also helps in other ways. Google cannot search the content of a video, but if you create captions, you can also publish their transcripts, which Google can index. In other words, captions improve your search engine optimization, and more people will find your content.

Oh, and as we often mention last of all when it comes to accessibility: it is the law – AODA rules state that all non-live video on public-facing sites must be captioned. For non-public sites, such as educational videos in Brightspace, it’s simply the right thing to do (as well as still being subject to civil law).

Now that you have read about why we use them, read about how captions can help, with highlighted examples – some good, some funny, and some bad.