A Brief History of Closed Captioning

Closed captioning and subtitles have been part of accessible media since 1972 and have played a large role in the development of disability rights, as well as civil rights more broadly, in the United States. Created initially for the Deaf and hard-of-hearing, captioning has since been implemented on a large scale across television programming and now on the Internet through YouTube and many other video-sharing services. The primary benefit of these services is to provide a convenient visual interpretation, through text or symbols, of televised audio content, including speech, music, and sound effects.

The first demonstration of captioning services took place at the First National Conference on Television for the Hearing Impaired in 1971. By 1976, PBS had become a major proponent of the technology, engineering and transmitting captions to television viewers for prerecorded programs.

Real-time captioning of live broadcasts was developed by the National Captioning Institute in 1982. The process relied on highly trained stenographers capable of typing more than two hundred words a minute to produce captions in close to real time. Public television station WGBH-TV in Boston, one of the earliest users of closed captioning, remains a major producer of captions.

In 1980, through the influence of the newly created National Captioning Institute, commercial television stations began regularly scheduled closed-captioned programming, viewable through a set-top telecaption adapter. Large steps have been taken in the last 30 years to make closed captioning more readily accessible to the Deaf and hard-of-hearing. Decoding circuitry is now built directly into televisions themselves, making adapters obsolete. In 2014, the FCC approved higher quality standards for captions, covering accuracy, timing, completeness, and placement, ensuring that progress in these technologies continues.

Over the 42 years since subtitles came into use, there have been many innovations, including the translation and transcription of audio content into a variety of languages. These translations have become common in the modern film and television industry. In the United States, closed captioning is now also required for regular Spanish-language television programming.

What is the difference between subtitles and captions?

Subtitles are the visual representation of dialog, transcribed or translated, displayed onscreen (either embedded in or superimposed over a portion of the picture) in films, television, or video games. Subtitles are generally displayed at the bottom of the screen in sync with the audio track, making the dialog comprehensible to viewers who do not understand the spoken language or when the audio is temporarily unintelligible (e.g., when drowned out by other sound in a public setting). Subtitles may also be used when a viewer wishes to hear the audio track in its original language and read the text in translation rather than hearing the dialog dubbed into the viewer's own language.

Captions are similar to subtitles but may include, in addition to dialog, other audio information such as sound effects and symbolic representations of music, and may even identify the speaker when this is not visually evident. The words captions and subtitles are frequently used interchangeably, but the main distinction is that subtitles assume viewers can hear the audio track but for some reason find it unintelligible. Captions, by contrast, are intended primarily for an audience that is unable to hear, or has difficulty hearing, the audio track.
