Respeaking

Live subtitling (or “real-time captioning”, as it is known in the United States) is the real-time transcription of spoken words, sound effects, important musical cues and other relevant audio information that enables deaf and hard-of-hearing viewers to follow a live audiovisual programme. Since its introduction in the United States and Europe in the early 1980s, it has commonly been regarded as one of the most challenging modalities within media accessibility. It can be produced through different methods: standard QWERTY keyboards, Velotype and the two most common approaches, namely stenography and respeaking.

At GALMA we are involved in research (and practice) on stenography and, especially, on respeaking, which may be defined as:


a technique in which a respeaker listens to the original sound of a (live) programme or event and respeaks it, including punctuation marks and some specific features for the deaf and hard-of-hearing audience, to a speech recognition software, which turns the recognized utterances into subtitles displayed on the screen with the shortest possible delay
(Romero-Fresco, 2011: 1)

In many ways, respeaking is to subtitling what interpreting is to translation, namely a leap from the written to the oral modality without the safety net of time. Although respeakers are normally encouraged to repeat the original soundtrack, and hence produce verbatim subtitles, the fast-paced delivery of speech in media content often makes this difficult. The challenges arising from high speech rates are compounded by other constraints. These include the need to incorporate punctuation marks through dictation while the respeaking of the original soundtrack is unfolding, and the expectation that respoken output will abide by standard viewers’ reading rates. Consequently, respeakers often end up paraphrasing, rather than repeating or shadowing, the original soundtrack.

At GALMA we are conducting leading research on the quality of live subtitles with several governments, universities and companies around the world using our NER model. We are also delivering face-to-face and online training on intra- and interlingual respeaking, and we have set up LiRICS (Live Reporting International Certification Standard), the first worldwide certification process for professional respeakers.
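As an illustration of how the NER model assesses live-subtitle quality, the sketch below implements the standard NER formula, Accuracy = (N − E − R) / N × 100, where N is the number of words in the subtitles, E the sum of edition-error penalties and R the sum of recognition-error penalties (errors are conventionally weighted by severity, e.g. 0.25 for minor, 0.5 for standard and 1 for serious). This is a minimal sketch for orientation only; the function name and error values are illustrative, and a full NER assessment also involves qualitative analysis by a trained evaluator.

```python
def ner_score(n_words, edition_errors, recognition_errors):
    """Compute the NER accuracy rate (%) for a stretch of live subtitles.

    n_words            -- N, total number of words in the respoken subtitles
    edition_errors     -- E, sum of severity-weighted edition-error penalties
    recognition_errors -- R, sum of severity-weighted recognition-error penalties
    """
    if n_words <= 0:
        raise ValueError("n_words must be positive")
    return (n_words - edition_errors - recognition_errors) / n_words * 100


# Illustrative figures: 500 subtitle words, edition errors weighing 3.0
# and recognition errors weighing 2.5 yield an accuracy of 98.9%.
print(round(ner_score(500, 3.0, 2.5), 2))
```

In published applications of the model, an accuracy rate of 98% is usually taken as the threshold for acceptable quality, which is why even small weighted error totals matter at broadcast speech rates.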



References:

Eugeni, C. and G. Mack (eds.) (2006). inTRAlinea, Special Issue on New Technologies in Real Time Intralingual Subtitling. Available online: http://www.intralinea.org/specials/respeaking [last accessed 20 December 2017].

Romero-Fresco, P. (2011). Subtitling through Speech Recognition: Respeaking. Manchester: Routledge.

Romero-Fresco, P. (2016). Accessing communication: The quality of live subtitles in the UK. Language & Communication, 49, 56–69.
