Language: Spanish
Uploaded: 2021-04-14T00:00:00+02:00
Duration: 20m 13s
Venue: Conference
Views: 1,248

Multimodal Corpus Analysis of English Presentations for EFL teaching: Speech Contents (...)

Miharu Fuyuno (Kyushu University) and Takeshi Saitoh (Kyushu Institute of Technology)

Description

This study examines the connections between speakers’ spoken content (words), facial movements, use of slides, and audience gaze points in English presentations by analyzing data from a multimodal corpus. The need to deliver public speaking in English is growing in a globalizing society, and this is also the case among EFL learners. For example, the Common European Framework of Reference for Languages: Learning, teaching, assessment (CEFR) points out the importance of four main communication modes in language learning and teaching: reception, production, interaction, and mediation. Both delivering a public speech and listening to one are highly relevant to all of these modes, and detailed skills related to public speaking are also listed in the CEFR can-do list.
However, public speaking is known as one of the major triggers of social phobia, and gaining proficiency and confidence in it is a hard task (Kessler, Stein, & Berglund, 1998). This is even more so when speakers present in a foreign language. Effective, evidence-based materials and methods for teaching and learning English public speaking are needed in EFL classrooms (Author1 et al., 2016a; 2016b; Author 1, Komiya & Author 2, 2018).
In the field of ELT, methods for teaching and learning public speaking have been the target of various studies. Previous studies argued that not only speech content but also speakers’ eye contact and other nonverbal behaviors play critical roles in effective English presentations (Sellnow, 2004; Slater et al., 1999). However, although there have been many studies on speakers’ speech content, nonverbal behaviors, and subjective evaluation by audiences, objective data analysis that combines speakers’ speech content, their nonverbal behaviors, and audience behavior has been rare.
This paper approaches the issue by analyzing multimodal data on speakers’ speech content (words), nonverbal behaviors, and audience gaze points. The data were obtained from a multimodal corpus of English presentations constructed from digital audio and video data together with eye-tracking data from multiple audience members; subjective evaluation data from the audience were also included. For the audio data, speech pauses were extracted using acoustic analysis software, and the spoken content (words) of each speech unit between two pauses was then annotated. For the video data, speakers’ eye contact, hand gestures, and use of slides were annotated with the multimedia annotation software ELAN (cf. Jewitt et al., 2016). To examine audience behavior, audience gaze points were recorded and annotated using a Tobii Eye Tracker 4C.
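The pause-extraction step described above can be illustrated with a minimal sketch. This is not the authors’ actual pipeline (the abstract only says pauses were extracted with acoustic analysis software); it is a generic silence-based approach in which frame-level RMS energy is thresholded and runs of low-energy frames longer than a minimum duration are treated as pauses. All parameter names and values here (`frame_s`, `threshold`, `min_pause_s`) are illustrative assumptions.

```python
# Silence-based pause extraction sketch: frames whose RMS energy falls
# below a threshold are "silent"; runs of silent frames at least
# min_pause_s long are reported as pauses (start_s, end_s). The spans
# between pauses would then be the speech units to annotate.
import math

def extract_pauses(samples, rate, frame_s=0.02, threshold=0.05, min_pause_s=0.3):
    frame = max(1, int(rate * frame_s))
    # Frame-level RMS energy -> boolean "is silent" per frame.
    silent = []
    for i in range(0, len(samples), frame):
        chunk = samples[i:i + frame]
        rms = math.sqrt(sum(x * x for x in chunk) / len(chunk))
        silent.append(rms < threshold)
    # Collect runs of silent frames that are long enough to count as pauses.
    pauses, start = [], None
    for idx, is_silent in enumerate(silent):
        if is_silent and start is None:
            start = idx
        elif not is_silent and start is not None:
            if (idx - start) * frame_s >= min_pause_s:
                pauses.append((start * frame_s, idx * frame_s))
            start = None
    if start is not None and (len(silent) - start) * frame_s >= min_pause_s:
        pauses.append((start * frame_s, len(silent) * frame_s))
    return pauses

# Toy example: 1 s of "speech" (amplitude 0.5), 0.5 s of silence, 1 s of speech.
rate = 1000
signal = [0.5] * rate + [0.0] * (rate // 2) + [0.5] * rate
pauses = extract_pauses(signal, rate)  # one pause, roughly (1.0, 1.5)
```

In practice the threshold would be set adaptively (e.g. relative to the recording’s noise floor), and dedicated tools such as Praat offer more robust pause detection; the sketch only shows the shape of the computation.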
The results of the multimodal analysis indicated that speakers who made eye contact more often tended to be watched by the audience more than those who made less eye contact, and that certain words and gestures effectively led the audience to stay engaged with the presentation. These results may allow us to develop effective and detailed teaching materials on public speaking for EFL learners.

Owners

Congreso Cilc 2021


Series: CILC2021: Los corpus y la adquisición y enseñanza del lenguaje / Corpora, LA and teaching