Asier Romero and Irati de Pablo (Universidad del País Vasco / Euskal Herriko Unibertsitatea)
Multimodal Corpus Analysis of English Presentations for EFL teaching: Speech Contents (...)
Miharu Fuyuno (Kyushu University) and Takeshi Saitoh (Kyushu Institute of Technology)
However, public speaking is known as one of the major examples of social phobia, and gaining proficiency and confidence in it is a hard task (Kessler, Stein, & Berglund, 1998). This is even more so when speakers must present in a foreign language. Effective, evidence-based materials and methods for teaching and learning English public speaking are needed in EFL classrooms (Author1 et al., 2016a; 2016b; Author 1, Komiya & Author 2, 2018).
In the field of ELT, methods for teaching and learning public speaking have been the target of various studies. Previous studies argued that not only speech content but also speakers' eye contact and other nonverbal behaviors play critical roles in effective English presentations (Sellnow, 2004; Slater et al., 1999). However, although there have been many studies on speakers' speech content, nonverbal behaviors, and subjective evaluation by audiences, objective analyses that combine speakers' speech content, their nonverbal behaviors, and audience behaviors have been rare.
This paper approaches the issue by analyzing multimodal data on speakers' speech content (words), their nonverbal behaviors, and audience gaze points. The data were obtained from a multimodal corpus of English presentations constructed from digital audio and video data, together with eye-tracking data from multiple audience members. Subjective evaluation data from the audience were also included. For the audio data, speech pauses were extracted using acoustic analysis software, and the spoken content (words) of each speech unit between two pauses was then annotated. For the video data, speakers' eye contact, hand gestures, and use of slides were annotated with the multimedia annotation software ELAN (cf. Jewitt et al., 2016). To examine audience behaviors, audience gaze points were recorded and annotated using a Tobii Eye Tracker 4C.
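The abstract does not name the acoustic analysis software or its pause criterion; as an illustration only, the pause-extraction step can be sketched as a simple RMS-energy threshold over short frames, where a stretch of low-energy frames longer than a minimum duration counts as a pause (all function names and parameter values below are assumptions, not the authors' method):

```python
import numpy as np

def extract_pauses(signal, sr, frame_ms=25, hop_ms=10,
                   threshold=0.02, min_pause_s=0.25):
    """Return (start_s, end_s) pairs for silent stretches in a mono signal.

    A frame is 'silent' when its RMS energy falls below `threshold`;
    runs of silent frames lasting at least `min_pause_s` count as pauses.
    """
    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    n_frames = 1 + max(0, (len(signal) - frame) // hop)
    silent = [np.sqrt(np.mean(signal[i * hop:i * hop + frame] ** 2)) < threshold
              for i in range(n_frames)]

    pauses, start = [], None
    for i, s in enumerate(silent + [False]):  # sentinel closes a trailing pause
        if s and start is None:
            start = i
        elif not s and start is not None:
            if (i - start) * hop / sr >= min_pause_s:
                pauses.append((start * hop / sr, i * hop / sr))
            start = None
    return pauses

# Synthetic example: two 1 s noise "speech" bursts separated by 0.5 s of silence.
sr = 16000
burst = 0.3 * np.random.default_rng(0).standard_normal(sr)
audio = np.concatenate([burst, np.zeros(sr // 2), burst])
print(extract_pauses(audio, sr))  # one pause, around 1.0–1.5 s
```

Dedicated tools such as Praat are normally used for this step; the sketch only illustrates the underlying idea of segmenting speech units at pauses.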
The results of the multimodal analysis indicated that speakers who made eye contact more frequently tended to be watched by the audience more than those who made it less often, and that certain words and gestures effectively kept the audience engaged with the presentation. These results may allow us to develop effective and detailed teaching materials on public speaking for EFL learners.
Series: CILC2021: Los corpus y la adquisición y enseñanza del lenguaje / Corpora, LA and teaching
Silvia Sánchez Calderón (Universidad Nacional de Educación a Distancia, UNED)
Xiaolong Lu (University of Arizona)
Nan Jiang (Vanderbilt University)
Andrea Listanti (University for Foreigners of Siena), Jacopo Torregrossa (Goethe-Universität Frankfurt) and Liana Tronci (University for Foreigners of Siena)
A corpus-based analysis of clausal complexity in experts’ and learners’ academic texts: A case study
Elizaveta Smirnova (National Research University Higher School of Economics)
Alicia San Mateo Valdehíta and Marc Rodius (Universidad Nacional de Educación a Distancia, UNED)
Begoña Clavel Arroitia and Barry Pennock-Speck (Universitat de València)
Raquel Mateo Mendaza (Universidad de La Rioja)
Daniel Díez Lorenzo (Universidad de Cantabria)
Silvia Aguinaga Echeverria (Universidad de Navarra) and Nausica Marcos Miguel (Denison University)
Francisco Javier Fernández Polo (Universidad de Santiago de Compostela)
Vanessa Cardoso Egrejas (CLUNL) and Antonio Chenoll (Universidade Aberta)
Ting Xu and Amaya Mendikoetxea (Universidad Autónoma de Madrid)
Adrián Granados and Francisco Lorenzo (Universidad Pablo de Olavide)
Anita Ferreira (Universidad de Concepción)