Multimodal meaning making: the annotation of nonverbal elements in multimodal corpus transcription

    1. [1] Trier University of Applied Sciences, Trier, Germany

  • Location: Research in Corpus Linguistics (RiCL), ISSN-e 2243-4712, Vol. 9, No. Extra 1, 2021 (Special issue: "Challenges of combining structured and unstructured data in corpus development"), pp. 63-88
  • Language: English
  • Abstract
    • The article discusses how to integrate annotation for nonverbal elements (NVE) from multimodal raw data as part of a standardized corpus transcription. We argue that it is essential to include multimodal elements when investigating conversational data, and that in order to integrate these elements, a structured approach to complex multimodal data is needed. We discuss how to formulate a structured corpus-suitable standard syntax and taxonomy for nonverbal features such as gesture, facial expressions, and physical stance, and how to integrate it in a corpus. Using corpus examples, the article describes the development of a robust annotation system for spoken language in the corpus of Video-mediated English as a Lingua Franca Conversations (ViMELF 2018) and illustrates how the system can be used for the study of spoken discourse. The system takes into account previous research on multimodality, transcribes salient nonverbal features in a concise manner, and uses a standard syntax. While such an approach introduces a degree of subjectivity through the criteria of salience and conciseness, the system also offers considerable advantages: it is versatile and adaptable, flexible enough to work with a wide range of multimodal data, and it allows both quantitative and qualitative research on the pragmatics of interaction.
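To give a sense of how such annotations could support quantitative work on interaction, the sketch below counts nonverbal element (NVE) tags in a toy transcript. The angle-bracket tag syntax, the speaker labels, and the sample lines are illustrative assumptions only; the abstract does not specify the actual ViMELF markup conventions.

```python
import re
from collections import Counter

# Hypothetical inline markup for nonverbal elements (NVE), e.g. "<nods>" or
# "<leans forward>"; the real ViMELF annotation syntax may differ.
NVE_PATTERN = re.compile(r"<(?P<nve>[^<>]+)>")

def extract_nve(transcript_lines):
    """Count NVE tags found in bracketed inline annotations."""
    counts = Counter()
    for line in transcript_lines:
        for match in NVE_PATTERN.finditer(line):
            counts[match.group("nve").strip().lower()] += 1
    return counts

# Invented sample turns, for illustration only.
sample = [
    "S1: that's really interesting <nods> yeah",
    "S2: <laughs> I thought so too <leans forward>",
]

print(extract_nve(sample))
# Counter({'nods': 1, 'laughs': 1, 'leans forward': 1})
```

A concise, machine-readable tag syntax of this kind is what makes frequency-based (quantitative) queries over nonverbal behaviour possible alongside close qualitative reading of individual turns.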

