
Project

MUST - Multimodal stance-taking in interaction

A fundamental property of language is its ability to simultaneously represent subjects, objects or events and express the speaker's stance towards these representations. Although stance-taking as a socially contextualized and recognized interpersonal phenomenon has received substantial attention in different subfields of linguistics, its multimodal realization in real-life interaction remains largely unexplored. This research project zooms in on the interplay of different semiotic resources, including manual gestures and signs, posture, facial expressions, touch and eye gaze, in complex stance-taking acts, which may be realized simultaneously (stance-stacking) or sequentially, within or across speakers engaged in interaction (co-stacking). Through a balanced set of three interrelated phenomena (multimodal grounding and distancing in irony, depiction of embodied performances, and full-body enactments of others), involving different interaction types (spontaneous interactions, narratives, masterclasses) and languages (Dutch, Flemish Sign Language, English), we aim to develop a full-fledged corpus-based socio-cognitive embodied account of multimodal stance-taking.
Date: 1 Oct 2020 → Today

Keywords: multimodality, stance-taking, social interaction, embodiment, signed language, irony, music masterclass, demonstration, social cognition

Disciplines: Corpus linguistics, Sign language research, Discourse analysis, Pragmatics