Conference
Notice
Language:
English
Terms of use
Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)
DOI: 10.60527/jpe2-w464
Cite this resource:
LLF. (2021, May 20). Multimodality and Memory: Outlining Interface Topics in Multimodal Natural Language Processing, in Dialogue, Memory and Emotion. [Video]. Canal-U. https://doi.org/10.60527/jpe2-w464. (Accessed 19 September 2024)

Multimodality and Memory: Outlining Interface Topics in Multimodal Natural Language Processing

Recorded: 20 May 2021 - Published online: 20 June 2021
Description

Multimodal dialogue, the use of speech and non-speech signals, is the basic form of interaction. As such, it is couched in the basic interaction mechanisms of grounding and repair. This apparently straightforward view already has several repercussions. Firstly, non-speech gestures need representations that are subject to the parallelism constraints on clarification requests known from verbal expressions. Secondly, co-activity between speaker and addressee on some channel is the rule for virtually the whole time course of interaction and makes multimodal overlap the norm, thereby questioning the orthodox notion of sequential turns. Thirdly, if turns are difficult to maintain, a new account of interaction is needed: we propose to think of it in terms of polyphonic interaction, inspired by (classical) music. The multimodal speech and non-speech examples given throughout the talk all seem to be explainable only when considering at least the interaction of content and dialogue semantics, working-memory constraints, and attentional mechanisms, and hence are examples of interface topics for cognitive science.
