
Brain mechanisms for processing co-speech gesture: A cross-language study of spatial demonstratives


Authors: J. Stevens & Y. Zhang
Year: 2014
Journal: Journal of Neurolinguistics

Abstract:

This electrophysiological study investigated the relationship between language and nonverbal socio-spatial context in demonstrative use during speech communication. Adult participants from an English language group and a Japanese language group were asked to make congruency judgments for the simultaneous presentation of an audio demonstrative phrase in their native language and a picture that included two human figures as speaker and hearer, as well as a referent object in different spatial arrangements. The demonstratives ("this" and "that" in English; "ko," "so," and "a" in Japanese) were varied across the visual scenes to produce expected and unexpected combinations for referring to an object based on its relative spatial distance to the speaker and hearer. Half of the trials included an accompanying pointing gesture in the picture, and the other half did not. Behavioral data showed robust congruency effects, with longer reaction times for incongruent trials in both subject groups irrespective of the presence or absence of the pointing gesture. Both subject groups also showed a significant N400-like congruency effect in the event-related potential responses for the gesture trials, a finding predicted from previous work (Stevens & Zhang, 2013). In the no-gesture trials, only the English data showed a P600 congruency effect preceded by a negative deflection. These results provide evidence for shared brain mechanisms for processing demonstrative expression congruency, as well as language-specific neural sensitivity to encoding the co-expressivity of gesture and speech.
