APRIL 25, 2019

Gestures and visual animations reveal cognitive origins of linguistic meaning

by New York University


Gestures and visual animations can help reveal the cognitive origins of meaning, indicating that our minds can assign a linguistic structure to new informational content "on the fly"—even if it is not linguistic in nature.


These conclusions stem from two studies, one in linguistics and the other in experimental psychology, appearing in Natural Language & Linguistic Theory and Proceedings of the National Academy of Sciences (PNAS).

"These results suggest that far less is encoded in words than was originally thought," explains Philippe Schlenker, a senior researcher at Institut Jean-Nicod within France's National Center for Scientific Research (CNRS) and a Global Distinguished Professor at New York University, who wrote the first study and co-authored the second. "Rather, our mind has a 'meaning engine' that can apply to linguistic and non-linguistic material alike.

"Taken together, these findings provide new insights into the cognitive origins of linguistic meaning."

Contemporary linguistics has established that language conveys information through a highly articulated typology of inferences. For instance, "I have a dog" asserts that I own a dog, but it also suggests (or "implicates") that I have no more than one: the hearer assumes that if I had two dogs, I would have said so (as "I have two dogs" is more informative).

Unlike asserted content, implicated content isn't targeted by negation. "I don't have a dog" thus means that I don't have any dog, not that I don't have exactly one dog. There are further inferential types characterized by further properties: the sentence "I spoil my dog" still conveys that I have a dog, but now this is neither asserted nor implicated; rather, it is "presupposed," i.e., taken for granted in the conversation. Unlike asserted and implicated information, presuppositions are preserved in negative statements, and thus "I don't spoil my dog" still presupposes that I have a dog.
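To make this negation diagnostic concrete, here is a minimal toy model in Python (an illustrative sketch of our own, not machinery from either study). It pairs each example sentence with its labeled inferences and marks which ones survive negation:

    # Toy model of the inferential typology described above.
    INFERENCES = {
        "I have a dog": [
            ("asserted", "I own a dog"),
            ("implicated", "I own no more than one dog"),
        ],
        "I spoil my dog": [
            ("asserted", "I spoil my dog"),
            ("presupposed", "I have a dog"),
        ],
    }

    def survives_negation(inference_type):
        # Presupposed content "projects" through negation;
        # asserted and implicated content does not.
        return inference_type == "presupposed"

    for sentence, inferences in INFERENCES.items():
        print(f'"{sentence}" versus its negation:')
        for itype, content in inferences:
            status = "preserved" if survives_negation(itype) else "lost"
            print(f"  {itype}: {content} -> {status} under negation")

Running the sketch reproduces the pattern in the text: under negation, "I have a dog" loses both its asserted and implicated content, while "I don't spoil my dog" keeps the presupposition that I have a dog.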

A fundamental question of contemporary linguistics is: Which of these inferences come from arbitrary properties of words stored in our mental dictionary and which result from general, productive processes?

In the Natural Language & Linguistic Theory work and the PNAS study, the latter written by Lyn Tieu of Australia's Western Sydney University, Schlenker, and CNRS's Emmanuel Chemla, the authors argue that nearly all inferential types result from general, and possibly non-linguistic, processes.


Their conclusion is based on an understudied type of sentence containing gestures that replace ordinary words. For instance, in the sentence "You should UNSCREW-BULB," the capitalized expression encodes a gesture of unscrewing a bulb from the ceiling. Even if the hearer has never seen the gesture before (and thus couldn't have stored it in a mental dictionary), it is understood thanks to its visual content.

This makes it possible to test how the gesture's informational content (i.e., unscrewing a bulb that's on the ceiling) is divided on the fly among the typology of inferences. In this case, the unscrewing action is asserted, but the presence of a bulb on the ceiling is presupposed, as shown by the fact that the negation ("You shouldn't UNSCREW-BULB") preserves this information. By systematically investigating such gestures, the Natural Language & Linguistic Theory study reaches a ground-breaking conclusion: nearly all inferential types (eight in total) can be generated on the fly, suggesting that all are due to productive processes.
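The same toy model extends naturally to the gesture case (again, a hypothetical sketch of ours, following the article's example rather than the study's formal analysis): the gesture's visual content is split on the fly between an asserted action and a presupposed precondition, and only the latter projects through negation.

    # Hypothetical division of the visual content of UNSCREW-BULB.
    gesture_content = {
        "asserted": "an unscrewing action is performed",
        "presupposed": "there is a bulb on the ceiling",
    }

    def project_through_negation(content):
        # Under "You shouldn't UNSCREW-BULB", only the presupposed
        # part of the content is preserved.
        return {itype: inf for itype, inf in content.items()
                if itype == "presupposed"}

    print(project_through_negation(gesture_content))
    # Output: {'presupposed': 'there is a bulb on the ceiling'}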

The PNAS study investigates four of these inferential types with experimental methods, confirming the results of the linguistic study. But it also goes one step further by replacing the gestures with visual animations embedded in written texts, thus answering two new questions: First, can the results be reproduced for visual stimuli that subjects cannot possibly have seen in a linguistic context, given that people routinely speak with gestures but not with visual animations? Second, can entirely non-linguistic material be structured by the same processes?

Both answers are positive.

In a series of experiments, approximately 100 subjects watched videos of sentences in which some words were replaced either by gestures or by visual animations. They were asked how strongly they derived various inferences that are the hallmarks of different inferential types (for instance, inferences derived in the presence of negation). The subjects' judgments displayed the characteristic signature of four classic inferential types (including presuppositions and implicated content) not only in gestures but also in visual animations: the informational content of these non-standard expressions was, as expected, divided on the fly among the well-established slots of the inferential typology.
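As a rough illustration of what such a signature looks like, the sketch below averages hypothetical inference-strength ratings by condition (the numbers are invented for illustration and are not the PNAS data; the paper's actual design and statistics are more involved):

    from statistics import mean

    # Made-up ratings (1-7 scale) of how strongly subjects derived
    # an inference; invented for illustration, not the PNAS data.
    ratings = {
        ("presupposition", "plain"):   [6, 7, 6, 5, 7],
        ("presupposition", "negated"): [6, 6, 7, 5, 6],
        ("implicature", "plain"):      [5, 6, 5, 6, 5],
        ("implicature", "negated"):    [2, 1, 2, 3, 2],
    }

    for (itype, polarity), scores in ratings.items():
        print(f"{itype:>14} / {polarity:<7}: mean rating = {mean(scores):.1f}")

The signature pattern in this kind of comparison: presupposition ratings stay high under negation, while implicature ratings drop sharply.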


More information: Lyn Tieu et al., "Linguistic inferences without words," Proceedings of the National Academy of Sciences (2019). DOI: 10.1073/pnas.1821018116


Provided by New York University
