Mimes help us 'see' objects that don't exist

JOHNS HOPKINS UNIVERSITY

Research News

When we watch a mime seemingly pull a rope, climb steps or try to escape that infernal box, we don't struggle to recognize the implied objects -- our minds automatically "see" them, a new study concludes.

To explore how the mind processes the objects mimes seem to interact with, Johns Hopkins University cognitive scientists brought the art of miming into the lab, concluding that invisible, implied surfaces are represented rapidly and automatically. The work appears today in the journal Psychological Science.

"Most of the time, we know which objects are around us because we can just see them directly. But what we explored here was how the mind automatically builds representations of objects that we can't see at all but that we know must be there because of how they are affecting the world," said senior author Chaz Firestone, an assistant professor who directs the university's Perception & Mind Laboratory. "That's basically what mimes do. They can make us feel like we're aware of some object just by seeming to interact with it."

In the experiments, 360 people were tested online. They watched clips in which a character (Firestone himself) mimed colliding with a wall and stepping over a box in a way that suggested those objects were there, only invisible. Afterward, a black line appeared in the spot on the screen where the implied surface would have been. The line could be horizontal or vertical, so it either matched or didn't match the orientation of the surface that had just been mimed. Participants had to quickly report whether the line was vertical or horizontal. The team found that people responded significantly faster when the line aligned with the mimed wall or box, suggesting that the implied surface was actively represented in the mind -- so much so that it affected responses to the real line participants saw immediately afterward.
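For readers who want the logic of the design in concrete form, the sketch below simulates the basic comparison: response times on trials where the probe line's orientation matches the mimed surface versus trials where it does not. This is a hypothetical Python illustration, not the study's actual stimuli or analysis code, and the numbers in it (including the simulated speed-up) are invented for demonstration.

```python
# Illustrative sketch only -- not the authors' code. It mimics the logic of a
# congruency (priming) analysis: compare response times when the probe line's
# orientation matches vs. mismatches the orientation of the mimed surface.
# All trial data below are simulated for demonstration purposes.

import random
import statistics

random.seed(1)

def simulate_trial(congruent: bool) -> float:
    """Return a fake response time in ms; congruent trials are slightly faster."""
    base = random.gauss(550, 60)             # baseline RT for the orientation judgment
    return base - 25 if congruent else base  # hypothetical congruency benefit (~25 ms)

# 200 congruent and 200 incongruent trials
trials = [(c, simulate_trial(c)) for c in [True, False] * 200]

congruent_rts = [rt for c, rt in trials if c]
incongruent_rts = [rt for c, rt in trials if not c]

print(f"mean RT, line matches mimed surface:    {statistics.mean(congruent_rts):.0f} ms")
print(f"mean RT, line mismatches mimed surface: {statistics.mean(incongruent_rts):.0f} ms")
# A reliably faster mean on matching trials is the signature the study reports.
```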

Participants had been told not to pay attention to the miming, but they couldn't help but be influenced by those implied surfaces, said lead author Pat Little, who did the work as an undergraduate at Johns Hopkins and is now a graduate student at New York University.

"Very quickly people realize that the mime is misleading them, and that there is no actual connection between what the person does and the type of line that appears," Little said. "They think, 'I should ignore this thing because it's getting in my way', but they can't. That's the key. It seems like our minds can't help but represent the surface that the mime is interacting with - even when we don't want to."

The work is partly inspired by a phenomenon in psychology called the Stroop Effect, where the name for one color is written in ink of a different color (e.g., the word "red" is written in blue ink); when a person is given the task of saying the color of the ink (blue), they can't help but read the mismatched text (red), which distracts them and slows them down. In this regard, miming is like reading: Just as you can't help but read the text you see (even when you're supposed to ignore it), you can't help but recognize the object being mimed, even when it's getting in the way of another task.

While the findings could seem to diminish the work of mimes, since they suggest our brains will imagine these objects automatically, the researchers insist mimes still deserve credit.

"This suggests that miming might be different from other kinds of acting," Little said. "If the mime is skilled enough, understanding what's going on doesn't require any effort at all -- you just get it automatically."

The findings could also inform artificial intelligence research on machine vision.

"If you're trying to build a self-driving car that can see the world and steer around objects, you want to give it all the best tools and tricks," Firestone said. "This study suggests that, if you want a machine's vision to be as sophisticated as ours, it's not enough for it to identify objects that it can see directly -- it also needs the ability to infer the existence of objects that aren't even visible at all."

###

The work was supported by the National Science Foundation (Grant #2021053), the Johns Hopkins Science of Learning Institute, and a STAR award from the Johns Hopkins Office of Undergraduate Research.