Although the differences between the cue words in the different panels may appear slight, there were in fact significant differences in the expected direction: the “happy” cue engendered a greater proportion of happy facial expressions than the “sad” cue, the “sad” cue produced more sadness than any of the others, and the “city” cue kept the faces primarily neutral. (The “other” category in the above figure represents the sum of the angry, surprised, scared, and disgusted components; it can be ignored for present purposes.)
This result by itself shows that when people retrieve personal memories, their faces reflect the emotional content of those memories. Given that expressions were tracked automatically by software from a video recording, the results hint at the possibility that the contents of a person’s memories might be detectable without knowledge of what the person is actually reporting.
Turning to the differentiation between components of autobiographical memories, El Haj and colleagues classified the recollective reports according to the following criteria:
Memories describing personal events situated in time and space, with a duration of less than 1 day, and accompanied by phenomenological details (i.e., perceptions, feelings, thoughts) were considered episodic.
Memories describing events situated in time and space but without any phenomenological details, or describing repeated or extended events (e.g., a summer vacation) or only general information about an autobiographical theme (e.g., my childhood vacations), were considered semantic.
By this classification, around 75% of all memories turned out to be episodic rather than semantic.
When the facial expressions were analyzed for each class of memories separately, again considering the mix of emotions separately for each type of cue, the variability between expressions was far greater for episodic memories than for semantic memories. That is, when a recalled memory was episodic, facial expressions differed far more between the cues “happy” and “sad” than when the recalled memory was semantic in nature.
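To make the logic of that comparison concrete, here is a minimal sketch of how one might quantify it: for each memory type, compute the mean proportion of happy expression under each cue, then take the gap between the “happy” and “sad” cues as a simple measure of how strongly the cue pulls the face apart. The trial values below are invented for illustration and are not the study’s actual data.

```python
# Hypothetical per-trial data: (memory_type, cue, proportion of "happy"
# facial expression detected by the tracking software). All numbers are
# made up for illustration only.
trials = [
    ("episodic", "happy", 0.62), ("episodic", "happy", 0.55),
    ("episodic", "sad",   0.08), ("episodic", "sad",   0.12),
    ("semantic", "happy", 0.30), ("semantic", "happy", 0.28),
    ("semantic", "sad",   0.22), ("semantic", "sad",   0.25),
]

def mean_happy(memory_type, cue):
    """Mean proportion of happy expression for one memory-type x cue cell."""
    vals = [p for m, c, p in trials if m == memory_type and c == cue]
    return sum(vals) / len(vals)

def cue_spread(memory_type):
    """How far apart the 'happy' and 'sad' cues pull the expressions."""
    return mean_happy(memory_type, "happy") - mean_happy(memory_type, "sad")

print(f"episodic spread: {cue_spread('episodic'):.2f}")  # large gap
print(f"semantic spread: {cue_spread('semantic'):.2f}")  # small gap
```

With the hypothetical numbers above, the episodic spread comes out far larger than the semantic one, which is the pattern the study reports: episodic recall moves the face with the cue, semantic recall barely does.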
This difference is of more than passing interest: El Haj and colleagues suggest that “listeners tend to automatically encode, mimic, and synchronize facial expressions to converge emotional messages”—that is, when we listen to someone telling us about their holiday in San Juan in 1998, our own emotions may be tuned to the speaker’s experiences not just by what they are saying, but also by what their faces reveal they are feeling. By contrast, a person relating more generic information about their childhood holidays may be far less emotionally engaging.
There is something about stories—even if they have been in oblivion for 2,000 years—that allows us to peek into “someone else’s consciousness” and to see how they feel and think. Engaging stories may even reduce people’s race bias and increase empathy with others. Listening to a specific autobiographical memory may likewise allow us to share in the emotional experience of another person.
Article focused on in this post:
El Haj, M., Antoine, P., & Nandrino, J. L. (2016). More emotional facial expressions during episodic than during semantic autobiographical retrieval. Cognitive, Affective, & Behavioral Neuroscience, 16, 374–381. DOI: 10.3758/s13415-015-0397-9.