
When you were a famous rock star: reading emotional facial expression during autobiographical recall

Tuesday, March 22, 2016
Stephan Lewandowsky

We all remember what it was like to be young, when we were rock stars, Blues singers, invincible volleyball players, or marathon runners.

We may also remember that specific marathon on a Tuesday in November where some particularly clever person put a billboard next to the finish line that said “turning back now would be foolish.”

These are examples of the two main components of autobiographical memory, our memory of our self and our own lives: a semantic component that refers to generic representations not involving a particular situation in a particular time and place (e.g., I was a rock star), and an episodic component that refers to memories of specific personal experiences in a particular time and place (e.g., I remember that specific marathon).

What determines the quality of our autobiographical memories? One important variable is the emotionality of the content and the emotional circumstances under which we acquired a memory. Time and again, research has shown that emotional events are typically easier to retrieve than neutral ones, and that emotional cues are more likely than neutral ones to elicit an autobiographical memory. We also experience emotional memories as being more vivid—we all have vivid recollections of how, where, and when we learned of the tragic events of 9/11 (although that vividness need not always translate into accuracy).

Emotional memories are known to elicit significant changes in cardiovascular and electrophysiological activity, although it is less well known how those physiological and bodily changes differ between semantic and episodic components of autobiographical memories.

A recent article in the Psychonomic Society’s journal Cognitive, Affective, & Behavioral Neuroscience examined how those two components differed with respect to the physical response that emotional memories elicit.

Researchers El Haj, Antoine and Nandrino used automatic facial-expression detection software to examine how people’s expressions changed with the retrieval of happy, sad, or neutral autobiographical memories. Although participants were not told to assume any particular facial expressions, the researchers expected that the mere retrieval of different types of memory might, involuntarily and automatically, elicit different facial expressions.

Participants were asked to generate three autobiographical events in response to the cue “happy”, “sad”, or “city”. The word “city” was deemed to be a cue that might evoke relatively unemotional memories such as a trip to Minneapolis in 1969. Participants were also instructed that “the event had to be personally experienced in the past, and that the description had to be precise and specific (e.g., where and when the event occurred, what they were doing during this event, who was present).” Participants were given 2 minutes to record their recollections of each event. After that, they rated the emotionality of each event on a scale from “very negative” to “very positive”.

The principal analysis involved automatic analysis of participants’ facial expressions from a video record of the recall episodes. The software provided a pie-chart representation of the percentage to which the face exhibited various emotions, namely happy, sad, angry, surprised, scared, disgusted, and neutral (plus another category corresponding to non-classified expressions).
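In outline, such software assigns an emotion label to each video frame and then collapses those frame-by-frame labels into the percentage of the recording dominated by each expression. The sketch below illustrates that aggregation step only; the frame labels, function names, and the "unclassified" handling are assumptions for illustration, not the actual tool or pipeline the authors used.

```python
from collections import Counter

# Emotion categories reported by the software, per the article,
# plus a bucket for non-classified frames (assumed name).
EMOTIONS = ["happy", "sad", "angry", "surprised",
            "scared", "disgusted", "neutral", "unclassified"]

def emotion_proportions(frame_labels):
    """Collapse per-frame emotion labels into the percentage of the
    recording showing each expression (the 'pie chart' view)."""
    counts = Counter(frame_labels)
    total = len(frame_labels)
    return {e: 100.0 * counts.get(e, 0) / total for e in EMOTIONS}

# Toy example: a mostly neutral recall with some happy expression.
labels = ["neutral"] * 70 + ["happy"] * 25 + ["sad"] * 5
proportions = emotion_proportions(labels)  # happy: 25.0, neutral: 70.0
```

The "other" category in the authors' figure would then simply be the sum of the angry, surprised, scared, and disgusted proportions.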

The overall results—not broken down by episodic and semantic components—are shown in the figure below:

Although the differences between the cue words in the different panels may appear slight, there were in fact significant differences that pointed in the expected direction: the “happy” cue engendered a greater proportion of happy facial expressions than the “sad” cue, more sadness was observed with the “sad” cue than with any other, and the “city” cue kept faces primarily neutral. (The “other” category in the above figure represents the sum of the angry, surprised, scared, and disgusted components; it can be ignored for present purposes.)

This result by itself shows that when people retrieve personal memories, their faces reflect the emotional content of those memories. Given that expressions were tracked automatically by software from a video recording, the results hint at the possibility that the contents of a person’s memories might be detectable without knowledge of what the person is actually reporting.

Turning to the differentiation between components of autobiographical memories, El Haj and colleagues classified the recollective reports according to the following criteria:

Memories describing personal events situated in time and space, with a duration of less than 1 day, and accompanied by phenomenological details (i.e., perceptions, feelings, thoughts) were considered episodic autobiographical memories.

Memories describing events situated in time and space without any phenomenological details or describing repeated or extended events (e.g.,  a summer vacation) or only general information about an autobiographical theme (e.g., my childhood vacations) were considered to constitute semantic autobiographical memories.

By this classification, around 75% of all memories turned out to be episodic rather than semantic.

When the facial expressions were analyzed for each class of memories separately, again considering the mix of emotions separately for each type of cue, the variability between expressions was far greater for episodic memories than for semantic memories. That is, when a recalled memory was episodic, facial expressions differed far more between the cues “happy” and “sad” than when the recalled memory was semantic in nature.

This difference is of more than passing interest: El Haj and colleagues suggest that “listeners tend to automatically encode, mimic, and synchronize facial expressions to converge emotional messages”—that is, when we listen to someone telling us about their holiday in San Juan in 1998, our own emotions may be tuned to the speaker’s experiences not just by what they are saying, but also by what they are visibly feeling. By contrast, a person relating more generic information about their childhood holidays may seem far less emotionally engaging.

There is something about stories—even if they have lain in oblivion for 2,000 years—that allows us to peek into “someone else’s consciousness” and to see how they feel and think. Engaging stories may even reduce people’s race bias and increase empathy with others. Listening to a specific autobiographical memory may likewise allow us to share in the emotional experience of another person.

Article focused on in this post:

El Haj, M., Antoine, P., & Nandrino, J. L. (2016). More emotional facial expressions during episodic than during semantic autobiographical retrieval. Cognitive, Affective, & Behavioral Neuroscience, 16, 374-381. DOI: 10.3758/s13415-015-0397-9.
