Sunday Recap: Saturday Meeting Highlights
Speaker: Psyche Loui, Northeastern University, USA
Summary by Hannah Mechtenberg, University of Connecticut, USA

Psyche Loui, Director of the Music, Imaging, and Neural Dynamics (MIND) Laboratory at Northeastern University, began provocatively—stating that studying the evolution of music goes far beyond looking at the Taylor Swifts and Beethovens of the world. Rather, we need to examine the neural and social origins of music to understand why humans are so drawn to it. Curiously, the best place to start with this complex question may be to study people with no such emotional draw to music—people with something called musical anhedonia.
Loui began with an exploration of why humans seem to find music pleasurable. It is so pleasurable that there is evidence of instrument construction from over 40,000 years ago. Before the advent of writing, someone took the wing bone of a vulture and carved holes in it, producing a flute that plays the notes of a pentatonic scale. Why go through this effort? Loui suggests that music and social bonding may be a co-evolved system, where the acoustic features of music (e.g., meter, beat, harmony, discrete pitches) activate our brains in ways that give rise to fundamental aspects of human behavior (e.g., perception, prediction, social reward).
Caption: Loui opens the talk by persuading the audience to think about the evolution of music with an interdisciplinary mindset rather than simply looking at popular musicians across time.
To test the neural predictions of the co-evolution hypothesis—focusing on the tight relationship between music and reward—Loui turned to people who are “unmoved” by music. These people report no feelings
of pleasure from music despite finding pleasure from other art forms like photography or poetry. From this work emerged a neuroanatomical model linking music perception and reward, where connections between auditory processing regions (e.g., Heschl’s gyrus and superior temporal gyrus) and reward circuitry
(e.g., striatum) are key for evoking the feelings of pleasure associated with music. Critically, any disconnection in this network may lead to the experience of musical anhedonia.

Loui concluded with a fascinating series of studies that examine musicality (the human ability to perceive and produce music) using a novel musical system: the Bohlen-Pierce scale. Freed from the influence of the 2:1 frequency ratio that defines the octave and pervades Western music, Loui was able to test the relationship between liking music and familiarity with a novel musical system. Overall, the more you’ve heard a musical system, the more you like it. Moreover, people with musical anhedonia never like music from any system no matter their exposure, lending support to the neural origins of their emotional disconnect from music.
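For the curious, the contrast between the two systems is easy to see numerically: in its equal-tempered form, the Bohlen-Pierce scale divides a 3:1 “tritave” into 13 equal steps, whereas Western equal temperament divides the 2:1 octave into 12. A minimal sketch (the 220 Hz starting pitch is arbitrary):

```python
# Sketch: one "cycle" of each scale as a list of frequencies.
# Western equal temperament divides the 2:1 octave into 12 steps;
# the Bohlen-Pierce scale divides the 3:1 "tritave" into 13 steps.

def scale_frequencies(base_hz, ratio, steps):
    """Equal-tempered scale: each step multiplies frequency by ratio**(1/steps)."""
    return [base_hz * ratio ** (k / steps) for k in range(steps + 1)]

western = scale_frequencies(220.0, 2.0, 12)        # up one octave
bohlen_pierce = scale_frequencies(220.0, 3.0, 13)  # up one tritave

print([round(f, 1) for f in western])
print([round(f, 1) for f in bohlen_pierce])
```

Because almost none of the resulting Bohlen-Pierce steps line up with familiar octave-based intervals, listeners raised on Western music arrive with essentially no exposure to the system—which is exactly what makes it useful for studying how liking tracks familiarity.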
Throughout the presentation, Loui played many snippets of music that engaged the audience—who all dutifully bopped around to the beat. Clearly, many of us in attendance felt the emotional pull to music,
though perhaps not all!
Loui’s work is a fascinating and innovative peek into the evolution of music. If you want to read more on the subject, check out a book co-edited by Loui called “The Science-Music Borderlands: Reckoning with the Past and Imagining the Future.”
Caption: Loui introduces the concept of musical anhedonia, wherein certain people don’t experience emotional pleasure from listening to music.
Summary by Raunak Pillai, New York University, USA

There comes a time in every Psychonome’s career when they must look for a job. For graduate students and postdoctoral scholars, this can be a daunting time, full of fear and uncertainty about the paths that
may lie ahead. To help with these concerns, the Psychonomic Society’s inaugural Graduate Student and Postdoc Committee (of which I am a part, for full disclosure) hosted a lunchtime workshop with a panel of expert cognitive psychologists who have gone on to land a variety of non-academic jobs.
The panel included:
- Alex Burgoyne, Senior Scientist at the Human Resources Research Organization (HumRRO)
- Aubrey Lau, UX Researcher at Zillow
- Rachel Ostrand, Staff Research Scientist at IBM Research
- Craig Sanders, Research Engineer at Meta
- Rebecca Hirst, Director and Chief Science Officer at Open Science Tools Ltd. (PsychoPy), and Postdoc at Trinity College Dublin, Multisensory Lab
Caption: The panelists discussing career advice.
So, whether you are a graduate student or postdoc looking to explore industry careers, or a mentor to one, read on for a summary of the panel’s advice!
Question: What are the key differences between academic and industry roles? The panelists noted that the day-to-day work in academia and industry often involves similar activities, like research, data analysis, and writing. They acknowledged that industry careers can sometimes be more meeting-heavy and collaborative. One panelist highlighted the importance of being concise and delivering actionable insights rather than diving deeply into every scientific detail. Industry timelines can also sometimes be shorter, requiring more flexibility in completing research.
Question: What skills in your graduate training best prepared you for industry careers? Graduate trainees learn programming, communication & presentation, and experimental design, all of which are invaluable skills in industry settings. The panelists also noted that graduate school teaches one how to teach oneself, which is key for gaining new skills.
Question: What challenges might arise during the transition from graduate school to industry roles? One of the biggest challenges the panelists noted was how one communicates their science.
In academia, we must often focus on specific details and theoretical implications, whereas in industry, the focus is more on high-level insights and practical recommendations. In addition, the composition
of industry teams can vary. Some panelists found themselves being the only cognitive psychologists in their workplaces. This came with challenges like isolation, but also rewards like forging interdisciplinary
collaborations.
Question: What influenced career decisions and steps taken to transition? Many panelists cited work-life balance, compensation, and differences in the culture of industry versus academic settings
as reasons for their transition from graduate school or postdoc positions to industry roles.
Question: How should one adapt academic CVs for industry roles? The key takeaway was to keep it short. While academic CVs take many pages and detail every talk, poster, paper, or award, industry
resumes should be brief (one, maybe two pages) and focus on key skills. The panelists also recommended tailoring language for the industry where appropriate (e.g., describing experiments as A/B
testing for user experience roles). A quick search on LinkedIn can reveal free resources for making these language shifts.
Question: When is the right time to look for internships, and how should you approach them? Internship timelines vary by industry, but in tech, opportunities are often posted as early as October
or November for the following summer. Reaching out to teams whose work you admire through cold emails can also help uncover opportunities that might not be publicly posted. For graduate students interested
in industry roles, the 3rd or 4th year might be a good time to consider internships.
Overall, the panelists shared a lot of insights about the range of options available in industry and how to get there. As cognitive psychologists, our research and training allow us to make valuable contributions
to the world—not just in academia, but also beyond!
Caption: The panelists posing alongside the Psychonomic Society Graduate Student and Postdoc Committee after the successful panel.
Speaker: Asifa Majid, University of Oxford, UK
Summary by Raunak Pillai, New York University, USA
What aspects of humans’ conceptual representation are shared by all people across regions and cultures? This lofty question has been the subject of much deliberation, with many claims lobbed that this or that aspect of human cognition is truly universal. However, in her talk, Asifa Majid argues that actual progress towards answering this question has been impeded by an overreliance on data from Western, English-speaking samples, and that the search for universals can only proceed by taking a cross-cultural approach.

To illustrate this point, Majid focuses on the case of smell. For decades, scholars have argued that our sense of smell occupies a weaker place in our conceptual repertoire than other senses like vision and hearing. For instance, you could presumably name more distinct sights and sounds than smells. Based on observations like these, scholars have gone as far as to call smell the “mute sense.”

Caption: Data from Majid et al., 2018 showing the diversity index of various senses in each language. Across languages, there is no consistent ordering of the senses, in contrast to claims that there is a universal hierarchy of sense representation in the human conceptual system.

But is this really the case for all people? Majid argues it is not. She presents data from speakers of various languages across the globe. For each language, speakers’ verbal descriptions of various sense data were coded using Simpson’s diversity index, a measure of how “codable” a particular sense is in a given language. The conventional wisdom is that there is a rough hierarchy of importance, such that vision and hearing are more universally codable than senses like smell. However, this hierarchy only applies to English speakers. Across languages, there is no universal hierarchy.

Building on this insight, Majid then presents data from hunter-gatherer societies, such as the Maniq people in modern Thailand. For the Maniq, smell plays a more important part in everyday life—from religious ceremonies to food consumption—than in many Western cultures. In fact, the Maniq have a number of words for scents that are not present in English, such as a word for an unpleasant smell shared by shrimp paste, tigers, and tree sap. Further, people in industrialized societies often use visual cues to evaluate the quality of food (e.g., looking for discoloration—or even just at the “best by” date), and, in doing so, may overlook perfectly good-smelling food if it looks a little blemished. By contrast, people in Maniq society may instead prefer to use smell to label and categorize the food they are gathering. For instance, Majid shows a video of a Maniq gatherer cutting a sugar cane plant, immediately smelling it, and verbally labelling the smell as evidence of its quality.

Thus, Majid argues that the traditional view of smell as a conceptually impoverished “mute” sense is a fiction that emerges from an overreliance on Western, English-speaking samples. This case study of the role of smell in human knowledge carries a point of much broader significance: it is only by meaningfully taking cross-cultural variation into account that we can move towards a deeper, more accurate understanding of the human mind.

Caption: Majid discussing her work on cognitive universals. Her slide displays a word in the Maniq language representing an unpleasant smell.
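A brief note on the measure behind these data: Simpson’s diversity index captures how often speakers of a language converge on the same description for a stimulus. Below is a minimal sketch of one common form of the index, with hypothetical responses; whether Majid and colleagues used exactly this variant is an assumption on my part.

```python
from collections import Counter

def simpson_diversity(descriptions):
    """Probability that two responses drawn at random (without replacement)
    share the same code: high when speakers converge on one term (codable),
    near zero when every speaker says something different."""
    counts = Counter(descriptions)
    n = sum(counts.values())
    return sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

# Hypothetical coded responses to a single smell stimulus:
convergent = ["musty"] * 8 + ["earthy"] * 2
divergent = ["musty", "sweet", "sharp", "like rain", "earthy",
             "smoky", "sour", "old books", "damp", "green"]

print(simpson_diversity(convergent))  # ~0.64: high codability
print(simpson_diversity(divergent))   # 0.0: low codability
```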
Speakers: Adrian Staub, University of Massachusetts Amherst, Ellie Deutsch, University of Massachusetts Amherst, John Greene, University of Massachusetts Amherst, Jillian Hammond, University of Massachusetts Amherst
Summary by Daniel Pfaff, University of California, Santa Cruz, USA, and Melinh Lai, University of Chicago, USA
Have you ever thought about words? Or at least, about how the things that form words—sounds, letters, and audiovisual features—give rise to representations not just of a broader lexical form, but also of semantic meanings? Early work, mainly in the latter half of the 20th century but extending as far back as the 1880s, studied the interactivity between individual levels and more complete word forms through the Word Superiority Effect, a pattern in which letters tend to be remembered better when they appear in strings that make up real words than when they appear in non-word strings or in isolation. The robustness and replicability of the Word Superiority Effect through the years has been cited as support for the idea that the processing of words and letters is highly interactive: identification of the initial letters of a word leads to activation of broader word representations, which then feed back to and facilitate letter identification.

More recently, psycholinguists have observed what may be an analogous Sentence Superiority Effect, in which whole words are remembered more accurately when they appear in ordered sets that form complete sentences. In some studies, memory for words that appeared in sentence-like contexts was enhanced by as much as 20% compared to words in groups that did not form grammatical sentences. These initial studies, which relied on free-response tasks that presented the target words in a 200-millisecond window, suggest that numerous complex operations occur in tandem: words are activated and processed in parallel, relevant syntactic structures are activated, and feedback from the structural activation then further facilitates word processing, all within 200 milliseconds of being presented a sentence. As Staub discussed, such a claim is not easily reconciled with other known phenomena in language processing, including studies that demonstrate limits on parallel word processing and the fact that syntactic processing is both incremental and time-consuming.

Staub and his team of talented undergraduates also noted that the foundational studies citing the Word Superiority Effect as support for interactive models of word and letter processing drew their evidence specifically from alternative forced choice (AFC) tasks, whereas the studies arguing for an analogous Sentence Superiority Effect rely on free-response tasks. To better understand whether Sentence Superiority Effects result from a highly interactive network between syntactic structures and lexical representations, Staub and colleagues ran three experiments, with a combined total of over 600 participants, that involved both free-response and AFC tasks. Comparing word identification data within subjects and across tasks would better elucidate whether Sentence Superiority Effects could still be obtained under the conditions that originally legitimized the Word Superiority Effect as evidence of interactivity.
The first two experiments presented target words in grammatical nonsense; in other words, the targets were presented in randomly generated strings that still followed typical grammar rules, such as “work saved red homes.” Memory for words within these grammatical but nonsensical sentences was compared to memory for words in completely ungrammatical strings like “saves homes red work.” While the second experiment added the relatively minor modification of increased font size (to potentially draw out effects better), the third experiment used more semantically meaningful sentences (e.g., “hands hold great power”). All three experiments showed patterns resembling a Sentence Superiority Effect in the free-response data, much like the original studies that first described the effect (although the current effects were weaker). Critically, however, no Sentence Superiority Effects were observed in the AFC tasks in any of the three studies. Staub and colleagues take these findings as a sign that the earlier accounts of the Sentence Superiority Effect—where word and sentence processing is highly interactive in the same way that letter and word processing is—are more complex than necessary. Instead, these effects can be explained by participants simply guessing what a word is based on the surrounding words, not necessarily on the syntactic structure.

Speakers: Alexandra E. Kelly, Drexel University, USA, Evangelia G. Chrysikou, Drexel University, USA
Summary by Hannah Mechtenberg, University of Connecticut, USA
How are emotion concepts organized in the mind? Alexandra Kelly, a doctoral student at Drexel University, walked through her recent empirical and modelling work that seeks new clarity about the storage and organization of emotion concepts. She hypothesized that interoceptive ability—the ability to sense and interpret the physiological signals in one’s body—may be the core grounding source for emotion concepts. So, is a person’s ability to accurately sense their heart rate, or the filling of their lungs with air, linked to how their emotion concepts are structured?

Central to this idea is that interoception lies at the root of affective states—which are a core component of emotion concepts. For example, consider anger. When you feel angry, many sensations come from the body, including feeling hot or flushed, increased heartbeat and respiration, and even muscle clenching. These signals ascend through the system and are eventually processed in higher-order emotional centers in the brain. The affective state is then labeled and recognized as anger. Clearly, the body, and the ability to accurately sense changes in the body, are intimately related to emotions and perhaps even emotion concepts.

Caption: Kelly opens her talk exploring the relationship between interoceptive ability and the organization of emotion concepts.

To investigate the relationship between interoceptive ability (an ability with considerable individual variability) and emotion concept organization, Kelly modelled the structure of individual people’s emotional lexicons. She presented participants with 28 emotion concepts (e.g., anger, fear, sadness) spanning seven emotion categories and asked them to indicate how related two emotions were. From this, she created an interconnected network representative of each person’s emotional lexicon, with nodes representing emotion concepts and edges the connections between them. Critically, the edges can vary in length and the nodes can cluster in various ways based on participant judgements, so each resultant network is almost like an emotion concept “fingerprint” for each participant.

Kelly then asked whether she could predict the structure of these emotion concept networks using measures of interoceptive ability collected from the same participants. Of the four topological features of these networks she examined, only node clustering was significantly correlated with interoceptive ability. Clustering captures how nodes tend to hang together in the network and reflects relatedness among emotion concepts—concepts rated as more similar tend to cluster together. Perhaps interoceptive ability helps with interpreting the emotional category of a particular emotion concept.
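To make the network analysis concrete, here is a minimal sketch of the general approach using the third-party networkx library. The concepts, ratings, and threshold below are all illustrative assumptions, not Kelly’s actual pipeline.

```python
import networkx as nx  # third-party graph library

# Illustrative pairwise relatedness ratings for a few emotion concepts
# (Kelly used 28 concepts across seven categories; these values are made up).
ratings = {
    ("anger", "fear"): 0.6, ("anger", "sadness"): 0.5,
    ("fear", "sadness"): 0.7, ("anger", "joy"): 0.1,
    ("fear", "joy"): 0.1, ("sadness", "joy"): 0.2,
}

# Build one participant's network: concepts are nodes, and pairs rated
# sufficiently related are connected by weighted edges.
G = nx.Graph()
for (a, b), rating in ratings.items():
    if rating >= 0.4:  # illustrative threshold
        G.add_edge(a, b, weight=rating)

# Clustering: the topological feature that tracked interoceptive ability.
print(nx.average_clustering(G))
```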
This work raises the intriguing possibility that interoceptive ability may affect how emotion concepts are stored long-term. Kelly plans to explore the relationship between emotion concept networks and brain measures, including functional connectivity of emotion processing regions.

Caption: Central to Kelly’s work is modeling each person’s individual emotion concept network. Aspects of the topology of these networks can be predicted by interoceptive ability.

Speakers: Courtney A. Kurinec, Washington State University, Anthony R. Stenson, Eastern Oregon University, Paul Whitney, Washington State University, John M. Hinson, Washington State University
Summary by Daniel Pfaff, University of California, Santa Cruz, USA
In the wee hours of scrolling through social media before bed, how vulnerable do you think you are to misinformation? The general premise goes like this: hardly anyone gets enough sleep, and social media use continues to rise, bringing with it a rise in misinformation. Since sleep helps consolidate the information encountered throughout the day, our sleep-deprived brains may let more misinformation seep into our knowledge. Some data support this idea; previous research has shown that people who sleep less than five hours a night are more likely to report false memories and to give misinformation-consistent responses.

In the final talk in the False Memory session today, Courtney A. Kurinec discussed these data alongside the Continued Influence Effect (CIE), the tendency for misinformation to remain effective even after being corrected. Sleep loss might disrupt the brain’s natural ability to limit the CIE by impairing encoding of the truth or by impairing the brain’s ability to break previous associations. However, sleep studies are incredibly resource intensive, so Kurinec and colleagues turned to a related fatigue effect: Social Jet Lag, the mismatch between a person’s internal body clock and the sleep schedule imposed by social obligations like work. It similarly impairs cognitive function, and instead of requiring an overnight stay in a lab, Social Jet Lag is common among people working shifts at restaurants and other service industry jobs.

Kurinec and colleagues invited participants who were likely to be experiencing Social Jet Lag to read a short passage that included a possible interpretation of the events in the passage. After a short distractor task, participants read a follow-up passage with a retraction and correction of the previous interpretation. However, participants at all levels of Social Jet Lag proved equally susceptible to misinformation, regurgitating the original interpretation. The same held for another variable Kurinec and colleagues analyzed, subjective ratings of sleepiness, which also did not correlate with misinformation susceptibility. The one variable that did modulate the Continued Influence Effect was a subjective measure of sleep duration: those with fewer hours of sleep were more likely to remember the misinformation without the correction than those with more sleep.

Kurinec and colleagues showed that only some aspects of sleep are related to the CIE but cautioned against broader interpretations. Moving forward, they want to reinforce these data with a full sleep study manipulating sleep hours to test the relationship between the CIE and sleep more thoroughly. In the meantime, let’s all try to put the phone down and maybe pick up a book instead in the midnight hour.

Speakers: Valerie Thompson, University of Saskatchewan, Tay Spock, University of Saskatchewan, Kailyn Phillips, University of Saskatchewan, Emilie Moellenbeck, University of Saskatchewan
Summary by Xueqing Chen, University of Bristol, UK
Valerie Thompson, a Professor of Cognitive Psychology at the University of Saskatchewan, has research interests that include intuitive judgments, thinking and decision-making, and metacognition (that is, how we evaluate the accuracy of our own thought processes). In her talk, she discussed monitoring and sensitivity in two reasoning tasks.

First, a bit of background. Humans can monitor their own performance. The “Feeling of Rightness” (FOR) and “Feeling of Error” (FOE) represent two forms of monitoring judgments. The FOR refers to the extent to which individuals feel confident about the correctness of their answers or decisions. Conversely, the FOE pertains to the extent to which individuals feel that they may have made a mistake or that their answers are incorrect. Fernandez Cruz et al. (2016) proposed that the FOR shows less monitoring accuracy than the FOE. To test this theory, Thompson conducted two experiments exploring the differences between these two forms of judgment.

In the first study, Thompson engaged 184 participants in a syllogistic reasoning task, where participants tackled problems like determining whether politicians are athletes based on a series of logical extensions. Using the two-response paradigm, participants initially responded quickly based on intuition and then reevaluated their responses, with any changes in answers and the time taken for rethinking documented. The initial findings showed that FOR and FOE judgments were equally sensitive to the complexity of the problems and the accuracy of the participants’ responses. Further analysis revealed that both types of judgments equally influenced participants’ decisions to rethink and modify their answers, indicating similar control sensitivity.

Building on these findings, Thompson conducted a second experiment using the base-rate neglect task, with scenarios that involved either conflict or non-conflict conditions, such as deciding the likely profession of Richard, a skilled debater described in a context heavily populated by plumbers but including a few politicians. This study reinforced the initial results, showing no significant differences in how FOR and FOE judgments responded to problem complexity or conflict. The study also demonstrated that both judgment types effectively promoted analytical thinking, with minimal reactivity or evidence of framing effects influencing participant responses.

Thompson’s research offers insightful contributions to understanding how different monitoring judgments function in reasoning tasks. Her studies suggest that while the FOR and FOE are similarly effective across cognitive tasks, their application can significantly enhance how individuals engage with complex reasoning challenges. The findings encourage further investigation into these cognitive processes, particularly potential differences in judgment influence across various reasoning tasks. Through ongoing research, Thompson aims to deepen our understanding of metacognitive judgments and potentially improve educational and cognitive assessment practices. The implications of these studies extend beyond academic settings to practical applications in fields such as education, psychology, and cognitive training. By enhancing our grasp of how judgments like the FOR and FOE affect decision-making, Thompson’s work contributes to refining approaches to learning and problem-solving in real-world scenarios. This ongoing research enriches theoretical knowledge and offers valuable insights for developing strategies that improve cognitive function and decision-making in everyday life.
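As an aside, the conflict built into base-rate items like the Richard scenario is easy to make concrete with a little arithmetic. In the sketch below, the base rates and description probabilities are made-up numbers, not the study’s actual materials.

```python
# Illustrative numbers for a base-rate conflict item: the description
# fits "politician," but politicians are rare in the sample.
politicians, plumbers = 5, 995          # base rates (made up)
p_desc_given_politician = 0.90          # description fits most politicians
p_desc_given_plumber = 0.05             # ...and few plumbers

# Bayes' rule: P(politician | description)
numerator = politicians * p_desc_given_politician
posterior = numerator / (numerator + plumbers * p_desc_given_plumber)
print(round(posterior, 3))  # ~0.083: the base rate still favors "plumber"
```

The intuitive (stereotype-driven) answer and the statistically warranted answer pull in opposite directions, which is what lets the task probe monitoring judgments under conflict.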
Speakers: Louise A. Brown Nicholls, University of Strathclyde, Julia-Marie Lukas, University of Strathclyde, Linzi F. Crawford, University of Strathclyde, Lazaro H. Jackson, University of Strathclyde
Summary by Xueqing Chen, University of Bristol, UK
Dr. Louise Brown Nicholls is a professor at the University of Strathclyde focusing on human memory and attention, particularly as these cognitive abilities are affected by adult aging. Her research explores how factors like emotion and lifestyle impact cognitive performance across the adult lifespan. She is deeply involved in multidisciplinary aging research through the Strathclyde Ageing Network and leads the Scottish Cognitive Ageing Network. Her talk covered the use of strategies that predict subjective cognitive abilities across the adult lifespan.

Cognitive aging involves changes in ‘fluid’ cognition, which declines, and ‘crystallized’ cognition, which tends to remain stable or improve. This suggests that cognitive decline is not uniform and that specific strategies can mitigate its effects. Although objective measures may sometimes overestimate cognitive aging, ongoing research emphasizes that many older adults maintain robust cognitive function. The brain’s adaptability through neural scaffolding supports cognitive functions well into older adulthood, illustrating the brain’s capacity to compensate for age-related declines.

As people age, they tend to rely more on compensatory strategies, particularly when they face cognitive challenges or recognize a decline in their memory. These strategies range from external aids like notetaking and calendars to internal methods such as visualization and mnemonic devices. Research has shown that older adults who actively engage in such strategies can maintain better cognitive function and independence.

Brown Nicholls conducted a pre-registered study exploring how adult age influences the relationship between using cognitive strategies and subjective cognitive difficulties. This study involved 606 adults from the United Kingdom, aged 18-86 years, without diagnosed cognitive impairments or neurological conditions. The study measured specific, everyday cognitive difficulties and the employment of both generalized and memory-specific strategies. Participants completed surveys assessing their attention, language, visual-perceptual abilities, and visuospatial and verbal memory. The surveys included the Compensatory Cognition Strategies Scale and the Multifactorial Memory Questionnaire – Strategy Scale to measure generalized and memory-specific strategies. Covariates such as gender, depression, anxiety, and stress were also considered. Moderated regression models were applied to investigate potential interactions between age and strategy use on cognitive difficulties (a sketch of the general form of such a model follows below).

The results indicated that memory-specific strategy use was a robust predictor of lower cognitive difficulties across all age groups, confirming its effectiveness in managing cognitive challenges. Interestingly, no interaction effects between age and strategy use were observed, suggesting that the benefits of these strategies are consistent regardless of age. Generalized strategy use significantly predicted performance only in tasks involving visuospatial memory, highlighting the importance of tailored strategies for specific cognitive domains. The lack of age-related interaction suggests that while the propensity to use certain strategies might change with age, the effectiveness of these strategies remains stable across the lifespan. These findings underscore the potential of training and cognitive interventions at any stage of adulthood. Promoting strategic thinking and compensatory techniques from an early age could help individuals maintain cognitive health and functional independence throughout their lives. By understanding and implementing effective cognitive strategies, we can better manage the challenges of cognitive aging, enhance daily functioning, and improve the overall quality of life across all age groups.
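For readers unfamiliar with moderated regression, here is a minimal sketch of the general form of such a model, assuming a hypothetical data file and illustrative column names (these are not the study’s actual variables).

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data file and column names, for illustration only.
df = pd.read_csv("strategy_survey.csv")

# Moderated regression: does age moderate the strategy-difficulty link?
# The age:memory_strategy_use interaction term carries the moderation test.
model = smf.ols(
    "cognitive_difficulties ~ age * memory_strategy_use"
    " + gender + depression + anxiety + stress",
    data=df,
).fit()
print(model.summary())
```

In this framing, the reported lack of an interaction corresponds to a non-significant age-by-strategy-use term: the strategy-difficulty relationship looks stable across the lifespan.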
Speakers: Khena M. Swallow, Cornell University, Karen Sasmita, Cornell University
Summary by Melinh Lai, University of Chicago, USA
Khena M. Swallow began her talk in a very fun way: with a clip from the film Bend It Like Beckham. The scene, which depicted a tense and slightly chaotic game of soccer (or “football,” if that’s your cup of tea), showcased the essential task posed to visual processing: making sense of scenes, which often involve several people, objects, and movements. Swallow and her colleague Karen Sasmita ask the broad question, “How does the system do this?” More specifically, throughout the presentation, Swallow and Sasmita explored the separate influences of internal processing, derived from established knowledge, and external processing of perceptually derived information from the environment, as well as when people may prioritize one over the other.

Swallow and Sasmita had a working hypothesis: moments when a person understands that one event has ended and another is beginning may involve less reliance on internal knowledge, which is not very useful when an event is just beginning and little information about it has yet become apparent, and more on external processing, as the person takes in information about the event to better understand it. Naturally, they designed a study to investigate exactly that.

Participants in their experiment watched clips from Bend It Like Beckham and The Hundred-Foot Journey. The clips were then presented a second time, allowing the researchers to investigate the influence of familiarity on external processing strategies. The clips were then presented a third time in a task in which participants identified event boundaries within the scenes. Eye movements and EEG were recorded throughout the experiment, and participants were either told to move their eyes freely while watching the movie clips or explicitly instructed to keep their gaze fixed on the center of the screen.

Their analyses focused on two frequency bands of electrical brain activity: the delta band, which ranges from 2-4 Hz and tends to increase with more internal processes like cognitive control and updating working memory, and the alpha band (8-12 Hz), which is sensitive to the degree of perceptual information processing. The delta and alpha bands thus make for reasonable indicators of internal and external processing, respectively.
During the first viewing of the movie clips, participants who were free to move their eyes showed delta activity that increased before the moments later identified as event boundaries, while alpha activity was modulated after the event boundary. This suggests that people rely more on internal processing as events unfold but revert back to external processing as an event is ending and a new one is beginning. The authors were also interested in whether greater familiarity with a scene could then modulate these patterns of processing. Delta and alpha activity recorded during the second viewing of the movie clips showed a similar but more muted effect than in the first viewing, indicating that increasing the predictability of events by making them more familiar does indeed modulate these processing patterns. Finally, the authors asked whether restricting eye movements (and thus how much external processing is even possible) would lead to different patterns of brain activity during the second viewing. Indeed, limiting the amount of external processing taking place led to no significant delta increases before an event boundary and to alpha modulation that occurred earlier relative to the event boundary, suggesting that limiting viewing behavior disrupted both internal and external processing.
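For readers curious about the mechanics, band power of the sort described above can be estimated from an EEG epoch with standard signal-processing tools. Here is a minimal sketch using SciPy on simulated data; the sampling rate, epoch length, and method are illustrative assumptions, not the authors’ actual pipeline.

```python
import numpy as np
from scipy.signal import welch

fs = 250                         # sampling rate in Hz (illustrative)
epoch = np.random.randn(fs * 2)  # 2 s of simulated single-channel EEG

# Power spectral density via Welch's method (1 Hz frequency resolution).
freqs, psd = welch(epoch, fs=fs, nperseg=fs)

def band_power(freqs, psd, lo, hi):
    """Integrate the PSD over a frequency band."""
    mask = (freqs >= lo) & (freqs <= hi)
    return np.trapz(psd[mask], freqs[mask])

print("delta (2-4 Hz):", band_power(freqs, psd, 2, 4))
print("alpha (8-12 Hz):", band_power(freqs, psd, 8, 12))
```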