2024 Annual Meeting - Friday Highlights
 

 


 

Friday Meeting Highlights

 

Symposium I (PS Leading Edge Workshop Initiative): The Future of Future Thinking

 

Investigating Concept Maps for Judgement of Learning Calibration

Symposium II: Social Contagion of Memory from People, Robots, and Microblogs

 

People lexically align more to partners who exhibit comprehension difficulties

Symposium III: Behavior Change: A (Nonpolluting) Engine for Sustainability

 

Confirmation or Surprise? How Predictable Language Supports Young Children's Fast-Mapping New Word-Referent Pairs

A Perceptual Similarity Space for Speech Perception

 

Does Pretesting Enhance Learning When It Is Done Covertly?

Good Learners Tend to Be Poor Monitors: A Surprising Twist in Metamemory Monitoring Ability and Memory Ability

 

Examining the Mechanisms and Boundary Conditions of the Internet Fixation Effect

 

Symposium I: The Future of Future Thinking: Toward an Integrated Science of Prospective Cognition (PS Leading Edge Workshop Initiative)

Speaker: Daniel L. Schacter, Harvard University, USA

Summary by Hannah Mechtenberg, University of Connecticut, USA

Daniel Schacter has been a leader in the science of human memory for decades and currently serves as the William R. Kenan Jr. Endowed Professor of Psychology at Harvard University. He spoke in the first symposium of this year's annual meeting, which centered on future thinking, or prospective cognition. Future thinking, broadly, is the extraordinary human ability to simulate the future in rich detail to guide behavior and decision-making.

In his talk, Schacter focused on a phenomenon called episodic future simulation. For the past 15-odd years, he has explored the ability for people to, in his words, “construct a mental representation of a specific future experience.” This ranges from imagining yourself walking through the grocery store and picking up all the groceries you need for the week to simulating yourself giving a talk at a busy conference. Critically, there is a richness to episodic future simulation that goes beyond simple script rehearsal of familiar experiences to constructing something new.

Caption: Dr. Daniel Schacter gives a talk at the Future of Future Thinking Symposium at the 2024 Psychonomic Society Annual Meeting.

Schacter reviewed a suite of behavioral, neuroimaging, and neuromodulatory evidence that distinguishes episodic future simulation from canonical episodic memory—though the two phenomena do seem to rely on overlapping neural machinery. Two regions have emerged as key players in episodic future simulation: the hippocampus and the angular gyrus. The hippocampus may help recall past episodic snippets, while the angular gyrus helps integrate those snippets into a cohesive simulated event. Supporting this hypothesis, when activity in the left angular gyrus was disrupted using transcranial magnetic stimulation (TMS), participants reported significantly fewer details about their future simulations relative to stimulation of a control site.

Schacter concluded with several applications of this work, including in understanding the behavior of pathological grandiose narcissists (they tend to remember more details about simulated positive events about themselves than negative ones) and in potentially helping struggling adolescents by simulating highly detailed and positive future events.

These findings open exciting and promising avenues for future research (pun intended?) with significant theoretical and applicable implications. Perhaps we should all spend a little more time simulating good things happening in our futures—they could come true!

Caption: Data showing the number of details given about simulated future events and for recalled past events after transcranial magnetic stimulation to the left angular gyrus (green bars) and a control site (grey bars). The upper panel shows group averages, and the lower panel shows the individual participant data. 

  

Symposium II: Social Contagion of Memory from People, Robots, and Microblogs
Speaker: Suparna Rajaram, Stony Brook University, USA

Summary by Raunak Pillai, New York University, USA

In the moments before I wrote this talk summary, I happened to come across a post on BlueSky by my graduate advisor that also provided a summary of the very same talk. So, as I am recalling what I learned from Rajaram’s talk to write this recap, I now also have in mind information about what other people recalled about the talk and posted on social media.

This experience nicely encapsulates a major theme of Suparna Rajaram's talk: We are often immersed in digital, social environments that have unique implications for what we remember. What kinds of implications? Rajaram addressed this broad question by presenting three interrelated lines of work on the memorial consequences of encountering information shared or recalled by others.

Caption: Slide from Suparna Rajaram’s talk showing evidence of recalling in groups increasing rates of false memory relative to solo recall.

First, Rajaram presented research done with Raeya Maswood and Christian Luhmann using a variant of the Deese-Roediger-McDermott paradigm, in which people study a list of words (e.g., bed, rest, dream) that are closely related to a lure word that is not on the list (e.g., sleep). Critically, in this study, participants recalled the word lists either alone or in groups of three people. After this, participants were again tested on their memory for the word lists.

On this final test, the data show evidence of social contagion. When participants first recalled information in collaborative group settings (versus by themselves), they were more likely to end up misremembering the lure words as part of the lists. In effect, recalling information in a group setting provides an opportunity for other people's false memories to intrude into one's own.

Next, Rajaram presented work with Tori Peña, Raeya Maswood, and Melissa Chen examining memory for information found online. The team collected a series of real-world X (Twitter) posts and news headlines about six topics (e.g., the keto diet, standardized testing) that were matched for word count and extrinsic features (e.g., neither set of stimuli contained images or links).

Then, the team had participants study these tweets or headlines and take a memory test about them. Overall, participants were more likely to remember the tweets than the headlines. These results show that people are especially attuned to remembering socially meaningful information, like online posts created by peers rather than journalists.

Finally, Rajaram presented some recent work with Tsung-Ren Huang and Yu-Lan Cheng examining how memory is affected by interactions with conversational agents, mirroring the kinds of interactions people may have with Siri, Alexa, or ChatGPT. Participants studied a series of words, then recalled the word lists alongside either a human confederate or a conversational robot called RoBoHon. Critically, the human or the robot was instructed to state a few items that were not actually on the original list. Finally, participants were asked to judge whether various words had been originally studied or not.

Like in the first study Rajaram discussed, participants showed signs of social contagion. Hearing a human recall a word that was never really studied made people more likely to think they had heard the word themselves. Interestingly, this finding held when the incorrect words were stated by the robots as well. In fact, people's memories were just as adversely affected by hearing words recalled by the robot as by hearing words recalled by the human.

Overall, these three findings shed light on how our memories function in the digital and social information environments we find ourselves in. Humans are strikingly susceptible to memorial contagion of information falsely recalled not only by humans but also by robots. Further, we tend to prioritize socially meaningful information in memory, such as posts shared by our peers online. Here's hoping this digital talk summary from a fellow Psychonome is memorable to you!

 

Symposium III: Behavior Change: A (Nonpolluting) Engine for Sustainability
Speaker: Robert Cialdini, Arizona State University, USA

Summary by Raunak Pillai, New York University, USA

Climate change is one of the gravest threats humans face today. This symposium on climate change and human cognition brought together scholars to discuss how psychological research can contribute to addressing this existential issue.

As part of this symposium, Robert Cialdini presented his research testing psychologically informed strategies to reduce household energy consumption. Along the way, he shared a story about how he scaled this work up from theory-informed experiments to a real-world, long-term intervention.

The story starts with a basic insight into how humans change their behaviors. People often think the best way to change behaviors is to change the attitudes and beliefs people have about a topic first. By this logic, the best way to encourage people to act in a climate-conscious way would be to inform them about the impacts of climate change or make them more concerned about the impacts of global warming.

Caption: Household energy consumption after exposure to various environment messages.

Instead, Cialdini took a different approach. His work was informed by longstanding research on the persuasive power of descriptive social norms. The idea is that, when deciding what to do, people often turn to what they know others are doing. By this logic, the best way to encourage climate-conscious behavior is to alert people to others acting similarly.

To test these different accounts, Cialdini and his team devised an experiment in which households received one of several different notes at their home with various pro-environmental messages. Some messages focused on encouraging pro-environmental norms like talking about the benefits of household energy savings for the environment or society. Other messages focused on cost savings associated with such actions.

These messages targeting beliefs and attitudes had limited effects on household energy consumption relative to a control condition. Instead, the most effective message was one that simply highlighted that peoples’ neighbors were taking similar actions to conserve energy. Simply making people aware of descriptive social norms can strongly shape how people behave.

These findings provided proof of concept, but scaling them up required time, resources, and energy. Coincidentally, as Cialdini described, a pair of entrepreneurs familiar with his work paid an unexpected visit to his office hours, right in between two students discussing their recent exam grades.

These entrepreneurs hired Cialdini as a consultant as they worked to expand these descriptive social norm messages. Over a 10-year period, Cialdini estimates these efforts saved $700 million in household energy expenditure, 23 trillion watt-hours of electricity, and 36 billion pounds of CO2 emissions.

Overall, these results speak to the high potential of psychological research for making meaningful change on pressing issues like climate change.

 

A Perceptual Similarity Space for Speech Perception
Speaker: Matthew Goldrick, Northwestern University, USA

Summary by Hannah Mechtenberg, University of Connecticut, USA

As part of the first speech perception session, Matthew Goldrick spoke on a perennial issue in the field: how can listeners tell two talkers apart? It’s a simple question that feels intuitive—we can just tell. However, we must be able to pick up on some feature or set of features in the speech signal that allows us to distinguish one talker from another. The big question is—what are those features?

Goldrick introduced a novel approach to modeling the perceptual similarity between utterances from different talkers. Existing approaches that attempt to quantify the differences between how Talker A (e.g., Jane) and Talker B (e.g., Emma) produce the same sentence have largely focused on identifying specific salient features of the speech signal. These may include the ways individual speech sounds (e.g., /p/ and /b/) are pronounced, prosodic contours (the “melody” of how someone talks), or even differences in lower-level acoustic dimensions (e.g., formants). These human-identified features have never entirely been able to explain all the ways an utterance spoken by Jane differs from one by Emma. Hence, Goldrick and colleagues have moved away from human-generated insight and towards that from deep learning models.

Caption: Goldrick introduces the subject of his talk at the first speech perception session at the 2024 annual meeting of the Psychonomic Society.

The talk focused on the issue of intelligibility and whether a deep learning model can predict whether one talker is likely to be more intelligible than another. They used a technique called dynamic time warping to model the movement through a multidimensional perceptual space over the course of a complete sentence. The model taught itself how to quantify the differences between two talkers—and to predict the intelligibility of one talker given another.
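The talk did not include code, but the core of dynamic time warping is compact enough to sketch. The toy implementation below is illustrative only: the function name and the idea of representing each utterance as a sequence of feature frames are assumptions, not details from the talk. It computes the minimal cumulative cost of aligning two talkers' trajectories through a feature space.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic time warping (DTW) distance between two trajectories.

    a and b have shape (time, features): each row is one time frame's
    position in a (hypothetical) perceptual feature space.
    """
    n, m = len(a), len(b)
    # cost[i, j] = minimal cumulative distance aligning a[:i] with b[:j]
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])  # frame-to-frame distance
            # extend the cheapest of the three allowed alignment moves
            cost[i, j] = d + min(cost[i - 1, j],      # a lingers on a frame
                                 cost[i, j - 1],      # b lingers on a frame
                                 cost[i - 1, j - 1])  # both advance
    return float(cost[n, m])
```

Because one frame may align with several consecutive frames in the other trajectory, a slow and a fast rendition of the same sentence can still register as perceptually close, which is why the technique suits comparing complete utterances of different durations.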

Goldrick verified the model's predicted perceptual differences by asking human raters to indicate the intelligibility of all the utterances. They found that the perceptual distance measure accounted for a significant amount of the variance in intelligibility ratings, and that as perceptual distance from a familiar accent increased, intelligibility scores decreased. Notably, the output from the deep learning model accounted for far more variance than the simple feature-related measures that have been standard in the field.

Their next steps include examining how these effects might generalize to other talkers, whether the model could help describe adaptation effects, and digging into the model to try to identify which features it uses to generate the perceptual difference measure.

 

 

Caption: Goldrick introduces the concept of the perceptual space, and how they can model the “movement” through this space over the course of a spoken utterance.

 

Good Learners Tend to Be Poor Monitors: A Surprising Twist in Metamemory Monitoring Ability and Memory Ability
Speaker: Chunliang Yang, Beijing Normal University, China

Summary by Xueqing Chen, University of Bristol, UK

Chunliang Yang, Associate Professor at Beijing Normal University, works on learning, memory, and metacognition. His academic journey took him from earning his Ph.D. at UCL to a postdoctoral position at the National University of Singapore.

In Friday's metamemory session, Yang delivered an engaging presentation that challenged a common assumption in cognitive psychology that individuals with excellent memory automatically possess superior metamemory abilities, the capacity to monitor and reflect on their own memory processes.

Traditionally, it is presumed that high performers in memory tasks are also adept at understanding and evaluating their cognitive states—a phenomenon often mirrored in various aspects of cognitive and performance-based fields. However, Yang's findings reveal a surprising twist: excellent memory skills do not necessarily correlate with sharp metamemory monitoring abilities.

In his presentation, Yang discussed a comprehensive meta-analysis involving 1,694 participants across 15 studies, which showed a consistent but weak negative correlation between memory abilities and metamemory accuracy. This finding was intriguing as it contradicted the intuitive belief that better memory should enhance one's ability to monitor and judge their memory processes accurately.

Yang conducted two pivotal experiments to validate this counterintuitive conclusion. The first experiment used binary Judgments of Learning (JOLs), where participants assessed their memory immediately after learning sessions. This approach confirmed the negative correlation, indicating that individuals with higher memory abilities often made less accurate self-assessments.

In a quest to minimize any interference between memory performance and JOL accuracy, Yang embarked on a second experiment. This study involved measuring memory and metamemory performances through separate tasks to ensure that each was assessed independently. The results from this experiment also found a negative correlation.

Yang discussed potential cognitive quirks that could explain this unexpected relationship. He highlighted the "confidence deficit hypothesis" as a possible explanation for this phenomenon, suggesting that individuals with top-tier memory performance might not always excel in self-monitoring due to the different cognitive strategies they employ. It's possible that high-ability learners remember many items but mistakenly believe they will forget them, leading to discrepancies in their JOLs.

These insights provide a novel understanding of cognition, where superior memory does not always align with self-evaluation. Yang's research offers implications for educational strategies and policymaking. It challenges educators and psychologists to rethink how they support high-performing individuals, particularly in developing strategies that enhance both memory and metamemory skills.

Yang's presentation sparked a series of discussions and questions among the audience, eager to explore how these new insights could transform educational approaches and cognitive training programs. His work exemplifies the complexity of cognitive functions and underscores the importance of a nuanced approach in educational and psychological practices.

Experimental results often prompt us to rethink and change our entrenched impressions, advancing further on the path of understanding. What new thoughts will tomorrow's conference bring us? Let us look forward to it with anticipation!

 
 

Investigating Concept Maps for Judgement of Learning Calibration
Speaker: Christopher M. Cischke, Michigan Technological University, USA

Summary by Xueqing Chen, University of Bristol, UK

Christopher Cischke, an Associate Professor at Michigan Technological University, has dedicated his career to digital logic, embedded systems, and circuit design. After completing his education at the University of Minnesota, he returned to Houghton to teach and further his research while pursuing a PhD in Computational Science and Engineering.

At Friday's conference, Cischke presented research on improving student learning outcomes using Judgments of Learning (JoLs) and concept maps. His research critically evaluates the traditional assumption that students inherently understand how to assess their own learning effectively.

Cischke's findings explore the challenges students face in accurately determining what to study to excel in exams. JoLs have traditionally been used in laboratory settings for simple memory tasks; when applied in real educational environments, delayed judgments (evaluations made after a period rather than immediately) yield more precise self-assessments because they require active memory retrieval.

Further advancing his exploration, Cischke introduced concept maps as a tool to augment the accuracy of JoLs. These visual tools, which organize and interconnect knowledge similarly to human memory's structure, provide robust cues that enhance learning and assessment accuracy.

Through two experiments, Cischke investigated the effectiveness of traditional JoLs compared to concept-map enhanced JoLs. Under the concept map condition, students were asked to complete the aspects of the concept map and to find errors on the concept map. The results revealed that concept maps significantly enhanced the calibration of students' judgments regarding their understanding and mastery of the material. This enhancement is attributed not only to the increased time spent on tasks, but also to the alignment of cognitive structures with the learning content, facilitating deeper understanding and retention.

A key insight from Cischke's work is that concept maps can significantly improve how students calibrate their learning assessments in engineering subjects. This calibration is crucial because it influences how students allocate their study time, which in turn affects their academic performance.

Cischke's findings indicate that integrating concept maps into educational settings could transform teaching strategies by fostering a more dynamic and reflective learning environment. However, the interplay between study time and learning efficiency is intricate and warrants additional investigation to enhance educational outcomes.

In conclusion, Cischke's research provides compelling evidence for rethinking conventional methods of learning assessment. Educators can dramatically improve how students engage with information by integrating structured, reflective tools like concept maps into educational practices, leading to more effective learning outcomes. His ongoing work continues to explore innovative ways to enhance teaching methods and student achievements in complex cognitive tasks, pushing the boundaries of educational psychology and suggesting new avenues for improving educational practices.

Next, we'll create more concept maps for conference reports to aid learning! What will Saturday's conference bring to our daily lives? Stay tuned to find out!

 

People lexically align more to partners who exhibit comprehension difficulties
Speakers: Rachel Ostrand, IBM Research; Victor Ferreira, University of California, San Diego

Summary by Melinh Lai, University of Chicago, USA

Rachel Ostrand discussed her work with Victor Ferreira on lexical alignment, the phenomenon where speakers select what words to produce to match the features or needs of their conversational partner. Most examinations of lexical alignment focus on how speakers use the knowledge they share with a conversational partner or the underlying biases they have about a broader group to decide whether and how they will align their word choice patterns with another person. But what the field has yet to investigate is whether speakers use more immediate signals of a conversational partner’s overall proficiency with understanding language to decide whether to adaptively align their wording patterns. In other words, Ostrand and Ferreira sought to understand how real-time conversational feedback about another person’s comprehension proficiency affected a speaker’s decision to engage in lexical alignment.

More specifically, they were interested in whether the lexical alignment phenomenon is primarily driven by an automatic priming mechanism, the processing of social-affiliative factors, or a perception of general communicative utility. To investigate, they observed how participants communicated with online partners, which were in reality automated chatbots, while working together to order items from a catalog—a task the authors designed to require clear communication between the partners. Importantly, the items in this "catalog" were objects that have two names, one more commonly used than the other, like couch or sofa (couch was the primary name for the associated piece of furniture for about 70% of a survey sample).

During the Exposure Phase, the online “partner” always used the less-common names for objects, and participants would pick out the appropriate item from a set of 4 objects. After several trials of exposure, the experiment moved on to the Test Phase, where participants viewed images of objects that they then named for their partner, who would then select the correct item if the participant had used the same less common name that the partner used during the Exposure Phase. Alternatively, if the participant used the more common label, the partner would respond that they did not understand and move on to the next trial. The authors also varied how frequently the chatbot partner made this response; put another way, they manipulated how proficient the chatbot partner was in comprehending object labels it had not previously used.

Ostrand and Ferreira used this procedure of placing a human participant in conversation with a chatbot in two experiments. In the first, participants were informed that their online conversational partner was a chatbot; in the second, they were told (falsely) that their partner was another person. In both experiments, participants showed more lexical alignment across all conditions in which they had been exposed to the chatbot partner's lexical preferences. Both experiments also showed that lexical alignment occurred particularly frequently when the chatbot partner demonstrated low comprehension proficiency for object names it had not previously used.

Comparing the two experiments showed that participants tended to align more with their partner when they believed it to be a chatbot than when they believed it was another person. Together, the results suggest that lexical alignment is driven less by a priming mechanism or by social factors; instead, speakers seem to register certain features of their conversational partners, including high-level properties such as whether the partner is human as well as the partner's overall comprehension abilities, and then make adaptations that allow for easier communication.

 

Confirmation or Surprise? How Predictable Language Supports Young Children's Fast-Mapping New Word-Referent Pairs
Speaker: Kirsten Read, Santa Clara University, USA

Summary by Melinh Lai, University of Chicago, USA

Kirsten Read presented a great investigation of the relationship between children’s predictions of upcoming words and their subsequent learning. The influence of prediction outcomes on learning and memory is of great interest to many psycholinguists. On the one hand, several theories suggest that incorrect predictions propagate error signals through the language system that will inform and change future predictions. In other words, these theories argue that people learn from their mistaken predictions. Conversely, other theories have posited that correct predictions are more conducive to learning, since making an accurate prediction of upcoming language stimuli frees up cognitive resources that can be applied to efficient encoding and retrieval.

To examine children’s predictions and word-learning further, Read and her colleagues tested 3- to 5-year-olds’ abilities to make accurate guesses about upcoming words while listening to rhyming stories, with the idea that the rhymes readily encourage children to make predictions (think about the appeal of Dr. Seuss books!). The stories consisted of animal characters with different names, and each story was written to either end with a likely animal or an unlikely animal (both of which rhymed with the preceding story). The manipulation of a likely or unlikely word in these prediction-encouraging stories thus would serve as either prediction confirmations (with the likely animals) or prediction errors (the unlikely animals).

Sixty 3- to 5-year-olds visiting a children’s museum, where the researchers conducted their experiment, listened to the stories while looking at a display with three animals. Read and her team video-recorded the children during each trial to examine whether the children looked at target animals (i.e., animals that the children were predicting) before the animals were named. The children’s memory of the animals and their associated names was tested after they heard the stories, which the researchers then correlated with their looking behavior during the stories.

The results showed that children found the likely targets (which were more predictable in these rhyming stories) to be more memorable than the unlikely targets. They also tended to look at the likely animals more often than the unlikely animals, suggesting that they were indeed predicting these targets. Moreover, the frequency of children’s looks to the more likely animals correlated with their later memory for the names of the different animal characters, while there was no such correlation between looks to the unlikely (and unpredicted) animals and memory for their names. In other words, these findings suggest that correct predictions are helping to drive children’s learning of animal-name associations.

To investigate this relationship further, Read and colleagues also re-classified their data according to children's general looking behaviors before they heard the targets. Trials in which a child looked at a distractor item on the display, but not at the target, before hearing the target word were classified as "Surprise" trials; they reflect occasions where the child's prediction was disconfirmed, as exemplified by their looks to an animal that would later turn out to be incorrect. Trials in which a child looked at the correct target before actually hearing it were classified as "Confirmation" trials, because the looking behaviors suggest that the child made a prediction that turned out to be correct. Analyses of the re-classified data aligned with the previous analysis: children correctly remembered animal-name pairings more often after Confirmation trials than after Surprise trials. Both analyses of these data thus suggest that children's learning of at least animal-name associations is driven more by accurately predicting an upcoming word than by making a prediction error.

  
 

Does Pretesting Enhance Learning When It Is Done Covertly?
Speakers: Michelle Rivers, Santa Clara University; Ashley Berdelis, Texas Christian University; Steven C. Pan, National University of Singapore; Uma Tauber, Texas Christian University

Summary by Daniel Pfaff, University of California, Santa Cruz, USA

Would you have guessed that thinking through answers to questions helps you learn the answer?

If you answered that question in your head, you’d be one step closer to learning the real answer (which is yes, by the way).

Michelle Rivers (with Ashley Berdelis, Steven C. Pan, and Uma Tauber) spoke at the Friday morning session on Testing Effects about their research into prequestioning and how it improves performance.

Prequestioning is the practice of formulating an answer to a question before subsequently learning the answer to that question. When you are tested again on the same questions, your performance goes way up compared to if you had never seen the prequestions. Researchers hypothesize that this is either because the prequestion activates related prior knowledge, or because the prequestion directs attentional processes to the most relevant information for subsequent encoding. In other words, it’s as if you either get practice recalling the relevant information or get ready to recall it.

But Rivers and colleagues wanted to test whether just thinking about the answer, rather than answering explicitly, would confer the same benefits. To do this, they asked participants questions about a particular topic, like the planet Saturn, and then presented a short passage containing the answers to all those questions. Then, participants answered either those same questions or questions related to the prequestions. As a baseline, some participants simply read the passage and then answered the multiple-choice questions.

Their research suggests that, yes, covert prequestions like “Think about the answer to this question about Saturn’s rings” still show some benefit over the control condition where no prequestions were asked. Prequestions that required a response, called overt prequestions, did have a larger benefit, though. Interestingly, there was even a boost in scores on the related questions in both overt and covert prequestion conditions. It seems that any type of prequestioning grants a general boost to testing scores!

Rivers and her team also asked participants after the experiment whether they had employed any particular strategy during the prequestions, knowing that those questions or related ones would appear after reading the passage. The most popular answers were reading, memorizing, and responding to the question, but no particular strategy was linked to better scores.

When asked about transferring these results to the classroom, Michelle cautioned against applying the results wholesale. When she asks a question in her classroom, Michelle still instructs her students to raise their hands if they have an answer in their heads. However, this is just to ensure that the students pay attention to the lecture!

Next time you’re in front of a group, try to let them answer your main questions first. At the very least, they will learn your answers even better. But who knows? They also might give you a better answer than the one you had!

 

Examining the Mechanisms and Boundary Conditions of the Internet Fixation Effect
Speakers: Dana-Lis Bittner, Mercedes T. Oliva, and Benjamin C. Storm, University of California, Santa Cruz, USA

Summary by Daniel Pfaff, University of California, Santa Cruz, USA

Did anyone else do speed-googling in their typing class? I have a distinct memory of practicing how to google keywords in a way that brings up exactly the information you need as quickly as possible, often while racing your fellow classmates. Not only did this teach speed typing skills and how to navigate Google search results, but it might also have made us all dependent on Google!

In her talk at the Cognition and Technology session on Friday afternoon at the 2024 Psychonomic Society Annual Meeting, Dana-Lis Bittner (with Mercedes T. Oliva and Benjamin C. Storm) presented recent research into our modern reliance on the Internet over old-fashioned memory.

Typically, we already offload some of our memory to members of our group according to expertise (like how you ask your New York City friend for some pizza recommendations to hit up during the 2024 Psychonomics Annual Meeting!). A Transactive Memory System distributes information across multiple sources to maximize available information and minimize responsibility on any one individual. But how has the Internet changed the balance of this system?

For many people, the Internet has almost superseded this Transactive Memory System, meaning that we over-rely on Google to answer questions we already know the answers to. Earlier work by Bittner's colleagues called this phenomenon Internet Fixation: people were more likely to use Google on a new set of trivia questions when they had been asked to use Google in a previous set of questions about the same topics.

Bittner went a level deeper and dug into the factors that drive this Internet Fixation. What if people were simply unconfident in their own memory? To answer this question, Bittner and colleagues again asked participants to google the answers to ten trivia questions, but for the new trivia questions, Bittner asked participants whether they knew the answer before they had a chance to google. However, all participants (even those who never used Google on the previous questions) reported about the same levels of confidence. Bittner also asked questions about topics that had come up previously and about brand-new topics.

Surprisingly, the Internet Fixation Effect was similar across both familiar and new topics. However, maybe a reliance on the Internet for these trivia answers is not so surprising, considering that the Internet is almost always accurate and helpful. So, Bittner decided to change that.

In a follow-up set of experiments, Bittner again presented participants with trivia questions, but instead of letting them google freely, they presented them with edited Wikipedia pages on the topics, changed to omit some of the information needed to answer the questions. Some of these edited Wikipedia pages even had zero answers to the questions Bittner asked.

Despite what you might expect, none of these edited pages in the training phase changed participants' behavior in the second phase of the experiment! Even participants who had always been directed to unhelpful Wikipedia pages opened the new pages just as much when the final round of questions started. However, Bittner and colleagues suspected that participants simply readjusted to the helpful Wikipedia pages in the second phase, as these were completely unedited and very helpful. So, they ran a second version of this paradigm in which the Wikipedia pages in the second phase were always unhelpful, and yet found similar results.

Finally, instead of switching topics in the second phase, Bittner and colleagues presented ten new questions on the same topics as the first phase of the experiment, still using the edited Wikipedia pages. Only then, in the condition with entirely unhelpful pages, did participants' use of the Wikipedia pages sink to baseline levels, but no lower: rather than dropping to zero, participants still opened these extremely useless Wikipedia pages as much as participants in the control condition did.

Bittner characterized these results as evidence that the Internet is our habitual transactive memory partner (AKA ole reliable). However, Bittner hesitated to label this relationship as all good or bad. Sure, we are relying less on our own memory, but the Internet will always know more, know it faster, and know it more reliably than we ever will. Why shouldn’t we use it?

Stretch those typing fingers, loyal readers, and happy googling!

 
