
60th Annual Meeting of the Psychonomic Society





Coverage of the 60th Annual Meeting




View the Psychonomics 2019 Photo Album
Photo albums are available for each day of the conference.

Watch the Psychonomics 2019 Keynote Address, Symposium I, and Symposium II

Special Reports

#psynom19: The Twitter(nome) Wrap-Up
A Psychonomics 2019 report by Stephan Lewandowsky (University of Bristol, UK)

#psynom19: The Next Generation of Psychonomes
A Psychonomics 2019 report by Stephan Lewandowsky (University of Bristol, UK)

DAILY RECAP: Sunday, November 17

Thank you to everyone who participated in Psychonomics 2019! The Psychonomic Society annual meeting is designed to promote science and foster community among scientists. This year's conference, which took place in Montréal, Québec, Canada, from November 14-17, 2019, attracted over 2,220 attendees, and showcased scientific research from around the world. The conference featured a keynote address by Judith Kroll (University of California, Irvine, USA), four symposia, over 300 talks, and over 1,000 poster presentations.

A series of Psychonomics 2019 special reports can be viewed below. Conference participants actively engaged in an online discussion using #psynom19, which can be followed on Twitter. Photo highlights from each day are posted below, and the full Psychonomics 2019 photo album will be published on November 30.

Psychonomics 2020 is scheduled for November 19-22, 2020 in Austin, Texas, USA. The call for symposia opens in February, and the call for abstracts opens in March. Mark your calendar and we'll see you in Austin in 2020! 

Photo Highlights (Sunday, November 17) | View the full Photo Album for Sunday, November 17






DAILY RECAP: Saturday, November 16

Psychonomics 2019 Reports: 

Photo Highlights

Facts Context Judgment 
A Psychonomics 2019 report by Stephan Lewandowsky (University of Bristol, UK)
Perhaps more than any other branch of psychological science, psychophysics offers a window into the nature of our mental representations. That’s why we now know with some confidence that for us to detect an increase in the magnitude of a stimulus, we need to add a constant fraction of its original magnitude—the famous Weber’s fraction. If you take a 10 cm long line, then another line must be 10.5 cm long (let’s say) for you to be able to tell the difference. If the first line is 20 cm long, it takes 21 cm for the second one to be identified as longer. Whatever the lengths, the difference must be a constant fraction (in this case .05) for it to be detected.
Or is it?
Stephen Link related the case of Ewald Hering, a student of the brilliant Gustav Fechner, who almost single-handedly established the discipline of psychophysics. Hering presented data that seemed to violate the constancy of Weber’s Law: he found that small weights required a proportionally larger increment than heavier weights for a difference in weight to be detected. The effect was striking and seemingly robust: the Weber fraction declines from around .05 for small weights (250 g) to around .01 for large weights (2.5 kg).


Does this mean Weber was wrong?


Not so fast. When Fechner examined Hering’s methodology in detail, he found that the procedure required people to lift their forearm to judge the weight attached to their hand by a sling. In other words, people had to lift not only the stimulus weight, but their own forearm. Add the estimated weight of the forearm to that of the stimuli (apparently around 2kg), and bingo—the Weber fraction for all of Hering’s weights turns into a constant. 
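Fechner's fix is easy to verify with a few lines of arithmetic. In the sketch below, the underlying Weber fraction of roughly 0.0056 and the 2 kg forearm weight are illustrative values back-solved from the numbers in the text, not figures from Link's talk:

```python
FOREARM_G = 2000  # Fechner's estimated weight of the forearm (~2 kg)

def observed_fraction(stimulus_g, true_fraction=0.0056):
    """The Weber fraction Hering would report: the detectable increment is
    a constant fraction of (stimulus + forearm), but he divided it by the
    stimulus weight alone."""
    increment = true_fraction * (stimulus_g + FOREARM_G)
    return increment / stimulus_g

# Reproduces the puzzle: ~0.05 at 250 g shrinking to ~0.01 at 2.5 kg,
# even though the underlying fraction never changes.
print(round(observed_fraction(250), 3), round(observed_fraction(2500), 3))
```

Dividing the same increments by (stimulus + forearm) instead of by the stimulus alone returns 0.0056 everywhere, which is exactly the constancy Fechner recovered.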


But there is a lot more to Fechner than discovering subtle problems in his student’s experimental procedures. For example, Fechner also proposed a variant of signal-detection theory in the 19th century—nearly 100 years before it was reinvented in the 1950s. Link’s paper about this historical event can be found here.


Back to top


Framing Space: A Cross-Cultural Assessment of Scene Cognition
A Psychonomics 2019 report by Anna M. Wright (Vanderbilt University, USA)

Research has shown a difference in the way Western and East Asian populations emphasize the various aspects of a scene. While Westerners tend to emphasize items in the foreground, East Asians tend to focus more on the background contents. Helene Intraub presented research investigating how cross-cultural differences in language and categorization might affect basic perceptual processes, such as anticipatory representation beyond the view of a photograph.

In one experiment, Western and East Asian participants were asked to take photographs of a person in a room. The photos taken by East Asian participants contained more background context and were taken from farther away from the model. This confirms the differences found in the literature but does not get at how these differences might influence other processes.

In a second experiment, participants were presented with individual pictures to memorize and were later presented with that same picture with either expanded or contracted borders at testing. When told to manipulate the borders by dragging them in or out horizontally and vertically using a mouse, both groups showed anticipatory representation beyond the view of the photograph through significant boundary extension. However, there were no differences across cultures in framing the picture space from memory.

While these experiments show cross-cultural differences in space framing, with East Asians framing space more holistically and Westerners framing space in a more model-centered way, they show no difference in the spatial expanse of the image. This suggests that anticipatory spatial construction is not tied to culture.

Back to top

The Frog and the Tadpole: Fighting the Hindsight Bias 
A Psychonomics 2019 report by Anna M. Wright (Vanderbilt University, USA)
There’s an old Korean saying, “The frog knows not the tadpole,” that describes the tendency people have to inflate knowledge of an event’s outcome after it has occurred. This phenomenon is known as “hindsight bias.” It is unclear how to minimize or eliminate this bias or if there are differences in the size of this bias across age ranges and cultures.
In her talk on Saturday morning, Lisa Son discussed the potential that mental time-travel has to reduce the hindsight bias. Mental time-travel is effectively remembering a time when we did not yet know what we know now. An extension is the ability to imagine how others will react when they do not yet know what will happen. A set of experiments was conducted to investigate cultural and age differences in the hindsight bias, and whether this bias can be eliminated by encouraging mental time-travel. In each experiment, participants were given a picture identification task, where they were shown a blurred image followed by a succession of increasingly clearer images until they indicated that they knew what the subject of the picture was.
One experiment included Korean and American participants. While both Korean and American participants showed a hindsight bias, it was less pronounced in the Korean participants. In another experiment, participants were asked to imagine that they were 3rd or 5th graders and estimate how well they would do the task at that age. In this case, the hindsight bias went away. In yet another experiment, 3rd and 5th graders were tested. They had the same identification ability as adults and exhibited the same level of hindsight bias, but they thought that adults would perform better than they actually did.
The results of these studies suggest that encouraging people to take on the perspective of an unknowing other does not always reduce hindsight bias, and when it does, it seems to be due to the idea that ‘adults know best’ and not due to mental time-travel.
Is It Ever Justified to Use d' as a Measure of Sensitivity? Researchers Should Adopt da as Their Default Measure of Sensitivity
A Psychonomics 2019 report by Yi-Pei Lo (University of Illinois Urbana-Champaign, USA)

When researchers use old-new recognition memory tests, d' is often used as a measure of sensitivity. Sensitivity is the ability to distinguish old items (presented on a study list) from new items (not presented on a study list). When d' is used, two assumptions are made: the underlying distributions are Gaussian, and their variances are equal. Yonatan Goshen-Gottstein claimed that the misuse of d' is common in the literature and can lead to incorrect conclusions. When the assumptions are violated, the risk of a false positive is high, and false positives can occur even in highly replicable results.


By using Monte Carlo simulations of d', Goshen-Gottstein and colleagues found that sensitivity measured by d’ is confounded with bias. He went on to argue that all the studies over the past 50 years that used d’ to establish changes or differences in sensitivity, reached a conclusion unsupported by the data.
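A toy Monte Carlo along these general lines (not Goshen-Gottstein's actual simulation; the means, variances, and criteria below are made-up illustration values) shows the confound: when old-item variance exceeds new-item variance, the d' estimate shifts with the response criterion even though true sensitivity is fixed:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def simulated_d_prime(criterion, n=100_000, old_mean=1.0, old_sd=1.25):
    """Estimate d' from one simulated old/new test with unequal variances."""
    old = rng.normal(old_mean, old_sd, n)  # evidence for studied items (wider)
    new = rng.normal(0.0, 1.0, n)          # evidence for unstudied items
    hit_rate = (old > criterion).mean()
    fa_rate = (new > criterion).mean()
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Underlying sensitivity never changes; only the criterion (bias) does.
# Yet the d' estimate moves with the criterion:
print(simulated_d_prime(criterion=0.2), simulated_d_prime(criterion=1.0))
```

In this setup a conservative criterion inflates d', so a manipulation that only shifts bias would masquerade as a sensitivity difference.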


What’s the solution? Monte Carlo simulations estimated considerably fewer false positives when using da instead of d'. Goshen-Gottstein therefore proposed using da as the default sensitivity measure because it does not assume that the variances are equal.
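For a single hit/false-alarm pair, the two measures can be sketched as follows. The zROC slope s (the ratio of new- to old-item standard deviations) must in practice be estimated from confidence-rating data; here it is simply passed in as an assumed value:

```python
import math
from scipy.stats import norm

def d_prime(hit_rate, fa_rate):
    """Classic d': assumes Gaussian distributions with equal variances."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

def d_a(hit_rate, fa_rate, slope):
    """d_a: drops the equal-variance assumption; `slope` is the zROC slope."""
    zh, zf = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return math.sqrt(2.0 / (1.0 + slope**2)) * (zh - slope * zf)

# With slope = 1 (equal variances) d_a reduces to d'; with slope < 1
# (old items more variable, the typical recognition finding) they diverge.
print(d_prime(0.8, 0.2), d_a(0.8, 0.2, slope=0.8))
```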


Back to top

Is the "Learning from Errors" Benefit Due to Semantic Mediation or Episodic Recollection?
A Psychonomics 2019 report by Michelle Rivers (Kent State University, USA)
Nobody likes making mistakes, but research suggests that making errors can benefit learning – people remember more when an error is committed and corrected as compared to when the answer is given without error generation. But why is this the case? 
The semantic mediation hypothesis suggests that errors serve as a meaningful bridge to the correct answer. By contrast, the episodic recollection hypothesis suggests that remembering the surrounding context under which the error is made is critical to observing a benefit for making errors.
Janet Metcalfe and her colleague Barbie Huelser tried to differentiate between these two hypotheses. They created congruent (e.g., wrist-palm) and incongruent (e.g., tree-palm) cues for the “correct” answer (e.g., hand). Participants in the congruent condition generated errors that were semantically related to the target words (e.g., finger), whereas participants in the incongruent condition generated errors that were unrelated to those words (e.g., coconut).


If the semantic mediation hypothesis is correct, then “finger” should be better than “coconut” as a bridge to the correct answer “hand.” That is, related words should provide a semantic mediator to the answer, but unrelated words should not.


What did the researchers find? Both conditions produced an error-generation benefit on a final test – results that are difficult to explain with the semantic mediation hypothesis. These results contradict Metcalfe’s expectation, as she thought they would find support for semantic mediation.


To quote Metcalfe, “I thought I was right, but now I’m eating crow.” It takes humility to announce this to an audience of critical scientists, but these findings move us one step closer to understanding why errors can benefit memory. 


Back to top 


What Can Essay Response Scoring Tell Us about When Blocked Versus Interleaved Practice Will be Beneficial?
A Psychonomics 2019 report by Raunak Pillai (Vanderbilt University, USA)
Essay scoring for tests like the GRE (Graduate Record Examinations) shares many similarities with categorization tasks. That is, human raters are given a set of prototypical essay responses and must identify which of those score categories each new exemplar belongs to. But do the principles underlying categorization learning also apply to essay scoring? 
In this talk, Bridgid Finn explored experiments designed to assess the efficacy of various kinds of training on essay scoring abilities. In particular, training sequence (blocked vs. interleaved problem types), review type (retrieval vs. studying), and feedback type (elaborate vs. less complex) were manipulated. Overall, training sequence and feedback type seemed to have little effect on essay scoring accuracy, whereas retrieval seemed to consistently improve essay scoring performance relative to studying alone. This result echoes other recent work on the benefits of retrieval during study.
However, when looking at the average bias (that is, the average difference from the correct score for incorrectly scored essays), it appears that people who studied problems were more strict than warranted, whereas people who were tested during learning were less strict than warranted. Moreover, the scoring bias for people who studied problems in a blocked fashion was more negative than the scoring bias for those who studied in an interleaved fashion.


Overall, these findings offer some guidance for effective means of training people on essay scoring tasks (retrieval is best!), though there is room for future work on the cause and potential solutions to rater bias.


Back to top



Photo Highlights (Saturday, November 16) | View the full Photo Album for Saturday, November 16 









Back to top


DAILY RECAP: Friday, November 15 

 Psychonomics 2019 Reports: 

Photo Highlights

The Cognitive and Neural Bases of Personal Semantic Memory: Insights from Individuals with Lesions to the Core Autobiographical Memory Neural Network 
A Psychonomics 2019 report  by Raunak Pillai (Vanderbilt University, USA)
Semantic memory involves facts about the world, such as the color of the sky, and personal semantic facts, such as one’s favorite color. In this talk, Matthew Grilli hypothesized that the latter personal semantic facts can be divided into two subcategories. The first has a spatiotemporal reference (e.g., “I often hike in the Tucson desert”). This subcategory is termed “experience-near,” and is predicted to be more closely related to episodic memory and reliant on medial temporal lobe activity. The second subcategory includes facts without such context (e.g., “I live an active lifestyle”). This subcategory is termed “experience-far,” and is thought to be closely related to general semantic facts and reliant on cortical activity.
Amnesic patients with medial temporal lobe lesions, when asked to provide autobiographical details, offered significantly fewer experience-near personal semantic facts per event than control participants, while their reporting of experience-far details was similar to that of control participants. These findings suggest that there is a distinction between experience-near and experience-far facts.
Back to top


The Distinction between Semantic and Episodic Memory

A Psychonomics 2019 report by Raunak Pillai (Vanderbilt University, USA)
Since Endel Tulving proposed the distinction between episodic and semantic memory in 1972, much research has been done on these two kinds of declarative memory. Episodic memory refers to the repository of autobiographical memories, whereas semantic memory refers to knowledge that we no longer associate with specific episodes—that is, we know what a dog is without remembering when we learned that.
However, recent evidence suggests that a reconsideration of this division between episodic and semantic memory might be warranted. In this talk, Louis Renoult addressed this issue, beginning first with a brief history of the episodic-semantic divide. As Renoult noted, Tulving originally offered this distinction between “personally experienced,” or episodic, and more general factual, or semantic, memories as a “pre-theoretical” approach. Moreover, Tulving emphasized the interdependence of these two memory stores. Since then, however, two separate traditions in memory research have emerged, largely focusing on episodic and semantic memory in isolation.


As Renoult notes, however, recent evidence suggests that the “core recollection” network, which is involved in episodic memory, overlaps significantly with the general semantic network. This evidence suggests that episodic memory requires reinstatement not only of the original sensory-perceptual information, but also of the semantic representations that occurred during the original episode. Further, this leads to the possibility that episodic and semantic stores are not as distinct as once thought. Whether or not this forces an abandonment of these categories remains to be seen.


Renoult’s talk was the introduction to a Symposium on episodic and semantic memory, which was live-streamed as a part of Psychonomics Live!, and will be available to view online after the conclusion of the conference. 


Back to top

Science, Statistics, and the Problem of Pretty Good Inference
A Psychonomics 2019 report by Kristen Bowman (Tarleton State University, USA)
The long-standing debate about how to conduct research reached a new high during the Pre-Conference Meeting of the Society for Mathematical Psychology when Danielle Navarro, Associate Professor at the University of New South Wales in Australia, forthrightly said that, “All models suck!” This strong claim was followed by a more detailed explanation of why many researchers are led to believe that there are mandatory rules for conducting research. As Navarro stated, these rules appear to be necessary, at first glance, to protect the psychological sciences from “total chaos.” (Her slides can be found here, and you will find them to be quite expressive, full of animated GIFs and some computer-generated art by Danielle herself.)
While these rules protect research from dangers, such as p-hacking, they may also inadvertently limit the realm of research possibilities and approaches, which for the field of psychology could be problematic. Navarro therefore argued that these “rules” should not be rules, but should instead be viewed as guidelines for conducting research. The ultimate goal should be to enhance the quality of research and its validity in the real world, which can be achieved in a variety of ways.


Navarro explained that with statistics, p-values are not the most efficient mode of analyzing data. One alternative to traditional statistical inference is the use of Bayesian inference, the main advantage being that Bayes factors can provide evidence for the alternative or for the null hypothesis—unlike conventional statistics which provide asymmetric levels of evidence. 
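As one concrete (and deliberately simple) illustration of that symmetry, a BIC-based approximation to the Bayes factor for a one-sample test can yield evidence for either hypothesis. This is a generic textbook shortcut, not a method from Navarro's talk, and the data below are made-up illustration values:

```python
import numpy as np

def bf01_bic(x, mu0=0.0):
    """BIC approximation to the Bayes factor BF01 (null over alternative)
    for H0: mean == mu0 versus H1: mean free. Values > 1 favor the null."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    rss0 = np.sum((x - mu0) ** 2)        # residuals under the null
    rss1 = np.sum((x - x.mean()) ** 2)   # residuals with the mean estimated
    bic0 = n * np.log(rss0 / n)
    bic1 = n * np.log(rss1 / n) + np.log(n)  # one extra free parameter
    return np.exp((bic1 - bic0) / 2)

print(bf01_bic([0.1, -0.1, 0.05, -0.05, 0.02]))  # > 1: evidence for the null
print(bf01_bic([2.1, 1.9, 2.0, 2.2, 1.8]))       # < 1: evidence against it
```

Unlike a p-value, which can only ever reject the null, the same quantity here quantifies support in both directions.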


Furthermore, Navarro explained that statistics do not make scientists. Instead, scientists should use statistics to make inferences, which is best supported by guidelines rather than by simply following rules. While rules may provide a sense of safety and trust, the better solution is to fully document all elements of studies to permit public scrutiny. Furthermore, we need to consider how to update our ideas of what these rules should be and what we should do in our practices. By updating, we could gain a broader understanding of how to conduct modern research.


Back to top 


What the Font Size Illusion for Nonwords Tells Us about Metamemory

A Psychonomics 2019 report  by Raunak Pillai (Vanderbilt University, USA)
The font size illusion occurs when learners predict better learning for words that are in larger fonts, contrary to actual memory performance. In this talk, Monika Undorf described three experiments designed to examine what this effect says about our metamemory abilities.
In Experiment 1, memory and metamemory for words (e.g., leap), nonwords (unpronounceable letter strings, e.g., jxfb), and pseudowords (pronounceable nonwords, e.g., nund) were compared. The font size illusion was found for words and pseudowords, but not for nonwords.
 To investigate whether these results occurred because the learners discounted the harder nonwords, memory and metamemory were assessed for high contrast stimuli (words and nonwords) and low contrast stimuli (pseudowords and nonwords) in Experiment 2. Contrary to the results in Experiment 1, nonwords elicited a font size illusion in both contrast conditions. This finding led to the hypothesis that the proportion of nonwords mediated the presence of a font size illusion.
To test this hypothesis, in Experiment 3, learners studied either many words and few nonwords, or few words and many nonwords. The font size illusion was larger in the condition with few words and many nonwords, which suggests that learners may discount their ability to learn nonwords when nonwords are lower in frequency and more difficult to process.
Together, these results are consistent with frameworks suggesting that processing fluency influences metamemory judgments. Moreover, the relative difficulty of the stimuli plays a role in mediating the magnitude of the font size illusion.
Back to top


Why You Need Quirks to Show How It Works

A Psychonomics 2019 report by Michelle Rivers (Kent State University, USA)
You may have heard the story of Archimedes, who had his “aha!” moment about the displacement principle while taking a bath. Many moments of insight occur outside of traditional work contexts. But why is that?
When people hit an impasse while solving problems, it helps for them to take a break (or “incubation period”) to allow old, inappropriate solutions to fade from their mind, and newer, better solutions to come to mind. Researchers Steve Smith and Zsolt Beda investigated whether new contexts, not associated with mental blocks, could also help with problem solving through a similar process. 


To test this idea, participants studied sets of words (e.g., fortune, fat, chart) in anticipation of a memory test. These words were presented on a screen with a particular background context (e.g., a picture of a beach). Little did the participants know, these studied words were meant to mislead them – by inducing “mental fixation” – when attempting to solve creative problems (generate a word that relates to 3 presented cue words; e.g., luck, belly, pie; solution: pot). Participants had to solve these problems either immediately or at a delay, and either on a screen with the same background as the fixation-inducing words (old context) or on a screen with a novel background (new context). Replicating prior research on incubation effects, participants correctly solved more problems at a delay than immediately, and this effect was larger when participants solved problems in a new context compared to an old one.


These results provide empirical support for the idea that taking a break, particularly one in a novel context, can boost creativity. So, next time you are trying to resolve a complex problem – perhaps one you face in your own research – consider taking a moment to step outside of the lab.


Back to top 


Photo Highlights (Friday, November 15) | View the full Photo Album for Friday, November 15








DAILY RECAP: Thursday, November 14 | View the full Photo Album for Thursday, November 14

Bilingualism Reveals the Networks that Shape the Mind and Brain
A review of the Psychonomics 2019 keynote address

Watch Video

By Stephan Lewandowsky (University of Bristol, UK), Digital Content Editor

Canada has two official languages, French and English, and with a population of nearly four million, Montreal is the 4th-largest Francophone city in the world. (Paris is the largest, and you get bonus points for guessing the 2nd and 3rd.) What a great place to schedule a keynote by one of the world’s foremost researchers on bilingualism.

Bilingualism has been studied by cognitive scientists for decades. In 2005, in the 125th anniversary issue of Science, Donald Kennedy and Colin Norman listed 125 questions that pointed to critical knowledge gaps that science should address. Among them was the question of why children can learn a first language (“L1”) with seeming ease whereas adults struggle even to acquire the basics of a second language (“L2”) later in life.

Enter Judith Kroll and her research program that contributed to re-assessing this traditional view, namely that L2 acquisition by adults is beset with difficulties. Much data has suggested that the later in life people start learning a second language, the more likely it is that they retain an accent and are unable to acquire the full set of nuances of L2 grammar. This need not prevent anyone from becoming a famous actor and Governor of California, but it does suggest that brain plasticity diminishes with age.

The principal message of Kroll’s keynote was that we now know that there is far greater plasticity in adulthood than was previously acknowledged. This includes our ability to acquire and use a second language. A corollary of this increased emphasis on plasticity is that bilingualism is not a simple binary, all or none question. It turns out that there are many different variants of bilingualism, depending on the cultural and cognitive context in which a speaker operates, and intriguingly there are even different types of monolingualism. Yes, as we shall see, people who speak only one language may nonetheless listen to other languages in different ways depending on the environment in which they live.

Kroll addressed several main points about bilingualism that added up to a fascinating picture of this burgeoning research area.

Is there a bilingual advantage?

Media coverage often highlights the purported cognitive advantages of being bilingual—bilingualism has been called the “mental gymnasium for the brain.” Indeed, there is considerable evidence for advantages that arise from mastering multiple languages. 

Early in life, brain scans of babies raised in bilingual homes identify activation patterns associated with executive functioning as early as 11 months of age. Monolingual babies, by contrast, do not show this cognitive maturity as early in life. 

One remarkable aspect of this apparent cognitive advantage is that, later in life, bilingual people appear to perform cognitive tasks with greater efficiency than their monolingual counterparts. For example, in the classic flanker task, in which people must make a choice between a target stimulus and other interfering stimuli nearby, bilingual participants show less activity in the anterior cingulate cortex, a brain region implicated in conflict monitoring, than monolingual participants. It appears that because of their lifelong experience juggling two languages, bilinguals can perform an interference task with greater efficiency: they require fewer neural resources to do the job.

The same idea may underlie the observation that bilingualism may delay the onset of the symptoms associated with Alzheimer’s dementia by 4-5 years. Intriguingly, at the point of diagnosis, however, the bilingual brain appears to be more affected by the disease than the brain of a monolingual patient. This again suggests that bilinguals may use their brain with greater efficiency, hence delaying the onset of their visible symptoms without however slowing the physical progress of dementia. 

What happens to L1 during acquisition of another language and its subsequent use?

One of Kroll’s principal hypotheses was that bilinguals learn to regulate their first language (L1) while acquiring L2. 

The pervasive finding in the modern literature is that bilinguals activate both languages in parallel, and this activation creates many opportunities for cross-language interaction. Intriguingly, both languages are active even when the person is completely unaware of this co-activation. The second language comes “online” very early on: in EEG studies, processing of L1 can be shown to be affected by L2 learning after just one semester, at a point when behavioral indices would not reveal any effect of L2 on L1, and when learners exhibit very little apparent proficiency in the new language.

When do the effects of dual-language use come online? Behaviorally, learners look like monolinguals in L1 early on: cognate status has no effect on L1 lexical decision. In EEG, however, an effect on L1 emerges after just one semester of L2 acquisition, as a difference between cognate and non-cognate Spanish words in lexical decisions by native English speakers. This shows that cross-language sensitivity develops early.

Eventually, once people acquire greater proficiency in the second language, their L1 performance takes a notable hit. For example, when native English speakers are immersed in a Spanish-speaking environment to accelerate their L2 learning, their ability to generate exemplars from a category in English is impaired relative to people who stay in an English-speaking environment but are also trying to learn Spanish in a classroom. In other words, your ability to generate “apple” and “banana” and so on when given the prompt “fruit” is diminished after you spend some time in Granada surrounded by Spanish-speakers.  

Your English takes a hit when you learn Spanish or some other second language. 

This “hit” may however be very desirable because it enables L2 learners to inhibit interference from their native language and to become more proficient in L2 more quickly. 

How do bilinguals coordinate their language use?

The final point made by Kroll is that bilingualism is dynamically coordinated by speakers depending on the interactional and cultural context of language use. 

The ability of bilinguals to coordinate their language use is very refined indeed. For example, habitual “code-switchers” (i.e., people who constantly switch between languages, even within a single conversation) show less processing cost—as revealed by evoked potentials—when they need to switch languages than bilinguals who do not switch languages. Basically, habitual switchers process the two languages as though they were one. 

When code-switchers talk to each other, they send subtle cues to their conversational partners about what they intend to do. For example, speech rate slows down prior to a switch to another language—and listeners become sensitive to this subtle cue and expect a switch to the other language. As Kroll put it, bilinguals learn to tango with their languages and align their language switching during a conversation. 

Intriguingly, these subtleties of bilingualism need not involve use or comprehension of a second language! It turns out that even monolinguals pick up something about another language if they are exposed to a diverse linguistic context. One of Kroll’s students discovered some striking differences between English-speaking monolinguals who live in California and their monolingual counterparts in rural Pennsylvania. Both groups learned about Finnish vowel harmony (this is not something many English speakers know about). In a generalization task to new materials, the brain activation of California participants showed greater sensitivity to vowel harmony than the brains of Pennsylvania participants—apparently the mere exposure to a multi-lingual environment was sufficient to create greater sensitivity to extremely subtle features in a foreign language. 

Wrapping it all up, Kroll emphasized that to understand bilingualism, we must capture the full social and linguistic diversity of language use to account for dynamic changes between contexts and across the lifespan. There are many different types of bilinguals, and differences among monolinguals, and we must recognize this diversity if we wish to understand how people in Montreal can live so comfortably with two different languages. And once we know that, we can always study people in Luxembourg, who routinely speak 5 different languages.

Watch Judith Kroll's Keynote Address

Back to top


Photo Highlights (Thursday, November 14) | View the full Photo Album for Thursday, November 14




Back to top  
