Amsterdam, The Netherlands
10-12 May 2018
Tackling the Confidence Crisis with Statistical Scrutiny,
Verifiable Credibility, and Radical Transparency
Chair: Balazs Aczel, ELTE Eötvös Loránd University, Hungary
If ever there was a time that psychology needed self-reflection, it is surely now. The methodological, statistical, and publication traditions that long ruled the realm of psychological research have lost much of their once-solid grounding. In this symposium, six speakers will add new thoughts to this discussion, with special focus on: issues with the p-curve procedure, how to use Bayes factors, counting researcher degrees of freedom, assessing effect size overestimation, and how to achieve full transparency of empirical research.
The P Curve Is Not What You Think It Is
Richard Morey, Cardiff University, United Kingdom
As the psychological scientific community has become aware of the methodological problems that plague the field, a class of meta-analytic statistical methods has arisen that attempts to 1) detect various sources of bias, and 2) correct for these sources of bias. Among these methods are "p curve" and "p uniform" (Simonsohn, Nelson, & Simmons, 2014; van Assen, van Aert, & Wicherts, 2015), which use sets of p values gleaned from the literature to assess the "evidential value" of that set. I will show that the p curve procedure does not do what its users and proponents claim; furthermore, due to the way it was derived, p curve can lead to dramatically mistaken inferences.
Assessing the Properties of Psychological Research from Collections of Results
Balazs Aczel, ELTE Eötvös Loránd University, Hungary
If psychological research is what psychological researchers do, then an obvious way to learn about it is to examine what researchers did in the past. It is thought that collections of statistical findings extracted from published articles provide representative samples of many decades of publication and research practices. It is also assumed that descriptive properties of the field (such as typical sample sizes or the magnitude of measured effects) and its biases (such as miscalculations, overestimations, or publication biases) are reflected in these giant data sets. Nevertheless, there are recurring questions regarding the appropriateness of these collections. In our talk, we will compare text-mined results with different estimations and demonstrate their possibilities and limitations in describing the properties of the field.
Researcher Degrees of Freedom
Jelte Wicherts, Tilburg University, The Netherlands
Designing, running, analyzing, and reporting psychological studies entail many choices that are often arbitrary. The opportunistic use of these so-called researcher degrees of freedom, aimed at obtaining statistically significant results, is problematic because it increases the chance of false positive results and may inflate effect size estimates. In this talk, I present an extensive list of 34 degrees of freedom that researchers have in formulating hypotheses, and in designing, running, analyzing, and reporting psychological research. The list can be used in research methods education, as a checklist to assess the quality of preregistrations, and to determine the potential for bias due to (arbitrary) choices in unregistered studies.
The Case for Radical Transparency in Statistical Reporting
Eric-Jan Wagenmakers, University of Amsterdam, The Netherlands
Most researchers are new to radical transparency in statistical reporting. Radical transparency means that data and analysis code are provided in a public repository, such that others can re-analyze and possibly critique the results; it means that different statistical models are explored to probe the robustness of the conclusions; it also means that statistical reporting is inclusive, in the sense that the same question is examined using multiple statistical paradigms that may yield conflicting conclusions; finally, radical transparency means that all analysis efforts are reported openly and comprehensively. In particular, analyses that produce unwanted outcomes are fully acknowledged. In order to encourage such radical transparency, it is essential that publication decisions do not hinge on the outcome, but depend solely on the scientific quality of the question and the rigor of the experimental and statistical methodology. Real-life examples will demonstrate the theoretical and pragmatic advantages of the proposed approach.
Taking Research Credibility to the Next Level, an Initiative for Verifiable Credibility
Zoltan Kekecs, Lund University, Sweden
In the midst of the credibility crisis, scientists and clinicians are looking for ways to discern trustworthy from untrustworthy studies, and for guidance on how to perform good research. Recent initiatives offer strategies and solutions to tackle some of the issues related to the trustworthiness of research. However, these approaches still leave us with significant blind spots. That is, the study process and data collection themselves are still performed in the dark, leaving room for sloppy protocol execution, unreported protocol deviations, questionable research practices, or outright fraud. We need to establish best-practice methodologies that empower researchers to conduct highly credible studies through verifiable trustworthiness. A combination of techniques will be described that can ensure as-intended protocol delivery, a fully transparent data collection process, and scientific inference that is acceptable to the majority of stakeholders in the field. These approaches are now being crash-tested in a large-scale multi-lab study.
Strengthening the Bond Between Theory and Evidence
Susann Fiedler, Max Planck Institute for Research on Collective Goods, Germany
The replication crisis in psychology has led to a fruitful discussion about common research practices and research institutions. We present a set of measures that aim at making science more efficient and research results more reliable by fostering a strategic alignment and the interlocking of all parts of the research process. The recommended changes address individuals as well as institutions and concern theory, empirical methodology, and the accumulation of evidence. The ideas put forward in this talk aim to improve the foundation for efficient research by fostering: (a) precise theory specification, critical theory testing, and theory revision; (b) a culture of openness to evidence and subsequent theory revision; and (c) the establishment of interconnected databases for theories and empirical results, which are continuously updated in a decentralized manner.
The Limits of Prediction in Language Processing
Chair: Falk Huettig, Max Planck Institute for Psycholinguistics
The notion that “brains … are essentially prediction machines” is becoming more and more influential in the cognitive and neurosciences. It is argued that prediction is a fundamental principle of human information processing and offers a “deeply unified account of perception, cognition, and action” (Clark, 2013; cf. Friston, 2010). Indeed, if prediction is the grand unifying principle of the human mind, then, of course, it must also be the unifying principle of language processing. Given the strong claims about prediction, it is surprising that few studies have directly investigated whether people routinely predict in language processing. In this symposium, new empirical results will be presented which imply that there are many real-life situations in which people may not predict language, or which suggest that prediction is less pervasive than typically assumed. The implications of these findings for theories of prediction and language processing are discussed.
When Does the Brain Predict Specific Words During Language Comprehension?
Aine Ito, Humboldt-Universität zu Berlin, Germany
Recent models of language processing propose that prediction plays an important role during comprehension. While prediction can facilitate comprehension, available evidence suggests that whether predictions occur depends on the situation in which people comprehend language. Exploring under what circumstances predictions occur will help us shed light on how much of a role prediction plays in natural language comprehension. In this talk, I will present experiments that investigated predictions of phonological/orthographic word form and of semantic information. The results suggest that prediction of word form occurs under particularly limited circumstances compared to prediction of semantic information. I argue that, during natural language comprehension, people may rarely predict specific words, while they may more readily predict broader, less detailed information about upcoming language.
Predictability Effects in Reading Require Equivocal Perceptual Evidence
Adrian Staub, University of Massachusetts, Amherst
A word's predictability, as determined by cloze probability, typically has an effect on how long a reader's eyes spend on the word. I will discuss recent research from my lab establishing that the predictability effect on early eye movement measures (first fixation duration, gaze duration) is eliminated when the reader lacks valid parafoveal preview. I argue that predictability influences primarily the earliest stages of orthographic processing, and that when early perceptual evidence is very clear, as when a word is first encountered in foveal vision rather than in the parafovea, predictability has little or no effect. I discuss the apparent paradox raised by the fact that the N400 effect of predictability is obtained in RSVP, where each word is initially encountered in foveal vision. I suggest that the resolution is to recognize that the eye movement and ERP effects of predictability may reflect distinct underlying mechanisms.
Is Prediction Production? La bocca della verità
Martin Corley, University of Edinburgh, United Kingdom
Listening to another person speaking is accompanied by speech-motor activation in the listener; this has been taken as support for the view that the language production system is implicated in prediction (e.g., Pickering & Garrod, 2007). We present a series of experiments in which participants were led to anticipate a specific word through the acoustic presentation of high-cloze sentence-stems, but were presented with an alternative item to name. In this way prediction and production competed. We employed ultrasound speech imaging to investigate articulator activity. Ultrasound analysis shows that speech-motor movements are systematically affected by predicted words, and that, contra standard theories of production, these effects are found well before acoustic word onset. Contra prediction-as-production theories, however, the perturbations in articulation are not sensitive to phonetic detail of what is predicted. This suggests that the speech motor system is predicting when it must speak, as opposed to what it might hear.
Effects of Speech Rate, Preview Time of Visual Context, and Participant Instructions Reveal Strong Limits on Prediction in Language Processing
Falk Huettig, Max Planck Institute for Psycholinguistics
In three eye-tracking experiments participants listened to simple Dutch sentences while viewing four objects. We used identical visual stimuli and the same spoken materials but varied speech rates, preview time, and participant instructions. Target nouns were preceded by definite gender-marked determiners, which allowed participants to predict the target object. In Experiment 1, participants had four seconds of preview and sentences were presented at either a slow or a normal speech rate. Participants predicted the targets as soon as they heard the determiner in both conditions. Experiment 2 was identical except that participants were given only a one-second preview. Participants predicted the targets only in the slow speech condition. Experiment 3 was identical to Experiment 2 except that participants were explicitly told to predict. This led to only a small prediction effect in the normal speech condition. These findings are problematic for theoretical proposals that assume that prediction pervades cognition.
Investigating Reliance on Prediction in High-Functioning Older Readers
Shruti Dave, Northwestern University
Highly educated and healthy older adults have a wealth of both reading experience and semantic knowledge, and are therefore likely to maximize use of heuristic, i.e., probabilistic internal models when processing incoming content. Prediction has been described in exactly these terms: incrementally generated pre-activations of specific items across unfolding context. Using a self-report paradigm, we separately assessed neural indices (ERPs) associated with prediction accuracy and plausibility. Relative to young adults, older readers showed reduced N400 facilitation for both predicted and plausible information, and instead devoted their neural resources to post-N400 contextual updating processes. Critically, this pattern was exacerbated in the older adults with the highest vocabulary and verbal fluency scores, suggesting that readers with high knowledge stores, access to that knowledge, and a lifetime of strategies for effective reading are less reliant on anticipatory mechanisms than prevailing frameworks of prediction would suggest.
Prediction versus Newness
Fernanda Ferreira, University of California, Davis
The comprehender’s ability to anticipate words and structures during language processing arises from at least three sources of information: (1) the presence of multi-word sequences in the input, which create dependencies between earlier and later elements; (2) grammatical and conceptual rules, conventions, or taxonomies known to listeners and speakers; and (3) propositional overlap resulting from mechanisms that coordinate the exchange of information in discourse and dialogue. These information sources may facilitate the task of identifying content that is old or redundant, which in turn enables comprehenders to focus their efforts on new information. On this view, the comprehender’s goal isn’t to predict; instead, the goal is to acquire information, and prediction is simply a mechanism that facilitates processing, including identification of what is given and new. Theories of comprehension must emphasize that a fundamental purpose of language is to exchange information, and information, by definition, is non-predictable and non-redundant.
Falk Huettig, Max Planck Institute for Psycholinguistics
Shaping Attention Selection
Chair: Jan Theeuwes, Vrije Universiteit Amsterdam, The Netherlands
Even though attentional control has traditionally been considered the result of an interaction between voluntary (top-down) and automatic (bottom-up) control, a new theoretical framework has recently emerged in which this division is no longer that clear-cut. Indeed, it has been recognized that the history of attentional deployments can elicit lingering selection biases, which are unrelated to top-down goals or the physical salience of items. This symposium will bring together researchers who have investigated these lingering selection biases, specifically in the areas of associative (reward) learning, emotional processing, and statistical display properties.
Lingering Biases of Attentional Selection
Jan Theeuwes, Vrije Universiteit Amsterdam, The Netherlands
Lingering biases of attentional selection affect the deployment of attention above and beyond top-down and bottom-up control. We argue that selection history modulates the topographical landscape of spatial ‘priority’ maps, such that attention is biased towards locations having the highest activation on this map. The present study investigated the extent to which statistical regularities affect both attentional and oculomotor control. Participants performed a visual search task in which they searched for a shape singleton while ignoring a color distractor singleton. The distractor color singleton was presented more often in one location than in all other locations. Even though observers were not aware of the statistical display regularities, the regularities biased selection such that locations that were likely to contain a distractor were suppressed relative to all other locations.
The Relationship Between Ownership and Value in Attentional Prioritization
Rebecca Todd, University of British Columbia, Canada
It is well established that emotionally and motivationally salient stimuli implicitly guide attention. Moreover, there is ample evidence that we endow the things we own with higher value. In a series of studies we examined whether attentional prioritization was conferred on everyday items through simple assignment of ownership. Participants learned that objects belonged either to themselves or to the experimenter and subsequently performed a temporal order judgment task as a measure of attentional prioritization. Self-owned objects were consistently prioritized. Although individual differences in self-biased attention were not associated with loss aversion, participants revealed overall more positive associations with self-owned objects. We next manipulated reward levels to examine whether higher reward for other-owned objects could reverse the cognitive advantages of ownership. Failure to reverse the effect suggests two competing possibilities: altering the ownership effect may require a larger reward, or self-relevance and money may differentially affect the reward system.
Effects of Risk and Reward on Attentional and Oculomotor Selection
Mike E. Le Pelley, UNSW Sydney, Australia
Classical studies of decision-making have examined how people’s explicit evaluations (“Which is better: option A or option B?”) are influenced by rewards and risks. Recent work suggests that rewards and risks also influence more ‘implicit’ decisions made by our perceptual-cognitive system, regarding where to move our eyes next. Specifically, stimuli that have previously signalled the availability of reward become more likely to capture our eye movements, and the likelihood of capture depends critically on the magnitude and (un)certainty of reward signalled by the stimulus. Here we describe a set of studies investigating whether risk/reward exerts a similar effect in the context of explicit evaluations versus implicit eye-movements—which might suggest a common reward-representation underlying both—or whether it is possible to dissociate decisions at these different levels of the cognitive hierarchy. These findings provide insight into how goal-directed and automatic behaviour can be shaped by our previous experiences.
Reward and Prior Selection Can Bias Visual Selective Attention
Anna Schubö, Philipps-Universität Marburg, Germany
Selective attention focusses our limited processing resources on relevant information, allowing us to act coherently in the visual world. Recent research has shown that many instances of attentional selection result neither from top-down nor from bottom-up control, but are a consequence of the observer’s “history of selection”. Using a visual selection task, we investigated the role of reward and other history-driven effects by analyzing the lateralized ERP components N2pc, ND, and PD. Results showed that target and distractor processing were modified by both reward and prior selection, and that attention was captured by stimuli from a category previously learned to be valuable, even when this was detrimental to task performance. The results underline the role of selection history in addition to top-down and bottom-up control.
How Attention and Reward Determine the Learning of Stimulus-Response Associations
Matthew Self, Netherlands Institute for Neurosciences, Amsterdam, The Netherlands
We can learn new tasks by listening to a teacher, but we can also learn by trial-and-error. Here, we investigate factors that determine how participants learn new stimulus-response mappings by trial-and-error. Does learning in human observers comply with reinforcement learning theories, which describe how subjects learn from rewards and punishments? If yes, what is the influence of selective attention in the learning process? We developed a novel redundant-relevant learning paradigm to examine the conjoint influence of attention and reward feedback. Subjects only learned stimulus-response mappings for attended shapes, even when unattended shapes were equally informative. Reward magnitude also influenced learning, an effect that was stronger for attended than for non-attended shapes and carried over to a subsequent visual search task. Our results provide insights into how attention and reward jointly determine how we learn. They support powerful learning rules that capitalize on the influence of both factors on neuronal plasticity.
Plasticity of the Human Attention System: The Impact of Reward and Statistical Learning on Priority Maps of Space
Leonardo Chelazzi, University of Verona, Italy
We recently demonstrated that reward can alter the “landscape” of spatial priority maps, increasing priority for locations associated with greater reward relative to less rewarded locations. Importantly, the effects persisted for several days after the end of learning and generalized to new tasks and stimuli. We have now begun to assess whether similar effects can be induced via statistical learning. In a series of experiments using variants of visual search, unbeknownst to the participants, we manipulated the probability of occurrence of the sought target and/or of a salient distractor across locations. Results indicate that, analogous to the influence of reward, uneven probabilities of the critical items alter deployment of attention in a persisting fashion. As for reward, we argue that these effects reflect durable changes in priority maps of space, implementing “habitual attention”. In summary, reward and statistical learning appear to be strong (and implicit) determinants of attentional deployment.
Proactive Control: Mechanisms and Deficits
Chairs: Eyal Kalanthroff, The Hebrew University of Jerusalem, Israel, and Marius Usher, Tel-Aviv University, Israel
Cognitive control is understood as a mechanism of top-down bias to the bottom-up associations linking stimuli to automatic responses. An important challenge facing this approach is to account for the marked variability in cognitive control with task contingencies and between individuals. Recently, the Dual Mechanisms of Control (DMC) framework was proposed by Braver to directly address this challenge. According to DMC, much of this variability can be explained in terms of the balance between two types of processes: a proactive process (deployed in advance of the stimulus) and a reactive process (deployed after the stimulus). We start with a presentation by Braver, who will outline the flexibility of cognitive control (DMC framework) and present recent evidence. Following, Usher and Davelaar will present a Stroop model in which variability in proactive control modulates behavioral aspects of task-conflict. Verguts will present a novel model of proactive and reactive control based on neural oscillations. Kalanthroff will examine effects of anxiety and emotional distractors on proactive control, and Chevalier will present results on the development of proactive control and children's ability to deploy it flexibly.
Flexible Neural Mechanisms of Cognitive Control: The Dual Mechanisms Framework
Todd Braver, Washington University in St. Louis
The neural mechanisms of cognitive control, which involve a network of brain regions centered on the lateral prefrontal cortex, appear to be highly flexible. We have argued that this flexibility arises because cognitive control can be deployed in both a proactive and reactive mode. The proactive mode of control is future-oriented, preparatory, and sustained in nature, while the reactive mode is transient, stimulus-driven, and frequently engaged by the presence of interference. I will present some recent work highlighting this theoretical approach, its utility for understanding individual differences and cognitive impairment in different populations, as well as some new directions it has taken us in understanding how motivation interacts with cognitive control. In addition, I will describe on-going large-scale behavioral and brain imaging projects utilizing a newly developed task battery designed to more thoroughly test this framework.
Task-Conflict and Proactive-Control: A Computational Theory of Stroop
Marius Usher, Tel-Aviv University, Israel
The Stroop task is a central experimental paradigm used to probe cognitive control by measuring the ability of participants to selectively attend to task-relevant information and inhibit automatic responses. Here, we focus on a particular source of Stroop variability, the reverse-facilitation (RF; faster responses to nonword neutral stimuli than to congruent stimuli), which has been suggested as a signature of task-conflict. We propose that task-conflict variability results from the degree of proactive control that subjects recruit in advance of the Stroop stimulus, and we present a computational model of Stroop which includes the resolution of task-conflict and its modulation by proactive control. Results show that the model (a) accounts for the variability in Stroop-RF reported in the literature, and (b) solves a challenge to previous Stroop models—their ability to account for reaction time distributional properties. Finally, we show that the model accounts for control deficits observed in patients with schizophrenia.
Cortical Oscillations for Proactive and Reactive Cognitive Control
Tom Verguts, Ghent University, Belgium
Recent single-cell/EEG/MEG work has suggested an important role for neurophysiological oscillations in cognitive control. For example, right before or during difficult tasks, electrophysiological activity in medial frontal cortex exhibits theta-frequency oscillations (≈6–8 Hz), which are coupled to faster (gamma, ≈40–50 Hz) frequencies, and the coupling strength correlates with behavioral performance in such tasks. However, the mechanistic role of these frequencies and their coupling has remained unclear. This talk will present recent computational modeling work attempting to address this issue. It is demonstrated that theta-frequency waves originating from medial frontal cortex can synchronize gamma-frequency waves in posterior processing areas (e.g., areas for processing color or word information). This principle can be usefully applied to several tasks (e.g., probabilistic reversal learning, the Stroop task), both in a proactive and a reactive manner, accounting for the electrophysiological signatures. Finally, preliminary EEG data that confirm this model will be examined.
The Effect of Emotion and Anxiety on Proactive Task Control
Eyal Kalanthroff, The Hebrew University of Jerusalem, Israel
The talk will report investigations of the effect of negative emotional distractors and of anxiety on proactive task control. In the first experiment we found that when Stroop stimuli are preceded by brief aversive (but not neutral) picture primes (distractors), patients who suffer from anxiety show a deficit in proactive control associated with task-conflict. In the second experiment we used a “threatened morality manipulation” to induce guilt and anxiety in normal (student) participants before they carried out a Stroop and an object interference task. Again, we found evidence for a deficit of proactive control, especially in highly anxious participants. In the third experiment, using the Stroop and object interference tasks, we found a similar deficit in proactive control in patients with obsessive-compulsive disorder (OCD), which strongly correlates with symptom severity. We suggest that anxiety—whether triggered by stimuli or task instructions, or inherent to the individual (OCD)—leads to a shift in the type of cognitive control from proactive to reactive.
Proactive Control in Children: Mechanisms of Change
Nicolas Chevalier, University of Edinburgh, United Kingdom
As children grow older, they engage cognitive control less reactively and instead more proactively anticipate and prepare for task demands. In this talk, I will present findings showing that this developmental shift from reactive to proactive control is observed across multiple tasks, tapping response inhibition, working memory, and set-shifting. Critically, although younger children are already capable of proactive control engagement, behavioral, eye-tracking, and event-related potentials evidence shows that they often spontaneously engage reactive control, even when their performance would already benefit from proactive control. Further, proactive control engagement relates to how children value cognitive effort. Together, these findings suggest that proactive control engagement is not necessarily constrained by fundamental cognitive limitations, but also by metacognitive skills. More broadly, cognitive control development does not merely reflect a quantitative increase in cognitive resources, but also and foremost better and more flexible engagement of extant resources to match changing task demands.
Avishai Henik, Ben-Gurion University of The Negev, Israel
What Limits the Capacity of Working Memory? An ‘Adversarial’ Working Memory Symposium
Chair: Robert Logie, University of Edinburgh, United Kingdom
The widespread use and longevity of the concept of working memory has led to a plethora of approaches, definitions, and theories. Each group of like-minded researchers tends to explore research questions and empirical findings that are most relevant for their own assumptions, rather than to seek disconfirmatory evidence. The focus may be on cognitive architecture, neuroanatomical correlates of performance on tasks assumed to require working memory, the development of computational models that simulate behavioural data, the role of limited capacity attention and long-term memory, the ability to switch between task requirements, or why working memory capacity varies across individuals. The approaches are, if anything, even more diverse than when different models of working memory were addressed in a range of chapters in 'Models of Working Memory', edited by Miyake and Shah (1999). Nearly 20 years on from that important book, this symposium will bring together leading international working memory researchers who represent some of the current diversity in the field, each asked to address a common question that is the focus of much of the contemporary debate, 'What Limits the Capacity of Working Memory?'.
Emerging Working Memory Capacity from Multiple Interacting Systems: E Pluribus Unum
Robert Logie, University of Edinburgh, United Kingdom
The concept of an executive controller or attentional control in working memory begs the question of what is controlling the controller and so on, leading to an infinite hierarchy of executives or “homunculi”. We will briefly review a range of studies of healthy younger and older adults and brain damaged individuals, along with some new data on concurrent dual-task performance suggesting that executive control might arise from the interaction among multiple different functions in cognition that use different, but overlapping, brain networks, and that the concept of an executive controller for attention might now be offered a dignified retirement.
How Can Attention Be Distinguished From Multiple Modules?
Nelson Cowan, University of Missouri-Columbia
Researchers favoring attention-based analyses of working memory suggest that memoranda stored with different kinds of codes can interfere with one another in working memory. Those favoring a more modular view sometimes suggest that verbal memoranda can be recoded for spatial storage, and vice versa, to retain an overflow when the primary module reaches capacity. Can these views be distinguished? We suggest that they can, supporting the attention-based view when there is interference between memory and non-memory tasks with minimal potential for coding overlap, and when brain mechanisms show effects of mnemonic effort and suggest a close relationship between perception and memory prioritization. We offer examples from the recent literature.
Number and Precision: Distinct Aspects of Online Memory Function
Ed Awh, The University of Chicago
Working memory (WM) capacity can be characterized both in terms of the number of items stored as well as the precision of those representations. Work examining individual differences shows that number and precision are uncorrelated, suggesting that they reflect distinct aspects of memory ability. Moreover, number exhibits robust correlations with fluid intelligence while precision shows no such link. Despite the apparent utility of this number/precision dichotomy, however, prominent WM models argue against the validity of the number construct by positing that WM can store an unlimited number of items. I’ll review recent work that demonstrates sharp limits in the number of items that can be stored, confirming the number construct. Finally, I’ll discuss how this dichotomy offers a natural explanation of neural signals that track the number of individuated representations in working memory while remaining insensitive to the “information load” associated with those items.
Temporal Constraints Limit Working Memory Capacity
Valérie Camos, Université de Fribourg, Switzerland
In the time-based resource sharing (TBRS) model, the amount of information that can be maintained in working memory results from the balance between loss and reconstruction mechanisms. When attention is distracted and focused on a concurrent activity, memory traces degrade over time. However, through attentional focusing, traces can be reactivated to counteract their forgetting. As a consequence, the speed at which a concurrent task is performed and the time during which attention is available for maintenance determine working memory capacity. Studies in young adults and children will illustrate the impact of these different temporal constraints.
A Computational-Modeling Approach to Working Memory Capacity
Klaus Oberauer, University of Zurich, Switzerland
Multiple mechanisms and processes are assumed to contribute to working memory (WM). Understanding their interplay is a challenge for the unaided mind, often resulting in vagueness and over-simplification of informal theories. Computational modeling helps to overcome these difficulties, making assumptions more explicit and testable. I will present examples of how computational models of WM help to explain key findings reflecting its capacity limit: The effects of set size, and of concurrent processing of distractors from the same or from a different content domain than the memoranda. In these models, interference is the main cause of the capacity limit of WM. The notions of time-based decay, and of limited resources, which play a prominent role in informal theories of WM, so far appear to be unnecessary, or unhelpful, in the context of computational models.
Different Perspectives on Working Memory
Randall W. Engle, Georgia Institute of Technology
I don’t see the different approaches presented in this symposium as conflicting any more than I see the six blind villagers observing an elephant as conflicting. I have learned a great deal about working memory from the work of all of the participants in this symposium. My own perspective starts with my interest in the role of temporary memory in real-world tasks, which led me to the study of individual differences in working memory, how to measure them, and which elements of the working memory system are responsible for those differences. I have used both experimental and differential approaches and have concluded that differences in the ability to control attention in the service of temporary memories are a key factor not only in maintaining those memories but also in disengaging from them.
Evidence and Scientific Knowledge in a "Post-Truth" World
Chair: Stephan Lewandowsky, University of Bristol, United Kingdom
Never before has humanity had access to as much knowledge, and enjoyed the benefits of so many scientific insights. Paradoxically, this abundance of knowledge and scientific flourishing coexist with what has been called an era of "post-truth" public discourse. Wholesale dismissal of expertise and scientific knowledge has become a feature of public debate, and many people distrust institutions and scientific evidence. Are we heading towards a dystopian future in which it is not medical knowledge but an opinion market on Twitter that determines whether a new strain of avian flu is contagious to humans? Knowledge and expertise appear to be in crisis. What are the features and causes of this crisis? How can people's misconceptions be corrected? What can be done to prevent the rapid spread of "fake news"? How can we encourage people to make medical decisions that are scientifically sound? This symposium seeks answers to those questions.
How to Respond to Persuasive Messages of Vaccine Deniers: Experimental Verification of a Two-dimensional Debunking Strategy
Philipp Schmid, Universität Erfurt, Germany
Facing vaccine deniers in public is a challenge for health authorities, and guidance on how to respond to their persuasive messages is rare. Recent best-practice guidance by the WHO Regional Office for Europe suggests using two-dimensional debunking when facing a denier in public. In terms of the Persuasion Knowledge Model, two-dimensional debunking increases the audience’s knowledge about the content and at the same time reduces the perceived adequacy of the persuasive technique that the vaccine denier is using. In comparison to conventional one-dimensional approaches, this combination is expected to minimize the influence of vaccine deniers’ persuasive messages on individuals’ intention to get vaccinated. The effectiveness of this new approach was analyzed in two online experiments. Results reveal that two-dimensional debunking is a promising approach to minimizing the impact of vaccine deniers. A transfer of the strategy to other science domains is intended.
Incentivized Accuracy in the Encoding and Retrieval of Partisan Information
Vittorio Merola, University of Exeter, United Kingdom
We vary the timing at which we provide respondents with monetary incentives to heighten accuracy motivation. By providing the incentives before respondents receive information about a policy, we evaluate whether greater accuracy motivation at the time of processing and encoding new information can help reduce motivated bias. Conversely, by providing these incentives after the information about the policy, we are able to see whether greater accuracy motivation during the memory-retrieval and metacognitive phase can reduce motivated bias. Most importantly, we are able to compare these two effects to each other. The design thus allows us to answer questions about whether motivated bias in responding to political information is caused by biased processing, biased recall, or simply by automatic responses that fail to incorporate any new information.
Empowering School Children to Make Informed Judgements on the Trustworthiness of Online Information Resources
Geoff Walton, Manchester Metropolitan University, United Kingdom
Research with young people demonstrates that their ability to function in digital environments on a psychomotor level may well be impressive, but their cognitive responses need to be supported by mechanisms that increase their understanding of the information environment (Pickard et al., 2010). Lewandowsky (2012) reports that people, in general, adopt a cognitive default position of trusting information, and it is unsurprising that young people demonstrate a similar trait. The work described here aims to build a new toolkit for digital literacy and informed digital citizenship, drawing on previous research on trust (Pickard et al., 2010) and information discernment (Walton, 2017; Walton & Hepworth, 2011, 2013). The aim is to empower school children to make informed judgements on the trustworthiness of online information resources. This paper presents the findings of a project involving one case study of 16-17 year-old students in a UK school.
The Fake News Game: Actively Inoculating Against the Spread of Misinformation
Sander van der Linden, Cambridge University, United Kingdom
The theory of inoculation offers a promising framework to help immunize the public against the rise of “fake news”. We theorized that “active” inoculation in particular would offer cognitive resistance by reducing the perceived reliability and persuasiveness of previously unseen fake news articles. In order to test this, we developed a novel “fake news game” in which participants were specifically tasked with creating a news article about a strongly politicized issue (the European refugee crisis) using misleading tactics, from the perspective of different fake news producers. To pilot test the efficacy of the game, we conducted a randomized field study (N=95) in a public high school setting. Our results provide some preliminary evidence that playing the fake news game reduced the perceived reliability and persuasiveness of fake news articles.
How Tackling Misconceptions about Psychology can Inform Us about Tackling Misconceptions in General
Annette Taylor, University of San Diego
Misinformation, misconceptions, and myths about psychology are pervasive. Even when students are taught the correct information, studies repeatedly show that unless instructors pay attention to, and are aware of, specific misconceptions, students do not change these prior false beliefs. The danger is that students use the misinformation as a foundation on which to build future knowledge, and may use misinformed ways of knowing to evaluate new information. An additional challenge is that psychological science has few large, overarching conceptual frameworks, so most of its many misconceptions must be addressed individually. So what can we do? What can we learn about re-educating the uninformed and misinformed, in general, by studying psychological misconceptions? We have applied methods adapted from the Denial101x framework to the psychological sciences and present the results of that adaptation.
Learning Words from Experience: The Emergence of Lexical Quality
Chairs: Jennifer M Rodd, University College London, United Kingdom, and Kate Nation, University of Oxford, United Kingdom
Lexical quality is the extent to which a word’s mental representation contains accurate and comprehensive information about its spelling, sound and meaning. High quality lexical representations not only afford efficient word recognition (i.e., knowing which word was present), but also ensure that appropriate stored lexical knowledge becomes available to support higher-level comprehension processes. Lexical knowledge is not an ‘all or nothing’ factor in which words are either known or unknown: even for highly familiar words there is significant variation (both within and across individuals) in lexical quality, and this variation affects the ease with which word meanings are processed. This symposium brings together researchers who provide novel theoretical frameworks to explain how variations in lexical quality arise from differences in linguistic experience. We present a range of mechanistic accounts that explain how important new information continues to be integrated into the lexicon throughout the lifespan to support skilled language comprehension.
Where Does Variation in Lexical Quality Come from and How Does It Influence Children’s Lexical Processing?
Kate Nation, University of Oxford, United Kingdom
Words vary in lexical quality: we all know some words well, others less so. Where does this variability come from and how does it relate to people’s lexical processing? We argue that variations in lexical quality are a product of language experience, especially reading experience. To capture this developmental experience, we analyzed a large corpus of reading material written for children, allowing us to quantify factors that distinguish item-level properties (e.g., frequency, semantic diversity, orthographic age of acquisition). We then related these lexical statistics to children’s lexical processing in a range of tasks tapping word reading, semantic judgment and reading comprehension. We found systematic effects of various lexical statistics influencing children’s behavior, interacting with the linguistic requirements of each task, and children’s level of reading skill. We conclude by framing lexical quality as the dynamic and on-going product of encounters with language, starting in childhood but continuing throughout life.
How Do Methods of Reading Instruction Influence Lexical Quality? Behavioral and Neural Findings from an Artificial Orthography Experiment
Joanne S.H. Taylor, Aston University, United Kingdom
Debate continues over whether reading instruction should focus on letters and sounds (phonics) or on written words and meanings (whole-language). Framed within the lexical quality hypothesis, the question becomes: which approach facilitates efficient reading by providing high-quality associations between print, sound, and meaning? To examine this, adults learned sounds and meanings for novel words written in unfamiliar symbols. We varied the learning focus: one orthography received three times more print-to-sound than print-to-meaning training, whereas the other orthography received the reverse. Reading aloud of trained and untrained words, and spelling, were more accurate and faster following print-to-sound training. Print-to-meaning training benefited reading-comprehension speed, but not accuracy. fMRI scans indicated increased neural effort during reading aloud following print-to-meaning versus print-to-sound training. During reading comprehension, neural effort was equivalent for the two approaches. We suggest that phonics-like approaches are a more efficient method of reading instruction and do not compromise lexical quality.
Quality and Quantity in Language Development: Examining Influences in a Computational Model of Literacy Development
Padraic Monaghan, Lancaster University, United Kingdom
Computational models of reading require a specificity about the representations involved in the reading system that may be only implicit in descriptive models of behavior. In this talk, we present a developmental model of language and literacy development, focusing on the model’s pre-literacy experience and the effect of this early exposure on learning to read. We show that both the quantity and the quality of pre-literacy language exposure are critically important for literacy development, and that impoverishment in either is potentially problematic for learning to read. We also show that the quality of these pre-literacy language skills influences the effectiveness of interventions in early reading training. The modelling work enables us to examine the unfolding experience of the learner and its influence on gradual literacy acquisition, with insights on lexical quality in literacy that go beyond static models of the mature reader.
Learning New Meanings for Words that Already Have Meanings
Charles A. Perfetti, University of Pittsburgh
Maintaining lexical quality across the lifespan requires that people continuously add new meanings to words they already know, as when one learns that “skate” is also the name of a fish. 1) How is such learning different from the more studied case of learning meanings assigned to novel word forms? 2) Is there an impact on learning from competition with the already-established meaning? 3) What is the fate of both the new meaning and the original meaning? We report a series of behavioral and ERP experiments (including frequency-band analyses) that test hypotheses related to these questions. We offer a general account of new-meaning learning that stresses strong interactions between new meanings and prior knowledge. These interactions aid learning in its early phases but also perturb prior knowledge and lead to temporary knowledge suppression. Despite these cognitive dynamics, in long-term retention the original meaning usually co-exists harmoniously with the new one.
The Importance of Learning Mechanisms in Word-Meaning Access
Jennifer M. Rodd, University College London, United Kingdom
Being able to understand exactly what each word in a sentence means is an essential component of language comprehension. This is a relatively challenging task because the vast majority of common words have multiple possible interpretations (e.g., the TRUNK of the elephant/car/tree). I present data from both large-scale web-based experiments and lab-based experiments demonstrating that learning mechanisms continue to shape lexical representations during adulthood in response to the linguistic environment. Listeners make use of both very recent and longer-term experience with specific ambiguous words to facilitate future word-meaning access. We explore how factors such as the modality and the spacing of these encounters influence the magnitude of these effects, in order to characterize the underlying learning mechanisms.
Mind the Speaker: The Use of Speaker Characteristics in Lexical-Semantic Processing
Zhenguang G. Cai, University of East Anglia, United Kingdom
Successful language comprehension requires that listeners rapidly and accurately retrieve the meanings that speakers intend to convey in their speech. Retrieving the speaker-intended meaning can thus be challenging when the listener and the speaker have different semantic or pragmatic interpretations for a word, as in cases where they come from different demographic backgrounds (e.g., age, gender, dialect). For instance, the word “bonnet” would typically refer to a type of hat for an American English speaker but to a car part for a British English speaker. In this talk, I will present findings that listeners use available characteristics (e.g., accent) of the speaker to make inferences about a word’s intended meaning, and that such speaker-based inferences are made in parallel with lexical-semantic processing. Thus, high-quality lexical-semantic knowledge includes not only information about what words mean, but also information about how different speakers are likely to use these meanings.
The Call for Symposia closed on 18 September 2017.