
November 9-12, 2017
Vancouver, British Columbia, Canada - Vancouver Convention Centre West

 #psynom17

 







Symposia

SYMPOSIUM I
Dual Process Theory 2.0

SYMPOSIUM II
Improving Use of Statistical Inference in Science

SYMPOSIUM III
Beyond the Lab: Using Big Data to Discover
Principles of Cognition

SYMPOSIUM IV
When Man Bites Dog: What do Developmental Reversals Tell Us about Cognitive Development, Aging, and the Brain

SYMPOSIUM V
50 Years of Implicit Learning Research:
A Symposium in Honor of Arthur S. Reber



SYMPOSIUM I
Dual Process Theory 2.0

Friday, November 10  |  10:00 a.m. - 12:00 p.m.
The two-headed, dual-process view of human thinking has been very influential in the cognitive sciences. The core idea that thinking can be conceived as an interplay between a fast, intuitive process and a slower, deliberate one has inspired a wide range of psychologists, philosophers, and economists. However, despite the popularity of the dual-process framework, it faces multiple challenges. One key issue is that the precise interaction between intuitive and deliberate thought processes (or System 1 and System 2, as they are often called) is not well understood. There is little dispute that intuitions can sometimes be helpful and that deliberation is sometimes required to arrive at a sound conclusion. But how does our reasoning engine decide which route to take? Are both processes activated simultaneously from the start, or do we initially rely on the intuitive system and switch to deliberate processing when it is needed? And how do we know whether deliberation is needed, or whether merely relying on our intuitions is warranted? The speakers in this symposium will give an overview of empirical work and recent advances in dual-process theorizing that have begun to focus on these fundamental outstanding issues.


A Three-Stage Dual-Process Model of Analytic Engagement
Gordon Pennycook, Yale University, USA; Jonathan Fugelsang, University of Waterloo, Canada; and Derek J. Koehler, University of Waterloo, Canada

Dual-process theories formalize a salient feature of human cognition: Although we have the capacity to rapidly generate answers to questions, we sometimes engage in deliberate reasoning processes before responding. We have, in other words, two minds that might influence what we decide to do. Although this distinction is widely accepted (with some notable exceptions), it poses serious (and largely unappreciated) challenges for our understanding of cognitive architecture. What features of our cognitive architecture trigger the engagement of analytic thought? To what extent is analytic thinking discretionary? Do we truly have the capacity to decide when to think? If so, what underlying processes trigger the decision to think? The goal of this talk will be to highlight the areas of ambiguity in dual-process theories with the objective of outlining some potential theoretical and empirical groundwork for future dual-process models.


Empirical Evidence for a Parallel Processing Model of Belief Bias
Dries Trippas, Max Planck Institute for Human Development, Germany, and Simon Handley, Macquarie University, Australia

Belief bias is the tendency for people to respond on the basis of their prior beliefs in a task where they are instructed to respond on the basis of logical structure. The default-interventionist account of belief bias explains this finding as follows: a quick intuition-based (Type 1) process provides a response based upon the conclusion’s believability before a slower deliberation-based (Type 2) process has the chance to kick in and provide a response based on logical analysis. In this talk we review a series of recent empirical findings which suggest that this account of belief bias needs revision. First, some people have a degree of implicit sensitivity to logical validity, suggesting that deliberation is not always necessary for logical responding. Second, some people draw upon effortful processing to integrate their prior beliefs in order to achieve higher reasoning accuracy, suggesting that intuition is not always sufficient for belief-based responding. Third, consistent with these findings, logic-based processing can interfere more with belief-based processing than the converse. We conclude by presenting results consistent with predictions drawn from an alternative parallel processing account of belief bias.


The Smart System 1
Wim De Neys, CNRS & Paris Descartes University, France

Daily life experiences and scientific research on human thinking indicate that people’s intuitions can often lead them astray. Traditional dual process models have presented an influential account of these biases. In my talk I will review evidence for two controversial claims: 1) biased intuitive reasoners show sensitivity to their bias, and 2) correct deliberate responses are typically generated intuitively. I will discuss how these findings force us to fundamentally re-conceptualize our view of intuitive and deliberate thinking.


Logical Intuitions and Other Conundra for Dual Process Theories
Valerie Thompson, University of Saskatchewan, Canada

Dual-Process Theories (DPT) posit that reasoning reflects the joint contribution of two qualitatively different sets of processes: Type 1 processes are autonomous, and therefore usually faster, whereas Type 2 processes require working memory, and are therefore usually slower. The DPT explanation for many reasoning phenomena rests on this asymmetry: Type 1 processes produce a default answer that may not be overturned by Type 2 processes. Two corollaries to the speed-asymmetry assumption are that processing is sequential (Type 1 precedes Type 2) and that the IQ-reasoning relationship is due to Type 2 processes (i.e., that high-IQ reasoners have the capacity to inhibit the default response and reformulate the problem). In this talk, I will outline some of the evidence that challenges these core assumptions, discuss their implications for DPT moving forward, and highlight some of the outstanding questions that remain for DPT.

 

SYMPOSIUM II
Improving Use of Statistical Inference in Science

Friday, November 10  |  1:30 p.m. - 3:30 p.m.
This symposium features seven speakers who focus on the proper use of statistical inference in science. Talks will offer suggestions for improving statistical practices in different fields, new cutting-edge techniques for statistical inference, ways to diagnose improper methodology and inference, and recommendations for improving methodological practices.


A Simulation Study of the Strength of Evidence in the Recommendation of
Medications Based on Two Trials With Statistically Significant Results

Don van Ravenzwaaij, University of Groningen, The Netherlands
    with John Ioannidis, Stanford University, USA

A typical rule that has been used for the endorsement of new medications by the Food and Drug Administration is to have two trials, each with p < .05, demonstrating effectiveness. In this paper, we calculate with simulations what it means to have exactly two trials, each with p < .05, in terms of the actual strength of evidence quantified by Bayes factors. Our results show that different cases where two trials have a p-value below .05 have wildly differing Bayes factors. Bayes factors of at least 20 in favor of the alternative hypothesis are not necessarily achieved and they fail to be reached in a large proportion of cases. In a non-trivial number of cases, evidence actually points to the null hypothesis. We recommend use of Bayes factors as a routine tool to assess endorsement of new medications, because Bayes factors consistently quantify strength of evidence.
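
As a rough illustration of the phenomenon described above (not the authors' own simulation code), the sketch below repeatedly runs two-group trials until two reach p < .05 and then converts each significant t statistic into an approximate Bayes factor via the BIC approximation (Wagenmakers, 2007). The sample sizes, effect sizes, and the use of this approximation instead of a proper default Bayes factor are all assumptions of the sketch.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2017)

    def approx_bf10(t, n1, n2):
        # BIC-based approximation to the Bayes factor for a two-sample t-test
        # (Wagenmakers, 2007); the study itself uses proper default Bayes factors.
        n, df = n1 + n2, n1 + n2 - 2
        return (1 + t**2 / df) ** (n / 2) / np.sqrt(n)

    def two_significant_trials(true_effect, n_per_group=50, max_trials=10_000):
        # Run trials until two of them reach p < .05 (the "two positive trials" rule),
        # then report the approximate Bayes factor of each significant trial.
        bfs = []
        for _ in range(max_trials):
            treatment = rng.normal(true_effect, 1.0, n_per_group)
            control = rng.normal(0.0, 1.0, n_per_group)
            t, p = stats.ttest_ind(treatment, control)
            if p < .05:
                bfs.append(approx_bf10(t, n_per_group, n_per_group))
                if len(bfs) == 2:
                    break
        return bfs

    print(two_significant_trials(true_effect=0.3))  # modest true effect
    print(two_significant_trials(true_effect=0.0))  # true null: significant only by chance

When the null is true, the two "significant" trials often yield Bayes factors near or even below 1, which illustrates the abstract's point that two p < .05 results need not amount to strong evidence for effectiveness.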


Using Theory to Improve Statistical Inference in Science
Stephan Lewandowsky, University of Bristol, United Kingdom
    with Klaus Oberauer, University of Zurich, Switzerland

Recent debate about the presumed “replication crisis” has largely focused on statistics, with practices such as “p-hacking” (deciding when to stop testing based on preliminary analyses) and “HARKing” (hypothesizing after results are known) being identified as problematic. We suggest that statistical practices should not be considered in isolation. Instead, we also need to strengthen theorizing: Theories should make unambiguous predictions such that the absence of the predicted phenomenon counts as evidence against the theory. The distinction between exploratory and confirmatory research then turns into the distinction between research that tests a prediction of a theory and research that does not, regardless of whether the researcher thought of the prediction before or after looking at the data. Rigorous theorizing can thus address the risks of HARKing, and reduce the incentive for p-hacking because positive and negative findings become equally informative. For theorizing to meet these criteria, it must be instantiated in computational models.


Response Inhibition in the Real World:
A Bayesian Hierarchical Model for Messy Stop-Signal Data

Dora Matzke, University of Amsterdam, The Netherlands
    with Samuel Curley, The University of Newcastle, Australia, and Andrew Heathcote, University of Tasmania, Australia

Response inhibition is frequently investigated using the stop-signal paradigm. In this paradigm, participants perform a two-choice response time task that is occasionally interrupted by a stop signal that instructs participants to withhold their response. Stop-signal performance is typically modeled as a horse race between a go and a stop process. If the go process wins, the primary response is executed; if the stop process wins, the primary response is inhibited. The standard horse-race model allows for the estimation of the latency of the unobservable stop response. It does so, however, without accounting for accuracy on the primary task or for the possibility that participants occasionally fail to trigger the go or the stop process. We propose a Bayesian mixture model that addresses these limitations. We discuss the operating characteristics of our model, apply it to stop-signal data featuring a manipulation of task difficulty, and outline the strengths and weaknesses of the approach.
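
To make the race architecture concrete, here is a minimal forward simulation of a go/stop horse race with a trigger-failure mixture component. The finishing-time distributions, the fixed stop-signal delay, and the 5% failure rate are illustrative assumptions; this is a sketch of the generative race, not the authors' Bayesian hierarchical model.

    import numpy as np

    rng = np.random.default_rng(11)
    n_trials, p_stop_trial = 10_000, 0.25   # hypothetical design
    ssd = 0.25                              # assumed fixed stop-signal delay (s)
    p_trigger_failure = 0.05                # assumed rate of failing to launch the stop process

    # Illustrative finishing times for the two runners (in seconds).
    go_finish = rng.lognormal(mean=-0.6, sigma=0.3, size=n_trials)           # go runner
    stop_finish = ssd + rng.lognormal(mean=-1.4, sigma=0.3, size=n_trials)   # stop runner (SSD + SSRT)

    stop_trial = rng.random(n_trials) < p_stop_trial
    trigger_failure = rng.random(n_trials) < p_trigger_failure

    # A response is emitted on go trials, on stop trials where the stop process never
    # starts, and on stop trials where the go runner finishes first.
    responded = ~stop_trial | trigger_failure | (go_finish < stop_finish)

    print(f"P(respond | stop signal) ~ {responded[stop_trial].mean():.2f}")
    print(f"Mean signal-respond RT   ~ {go_finish[stop_trial & responded].mean():.2f} s")

A hierarchical Bayesian treatment of the kind described in the talk would place priors on these latent finishing-time and failure parameters and estimate them from observed go and signal-respond response times; the sketch only shows the data-generating race being formalized.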


Inference on Constellations of Orders
Jeff Rouder, University of Missouri, USA
    with Julia Haaf, University of Missouri, USA

Most theories in cognitive psychology are verbal and provide ordinal-level constraint. For example, a theory might predict that performance is better in one condition than another. One way of gaining additional specificity is to posit multiple ordinal constraints simultaneously. For example, a theory might predict an effect in one condition, a larger effect in another, and none in a third. Unfortunately, there is no good inference system for assessing multiple order and equality constraints simultaneously. We call such simultaneous constraints “constellations of orders” and show how common theoretical positions lead naturally to constellation-of-order predictions. We develop a Bayesian model comparison approach to assess evidence from data for competing constellations. The result is a statistical system custom tuned for the way psychologists propose theory that is more intuitive and far more accurate than current (linear model) approaches. Come see if it is not the best thing since sliced bread.
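
One generic way to score such a constellation against data (not necessarily the authors' system) is the encompassing-prior idea: the Bayes factor of a constrained model against an unconstrained model can be estimated as the proportion of posterior draws versus prior draws that satisfy the constraint. The sketch below applies this to three hypothetical condition effects; the priors, the simulated data, and the narrow tolerance band used to approximate the "no effect" equality constraint are all assumptions.

    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical data: a small effect in A, a larger one in B, none in C.
    n = 50
    data = {"A": rng.normal(0.2, 1.0, n),
            "B": rng.normal(0.5, 1.0, n),
            "C": rng.normal(0.0, 1.0, n)}

    sigma, tau, n_draws = 1.0, 1.0, 200_000  # assumed known noise SD and Normal(0, tau) effect prior

    def posterior_draws(x):
        # Conjugate normal-normal update for a condition's mean effect with known sigma.
        precision = n / sigma**2 + 1 / tau**2
        mean = (x.sum() / sigma**2) / precision
        return rng.normal(mean, np.sqrt(1 / precision), n_draws)

    posterior = {k: posterior_draws(v) for k, v in data.items()}
    prior = {k: rng.normal(0.0, tau, n_draws) for k in data}

    def in_constellation(d):
        # Constellation: an effect in A, a larger effect in B, and (approximately) none in C.
        return (d["B"] > d["A"]) & (d["A"] > 0) & (np.abs(d["C"]) < 0.05)

    bf = in_constellation(posterior).mean() / in_constellation(prior).mean()
    print(f"Bayes factor, constellation vs. unconstrained model: {bf:.1f}")

With sharp equality constraints or many conditions this simple counting estimator becomes inefficient, which is where purpose-built model-comparison machinery of the kind described in the talk comes in.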


The Debiasing Gauntlet: Challenges for Publication Bias Mitigation
Alexander Etz, University of California, Irvine, USA
    with Joachim Vandekerckhove, University of Leuven, Belgium

The published literature is a selective sample from the studies researchers perform. Consequently, meta-analyses have a positivity bias, inflating our impression of empirical effect sizes. Recently, it has been argued that this renders meta-analysis essentially useless. However, many methods have now been proposed that purport to mitigate this bias. We suggest certain usability criteria these mitigation methods must meet, and translate them into a series of concrete challenges. The criteria are similar to classical model assessment desiderata, revolving around maximizing generalization performance: A successful debiasing method must yield accurate predictions about new, direct replications, based on information that is publicly available. We present a series of incrementally challenging tests for debiasing methods based on real – as opposed to simulated – datasets, and discuss the performance of some common methods. We suggest upper and lower limits of predictive performance and point out limitations in many methods that preclude evaluation of their performance.


Robust Tests of Theory With Randomly Sampled Experiments
Joachim Vandekerckhove, University of California, Irvine, USA
    with Beth Baribault, University of California, Irvine, USA; Christopher Donkin, University of New South Wales, Australia;
    Daniel R. Little, The University of Melbourne, Australia; Jennifer S. Trueblood, Vanderbilt University, USA; Zita Oravecz, The Pennsylvania State
    University, USA; Don van Ravenzwaaij, University of Groningen, The Netherlands; and Corey White, Syracuse University, USA

We describe and demonstrate a novel strategy useful for replicating empirical effects in psychological science. The new method involves the indiscriminate randomization of independent experimental variables that may be moderators of a to-be-replicated empirical finding, and is used to test the robustness of an empirical claim to some of the vagaries and idiosyncrasies of experimental protocols. The strategy is made feasible by advances in Bayesian inference which allow for the pooling of information across unlike experiments and designs, and is proposed as a gold standard for replication research. We demonstrate the practical feasibility of the strategy with a replication of a study on subliminal priming.


Bayesian Reanalysis of Null Results Reported in Medicine:
Strong Yet Variable Evidence for the Absence of Treatment Effects

Rink Hoekstra, University of Groningen, The Netherlands
    with Rei Tendeiro-Monden, University of Groningen, The Netherlands; Don van Ravenzwaaij, University of Groningen, The Netherlands; and
    Eric-Jan Wagenmakers, University of Amsterdam, The Netherlands

Efficient progress requires that we know when a treatment effect is absent. We considered 207 articles from the New England Journal of Medicine and found that 22% reported a null result for at least one of the primary outcome measures. Unfortunately, standard statistical analyses are unable to quantify the degree to which these null results actually support the null hypothesis. Such quantification is possible, however, by conducting a Bayesian hypothesis test. Here we reanalyzed a subset of 43 null results from 36 articles using a default Bayesian test for contingency tables. This Bayesian reanalysis revealed that, on average, the reported null results provided strong evidence for the absence of an effect. However, the degree of this evidence is variable and cannot be reliably predicted from the p-value. Instead, sample size is a better (albeit imperfect) predictor for the strength of evidence in favor of the null hypothesis.

 

SYMPOSIUM III: LEADING EDGE WORKSHOP
Beyond the Lab: Using Big Data to Discover Principles of Cognition

Organizers:
Gary Lupyan, University of Wisconsin-Madison, USA
Robert Goldstone, Indiana University Bloomington, USA

Friday, November 10  |  3:50 p.m. - 5:50 p.m.
With more than 100 years of collective practice, experimental psychologists have become highly sophisticated in their application of well-controlled laboratory experiments to reveal principles of human cognition and behavior. This approach has yielded rigorous experimental designs with extensive controls, and it should be valued and encouraged. But the very expertise with which psychologists wield their tools for achieving laboratory control may now be limiting the ways in which our field can discover principles of cognition by going beyond the lab.

This workshop will focus on two “beyond the lab” approaches that have seen explosive growth in the last five years (in 2015 alone, for example, about 7,000 scholarly articles made use of the Amazon Mechanical Turk crowdsourcing service). The first focus is on extending traditional laboratory techniques beyond the lab. These include the use of crowdsourcing services to conduct experiments of the type that are impractical or impossible to conduct in the lab, “gamifying” traditional data collection such that participants actively want to participate in our studies, and organizing contests as a way of efficiently exploring the solution space for projects that are beyond the capabilities of a single research team. The second focus is on using “naturally occurring datasets,” wherein creative interrogation of a diverse range of large, real-world data sets can reveal principles of human judgment, perception, categorization, decision making, language use, inference, problem solving, and mental representation. Both of these approaches fit into the broader “big-data” initiatives that are transforming the social sciences. This symposium will explore both approaches.

Workshop Details

 

SYMPOSIUM IV
When Man Bites Dog: What do Developmental Reversals
Tell Us about Cognitive Development, Aging, and the Brain

Saturday, November 11  |  10:00 a.m. - 12:00 p.m.
Historically, childhood has been construed as a period of limitations: in almost every aspect of human functioning, older children and adults outperform younger children. However, childhood is also a time of unique opportunities to learn. In this symposium, we explore the possibility that these aspects of development are related, with some cognitive immaturities being beneficial for learning early in development. Such a less-is-more principle has consequences – situations in which younger children outperform older children and adults. We identify these situations as developmental reversals. Developmental reversals are both surprising and theoretically informative. They are surprising because they violate most textbook accounts of human cognition. They are informative because (1) they transpire in multiple aspects of cognition (e.g., speech and face perception, attention, memory, and reasoning) and (2) they often re-emerge in the course of normal aging (with older adults outperforming younger adults in the same way that children outperform young adults). In the four proposed talks, we will consider such reversals in cognitive development (Brainerd & Reyna; Sloutsky) and cognitive aging (Hasher), as well as the costs and benefits of less is more in the brain (Thompson-Schill). We will then discuss the successes and limitations of our proposal (Newcombe).


Developmental Reversals and Cognitive Development
Introduction by Vladimir M. Sloutsky.


Developmental Reversals in Attention and Memory:
How Cognitive Immaturities Support Exploratory Behavior

Vladimir M. Sloutsky, The Ohio State University, USA

Cognitive immaturities have historically been considered to result mostly in cognitive limitations. This talk presents new evidence demonstrating how children’s limitations in executive function and cognitive control result in developmental reversals in attention and memory tasks. Additional new evidence links these reversals to exploratory behaviors and reveals the mechanisms underlying such behaviors. Taken together, this evidence suggests that cognitive immaturities are adaptive in nature and that they allow children to maximize exploration, something that is necessary for successful learning and cognitive development.


Developmental Reversals in False Memory and Reasoning Illusions
Charles J. Brainerd, Cornell University, USA
    and Valerie F. Reyna, Cornell University, USA

Positive progression – that from childhood to adulthood, memory becomes more accurate and reasoning becomes more logical – is one of our most cherished principles of cognitive development. This makes the developmental reversals that fuzzy-trace theory predicts seem highly counterintuitive. Those predictions fall out of the notion that although verbatim and gist memory both improve with age, they can work against each other in certain types of remembering and certain forms of reasoning. Extensive evidence of such reversals has been reported in two spheres: false memory and the classic reasoning illusions of the judgment-and-decision-making literature. False memories for semantically related word lists, sentences, narrative texts, and live events all exhibit reversals, as do illusions such as decision framing and the conjunction fallacy. In each instance, reversals occur because erroneous inferences reflect advanced semantic capabilities, especially gist extraction and the disposition to rely upon it, whereas primitive verbatim memory governs technically correct performance.


Developmental Reversals in Aging: Costs and Benefits of Cognitive Control
Lynn Hasher, University of Toronto, Canada

Although the ability to control the breadth of attention provides advantages across a range of tasks that have been at the center of interest to cognitive psychologists, there are other, perhaps less studied tasks that benefit from a less tightly regulated, broader focus of attention. On these latter tasks, older adults have been found to outperform young adults (or at least to do as well as young adults). This talk will highlight the surprising benefits of reduced cognitive control using healthy aging as a model, with a few references to findings on time of testing and mood effects, since these, too, are associated with differences in reliance on control even in young adults.


Costs and Benefits of Cognitive Control: When a Little Frontal Cortex Goes a Long Way
Sharon L. Thompson-Schill, University of Pennsylvania, USA

Prefrontal cortex is a key component of a system that enables us to regulate our thoughts, behaviors and emotions, and impairments in all of these domains can readily be observed when this cognitive control system is compromised. Here, I explore a somewhat less intuitive hypothesis, namely that cognitive control has costs, as well as benefits, for cognition. I will provide evidence from several experiments in which we manipulated frontally-mediated cognitive control processes using noninvasive brain stimulation (transcranial direct current stimulation; TDCS) of prefrontal cortex and observed the consequences for different aspects of cognition. Using this experimental methodology, we demonstrate the costs and benefits of cognitive control for language, memory, and creative problem solving. I will suggest that this framework for thinking about cognitive control has important implications for our understanding of cognition in children prior to maturation of prefrontal cortex.

Discussion
Nora Newcombe, Temple University, USA



SYMPOSIUM V
50 Years of Implicit Learning Research:
A Symposium in Honor of Arthur S. Reber

Saturday, November 11  |  1:30 p.m. - 3:30 p.m.
In 1967, the term ‘implicit learning’ was coined to describe a phenomenon in which participants appeared to extract complex underlying rules without being able to report what they had learned (A.S. Reber, 1967). Over the last fifty years, a substantial amount of research has pursued the idea of learning outside awareness through debates on consciousness, the methodological challenges of measuring the lack of awareness, and findings from patients with neuropsychological disorders, eventually arriving at modern views of the cognitive neuroscience of human memory systems. The core concept of implicit knowledge affecting cognitive processes is still visible across research areas ranging from language learning, skill acquisition and expertise, and intuition and decision making, all the way to the representation of stereotype bias. In this symposium we will review research that has followed in this tradition, considering its history and examining how the idea of implicit learning came to pervade theories of memory and continues to influence research in cognitive psychology and cognitive neuroscience.


The Reach of the Unconscious
Axel Cleeremans, Université Libre de Bruxelles, Belgium

A great conceptual pendulum oscillates, with a period of about 30 or 40 years, over our understanding of the relationships between conscious and unconscious information processing. Its path delineates the contours of the unconscious mind as well as its contents: Sometimes smart and defining the very fabric of the mind, the unconscious is at other times relegated to taking care of little more than our bodily functions. The pioneering work of Arthur Reber suggested that the unconscious is not only capable of influencing ongoing processing, but also that it can learn! However, even Reber was cautious in this respect, reminding us that the only safe conclusion is that participants’ ability to verbalise the knowledge they have acquired always seems to lag their ability to use it. Today, it often feels like we have thrown caution to the wind, with many questioning the very functions of consciousness and arguing that it is but a mere epiphenomenon. Here, I will revisit this long-standing debate and suggest that the pendulum has swung a little too far. A few general principles emerge from this skeptical analysis. First, the unconscious is probably overrated. Second, since awareness cannot be “turned off”, it should be clear that any decision always involves a complex mixture of conscious and unconscious determinants. Third, there is a pervasive and continuing confusion between information processing without awareness and information processing without attention. Implicit learning, as a field, remains fertile ground for exploring such issues.


Implicit Learning in Healthy Old Age: The Central Influence of Arthur S. Reber
James H. Howard, Jr., The Catholic University of America, USA and Darlene V. Howard, Georgetown University, USA

In his later work, Arthur Reber proposed that the implicit learning system was more basic than the often-studied explicit system and thus less sensitive to brain insult. Consistent with the prevailing view of aging at the time, research focused almost exclusively on age-related cognitive declines, as shown for episodic memory and other forms of explicit cognition. Reber’s proposal suggested that this was providing an incomplete picture of cognitive aging, since implicit learning is essential throughout life for acquiring new skills and adapting to new physical and social environments. Since the mid-1990s, our group has investigated the aging of implicit learning, finding that although learning of deterministic relationships is indeed relatively preserved with age, declines do occur when learning probabilistic sequential relationships. Furthermore, this pattern of savings and loss can be related to selective age-related differences in brain function, consistent with Reber’s hypothesis.


Implicit Learning and the Multiple Memory Systems Framework
Barbara J. Knowlton, University of California, Los Angeles, USA

The idea that complex structure can be acquired implicitly has been important for the view that there are multiple memory systems that depend on different brain systems. Studies showing that patients with amnesia are capable of learning perceptuomotor skills gave rise to the distinction between declarative vs. procedural learning (knowing that vs. knowing how; Cohen & Squire, 1980). However, subsequent studies showing that these patients were able to acquire the structure of an artificial grammar and category prototypes served to broaden our understanding of the capabilities of nondeclarative learning, in that it became clear that patients with amnesia were also able to learn complex information that was not embedded within a learned procedure. Rather, the lack of awareness for what had been learned became a more important diagnostic feature of nondeclarative memory. Studies of implicit learning in amnesia expanded the notion of multiple memory systems beyond the procedural-declarative dichotomy. Recent studies have focused on elucidating the distinct roles of cerebral cortex, basal ganglia and cerebellum in implicitly acquiring structure from input.


A Brief History of Implicit Learning From Language to Memory Systems and Beyond
Paul J. Reber, Northwestern University, USA

The first description of the phenomenon of implicit learning occurred at a time when the science of psychology was just beginning a “cognitive revolution” that allowed for theories that gave serious consideration to knowledge representations within the human mind. Early debates about measuring consciousness and awareness of implicitly learned knowledge showed how challenging it can be to infer representation characteristics solely from behavior. The eventual emergence of cognitive neuroscience and the memory systems framework provided a supporting route for integrating findings from neuropsychology and neurobiology into a coherent view of multiple types of learning and memory. This interdisciplinary approach has allowed the core concept of implicit learning to guide our recent research on cognitive questions outside the laboratory in areas such as intuitive decision making, skill learning, and even cybersecurity. These examples demonstrate how the key original insight that there is more than one kind of learning continues to influence an exceptionally broad range of research across psychological science.

Discussion
Arthur S. Reber

 

The Call for Symposia closed on May 1, 2017.






