59th Annual Meeting | November 15-18, 2018 | New Orleans

#psynom18

Symposia


Generalization in Language and Memory
Should Statistics Determine the Practice of Science, or Science Determine the Practice of Statistics?
Medical Image Perception and Decision Making
What Speech Prosody Can Tell Us About Cognition
Time for Action: Reaching for a Better Understanding of the Dynamics of Cognition


Generalization in Language and Memory

Co-chairs: Jelena Mirkovic, York St John University, United Kingdom, and University of York, United Kingdom, and
M. Gareth Gaskell, University of York, United Kingdom
Domain-general memory processes play a key role in language learning and use. For example, sleep-related memory consolidation has been shown to influence vocabulary learning, the learning of speech sounds and phonotactic regularities, and grammar learning, both in adults and in children. In this symposium, we will examine the contribution of memory consolidation mechanisms to the process of generalization. Generalization is central to linguistic processing, from generalizing speech sounds from native to non-native speakers, to generalizing knowledge of syntactic structures to every new sentence we read or hear. Despite this centrality, attempts to understand the nature of generalization across these domains have been rare. The symposium talks will assemble empirical findings across a range of domains (the learning of sounds, grammar, semantic representations, morphology, print-to-sound mappings) and allow us to explore how they link to computational and neurobehavioral models of memory systems.

Experience Shapes Generalisation of Form-Meaning Knowledge in Mind and Brain

Kathleen Rastle, Royal Holloway, University of London, United Kingdom
Most English words are built by combining and recombining smaller meaningful units (morphemes; e.g. develop, developer, undeveloped, redeveloping). This morphological structure provides the primary basis for generalisation in the language, giving users the flexibility to create and understand new words (e.g. hyperdevelopment). However, this type of generalisation depends on the acquisition of knowledge relating the forms of words to particular meanings (e.g. hyper means ‘having too much of a quality’). In this talk, I will describe what is known about how this form of knowledge is acquired and represented, and how it is shaped by experience. Drawing on both behavioural and neuroimaging studies of skilled readers, I provide evidence that acquisition of this knowledge reflects an accumulation of text experience, and I discuss whether particular types of text experience are more important than others. Finally, I describe how this knowledge may change over the course of learning, or through modifications to the nature of the training regime.


Consolidation in Non-Native Phonetic Learning

Emily Myers, University of Connecticut, USA
The acoustics of speech vary according to the talker producing the sound and as a function of coarticulation from adjacent speech sounds. As such, non-native speech sound learning inherently requires generalization from learned instances to new talkers and phonological contexts. Work from our group suggests that a consolidation period that contains sleep (but not a waking interval) is sufficient to support generalization from one talker to a similar novel talker, both in non-native learning and in perception of accented speech. These findings support the hypothesis that consolidation during sleep allows listeners to abstract away from trained tokens and generalize to a new talker; however, overnight generalization to novel phonological contexts has been harder to demonstrate. In this talk, I will discuss the role of sleep-mediated consolidation processes in non-native phonetic learning, and will point to potential obstacles that may prevent overnight generalization to new phonological environments.


Influences of Sleep and Time of Day on Memory and Generalization of Semantic Category Structure

Anna Schapiro, Harvard Medical School, USA
Semantic memory encompasses knowledge both about the properties that typify concepts (e.g. robins, like all birds, have wings) and about the properties that individuate conceptually related items (e.g. robins, in particular, have red breasts). We investigated the impact of sleep on new semantic learning using a property inference task in which participants learned names and visual properties of objects possessing both category-typical and exemplar-unique properties. Memory for typical properties improved, and memory for unique properties was preserved, across a night of sleep, while memory for both feature types declined over a day awake. These and other findings from the literature motivate the idea that sleep may also support generalization of category properties to novel objects. In experiments targeting this generalization, we found no effect of sleep but instead a strong time-of-day effect, with better generalization occurring in the morning.


Contributions of Memory Processing During Sleep to Rule Generalization in Language

Laura J. Batterink, University of Western Ontario, Canada
Sleep plays an important role in memory consolidation, as well as in abstraction and generalization. During sleep, memories that share common elements may be reactivated together, strengthening the shared connections and leading to the formation of general rules or schemas. Generalization is an essential component of many aspects of language acquisition, including grammar learning. In two studies using different artificial language learning paradigms, we tested whether the extraction of novel grammatical rules is facilitated by memory processing during sleep. In the first study, we found evidence that slow-wave sleep and REM sleep synergistically facilitate the extraction of a novel hidden grammar rule. In the second study, we demonstrated that grammatical generalization can be biased and promoted through auditory cueing during sleep. These findings have important implications for language learners, suggesting that sleep provides an opportunity to stabilize, and perhaps actively enhance, grammatical rule knowledge acquired during wake.


The Intertwined Nature of Learning, Representation, and Generalization: Insights From Connectionist Modeling

Blair C. Armstrong, University of Toronto Scarborough, Canada, and
Basque Center on Cognition, Brain and Language, San Sebastián, Spain

     with Nicolas Dumay, Basque Center on Cognition, Brain and Language, San Sebastián, Spain, and University of Exeter, United Kingdom;
     Woojae Kim, Howard University, USA; and Mark A. Pitt, The Ohio State University, USA

There are competing tensions between learning to represent new words and generalizing new knowledge. For example, learning new spelling-sound correspondences in English often benefits from generalizing from other words (e.g., bint rhyming with mint, tint). However, exception words must also be represented (e.g., pint). This work explored how these pressures can be understood in terms of a graded representational "warping" mechanism inherent to the connectionist framework. We studied how made-up words with a dominant (regular), subordinate (ambiguous), or previously non-existent (exception) pronunciation were learned in simulations and in a multi-day training experiment. The results showed that generalization was related to the degree of warping required to represent the new pronunciation. These findings highlight how theories of representation are fundamentally intertwined with theories of learning and generalization. The impact of warping is discussed in relation to other types of language learning (e.g., second language, semantics) as well as statistical learning.
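
The talk's simulations use trained connectionist networks; as a far smaller illustration of the regular/exception tension described above, the sketch below trains a single logistic unit on "-int" words. The items, feature coding, and parameters are all invented for illustration and are not the authors' model.

```python
# Minimal sketch (not the authors' model): one logistic unit learns
# regular vs. exception pronunciations of "-int" words. Items, feature
# coding, and hyperparameters are all hypothetical.
import numpy as np

ONSETS = ["m", "t", "h", "p", "b"]           # "b" is held out for testing

def encode(onset):
    """One-hot onset letter plus a rime feature shared by every '-int' word."""
    vec = np.zeros(len(ONSETS) + 1)
    vec[ONSETS.index(onset)] = 1.0
    vec[-1] = 1.0
    return vec

# Training set: mint/tint/hint take the regular vowel (target 0);
# pint is the exception (target 1). "bint" is never trained.
X = np.array([encode(o) for o in ["m", "t", "h", "p"]])
y = np.array([0.0, 0.0, 0.0, 1.0])

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=X.shape[1])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):                        # gradient descent on cross-entropy
    p = sigmoid(X @ w)
    w -= 0.5 * X.T @ (p - y) / len(y)

print("P(exception | pint):", sigmoid(encode("p") @ w))  # learned exception, near 1
print("P(exception | bint):", sigmoid(encode("b") @ w))  # regular generalization, near 0
```

In this toy version, the exception is accommodated by a large onset-specific weight pulling against the shared rime weight, a crude analogue of the representational warping the talk formalizes, while the novel word inherits the regular mapping.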


How Does Consolidation Support Language Generalization?

M. Gareth Gaskell, University of York, United Kingdom
     with Jelena Mirkovic, York St John University, United Kingdom, and University of York, United Kingdom
Memory consolidation is often thought to promote generalization. We will review a set of recent studies that provide empirical tests of this claim within the language domain. In each study, we have used sets of associations (e.g., form-meaning mappings) that vary in the systematicity of the mapping across the trained pairs. The typical pattern that emerges from these studies is that the strength of the consolidation effect depends on the level of systematicity, with mappings that are low in systematicity (i.e., arbitrary) showing the largest benefit. In effect, consolidation promotes arbitrary mappings most strongly, with generalization based on overlapping patterns benefiting to a lesser extent or not at all. With this in mind, we take a new look at the properties of a complementary memory systems account of language learning, focusing on the ability of the medial temporal lobes to support generalization, and the ability of cortical networks to acquire regularities prior to consolidation.

Should Statistics Determine the Practice of Science, or Science Determine the Practice of Statistics?

Chair: Richard M. Shiffrin, Indiana University Bloomington, USA
The so-called ‘reproducibility crisis’ has raised questions about the intersection of science and statistics. The two have evolved together, so the answer to the title question is ‘both’, but the recent focus on irreproducible reports may have tipped the balance toward more and better statistical regulation of how data are produced and of publication criteria. Some believe reform is needed, with possible remedies lying in stricter statistical criteria. Others believe flexibility in scientific decision making is more important. Related issues concern the emphasis on replication vs. generalization, a priori hypothesis generation and study design vs. post hoc fishing for data patterns, the importance of patterns of data across multiple conditions vs. the amount of data per condition, and the tension between data collection and the costs of research. Should the primary goal be statistical rigor or scientific progress, to the extent these might differ? These issues are a sample of what should be covered in a debate among scientists and statisticians about the appropriate cooperation between science and statistics.

Science Should Govern the Practice of Statistics

Richard M. Shiffrin, Indiana University Bloomington, USA
Although there are two sides to these complex issues, this talk will make the case for the scientific-judgment side of the ledger. I will argue that statistics should serve science and should be consistent with the scientific judgment that has historically produced progress. I argue against one-size-fits-all statistical criteria, against the view that a fundamental scientific goal should be reproducibility, and against the suppression of irreproducible results. I note that replications should, on average, produce smaller effects than initial reports, even when science is done as well as possible. I make a case that science is post hoc and that most progress occurs when unexpected results are found (and hence against the general use of pre-registration). I argue that much scientific progress is due to the production of causal accounts of the processes underlying observed data, often instantiated as quantitative models aimed at explaining qualitative patterns across many conditions, in contrast to well-defined descriptive statistical models.


The Emptiness of Statistics without Scientific Context

Richard Morey, Cardiff University, United Kingdom
Recently, psychological scientists have been (rightly) re-examining their methodology. Failures to replicate many studies, including those featured in textbooks, have thrown light on pernicious practices that run the gamut from opportunistic analyses to outright fraud. The use of formal “statistical significance” at the 5% level (p < .05) is often blamed for weak replicability. Benjamin et al. (2018) recently called for statistical significance to be redefined to the .5% level (p < .005) to raise the evidential bar for “discoveries” of new effects. It is my view that the proposal is based on a misunderstanding: there cannot be any statistical criterion for discovery, because discovery occurs against the background of a scientific context. Adopting a more stringent criterion may exacerbate the problem by reifying yet another statistical criterion instead of dealing with the problem at its philosophical core.


Some Bayesian and Psychometric Reflections on Reproducibility and Robustness

Joachim Vandekerckhove, University of California, Irvine, USA
     with Beth Baribault, University of California, Irvine, USA
In retrospect, it appears that there are many good explanations for the poor reproducibility of psychological studies. Poor research practice, publication bias, small samples, misaligned incentives, and over-interpretation of weak evidence all seem to occur regularly. I will discuss some of these issues from the perspective of Bayesian statistics (focusing on evidence considerations rather than a lexicographic decision rule); from the perspective of psychometrics (focusing on generalizability and robustness rather than one-off observations); and from the perspective of cognitive science (focusing on model building rather than hypothesis testing). I will suggest a few new approaches, some top-down and some bottom-up, and some more radical than others, that might improve our research practice. The approaches have in common that they are meant to encourage the two-way traffic between scientific and statistical goals, with careful translation from scientific accounts to statistical models and rigorous inference from finite data to abstract knowledge.


Breaking Out of Tradition: Statistical Innovation in Psychological Science

Trish Van Zandt, The Ohio State University, USA
     with Steve MacEachern, The Ohio State University, USA
An insistence on p-values that meet arbitrary criteria for publication has resulted in the current replication crisis. This misapplication of statistical methods can be tied directly to a failure of statistics education in psychological science. Not only must we reconsider the statistics curriculum in our graduate programs, but we argue that we must shift our focus to methods that (a) permit the construction of hierarchical models that allow us to explain a range of individual differences under a common theoretical umbrella, and (b) move us away from procedures that emphasize asymptotic models (such as the GLM) over theory-driven models. Finally, we need to emphasize the discovery of qualitative patterns in data over quantitative differences across groups, which leave open the question of what constitutes a practically meaningful difference. Bayesian methods provide ways to address these difficulties. In this talk, we discuss how Bayesian models incorporate meaningful theory, and how hierarchical structures are flexible enough to explain a wide range of individual differences.
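
To make the hierarchical idea concrete, here is a minimal sketch of normal-normal partial pooling on invented per-participant data; it is not the speakers' model, and every number in it is hypothetical. Each participant's observed mean is shrunk toward the group mean in proportion to how noisy it is, which is how hierarchical models accommodate individual differences under a common umbrella.

```python
# Minimal sketch of normal-normal partial pooling (hypothetical data):
# each participant's mean is shrunk toward the group mean, with noisier
# estimates shrunk more. Illustration only, not the speakers' models.
import numpy as np

rng = np.random.default_rng(1)
true_effects = rng.normal(0.3, 0.15, size=12)      # 12 participants, real individual differences
n_trials = rng.integers(10, 60, size=12)           # unequal data per participant
obs_means = true_effects + rng.normal(0.0, 1.0 / np.sqrt(n_trials))

sigma2 = 1.0 / n_trials                            # sampling variance of each observed mean
tau2 = max(np.var(obs_means) - sigma2.mean(), 1e-6)   # crude estimate of between-person variance
mu = np.average(obs_means, weights=1.0 / (sigma2 + tau2))  # precision-weighted group mean

# Posterior mean per participant: precision-weighted blend of own data and group mean
shrunk = (obs_means / sigma2 + mu / tau2) / (1.0 / sigma2 + 1.0 / tau2)

print("squared error, raw means   :", np.mean((obs_means - true_effects) ** 2))
print("squared error, hierarchical:", np.mean((shrunk - true_effects) ** 2))
```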


When Are Sample Means Meaningful? The Role of Modern Estimation in Psychological Science

Clintin Davis-Stober, University of Missouri, USA
Sample means are considered a foundational statistic for understanding experimental psychological data. We argue that in many areas of psychology sample means are unacceptably inaccurate at estimating the parameters of interest, often due to impoverished sample and effect sizes. We define the sample mean as unacceptably inaccurate if it is less accurate than a benchmark estimator that does not use the data to estimate the relations among experimental conditions. We consider two such benchmark estimators: one that randomizes the relations among conditions, and another that asserts there are no condition effects regardless of the data. We show that there are common cases within psychology where these (nonsensical) benchmark estimators outperform sample means on average, and that such cases can arise even when effects are detected. Our argument highlights the need for modern estimation methods, e.g., hierarchical Bayes, which yield more accurate estimates. We argue that modern estimators should replace sample means, even for describing data, because they are interpretable in a wider range of contexts.
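
The claim can be illustrated with a small simulation (my construction, not the talk's analyses; all numbers are invented): when true condition effects are small relative to trial noise and n per condition is modest, the grand mean, an estimator that insists there are no condition effects at all, beats the per-condition sample means in mean squared error.

```python
# Illustrative simulation (not from the talk): with weak effects and
# small n, the 'no condition effects' benchmark (grand mean) estimates
# the condition means more accurately, on average, than the cell means.
import numpy as np

rng = np.random.default_rng(0)
k, n = 8, 10                    # 8 conditions, 10 observations each (hypothetical)
mse_cell, mse_grand = [], []

for _ in range(2000):
    true_means = rng.normal(0.0, 0.10, size=k)              # small true effects
    data = true_means + rng.normal(0.0, 1.0, size=(n, k))   # noisy trials
    cell_means = data.mean(axis=0)
    grand_mean = data.mean()                                # benchmark: no condition effects
    mse_cell.append(np.mean((cell_means - true_means) ** 2))
    mse_grand.append(np.mean((grand_mean - true_means) ** 2))

print("MSE of sample means   :", np.mean(mse_cell))    # ~ noise variance / n = 0.100
print("MSE of grand-mean rule:", np.mean(mse_grand))   # ~ effect variance + 1/(n*k), much smaller here
```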


We Should be Doing More Estimation: Designing Studies for Hypothesis Testing May Be Slowing Our Progress

Christopher Donkin, University of New South Wales, Australia
Most of us were trained to do hypothesis testing above all else, and most experiments are planned around an eventual hypothesis test. While hypothesis testing is an integral part of the research process, it should be reserved for when we have competing hypotheses that make quantitative predictions for data, rather than a null and a 'default' alternative. More concerning is that the premature use of hypothesis testing may actually stymie the development of quantitative models, since the space of potential experiments goes under-explored. I argue that we may be better served by doing more estimation, and by using the information we gain to build alternative hypotheses that combine the data we have observed with the theoretical principles we wish to build in.



Medical Image Perception and Decision Making

Chair: Trafton Drew, University of Utah, USA
Despite technological advances, cancer detection and diagnosis remain fallible: false-negative rates vary across the field but are often >15%. Moreover, radiology is often the subject of medical malpractice lawsuits on the basis of lesions that are retrospectively visible. Are these perceptual errors, decision errors, or something else? Cognitive psychology has much to say about the differences between retrospectively visible and prospectively visible phenomena, but there is often a disconnect between the basic science of cognitive psychology and the applied questions that are relevant to medical image interpretation. The growing field of medical image perception is devoted to leveraging cognitive psychology to improve diagnostic image evaluation. This symposium will highlight some recent advances in this field as researchers use cyclic interactions between clinical observations and basic science to advance our understanding of the role of perception and decision making in diagnostic image evaluation.

Appreciating the Role of the Observer in the Interpretation of Medical Images

Elizabeth Krupinski, Emory University, USA
Medical images constitute a core portion of the information physicians utilize to render diagnostic and treatment decisions. At a fundamental level, the diagnostic process involves two aspects – visually inspecting images (perception) and rendering interpretations (cognition). Key indications of expert interpretation are consistent, accurate and efficient diagnostic performance, but how do we know when someone has attained the level of training required to be considered an expert? How do we know the best way to present images to the clinician in order to optimize accuracy and efficiency and avoid fatigue? The advent of digital imaging has dramatically changed the way that clinicians view images, how residents are trained, and thus potentially the way they interpret image information, emphasizing our need to understand how clinicians interact with the information in an image during the interpretation process. With improved understanding we can develop ways to further improve decision-making and thus improve patient care.


The Influence of Prior Expectation and Expertise on Attentional Cueing in Medical Images

Ann J. Carrigan, Macquarie University, Australia
     with Kim M. Curby, Denise Moerel, Anina N. Rich, Macquarie University, Australia
Radiologists make critical decisions based on searching and interpreting medical images. Prior expectations may set a search strategy or attentional bias, as the probability of a lung nodule differs across anatomical regions within the chest. Using a modified attention-cueing paradigm, we investigated the potential for information in medical images to cause attention shifts in radiologists and control participants. For the radiologists, the results showed no underlying bias of attention when normal chest radiographs were shown as primes. Attention was spatially cued by a nodule within a chest radiograph when the images were presented in an upright orientation, and a reversed effect was seen when the image was inverted. For the control participants, no cueing effects were seen, which suggests that the attentional cueing we see for the radiologists is likely due to their experience with medical images. These findings have clinical implications and teaching benefits.


Why Doesn’t That Clever Computer Aided Detection System Work as Well as Theory Says It Should?

Jeremy M. Wolfe, Harvard Medical School/Brigham and Women's Hospital, USA
The future of radiology is a future where radiologists collaborate with technology to detect and identify pathology. While computer aided detection and diagnosis systems have been part of radiology for decades, recent advances in machine learning have led some to suggest that AI will replace radiologists. For the foreseeable future, however, it is more plausible that “Machine Learning is the Next Chapter of Radiology, Not the Last”. Unfortunately, at present, the combination of expert radiologists with expert AI does not produce as much of a benefit as theory might lead us to suspect. There are many ways for an AI to deliver its information to a human. These different methods can produce very different results and those results will be profoundly shaped by the structure of the specific task. I will describe a methodology for studying and improving the human-AI interaction, starting with non-expert observers and moving to radiologists.
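
For a sense of what "theory might lead us to suspect": under textbook equal-variance signal detection assumptions, a team that optimally combines two independent observers' evidence should achieve a d' equal to the root sum of squares of the individual d' values. The sketch below computes that ideal ceiling for hypothetical human and AI sensitivities; it is a standard SDT calculation, not an analysis from the talk.

```python
# Textbook signal-detection benchmark (hypothetical d' values, not data
# from the talk): the ceiling for an ideal human+AI team that combines
# independent evidence optimally.
from math import sqrt
from statistics import NormalDist

def percent_correct(d_prime):
    """Best accuracy for unbiased yes/no with equal priors: Phi(d'/2)."""
    return NormalDist().cdf(d_prime / 2.0)

d_human, d_ai = 2.0, 2.2                   # illustrative sensitivities
d_team = sqrt(d_human ** 2 + d_ai ** 2)    # ideal combination of independent evidence

for label, d in [("human", d_human), ("AI", d_ai), ("ideal team", d_team)]:
    print(f"{label:10s} d' = {d:.2f}, best accuracy = {percent_correct(d):.3f}")
```

The abstract's point is that real human-AI teams typically fall short of ceilings like this one, and that how the AI's information is delivered determines how large the shortfall is.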


Using Cognitive Psychology Tools to Understand Breast Cancer Detection in Segmented-3D Displays

Stephen Mitroff, The George Washington University, USA
     with Stephen H. Adamo, The George Washington University, USA
Tomosynthesis is a new technique in breast cancer detection that creates a segmented, three-dimensional (3D) image of breast tissue, allowing radiologists to move through layers of depth. Compared to mammography, where all breast tissue is compressed into one 2D image, tomosynthesis reduces unnecessary patient callbacks by ~20% while increasing detection rates. However, tomosynthesis can take ~2x longer than mammography alone. To better understand segmented-3D search and its implications for radiology, we have created a testing platform that examines differences between 2D and segmented-3D search. Critically, this paradigm shows a pattern of performance similar to radiological findings, is flexible, and can be run with non-expert populations (which are easier and cheaper to recruit). The goal is to test ideas with the paradigm and then bring the most promising findings to the clinic. We will discuss an example where multiple-target visual search errors differ in segmented-3D searches compared to 2D searches.


From Interaction to Inspiration: Clinical Experience to Spark and Cultivate Research

Joann Elmore, University of California, Los Angeles, USA
Physicians’ clinical judgment and diagnostic accuracy are fundamental to delivering high-quality patient care. My interactions with patients as a primary care physician have inspired the trajectory of my research. My prior work has documented extensive variability among radiologists in their interpretations of mammograms and low levels of accuracy among pathologists in the diagnosis of breast and skin biopsies. The potential reasons for such considerable variability are intriguing, as diagnostic accuracy is affected by various intervening factors such as clinical experience, fear of malpractice, and tolerance of ambiguity, to name a few. In this talk, I will describe how my clinical experiences as a practicing physician critically inform my research program, which is devoted to improving cancer detection rates.


Interruption in Radiology: Quantifying Differential Cost in Response to Different Types of Interruption

Trafton Drew, University of Utah, USA
     with Lauren Williams, William Aufferman, and Megan Mills, University of Utah, USA
Radiologists are often interrupted during diagnostic image interpretation, where mistakes can be life-threatening. We know from the cognitive psychology literature that interruptions lead to a wide array of negative outcomes. However, not all interruptions are created equal. We think an inadvertent tap on the shoulder is less distracting than being told a patient has stopped breathing, but there are essentially no data with which to evaluate the accuracy of this belief. In a series of three studies with radiologist observers reading real medical cases, we sought to address this gap in the literature. To evaluate the cost of interruption, we examined time spent per case and which structures were fixated, using mobile eye-tracking. We were surprised to find that some interruptions yielded no observable cost on any of these metrics. More generally, though, the interruption cost increased with the disruptiveness of the interruption, consistent with our hypothesis.

Discussant: Todd Horowitz, National Cancer Institute, USA

What Speech Prosody Can Tell Us About Cognition

Chair: Cassandra Jacobs, University of California, Davis, USA
Prosody is an often-overlooked but incredibly expressive aspect of language production, encompassing acoustic and temporal factors such as word durations, speech rate, disfluencies, pitch or intonation, and volume or intensity. Differences in prosody reflect cognitive factors, such as the degree of taxation on working memory and recent experience or ease of processing, as well as linguistic factors. Prosody is useful in first language acquisition, learning and memory, and online language comprehension. Both child and adult listeners use it to guide their decisions about how to interact with the real world. The prosodic form of an utterance can betray our cognitive and mental states, such as what we know that others do not. This symposium will focus on how speech prosody can inform cognitive scientific theories of learning and memory and of categories and concepts, in addition to psycholinguistic theories.

What Can Prosody Tell Us About Language Production?

Duane Watson, Vanderbilt University, USA
Language production is fast, context dependent, and driven by communicative intentions, which makes it difficult to study in a controlled laboratory environment. Historically, researchers have used measures such as reaction time and speech errors to understand the mechanisms that underlie language production. In this talk, I will argue that we can better understand the language production system by measuring an utterance's prosody. Specifically, I will show that the mechanisms engaged in lexical access and phonological encoding affect how an utterance is realized. I will also present evidence that different levels of language production (e.g. lexical access vs. phonological encoding) have differing effects on where speakers speed up and slow down over the course of an utterance. Thus, an utterance’s prosody, i.e. how an utterance is produced, can provide a window into the workings of language production.


Variability and Inferences in Pragmatic Interpretation of Speech Prosody

Chigusa Kurumada, University of Rochester, USA
     with Andrés Buxó-Lugo, University of Rochester, USA

How humans cope with uncertainty in information processing has been a persistent puzzle in the cognitive sciences. In language comprehension, naturally produced speech sounds are noisy and variable, creating uncertainty in the mappings between the signal and underlying representations (phonemes, words). Additionally, the inferences supporting these mappings are informed by comprehenders’ beliefs about the world and by contextually induced, not directly observable, factors such as the talker’s goals and intentions. How can we investigate this multifaceted process involving different levels of inference? We address this problem by focusing on the comprehension of speech prosody, which conveys much information about the talker’s pragmatic intentions (e.g., questions vs. statements). Drawing on recent models of human perception (e.g., ideal observer models), we present a framework in which listeners optimize their pragmatic interpretation of prosody by leveraging: 1) implicit knowledge of the statistical structure of acoustic cues, learned over past experience; and 2) contextual inferences over possible and likely meanings.
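
As a minimal sketch of the ideal-observer logic (my construction; the cue distributions, prior, and numbers are all invented), a listener can combine a context-based prior over intentions with cue likelihoods learned from past experience:

```python
# Minimal ideal-observer sketch for prosodic interpretation (all
# distributions and numbers are hypothetical, for illustration only).
from statistics import NormalDist

# 1) Implicit knowledge from experience: distributions of utterance-final
#    pitch rise (in semitones) under each pragmatic intention.
cue_given_question  = NormalDist(mu=4.0, sigma=2.0)
cue_given_statement = NormalDist(mu=0.0, sigma=2.0)

# 2) Contextual inference: prior probability that a question is likely here.
p_question = 0.3

def posterior_question(observed_rise):
    """Bayes' rule over intentions given one noisy prosodic cue."""
    lq = cue_given_question.pdf(observed_rise) * p_question
    ls = cue_given_statement.pdf(observed_rise) * (1.0 - p_question)
    return lq / (lq + ls)

for rise in [0.0, 2.0, 4.0]:
    print(f"final rise = {rise} st -> P(question) = {posterior_question(rise):.2f}")
```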


Focus and Disfluencies in Spoken Language Comprehension

Fernanda Ferreira, University of California, Davis, USA
A word is considered focused if it sits in a prominent syntactic position or is spoken with prosodic emphasis, that is, accented relative to other words in the utterance. Focused constituents attract listeners’ attention and activate a set of alternate forms. For example, when listeners hear “John wanted not only a dog but also a…___”, they generate a set of dog alternates likely to fill the upcoming slot. Our work on the processing of disfluent speech has shown that sequences involving a speech error and its correction behave similarly: when listeners identify a word as a speech error, they generate a set of alternates likely to serve as the repair (e.g., “John wanted a dog um I mean a…___”). These findings suggest that the tendency to predict is equally strong in disfluency and in focus constructions, indicating that disfluent and fluent speech are processed using similar mechanisms.


Implicit Prosody in Reading Relies on Similar Cognitive Mechanisms as Explicit Prosody

Mara Breen, Mount Holyoke College, USA
In addition to the evidence that prosody influences the comprehension of spoken language, recent empirical work suggests that implicit prosodic representations are activated during silent reading. Using methods similar to those employed for visual imagery, we have demonstrated evidence for implicit prosody in the form of behavioral similarities between auditory perception and auditory imagery. In addition, we’ve observed correlations between individual speakers’ patterns of explicit speech production and their (silent) reading proficiency. Most recently, we’ve observed neuroimaging evidence of overlap in the electrophysiological responses to prosodic manipulations implemented in spoken and signed languages. These results inform fundamental questions about the representations that underlie language processing, the domain-generality of prosodic representations, and the role of individual differences in cognitive processing.


Panel Discussion

Cassandra Jacobs, University of California, Davis, USA
Different prosodic properties of an utterance, such as disfluencies, intonation, and speech rate, can inform listeners about the speaker’s internal state. Prosody also provides an additional perspective on the mechanics of language comprehension and production. Many of the same mechanisms that are involved in processing the visual world appear to help listeners understand spoken and written language. At the same time, many open questions remain. Are the abilities that allow listeners to predict what a speaker will say domain-general? If not, what are the limitations to the parallels we can draw between language and other aspects of cognition, and why is language different? If prosodic processing engages language-specific abilities, do these abilities differ from other dimensions of language processing, such as speech perception? Finally, how might we use prosody in experiments to better understand other facets of cognition, such as expertise, skill learning, and category learning?


Time for Action: Reaching for a Better Understanding of the Dynamics of Cognition
From the Psychonomic Society Leading Edge Workshop Initiative.

Co-chairs: Joo-Hyun Song, Brown University, USA, and Timothy Welsh, University of Toronto, Canada
The goal of this symposium is to share research and theoretical perspectives that have advanced our understanding of how cognition and action systems are integrated and operate synergistically. Historically, the transformation of sensory inputs into action has been treated as a set of relatively unidirectional processing events, with the results of low-level sensory and earlier perceptual processes informing higher-order cognitive processes until a decision is made to respond, at which point the action system receives its instructions. Given this approach, it may not be too surprising that there has been relatively little interaction between researchers in the cognitive and motor domains. A deeper understanding of human behavior has thus been hindered, because little attention has been paid to the broader context of action and to how action processes are embedded in the larger canvas of visual attention, memory, learning, decision making, and interpersonal interaction. This knowledge is vital to understanding human behavior and will help shape the design of everyday objects and of training and working environments.

Introduction

Joo-Hyun Song, Brown University, USA, and Timothy Welsh, University of Toronto, Canada


The Time For Action is At Hand

David Rosenbaum, University of California, Riverside, USA
The science of mental life and behavior has paid surprisingly little attention to the means by which mental life is translated into physical actions. Even with the growing acceptance of embodiment, motor control has more often been viewed as a window into perception and cognition than as a topic in its own right. In a 2005 American Psychologist article called “The Cinderella of Psychology,” I suggested that the relegation of motor control to the sidelines of psychology has a number of historical causes. I will briefly review those causes and then turn to studies showing that psychonomic-style research has revealed important principles of action organization. New, exciting lines of work are also in the pipeline. These developments suggest that the time for the integration of action research with research on cognition, perception, and emotion is at hand.


Towards A Unitary Approach To Human Action Control

Bernhard Hommel, Cognitive Psychology Unit & Leiden Institute for Brain and Cognition, Leiden University, The Netherlands
     with Reinout W. Wiers, University of Amsterdam, Amsterdam, The Netherlands
From its academic beginnings, the theory of human action control has distinguished between endogenously driven, intentional action and exogenously driven, habitual or automatic action. We challenge this dual-route model and argue that attempts to provide clear-cut criteria for distinguishing between intentional and automatic action have systematically failed. Specifically, we show that there is no evidence for intention-independent action, and that attempts to use the criteria of reward sensitivity and rationality to differentiate intentional from automatic action are conceptually unsound. As a more parsimonious and more feasible alternative, we suggest a unitary approach to action control, according to which actions are (a) represented by codes of their perceptual effects; (b) selected by matching intention-sensitive selection criteria; and (c) moderated by metacontrol states.


Actions as Social Signals: Methods and a Framework for Studies of Human Social Interaction

Antonia Hamilton, University College London, United Kingdom
Social interactions in real life are spontaneous, fluid, and rarely repeated in exactly the same way again. How, then, can we pin down these behaviours in the lab and place them in a theoretical framework? To answer these questions, I will describe a series of studies that record hand, head, and body movement in high resolution during naturalistic social interactions. We characterize these actions in a social-signalling framework, testing whether an action is performed purely for the actor or is intended as a signal that sends a message to another person. For example, we find evidence that imitation is greater when people believe they are being watched, showing that imitation is a social signal. This work showcases how high-precision methods and strong theories are both needed to advance studies of human social interaction.


Dynamics of Distraction in Goal-Directed Action

Jeff Moher, Connecticut College, USA
Our ability to stay focused on the task at hand fluctuates from moment to moment. In recent work, I have explored these fluctuations in focused attention in the context of goal-directed action. Specifically, I have found that the trajectory of a hand movement towards a target in a simple search task can vary widely over time, despite no changes in the task or stimulus properties. Increased deviation towards a non-target distractor on one trial appears to indicate a lack of focus that persists into subsequent trials. These effects are not limited to motor priming, as similar patterns are observed when participants switch between goal-directed action responses and keypress responses. The results of these studies hold promise both for furthering our understanding of the dynamics of attention and for practical implementations that could detect drifts in focus during high-stakes situations (e.g., driving) and send alerts to prevent costly errors.


Choice Reaching with a LEGO Arm Robot (CoRLEGO)

Dietmar Heinke, University of Birmingham, United Kingdom
I will present a neurobiologically inspired robotics model termed CoRLEGO (Choice Reaching with a LEGO arm robot). CoRLEGO’s architecture is based on the assumption that the process of selecting reaching targets can leak into the motor system (i.e., the leakage effect). In CoRLEGO, this leakage effect is implemented with neurobiologically plausible dynamic neural fields (DNFs): competitive target selection operating over topological representations of motor parameters. CoRLEGO demonstrates how the leakage effect can simulate evidence from Song and colleagues’ choice reaching studies, such as the curvature effect and the colour priming effect. An extension of CoRLEGO can mimic findings that transcranial direct current stimulation (tDCS) over the motor cortex modulates the colour priming effect (Woodgate et al., 2015). The extension includes feedback connections from the motor system to the brain’s attentional system (parietal cortex). This architecture adds to growing evidence that there is a close interaction between the motor system and the attention system, a view that differs from the traditional conceptualization of the motor system as the endpoint of a serial chain of processing stages.
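
For readers unfamiliar with DNFs, below is a minimal one-dimensional Amari-style field written from the standard equations rather than from CoRLEGO itself; every parameter is hand-tuned for illustration. Two Gaussian inputs stand in for two reach targets; local excitation plus global inhibition drives the field to select the slightly stronger one, and reading out the field's centre of mass over time as a momentary 'reach direction' gives a toy analogue of the leakage-driven curvature effect.

```python
# Minimal 1-D dynamic neural field (standard Amari dynamics, not the
# CoRLEGO code; every parameter here is hand-tuned for illustration).
import numpy as np

n = 101                                   # field sites ~ reach directions
x = np.arange(n)

def gauss(center, width, amp):
    return amp * np.exp(-(x - center) ** 2 / (2.0 * width ** 2))

u = np.full(n, -5.0)                      # field activation, resting level h = -5
stim = gauss(30, 4, 6.0) + gauss(70, 4, 5.5)   # two targets; left slightly stronger

offsets = np.arange(-20, 21)
kernel = 4.0 * np.exp(-offsets ** 2 / (2.0 * 3.0 ** 2))   # local excitation

f = lambda a: 1.0 / (1.0 + np.exp(-a))    # sigmoid output nonlinearity
tau, dt, g_inh = 10.0, 1.0, 1.5

for step in range(400):
    out = f(u)
    exc = np.convolve(out, kernel, mode="same")
    du = (-u - 5.0 + stim + exc - g_inh * out.sum()) / tau   # global inhibition
    u += dt * du
    if step % 100 == 0:
        # centre of mass of the output: a crude momentary 'reach direction'
        print(f"step {step:3d}: reach direction ~ {(x * out).sum() / out.sum():.1f}")

print("selected target site:", int(np.argmax(u)))   # settles on the stronger target (~30)
```

Because a downstream arm controller can be driven by this evolving field state before selection is complete, early co-activation of both targets yields curved, distractor-attracted reach trajectories in models of this family.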

Discussant: Laura Thomas, North Dakota State University, USA
     with Joo-Hyun Song, Brown University, USA, and Timothy Welsh, University of Toronto, Canada

Call for Symposia

The Call for Symposia closed on May 1, 2018.
Submission Rules and Guidelines
