2018 Leading Edge Workshop

Time for Action: Reaching for a Better Understanding of the Dynamics of Cognition

8-9 May 2018   |   Amsterdam, The Netherlands

Call for Papers
A special issue of Attention, Perception, & Psychophysics

Time for Action: Reaching for a Better Understanding of the Dynamics of Cognition


The overarching goal of this workshop is to advance the understanding of how cognition and action systems are integrated and operate synergistically. This knowledge of how humans efficiently interact and navigate in complex environments is vital for generating a comprehensive understanding of human behavior and will help shape the design of everyday objects and of training and working environments. One prime example is computer technology. Human-computer interfaces equipped with gestural and tangible technologies are becoming increasingly accessible and ubiquitous in educational, leisure, and work settings. A thorough understanding of the interactions between cognition and action is needed to help designers engineer devices and environments that maximize functionality and usability. Thus, the workshop will bring together a diverse group of scholars in the fields of psychology, neuroscience, kinesiology, and human-computer interaction to share and critically evaluate their cutting-edge theoretical, empirical, and translational developments. We have also invited research scientists from industry and a program director from a science funding agency as observers and discussants to enhance the broad impacts of the scientific progress. Bringing this diverse group of participants together will: 1) enable the dissemination of past and current research; 2) provide a framework and chart a course for future fundamental research and knowledge translation; and, perhaps most critically, 3) generate a set of methodological standards to facilitate consistency in future research.

Joo-Hyun Song
Brown University, USA

Timothy Welsh
University of Toronto, Canada

Section 1 – Theoretical Considerations of the Interactions Between Cognition and Action

Integrating Perception and Action: The Theory of Event Coding

Bernhard Hommel, University of Leiden, The Netherlands
This talk gives an introduction to the Theory of Event Coding (TEC), which claims that perception and action are not only based on shared (i.e., sensorimotor) representations but are in some sense one and the same thing. Behavioral and neurocognitive studies will be discussed to show how knowledge about possible action goals and action affordances is acquired, how this knowledge is used to select and control intentional action and to anticipate action outcomes, and how it is neurally represented. Recent extensions also consider the role of cognitive (meta) control and the representation of self and other social events.

The Evolutionary History of Integrated Cognition and Action

Paul Cisek, University of Montreal, Canada
While “cognition” and “action” are traditionally considered as theoretically distinct types of processes, a growing body of experimental data emphasizes their integration. This raises the question of whether these labels accurately demarcate true biological categories of processes within the brain or whether they are merely conceptual artifacts resulting from the tumultuous history of psychological theories. In my talk, I will address this question from the perspective of a different type of history: that of nervous system evolution along the vertebrate lineage leading to humans. I will summarize current thinking on how a sequence of neural innovations implemented the continuous expansion of the behavioral repertoire from early metazoans to modern primates. Along this route, the biologically relevant distinctions are not between serial functional modules such as perception, cognition, memory, or planning, but between parallel behavioral systems, each involving an entire closed sensorimotor loop that mediates interactions with the environment. These include general categories of activity such as forage vs. rest, explore vs. exploit, approach vs. avoid, as well as a variety of species-typical behaviors such as walking, climbing, burrowing, manipulation, vocalization, etc. I will review neurophysiological data suggesting that these distinctions provide a more natural mapping between structure and function in the brain, explaining why cognition and action should appear to be so closely integrated.

Interactions Between the Dorsal and Ventral Visual Streams in the Production of Skilled Movements

Melvyn Goodale, The University of Western Ontario, Canada
Human beings are capable of reaching out and grasping objects with great accuracy and precision – and vision plays a critical role in the control of this ability.  The visual guidance of these skilled movements, however, requires transformations of incoming visual information that are quite different from those required for visual perception.  For us to grasp an object successfully, our brain must compute the actual (absolute) size of the goal object, and its orientation and position with respect to our hand and fingers – and must ignore the relative size or distance of the object with respect to other elements in the visual array.  These differences in the required computations have led to the emergence of dedicated visuomotor modules in the dorsal visual stream that are separate from the networks in the ventral visual stream that mediate our conscious perception of the world.  But even though the dorsal stream may allow an observer to reach out and grasp objects with exquisite ease, it is trapped in the present. By itself, the dorsal stream can deal only with objects that are visible when the action is being programmed. The ventral stream, however, allows an observer to escape the present and bring to bear information from the past – including information about the function of objects, their intrinsic properties, and their location with reference to other objects in the world. Ultimately then, both streams contribute to the production of goal-directed actions.

Section 2 – Experimental Evidence for the Interactions Between Cognition and Action

From Action to Abstraction: Gesture as a Mechanism of Change

Susan Goldin-Meadow, University of Chicago, USA
When speakers talk, they gesture, particularly when they explain their solutions to a problem. These gestures are not mere hand waving. They reflect ideas that the speaker has about the problem, often ideas that are not found in that speaker’s talk. But gesture can do more than reflect ideas – it can also change them. In this sense, gesture behaves like any other action; both gesture and action on objects facilitate learning of the problems on which training was given. However, only gesture promotes transfer of the knowledge gained to problems that require generalization. Gesture is, in fact, a special kind of action in that it represents the world rather than directly manipulating it (gesture does not move objects around). The mechanisms by which gesture and action promote learning may therefore differ – gesture is able to highlight components of an action that promote abstract learning while leaving out details that could tie learning to a specific context. Because it is both an action and a representation, gesture can serve as a bridge between the two and thus be a powerful tool for learning abstract ideas.

Vision Prioritizes Action-Relevant Information in Perihand Space

Laura E. Thomas, North Dakota State University, USA
Objects within reach afford immediate interaction, creating a potential need to evaluate items that are candidates for action by integrating visual information with spatial, tactile, and proprioceptive representations. Observers show visual biases within the hands’ grasping space that suggest our ability to see the world around us is tied to adaptations that privilege effective action. I will present evidence that the visual system weights processing on the basis of an observer’s current affordances for specific grasping actions: Fast and forceful power grasps enhance temporal sensitivity, whereas detail-oriented precision grasps enhance spatial sensitivity. These visual biases rapidly shift to accommodate newly learned grasp affordances, suggesting that experience-driven plasticity tunes visual cognition to facilitate action. The visual system’s adaptive sensitivity to behavioral contexts even extends to incorporate the affordances of co-actors in the environment, leading vision to prioritize action-relevant information both when observers act alone and when engaging in joint action with a partner. These findings contradict purely modular theories of vision and suggest that a more complete understanding of visual cognition must incorporate consideration of body-based and social contexts.

Section 2 – Experimental Evidence for the Interactions Between Cognition and Action

Session 2-2: Attention and Target Selection for Action

Paradoxical Modulation of Motor Actions by Attention

Joo-Hyun Song, Brown University, USA
Vision is crucial not only for recognizing objects, but also for guiding actions. Most real-world visual scenes are complex and crowded with many different objects competing for attention and action. In order to efficiently guide motor actions, the visual system must be capable of selecting one object as the target of the current action, while suppressing the wealth of other irrelevant possibilities. It is generally accepted that more perceptually salient stimuli are able to attract attention automatically and thus are more disruptive to behavior than weakly salient distractors. Yet, counterintuitively, we recently discovered dissociable effects of salience on perception and action: while highly salient stimuli interfere strongly with perceptual processing, increased physical salience or associated value attenuates action-related interference. This result suggests the existence of salience-triggered suppression mechanisms specific to goal-directed actions. Furthermore, we observed that attentional distraction does not impair the original learning of a simple visuomotor rotational adaptation task. Paradoxically, successful recall of the visuomotor skill only occurs when a similar level of attentional distraction is present. This finding suggests that performing a distractor task acts as an internal ‘attentional context’ for the encoding and retrieval of motor memory. Therefore, without consideration of internal task contexts in real-life situations, the success of learning and rehabilitation programs may be undermined. Taken together, understanding integrated attention-action systems provides new insights into our seamless interaction with a complex external world.

Fluctuations in Focused Attention During Goal-Directed Action

Jeff Moher, Connecticut College, USA
Our ability to stay focused on the task at hand fluctuates from moment to moment. Whether we are driving, typing, or trying to read a conference abstract, our minds often wander. I will discuss recent work that explores these fluctuations in focused attention in the context of goal-directed action. There are (at least) two distinct advantages to this approach. First, reach movements can be broken down into meaningful subcomponents, providing a more fine-grained measure for subtle changes in behavior. Second, the spatial properties of hand movements allow us to distinguish instances when someone is pulled towards or moves away from a distracting non-target object – an important nuance that is often lost with more traditional keypress approaches.

I have found that the trajectory of a hand movement towards a target in a simple search task can vary widely over time, despite no changes in the task or stimulus properties. Increased deviation towards a non-target distractor on one trial appears to indicate a lack of focus that sustains into subsequent trials. These effects are not limited to motor priming, as similar patterns are observed when participants switch between goal-directed action responses and keypress responses. The results of these studies hold promise both for furthering our understanding of the dynamics of attention and for practical implementations that could detect drifts in focus during high-stakes situations (e.g., driving) and send alerts to prevent costly errors.

Choice Reaching with a LEGO Arm Robot (CoRLEGO)

Dietmar Heinke, University of Birmingham, United Kingdom
I will present a neurobiologically inspired robotics model, termed CoRLEGO (Choice Reaching with a LEGO arm robot). CoRLEGO’s architecture is based on the assumption that the process of selecting reaching targets can leak into the motor system (i.e., the leakage effect). In CoRLEGO, this leakage effect is implemented with neurobiologically plausible dynamic neural fields (DNFs): competitive target selection and topological representations of motor parameters. CoRLEGO demonstrates how the leakage effect can simulate evidence from Song and colleagues’ choice reaching studies. In their experiments, participants are asked to reach toward an item presented on the screen. Usually, the reach target is defined by its colour oddity (i.e., a green square among red squares or vice versa). These experiments show that non-target items can divert the reaching movement away from the ideal trajectory to the target item (i.e., the curvature effect) and that the curvature effect declines with repetitions of the target colour (i.e., colour priming). An extension of CoRLEGO can mimic findings that transcranial direct current stimulation (tDCS) over the motor cortex modulates the colour priming effect (Woodgate et al., 2015). This extension includes feedback connections from the motor system to the brain’s attentional system (parietal cortex) that guide visual attention to extract movement-relevant information (i.e., colour) from visual stimuli. This work adds to growing evidence that there is a close interaction between the motor system and the attention system, a view that differs from the traditional conceptualization of the motor system as the endpoint of a serial chain of processing stages.
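The dynamic-neural-field mechanism behind the leakage effect can be sketched in a few lines. The toy model below is an illustration of the general DNF idea, not CoRLEGO’s actual implementation; all inputs, kernel parameters, and amplitudes are assumptions chosen for the demo. A strong Gaussian input marks the target and a weaker one the distractor; local excitation plus broad inhibition drives competition; and a population-vector readout of the field (a stand-in for a motor parameter such as initial reach direction) initially reflects a mixture of both items before settling on the target, analogous to the curvature effect.

```python
import numpy as np

def simulate_dnf(steps=300, n=101, tau=10.0, h=-2.0):
    """Toy 1-D dynamic neural field: tau * du/dt = -u + h + input + lateral.
    A strong input (target) and a weaker one (distractor) compete via
    local excitation and broad, global-like inhibition."""
    x = np.linspace(-1.0, 1.0, n)
    inp = (6.0 * np.exp(-(x - 0.5) ** 2 / 0.02)     # target at x = +0.5
           + 4.0 * np.exp(-(x + 0.5) ** 2 / 0.02))  # distractor at x = -0.5
    dx = x[:, None] - x[None, :]
    # Mexican-hat interaction kernel, scaled by the grid spacing
    w = (5.0 * np.exp(-dx ** 2 / 0.02)
         - 2.0 * np.exp(-dx ** 2 / 0.5)) * (x[1] - x[0])
    u = np.zeros(n)
    readouts = []
    for _ in range(steps):
        f = 1.0 / (1.0 + np.exp(-u))                # sigmoidal firing rate
        u += (-u + h + inp + w @ f) / tau           # Euler step of the field
        readouts.append(np.sum(x * f) / np.sum(f))  # population-vector readout
    return x, u, readouts

x, u, readouts = simulate_dnf()
peak = x[np.argmax(u)]  # the field's final peak sits on the stronger input
print(peak, readouts[-1])
```

Because the distractor stays partially active while the competition unfolds, the readout lies between the two items early on (the leakage into motor parameters) and drifts toward the target as selection resolves; weakening the distractor input across repetitions of the target colour would be one way to mimic the reduced curvature seen with colour priming.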

Section 2 – Experimental Evidence for the Interactions Between Cognition and Action

Session 2-3: Action and Object Processing

Two Neurocognitive Routes for Tool Use and Transport in the Left Hemisphere

Laurel J. Buxbaum, Moss Rehabilitation Research Institute, USA

I Don’t Have Time to Make Up My Mind: Using Actions to Understand Ongoing Decision Making Processes

Heather Neyedli, Dalhousie University, Canada
We need to rapidly select and plan actions toward objects or stimuli that will achieve our goals while avoiding negative outcomes. To select and plan the best action, we must be aware of the value associated with the potential outcomes of our actions and the probability that these outcomes will occur. If this information is processed more effortlessly or subconsciously, we will be able to pre-plan potential actions, allowing us to react, execute, and adapt the movement more quickly. In my talk, I will discuss findings showing that while choice may seem like a conscious process, unconscious processes influence our choices and the actions associated with those choices. By exploring a number of recent findings, I will show that information from movement trajectories and movement endpoints can reveal biases in attention and decision making. Furthermore, the probabilities associated with our actions are better integrated into our choices and action plans than the values associated with our actions. The tight link between perception, decision making, and action planning processes may lead to faster decisions and more rapidly adaptable actions, but only in certain contexts.

Section 3 – Knowledge Translation

The Neurotherapeutic Potential of Animation

Omar Ahmed, The Johns Hopkins School of Medicine, USA
Researchers at Johns Hopkins showed, in a study published in the journal Science, that when an infant observes a scenario with surprising physics, it helps them learn. That is, exploring new and unfamiliar dynamics enhances learning. At Kata, we create experiences that require movements to control the physics of an animal completely different from a human. A player encounters new and unexpected physics by learning how to move creatures built using a physics-animation simulation technology. These virtual creatures were born from thousands of hours of study of actual animals, integrated with physical simulations containing bones, muscles, and control systems that users can manipulate with their own input. The result is a deeply visceral experience, the only one of its kind in the world, designed to address cognition through new, exploratory movement and its associated physics.

Leveraging Knowledge of Integrated Sensorimotor-Cognition Systems to Enhance Performance and Learning During Human-Computer Interactions

Timothy Welsh, University of Toronto, Canada
Efficient and user-friendly human-computer interactions (HCI) are largely determined by the design of the virtual workspaces (e.g., the size and location of the stimuli on the screen) and the input technologies that convert the user’s thoughts and actions into commands and functions. For this reason, HCI designers attempt to utilize knowledge generated by behavioural and neural scientists when developing new technologies and systems. Over the last decade, my lab has been collaborating with researchers in computer sciences and digital media on a series of projects that leverage research and theories of integrated sensorimotor-cognition systems for the engineering of embodied and tangible HCI. We have shown that embodied HCI systems that transfer the reaching and kicking movements of the user to the virtual character can be more efficient than systems that employ conventional keyboard and mouse devices. Importantly, this work has also revealed that playing “smart games” using embodied interfaces facilitates improvements in spatial skills, such as mental rotation and perspective taking, whereas playing these same games with keyboard and mouse interfaces does not. More broadly speaking, this work highlights the large scope of important opportunities for translating the knowledge gained through fundamental behavioural and neuroscience research to improve HCI systems and, potentially, educational practices.

Max Di Luca, Oculus VR

Section 4 – Future of Research in Interactions Between Cognition and Action

Session 4-1 Methodological Considerations and Standards

Actions as Social Signals: Methods and a Framework for Studies of Human Social Interaction

Antonia Hamilton, University College London, United Kingdom

Social interactions in real life are spontaneous, fluid, and rarely repeated in exactly the same way again. How, then, can we pin down these ephemeral behaviours in the lab and achieve both ecological validity and experimental control? And what kind of theoretical framework do we need to guide our experiments? To answer these questions, I will describe a series of studies which record movement kinematics and manipulate the knowledge of being watched. Obtaining precise measures of kinematics and finding appropriate ways to analyse and characterise kinematics is critical for advancing our understanding of the details of human social behaviour. But to place this in a broader framework, it is important to think about what a social behaviour is for – in particular, is an action performed only for one person, or is it intended as a signal which sends a message to another? Understanding actions as social signals allows us to develop and test hypotheses of how actions function in social interactions. A key prediction of the signalling hypothesis is that a signal is sent only when it can be received, that is, when someone is watching. Thus, experimental manipulation of the ‘knowledge of being watched’ is a critical test of whether a particular action or a particular kinematic pattern has a social-signalling function. Our data provide evidence that imitation is used as a social signal, and showcase how high-precision methods and strong theories are both needed to advance studies of human social interaction.

Models, Movements, and Minds: New Tools to Unite Decision Making and Action

Craig Chapman, University of Alberta, Canada
As neuroscience and psychology embrace the study of the brain as a dynamic and fluid decision-making machine, categorical labels like “Perception”, “Decision”, and “Action” are beginning to crumble. Here, I will argue that the tools we use to study human thinking through behaviour must also keep pace with our new worldview. I will first present analysis techniques that are able to capture the fluidity of perception, decision, and action, such as comparing movement trajectories (e.g., functional ANOVA) and measuring how different factors affect movements over different time scales (e.g., functional regression, as in Scherbaum et al., Cognition, 2010). But, just as isolating “Action” as a modular neural component was a mistake, so too is measuring only a single aspect of the sensorimotor stream. Thus, I will also discuss the development of a new tool: the Gaze and Movement Assessment (GaMA) suite, which provides a platform for analyzing combined and simultaneously recorded eye- and motion-tracking data, with an imminent extension to also include electroencephalography. Finally, as we empower our research with new tools, we must also retain a fundamental tie to the theories which guide our hypotheses. As such, I will conclude with a description of some new alternatives to decision models which extend more conventional evidence accumulation processes to explain our new conceptualization of decision-making as a flexible and unified process encompassing sensation, movement, and everything in between.
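To make the idea of time-resolved trajectory analysis concrete, the sketch below time-normalizes reach trajectories of different durations and then regresses lateral hand position on a per-trial predictor at each normalized time point, yielding one coefficient per time point. This is a deliberately simplified stand-in for functional regression, not the GaMA implementation; the data are synthetic, and every function name and parameter here is an assumption for illustration only.

```python
import numpy as np

def time_normalize(traj, n_points=101):
    """Resample a 2-D trajectory (T x 2 array of x, y samples) to a fixed
    number of points so trials of different durations can be compared."""
    t_old = np.linspace(0.0, 1.0, len(traj))
    t_new = np.linspace(0.0, 1.0, n_points)
    return np.column_stack([np.interp(t_new, t_old, traj[:, d])
                            for d in range(traj.shape[1])])

def pointwise_regression(trajs, predictor):
    """Regress lateral (x) position on a per-trial predictor at each
    normalized time point, returning one slope per time point."""
    X = np.column_stack([np.ones(len(predictor)), predictor])
    Y = np.stack([time_normalize(t)[:, 0] for t in trajs])  # trials x time
    coefs, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return coefs[1]  # slope of the predictor over normalized time

# Synthetic demo: distractor-present trials deviate laterally mid-movement
rng = np.random.default_rng(0)
trials, distractor = [], []
for i in range(40):
    T = rng.integers(80, 120)        # variable movement duration
    t = np.linspace(0.0, 1.0, T)
    present = i % 2                  # alternate distractor-present trials
    x_pos = present * 0.3 * np.sin(np.pi * t) + rng.normal(0, 0.02, T)
    y_pos = t + rng.normal(0, 0.02, T)
    trials.append(np.column_stack([x_pos, y_pos]))
    distractor.append(present)

beta = pointwise_regression(trials, np.array(distractor))
print(beta.max(), beta.argmax())  # the distractor effect peaks mid-movement
```

The payoff of this style of analysis is exactly the point made above: instead of one number per trial (a reaction time or an endpoint), the regression recovers *when* during the movement a factor exerts its influence, here rising from zero at movement onset to a maximum near the midpoint.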

Section 4 – Future of Research in Interactions Between Cognition and Action

Session 4-2: Revisiting Motor Control as Cinderella: Past & Future of Cross-Disciplinary Fields

The Time for Action Is at Hand

David Rosenbaum, University of California, Riverside, USA
Watch a toddler use a broom or a robot fold laundry and the skill required for everyday tasks becomes obvious. Yet the ease with which neuro-typical adults carry out such tasks is generally taken for granted. Such activities are often viewed as cognitively unsophisticated, perhaps because academic training is not required to achieve them, and people who perform such tasks for a living occupy the lowest rung of society. The bias against the view that physical action is cognitively sophisticated is so deeply rooted that psychology, the science of mental life and behavior, has paid scant attention to the means by which mental life is translated into physical actions. Even within this workshop, whose speakers include the most enlightened researchers in the world, motor control is more often viewed as a window into perception and cognition than as a topic of interest in its own right. In a 2005 American Psychologist article called “The Cinderella of Psychology,” I suggested that the relegation of motor control to the sidelines of psychology has a number of historical causes. I will briefly review those causes in my talk and comment on signs that things are improving. Echoing the view of others who emphasize that action cannot be divorced from cognition, I will review work from my own lab and others that probes the processes underlying physical action selection and control. These lines of work indicate that the time for integration of action research with research on cognition, perception, and emotion is at hand.

General Discussion:
Future of the Fields: A View from the Bridge

Betty Tuller, National Science Foundation, USA
An understanding of the dynamics of action/cognition will be vital for the creation of human-computer collaborative systems in which human and machine agents learn from each other, both quickly and over the lifespan. Human acceptance of these collaborations will likely require an understanding of how the “softer” aspects of cognition (trust, emotion, reciprocity, intention, negotiation, etc.) are embodied in the dynamics of human actions and instantiated in human-machine collaboration. In this talk, I will discuss the opportunities this affords for our sciences and call for a stronger emphasis on use-inspired basic research.

