Time for Action: Reaching for a Better Understanding of the Dynamics of Cognition
Hotel: DoubleTree by Hilton Amsterdam Centraal Station
Meeting Room: London 1+2
Please note that all times listed are local Amsterdam time (CET = GMT+1).
May 8, 2018
Section 1 – Theoretical considerations of the interactions between cognition and action
Integrating perception and action: the Theory of Event Coding
Bernhard Hommel, Institute for Psychological Research & Leiden Institute for Brain and Cognition, The Netherlands
Abstract: This talk introduces the Theory of Event Coding (TEC), which claims that perception and action are not only based on shared (i.e., sensorimotor) representations but are in some sense one and the same thing. Behavioral and neurocognitive studies will be discussed to show how knowledge about possible action goals and action affordances is acquired, how this knowledge is used to select and control intentional action and to anticipate action outcomes, and how it is neurally represented. Recent extensions also consider the role of cognitive (meta-)control and the representation of self and other social events.
The evolutionary history of integrated cognition and action
Paul Cisek, Department of Neuroscience, University of Montreal, Canada
Abstract: While “cognition” and “action” are traditionally considered as theoretically distinct types of processes, a growing body of experimental data emphasizes their integration. This raises the question of whether these labels accurately demarcate true biological categories of processes within the brain or whether they are merely conceptual artifacts resulting from the tumultuous history of psychological theories. In my talk, I will address this question from the perspective of a different type of history: that of nervous system evolution along the vertebrate lineage leading to humans. I will summarize current thinking on how a sequence of neural innovations implemented the continuous expansion of the behavioral repertoire from early metazoans to modern primates. Along this route, the biologically relevant distinctions are not between serial functional modules such as perception, cognition, memory, or planning, but between parallel behavioral systems, each involving an entire closed sensorimotor loop that mediates interactions with the environment. These include general categories of activity such as forage vs. rest, explore vs. exploit, approach vs. avoid, as well as a variety of species-typical behaviors such as walking, climbing, burrowing, manipulation, vocalization, etc. I will review neurophysiological data suggesting that these distinctions provide a more natural mapping between structure and function in the brain, explaining why cognition and action should appear to be so closely integrated.
11:30 am – 12:30 pm
Interactions between the dorsal and ventral visual streams in the production of skilled movements
Mel Goodale, The Brain and Mind Institute, The University of Western Ontario, London, Canada
Abstract: Human beings are capable of reaching out and grasping objects with great accuracy and precision – and vision plays a critical role in the control of this ability. The visual guidance of these skilled movements, however, requires transformations of incoming visual information that are quite different from those required for visual perception. For us to grasp an object successfully, our brain must compute the actual (absolute) size of the goal object, and its orientation and position with respect to our hand and fingers – and must ignore the relative size or distance of the object with respect to other elements in the visual array. These differences in the required computations have led to the emergence of dedicated visuomotor modules in the dorsal visual stream that are separate from the networks in the ventral visual stream that mediate our conscious perception of the world. But even though the dorsal stream may allow an observer to reach out and grasp objects with exquisite ease, it is trapped in the present. By itself, the dorsal stream can deal only with objects that are visible when the action is being programmed. The ventral stream, however, allows an observer to escape the present and bring to bear information from the past – including information about the function of objects, their intrinsic properties, and their location with reference to other objects in the world. Ultimately then, both streams contribute to the production of goal-directed actions.
Section 2 – Experimental evidence for the interactions between cognition and action
Session 2-1: Dynamics of action in social interactions
From Action to Abstraction: Gesture as a Mechanism of Change
Susan Goldin-Meadow, University of Chicago, USA
Abstract: When speakers talk, they gesture, particularly when they explain their solutions to a problem. These gestures are not mere hand waving. They reflect ideas that the speaker has about the problem, often ideas that are not found in that speaker’s talk. But gesture can do more than reflect ideas: it can also change them. In this sense, gesture behaves like any other action; both gesture and action on objects facilitate learning of the problems on which training was given. However, only gesture promotes transfer of the knowledge gained to problems that require generalization. Gesture is, in fact, a special kind of action in that it represents the world rather than directly manipulating it (gesture does not move objects around). The mechanisms by which gesture and action promote learning may therefore differ: gesture is able to highlight components of an action that promote abstract learning while leaving out details that could tie learning to a specific context. Because it is both an action and a representation, gesture can serve as a bridge between the two and thus be a powerful tool for learning abstract ideas.
Vision prioritizes action-relevant information in perihand space
Laura E. Thomas, Center for Visual and Cognitive Neuroscience, North Dakota State University, USA
Abstract: Objects within reach afford immediate interaction, creating a potential need to evaluate items that are candidates for action by integrating visual information with spatial, tactile, and proprioceptive representations. Observers show visual biases within the hands’ grasping space that suggest our ability to see the world around us is tied to adaptations that privilege effective action. I will present evidence that the visual system weights processing on the basis of an observer’s current affordances for specific grasping actions: Fast and forceful power grasps enhance temporal sensitivity, whereas detail-oriented precision grasps enhance spatial sensitivity. These visual biases rapidly shift to accommodate newly learned grasp affordances, suggesting that experience-driven plasticity tunes visual cognition to facilitate action. The visual system’s adaptive sensitivity to behavioral contexts even extends to incorporate the affordances of co-actors in the environment, leading vision to prioritize action-relevant information both when observers act alone and when they engage in joint action with a partner. These findings contradict purely modular theories of vision and suggest that a more complete understanding of visual cognition must incorporate consideration of body-based and social contexts.
Session 2-2: Attention and target selection for action
Paradoxical modulation of motor actions by attention
Joo-Hyun Song, Cognitive, Linguistic & Psychological Sciences, Brown University, USA
Abstract: Vision is crucial not only for recognizing objects, but also for guiding actions. Most real-world visual scenes are complex and crowded, with many different objects competing for attention and action. In order to efficiently guide motor actions, the visual system must be capable of selecting one object as the target of the current action while suppressing the wealth of other, irrelevant possibilities. It is generally accepted that more perceptually salient stimuli attract attention automatically and are thus more disruptive to behavior than weakly salient distractors. Yet, counterintuitively, we recently discovered dissociable effects of salience on perception and action: while highly salient stimuli interfere strongly with perceptual processing, increased physical salience or associated value attenuates action-related interference. This result suggests the existence of salience-triggered suppression mechanisms specific to goal-directed actions. Furthermore, we observed that attentional distraction does not impair the original learning of a simple visuomotor rotational adaptation task. Paradoxically, successful recall of the visuomotor skill only occurs when a similar level of attentional distraction is present. This finding suggests that performing a distractor task acts as an internal ‘attentional context’ for the encoding and retrieval of motor memory. Therefore, without consideration of internal task contexts in real-life situations, the success of learning and rehabilitation programs may be undermined. Taken together, understanding integrated attention-action systems provides new insights into our seamless interaction with a complex external world.
Fluctuations in focused attention during goal-directed action
Jeff Moher, Connecticut College, USA
Abstract: Our ability to stay focused on the task at hand fluctuates from moment to moment. Whether we are driving, typing, or trying to read a conference abstract, our minds often wander. I will discuss recent work that explores these fluctuations in focused attention in the context of goal-directed action. There are (at least) two distinct advantages to this approach. First, reach movements can be broken down into meaningful subcomponents, providing a more fine-grained measure of subtle changes in behavior. Second, the spatial properties of hand movements allow us to distinguish instances when someone is pulled towards a distracting non-target object from instances when they move away from it, an important nuance that is often lost with more traditional keypress approaches.
I have found that the trajectory of a hand movement towards a target in a simple search task can vary widely over time, despite no changes in the task or stimulus properties. Increased deviation towards a non-target distractor on one trial appears to indicate a lack of focus that sustains into subsequent trials. These effects are not limited to motor priming, as similar patterns are observed when participants switch between goal-directed action responses and keypress responses. The results of these studies hold promise both for furthering our understanding of the dynamics of attention and for practical implementations that could detect drifts in focus during high-stakes situations (e.g., driving) and send alerts to prevent costly errors.
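The deviation measure described above is commonly quantified as a reach trajectory's maximum signed perpendicular distance from the straight line joining its start and end points. The sketch below is illustrative only, with made-up sample data, and is not the analysis code used in these studies:

```python
import numpy as np

def max_deviation(trajectory):
    """Maximum signed perpendicular deviation of a 2-D reach
    trajectory from the straight line joining its start and end points."""
    traj = np.asarray(trajectory, dtype=float)
    start, end = traj[0], traj[-1]
    line = end - start
    rel = traj - start
    # 2-D cross product: signed distance of each sample from the line
    # (positive = left of the movement direction)
    signed = (line[0] * rel[:, 1] - line[1] * rel[:, 0]) / np.linalg.norm(line)
    return signed[np.argmax(np.abs(signed))]

# A reach straight "up" the screen that bulges 2 units leftward mid-flight
curved_reach = [(0.0, 0.0), (-1.0, 2.5), (-2.0, 5.0), (-1.0, 7.5), (0.0, 10.0)]
deviation = max_deviation(curved_reach)  # 2.0; positive sign = leftward pull
```

Keeping the sign makes it possible to tell a pull towards a distractor on one side of the display apart from a correction away from it, which is the nuance keypress measures lose.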
Choice reaching with a LEGO arm robot (CoRLEGO)
Dietmar Heinke, University of Birmingham, United Kingdom
Abstract: I will present a neurobiologically inspired robotics model, termed CoRLEGO (Choice reaching with a LEGO arm robot). CoRLEGO’s architecture is based on the assumption that the process of selecting reaching targets can leak into the motor system (i.e., the leakage effect). In CoRLEGO this leakage effect was implemented through neurobiologically plausible mechanisms: dynamic neural fields (DNFs), competitive target selection, and topological representations of motor parameters.
CoRLEGO demonstrates how the leakage effect can reproduce evidence from Song and colleagues’ choice reaching studies. In their experiments, participants are asked to reach for an item presented on a screen. Usually, the reach target is defined by its colour oddity (e.g., a green square among red squares or vice versa). These experiments show that non-target items can divert the reaching movement away from the ideal trajectory to the target item (i.e., the curvature effect) and that the curvature effect declines with repetitions of the target colour (i.e., colour priming).
An extension of CoRLEGO can mimic findings that transcranial direct current stimulation (tDCS) over the motor cortex modulates the colour priming effect (Woodgate et al., 2015). The extension includes feedback connections from the motor system to the brain’s attentional system (parietal cortex) that guide visual attention to extract movement-relevant information (i.e., colour) from visual stimuli. This work adds to growing evidence that there is a close interaction between the motor system and the attention system. This view is different from the traditional conceptualization of the motor system as the endpoint of a serial chain of processing stages.
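The competitive target selection that DNFs provide can be sketched in a few lines. The following is a generic Amari-style field with illustrative parameters, not the actual CoRLEGO implementation: local excitation plus global inhibition lets two candidate reach targets compete until one wins:

```python
import numpy as np

def interaction_kernel(n, exc_amp=1.5, exc_sigma=3.0, inh_amp=0.5):
    # Local excitation (narrow Gaussian) minus global inhibition:
    # nearby units support each other, all units compete with each other.
    x = np.arange(n)
    dist = np.abs(x[:, None] - x[None, :])
    return exc_amp * np.exp(-dist**2 / (2 * exc_sigma**2)) - inh_amp

def simulate_dnf(stimulus, steps=300, dt=1.0, tau=10.0, h=-2.0):
    # Amari-style field dynamics: tau * du/dt = -u + h + stimulus + W @ f(u)
    n = len(stimulus)
    W = interaction_kernel(n)
    u = np.full(n, h)                       # field starts at resting level h
    for _ in range(steps):
        f = 1.0 / (1.0 + np.exp(-4.0 * u))  # sigmoid firing rate
        u += (dt / tau) * (-u + h + stimulus + W @ f)
    return u

# Two candidate reach targets; the more salient one (at x = 70) should win
x = np.arange(100)
stimulus = 3.0 * np.exp(-(x - 70)**2 / 20) + 2.0 * np.exp(-(x - 30)**2 / 20)
u = simulate_dnf(stimulus)
```

With these illustrative parameters the field settles on a dominant activation peak at the more salient input; partial activation left over at the weaker location is the kind of residue that, in CoRLEGO, can leak into the motor parameters and curve the reach.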
May 9, 2018
Section 2 – Experimental evidence for the interactions between cognition and action (Continued)
Session 2-3: Action and object processing
Two neurocognitive routes for tool use and transport in the left hemisphere
Laurel Buxbaum, Moss Rehabilitation Research Institute, Elkins Park, Pennsylvania, USA
Abstract: The left hemisphere in humans is exquisitely attuned to manipulative manufactured objects (tools), and left hemisphere lesions give rise to a fascinating group of disorders called the limb apraxias, characterized by deficits in imitating, learning, recognizing, and/or producing tool actions. Historical studies of praxis are marred by confusing and contradictory definitions and approaches. We take as our point of departure the influential “two visual streams” model of perception and action, and flesh out the mechanisms of an additional subdivision, the ventro-dorsal stream, which plays a role in the integration and task-relevant selection of online (“structure based”) and stored (“function based” or “semantic”) action information. We focus in particular on how competition between candidate actions computed within and across the two systems may give rise to errors of action in healthy and, especially, in brain-lesioned individuals. Our theoretical framework bears on current two-route models of left hemisphere functioning in the language domain, unifies a number of disparate findings in the functional neuroimaging and patient-based action literature, and lays the groundwork for the treatment of two major subtypes of apraxia.
I don’t have time to make up my mind: Using actions to understand ongoing decision making processes
Heather Neyedli, School of Health and Human Performance, Dalhousie University, Canada
Abstract: We need to rapidly select and plan actions to objects or stimuli that will achieve our goals while avoiding negative outcomes. To select and plan the best action we must be aware of the value associated with the potential outcomes of our actions and the probability that these outcomes will occur. If this information can be processed effortlessly or subconsciously, we can pre-plan potential actions, allowing us to react, execute, and adapt the movement more quickly. In my talk, I will discuss findings showing that while choice may seem like a conscious process, unconscious processes influence our choices and the actions associated with those choices. By exploring a number of recent findings, I will show that information from movement trajectories and movement endpoints can reveal biases in attention and decision making. Furthermore, the probabilities associated with our actions are better integrated into our choices and action plans than the values associated with our actions. The tight link between perception, decision making, and action planning processes may lead to faster decisions and more rapidly adaptable actions, but only in certain contexts.
Section 3 – Knowledge translation
10:30 am – 12:30 pm
The neurotherapeutic potential of animation
Omar Ahmad, The Kata Project, The Johns Hopkins School of Medicine, USA
Abstract: Scientists now know that exploring new and diverse physics helps learning. Researchers at Johns Hopkins showed in a study published in the journal Science that when infants observe a scenario with surprising physics, it helps them learn. That is, exploring new and unfamiliar dynamics enhances learning.
At Kata, we create experiences that require movements to control the physics of an animal completely different from a human. A player encounters new and unexpected physics by learning how to move creatures built using a physics-animation simulation technology. These virtual creatures were born from thousands of hours of studying actual animals and then integrating that knowledge with physical simulations containing bones, muscles, and control systems that users can manipulate with their own input. The result is a deeply visceral experience, the only one of its kind in the world, designed to address cognition through new, exploratory movement and its associated physics.
Leveraging knowledge of integrated sensorimotor-cognition systems to enhance performance and learning during human-computer interactions
Tim Welsh, Centre for Motor Control, Faculty of Kinesiology & Physical Education, University of Toronto, Canada
Abstract: Efficient and user-friendly human-computer interactions (HCI) are largely determined by the design of the virtual workspaces (e.g., the size and location of the stimuli on the screen) and the input technologies that convert the user’s thoughts and actions into commands and functions. For this reason, HCI designers attempt to utilize knowledge generated by behavioural and neural scientists when developing new technologies and systems. Over the last decade, my lab has been collaborating with researchers in computer sciences and digital media on a series of projects that leverage research and theories of integrated sensorimotor-cognition systems for the engineering of embodied and tangible HCI. We have shown that embodied HCI systems that transfer the reaching and kicking movements of the user to the virtual character can be more efficient than systems that employ conventional keyboard and mouse devices. Importantly, this work has also revealed that playing “smart games” using embodied interfaces facilitates improvements in spatial skills, such as mental rotation and perspective taking, whereas playing these same games with keyboard and mouse interfaces does not. More broadly speaking, this work highlights the large scope of important opportunities for translating the knowledge gained through fundamental behavioural and neuroscience research to improve HCI systems and, potentially, educational practices.
Discussants: Max Di Luca, Research Scientist, Oculus VR, and Robert Rauschenberger, Principal Scientist, Exponent, Inc.
Section 4 – Future of research in interactions between cognition and action
Session 4-1: Methodological considerations and standards
Actions as social signals: Methods and a framework for studies of human social interaction
Antonia Hamilton, Institute of Cognitive Neuroscience, University College London, United Kingdom
Abstract: Social interactions in real life are spontaneous, fluid and rarely repeated in exactly the same way again. How, then, can we pin down these ephemeral behaviours in the lab and achieve both ecological validity and experimental control? And what kind of a theoretical framework do we need to guide our experiments? To answer these questions, I will describe a series of studies which record movement kinematics and manipulate the knowledge of being watched. Obtaining precise measures of kinematics and finding appropriate ways to analyse and characterise kinematics is critical for advancing our understanding of the details of human social behaviour. But to place this in a broader framework, it is important to think about what a social behaviour is for: in particular, is an action performed only for the actor, or is it intended as a signal which sends a message to another person? Understanding actions as social signals allows us to develop and test hypotheses about how actions function in social interactions. A key prediction of the signalling hypothesis is that a signal is sent only when it can be received, that is, when someone is watching. Thus, experimental manipulation of the ‘knowledge of being watched’ is a critical test of whether a particular action or a particular kinematic pattern has a social-signalling function. Our data provide evidence that imitation is used as a social signal, and showcase how high-precision methods and strong theories are both needed to advance studies of human social interaction.
Models, movements, and minds: New tools to unite decision making and action
Craig Chapman, Faculty of Kinesiology, Sport, and Recreation, University of Alberta, Canada
Abstract: As neuroscience and psychology embrace the study of the brain as a dynamic and fluid decision-making machine, categorical labels like “Perception”, “Decision” and “Action” are beginning to crumble. Here, I will argue that the tools we use to study human thinking through behaviour must also keep pace with our new worldview. I will first present analysis techniques that are able to capture the fluidity of perception, decision, and action, such as comparing movement trajectories (e.g., functional ANOVA) and measuring how different factors affect movements over different time scales (e.g., functional regression, as in Scherbaum et al., Cognition, 2010). But, just as isolating “Action” as a modular neural component was a mistake, so too is measuring only a single aspect of the sensorimotor stream. Thus, I will also discuss the development of a new tool: the Gaze and Movement Assessment (GaMA) suite, which provides a platform for analyzing combined and simultaneously recorded eye- and motion-tracking data, with an imminent extension to also include electroencephalography. Finally, as we empower our research with new tools, we must also retain a fundamental tie to the theories which guide our hypotheses. As such, I will conclude with a description of new decision models that extend more conventional evidence-accumulation processes to explain our new conceptualization of decision making as a flexible and unified process encompassing sensation, movement, and everything in between.
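The "more conventional evidence accumulation processes" that these newer models extend can be illustrated with a minimal two-choice drift-diffusion simulation. All parameters below are illustrative, and this is the textbook model rather than any specific extension from the talk:

```python
import numpy as np

def drift_diffusion_trial(drift, threshold=2.0, noise_sd=1.0, dt=0.01, rng=None):
    # One two-choice trial: noisy evidence accumulates from zero until it
    # crosses +threshold (choice 1, "correct") or -threshold (choice 0).
    if rng is None:
        rng = np.random.default_rng()
    evidence, t = 0.0, 0.0
    while abs(evidence) < threshold:
        evidence += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if evidence > 0 else 0), t

rng = np.random.default_rng(0)
# Stronger sensory evidence (larger drift) should yield faster and more
# accurate decisions than weak evidence.
easy = [drift_diffusion_trial(0.8, rng=rng) for _ in range(500)]
hard = [drift_diffusion_trial(0.2, rng=rng) for _ in range(500)]
easy_acc = np.mean([choice for choice, _ in easy])
hard_acc = np.mean([choice for choice, _ in hard])
easy_rt = np.mean([rt for _, rt in easy])
hard_rt = np.mean([rt for _, rt in hard])
```

In the unified view the abstract argues for, the momentary evidence variable need not wait for the bound: in principle it can be read out continuously into an unfolding reach, which is what links such models to trajectory measures like functional regression.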
Session 4-2: Revisiting Motor Control as Cinderella – Past & future of cross-disciplinary fields
The Time For Action is At Hand
David Rosenbaum, University of California, Riverside, USA
Abstract: Watch a toddler use a broom or a robot fold laundry and the skill involved in everyday tasks becomes obvious. Yet the ease with which neuro-typical adults carry out such tasks is generally taken for granted. Such activities are often viewed as cognitively unsophisticated, perhaps because academic training is not required to achieve them, and people who perform such tasks for a living occupy the lowest rung of society. The bias against the view that physical action is cognitively sophisticated is so deeply rooted that psychology, the science of mental life and behavior, has paid scant attention to the means by which mental life is translated into physical actions. Even within this workshop, whose speakers include the most enlightened researchers in the world, motor control is more often viewed as a window into perception and cognition than as a topic of interest in its own right. In a 2005 American Psychologist article called “The Cinderella of Psychology,” I suggested that the relegation of motor control to the sidelines of psychology has a number of historical causes. I will briefly review those causes in my talk and comment on signs that things are improving. Echoing the view of others who emphasize that action cannot be divorced from cognition, I will review work from my own lab and others that probes the processes underlying physical action selection and control. These lines of work indicate that the time for integration of action research with research on cognition, perception, and emotion is at hand.
General discussion: Future of the fields: A view from the bridge
Betty Tuller, Program Director, National Science Foundation, USA
Abstract: An understanding of the dynamics of action/cognition will be vital for the creation of human-computer collaborative systems in which human and machine agents learn from each other, both quickly and over the lifespan. Human acceptance of these collaborations will likely require an understanding of how the “softer” aspects of cognition (trust, emotion, reciprocity, intention, negotiation, etc.) are embodied in the dynamics of human actions and instantiated in the human-machine collaboration. In this talk, I will discuss the opportunities this affords for our sciences and call for a stronger emphasis on use-inspired basic research.