When opposites slow you down but don't collide: Negligible dual-task costs with stimulus incompatibility
Doing two things at once is hard. But why? Answering this question can give us key insights into how the human mind works.
Everyday life in the 21st century is rife with attempts to multi-task (e.g., using a mobile device while doing just about anything else). Cognitive psychologists study multi-tasking in the laboratory by having participants do simple tasks separately (single-task) and together (dual-task). In each single task a participant perceives a stimulus and must produce a response. For example, you might hear a voice speak the letter A or B and then you’d have to say “yes” or “no” to answer the question “is it an A?”, or you might see a square or a circle and then you’d have to press the 1 key or the 2 key to indicate whether you saw a square or circle, respectively. (We are not particularly concerned with the mapping of stimuli to responses here; suffice it to say that there are two simple tasks that involve the choice between stimuli.)
Researchers have found that people are almost always slower when performing two such tasks at once. However, there are a few conditions under which such dual-task costs are minimal, perhaps even absent.
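To make the notion of a "dual-task cost" concrete, here is a minimal sketch of how such a cost is typically computed: the difference in mean reaction time between performing a task together with another task and performing it alone. The reaction times below are made up for illustration; this is not the authors' data or analysis.

```python
import statistics

# Hypothetical reaction times (milliseconds) for one task,
# measured in two conditions. Numbers are invented for illustration.
single_task_rts = [520, 540, 510, 530]  # task performed alone
dual_task_rts = [525, 545, 515, 535]    # same task performed alongside a second task

# Dual-task cost: extra time needed when the tasks are combined.
cost = statistics.mean(dual_task_rts) - statistics.mean(single_task_rts)
print(f"Dual-task cost: {cost:.1f} ms")  # a near-zero value means negligible interference
```

With these invented numbers the cost comes out to a few milliseconds; a cost near zero is what is meant by "negligible" in the findings discussed below.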
Understanding why this happens can give us hints about how the mind processes and responds to incoming information, and a recent article by Halvorson and Hazeltine in the Psychonomic Society’s journal Psychonomic Bulletin & Review sought to do just that.
The study investigated a situation in which dual-task costs are negligible. In the auditory-verbal task, when you hear the word “cat” you say the word “cat”, and when you hear “dog” you say “dog”. In the visual-manual task, when you see a picture of a hand pressing a key with its index finger you press a key with your index finger, and when you see a picture of a hand pressing a key with its middle finger you press a key with your middle finger. Prior studies have shown that people tend to be just as fast at doing both these tasks simultaneously as they are at doing the two tasks separately.
One explanation, consistent with the framework of embodied cognition, is that these tasks have “ideomotor compatibility”: the stimulus directly conveys the appropriate response, which is activated automatically without requiring a limited response-selection stage of processing that would otherwise be needed by both tasks at once. Put more plainly: when you say what you hear and do what you see, you don’t have to think much about either.
To test the ideomotor compatibility explanation, Halvorson and Hazeltine included a key condition in their study: for some participants, the stimulus-response mappings were reversed so that their task was to do the opposite of what they heard/saw. That is, when you hear “cat” you’re supposed to say “dog” and when you see a hand using its index finger you’re supposed to use your middle finger. If the ideomotor compatibility explanation is correct, then we should find dual-task costs in this “opposite” condition, because responses would no longer be selected automatically and the two tasks would again be drawing on common limited cognitive resources.
What were the results? Not unexpectedly, people were slower performing the opposite tasks than the ideomotor-compatible version: Saying “dog” in response to “cat” took longer than saying “dog” when you just heard “dog.”
Intriguingly, dual-task costs were nonetheless equally negligible across conditions. That is, even though each task was slowed, people seemed quite capable of doing both tasks at once even when instructed to say or do the opposite of what they heard or saw.
The authors thus suggest an alternative explanation for the low dual-task costs: regardless of the stimulus-response mappings (same vs. opposite), one task occurs in a verbal modality (hear/speak) and the other task occurs in a spatial modality (see/press), and prior research has suggested that we may simply have separate cognitive resources for these modalities. Indeed, there is evidence from memory research that interference within each of those modalities is considerably greater than interference between modalities. What is of note here is the apparent absence of any dual-task interference between modalities.
Over email, Halvorson is quick to clarify: “To be clear: we are not suggesting that people should talk on the phone and drive! We do not believe it is the case that driving is a purely spatial task or that talking to a disembodied voice has no spatial component…”
A mystery still remains: plenty of other studies have used tasks with verbal versus spatial modality and have still found dual-task performance costs. So what is it about the tasks in this study that yields essentially no dual-task costs, if it’s not the ideomotor compatibility? I look forward to finding out by reading future articles, while not simultaneously doing anything else.