If it's not moving it'll hit you: Perceptual biases in 3D motion detection
Tuesday, April 7, 2015
Much of civil aviation ultimately relies on the human perceptual system: Pilots must avoid each other by scanning the airspace around them and identifying aircraft that are in potential conflict. This is a skill that can be taught—and during the 15 years I spent teaching people how to fly, there were a few basic ideas that students had to acquire. For example, begin a scan by focusing on your wing tip; this forces your eyes to accommodate to a relatively near object, so that when you then scan the horizon, the eye has to reset its focus to infinity. Any aircraft in the airspace between your wing and the horizon will then “jump out” during the refocusing process. Another important message that flight instructors impart to students is “if an aircraft stays in the same spot on your cockpit window, you are going to collide with it.” It’s the (seemingly) stationary aircraft you have to worry about—not the ones that move.
Of course, aircraft on a collision course are not actually stationary; they are moving directly towards the observer. As a consequence, they show little or no movement along the horizontal or vertical axes of the visual field, which makes their detection particularly difficult.
There has been much research on how people detect this “motion in depth”; that is, movements of objects away from or towards an observer in 3D space. One conclusion of this research has been that people introduce a “lateral bias” in the perception of an approaching object, such that they believe it might (just) miss their head. This bias is obviously not terribly useful for a pilot.
A recent article in the Psychonomic Society’s journal Attention, Perception, &amp; Psychophysics further examined people’s biases in perceiving the motion of objects in depth. Researchers Jacqueline Fulvio, Monica Rosen, and Bas Rokers presented people with what was effectively a 3D version of the video game Pong. On each trial, a little dot appeared that followed a random trajectory in the horizontal (x; right-left) plane and in the depth plane (z; approaching or receding). On approximately half the trials, the dot approached the observer, whereas on the remaining half it receded. You can watch a movie of the trial sequence here, and I enclose a screen snapshot below:
Note that the two side-by-side copies of the display were presented to the left and right eye, respectively: Observers wore shutter glasses, so that the left- and right-eye images were fused into a single 3D scene.
The observer’s task was to watch the dot move for 1 second and, after it disappeared, adjust a “paddle” by moving it to the point along an imaginary concentric circle where it would have intercepted the trajectory of the dot. The paddle can be seen in the snapshot above: It is the horizontal textured bar next to the aperture formed by the two half-moon shapes.
In a nutshell, people watched a dot move in a random direction and indicated where it was heading after it disappeared.
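The geometric core of this task is simple: extrapolate the dot’s straight-line trajectory in the x-z plane until it crosses a circle of a given radius around its starting point. The sketch below is my own illustration of that geometry, not the authors’ code, and the radius and trajectory values are invented for the example; it also shows how flipping only the sign of the depth (z) component—the confusion at issue in this study—sends the paddle to the wrong part of the circle even when the lateral (x) component is judged correctly:

```python
import math

def intercept_on_circle(dx, dz, radius=1.0):
    """Extrapolate a straight-line trajectory from the origin along
    direction (dx, dz) until it reaches a circle of the given radius,
    and return the interception point (x, z)."""
    norm = math.hypot(dx, dz)  # length of the direction vector
    if norm == 0:
        raise ValueError("trajectory has no direction")
    # Scale the unit direction vector out to the circle.
    return (radius * dx / norm, radius * dz / norm)

# A dot moving equally rightward and toward the observer (here,
# negative z means "approaching") is intercepted at 45 degrees:
x, z = intercept_on_circle(1.0, -1.0, radius=1.0)

# A depth-direction confusion flips only the z sign; the paddle ends
# up mirrored across the x-axis, far from the true interception point:
x2, z2 = intercept_on_circle(1.0, 1.0, radius=1.0)
```

Note that a purely lateral (left/right) confusion would instead mirror the point across the z-axis, which is why the two kinds of error can be separated in the paddle settings.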
The experiment included several different levels of contrast for the white dot, which ranged from “incredibly visible” to “gray-on-gray” (the researchers used slightly more technical labels, namely 5.79 cd/m² through 0.15 cd/m², but those just mean “incredibly visible” and “gray-on-gray”, respectively).
Unlike previous research, this study did not observe a lateral bias when the dot was clearly visible: Observers were by and large very accurate in their detection of the motion. However, when the detectability of the dot was reduced, people became less accurate. This by itself is not surprising, but what is surprising is that this decline in accuracy expressed itself as an increase in confusions about the direction of motion in the depth (z) plane. On up to 30% of trials, people thought a target was approaching when it was receding, or vice versa. This compares to only 1–2% of confusions of lateral direction (x; right vs. left).
A follow-up experiment extended the findings to another way of manipulating sensory uncertainty. Instead of varying the contrast of the target, the target’s starting position was manipulated, such that it appeared either “behind” or “in front of” the fixation plane. That is, because the display provided 3D cues, people focused on a specific plane at the beginning of each trial, and the target appeared either closer to or further away from that plane. The results were the same as in the first study: Observers frequently misreported the direction of the motion-in-depth component of the target when its starting position was offset from the fixation plane.
Taken together, the studies identify a novel limitation of our ability to detect the motion of objects in depth: more so than with lateral movements, we find it difficult to tell which way an object is moving in the depth plane when there is some sensory uncertainty.
One other noteworthy subtlety in the data concerns the directionality of the confusions. Observers were more likely to report receding motion as approaching under high-contrast conditions, and approaching motion as receding under low-contrast conditions. This means that if an airplane is highly visible, you are more likely to mistakenly think it’s heading towards you—an error that may remain inconsequential. However, it also means that if an aircraft emerges from the haze, you are more likely to mistakenly think that it is heading away from you—an error that obviously has the potential to be highly consequential.
Pilots thus face a double whammy: When aircraft are particularly difficult to detect, their direction of travel in the crucial depth plane is also most likely to be confused in the worst possible way. All the more reason to keep scanning and focus on that wingtip every so often.