

Coverage of the 59th Annual Meeting


Color and Visual-Spatial Working Memory Contribute to Recognition of Distractors from Visual Search

Report: Abstract 188

The overarching question addressed in this talk is to what extent participants incidentally encode background or distractor items during visual search. Previous research has shown that the color of distractors interferes with target recognition during visual search. In particular, red has been shown to attract attention and excite observers, while other colors (such as yellow or blue) are less attention-grabbing and are perceived as more calming. Foraker therefore hypothesized that color should affect the incidental encoding of distractors: red should attract more attention than yellow or blue and lead to better performance during visual search. Eye tracking was used to measure total dwell time and fixation count while participants searched for a target word among colored distractors embedded in natural scenes (both indoor and outdoor). The results revealed an effect of distractor color on fixation count, with red distractors receiving fewer fixations than yellow or blue distractors. Foraker suggests that red has particular perceptual importance during visual search, which may have its roots in evolutionary psychology. In short, how likely distractors are to be incidentally encoded depends on their color. These results have clear real-world implications for advertising and marketing.

Summary by Adam J. Barnas, University of Wisconsin-Milwaukee

 

Understanding the Relationship Between Intertrial Priming in Visual Search and Visual Working Memory Capacity

Report: Abstract 189

Are individual differences in working memory capacity related to attentional selection processes driven by intertrial priming? The conventional view holds that attentional selection is governed by the interaction of top-down goals and bottom-up saliency. However, a growing body of literature suggests that selection history (i.e., the influence of lingering attentional biases across trials, known as intertrial priming) can also influence selection.

To better understand the role of selection history, Carly Leonard and Amber Johnson explored how intertrial priming relates to individual differences in working memory capacity during visual search tasks. A task originally developed by Maljkovic and Nakayama was used to capture intertrial priming effects in a “pop-out” search task. Participants were instructed to detect a uniquely colored target among homogeneous distractors across a series of trials. The color of the target could either switch from trial to trial or repeat. The difference in response time between these two conditions allows a priming magnitude to be calculated for each individual.
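
As a rough illustration of that difference score, here is a minimal sketch with made-up response times; the variable names and the simple switch-minus-repeat calculation below are illustrative and not necessarily the authors' exact analysis:

import statistics

# Hypothetical per-trial data: each trial records whether the target color
# repeated from the previous trial and the response time in milliseconds.
trials = [
    {"condition": "repeat", "rt_ms": 512},
    {"condition": "switch", "rt_ms": 548},
    {"condition": "repeat", "rt_ms": 530},
    {"condition": "switch", "rt_ms": 561},
]

def priming_magnitude(trials):
    # Mean RT on color-switch trials minus mean RT on color-repeat trials;
    # larger positive values indicate a stronger intertrial priming benefit.
    repeat_rts = [t["rt_ms"] for t in trials if t["condition"] == "repeat"]
    switch_rts = [t["rt_ms"] for t in trials if t["condition"] == "switch"]
    return statistics.mean(switch_rts) - statistics.mean(repeat_rts)

print(priming_magnitude(trials))  # prints 33.5 for the toy data above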

The priming magnitude varied greatly among participants. Moreover, when there were equal numbers of target-color repetitions and switches, individual differences in working memory capacity were related to priming magnitude. This result provides converging evidence that priming effects differ from both automatic and voluntary processes. Leonard and Johnson concluded that priming effects resemble knowledge that is learned, but not necessarily controlled.

Summary by Rebecca Lawrence, The Australian National University

 

Why Doesn’t That Clever Computer-Aided Detection System Work as Well as Theory Says It Should?

Report: Abstract 169

Visual search is a ubiquitous task that forms a crucial component of many high-risk professions, such as cancer screening in medical images. Radiologists often rely on artificial intelligence (AI) systems to help them identify abnormal X-rays, with the goal of improving the detection and identification of pathological growths. These AI systems use image statistics to identify parts of an image that deviate significantly from the remainder. It is standard practice for the AI either to “triage” cases before asking the human observer to make a decision, or to give the observer a “second opinion” on a decision he or she has already made.

However, neither of these procedures improves detection relative to observers completing the task without AI assistance. Wolfe described a new procedure to improve human-AI interaction, in which the AI is asked to make a decision twice, once before and once after the human observer. This simple but innovative procedure improves signal detection relative to all other procedures, including unassisted observers, thus opening the possibility of improving cancer screening in the real world.

Summary by Juan D. Guevara Pinto, Louisiana State University

 

Drugs for Creativity?

Report: Abstract 132

 

How many cups of coffee have you had today? When you want to wake up or focus for an interesting talk at Psychonomics, you consume caffeine. But what if you want to be more creative? Colzato and her colleagues wondered whether administering small amounts of psilocybin (so-called “microdosing”) would enhance creativity. Psilocybin binds directly to serotonin receptors in the brain, and increased serotonin is associated with increased mental flexibility, making psilocybin a promising candidate for investigating this question.

The researchers recruited participants in the Netherlands – where some psychedelic drugs are conveniently legal and easy to access – to take sub-perceptual doses of psychedelic truffles. Before and after consumption, participants completed three tasks: Raven’s Progressive Matrices (a measure of intelligence), the Picture Concept Task (which requires forming remote associations and measures convergent thinking), and the Alternative Uses Task (e.g., list possible uses for a brick; a measure of divergent thinking). Across two studies, the truffles consistently enhanced participants’ divergent thinking: responses were more fluent, original, and flexible after consuming the truffles than when participants were sober. Results on the convergent thinking task were mixed, and the general intelligence measure was unaffected by truffle consumption. This research provides initial evidence that microdosing psychedelics might be a promising way to enhance creativity. Of course, more research is needed before we can recommend microdosing while designing your next research study!

Read more here: https://link.springer.com/article/10.1007/s00213-018-5049-7

Summary by Michelle Rivers, Kent State University

 

Beyond Overall Effects

 

Report: Abstract 32

 

Science is supposed to be cumulative, but what are we trying to accumulate? When assessing the evidence for a particular effect, meta-analysts collect effect sizes from a range of studies on a topic and produce a meta-analytic average. Typical meta-analysis models allow studies to vary in their effect, or allow moderators to influence the average, but in essence the question remains: what is the mean effect? As Haaf and Rouder point out, this focus misses a range of important questions. Is every study consistent with an effect in the same direction? Do some studies really produce results in the opposite direction? Do some, or all, studies show no effect? These questions concern the distribution of effect sizes, and answering them is important for determining the coherence of a body of work.

For example, Haaf and Rouder find that all 17 replications of facial feedback (the idea that merely smiling, even if it is due to holding a pen between your teeth, is sufficient to influence our affective responses) from a recent many-labs replication project (Wagenmakers et al., 2016, PoPS) are consistent with no effect. On the other hand, when it comes to the benefit of considering survival value for later memory performance, the studies identified by Scofield et al. (2018, PBR) are all consistent with a positive effect. This approach to testing constraint in the distribution of effect sizes is more informative than the typical approach: the mean is of little use in telling us whether all studies point in the same direction or whether there is true heterogeneity in the direction (or presence) of effects.
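
To illustrate why the mean alone is not enough, here is a minimal sketch with invented effect sizes (the numbers are purely illustrative and are not drawn from either literature): two collections of studies can share the same meta-analytic mean while only one of them is coherent in direction.

import statistics

# Two hypothetical collections of study-level effect sizes (Cohen's d).
coherent = [0.15, 0.22, 0.18, 0.25, 0.20]    # every study points the same way
mixed = [0.62, -0.30, 0.55, -0.12, 0.25]     # some studies reverse direction

for label, effects in [("coherent", coherent), ("mixed", mixed)]:
    mean_d = statistics.mean(effects)
    prop_positive = sum(d > 0 for d in effects) / len(effects)
    print(f"{label}: mean d = {mean_d:.2f}, proportion positive = {prop_positive:.2f}")

# Both collections have a mean of 0.20, but only the first is consistent with
# every study showing an effect in the same direction.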

 

Summary written by Stephen Rhodes, University of Missouri

'Doctor' Speeds 'Nurse', but 'Florida' Does Not Make You Walk More Slowly
Report on Hal Pashler’s keynote address at the Psychonomic Society’s 59th Annual Meeting.

Watch last night's keynote address today! (video)

By Stephan Lewandowsky
Words that are related prime each other—deciding that “doctor” is a word in a lexical decision task speeds up responding to “nurse” when it is presented a short time later. This semantic priming effect has been known for 40 years and is robust and replicable.

In the mid-1990s, new reports of priming emerged that purported to show that merely activating concepts by a word prime can change people’s subsequent goals, motivations, and courses of action. For example, it was reported that priming of the concept of “honesty” makes people more forthcoming in answering embarrassing questions.

These new priming effects were much broader and more dramatic than anything previously reported—and sure enough, they attracted the public’s attention and led to “wow” media coverage.

Thus the stage was set for the research that Hal Pashler reported in his keynote address on Thursday night.

How fascinating that priming would do those things—or does it? Together with colleagues Doug Rohrer and Christine Harris, Pashler set out to “see priming with my own eyes”. To foreshadow the outcome, this journey uncovered a lot of troubling questions but, ultimately, no priming.

Instead, Pashler and colleagues accumulated an impressive list of failures to replicate:
What about “achievement priming”? Does performance on a word-search task improve if it has been preceded by a search task involving achievement-related words? No, this effect failed to replicate.

Do people walk more slowly if they’ve been trying to form sentences from words that (weakly) imply old age, such as “bingo”, “gray”, or “Florida”? No, this effect failed to replicate despite the original finding having been cited 4,700 times.

Does exposure to words related to “rudeness” make participants more likely to interrupt a subsequent conversation? No. Another failure to replicate.

Do people feel closer to their immediate family members if they are asked to place dots close together (rather than spaced farther apart) on a sheet of paper? No.

The replication failures piled up, and so did musings, among the social psychologists whose findings didn’t replicate, that Pashler and like-minded colleagues simply lacked the “artistry” required to “produce” those priming effects.

So Pashler and colleagues went online: surely internet-administered research would not be subject to “artistry” or lack thereof? But again, the findings failed to replicate: priming honesty did not make people more likely to report answers to embarrassing questions. Priming money did not make people more likely to endorse the free-market system and social inequality.

By now, sufficient evidence has been accumulated to justify the conclusion that many priming effects in social psychology are impossible to replicate—indeed, the mean effect size across many attempts at replication turns out to be exactly 0.0.

How could this have happened? How can a branch of psychology produce a large literature on effects that are not replicable?

Pashler highlighted one factor, namely the role played by “conceptual replications” in social psychology. A “conceptual replication” involves an experiment that differs from the original study in a number of ways, including perhaps the dependent variable or some other experimental factors. Supporters argue that conceptual replications are more informative than direct replications because they underscore the generality of an effect.

But here is the rub: If a direct replication is attempted and fails—as has happened so frequently with social-priming research—then this will ultimately cause a re-evaluation of the phenomenon under consideration. By contrast, if a conceptual replication fails, this does not lead to loss of faith in the original finding—on the contrary, the original finding is protected by the experimenter’s chagrin that they “shouldn’t have changed so many things” in their attempted replication.

What is the solution to this dilemma?
Pashler argues that cognitive psychology—unlike social psychology—is in the fortunate position that greater statistical power can be obtained relatively cheaply, namely by increasing the number of trials per participant and by using within-subjects designs. Power is cheap for cognitive psychologists in many circumstances, whereas it remains expensive—and hence elusive—for many social psychologists, who cannot use within-subjects designs or multiple trials.
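
To make that point concrete, here is a minimal simulation sketch (the sample size, noise levels, effect size, and trial count below are arbitrary illustrations, not figures from the talk) of why averaging many trials per participant in a within-subjects design buys power that a single-observation between-subjects comparison does not:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_participants, effect, n_trials = 2000, 30, 0.3, 40

def trial_noise():
    # Trial-level noise (SD = 1), averaged over n_trials trials per participant.
    return rng.normal(0.0, 1.0, (n_trials, n_participants)).mean(axis=0)

def power(within):
    hits = 0
    for _ in range(n_sims):
        person = rng.normal(0.0, 1.0, n_participants)  # stable individual differences
        if within:
            # Each participant contributes both conditions, so person noise cancels.
            a = person + trial_noise()
            b = person + effect + trial_noise()
            p = stats.ttest_rel(b, a).pvalue
        else:
            # Two separate groups, so person noise stays in the comparison.
            a = person + trial_noise()
            b = rng.normal(0.0, 1.0, n_participants) + effect + trial_noise()
            p = stats.ttest_ind(b, a).pvalue
        hits += p < 0.05
    return hits / n_sims

print("between-subjects power:", power(within=False))
print("within-subjects power: ", power(within=True))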

It is no surprise, therefore, that cognitive research replicates at roughly twice the rate of social psychology. Replicability in cognitive psychology is far from optimal, but we can improve it by enhancing the power of our studies relatively cheaply.

Undergraduate Psychonomes Pondered Their Careers in Cognitive Psychology through Engagement with Panels of Graduate Students, Postdocs, Professors, and Industry Psychologists

 

Report on Encouraging Future Scientists: Supporting Undergraduates at Psychonomics

 

Thomas, Umanath, and colleagues organized a lunchtime workshop aimed at facilitating the career development of undergraduate attendees of Psychonomics. The first panel consisted of three graduate students in their early, mid, and late graduate careers, plus a postdoc. The second panel consisted of two professors in their early and mid careers as well as two industry psychological scientists. Each panelist shared things they wish they had known when applying to graduate school and gave undergraduate psychonomes insights into how their own careers unfolded. The panels’ presentations were followed by eager questions from the undergraduate psychonomes. Between the presentations and the questions, the workshop covered a wide range of important topics, such as how to get the most out of attending Psychonomics, how to select the graduate program that is right for you, and the possibility of careers outside of academia and what you can do to get such jobs. The workshop was well attended, and it was great to see the society supporting the development of cognitive psychologists in their professional infancy. Many attendees stayed in the room after the workshop ended to network, and the room was filled with enthusiasm, both from the undergraduate psychonomes and from the veteran psychonomes who embrace the art of mentoring.

 

Summary written by Toshiya Miyatsu, Washington University in St. Louis
