4,000 years of the pursuit of happiness: overcoming the dark side of hedonism and reward
Wednesday, August 05, 2015
When the Egyptian King Intef died some 4,000 years ago, his tomb was inscribed with a song that encourages its audience to live a life of what came to be called hedonism: “Revel in pleasure while your life endures… never weary grow/In eager quest of what your heart desires.” You can read the full text below:
Some 3,800 years later, the pursuit of happiness was enshrined in another, now rather more influential document.
But other than yielding pleasure itself, what are the consequences of the pursuit of happiness? What, if anything, does pleasure “do”?
This question already occupied philosophers in antiquity, and it has occupied modern psychologists for more than a century: What are the consequences of reward (a slightly more technical term for pleasure)?
One of the most reliable findings in all of psychology is that if a behavior is followed by a reward, its likelihood of occurrence increases. Conversely, if a behavior is punished, its subsequent likelihood typically decreases. Edward Thorndike subsumed those twin findings under the famed Law of Effect.
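The Law of Effect can be captured in a toy model (a sketch of the principle only; the response names and update values here are invented for illustration): each response has a strength, reward increments the strength of the response it follows, punishment decrements it, and the probability of choosing a response tracks its relative strength.

```python
def choice_probability(strengths, response):
    """Probability of emitting a response, proportional to its strength."""
    return strengths[response] / sum(strengths.values())

# Two hypothetical responses start out equally strong.
strengths = {"lever_press": 1.0, "grooming": 1.0}
p_before = choice_probability(strengths, "lever_press")

# Reward lever pressing three times; punish (weaken) grooming once.
for _ in range(3):
    strengths["lever_press"] += 0.5   # reward strengthens the response
strengths["grooming"] = max(0.1, strengths["grooming"] - 0.5)

p_after = choice_probability(strengths, "lever_press")
# The rewarded response is now more likely than before, the punished one less so.
```

The exact update rule is arbitrary; the point is only the direction of change that Thorndike described.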
A recent article in the Psychonomic Society’s journal Attention, Perception, & Psychophysics examined one particular aspect of reward; namely, the effects of reward on attention. There has been much previous research showing that offering a reward (e.g., food or money, awarded for particularly fast or accurate responses) seemingly “sharpens the mind”: People generally become faster and more accurate at many attentional tasks if they are rewarded.

Researchers Lynn and Shin focused on the “dark side” of attentional reward, which they defined as “… undesired aftereffects, whereby a stimulus previously associated with reward attracts attention even when it would be more beneficial to ignore it.” For example, if participants learn to associate a particular feature of a stimulus (e.g., its color) with a reward, their performance suffers when that same feature is subsequently used as a distractor—the reward-induced greater attention on that feature persists even when attending to that feature has become disadvantageous because the task parameters have changed.
Lynn and Shin were interested in whether this persistence can be overcome by, you may have guessed it, yet more reward. Specifically, they asked whether offering a greater reward after the task changed than had been offered before might overcome the attentional persistence that is usually observed when rewards are used to shape responding.
The experimental design was quite simple: There were two phases of trials, and during each phase people had to report the orientation of a line (horizontal or vertical) that flashed up to the left or to the right of a central fixation point. The manipulation of greatest interest involved a cue, which was flashed briefly before the actual stimulus and which quite reliably (80% of the time) identified the location (but not the orientation) of the target line. In Phase 1 the predictive cue was an “X” and in Phase 2 it was a “T”, although the cue display itself was identical in both phases. The figure below shows a screenshot of the stimulus display for a valid trial during Phase 1 (i.e., a trial on which the “X” correctly signaled the location of the line):
Correct responses were rewarded with points that ultimately translated into the possibility of a monetary reward. The reward schedule, expressed as the dollar amount on offer for the fastest and most accurate participant, differed between conditions across Phase 1 (X) and Phase 2 (T): Group 1 could earn $50 in Phase 1 but nothing in Phase 2; Group 2 could earn the same amount in Phase 1 but a greater amount in Phase 2; and Groups 3 and 4 were not rewarded in Phase 1.
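One phase of the cueing task described above might be generated as follows (a minimal sketch; the trial counts, field names, and exact procedure are illustrative assumptions, not taken from the paper):

```python
import random

def make_phase(cue_letter, n_trials=100, validity=0.8, seed=1):
    """Generate one phase of cueing trials: on a `validity` proportion of
    trials the predictive cue appears at the upcoming target's location."""
    rng = random.Random(seed)
    n_valid = int(n_trials * validity)
    trials = []
    for i in range(n_trials):
        target_side = rng.choice(["left", "right"])
        orientation = rng.choice(["horizontal", "vertical"])
        valid = i < n_valid  # first 80% of trials valid, the rest invalid
        cue_side = target_side if valid else (
            "left" if target_side == "right" else "right")
        trials.append({"cue": cue_letter, "cue_side": cue_side,
                       "target_side": target_side,
                       "orientation": orientation, "valid": valid})
    rng.shuffle(trials)  # interleave valid and invalid trials
    return trials

phase1 = make_phase("X")  # in Phase 1 the "X" predicts target location
phase2 = make_phase("T")  # in Phase 2 the "T" takes over that role
```

Because the cue predicts only where the line will appear, not its orientation, attending to the cued location helps without giving the answer away.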
The condition of greatest interest is Group 2: Would the greater reward in Phase 2 overcome the counter-productive attentional persistence due to the reward in Phase 1, which was expected to occur in Group 1?
The results were quite complex overall, but the most important finding is shown in the figure below for Groups 1 and 2, which plots the time that people required to judge the orientation of the target line:
The left panel for Group 1 shows the attentional benefit as well as the detriment that arises from rewarding responses in Phase 1 only—remember that this group was offered $50 in Phase 1 but nothing in Phase 2. In consequence, valid trials (on which the “X” correctly indicated the location of the target line) were much faster in Phase 1 than invalid trials (the 20% of trials on which the “X” was in the opposite location to the target). This is the signature of reward-driven attentional learning: people latch onto the validity of the “X” and exploit it to fine-tune their performance.
But the left panel also illustrates the dark side of this reward-driven attentional benefit: Note how the relationship between the valid and invalid trials was reversed in Phase 2: Formerly valid trials now incurred a cost because the “T”, not the “X”, was predictive of line location in Phase 2. People were unable to adjust to this new contingency. (Although not shown in the figure above, no such cost was observed in Groups 3 and 4, which were not rewarded in Phase 1, confirming that the effects in Phase 2 were a consequence of rewards during Phase 1 rather than simply experience with the task or some other variable.)
Turning to the right panel in the figure above, we see nearly identical results for Phase 1—not surprisingly, because the two groups did not differ from each other during that initial phase. But now consider the differences between groups in Phase 2: unlike their unfortunate counterparts in Group 1, participants in Group 2 did extremely well during the second phase. Rather than showing a cost due to their attentional learning in Phase 1, they exploited the revised contingencies—that is, the fact that the “T” rather than the “X” now predicted line location—in Phase 2. If anything, they benefited even more from valid trials in Phase 2 than they had done in Phase 1.
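The pattern in those panels boils down to a single number per phase: the cue-validity effect, i.e., mean response time on invalid trials minus mean response time on valid trials, where a positive value is an attentional benefit and a negative value a cost. A sketch of that computation, with entirely made-up response times chosen only to mimic the Group 1 pattern:

```python
def validity_effect(trials):
    """Cue-validity effect in ms: mean invalid RT minus mean valid RT.
    Positive values mean the cue helped; negative values mean it hurt."""
    valid = [t["rt"] for t in trials if t["valid"]]
    invalid = [t["rt"] for t in trials if not t["valid"]]
    return sum(invalid) / len(invalid) - sum(valid) / len(valid)

# Hypothetical Group 1 data: the rewarded "X" speeds responding in Phase 1,
# then the lingering bias toward the "X" slows responding in Phase 2.
group1_phase1 = [{"valid": True, "rt": 450}, {"valid": True, "rt": 470},
                 {"valid": False, "rt": 520}, {"valid": False, "rt": 540}]
group1_phase2 = [{"valid": True, "rt": 500}, {"valid": True, "rt": 510},
                 {"valid": False, "rt": 470}, {"valid": False, "rt": 480}]

benefit = validity_effect(group1_phase1)  # positive: attentional benefit
cost = validity_effect(group1_phase2)     # negative: the "dark side" cost
```

On this toy data the effect flips sign between phases, which is the qualitative signature the left panel shows for Group 1.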
Why? Because the potential reward was greater in Phase 2 than in Phase 1, and this increased motivation overcame the associations previously learned in Phase 1.
In a nutshell, if rewarded responses become counter-productive, the solution is to reward people even more for the newly required responses.
The results of Lynn and Shin may have implications for everyday problems such as drug addiction. It has long been suspected that people who were once addicted to drugs often resume their former habits when they encounter cues—such as friends or contexts in which drugs were consumed—that remind them of their earlier habits. Lynn and Shin suggest that rather than representing accidental encounters, those triggers may reflect a “… somewhat involuntary active searching for these cues” based on prior reward-driven attentional learning. However, Lynn and Shin also note that their results suggest “… that affected individuals may be able to curb this tendency by focusing on more rewarding stimuli.”
Kicking a drug habit may therefore be assisted by the pursuit of happiness by other means.