2018 Psychonomic Society Collaborative Symposium

Advances in Information Aggregation and Collective Intelligence Research

 

Meeting of the Japanese Society for Cognitive Psychology (JSCP)

September 2018  |  Osaka, Japan

 

Organizers

   
David V. Budescu, Fordham University, USA
Mark Steyvers, University of California, Irvine, USA

Description
The last decade has seen a proliferation of theoretical and empirical work in various areas of psychology on the “Wisdom of Crowds” and “Collective Intelligence”. Much of this first wave of work consists of simple, straightforward demonstrations that aggregation rules invoking the “Wisdom of Crowds”, as well as effective teaming efforts, can improve various measures of the quality of collective decisions. The symposium includes a collection of papers that represent more sophisticated efforts to understand and model the cognitive, structural, and social factors that drive these aggregation and teaming effects.


Optimal forecasting teams
David V. Budescu
Fordham University, USA

One of the most surprising results of the recent large-scale geopolitical forecasting tournaments sponsored by IARPA (e.g., Mellers et al., 2014) is that small teams were, on average, more accurate than individuals. This result seems to contradict the expectation of the “wisdom of crowds” approach, which highlights the importance of independence between forecasters. How large should a team be to take full advantage of this positive “teaming effect”? In other words, if one has access to n forecasters, is it better to divide them into many small teams, or to group them all together? To address these questions we re-analyzed data from the teams and the individuals who participated in Year 4 of the IARPA tournament, as well as from a new experiment that systematically manipulated team size. We found that smaller teams (n=5) were more active than larger teams (n=15) but also less accurate. To improve accuracy without sacrificing activity level, we composed synthetic teams by aggregating forecasts of members of smaller teams. We found that, on average, these recomposed teams matched the activity level of the smaller teams and the accuracy of the larger teams. We consider the implications of these results for optimal teaming that seeks to maximize both activity and accuracy.

 

Co-authors:
Yizhi "Roxanne" Zhang, Fordham University; Barbara Mellers, University of Pennsylvania; Eva Chen, Good Judgment Inc.
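The recomposed-team idea lends itself to a compact illustration. The sketch below is mine, not the authors' analysis code: it assumes probabilistic forecasts of a binary event and a Brier-score accuracy measure (a common scoring rule in the IARPA tournaments), and it simply pools members across several hypothetical small teams and averages their forecasts.

```python
# Illustrative sketch (not the authors' code): form a "recomposed" team forecast
# by averaging probability forecasts contributed by members of several small teams,
# and score it with a Brier score. All numbers here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

n_small_teams = 3          # e.g., three teams of n = 5
members_per_team = 5
outcome = 1                # the forecast event occurred (1) or not (0)

# Hypothetical probability forecasts for one question, one row per small team
forecasts = rng.uniform(0.4, 0.9, size=(n_small_teams, members_per_team))

def brier(p, y):
    """Brier score for a probability forecast p of a binary outcome y (lower is better)."""
    return (p - y) ** 2

# Each small team's forecast: the average of its own members
small_team_forecasts = forecasts.mean(axis=1)

# Recomposed (synthetic) team: pool members across the small teams and average
recomposed_forecast = forecasts.mean()

print("Small-team Brier scores:    ", brier(small_team_forecasts, outcome))
print("Recomposed-team Brier score:", brier(recomposed_forecast, outcome))
```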

 


 

Collective intelligence in medicine: Boosting medical diagnostics by aggregating independent judgments
Stefan M. Herzog

Max Planck Institute for Human Development

Collective intelligence refers to the ability of groups to outperform individual decision makers when solving complex cognitive problems. Despite its potential to revolutionize decision making in a wide range of domains, including medicine, economics, and politics, the conditions underlying successful collective intelligence in consequential, real-world contexts are not yet well understood. Which features of decision makers and decision contexts favor the emergence of collective intelligence? Which decision-making rules permit this potential to be harnessed? Here we focus on two key areas of medical diagnostics: breast and skin cancer detection. Using a simulation study that draws on two large real-world datasets, involving more than 140 doctors making more than 20,000 diagnoses, we investigate when combining the independent judgments of multiple doctors outperforms the best doctor in a group. We find that similarity in diagnostic accuracy is a key condition for collective intelligence: Aggregating the independent judgments of doctors outperforms the best doctor in a group whenever the diagnostic accuracy of doctors is relatively similar, but not when doctors’ diagnostic accuracy differs too much. This result is highly robust and holds across different group sizes, performance levels of the best doctor, and collective intelligence rules (majority voting or adopting the most confident diagnosis). Similarity enables collective intelligence because the collective overrules the best individual particularly when that best individual is incorrect; in contrast, whenever the best individual is correct, the collective tends to agree already.

Co-authors:
Ralf H. J. M. Kurvers, Ralph Hertwig, Jens Krause, Max Wolf
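The two aggregation rules named in the abstract are easy to state explicitly. The following toy sketch (illustrative only; the data and function names are invented, not taken from the study) applies majority voting and the adopt-the-most-confident rule to one hypothetical case.

```python
# Toy illustration of the two collective-intelligence rules named in the abstract
# (majority voting; adopting the most confident diagnosis). Data are hypothetical.
from collections import Counter

def majority_vote(diagnoses):
    """Return the diagnosis given by most doctors (ties broken arbitrarily)."""
    return Counter(diagnoses).most_common(1)[0][0]

def most_confident(diagnoses, confidences):
    """Return the diagnosis of the doctor who reported the highest confidence."""
    best = max(range(len(diagnoses)), key=lambda i: confidences[i])
    return diagnoses[best]

# Hypothetical case: three independent doctors rate one mammogram
diagnoses   = ["cancer", "no cancer", "cancer"]   # individual judgments
confidences = [0.6, 0.9, 0.7]                     # self-reported confidence

print(majority_vote(diagnoses))                   # -> "cancer"
print(most_confident(diagnoses, confidences))     # -> "no cancer"
```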

 

Can social interaction improve group performance? An experiment with the information-cascade paradigm

Tatsuya Kameda
University of Tokyo


The wisdom of crowds refers to a “group” phenomenon in which aggregated judgments are more accurate than individual judgments. It reflects a statistical property whereby random errors in individual judgments cancel out under mechanical aggregation such as group averaging. In reality, however, group members often rely too heavily on social information contributed by others when making decisions, which reduces diversity and can undermine the wisdom-of-crowds effect. It is thus important to disentangle how and when people can strike the right balance between independence and interdependence in social decision-making. We conducted an experiment to investigate how social interaction affects judgmental accuracy using the information-cascade paradigm. In each session, eight participants estimated the number of marbles in a jar sequentially, and each participant saw the preceding participants’ estimates before making his/her final estimate. There were two conditions with different payoff schemes: monetary reward was contingent either on the accuracy of the individual’s own judgment (the individual-accuracy condition) or on the accuracy of the group judgment aggregated via averaging (the group-accuracy condition). Results showed that social information improved participants’ judgments only when the payoff was contingent on individual accuracy. In the group-accuracy condition, where members’ cooperation toward group performance was emphasized, participants’ judgments became less independent of each other, precluding the emergence of the wisdom of crowds.

Co-authors:
Hye-rin Kim, Hokkaido University; Wataru Toyokawa, University of St. Andrews and Japan Society for the Promotion of Science
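The “statistical property” the abstract appeals to is the standard error-cancellation argument. A minimal statement, under the idealized assumption of independent, zero-mean, equal-variance errors (my notation, not the authors’), is:

```latex
% Idealized wisdom-of-crowds argument: independent errors cancel under averaging.
% Assume each of n judges reports x_i = T + \varepsilon_i with
% E[\varepsilon_i] = 0, Var(\varepsilon_i) = \sigma^2, Cov(\varepsilon_i,\varepsilon_j) = 0.
\[
  \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i
  \quad\Longrightarrow\quad
  \mathbb{E}[\bar{x}] = T, \qquad
  \operatorname{Var}(\bar{x}) = \frac{\sigma^2}{n}.
\]
% The averaged judgment concentrates around the truth T as n grows. Social influence
% induces positive error correlations, which keeps Var(\bar{x}) from shrinking toward
% zero and can undermine the effect.
```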

 

Strategy-Advantage Switching in Individual and Group Judgment

Henrik Olsson
Santa Fe Institute

 

Many inferences are not made by isolated individuals. Teams of health care professionals give prognostic assessments of patients’ chances of recovering from cancer, financial experts on the Federal Open Market Committee decide on the federal funds rate, and selection committees make hiring decisions, to name a few examples. If we want to design an inference strategy that maximizes predictive accuracy in a group setting where the individual predictions are averaged together, would the same strategies that work well for individuals also work in a group setting? I show that a strategy that works well for individual predictions does not necessarily work well for group predictions: Constrained strategies produce more accurate predictions for individuals, while unconstrained strategies lead to more accurate predictions for groups. This phenomenon of strategy-advantage switching can be understood by analyzing a decomposition of the mean squared error into bias, variance, and covariance. The bias component is the difference between the true value being predicted and the mean prediction of the strategy, the variance component is the variance of those predictions, and the covariance component is the average covariance of predictions among group members. A strategy’s bias-variance profile, together with its susceptibility to incur covariance, determines how well it performs individually and in a group setting. I discuss the implications of these results in the context of collective intelligence and how decision environments should be structured to maximize group performance.
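For reference, a standard form of this decomposition for the error of a group-averaged prediction (the textbook bias-variance-covariance result for ensembles, stated here in my notation under the simplifying assumption of n exchangeable group members) is:

```latex
% Bias-variance-covariance decomposition for the squared error of a group average.
% Let \bar{f} = \frac{1}{n}\sum_{i=1}^{n} f_i be the averaged prediction of target y.
\[
  \mathbb{E}\big[(\bar{f} - y)^2\big]
  = \overline{\mathrm{bias}}^{\,2}
  + \frac{1}{n}\,\overline{\mathrm{var}}
  + \Big(1 - \frac{1}{n}\Big)\,\overline{\mathrm{cov}},
\]
% where \overline{\mathrm{bias}}, \overline{\mathrm{var}}, and \overline{\mathrm{cov}}
% are the average bias, average variance, and average pairwise covariance of the
% members' predictions. Averaging already suppresses variance by a factor of 1/n,
% so low-bias strategies whose errors covary little can win for groups even when
% constrained (high-bias, low-variance) strategies win for individuals.
```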

 

 

Making a wiser crowd: Benefits of individual metacognitive control over question selection

Mark Steyvers
University of California, Irvine

The wisdom of the crowd refers to the finding that judgments aggregated over individuals are typically more accurate than the average individual's judgment. Here we examine the potential for improving crowd judgments by allowing individuals to choose which questions to respond to. In circumstances where individuals' metacognitive assessments of what they know tend to be accurate, allowing individuals to opt in to questions of interest or expertise has the potential to create a more informed knowledge base over which to aggregate. In several experiments we demonstrate that crowds composed of self-selected judgments are more accurate than crowds responding to experimenter-selected questions. We apply simple cognitive models within a Bayesian framework to provide a computational account of the self-selection advantage. Overall, the results show that allowing individuals to use private metacognitive knowledge holds much promise for enhancing judgments, including those of the crowd.

 


Co-authors:

Stephen Bennett, University of California, Irvine; Aaron Benjamin, University of Illinois at Urbana-Champaign
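A toy simulation conveys the intuition behind the self-selection advantage. The sketch below is my illustration, not the authors' Bayesian model: it assumes perfect metacognition (each judge knows which questions they answer best) and aggregates by simple majority vote, then compares self-selected question sets with randomly assigned sets of the same size.

```python
# Toy simulation (illustrative only): majority-vote crowds over binary questions,
# comparing self-selected question sets with randomly assigned sets of equal size.
# All parameters are invented for the example.
import numpy as np

rng = np.random.default_rng(1)
n_judges, n_questions, k = 30, 40, 10   # each judge answers k of the questions

# p[j, q]: probability that judge j answers question q correctly (hypothetical)
p = rng.beta(4, 3, size=(n_judges, n_questions))

def crowd_accuracy(chosen):
    """Majority-vote accuracy when judge j answers the questions in chosen[j]."""
    votes_correct = np.zeros(n_questions)
    votes_total = np.zeros(n_questions)
    for j in range(n_judges):
        for q in chosen[j]:
            votes_correct[q] += rng.random() < p[j, q]   # 1 if judge j is right
            votes_total[q] += 1
    answered = votes_total > 0
    return np.mean(votes_correct[answered] / votes_total[answered] > 0.5)

# Self-selection with perfect metacognition: pick the k questions you know best
self_selected = [np.argsort(-p[j])[:k] for j in range(n_judges)]
# Experimenter assignment: k random questions per judge
assigned = [rng.choice(n_questions, size=k, replace=False) for _ in range(n_judges)]

print("self-selected crowd accuracy:", crowd_accuracy(self_selected))
print("assigned crowd accuracy:     ", crowd_accuracy(assigned))
```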

 
Eliciting knowledge about social circles improves election forecasts

Mirta Galesic
Santa Fe Institute

 

Election outcomes can be difficult to predict. A recent example is the 2016 U.S. presidential election, where Hillary Clinton lost five states that had been predicted to go for her, and with them the White House. Most election polls ask people about their own voting intentions: whether they will vote, and if so, for which candidate. We show that, compared to own-intention questions, eliciting participants’ knowledge about the voting intentions of their social contacts improved predictions of voting in the 2016 U.S. and 2017 French presidential elections. Responses to social-circle questions predicted election outcomes on national, state, and individual levels, helped explain last-minute changes in people’s voting intentions, and provided information about the dynamics of echo chambers among supporters of different candidates. Overall, social-circle questions are a way of tapping into the “local” wisdom of crowds and can provide valuable information about social interactions that shape individual beliefs and behaviors.


Co-authors:
Wandi Bruine de Bruin, Leeds University; Arie Kapteyn, J. Darling, and E. Meijer
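The contrast between own-intention and social-circle questions can be sketched in a few lines. The example below is illustrative only (invented numbers, not the study's estimator): each respondent reports their own intention and the share of their social contacts who plan to vote for candidate A, and the two poll estimates are simple averages.

```python
# Illustrative sketch (not the study's estimator): contrast an own-intention poll
# estimate with a social-circle estimate for a hypothetical candidate A.
import numpy as np

own_intention = np.array([1, 0, 1, 1, 0, 0, 1, 0])          # 1 = respondent will vote for A
circle_share_for_A = np.array([0.55, 0.40, 0.70, 0.60,       # reported share of each
                               0.35, 0.45, 0.65, 0.50])      # respondent's contacts for A

own_estimate = own_intention.mean()                 # classic poll: share of respondents for A
social_circle_estimate = circle_share_for_A.mean()  # average reported share among contacts

print(f"own-intention estimate: {own_estimate:.2f}")
print(f"social-circle estimate: {social_circle_estimate:.2f}")
```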

