Advances in Information Aggregation and Collective Intelligence Research
Held in conjunction with the 2018 Annual Conference of the Japanese Society for Cognitive Psychology (JSCP).
September 2018 | Osaka, Japan
The last decade has seen a proliferation of theoretical and empirical work in various areas of psychology on the “Wisdom of Crowds” and “Collective Intelligence”.
Much of this first-wave work consists of straightforward demonstrations that simple aggregation rules invoking the “Wisdom of Crowds”, as well as effective teaming efforts, can improve various measures of the quality of collective decisions.
This symposium includes a collection of papers that represent more sophisticated efforts to understand and model the cognitive, structural and social factors that drive the aggregation and teaming effects.
The presentations cover a wide range of application domains, including medicine and intelligence, as well as a wide variety of methodological approaches, including laboratory experiments, large-scale field studies, and simulations. Together they represent the state of the art in this exciting multi-disciplinary research area.
From left: Shigeo Matsubara, Kyoto University, Japan; Tatsuya Kameda, University of Tokyo, Japan; Mirta Galesic, Santa Fe Institute, USA; Henrik Olsson, Santa Fe Institute, USA; Mark Steyvers, University of California, Irvine, USA; and David V. Budescu, Fordham University, USA.
Also pictured (far right) is JSCP Academic Board Member Jun Kawaguchi, Nagoya University, Japan.
David V. Budescu, Fordham University, USA
Mark Steyvers, University of California, Irvine, USA
Optimal Forecasting Teams
David V. Budescu
Fordham University, USA
One of the most surprising results of the recent large-scale geopolitical forecasting tournaments sponsored by IARPA (e.g., Mellers et al., 2014) is that small teams were, on average, more accurate than individuals. This result seems to contradict the expectation of the “wisdom of crowds” approach, which highlights the importance of independence between forecasters. How large should a team be to take full advantage of this positive “teaming effect”? In other words, if one has access to n forecasters, is it better to divide them into many small teams or to group them all together? To address these questions, we re-analyzed data from the teams and the individuals who participated in Year 4 of the IARPA tournament, as well as from a new experiment that systematically manipulated team size. We found that smaller teams (n=5) were more active than larger teams (n=15) but also less accurate. To improve accuracy without sacrificing activity level, we composed synthetic teams by aggregating the forecasts of members of smaller teams. We found that, on average, these recomposed teams matched the activity level of the smaller teams and the accuracy of the larger teams. We consider the implications of these results for optimal teaming that seeks to maximize both activity and accuracy.
Co-authors: Yizhi "Roxanne" Zhang, Fordham University; Barbara Mellers, University of Pennsylvania; Eva Chen, Good Judgment Inc.
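The recomposition idea above can be illustrated with a small sketch. The forecasts, team sizes, and simple-mean aggregation rule below are made-up illustrations, not the tournament's actual data or aggregation procedure:

```python
import statistics

def aggregate_team(forecasts):
    """Aggregate individual probability forecasts with a simple mean."""
    return statistics.mean(forecasts)

def recompose(small_teams, k):
    """Pool the members of k small teams into one synthetic team
    and aggregate their forecasts together."""
    pooled = [f for team in small_teams[:k] for f in team]
    return aggregate_team(pooled)

# Three hypothetical 5-member teams forecasting the same event
teams = [[0.6, 0.7, 0.55, 0.65, 0.6],
         [0.7, 0.8, 0.75, 0.7, 0.65],
         [0.5, 0.6, 0.55, 0.6, 0.5]]

synthetic = recompose(teams, 3)   # behaves like a 15-member team's forecast
```

Each small team keeps interacting (and staying active) at size 5, while the synthetic forecast is computed over all 15 members, mirroring the "activity of small teams, accuracy of large teams" result.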
Eliciting Knowledge About Social Circles Improves Election Forecasts
Santa Fe Institute, USA
Election outcomes can be difficult to predict. A recent example is the 2016 U.S. presidential election, where Hillary Clinton lost five states that had been predicted to go for her, and with them the White House. Most election polls ask people about their own voting intentions: whether they will vote, and if so, for which candidate. We show that, compared to own-intention questions, eliciting participants’ knowledge about the voting intentions of their social contacts improved predictions of voting in the 2016 U.S. and 2017 French presidential elections. Responses to social-circle questions predicted election outcomes on national, state, and individual levels, helped explain last-minute changes in people’s voting intentions, and provided information about the dynamics of echo chambers among supporters of different candidates. Overall, social-circle questions are a way of tapping into the “local” wisdom of crowds and can provide valuable information about social interactions that shape individual beliefs and behaviors.
Co-authors: Wandi Bruine de Bruin, University of Leeds; M. Dumas, Santa Fe Institute; Arie Kapteyn, University of Southern California; J. E. Darling, University of Southern California; E. Meijer, University of Southern California
Can Social Interaction Improve Group Performance? An Experiment with the Information-Cascade Paradigm
University of Tokyo, Japan
The wisdom of crowds refers to a “group” phenomenon in which aggregated judgments are more accurate than individual judgments. This phenomenon reflects a statistical property whereby random noise in individual judgments is cancelled out by mechanical aggregation such as group averaging. In reality, however, human group members often rely too heavily on social information contributed by others when making decisions, which reduces diversity and can undermine the wisdom-of-crowds effect. It is thus important to disentangle how and when people can strike the right balance between independence and interdependence in social decision-making. We conducted an experiment investigating how social interaction affects judgmental accuracy using the information-cascade paradigm. In each session, eight participants sequentially estimated the number of marbles in a jar, and each participant saw the preceding participants’ estimates before making his or her final estimate. We compared two payoff schemes: monetary reward was contingent either on the accuracy of the individual’s own judgment (the individual-accuracy condition) or on the group judgment aggregated via averaging (the group-accuracy condition). Results showed that social information improved participants’ judgments only when the payoff was contingent on individual accuracy. In the group-accuracy condition, where members’ cooperation toward group performance was emphasized, participants’ judgments became less independent of each other, precluding the emergence of the wisdom of crowds.
Co-authors: Hye-rin Kim, Hokkaido University; Wataru Toyokawa, University of St. Andrews & Japan Society for the Promotion of Science
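The diversity-reduction mechanism at the heart of this abstract can be simulated in a few lines. All numbers below (the true count, the error spread, the anchoring weight) are arbitrary assumptions for illustration, not estimates from the experiment:

```python
import random

random.seed(0)

TRUE_COUNT = 500   # marbles in the jar (assumed)
N = 8              # judges per group, as in the experiment

def independent_group():
    """Each judge errs independently around the truth."""
    return [TRUE_COUNT + random.gauss(0, 100) for _ in range(N)]

def cascaded_group(weight=0.7):
    """Each later judge anchors on the running mean of earlier
    estimates, destroying the diversity that averaging relies on."""
    estimates = []
    for _ in range(N):
        own = TRUE_COUNT + random.gauss(0, 100)
        if estimates:
            social = sum(estimates) / len(estimates)
            own = weight * social + (1 - weight) * own
        estimates.append(own)
    return estimates

def mean_group_error(make_group, trials=2000):
    """Average absolute error of the group mean across many sessions."""
    total = 0.0
    for _ in range(trials):
        g = make_group()
        total += abs(sum(g) / N - TRUE_COUNT)
    return total / trials

err_independent = mean_group_error(independent_group)
err_cascaded = mean_group_error(cascaded_group)
```

Because the cascaded estimates are dominated by the first few judges' noise, averaging cancels less of it, and the group mean ends up less accurate than with independent judges.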
Collective Intelligence Through Collaboration Among Human Forecasters and a Machine Forecaster: A Case of Economic Indicator Forecasting
Kyoto University, Japan
How can human forecasts and a machine forecast be combined in inflation-forecasting tasks? Existing studies of collective intelligence do not fully consider the differences between human forecasters and a machine forecaster. A machine-learning-based forecaster makes a forecast from a statistical model constructed from past time-series data, while humans take varied information, such as economic policies, into account. To exploit the advantages of both, we propose a human-machine ensemble method that estimates the expected error of the machine forecast and dynamically determines the optimal number of humans to include in the ensemble. Although methods for combining different forecasts, such as ensemble and consensus methods, have been studied, these methods combine forecasts in the same manner regardless of the situation (input), which makes it difficult to exploit the advantages of different types of forecasters. Our method overcomes this drawback. We evaluated the proposed method on seven datasets on U.S. inflation and confirmed that it attained the highest forecast accuracy on four datasets and matched the best traditional method on two others.
Co-authors: Takahiro Miyoshi, Kyoto University
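The abstract does not spell out the selection rule, so the following is only a hypothetical sketch of what a dynamic human-machine ensemble could look like. The threshold rule, the proportionality heuristic, and all names are illustrative inventions, not the authors' algorithm:

```python
def ensemble_forecast(machine_forecast, machine_error_estimate,
                      human_forecasts, threshold=0.5):
    """Hypothetical rule: when the machine's expected error is low,
    trust the machine alone; otherwise average in a number of human
    forecasts that grows with the machine's expected error."""
    if machine_error_estimate <= threshold:
        k = 0  # machine looks reliable; no humans needed
    else:
        # include humans in proportion to the machine's expected error
        k = min(len(human_forecasts),
                round(machine_error_estimate / threshold))
    forecasts = [machine_forecast] + human_forecasts[:k]
    return sum(forecasts) / len(forecasts)
```

The key contrast with a fixed ensemble is that the combination changes with the input: `ensemble_forecast(2.1, 0.2, [2.5, 1.9, 2.3])` returns the machine forecast alone, while the same call with an error estimate of 1.2 mixes in two human forecasts.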
Strategy-Advantage Switching in Individual and Group Judgment
Santa Fe Institute, USA
Many inferences are not made by isolated individuals: teams of health care professionals give prognostic assessments of patients’ chances of recovering from cancer, financial experts on the Federal Open Market Committee decide on the federal funds rate, and selection committees make hiring decisions. If we want to design an inference strategy that maximizes predictive accuracy in a group setting where the individual predictions are averaged together, would the same strategies that work well for individuals also work for groups? I show that a strategy that works well for individual predictions does not necessarily work well for group predictions: constrained strategies produce more accurate predictions for individuals, while unconstrained strategies lead to more accurate predictions for groups. This phenomenon of strategy-advantage switching can be understood by decomposing the mean squared error into bias, variance, and covariance. The bias component is the difference between the true value being predicted and the mean prediction of the strategy, the variance component is the variance of those predictions, and the covariance component is the average covariance of predictions among group members. A strategy’s bias-variance profile, together with its susceptibility to incur covariance, determines how well it performs individually and in a group setting. I discuss the implications of these results for collective intelligence and for how decision environments should be structured to maximize group performance.
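The decomposition described above can be written out explicitly. Assuming n group members whose predictions of a true value t share a common bias b, variance \sigma^2, and average pairwise covariance c (a standard setup; the notation here is illustrative, not taken from the talk), the error of the group average \bar{f} = \frac{1}{n}\sum_{i=1}^{n} f_i is

```latex
\mathbb{E}\!\left[(\bar{f} - t)^2\right]
  = \underbrace{b^2}_{\text{bias}^2}
  \;+\; \underbrace{\tfrac{1}{n}\,\sigma^2}_{\text{variance}}
  \;+\; \underbrace{\left(1 - \tfrac{1}{n}\right) c}_{\text{covariance}}
```

For an individual (n = 1) the covariance term vanishes and only bias and variance matter; as n grows, the variance term shrinks toward zero while the covariance term dominates. This is consistent with the abstract's claim: a low-variance (constrained) strategy wins individually, but a low-covariance (unconstrained, diverse) strategy can win once predictions are averaged.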
Making a Wiser Crowd: Benefits of Individual Metacognitive Control Over Question Selection
University of California, Irvine, USA
The wisdom of the crowd refers to the finding that judgments aggregated over individuals are typically more accurate than the average individual's judgment. Here we examine the potential for improving crowd judgments by allowing individuals to choose which questions to respond to. In circumstances where individuals' metacognitive assessments of what they know tend to be accurate, allowing individuals to opt in to questions of interest or expertise has the potential to create a more informed knowledge base over which to aggregate. In several experiments, we demonstrate that crowds composed of self-selected judgments are more accurate than crowds responding to experimenter-selected questions. We apply simple cognitive models within a Bayesian framework to provide a computational account of the self-selection advantage. Overall, the results show that allowing individuals to use private metacognitive knowledge holds much promise for enhancing judgments, including those of the crowd.
Co-authors: Stephen Bennett, University of California, Irvine; Aaron Benjamin, University of Illinois at Urbana-Champaign
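The self-selection advantage can be illustrated with a toy majority-vote simulation. All numbers below (crowd size, knowledge rates, accuracy probabilities) are arbitrary assumptions for illustration, not the experimental values:

```python
import random

random.seed(1)

N_PEOPLE, N_QUESTIONS = 50, 20

# In this toy world each person "knows" a random subset of questions;
# known questions are answered correctly 90% of the time, unknown 40%.
knows = [{q for q in range(N_QUESTIONS) if random.random() < 0.3}
         for _ in range(N_PEOPLE)]

def answer(person, q):
    """Return 1 if the person answers question q correctly."""
    p_correct = 0.9 if q in knows[person] else 0.4
    return 1 if random.random() < p_correct else 0

def crowd_accuracy(select):
    """Fraction of questions whose majority vote (over the selected
    respondents) is correct."""
    n_right = 0
    for q in range(N_QUESTIONS):
        votes = [answer(p, q) for p in range(N_PEOPLE) if select(p, q)]
        if votes and sum(votes) * 2 > len(votes):
            n_right += 1
    return n_right / N_QUESTIONS

# Experimenter-selected: respondents assigned at random.
assigned = crowd_accuracy(lambda p, q: random.random() < 0.3)
# Self-selected: only people who know the question opt in.
opt_in = crowd_accuracy(lambda p, q: q in knows[p])
```

Because opting in filters the crowd down to the judges with accurate (here, perfect) metacognitive knowledge, the self-selected majority is right on more questions than the randomly assigned one.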