Machine Aggregation of
Human Judgment


AAAI Fall Symposium 2012
Arlington, Virginia
November 2-4, 2012

Overview

Program Committee

Accepted Papers

Invited Speakers

Important Dates

Registration & Hotel

Program Schedule

Overview

This symposium focuses on combining human and machine inference. For unique events and data-poor problems, there is no substitute for human judgment. Even for data-rich problems, human input is needed to account for contextual factors. For example, textual analysis is data-rich, but context and semantics often make fully automated parsing unusable. However, humans are notorious for underestimating the uncertainty in their forecasts, and even the most expert judgments exhibit well-known cognitive biases. The challenge is therefore to aggregate expert judgment in a way that compensates for these human deficiencies.

There are fundamental theoretical reasons to expect aggregated estimates to outperform individual forecasts. These theoretical results are borne out by a robust empirical literature demonstrating the superiority of opinion pools and prediction markets over individual forecasts, and of ensemble forecasts over even the best individual models. While weighted forecasts are theoretically optimal, among human experts unweighted forecasts have been hard to beat.
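To make the distinction above concrete, here is a minimal illustrative sketch (not drawn from any symposium paper) of a linear opinion pool over binary-event probability forecasts. The unweighted pool is the plain mean of the experts' probabilities; the weighted pool uses hypothetical performance-based weights.

```python
def linear_pool(probs, weights=None):
    """Combine expert probability forecasts into a single aggregate forecast.

    probs   -- each expert's probability for the same binary event
    weights -- optional expert weights summing to 1; equal weights if omitted
    """
    if weights is None:
        # Unweighted (equal-weight) pool: the simple average.
        weights = [1.0 / len(probs)] * len(probs)
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * p for w, p in zip(weights, probs))

# Three experts forecast the probability of the same event.
experts = [0.9, 0.6, 0.3]

unweighted = linear_pool(experts)                  # (0.9 + 0.6 + 0.3) / 3 = 0.6
weighted = linear_pool(experts, [0.5, 0.3, 0.2])   # 0.45 + 0.18 + 0.06 = 0.69
```

In practice the difficulty is estimating the weights: with small samples of past performance, estimated weights are noisy, which is one reason the unweighted pool has proved hard to beat.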

This symposium focuses on methods with the potential to come closer to that theoretical optimum. While a number of methods have shown promise individually, there is potential for significant advancement in combining them into structured, efficient, repeatable elicitation and aggregation protocols. The benefits of improved aggregation methods include substantial gains in the quality and reliability of expert judgments, reduced misunderstanding, clearer exposure of the context dependence of forecasts, and less overconfidence and motivational bias. On the other hand, there is some skepticism that statistical models can outperform experts most of the time: machine reasoning lacks the context to know when a model no longer applies and, in cases like natural language, simply lacks sufficient context to be reliable on open-world or novel problems. This symposium therefore considers powerful hybrid techniques that use humans to help aggregate computer models.

This symposium should interest and benefit a broad range of researchers in the AI community and in application fields such as econometrics, sociology, political science, and intelligence analysis. Bringing these disciplines together in one venue will also help advance research in each of them.