Subjective probabilistic judgments are inevitable in many real-life forecasting domains. A common way to obtain such judgments is to assess fractiles or confidence intervals. However, these judgments tend to be systematically overconfident, and it has proved particularly difficult to debias such forecasts and improve their calibration. This paper proposes a simple process that systematically leads to wider confidence intervals, thus reducing overconfidence. In a series of experiments, including with professionals, we show that unpacking the distal future into intermediate, more proximal futures systematically improves calibration. We refer to this phenomenon as the time unpacking effect, find that it is robust to different elicitation formats, and examine possible reasons behind it. We further show that the improved calibration, even when traded off against reduced sharpness, yields better overall forecasting performance, and that substantive benefits can be obtained from even a single level of time unpacking.