An initial screening experiment may lead to ambiguous conclusions regarding the factors that are active in explaining the variation of an outcome variable: adding follow-up runs thus becomes necessary. To better account for model uncertainty, we propose an objective Bayesian approach to follow-up designs, using prior distributions suitably tailored to model selection. To select the best follow-up runs, we adopt a model discrimination criterion based on a weighted average of Kullback–Leibler divergences between the predictive distributions of all possible pairs of models. Our procedure should appeal to practitioners because it is fully automatic and requires no prior specification. When applied to real data, it produces follow-up runs that better discriminate among factors relative to current methodology.
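In symbols, a sketch of a criterion of this kind (based only on the abstract's description; the choice of posterior model probabilities as weights and the exact divergence form are assumptions here, following the standard model-discrimination setup of Meyer, Steinberg and Box): write p_i(y*) = p(y* | y, M_i) for the posterior predictive density of the candidate follow-up responses y* under model M_i, given the initial data y, and P(M_i | y) for the posterior model probabilities. Then

% A plausible form of the model-discrimination (MD) criterion;
% the paper's exact weighting may differ from this sketch.
\[
\mathrm{MD}
  \;=\; \sum_{i \neq j} P(M_i \mid y)\, P(M_j \mid y)\,
        \mathrm{KL}\bigl(p_i \,\Vert\, p_j\bigr),
\qquad
\mathrm{KL}(p_i \Vert p_j)
  \;=\; \int p_i(y^{*}) \log \frac{p_i(y^{*})}{p_j(y^{*})}\, dy^{*},
\]

and the follow-up runs are chosen to maximize MD over the candidate design points, so that the added observations are expected to separate the competing models as sharply as possible.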