7 Discussion

The introduction of this thesis started by explaining Bayesian statistics as a way of updating information. I start this discussion by reflecting on the example used in Chapter 1, discussing the importance of (hidden) assumptions, and relating this to possible differences between experts and data. I then consider the different roles that expert knowledge can play in research, and under what conditions I think those roles could be appropriate. Finally, I consider what seems a natural future direction for (my) research: decision making.

7.1 Hidden assumptions

In Chapter 1 the example of coin tosses is used to introduce the concept of Bayesian statistics, even though the remainder of this thesis uses the normal distribution in the analyses. Why, then, not explain Bayesian statistics using the normal distribution? The main reason is that the coin tossing example requires only a single parameter \(\theta\), resulting in a straightforward single-parameter model. For the normal distribution two parameters have to be considered: the mean (\(\mu\)) and the variance (\(\sigma^2\)). This makes explanations more complex, and the interaction between the two parameters might not make for an intuitive initial framework; the contrast is made concrete in the sketch after the quotations below. Many textbooks therefore first explain the situation in which either \(\mu\) or \(\sigma^2\) is fixed (e.g. Albert, 2009; Gelman et al., 2013; Kaplan, 2014; Kruschke, 2010; Lynch, 2007; Ntzoufras, 2011; Press, 2009). This decision is taken consciously, as the following comments illustrate.

“For the purpose of mathematical derivation, we make the unrealistic assumption that the prior distribution is either a spike on \(\sigma\) or a spike on \(\mu\).”

Kruschke (2010), p. 322

“Perhaps a more realistic situation that arises in practice is when the mean and variance of the normal distribution are unknown”

Kaplan (2014), p. 28

“In reality, we typically do not know \(\sigma^2\) any more than we know \(\mu\), and thus we have two quantities of interest that we should be updating with new information”

Lynch (2007), p. 65
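To make the contrast concrete, consider the standard conjugate forms (the display below is the textbook result, not the specific prior of Chapter 1). For the coin a single Beta prior suffices and the update is available in closed form,

\[
\theta \sim \mathrm{Beta}(\alpha, \beta), \qquad k \mid \theta \sim \mathrm{Binomial}(n, \theta) \;\;\Longrightarrow\;\; \theta \mid k \sim \mathrm{Beta}(\alpha + k,\; \beta + n - k),
\]

whereas for the normal distribution with both parameters unknown even the conjugate specification requires a joint prior, for instance \(\sigma^2 \sim \mathrm{Inv\text{-}Gamma}(a, b)\) with \(\mu \mid \sigma^2 \sim N(\mu_0, \sigma^2 / \kappa_0)\), so that the prior for one parameter cannot be interpreted without reference to the other.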

There is nothing wrong with explaining a simplified version first. One reason not to do so is that an explanation resting on this ‘unrealistic’ or ‘hidden’ assumption might not build proper intuition. In general, the more complex models become, the more complex the specification of prior information becomes. It is almost never as easy as in the examples of Chapter 1, where the coin flips and the Beta distribution allow a natural interpretation. Moreover, in multiparameter models the priors interact with one another in what they imply about the data you might expect. Priors on individual parameters might look reasonable by themselves, but together they can imply very implausible situations. Simulating fake data, or looking at implied predictive distributions as done in Chapter 5, can help identify these problems (Gabry, Simpson, Vehtari, Betancourt, & Gelman, 2019; van de Schoot et al., 2020). Recent work also points to interpretation challenges for the prior if the context of the likelihood is not taken into account (Gelman et al., 2017), or if information about an experiment is ignored (Kennedy, Simpson, & Gelman, 2019). Note, in relation to this, how in the hierarchical model of Chapter 4 the priors on the individual level are essentially based on the estimated group-level effects, which include information from both the prior and the likelihood. All this reflection on assumptions is not meant to criticize explanations in textbooks or articles. The point is made to highlight that the choices made with respect to models and priors are highly influential for results and interpretations, and that being explicit about them is a minimal requirement.
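As a minimal sketch of such a prior predictive simulation, the code below draws fake data sets from a simple regression model. The prior values, the IQ-like outcome scale and the model itself are hypothetical choices for illustration; this is not the model of Chapter 5.

```python
import numpy as np

rng = np.random.default_rng(2020)
n_sims, n_obs = 1000, 50

# Marginal priors that look harmless on their own (hypothetical values,
# not the priors used in Chapter 5): an intercept on an IQ-like scale,
# a very wide "uninformative" slope, and a half-normal residual sd.
intercept = rng.normal(loc=100, scale=10, size=n_sims)
slope = rng.normal(loc=0, scale=50, size=n_sims)
sigma = np.abs(rng.normal(loc=0, scale=10, size=n_sims))

# Prior predictive draws: fake data sets implied by the joint prior.
x = rng.normal(size=(n_sims, n_obs))  # standardized predictor
y_rep = (intercept[:, None] + slope[:, None] * x
         + rng.normal(scale=sigma[:, None], size=(n_sims, n_obs)))

# Inspect what the priors jointly say about observable data.
low, high = np.percentile(y_rep, [2.5, 97.5])
print(f"95% of simulated observations fall between {low:.0f} and {high:.0f}")
# On a bounded scale (IQ scores rarely fall outside roughly 40-160) a
# non-negligible share of these draws is impossible, signalling that the
# joint prior is less innocent than the marginal priors suggest.
```

Seeing the implied data in this way makes it easier to discuss with substantive researchers, or experts, whether the priors jointly encode something plausible.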

7.2 Expert Knowledge

Being transparent about models, priors and choices is related to the issue of eliciting expert knowledge. As mentioned in the introduction of Chapter 3, when conducting an elicitation, experts are forced to use a representation system that belongs to the statistical realm. They are forced to use the same parametric representation as the statistical model. For non-trivial problems, statistical models can become complex quickly. If expert knowledge is elicited with the purpose of being used as a prior distribution in a statistical model, the implicit assumption is made that the expert adheres to the same model as the one that is statistically specified. This can be a rather strict assumption, and confidence in it will decrease as models become more complex.

In Chapters 3 and 6 of this dissertation we focus on comparing experts’ elicited distributions among one another, and on contrasting them with what traditional data imply given our statistical model. If discrepancies occur between the two, this can be highly informative, and it need not be the case that either source is at fault. The discrepancies are interesting precisely because they can arise from differences in the implied models. When discussing with experts why their beliefs diverge from one another, or from the traditional data, we can learn about subtle differences in the models that the experts implicitly use. The information obtained with this expert-data (dis)agreement methodology might lead us to specify slightly different statistical models or to include other variables. In the long run, if the experts learn from the data, and the model is refined based on expert knowledge, we can expect both sources of information to converge.

Note that I reflect here on cases where we do have data, not on cases where expert knowledge is used because no data are available. In those cases we obviously do not have the luxury of comparing multiple sources of information. It is often more desirable to have some information, for instance provided by experts, than no information at all. In that scenario it is even more essential to have quality checks available to evaluate experts’ expertise. As discussed in Chapter 6, the classical method is a widely used procedure for this purpose (Cooke, 1991, Chapter 12). The lack of suitable calibration questions for many social scientific research topics makes this method, at least for now, infeasible in those settings. The work presented in this dissertation is not a substitute for asking calibration questions, but should be viewed as an additional area of research.

In cases where calibration is possible, updating the elicited experts’ beliefs with new data in a full Bayesian framework can certainly be considered. In cases where calibration is not an option, I would rather contrast expert knowledge, as an alternative source of information, with the traditionally collected data than update it with those data. The two alternative ways of incorporating information in prior distributions that were discussed in Chapter 1, using previous research and using logical constraints, seem more defensible than elicited expert priors without calibration. Especially the use of priors describing the plausible parameter space seems no more than logical. In Chapters 3 and 6 we use uniform priors as benchmarks, which could be considered in line with Laplace’s (1749-1827) principle of insufficient reason. These benchmarks can turn out to be more in line with the data, given the proposed statistical model, than some experts’ beliefs. Whatever the reason, and whichever source of information is right, when using Kullback-Leibler divergences that assign truth status to the traditional data, the ‘ignorant’ benchmark examples resonate with the following idea (a small numerical sketch of such a comparison follows the quotation):

“Ignorance is preferable to error; and he is less remote from the truth who believes nothing, than he who believes what is wrong.”

Thomas Jefferson (1781)
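To illustrate the kind of comparison meant above, the following minimal sketch computes a discretized Kullback-Leibler divergence from a distribution implied by the data to an elicited expert distribution and to a uniform benchmark. The grid bounds, distributions and parameter values are hypothetical and chosen only for illustration; this is not the exact procedure of Chapters 3 and 6.

```python
import numpy as np
from scipy import stats

# Grid over a plausible parameter space (hypothetical bounds, for illustration only).
theta = np.linspace(0.0, 10.0, 2001)
d_theta = theta[1] - theta[0]

def kl(p, q):
    """Discretized Kullback-Leibler divergence D(p || q) on the grid."""
    p = p / (p.sum() * d_theta)
    q = q / (q.sum() * d_theta)
    return float(np.sum(p * np.log(p / q)) * d_theta)

# 'Truth' status is assigned to the distribution implied by the traditional data.
data_dist = stats.norm(loc=4.0, scale=0.5).pdf(theta)

# An elicited expert distribution and an 'ignorant' uniform benchmark.
expert = stats.norm(loc=6.5, scale=0.4).pdf(theta)      # confident but off-target
benchmark = stats.uniform(loc=0, scale=10).pdf(theta)   # principle of insufficient reason

print("KL(data || expert)   :", round(kl(data_dist, expert), 2))
print("KL(data || benchmark):", round(kl(data_dist, benchmark), 2))
# A confidently misplaced expert can end up much further from the data than the
# uniform benchmark: the 'ignorance is preferable to error' pattern.
```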

7.3 Taking a decision

I began this thesis by stating that all of us have to make decisions whilst facing uncertainty and incomplete information. In addition, I stated that Bayesian statistics offers a way to describe our state of knowledge in terms of probability. In this thesis we have indeed concerned ourselves mainly with obtaining prior, and estimating posterior, probability distributions. We have contrasted sources of information, seeing this as an opportunity to learn and to improve our knowledge and models. In short, we have concerned ourselves with ways to systematically organize uncertainty and incomplete information, but not yet with the decision-making process that should naturally follow from this.

Two approaches that are often used in science to make a decision, or come to a conclusion, are model selection and hypothesis testing. Model selection is a very useful concept for refining our theories and models. However, it does not always lead to a decision. Hypothesis testing is more naturally focused on making decisions, e.g. can we reject the hypothesis that there is no effect? It seems, however, a rather unhelpful restriction to single out one value and contemplate the issue with respect to that value alone. Indeed, Bayesian estimation can be seen as a case in which we test an infinite number of hypotheses concerning which parameter values are most likely (Jaynes, 2003, Chapter 4). Moving beyond simple one-dimensional questions to decisions that determine a course of action, it seems clear that we need to consider more than just whether an effect exists. Consider questions such as: do we implement a certain intervention in schools or not? Should I get a certain type of insurance? To determine a course of action we need to assess which choice is most preferable out of the options open to us. This cannot be assessed unless we assign value judgements to outcomes and take costs into account. Is raising the IQ of children by 1 point on average worth the investment if that means we have to cut funding to hospitals by the same amount? Does the good outweigh the bad? That is the relevant question, not: should we change the way we teach at schools because an experiment provided us with \(p<.05\) for a hypothesis stating that both methods of teaching were exactly equal? The following words express this sentiment in a delightful way:

“You cannot decide what to do today until you have decided what to do with the tomorrows that today’s decisions might bring.”

Lindley (2013), p. 249

Extending the framework of Bayesian estimation into the field of decision making seems natural via the concepts of utility and loss. Given a model that seems reasonable, e.g. found via model selection, the inference solutions obtained by applying probability theory only provide us with a state of knowledge concerning the parameters; they do not tell us what to do with that information (Jaynes, 2003, Chapter 13). Utility or loss functions can be defined and optimized, maximizing expected utility or minimizing expected loss, to determine which decision is optimal (Goldstein, 2006; Jaynes, 2003, Chapter 13). Moreover, if sequential decisions have to be taken, a decision tree should be constructed that takes all information available up to each point into account (Gelman et al., 2013, Chapter 9; Lindley, 2013, Chapter 10). Utility can be defined very transparently, but it is not free of subjective value judgements (Jaynes, 2003, Chapter 13). For a wonderful example that illustrates this, see Jaynes (2003), pp. 400-402, on the differences in the rationale and utility of insurance, viewed from the standpoints of the insurance agency, a poor person, a rich person, and a rich person with an aversion to risk. For a full decision analysis of different strategies to reduce the risk of lung cancer in relation to household exposure to radon gas, see Gelman et al. (2013), pp. 246-256.
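What maximizing expected utility over a posterior could look like for the school-intervention question above is sketched below. The posterior draws, the value of an IQ point and the implementation cost are all hypothetical numbers chosen for illustration, not quantities estimated in this thesis.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical posterior draws of the average effect of a school intervention
# (IQ points gained per pupil); in practice these would come from the fitted model.
effect = rng.normal(loc=1.0, scale=0.8, size=10_000)

# Hypothetical value judgements, both expressed on the same (monetary) scale.
value_per_point = 150.0   # assumed benefit of +1 IQ point, per pupil
cost_per_pupil = 120.0    # assumed implementation cost, per pupil

def utility(decision, theta):
    """Utility of a decision given a value of the effect parameter."""
    if decision == "implement":
        return value_per_point * theta - cost_per_pupil
    return 0.0 * theta  # status quo: no benefit, no cost

# Expected utility averages the utility over the posterior uncertainty.
expected = {d: float(np.mean(utility(d, effect)))
            for d in ("implement", "do nothing")}
best = max(expected, key=expected.get)

print(expected)   # roughly {'implement': 30, 'do nothing': 0} for these numbers
print("Decision with highest expected utility:", best)
# The decision hinges on the value judgements as much as on the posterior:
# the same posterior with a higher assumed cost can reverse the conclusion.
```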

In no way am I saying that assigning utility and loss functions is easy. Nor will I claim to have the wisdom at this point to undertake such an elaborate evaluation and ensure wise decisions. However, if I had to take a decision on what to pursue next academically, using what I have learned from working on this dissertation, I would pursue decision making. But only after reflecting on what tomorrows today’s decision might bring, given the uncertain and incomplete information that I have.

References

Albert, J. (2009). Bayesian computation with R. Springer Science & Business Media.

Cooke, R. M. (1991). Experts in uncertainty: Opinion and subjective probability in science. Oxford University Press on Demand.

Gabry, J., Simpson, D., Vehtari, A., Betancourt, M., & Gelman, A. (2019). Visualization in Bayesian workflow. Journal of the Royal Statistical Society: Series A (Statistics in Society), 182(2), 389–402.

Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., & Rubin, D. B. (2013). Bayesian data analysis. CRC Press.

Gelman, A., Simpson, D., & Betancourt, M. (2017). The prior can often only be understood in the context of the likelihood. Entropy, 19(10), 555.

Goldstein, M. (2006). Subjective Bayesian analysis: Principles and practice. Bayesian Analysis, 1(3), 403–420.

Jaynes, E. T. (2003). Probability theory: The logic of science. Cambridge University Press.

Kaplan, D. (2014). Bayesian statistics for the social sciences. Guilford Publications.

Kennedy, L., Simpson, D., & Gelman, A. (2019). The experiment is just as important as the likelihood in understanding the prior: A cautionary note on robust cognitive modelling. arXiv Preprint arXiv:1905.10341.

Kruschke, J. K. (2010). Doing Bayesian data analysis: A tutorial with R and BUGS. Academic Press.

Lindley, D. V. (2013). Understanding uncertainty. John Wiley & Sons.

Lynch, S. M. (2007). Introduction to applied Bayesian statistics and estimation for social scientists. Springer Science & Business Media.

Ntzoufras, I. (2011). Bayesian modeling using WinBUGS (Vol. 698). John Wiley & Sons.

Press, S. J. (2009). Subjective and objective Bayesian statistics: Principles, models, and applications (Vol. 590). John Wiley & Sons.

van de Schoot, R., Veen, D., Smeets, L., Winter, S., & Depaoli, S. (2020). A tutorial on using the WAMBS-checklist to avoid the misuse of Bayesian statistics. In Small sample size solutions: A guide for applied researchers and practitioners. Routledge.