CONFORMITY AND CONFIRMATION BIAS
3.1 Introduction
if her posterior belief about the state is away from her opponent’s posterior belief.
Second, a player can choose how to read the signal and then form her posterior belief and policy choice accordingly. Specifically, a player can interpret a piece of information as either challenging or confirming her view. These strategic features capture an individual's tendency to exhibit confirmatory bias, that is, to (mis-)interpret new information as consistent with her current hypotheses about the world, driven by a desire for a sense of belonging and/or by peer pressure within the same ideological group.
We examine the conditions that support the following two types of (Bayesian Nash) equilibria: (i) Bayesian Updating Equilibrium (BUE), in which players always correctly interpret their signals, and (ii) Confirmatory Bias Equilibrium (CBE), in which players always interpret their signals as favoring their prior belief. In particular, we investigate how the equilibrium conditions respond to changes in the strength of the prior belief, the accuracy of a signal, and the payoff from policy choice. Overall, the equilibrium conditions reflect a trade-off: interpreting a signal correctly yields a higher expected payoff from policy choice, whereas always interpreting a signal as supporting the currently more likely state yields a lower expected utility loss from the players' belief misalignment.
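This trade-off can be made concrete with a stylized payoff sketch. The notation below (payoff v, conformity weight c, posteriors μ_i and μ_j) is ours for illustration only and is not the paper's formal setup; suppose player i earns v > 0 when her policy a_i matches the state ω and loses utility in proportion to the distance between her posterior and her opponent's:

```latex
U_i \;=\; v \cdot \mathbb{1}\{a_i = \omega\} \;-\; c \,\lvert \mu_i - \mu_j \rvert .
```

Correct interpretation raises the expected value of the first term, since the policy is then based on better information; always reading the signal as supporting the currently more likely state makes both posteriors more likely to move together, shrinking the expected value of the second term. Roughly, a BUE requires the first effect to dominate, and a CBE the second.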
Our results show that, in general, an individual is more likely to exhibit confirmatory bias when she currently holds a stronger belief about which state is more likely. A confirmatory bias equilibrium is easier to sustain when the prior belief favors one state more strongly. In contrast, the range of parameters that supports a Bayesian updating equilibrium shrinks as the prior becomes more extreme. Moreover, we show that a BUE can be sustained only when the prior belief is close enough to 50/50; alternatively, if a signal is only moderately accurate, a CBE can be sustained only when the prior is extreme enough. The reason behind this finding is that, as the players' prior belief becomes stronger, the likelihood of holding a different belief from the other player, and the disutility this brings, eventually overwhelms the possible gain from choosing a better policy. Our results thus demonstrate that the motivation to conform to the belief induced by the more likely signal is a possible explanation for the backfire effect of information observed in previous literature.
In contrast to the effect of the prior belief on the equilibrium conditions, we find that a signal's accuracy has an ambiguous impact on the occurrence of confirmatory bias. Specifically, if the benefit from policy choice is large enough, lower accuracy of a signal facilitates the occurrence of confirmatory bias. However, if the gain from implementing the optimal policy is small, a CBE (resp. BUE) may be easier to sustain under greater (resp. lower) accuracy of a signal; that is, higher accuracy of a signal may actually facilitate the occurrence of confirmatory bias. This finding, to some extent, runs counter to the intuition that information from a highly credible source is less susceptible to misinterpretation by its readers. The reason is that, although a signal has a higher instrumental value when it is more accurate, the distance between the posterior beliefs induced by opposite signals (and thus the corresponding disutility) also increases with the signal's accuracy. Our result reveals that, as a signal's accuracy improves, the increase in the loss from the two players' belief misalignment can dominate the increase in the signal's instrumental value for policy choice when the consequence of the policy choice is trivial.
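The countervailing force behind this result can be seen with a direct Bayes'-rule computation. The notation below (prior p on one state, signal accuracy q) is ours for illustration and not the paper's formal setup: as q rises the signal becomes more informative, but the gap between the posteriors induced by opposite signals, and hence the potential belief misalignment between two players who read their signals differently, widens as well.

```python
# Posterior belief that state A is true, given a prior p on A and a
# binary signal of accuracy q (Pr[signal matches state] = q).
# Illustrative sketch only; p and q are generic model parameters.

def posterior(p, q, signal_says_a):
    """Bayes' rule for a binary state with a symmetric binary signal."""
    if signal_says_a:
        return p * q / (p * q + (1 - p) * (1 - q))
    return p * (1 - q) / (p * (1 - q) + (1 - p) * q)

def posterior_gap(p, q):
    """Distance between the posteriors induced by opposite signals --
    the source of the belief-misalignment disutility when the two
    players interpret their signals differently."""
    return posterior(p, q, True) - posterior(p, q, False)

# The gap widens as accuracy improves: with p = 0.5 the two posteriors
# are q and 1 - q, so the gap is exactly 2q - 1.
for q in (0.6, 0.75, 0.9):
    print(round(posterior_gap(0.5, q), 2))
```

So even though a more accurate signal is worth more for choosing the right policy, it also pushes differently-interpreted posteriors further apart; when the policy stakes are small, the second effect can dominate.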
After deriving the main results, we consider two possible extensions of the basic model. First, we examine the scenario in which there are more than two players and a player suffers a utility loss if her posterior belief is away from the median (or the majority) posterior belief among the other players. We show that an increase in the number of players decreases the sustainability of a Bayesian updating equilibrium. Second, we discuss the robustness of our main results to the assumption that a player's belief about the other player's type, or the likelihood that the other player receives a specific signal, does not depend on the player's own signal.
In our basic model, we assume that a player’s belief about types is solely determined by the common prior. In the extension, we allow a player to update her belief about types based on the signal she receives, and we argue that our main results pertaining to the confirmatory bias equilibrium still hold.
The paper proceeds as follows. We provide a brief review of the literature in the next subsection. Section 3.2 describes the model setup. In Section 3.3, we characterize the conditions that support the BUE and CBE, prove the existence of the equilibria, and obtain the main results. In Section 3.4 we discuss two extensions of the basic model. Section 3.5 concludes. The details of the proofs of propositions are provided in Appendix B.
Related Literature
The seminal theoretical work on confirmation bias is Rabin and Schrag (1999), in which they show how an individual’s confirmatory bias can lead to overconfidence.
Rabin and Schrag (1999) take confirmatory bias as their model’s primitive, assuming
that a decision maker may misread a signal that contradicts her current belief as confirming evidence with an exogenous, fixed probability. In other words, they do not model the mechanism behind an agent's misinterpretation of signals. Our model complements their work by providing a microfoundation for confirmatory bias and characterizing the relationship between the strength of the prior and the occurrence of confirmatory bias.
A recent work by Fryer Jr et al. (2019) characterizes another possible mechanism behind misperceptions of new information and confirmation bias. In their model, in addition to the two signals that are correlated with the two states of nature, there is a third signal that is ambiguous and thus open to interpretation. Fryer Jr et al.
(2019) assume that an agent is more likely to interpret the ambiguous signal as supporting her current belief and to store this perception in memory, which in turn induces confirmatory bias and polarization in the long run. Our model departs from Fryer Jr et al. (2019) in that, instead of introducing a third type of signal, we follow the setup in Rabin and Schrag (1999) and assume that an agent may misinterpret conflicting evidence as supporting evidence. Moreover, our model is set in a strategic-interaction environment rather than an individual decision-making problem. The motivation for distorting the meaning of a signal in our model is that a decision maker would like to conform to the (posterior) belief that is more likely to be held by the other decision maker. Without multiple agents, a decision maker would not exhibit confirmatory bias under our model assumptions.
Our model joins the literature that studies conformism, or “the inclination of an individual to change spontaneously (without any order or request by anyone) his judgements and (or) actions to conform to ‘socially prevailing’ judgements and (or) actions” (Luzzati 1999, p. 111). The models in this literature can be divided into two classes based on their methodology. A representative model of the first class is Bernheim (1994), in which conformity is derived endogenously. In particular, Bernheim (1994) assumes that individuals would like to be considered to be of a given status (i.e., type), and conformism arises endogenously in the form of pooling equilibria. The second class of models, our model included, treats conformism as primitive in the sense that an agent directly suffers disutility for deviating from exogenous social or group norms on actions (e.g., Akerlof, 1980; Jones, 1984;
Luzzati, 1999). For example, Jones (1984) assumes that a worker suffers a utility loss from the distance between her own production and the average production level of all workers. In our model, we assume that a player suffers a utility loss if the
player’s and her opponent’s interpretations of their signals diverge. A key feature of our model is that the size of disutility is determined by the distance between the players’ posterior beliefs (induced by their interpretations of signals), not by their choices themselves.
Our model is also related to coordination games with private information, in the sense that the players in our model have an incentive to interpret their signals in the same way. Previous literature in this strand mainly focuses on how the issue of multiple equilibria in a coordination game may be overcome via incomplete information (e.g., Carlsson and Van Damme, 1993) or via communication (e.g., Banks and Calvert, 1992). Our focus is, however, not the properties of a coordination game itself but how this framework can help explain the emergence of confirmatory bias and the backfire effect of new information.
Our model also contributes to the literature on motivated reasoning (Kunda, 1990;
Bénabou and Tirole, 2016), which studies how an agent may distort her belief updating process to achieve a desired conclusion for intrinsic motivation such as self- confidence (e.g., Bénabou and Tirole, 2002; Gottlieb, 2014), moral self-esteem (e.g., Bénabou and Tirole, 2011), anticipatory emotions (e.g., Caplin and Leahy, 2001;
Bracha and Brown, 2012), and dissonance reduction (e.g., Chauvin, 2020). In our model, the motivation underlying a decision maker's non-standard Bayesian belief updating is her preference for sharing the same (posterior) belief as her opponent.
A main distinction between our model and previous models of motivated reasoning is that most prior studies focus on an individual decision-making (or belief-updating) problem, whereas our model is built within a strategic interaction environment.2 Another distinctive feature of our model is that our decision maker does not directly suffer a cost of belief distortion that is positively correlated with the distance between the objective Bayesian belief and her subjective belief. Instead, the cost comes from a higher chance of choosing an unmatched policy based on the decision maker's subjective belief. This implies that two different distorted beliefs lead to the same utility cost if the decision maker chooses the same policy under both beliefs.
Last, our model belongs to the vast literature on biases in belief updating (see Benjamin 2019 for a review). It is noteworthy, however, that we do not incorporate cognitive biases into an agent's belief updating process as a primitive. While a player in our model can misread the objective signal she receives, she follows Bayes' rule to form her posterior belief based on her perception, or subjective interpretation, of a signal.

2One exception we notice is Bénabou (2013), where a group of agents can choose how to interpret public signals about future prospects.