I characterize the unique optimal learning strategy when there are two information sources, three possible states of the world, and learning is modeled as a search process. The optimal strategy consists of two phases. During the first phase, only beliefs about the state and the objective characteristics of information sources matter for the optimal choice between these sources. During the second phase, this choice also depends on how much the agent values different alternatives he has to choose from. The information sources are substitutes when each individual source is likely to reveal the state eventually and when the cost of information is low, and they are complements otherwise. Optimal delegation of information collection leads to the socially optimal outcome.
We present a dynamic model that illustrates three forces shaping the effect of overconfidence (overprecision of consumed information) on the amount of collected information. The first force comes from overestimating the precision of the next consumed piece of information. The second force is related to overestimating the precision of already collected information. The third force reflects the discrepancy between how much information the agent expects to collect and how much information he actually collects in expectation. The first force pushes an overconfident agent to collect more information, while the second and third forces push in the opposite direction.
We adapt the cognitive hierarchy (CH) model to the belief formation process in a network game. In contrast to the classical CH model, we do not require the belief distribution f over levels of thinking to be consistent with the realized distribution. In particular, we assume that everybody is of level infinity. We show that for any ε > 0 we can construct an example with a sufficiently connected network (so that there is a path from any player to any other player) such that, even if the distribution f places probability 1 - ε on the event that everybody is of level infinity, beliefs do not converge and players permanently disagree. The most surprising part of our predictions is that the players, although all of level infinity, never learn that they are this sophisticated, even though they all have a very strong prior on this event. This echoes Rubinstein's famous E-mail Game (Rubinstein (1989)), where the prediction under "almost common knowledge" is very different from the equilibrium prediction that assumes common knowledge.
Many experiments demonstrate that individuals' choice decisions are inconsistent. Following Luce (1959) and Block, Marschak, et al. (1960), a random choice approach to this problem has become very popular. It posits the existence of a probabilistic choice function that describes the probability of choosing an alternative from a given set of options. This paper contributes to the theoretical literature that narrows the class of random choice functions. Each alternative can be fully characterized by a vector in an n-dimensional space. Each time the decision maker faces a set of alternatives to choose from, he pays attention only to a randomly chosen subset of coordinates (or criteria). Given this randomly chosen subset, he is perfectly rational; that is, he chooses according to some strict preference ordering. For this procedure to be well defined, the preference ordering must be separable with respect to the criteria. In other words, the decision maker's preference between any two alternatives should not depend on the characteristics that these alternatives have in common. This paper characterizes all systems of choice probabilities that are induced by this choice procedure.
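The choice procedure described above can be sketched in a few lines. This is an illustrative assumption-laden example, not the paper's notation: alternatives are vectors, the attention distribution over criterion subsets is given exogenously, and the separable strict preference is taken to be maximization of the sum of attended coordinates.

```python
# Hypothetical sketch of the random-attention choice procedure: a decision
# maker attends to a random subset of criteria, then chooses rationally
# given those criteria. The additively separable utility (sum of attended
# coordinates) is an assumed stand-in for "some strict preference ordering".

def choice_probabilities(alternatives, subset_prob):
    """Return the induced probability of choosing each alternative.

    alternatives: list of n-tuples (the menu), assumed to have no ties
                  under any attended subset
    subset_prob:  dict mapping frozenset of criterion indices -> probability
    """
    probs = {a: 0.0 for a in alternatives}
    for subset, p in subset_prob.items():
        # Perfectly rational choice given the attended criteria.
        best = max(alternatives, key=lambda a: sum(a[i] for i in subset))
        probs[best] += p
    return probs

# Example: two alternatives, two criteria, uniform attention over the
# three nonempty subsets of criteria.
menu = [(3, 0), (1, 3)]
attention = {frozenset({0}): 1/3, frozenset({1}): 1/3, frozenset({0, 1}): 1/3}
probs = choice_probabilities(menu, attention)
# (3, 0) wins only when criterion 0 alone is attended, so it is chosen
# with probability 1/3 and (1, 3) with probability 2/3.
```

The resulting map from menus to choice probabilities is exactly the kind of object the characterization restricts.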
The theoretical paper of DeMarzo, Vayanos, and Zwiebel (2003) proposes a model of information aggregation in networks when individuals are subject to persuasion bias. The term "persuasion bias" refers to a particular form of boundedly rational behavior in which individuals connected in a network do not account for repetition in the information they acquire. We argue that, under the assumption that agents form their beliefs as a weighted average of all information available to them, the persuasion bias assumption is equivalent to a generalized version of the well-known DeGroot model (DeGroot (1974)). We test the persuasion bias hypothesis against the (generalized) Bayesian updating model and find support for the persuasion bias hypothesis. We also find a positive correlation between how well a subject fits the generalized DeGroot model, relative to the alternative generalized Bayesian updating model, and the subject's performance in the experiment. The data suggest that the generalized DeGroot model better accommodates other subjects' deviations from equilibrium, which explains the positive correlation.
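The DeGroot (1974) benchmark that the persuasion-bias model generalizes can be sketched as follows. The three-agent network and its weights are illustrative assumptions; the mechanics (beliefs replaced each round by a weighted average of neighbors' beliefs under a row-stochastic matrix) are the standard DeGroot rule.

```python
# Minimal sketch of DeGroot updating: x' = W x, where W is a row-stochastic
# matrix of listening weights. Agents do not discount repeated information,
# which is the "persuasion bias" behavior discussed above.

def degroot_step(W, x):
    """One round of updating: each belief becomes a weighted average."""
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

# Assumed example: three agents on a line, each listening to self and neighbors.
W = [
    [0.50, 0.50, 0.00],
    [0.25, 0.50, 0.25],
    [0.00, 0.50, 0.50],
]
x = [1.0, 0.0, 0.0]  # initial beliefs: only agent 0 observed a signal

for _ in range(200):
    x = degroot_step(W, x)
# Beliefs converge to a consensus weighted by network influence; here the
# influence vector is (0.25, 0.5, 0.25), so the consensus is 0.25.
```

Repetition is double-counted because agent 0's signal reaches agent 2 through agent 1 in every round, which is precisely what a Bayesian updater would correct for.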
This paper develops a dynamic model of information search in continuous time, using Brownian motion to model gradual learning. In a symmetric environment, the optimal strategy is to choose the source that is most likely to "confirm" the current beliefs: the individual always prefers the information source that differentiates the most likely state from all other states.
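The gradual-learning mechanics can be illustrated with a simple Euler discretization. Everything here is an assumed toy setup, not the paper's model: three states, a single source whose signal has state-dependent drift and unit volatility, and beliefs updated by Bayes' rule on each Gaussian increment.

```python
import math
import random

# Assumed discretization of learning from a Brownian signal
# dX = mu[state] dt + sigma dB, with beliefs over three states
# updated by Bayes' rule on each observed increment.

def bayes_update(prior, increment, dt, drifts, sigma=1.0):
    """Posterior over states after observing one signal increment."""
    likes = [math.exp(-(increment - m * dt) ** 2 / (2 * sigma**2 * dt))
             for m in drifts]
    post = [p * l for p, l in zip(prior, likes)]
    z = sum(post)
    return [p / z for p in post]

random.seed(0)
true_state, dt = 0, 0.01
drifts = [1.0, 0.0, -1.0]   # assumed: this source separates all three states
belief = [1/3, 1/3, 1/3]

for _ in range(20000):      # total observation time T = 200
    dX = drifts[true_state] * dt + math.sqrt(dt) * random.gauss(0, 1)
    belief = bayes_update(belief, dX, dt, drifts)
# With enough observation time, the belief concentrates on the true state.
```

A source that "differentiates the most likely state from all other states" is one whose drift under that state is far from its drift under every other state, so increments like these are maximally informative about the leading hypothesis.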