Trust Games
From Chris Frith's paper Consciousness, (meta)Cognition, and Culture:
Instructions, and culture, work, usually via language, to apply precision control at the top level of the hierarchy.* [For example, in] studies of trust games . . . one player transfers money to a partner in the hope that this person will return the money with interest. To succeed in this game, you need to learn to distinguish between those who can be trusted and those who will just take your money. In most studies, this learning occurs slowly during direct interactions. However, you can also learn very quickly whom to trust through information from the experimenter or from gossiping with other players. Such instructions change your behaviour. You invest in those you are told are trustworthy and pay less attention to their actual behaviour. . . .
In the early stages of learning, the precision of the prior belief is much lower than the precisions of the evidence, since we don’t yet know how trustworthy the people are. We must attend closely to their behaviour. This pattern changes if we are told how trustworthy the various people are. Now, the precision of our prior belief is high, higher than the precision of the evidence that we collect on each trial. In other words, we no longer need to attend closely to the behaviour of our partners because we know precisely how trustworthy they are. Still, the responses of our partner can vary from trial to trial, and they do not always return the money. This results in a prediction error. However, we treat it as irrelevant noise. Remarkably, such prediction errors no longer elicit increased activity in the striatum. This is an example of precision control. The instruction about trustworthiness has altered the balance between prior belief and evidence by changing their relative precision . . .
We now see that top-level priors can be very quickly changed by top-down messages from other people. There are good reasons for this asymmetry. Top-level priors concern complex, abstract concepts, such as trustworthiness. Evidence for such concepts is difficult to collect and needs much experience. It takes a long time to learn such things directly by trial and error. We can get more precise priors from other people who have had more experience. We can get even better estimates from our cultural milieu since this encompasses the experience of many people over a long time. As a result, outside influences can come to dominate over direct experience.
There is sympathy in these quotations for both sides of the debate around guidance in education. On the one hand, it is not silly to be worried about the impact of explaining concepts to students fully (top down) prior to students' personal experience (bottom up) with said concepts. In some contexts, as we have just seen, altering or instantiating prior beliefs in this way can indeed lead us to ignore our own personal judgments and pay less attention to the evidence in front of us. On the other hand, without this explanatory bootstrapping, we would not be able to establish many adaptive prior beliefs, especially those that are complex or abstract.
And even if we were able to arrive at such beliefs "on our own," we would still need corroboration from others to validate our understanding.
* What is meant by "precision control at the top level of the hierarchy"? In this case, think of the top level as prior belief (what do I know about X going in; or, better, what do I know about X from others?), whereas the bottom level is sensory input or evidence (what does my personal experience tell me about X?). Precision is, roughly, the strength or reliability of a signal. So instructions and culture work by adjusting how heavily prior belief is weighted as a factor in cognition.
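The balance Frith describes can be sketched numerically. The following is a hypothetical illustration, not anything from Frith's paper: it assumes beliefs are Gaussian and uses the standard rule that a posterior mean is the precision-weighted average of prior and evidence. All names and numbers are illustrative.

```python
# Illustrative sketch of precision-weighted belief updating (assumed
# Gaussian beliefs, where precision = 1 / variance). Not from the source.

def update_belief(prior_mean, prior_precision, evidence_mean, evidence_precision):
    """Combine a prior belief with new evidence, weighting each by its precision."""
    total_precision = prior_precision + evidence_precision
    posterior_mean = (prior_precision * prior_mean
                      + evidence_precision * evidence_mean) / total_precision
    return posterior_mean, total_precision

# Belief that a partner is trustworthy, on a 0-to-1 scale.
# One trial's evidence: the partner kept the money (signal 0.0).
evidence_mean, evidence_precision = 0.0, 1.0

# Early learning: a vague prior (low precision), so the evidence
# moves the belief substantially.
naive, _ = update_belief(0.5, 0.5, evidence_mean, evidence_precision)

# After instruction ("this partner is trustworthy"): a precise prior,
# so the same prediction error is treated as noise and barely shifts belief.
instructed, _ = update_belief(0.9, 20.0, evidence_mean, evidence_precision)

print(round(naive, 3))       # belief drops well below 0.5
print(round(instructed, 3))  # belief stays close to 0.9
```

The same prediction error produces a large update under a weak prior and almost none under a precise, instruction-given prior, which is the "precision control" at work in the striatum finding above.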