Most research articles studying how people learn to detect causal relationships in their environments commence with some sort of example to illustrate the relevance of causality in our daily lives. These examples allude to routine problems faced by doctors, economists, social psychologists, and others, and emphasize the importance of deepening our understanding of causal reasoning. Despite these frequent applied examples, it is somewhat surprising that research on causal learning has had only a modest impact in applied disciplines. After three decades or so of intense study, it is probably time to ask why this is the case. Plainly, we do not want causal learning to become a super-specialized topic, perfectly constructed but unable to generate useful knowledge of wider relevance. In our view, cross-boundary work to fulfill this ambition is being undertaken, but to make a full impact it requires a reformulation of the implicit paradigm for causal learning research.

The core of this paradigm is simple and can be summarized in two principles: first, causal knowledge can be assessed by means of verbal or numerical judgments of causal strength; and second, a single mental algorithm is sufficient to account for how environmental conditions (including covariation between cues and outcomes, time delays, and statistical interactions) map onto judgments. This research program has produced an impressive corpus of data (see  for a recent review), but also a current feeling that wider progress and impact are not being achieved. This special issue is a joint attempt to present a vision of how research on causal learning might develop in the future, and to push that process forward.

With regard to the first principle, it is important to acknowledge that judgments are not the only way to assess causal knowledge. Judgments reflect causal beliefs, and causal beliefs are probably the basis for other responses, such as decisions or interventions.
But decisions and interventions cannot be predicted on the basis of judgments alone: it would be naive to think that the causal beliefs reflected in simple causal judgments are the sole input to decision-making and intervention processes. Much effort is needed to ascertain how causal knowledge is employed in all of these competencies, so that we can build bridges between what we have discovered in recent decades and other aspects of behavior.

With regard to the second principle, we argue that a reconsideration of how theory needs to develop in the future is also necessary. Most researchers now accept that people interpret the world as causal, and build mental representations of their environment in which events are causally related. Still, these causal models must be constructed from some sort of evidence, and that evidence is provided by basic coding mechanisms capturing regularities in the environment. In other words, causal learning serves not only to detect and code statistical regularities, but also to uncover the hidden causal structure that generates them. For example, the correlation between smoking and lung cancer has been known for a long time. Nevertheless, some scientists and tobacco manufacturers denied the existence of a causal link between the two variables, because they believed that some other causal factor was responsible for the co-occurrence (for example, populations from certain social origins could be more likely both to smoke and to suffer cancer). Obviously, if smoking were not a direct cause of cancer, it would be useless to recommend that people quit smoking. In theoretical terms, we need some basic coding mechanism(s) to capture statistical regularities, and some other mechanism(s) to infer causal structures from them.
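The smoking example can be made concrete with a small simulation. In the hypothetical numbers below (all probabilities invented purely for illustration), a hidden common cause drives both smoking and cancer, and smoking has no direct effect at all. Smoking and cancer then covary unconditionally, yet the association vanishes once the common cause is held fixed — exactly the pattern to which the confounding objection appeals.

```python
import random

random.seed(0)

# Hypothetical generative model: a hidden confounder ("social origin")
# raises the probability of both smoking and cancer; cancer does NOT
# depend on smoking directly. All parameter values are invented.
n = 100_000
data = []
for _ in range(n):
    origin = random.random() < 0.5                         # hidden common cause
    smokes = random.random() < (0.7 if origin else 0.2)    # caused by origin
    cancer = random.random() < (0.3 if origin else 0.05)   # caused by origin only
    data.append((origin, smokes, cancer))

def p_cancer(rows):
    """Proportion of cases with cancer among the given rows."""
    return sum(c for _, _, c in rows) / len(rows)

smokers = [r for r in data if r[1]]
nonsmokers = [r for r in data if not r[1]]

# Unconditionally, smoking and cancer covary despite no direct causal link:
print("P(cancer | smoker)    =", round(p_cancer(smokers), 3))
print("P(cancer | nonsmoker) =", round(p_cancer(nonsmokers), 3))

# Conditioning on the confounder makes the association disappear:
for origin in (True, False):
    stratum = [r for r in data if r[0] == origin]
    s = [r for r in stratum if r[1]]
    ns = [r for r in stratum if not r[1]]
    print("origin =", origin,
          "| smoker:", round(p_cancer(s), 3),
          "nonsmoker:", round(p_cancer(ns), 3))
```

The raw contrast between smokers and nonsmokers is large, but within each level of the confounder it is essentially zero — the signature of a spurious, structure-induced covariation rather than a direct cause.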
Most psychologists and neuroscientists would accept that the brain is a hugely sophisticated form of connectionist net, capable of building quite reliable models of the regularities in our experiences and interactions with the world (see Moris, Cobos, & Luque's paper in this volume). Miller's comparator model, and Allan's recent work, emphasize the idea that basic coding processes, either associative or episodic, can generate representations of the world in which, given sufficient attentional resources, most relevant events, their conjunctions, their relations of time and order, and their statistical dependencies are conserved. Additionally, a number of algorithms have been postulated in artificial intelligence that are capable of using the sort of output generated by these basic coding mechanisms to build structural causal representations [4, 5]. The limits of bounded rationality, and current research, indicate, however, that the use of such algorithms requires managing quantities of information beyond human processing limits. Causal induction is therefore also a learning problem: certain second-order cues (abstract features of the interrelations among cues) can indicate what is a cause and what is not. We, and most of the authors in this volume, support this cues-to-causality approach (see the papers by Lagnado & Speekenbrink, and Hagmayer et al. in this volume, and , for more detailed discussions). Much research is needed to ascertain how we learn to manage these cues, the quantitative and qualitative role of cue information in the formation of causal beliefs and in overt behavior, and how cues interact when more than one is present simultaneously.
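One way to appreciate why exhaustive structure search exceeds human processing limits is simply to count the candidate structures. The sketch below (our illustration, not drawn from any paper in this issue) uses Robinson's standard recurrence for the number of labelled directed acyclic graphs on n variables, a quantity that grows super-exponentially: 25 structures over three variables, 29,281 over five.

```python
from math import comb

def count_dags(n):
    """Number of labelled directed acyclic graphs on n nodes.

    Robinson's recurrence: a(n) = sum over k of
    (-1)^(k+1) * C(n, k) * 2^(k(n-k)) * a(n-k), with a(0) = 1,
    where k counts the nodes with no incoming edges.
    """
    if n == 0:
        return 1
    return sum((-1) ** (k + 1) * comb(n, k) * 2 ** (k * (n - k)) * count_dags(n - k)
               for k in range(1, n + 1))

# The candidate space explodes long before reaching realistic problem sizes:
for n in range(1, 7):
    print(n, "variables:", count_dags(n), "possible causal structures")
```

Even at six variables there are nearly four million candidate causal structures, which is why an unaided search over all of them is implausible as a psychological mechanism, and why learned second-order cues are an attractive shortcut.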