CAML-2: Causality, Argumentation and Machine Learning
While traditional machine learning methods have led to great advances in artificial intelligence, a number of obstacles complicate their wider adoption. Pearl argues that these obstacles arise from the fact that current machine learning methods are associational learning systems that do not attempt to understand cause-and-effect relationships (Pearl, 2018). To overcome these obstacles, machines must therefore be equipped with the ability to reason about cause and effect.

The overall objective of the project CAML-2 is to use formal argumentation techniques for causal machine learning. While large amounts of observed data are increasingly easy to obtain, it is well known that causal relationships cannot, in general, be discovered on the basis of observed data alone (the first sketch below illustrates this underdetermination). Causal discovery depends on additional knowledge, such as background knowledge provided by domain experts or the results of controlled experiments. A machine that reasons about cause and effect must therefore interact with users: given a query formulated by the user, it must be able to explain its answer, show which additional knowledge is necessary to arrive at an answer, and integrate knowledge provided by the user.

Formal argumentation provides tools that facilitate this kind of interaction. In formal argumentation, answers to queries are accompanied by a dialectical analysis showing why arguments in support of a conclusion are preferred to counterarguments (see the second sketch below). Such an analysis acts as an explanation for an answer and can, moreover, be made interactive by allowing users to pinpoint errors or missing information and to formulate new arguments that are then integrated into the analysis.

More concretely, we will develop an approach for learning causal models from observed data that, in particular, supports reasoning about conflicting causal models by weighing arguments for and counterarguments against candidate models. The approach allows background knowledge to be injected, and expert information can be elicited through an interactive dialogue system. To address problems of uncertainty and noisy information, we will consider not only qualitative approaches to formal argumentation but also quantitative ones that are able to argue about probabilities.

An important notion in causal discovery (and in machine learning in general) is that of an intervention, i.e., a "manual" setting of a variable to a certain value (see the third sketch below). Interventions make it possible to guide the learning algorithm in the appropriate direction. We will model interventions through the counterfactual reasoning methods of formal argumentation, yielding a comprehensive framework for interactive argumentative causal discovery. We will focus on the setting of supervised learning, but will also consider our approach in an unsupervised setting, in particular for clustering and density estimation.
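The first sketch illustrates why observed data alone underdetermine causal structure. It contrasts two hypothetical linear-Gaussian models, one in which X causes Y and one in which Y causes X, that induce exactly the same joint distribution over (X, Y); no statistic computed from samples can tell them apart. The variable names and parameters are invented for the example and are not part of the project description.

```python
import random
import statistics  # statistics.correlation requires Python >= 3.10

# Two linear-Gaussian models over (X, Y). Both induce the same joint
# distribution -- Var(X) = 1, Var(Y) = 2, Cov(X, Y) = 1 -- so samples
# alone cannot reveal which variable is the cause.

def sample_x_causes_y():
    x = random.gauss(0, 1)                    # X := N(0, 1)
    y = x + random.gauss(0, 1)                # Y := X + N(0, 1)
    return x, y

def sample_y_causes_x():
    y = random.gauss(0, 2 ** 0.5)             # Y := N(0, 2)
    x = y / 2 + random.gauss(0, 0.5 ** 0.5)   # X := Y/2 + N(0, 1/2)
    return x, y

for name, sampler in [("X -> Y", sample_x_causes_y),
                      ("Y -> X", sample_y_causes_x)]:
    xs, ys = zip(*(sampler() for _ in range(100_000)))
    print(name,
          round(statistics.variance(xs), 2),       # ~1.0 in both models
          round(statistics.variance(ys), 2),       # ~2.0 in both models
          round(statistics.correlation(xs, ys), 2))  # ~0.71 in both models
```

Since every observable statistic agrees across the two models, orienting the edge between X and Y requires exactly the kind of additional knowledge (expert input or experiments) described above.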
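The second sketch makes the dialectical analysis concrete. It is a minimal implementation of a Dung-style abstract argumentation framework together with the computation of its grounded extension, the set of arguments that can ultimately be defended against all attackers. The three arguments and their attack relation are illustrative assumptions, not content from the project description.

```python
# Minimal Dung-style abstract argumentation: a set of arguments and an
# attack relation. The grounded extension is the least fixed point of
# the characteristic function: an argument is accepted once every one
# of its attackers is itself attacked by an already accepted argument.

def grounded_extension(arguments, attacks):
    accepted = set()
    changed = True
    while changed:
        changed = False
        for a in arguments - accepted:
            attackers = {x for (x, y) in attacks if y == a}
            # Unattacked arguments are accepted vacuously; others only
            # once all their attackers are defeated by accepted arguments.
            if all(any((d, x) in attacks for d in accepted) for x in attackers):
                accepted.add(a)
                changed = True
    return accepted

# Illustrative arguments about a candidate causal edge X -> Y:
#   a1: conditional-independence tests on the data support "X causes Y"
#   a2: a latent confounder could explain the same dependence (attacks a1)
#   a3: a domain expert rules out that confounder (attacks a2)
args = {"a1", "a2", "a3"}
attacks = {("a2", "a1"), ("a3", "a2")}
print(grounded_extension(args, attacks))  # {'a1', 'a3'}
```

The output carries the dialectical explanation described above: a1 is accepted not in isolation, but because its counterargument a2 is defeated by the expert argument a3; a user who disagrees with a3 can attack it with a new argument and rerun the analysis.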
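The third sketch shows an intervention on a toy structural causal model, illustrating the difference between observing a value and "manually" setting it. The model (X causes Y) and its parameters are assumptions made only for this example.

```python
import random

# Toy structural causal model: X := N(0, 1), Y := 2*X + N(0, 0.1).
# An intervention do(Y = y0) replaces the structural equation for Y,
# severing the influence of X on Y.

def sample(do_y=None):
    x = random.gauss(0, 1)
    y = do_y if do_y is not None else 2 * x + random.gauss(0, 0.1)
    return x, y

obs = [sample() for _ in range(100_000)]
intv = [sample(do_y=1.0) for _ in range(100_000)]

# Conditioning: among observed samples with Y near 1, X clusters near
# 0.5, because in this model Y carries information about its cause X.
near = [x for x, y in obs if abs(y - 1.0) < 0.05]
print(sum(near) / len(near))                # ~0.5

# Intervening: under do(Y = 1), X keeps its marginal distribution,
# revealing that Y does not cause X.
print(sum(x for x, _ in intv) / len(intv))  # ~0.0
```

In the project's terms, this is how interventions guide the learning algorithm: the interventional sample discriminates between causal structures that, as the first sketch shows, the observed data alone cannot separate.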