Invited Talk

Prof. Henry Prakken

Department of Information and Computing Sciences, Utrecht University, and Faculty of Law, University of Groningen, The Netherlands

Abstract: 

While machine learning has recently led to spectacular successes in AI, a major issue is the black-box nature of many machine-learning applications. This is especially problematic in applications to decision-making with legal, ethical or social implications. In response, a new subfield of AI, called 'explainable AI', has emerged. After a brief overview of this field, I will discuss my recent work with Rosa Ratsma on applying case-based argumentation techniques to explaining the outputs of machine-learning-based decision-making applications. The model draws on AI & law research on argumentation with cases, which models how lawyers draw analogies to past cases and discuss their relevant similarities and differences in terms of relevant factors and dimensions in the problem domain. A case-based approach is natural since the input data of machine-learning applications can be seen as cases. While the approach is motivated by legal decision making, it also applies to other kinds of decision making, such as commercial decisions about loan applications or employee hiring, as long as the outcome is binary and the input conforms to the factor or dimension format assumed by the model.
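As a rough illustration of the kind of factor-based case comparison the talk builds on (this is only a toy sketch, not the formal model of the paper below), the following Python example represents cases as sets of binary factors, retrieves a precedent with the same outcome as the one predicted for the new case that shares the most factors with it, and reports the shared and distinguishing factors. All names, including the loan-application factors, are hypothetical.

    # Toy sketch of factor-based precedent comparison (illustrative only;
    # not the formalisation by Prakken & Ratsma).
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Case:
        name: str
        factors: frozenset   # factors present in the case (binary view)
        outcome: str          # e.g. "grant" or "deny"

    def explain_by_precedent(focus: Case, case_base: list) -> dict:
        """Pick the precedent with the same outcome that shares the most
        factors with the focus case; list similarities and differences."""
        candidates = [c for c in case_base if c.outcome == focus.outcome]
        best = max(candidates, key=lambda c: len(c.factors & focus.factors))
        return {
            "precedent": best.name,
            "shared_factors": sorted(best.factors & focus.factors),
            "only_in_precedent": sorted(best.factors - focus.factors),
            "only_in_focus_case": sorted(focus.factors - best.factors),
        }

    if __name__ == "__main__":
        case_base = [
            Case("P1", frozenset({"stable_income", "low_debt"}), "grant"),
            Case("P2", frozenset({"stable_income", "prior_default"}), "deny"),
        ]
        focus = Case("new_applicant",
                     frozenset({"stable_income", "low_debt", "short_history"}),
                     "grant")  # outcome predicted by the ML model
        print(explain_by_precedent(focus, case_base))

In such a scheme, the shared factors ground the analogy to the precedent, while the distinguishing factors are the points a lawyer could use to attack or defend the analogy; the talk's model develops this idea into a full argumentation framework and also handles dimensions rather than only binary factors.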

Reference:

H. Prakken & R. Ratsma, A top-level model of case-based argumentation for explanation: formalisation and experiments. Argument and Computation, first online 26 April 2021. DOI: 10.3233/AAC-210009.