ACCEPT: Perspectivized Argument Knowledge Graphs for Deliberation Support
Many decisions in daily life and in political or societal contexts are difficult to resolve and require deliberation of arguments for or against possible actions. In this process, the conflicting interests of the involved parties need to be balanced to achieve optimal outcomes, general acceptance and fairness.
Tools that can support humans in deliberation processes range from debating portals with pre-structured pro and con arguments on widely discussed issues to argument search engines with growing but still imperfect capacities for argument structure annotation. Real argumentation, however, goes far beyond coarse-grained structuring of pre-stated pro and con arguments. Finding optimal and widely accepted solutions requires a deep understanding of debated issues that surfaces i) potential implications of decisions, ii) how these affect interested parties, and iii) how to weigh potential consequences. Importantly, iv) we need to be able to address novel issues that have not been discussed before, and thus need to build argumentation systems that have access to knowledge resources and offer reasoning capabilities.
Computational argumentation is to date still far from achieving deep understanding of arguments. For example, tools for classifying support and attack relations perform well in a discourse setting, but our work shows that they crucially rely on discourse markers and perform poorly when these are masked from the input. Similarly, the Argument Reasoning Challenge was designed to enforce deeper reasoning over arguments, yet the good performance of BERT-based models turned out to rely on surface cues and dropped to chance level once these cues were eliminated. In prior work we have addressed these weaknesses by integrating background knowledge in an interpretable way, using advanced deep learning methods, and uncovering implicit information. But more research is needed to endow systems with deep argument understanding, including reasoning about arguments and foreseeing consequences of actions, in order to support deliberation.
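The masking experiment mentioned above can be sketched as follows. This is a minimal, hypothetical illustration (the marker list and function name are our own, not the project's actual code): known discourse markers are replaced by a mask token before the text is fed to a relation classifier, so that any performance drop can be attributed to reliance on those markers.

```python
# Hypothetical sketch: probe a support/attack classifier's reliance on
# discourse markers by masking them out of the input text.
DISCOURSE_MARKERS = {"because", "therefore", "however", "but", "since", "thus"}

def mask_discourse_markers(text, mask_token="[MASK]"):
    """Replace known discourse markers with a mask token, keeping other tokens."""
    return " ".join(
        mask_token if tok.lower().strip(".,;:!?") in DISCOURSE_MARKERS else tok
        for tok in text.split()
    )

masked = mask_discourse_markers("Ban plastic bags, because they pollute oceans.")
# The classifier is then evaluated on `masked` instead of the original text.
```

A real setup would mask at the level of a tokenizer's vocabulary rather than whitespace tokens, but the principle is the same.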
In this project we aim to move beyond our achievements in knowledge-based argument analysis: i) We extend our neural-symbolic methods to achieve deeper understanding of arguments supported by background knowledge, including the induction of deeper relations between argument components that enable enhanced argument explicitation. ii) We represent, contextualize and enrich arguments in a multi-factorial argument knowledge graph that goes beyond current work by including stakeholder perspectives together with their interests, values and goals. iii) Building on this argument graph, we develop methods that reason to derive novel conclusions from many perspectives, de- and recompose arguments to perform perspectivized argument graph completion, and analyze debated issues from multiple perspectives while learning to reason towards alternative premises, in order to offer interpretable deliberation support.
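To make the graph representation in ii) concrete, here is a minimal sketch, under our own assumptions, of how arguments on an issue might be annotated with stakeholder perspectives, values and stances. All class and field names are hypothetical illustrations, not the project's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Argument:
    text: str
    stakeholder: str  # whose perspective the argument takes
    value: str        # underlying value or goal, e.g. "sustainability"
    stance: str       # "pro" or "con" toward the debated issue

@dataclass
class IssueGraph:
    issue: str
    arguments: list = field(default_factory=list)

    def add(self, arg: Argument):
        self.arguments.append(arg)

    def perspectives(self):
        """Group (stance, value) pairs by stakeholder: a first step
        toward analyzing an issue from multiple perspectives."""
        view = {}
        for a in self.arguments:
            view.setdefault(a.stakeholder, []).append((a.stance, a.value))
        return view

g = IssueGraph("Should single-use plastics be banned?")
g.add(Argument("Plastic waste harms marine life.",
               "environmental groups", "sustainability", "pro"))
g.add(Argument("A ban raises packaging costs.",
               "retailers", "economic viability", "con"))
```

Calling `g.perspectives()` yields one entry per stakeholder, so conflicting interests on the same issue become directly comparable, which is the precondition for the perspectivized reasoning and graph completion described in iii).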