MARDY: Modeling ARgumentation DYnamics in Political Discourse
Bringing together expertise from Computational Linguistics, Machine Learning, and Political Science, the project aims to develop a framework for data-driven modeling of key aspects of argumentation dynamics in policy debates as they unfold over a period of days or weeks. The project is funded by the DFG within the Priority Programme 1999 – Robust Argumentation Machines (RATIO).
When considering public debates about a controversial issue, a key goal of political scientists and analysts is to judge the potential impact of proposals, claims or individual arguments. In this assessment, multiple aspects play a role. Each statement has to be considered against the prior state of the debate, and an analysis has to take into account which statements have found resonance and how actors have previously positioned themselves on related issues. How does a statement relate to statements from other actors who belong to the same discourse coalition or to the same policy network? What argumentative justification do speakers build up to back their claims? Moreover, the analyst will also account for the configuration of statements. How are several arguments combined into one coherent collective action frame that provides an interpretive scheme of the problem at hand and its solution?
The goal of the project is to support this type of analysis by developing computational models of argumentation in political discourse. Ideally, such models should reflect the factors mentioned above, and should be transparent and flexible enough for experienced analysts to experiment with the settings of relevant meta-parameters in their attempt to identify plausible interpretive schemes and draw adequate conclusions. This defines the building of scalable models for political argumentation as an inherently interdisciplinary problem between Political Science and Computer Science. The former could not achieve scalability and data-driven predictive modeling without the latter; and the latter could not design analytically adequate model architectures without continuous in-depth exchange with the former. Model development has to go hand in hand with the development of a novel, joint analytical (meta-)methodology. While it will not be possible any time soon for a computational model to capture the full bandwidth of relevant clues, interdependencies and analysts’ intuitions, we believe that it is possible now to pull together a number of recent research threads from different areas, preparing the ground for more comprehensive interdisciplinary modeling.
The project is a collaborative effort, combining expertise from computational linguistics/machine learning (Jonas Kuhn & Sebastian Padó, University of Stuttgart) and political science (Sebastian Haunss, University of Bremen).
The framework is intended to support:
- robust automatic identification of claims and justifications in news coverage (given labeled training data from a related content domain), including attribution to actors and coarse-grained thematic clustering,
- systematic arrangement of thematically related claims and justifications in an entailment hierarchy, guided by only small amounts of interactive user feedback (determining, inter alia, the granularity of relevant distinctions),
- representation of the resulting structure as actor/argument networks,
- exploration of the characteristics and dynamics of these networks in real policy debates, in particular their influence on the impact of different claims and justifications,
- transparent diagnostics and visualization for model interpretation and error analysis,
- predictive modeling of the course of a debate, to support (a) the development of theories of argumentation by retrospective experimental application to past debates and (b) the detection of unexpected turns in ongoing debates,
- expansion of the analytical machinery to content domains for which no labeled training data are available,
- meta-level insights into interdisciplinary collaborations between Computational Linguistics, Machine Learning and Political Science: Which computational model classes and algorithmic approaches can be integrated into systematic discourse studies? What methodological framework can best support substantive exchange across disciplines?
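To make the actor/argument networks mentioned above concrete, the following minimal Python sketch shows one common way such data is structured in discourse network analysis: statements link actors to claims with a stance, forming a bipartite actor/claim network, which can then be projected onto an actor–actor "congruence" network. All actor and claim names are invented toy data for illustration; this is not the project's actual model or corpus.

```python
from collections import defaultdict

# Toy statements (actor, claim, stance) -- illustrative names only.
statements = [
    ("Actor_A", "claim:cap_emissions", "support"),
    ("Actor_B", "claim:cap_emissions", "support"),
    ("Actor_C", "claim:cap_emissions", "oppose"),
    ("Actor_A", "claim:subsidize_renewables", "support"),
    ("Actor_C", "claim:subsidize_renewables", "oppose"),
]

def actor_congruence(statements):
    """Project the bipartite actor/claim network onto actors:
    two actors are linked with weight = number of claims on which
    they take the same stance (a simple congruence network)."""
    stance = defaultdict(dict)  # actor -> {claim: stance}
    for actor, claim, s in statements:
        stance[actor][claim] = s
    actors = sorted(stance)
    edges = {}
    for i, a in enumerate(actors):
        for b in actors[i + 1:]:
            shared = sum(
                1 for c in stance[a]
                if c in stance[b] and stance[a][c] == stance[b][c]
            )
            if shared:
                edges[(a, b)] = shared
    return edges

print(actor_congruence(statements))
# {('Actor_A', 'Actor_B'): 1} -- A and B agree on one claim; C agrees with no one
```

The congruence projection is only one possible view; an analogous "conflict" network (edges for opposing stances on the same claim) can be computed the same way, and tracking these networks over time windows is what gives access to the debate dynamics described above.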