CUEPAQ: Visual Analytics and Linguistics for Capturing, Understanding, and Explaining Personalized Argument Quality

In our project, Visual Analytics and Linguistics for Capturing, Understanding, and Explaining Personalized Argument Quality (CUEPAQ), we combine methods from visual analytics and computational linguistics to develop new approaches for analyzing argument quality in terms of various metrics at different levels of linguistic analysis. Based on this analysis, we provide so-called preference profiles, which enable users to gain insights into their personal argumentation behavior and to compare it with the behavior of other users.

This project's main goal is to capture, understand, and explain the perceived quality of arguments. To that end, we collect various stylistic, content-related, and semantic features that influence how arguments are framed and perceived. The project's central question is how these features interact to produce arguments that are perceived as high-quality.
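To make the notion of such features concrete, the following minimal Python sketch computes a few illustrative surface-level features. The feature inventory here (hedge words, discourse connectives, lexical diversity, sentence length) is our own assumption for illustration and not the project's actual feature set.

```python
# Minimal sketch of surface-level feature extraction for an argument.
# The features (sentence length, lexical diversity, hedges, connectives)
# are illustrative placeholders, not CUEPAQ's actual feature inventory.
import re

HEDGES = {"might", "could", "perhaps", "possibly", "likely"}
CONNECTIVES = {"because", "therefore", "however", "thus", "since"}

def extract_features(argument: str) -> dict[str, float]:
    """Map an argument text to a small stylistic feature vector."""
    sentences = [s for s in re.split(r"[.!?]+", argument) if s.strip()]
    tokens = re.findall(r"[a-z']+", argument.lower())
    n = max(len(tokens), 1)
    return {
        "avg_sentence_len": len(tokens) / max(len(sentences), 1),
        "type_token_ratio": len(set(tokens)) / n,
        "hedge_rate": sum(t in HEDGES for t in tokens) / n,
        "connective_rate": sum(t in CONNECTIVES for t in tokens) / n,
    }

if __name__ == "__main__":
    arg = ("We should adopt the policy because it could reduce costs. "
           "However, the evidence is perhaps not conclusive.")
    print(extract_features(arg))
```

In a full pipeline, such surface features would sit alongside deeper content and semantic features; the point of the sketch is only that each argument is mapped to a numeric feature vector over which preferences can be compared.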

In answering this question, we contribute to research on argument quality a visual analytics framework for rating and ranking arguments. The system enables rapid analysis of the interactions between argument quality and the linguistic expression of an argument. Our framework extracts preference profiles, which capture users' annotation behavior by indicating the content and stylistic features that most strongly affect their argument ratings. These preference profiles may vary from user to user or across user groups. To account for this, our analysis of argument quality incorporates both expert knowledge on annotating argument quality and the ratings of non-expert users.

Based on relative preference comparisons between arguments, the system extracts patterns of linguistic features, both stylistic and interpretational. These features are expected to capture the users' preferences and should therefore be reflected in their rating behavior. This externalized knowledge is visualized through specific guidance strategies, allowing the user and the system to learn from each other. Since the system keeps track of the annotation behavior of different users, this co-adaptive process allows a user not only to understand their own argumentation preferences but also to compare them with those of other users of the system, as well as with expert opinions on high-quality argumentation.
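One plausible way to operationalize learning a preference profile from such pairwise comparisons is logistic regression on feature differences, a Bradley-Terry-style formulation. The sketch below illustrates this idea; the model choice, learning rate, and toy data are assumptions for illustration, not necessarily CUEPAQ's actual method.

```python
# Sketch: learn a per-user preference profile from pairwise comparisons,
# modeled as logistic regression on feature differences (a Bradley-Terry-
# style formulation). Illustrative only, not CUEPAQ's actual model.
import numpy as np

def fit_preference_profile(pairs, lr=0.1, epochs=200):
    """pairs: list of (features_of_preferred, features_of_rejected).

    Returns a weight vector w; w[i] > 0 means feature i increases
    the probability that an argument is preferred by this user.
    """
    dim = len(pairs[0][0])
    w = np.zeros(dim)
    for _ in range(epochs):
        for winner, loser in pairs:
            diff = np.asarray(winner) - np.asarray(loser)
            # Model's probability that the winner is preferred;
            # gradient ascent on the log-likelihood pushes it toward 1.
            p = 1.0 / (1.0 + np.exp(-w @ diff))
            w += lr * (1.0 - p) * diff
    return w

# Toy usage with two features, e.g. (hedge_rate, connective_rate).
pairs = [([0.1, 0.3], [0.4, 0.1]),
         ([0.0, 0.2], [0.3, 0.0])]
profile = fit_preference_profile(pairs)
print(profile)  # negative weight on hedging, positive on connectives
```

Under this formulation, the learned weight vector itself is the preference profile: it can be visualized directly, compared across users or against an expert-derived profile, and updated incrementally as new comparisons arrive, which matches the co-adaptive loop described above.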

In this project, we use computational linguistic methods to explore the relationship between linguistic choices and the ranking of arguments by users and by systems grounded in expert opinion. Concretely, we contribute to the uniform annotation of linguistic features of arguments that are relevant to judging argument quality.