SICon

Organizers: Kushal Chawla, Weiyan Shi, Maximillian Chen, Liang Qiu, Yu Li, James Hale, Alexandros Papangelis, Gale Lucas, Zhou Yu

Social influence is the change in an individual's thoughts, feelings, attitudes, or behaviors that results from interaction with another individual or a group. For example, a buyer uses social influence skills to engage in trade-offs and build rapport when bargaining with a seller. A therapist uses social influence skills like persuasion to motivate a patient toward physical exercise. Social influence is a core function of human communication, and such scenarios are ubiquitous in everyday life, from negotiations to argumentation to behavioral interventions. Consequently, realistic human-machine conversations must reflect these social influence dynamics, making it essential to model and understand them systematically in dialogue research. This requires perspectives not only from NLP and AI research but also from game theory, emotion, communication, and psychology.

We are excited to host the First Workshop on Social Influence in Conversations (SICon 2023). SICon 2023 is a one-day hybrid event, co-located with ACL 2023. It is the first venue to foster a dedicated discussion on social influence within NLP while involving researchers from other disciplines such as affective computing and the social sciences. SICon 2023 features keynote talks, panel discussions, poster sessions, and lightning talks for accepted papers. We hope to bring together researchers and practitioners from a wide variety of disciplines to discuss important problems related to social influence, and to share findings and recent advances. We encourage researchers of all stages and backgrounds to share their exciting work!

Workshop Papers

Measuring Lexico-Semantic Alignment in Debates with Contextualized Word Representations
Authors: Aina Garí Soler, Matthieu Labeau, Chloé Clavel

Dialog participants sometimes align their linguistic styles, e.g., they use the same words and syntactic constructions as their interlocutors. We propose to investigate the notion of lexico-semantic alignment: to what extent do speakers convey the same meaning when they use the same words? We design measures of lexico-semantic alignment relying on contextualized word representations. We show that they reflect interesting semantic differences between the two sides of a debate and that they can assist in the task of debate winner prediction.
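As an illustration of the general idea (a hypothetical sketch, not the authors' exact measures), one way to score how similarly two debate sides use a shared word is to compare each side's average contextualized embedding of that word; the toy vectors below stand in for embeddings assumed to come from a model such as BERT:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def mean_vector(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(column) / n for column in zip(*vectors)]

def lexico_semantic_alignment(side_a, side_b):
    """Alignment for one word: cosine similarity between each debate
    side's average contextual embedding of that word. Values near 1.0
    suggest the two sides use the word with similar meanings."""
    return cosine(mean_vector(side_a), mean_vector(side_b))

# Toy 2-d "embeddings": the sides use the word identically...
same_usage = lexico_semantic_alignment([[1.0, 0.2]], [[1.0, 0.2]])
# ...versus in orthogonal (maximally different) contexts.
diff_usage = lexico_semantic_alignment([[1.0, 0.0]], [[0.0, 1.0]])
```

Real contextualized representations live in hundreds of dimensions, but the alignment computation is the same.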

Detoxifying Online Discourse: A Guided Response Generation Approach for Reducing Toxicity in User-Generated Text
Authors: Ritwik Bose, Ian Perera, Bonnie Dorr

The expression of opinions, stances, and moral foundations on social media often coincides with toxic, divisive, or inflammatory language that can make constructive discourse across communities difficult. Natural language generation methods could provide a means to reframe or reword such expressions in a way that fosters more civil discourse, yet current Large Language Model (LLM) methods tend towards language that is too generic or formal to seem authentic for social media discussions. We present preliminary work on training LLMs to maintain authenticity while presenting a community's ideas and values in a constructive, non-toxic manner.

Exploring Linguistic Style Matching in Online Communities: The Role of Social Context and Conversation Dynamics
Authors: Aparna Ananthasubramaniam, Hong Chen, Jason Yan, Kenan Alkiek, Jiaxin Pei, Agrima Seth, Lavinia Dunagan, Minje Choi, Benjamin Litterer, David Jurgens

Linguistic style matching (LSM) in conversations can reflect several aspects of social influence, such as power or persuasion. However, how LSM relates to the outcomes of online communication on platforms such as Reddit remains an open question. In this study, we analyze a large corpus of two-party conversation threads on Reddit, identifying all occurrences of LSM along two style dimensions: the use of function words and formality. Using this framework, we examine how levels of LSM differ across conversations depending on several social factors within Reddit: post and subreddit features, conversation depth, user tenure, and the controversiality of a comment. Finally, we measure the change in LSM following the loss of status after community banning. Our findings reveal the interplay of LSM in Reddit conversations with several community metrics, suggesting the importance of understanding conversational engagement when studying community dynamics.
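The function-word variant of LSM can be sketched with the formulation common in the LSM literature (the word list and details here are illustrative, not the study's exact setup):

```python
# Illustrative, heavily abridged function-word list; real LSM studies
# use full LIWC-style categories with hundreds of entries.
FUNCTION_WORDS = {
    "the", "a", "an", "of", "in", "on", "to", "and", "but",
    "i", "you", "it", "that", "this", "is", "are", "was",
}

def function_word_rate(text):
    """Fraction of whitespace tokens that are function words."""
    tokens = text.lower().split()
    return sum(t in FUNCTION_WORDS for t in tokens) / len(tokens)

def lsm_score(utterance_a, utterance_b):
    """Common LSM formulation: 1 - |pA - pB| / (pA + pB), where p is
    each speaker's function-word proportion (a small epsilon guards
    against division by zero). 1.0 = perfectly matched style."""
    pa = function_word_rate(utterance_a)
    pb = function_word_rate(utterance_b)
    return 1.0 - abs(pa - pb) / (pa + pb + 1e-9)
```

For example, two replies with identical function-word rates score 1.0, while a heavily function-worded reply paired with one containing none scores near 0.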

Eliciting Rich Positive Emotions in Dialogue Generation
Authors: Ziwei Gong, Qingkai Min, Yue Zhang

Positive emotion elicitation aims to evoke positive emotional states in human users in open-domain dialogue generation. However, most work focuses on inducing a single dimension of positive sentiment using human-annotated datasets, which limits the scale of the training data. In this paper, we propose to model various emotions in large unannotated conversations, such as joy, trust, and anticipation, by leveraging a latent variable to control the emotional intention of the response. Our proposed Emotion-Eliciting Conditional Variational AutoEncoder (EE-CVAE) model generates more diverse and emotionally intelligent responses than single-dimension baseline models in human evaluation.

Large Language Models respond to Influence like Humans
Authors: Lewis Griffin, Bennett Kleinberg, Maximilian Mozes, Kimberly Mai, Maria Do Mar Vau, Matthew Caldwell, Augustine Mavor-Parker

Two studies tested the hypothesis that a Large Language Model (LLM) can be used to model psychological change following exposure to influential input. The first study tested a generic mode of influence, the Illusory Truth Effect (ITE), in which earlier exposure to a statement boosts its rating in a later truthfulness test. Analysis of newly collected data from human and LLM-simulated subjects (1,000 of each) showed the same pattern of effects in both populations, although with greater per-statement variability for the LLM. The second study concerns a specific mode of influence: populist framing of news to increase its persuasiveness and political mobilization. Newly collected data from simulated subjects was compared to previously published data from a 15-country experiment on 7,286 human participants. Several effects from the human study were replicated by the simulated study, including some that surprised the authors of the human study by contradicting their theoretical expectations; however, some significant relationships found in the human data were not present in the LLM data. Together, the two studies support the view that LLMs have the potential to act as models of the effects of influence.

What Makes a Good Counter-Stereotype? Evaluating Strategies for Automated Responses to Stereotypical Text
Authors: Kathleen Fraser, Svetlana Kiritchenko, Isar Nejadgholi, Anna Kerkhof

When harmful social stereotypes are expressed on a public platform, they must be addressed in a way that educates and informs both the original poster and other readers, without causing offence or perpetuating new stereotypes. In this paper, we synthesize findings from psychology and computer science to propose a set of potential counter-stereotype strategies. We then automatically generate such counter-stereotypes using ChatGPT and analyze their correctness and expected effectiveness at reducing stereotypical associations. We identify denouncing stereotypes, warning of consequences, and using an empathetic tone as three promising strategies to be tested further.

BCause: Reducing group bias and promoting cohesive discussion in online deliberation processes through a simple and engaging online deliberation tool
Authors: Lucas Anastasiou, Anna De Liddo

Facilitating healthy online deliberation, in terms of sensemaking and collaboration among discussion participants, proves extremely challenging due to a number of known negative effects of online communication on social media platforms. Starting from concerns and aspirations about the use of existing online discussion systems, as distilled in previous literature, we combine them with lessons learned on design and engineering practices from our research team to inform the design of an easy-to-use tool (BCause.app) that enables higher-quality discussions than traditional social media. We describe the design of this tool, highlighting the main interaction features that distinguish it from common social media: (i) low-cost argumentation structuring of conversations with direct replies, and (ii) the distinctive use of reflective rather than appreciative-only feedback. We then present the results of a controlled A/B experiment showing that the presence of argumentative and cognitive-reflective discussion elements produces better social interaction, with less polarization, and promotes a more cohesive discussion than common social-media-like interactions.




© 2023 Association for Computational Linguistics