Smiles are a fundamental facial expression for successful human-agent communication. The growing number of publications in this domain presents an opportunity for future research and design to be informed by a scoping review of the extant literature. This semi-automated review expedites the first steps toward the mapping of Virtual Human (VH) smile research. This paper contributes an overview of the status quo of VH smile research, identifies research streams through cluster analysis and prolific authors in the field, and provides evidence that a full scoping review is needed to synthesize the findings in the expanding domain of VH smile research. To enable collaboration, we provide full access to the refined VH smile dataset, keyword and author word clouds, as well as interactive evidence maps.
The aim of this study is to investigate conversational feedback that contains smiles and laughs. Firstly, we propose a statistical analysis of smiles and laughs used as generic and specific feedback in a corpus of French talk-in-interaction. Our results show that smiles of low intensity are preferentially used to produce generic feedback, while high-intensity smiles and laughs are preferentially used to produce specific feedback. Secondly, based on a machine learning approach, we propose a hierarchical classification of feedback to automatically predict not only the presence/absence of a smile but also the type of smile according to an intensity scale (low or high).
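The hierarchical scheme described above lends itself to a two-stage classifier: first predict smile presence, then predict intensity only where a smile was detected. The sketch below is a minimal illustration under assumed inputs, not the authors' implementation; the feature matrix X, the annotation arrays, and all class and variable names are hypothetical, and numpy arrays are assumed throughout.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class HierarchicalSmileClassifier:
    """Stage 1: smile vs. no smile. Stage 2: low vs. high intensity (smiles only)."""

    def __init__(self):
        self.presence_clf = RandomForestClassifier(random_state=0)
        self.intensity_clf = RandomForestClassifier(random_state=0)

    def fit(self, X, has_smile, intensity):
        # has_smile: 0/1 per sample; intensity: "low"/"high" (only used for smile samples)
        self.presence_clf.fit(X, has_smile)
        smile_idx = has_smile == 1
        self.intensity_clf.fit(X[smile_idx], intensity[smile_idx])
        return self

    def predict(self, X):
        presence = self.presence_clf.predict(X)
        labels = np.full(len(X), "no_smile", dtype=object)
        smile_idx = presence == 1
        if smile_idx.any():
            labels[smile_idx] = self.intensity_clf.predict(X[smile_idx])
        return labels
```

Training the second stage only on samples annotated as smiles mirrors the hierarchical design: intensity is never predicted where no smile is detected.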
Research documents gender differences in nonverbal behavior and negotiation outcomes. Women tend to smile more often than men and men generally perform better in economic negotiation contexts. Among nonverbal behaviors, smiling can serve various social functions, from rewarding or appeasing others to conveying dominance, and could therefore be extremely useful in economic negotiations. However, smiling has hardly been studied in negotiation contexts. Here we examine links between smiling, gender, and negotiation outcomes. We analyze a corpus of video recordings of participant dyads during mock salary negotiations and test whether women smile more than men and whether the amount of smiling can predict economic negotiation outcomes. Consistent with existing literature, women smiled more than men. There was no significant relationship between smiling and negotiation outcomes, and gender did not predict negotiation performance. Exploratory analyses showed that expected negotiation outcomes, which were strongly correlated with actual outcomes, tended to be higher for men than for women. Implications for the gender pay gap and future research are discussed.
We investigate the smiling synchrony of the French audio-video conversational corpora “PACO” and “Cheese!”. Together, the two corpora amount to 6 hours of recordings and comprise 25 face-to-face dyadic interactions annotated following the 5-level Smiling Intensity Scale proposed by Gironzetti et al. (2016). After introducing new indicators for characterizing synchrony phenomena, we find that almost all of the 25 interactions of PACO-CHEESE show strong and significant smiling synchrony behavior. In a second step, we investigate the evolution of the synchrony parameters throughout the interaction. No effect is found; rather, smiling synchrony appears to be present from the very start of the interaction and to remain unchanged throughout the conversation.
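The paper's actual synchrony indicators are not detailed in the abstract. As a generic illustration of how smiling synchrony between two interlocutors can be quantified, the sketch below computes the peak lagged cross-correlation of two smile-intensity time series (e.g., per-frame annotations on the 5-level scale); it is an assumption, not the measure used in the study, and all names are hypothetical.

```python
import numpy as np

def smile_synchrony(a, b, max_lag):
    """Peak normalized cross-correlation between two smile-intensity series
    within +/- max_lag frames; values near 1.0 indicate strong synchrony.
    Assumes len(a) == len(b) > max_lag."""
    a = (a - a.mean()) / (a.std() + 1e-12)  # z-score each series
    b = (b - b.mean()) / (b.std() + 1e-12)
    n = len(a)
    corrs = []
    for lag in range(-max_lag, max_lag + 1):
        if lag < 0:
            c = np.dot(a[:lag], b[-lag:]) / (n - abs(lag))
        elif lag > 0:
            c = np.dot(a[lag:], b[:-lag]) / (n - abs(lag))
        else:
            c = np.dot(a, b) / n
        corrs.append(c)
    return max(corrs)
```

Allowing a small lag window accounts for the fact that one interlocutor's smile may follow the other's with a short delay rather than co-occurring exactly.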
The development of virtual agents has enabled human-avatar interactions to become increasingly rich and varied. Moreover, an expressive virtual agent, i.e., one that mimics the natural expression of emotions, enhances social interaction between a user (human) and an agent (intelligent machine). The set of non-verbal behaviors of a virtual character is, therefore, an important component in the context of human-machine interaction. Laughter is not just an audio signal but an intrinsically multimodal form of non-verbal communication: in addition to audio, it includes facial expressions and body movements. Motion analysis often relies on a relevant motion capture dataset, but the main issue is that the acquisition of such a dataset is expensive and time-consuming. This work studies the relationship between laughter and body movements in dyadic conversations between two interlocutors. The body movements were extracted from videos using a deep-learning-based pose estimation model. We found that, in the explored NDC-ME dataset, a single statistical feature (i.e., the maximum value, or the maximum of the Fourier transform) of a joint movement weakly correlates with laughter intensity (about 30%). However, we did not find a direct correlation between audio features and body movements. We discuss the challenges of using such a dataset for the audio-driven co-laughter motion synthesis task.
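As an illustration of the kind of statistical feature mentioned above, the following sketch computes the maximum Fourier-transform amplitude of a joint trajectory and correlates it with per-clip laughter intensity. This is an assumed reconstruction rather than the authors' code; the input names are hypothetical, and trajectories are assumed to be 1-D numpy arrays of a joint coordinate over time.

```python
import numpy as np
from scipy.stats import pearsonr

def max_fft_amplitude(trajectory):
    """Maximum Fourier amplitude of a 1-D joint trajectory
    (e.g., vertical shoulder position over time)."""
    centered = trajectory - trajectory.mean()  # drop the DC offset
    return np.abs(np.fft.rfft(centered)).max()

def correlate_feature_with_intensity(joint_trajectories, laughter_intensity):
    """Pearson correlation between a per-clip movement feature and
    per-clip laughter-intensity annotations."""
    features = np.array([max_fft_amplitude(t) for t in joint_trajectories])
    r, p = pearsonr(features, laughter_intensity)
    return r, p
```

A weak correlation of roughly 0.3, as reported in the abstract, would correspond to the returned r under this kind of analysis.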
Background: Laughter is normally viewed as a spontaneous emotional expression of positive internal states; however, it more often serves as an intentional communicative tool, such as showing politeness, agreement and affiliation to others in daily interaction. Although laughter is a universal non-verbal vocalization that promotes social affiliation and maintains social bonds, its presence and usage are understudied in autism research. Limited research has focused on autistic children and found that they used laughter for expressing happiness and mirth, but rarely used it for social purposes compared to their neurotypical (NT) peers. To date, no research has included autistic adults. Objectives: The current study aims to investigate 1) the difference in laughter behaviour between pairs of one autistic and one neurotypical adult (MIXED dyads) and age-, gender- and IQ-matched pairs of two neurotypical adults (NT dyads); 2) whether the closeness of relationship (Friends/Strangers) would influence laughter production between MIXED and NT dyads. Method: In total, 27 autistic and 66 neurotypical adults were recruited and paired into 30 MIXED and 29 NT dyads in the Stranger condition and 7 MIXED dyads and 12 NT dyads in the Friend condition. (Unfortunately, we were only able to recruit 4 AUTISM dyads in the Stranger condition and 2 AUTISM dyads in the Friend condition, so these were not included in the analysis.) We filmed all dyads engaged in a funny conversational task and a video-watching task, and their laughter behaviour was extracted, quantified and annotated. We calculated the Total duration of laughter, as well as the duration of all Shared laughter in each dyad. Results: Regardless of the closeness of relationship, MIXED dyads produced significantly less Total laughter than NT dyads in both the conversation and video-watching tasks. The same tendency was also found for Shared laughter, although participants shared more laughter during video-watching than conversation, and this tendency was more pronounced for NT than MIXED dyads. Strikingly, NT dyads produced more shared laughter when interacting with their friend than with a stranger during the video-watching task, whilst the amount of shared laughter in MIXED dyads did not differ when interacting with their friend or a stranger. Conclusions: Autistic adults paired with neurotypical adults generally used laughter less as a communicative signal than neurotypical pairs during social interaction. Neurotypical adult pairs specifically produced more shared laughter when interacting with their friend than a stranger, whilst the amount of shared laughter produced by mixed pairs was not affected by the closeness of the relationship. This may indicate that autistic adults show a different pattern of laughter production relative to neurotypical adults during social communication. However, it is also possible that a mismatch between autistic and neurotypical communication, and specifically in existing friendships, may have resulted in patterns of laughter more akin to that seen between strangers. Future research will study shared laughter between pairs of autistic friends to distinguish between these possibilities.
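The Total and Shared laughter measures described above amount to interval arithmetic over each dyad's laughter annotations. The sketch below is a hypothetical illustration of that computation, assuming each speaker's laughter is annotated as a list of non-overlapping (start, end) times in seconds; it is not the authors' pipeline.

```python
def total_duration(intervals):
    """Summed duration of one speaker's laughter intervals."""
    return sum(end - start for start, end in intervals)

def shared_duration(intervals_a, intervals_b):
    """Summed overlap between two speakers' laughter intervals,
    i.e., time during which both were laughing."""
    shared = 0.0
    for a_start, a_end in intervals_a:
        for b_start, b_end in intervals_b:
            # Overlap of two intervals, clipped at zero when they are disjoint.
            shared += max(0.0, min(a_end, b_end) - max(a_start, b_start))
    return shared
```

The nested loop is quadratic in the number of intervals, which is unproblematic for annotation files of this size; sorted interval lists would admit a linear-time sweep if needed.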
Genuine and posed smiles are important social cues (Song, Over, & Carpenter, 2016). Autistic individuals struggle to reliably differentiate between them (Blampied, Johnston, Miles, & Liberty, 2010; Boraston, Corden, Miles, Skuse, & Blakemore, 2008), which may contribute to their difficulties in understanding others’ mental states. An intergroup bias has been found in non-autistic adults in identifying genuine from posed smiles (Young, 2017). This is the first study designed to investigate whether autistic individuals show a different pattern when differentiating smiles for in-groups and out-groups. Fifty-nine autistic adults were compared with forty non-autistic adults, matched on sex, age and nonverbal IQ. Roughly half of each group was further randomly separated into two groups using a minimal group paradigm (adapted from Howard & Rothbart, 1980); although there was no real difference between the groups, participants were primed to believe they were more similar to their in-group. The ability to distinguish smiles was assessed on a 7-point Likert scale. We found that both the autism and non-autism groups rated genuine smiles as more genuine than posed smiles, and in-group smiles as more genuine than out-group smiles. Even though both groups identified themselves more as in-group than out-group members, autistic individuals were less likely to do so than non-autistic individuals. However, autistic participants generally rated smiles as less genuine than their non-autistic counterparts did. These results indicate that autistic adults are capable of distinguishing genuine smiles from posed smiles, contrary to previous findings; however, they may be less convinced of the genuineness of others, which may affect their subsequent social communication. Importantly, autistic adults were equally influenced by social intergroup biases, which could potentially be used in interventions to alleviate their social difficulties in daily life.
In this study we investigate the role of inhalation noises at the end of laughter events in two conversational corpora that provide relevant annotations. A re-annotation of the categories for laughter, silence and inbreath noises enabled us to see that inhalation noises terminate the majority of inspected laughs, with durations comparable to those of the inbreath noises that initiate speech phases. This type of corpus analysis helps to understand the mechanisms of audible respiratory activities in speaking versus laughing in conversations.
Smiling differences between men and women have been studied in psychology. Women smile more than men, although women are not uniformly more expressive across all facial actions. There are also body-movement differences between women and men. For example, more open body postures have been reported for men, but are there any body-movement differences between men and women when they laugh? To investigate this question, we study body-movement signals extracted from recorded laughter videos using a deep-learning pose estimation model. Initial results showed a higher Fourier-transform amplitude of thorax and shoulder movements for females, while males had a higher Fourier-transform amplitude of elbow movement. The differences were not limited to a small frequency range but covered most of the frequency spectrum. However, further investigations are still needed.
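Since the reported differences span most of the frequency spectrum, one plausible analysis is a per-frequency comparison of group-averaged amplitude spectra. The sketch below is an assumption-laden illustration rather than the study's code; the frame rate, group trajectory lists, and all names are hypothetical, and trajectories within a group are assumed to have equal length.

```python
import numpy as np

def amplitude_spectrum(trajectory, fps):
    """Per-frequency Fourier amplitude of a 1-D joint trajectory
    sampled at fps frames per second."""
    centered = trajectory - trajectory.mean()  # drop the DC offset
    amps = np.abs(np.fft.rfft(centered))
    freqs = np.fft.rfftfreq(len(centered), d=1.0 / fps)
    return freqs, amps

def mean_group_spectrum(trajectories, fps):
    """Average amplitude spectrum over one group's equal-length trajectories."""
    spectra = [amplitude_spectrum(t, fps)[1] for t in trajectories]
    freqs = amplitude_spectrum(trajectories[0], fps)[0]
    return freqs, np.mean(spectra, axis=0)

# Hypothetical usage: compare female vs. male shoulder-movement spectra.
# freqs, f_spec = mean_group_spectrum(female_shoulder_trajs, fps=25)
# _, m_spec = mean_group_spectrum(male_shoulder_trajs, fps=25)
# diff = f_spec - m_spec  # inspect the difference across the whole spectrum
```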
This exploratory study investigates the extent to which social context influences the frequency of laughter. In a within-subjects design, dyads of strangers played two simple laughter-inducing games in a cooperative and a competitive setting, ostensibly to earn money individually and as a team. We examined the frequency of laughs produced in both settings. The analysis revealed that the effects of cooperative versus competitive framing interacted with the game. Specifically, when playing a general knowledge quiz, participants tended to laugh more in the cooperative than in the competitive setting. However, the opposite was true when participants were asked to find a specific number of poker chips under time pressure: during this task, participants laughed more in the competitive than in the cooperative setting. Further analyses revealed that familiarity with the task affected the amount of laughter differently for each of the two tasks. Playing the second round of the poker-chips task was associated with a significant decrease in laughter frequency compared to the first round. This effect was less marked for the general knowledge quiz, where increased familiarity with the task in the second round led to more laughs in the cooperative, but not the competitive, setting. Together, the results highlight the flexibility of laughter as an interaction signal and illustrate the challenges of studying laughter in naturalistic settings.