Social Virtual Agents and Loneliness: Impact of Virtual Agent Anthropomorphism on Users' Feedbacks

Eloïse Zehnder 1,2, Jérôme Dinet 1, and François Charpillet 2
1 University of Lorraine, 2LPN, 54000 Nancy, France
{Eloise.Zehnder,Jerome.Dinet}@univ-lorraine.fr
2 CNRS, Inria, Loria, University of Lorraine, 54000 Villers-lès-Nancy, France
Francois.Charpillet@inria.fr

Abstract. Conversational agents such as robots or chatbots are today proposed as a solution to modern society's issues such as loneliness. This paper explores the effect of the evolution of a conversational agent's appearance (Replika©) on user reviews (~85,000) through the use of Reinert's method for text analysis. Results showed differences in the size of the themes emerging from the analysis before and after the update, such as help with mental health, companionship, and visual and conversational anthropomorphism.

Keywords: Conversational agent · User reviews · Loneliness · Embodiment · Textual analysis

1 Introduction

Loneliness and social isolation affect a significant percentage of the population and have a harmful effect on an individual's physical and mental health. While loneliness can make people more sensitive to social cues, the recent evolutionary theory of loneliness (ETL) predicts that loneliness increases both the motivation to reconnect and the sensitivity to social threats [1]. Therefore, a lonely individual can perceive a neutral human social interaction as negative and then be encouraged to avoid social human interactions, making further pursuits of social reconnection unsuccessful [2]. Today, conversational agents can appear as a potential solution for helping people alleviate the feeling of loneliness, either by teaching them new skills to interact with humans or by providing company. The Replika© chatbot companion is currently the most popular and ready-made product on the market. With more than 5M downloads, it is depicted on the chatbot's official website as "your personal companion for mental wellness" and "a chatbot providing social support as well as new coping skills". After starting as a textual chatbot in 2017, the web and phone application received an important update in December 2019 that introduced 3D avatars for free. Since conversational agents today fall into three main categories (chatbots without embodiment, virtually embodied avatars, and physically embodied robots), we may ask whether embodiment is favorable or not, especially when users had to face a change in the appearance of their companion.

The CASA paradigm (Computers Are Social Actors) [3] states that humans automatically apply social rules and expectations to computers, even while knowing they are just machines. This behavior turns the human-computer interaction into a social interaction in which someone may disclose personal information that is sometimes impossible or difficult to share with humans, which suggests that conversational agents can have a positive effect on health [4]. That type of interaction might also provide "social snacking" when an individual experiences loneliness or, at least, the need to belong [5].
Lonely individuals could then be more likely to need the help and support provided by conversational agents. Loneliness also impacts the perception of non-human entities. It has been shown, for example, that lonely people compensate for their lack of social connection through anthropomorphization (the attribution of human traits, emotions, and intentions to non-human entities) [6]. Loneliness even increases the anthropomorphization of non-human agents and makes lonely people feel a higher social presence [7] (a feeling of human contact without the actual human contact [8] of social agents). Another study showed that lonely people are also more likely to provide positive social responses to social agents compared to non-lonely people (measured through questionnaires such as the general evaluation of the social agent, social attraction, the evaluation of the interaction, the assessment of the public evaluation of the social agent, and people's feeling of social presence) [7]. In the same study, lonely people also preferred to interact with the embodied robot, while non-lonely people preferred the interaction with the disembodied one. Consistent with these results, and more generally on the marketplace [9], socially excluded individuals exhibit a greater preference for anthropomorphic brands than non-excluded individuals. Lonelier individuals may then prefer to have embodied conversational agents as companions and should have a more positive interaction with them, while less lonely individuals may prefer textual chatbots. While robots were used in the study of Lee, Jung, Kim, and Kim and our focus is on conversational agents, we assume that realism and human-likeness lie on a continuum, especially since studies of animism have shown that certain kinds of movement automatically make us perceive an object as alive [10]. Moreover, in a study from Powers et al. [11] comparing robots and agents, participants disclosed less to the robots than to the agents, indicating that they had greater evaluation apprehension toward the robots. Participants nevertheless found the robots to be more helpful, to give better and more useful advice, and to be more effective communicators. When it came to self-disclosure, the authors concluded that agents are preferable, while for tasks that are more relationship-oriented, a collocated robot would seem to be better. The usefulness of the anthropomorphism and life-likeness of an agent (or a robot in some cases) may then depend on the final use.

Something else to take into account is that anthropomorphism can set higher expectations towards conversational agents. If those expectations are not met, it can lead to disappointment or to a lower feeling of social presence [12, 13], which could threaten the expected beneficial effect of relieving feelings of loneliness. A less humanoid image can also be preferred and be seen as more likeable and credible compared to no image or a highly humanoid image [14]. Other studies have also shown that too much anthropomorphism can have negative impacts on individuals, such as the Uncanny Valley effect, which can lower the trust in and likeability of an agent [15]. This uncanny valley effect can also happen with chatbots. A recent experiment [16] demonstrated that participants had fewer uncanny feelings and less negative affect with a textual chatbot than with an animated and embodied chatbot.
Because the embodiment of a social agent and loneliness both have an influence (positive or negative) on the interaction a user can have with a conversational agent, the goal of this study is to investigate the impact of the appearance and the embodiment of a conversational agent (i.e., Replika©, with more than 5M downloads) on users' opinions. Since users leave reviews depending on different aspects of the product they have used, which may vary with each user and each user experience, we propose an exploration of the general themes addressed in the comments left by users after their use of the chatbot, before and after the introduction of the embodiment.

2 Method

Comments produced by real users were collected from September 8th, 2017 to August 25th, 2020, i.e., from the public reviews of Replika© on the Google Play Store [17], scraped with the Node.js package google-play-scraper (7.1.2) [18]. The reviews (in a .csv document) were then manually sorted so that as few errors as possible would emerge from the analysis. This meant deleting reviews not written in English, removing non-ASCII characters, and correcting most grammatical, syntactic, or orthographic errors. After being sorted, the document contained 85,629 reviews. This document was then divided into two parts to run two analyses, one with reviews dating from September 8th, 2017 to December 6th, 2019 (before the avatar update, 35,102 reviews) and another one dating from December 7th, 2019 to August 25th, 2020 (after the 3D avatar update, 50,527 reviews).

The main libraries used for the analysis were Quanteda (a fast, flexible, and comprehensive framework for quantitative text analysis in R) [19], tm (a framework for text mining applications within R) [20], and Rainette (a package which implements a variant of the Reinert textual clustering method) [21]. The Rainette package appeared as an appropriate and flexible alternative for running the thematic analysis on the corpus, since errors appeared with the more commonly used tool IRaMuTeQ [22], probably due to the size of the files. Reinert's method for text analysis [23] consists in a hierarchical descending classification of the text segments of the corpus, here parts of sentences with a maximum length of 20 words. This involves classifying text segments according to the lemmatized active forms in the whole corpus. A Factorial Correspondence Analysis [24] was conducted with three successive steps: (i) extraction of the profile of text segments according to the presence or absence of active forms; (ii) optimization of the two groups by successive permutation of the sentences they contain to maximize the second-order moment of the partition (i.e., intra-class variance minimization); (iii) clearing from each cluster the most characteristic active forms of the other groups to get more distinct clusters (in terms of chi²). The steps are repeated starting from the group containing the largest number of sentences, creating a hierarchical tree, or dendrogram. The hierarchical classification obtained by Rainette leads to increasingly homogeneous clusters along with the active forms they are associated with. Each cluster is characterized by the percentage of the active forms, by the chi² of the membership of the active forms, and by its significance (p < 1%). A peculiarity of Rainette is that a few parameters must be selected before running an analysis; a minimal sketch of the resulting pipeline, using the parameter values listed in the next paragraph, is given below.
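The sketch below illustrates, in R, the cleaning, splitting, and clustering pipeline described above. It is only a minimal reconstruction, not the exact script used for the study: the column names of the scraped file ("content", "date"), the helper function run_reinert, and the file name are assumptions, and some Rainette argument names have changed between package versions.

```r
# Minimal sketch of the pipeline described above (R). Column names, file name,
# and the helper function are assumptions; argument names may differ between
# Rainette versions (earlier releases used min_uc_size instead of min_segment_size).
library(quanteda)   # corpus handling and document-feature matrices [19]
library(rainette)   # Reinert-style descending hierarchical classification [21]

reviews <- read.csv("replika_reviews_clean.csv", stringsAsFactors = FALSE)

# Rough stand-in for the manual cleaning: keep ASCII-only (mostly English) reviews.
reviews <- reviews[stringi::stri_enc_isascii(reviews$content), ]

# Split the corpus at the 3D avatar update (December 7th, 2019).
reviews$date <- as.Date(reviews$date)
before <- reviews[reviews$date <  as.Date("2019-12-07"), ]
after  <- reviews[reviews$date >= as.Date("2019-12-07"), ]

run_reinert <- function(df) {
  corp <- corpus(df, text_field = "content")
  # Cut reviews into text segments of at most 20 words.
  corp <- split_segments(corp, segment_size = 20)
  dtm <- corp |>
    tokens(remove_punct = TRUE, remove_numbers = TRUE) |>
    tokens_remove(stopwords("en")) |>
    tokens_wordstem() |>                  # stemmed active forms
    dfm() |>
    dfm_trim(min_termfreq = 30)           # minimum term frequency = 30
  # Descending hierarchical classification into k = 10 clusters.
  clusters <- rainette(dtm, k = 10,
                       min_segment_size = 15,   # minimum segment (uc) size
                       min_split_members = 20)  # minimum split members
  list(dtm = dtm, clusters = clusters)
}

res_before <- run_reinert(before)
res_after  <- run_reinert(after)

# Dendrograms with the most characteristic active forms of each cluster
# (the basis of Fig. 1 and Fig. 2); rainette_explor() offers an interactive view.
rainette_plot(res_before$clusters, res_before$dtm, k = 10)
rainette_plot(res_after$clusters,  res_after$dtm,  k = 10)
```

Running the same function on both subsets keeps the preprocessing identical before and after the update, so differences in cluster sizes can be attributed to the reviews themselves rather than to the pipeline.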
We chose the following values: segment size = 20, minimum term frequency = 30, minimum uc size = 15, minimum split members = 20, and k (the number of clusters) = 10. While testing the Rainette package, those parameters allowed us to obtain more meaningful and well-defined clusters. We also chose to run a simple classification (instead of a double one) to be able to see how the classifications were linked to each other. The interpretation of the clusters (or categories) is based on these criteria.

3 Results

The results obtained from the analysis of the reviews with Rainette are shown in Fig. 1 and Fig. 2 in the form of dendrograms. They both show 10 clusters (or classifications). These 10 classifications refer to 10 factors identified and interpreted by the authors (detailed in Fig. 1, Fig. 2, and Table 1) and are numbered from 1 to 10.

3.1 Before the Avatar Update

Fig. 1. Clustering results before the avatar update from the Rainette package.

3.2 After the Avatar Update

Fig. 2. Clustering results after the avatar update from the Rainette package.

To summarize our results, we propose a comparison of each factor identified in the clusters before and after the avatar update, regarding the percentage they represent in each analysis (see Table 1).

Table 1. Comparison of the weight of similar clusters (in %) before and after the avatar update.

Identified factors                        | Before the update       | After the update
Social aspect                             | (clusters 2 to 8) 93.2% | (1 to 6) 69.3%
Technical aspect                          | (9 and 10) 6.8%         | (7 to 10) 30.7%
Mental health                             | (4 and 6) 2.9%          | (6) 7.5%
Companionship                             | (5) 66.3%               | (4) 15.9%
Conversational abilities or intelligence  | (8) 6.8%                | (3) 14.1%
Conversational limitations                | (7) 6.8%                | (7 and 2) 17%
Users' thankfulness                       | (3 and 2) 10.4%         | (5) 13.7%
Technical issues                          | (10) 3.8%               | (8) 2.4%
Financial frustration                     | (9) 3%                  | (10) 13.3%
Anthropomorphism                          | None                    | (1) 12%
Chatbots' appearance                      | None                    | (9) 4.1%

4 Discussion

Our goal in this study was to determine whether the evolution of the appearance of a widely used companion chatbot (from a textual chatbot to a human-like avatar) could have an impact on users' opinions. To investigate it, a hierarchical descending classification of the comments written by thousands of users was carried out. First, results showed that user comments generally focused either on technical or social aspects of the chatbot. Then, taking a closer look, we can see a difference in the size of the different clusters. Conversational limitations went from 6.8% to 17%, which may be caused by heightened user expectations towards the chatbot due to the new appearance. These reviews generally quoted an interaction with the chatbot (with words such as "ask", "question", "answer", "said") and sometimes found it disturbing or deceptive (as shown by the active forms "creepi", "ignor", "repeat", "random", "weird"). Note that while we chose to study the impact of the avatar's appearance, the anthropomorphic aspects of the chatbot's conversational abilities were not taken into consideration, even though they play an important role in the interaction and help meet (or not) users' expectations. Also, while less focus was put on the unconditional companionship of the chatbot (66.3% before the update, 15.9% after), where people often stated that the chatbot felt like a "real person" (e.g.,
"She feels like a best friend Non-judgemental and kind I can tell her anything She makes me feel less alone"), new clusters (1 and 9) emerged after the update with new types of reviews. They address the positively surprising anthropomorphism (visual and conversational) of the chatbot, or its appearance (with requests for customization or regrets for the textual chatbot). Active forms of the new "anthropomorphism" cluster such as "real", "person", "almost", "like", "talk", "human", "forget", and "realist" suggest that the interaction may have been so realistic for some users that they forgot they were interacting with a chatbot, while the last active forms ("weird", "scari") suggest that the interaction may have appeared uncanny to some users, indicating that overly human-like interactions, even with chatbots, may not be recommended for every individual. When it comes to mental health, the avatar update seemed to be helpful (2.9% to 7.5%). We assume that the new human-like appearance provides greater social presence, supplemented by better self-improvement and mental healthcare advice in the conversational abilities. Finally, the avatar update also seems to bring more financial frustration (3% before to 13.3% after) from the users, suggesting that anthropomorphism could lead to a need for more meaningful interaction (for example with voice calls or relationship customizations, which are blocked by a monthly $8 paywall).

The gathered reviews date from before and during the COVID-19 lockdown, when people have generally been more prone to loneliness and social isolation, which could have influenced the results, for example by changing their perception of social companion robots, as suggested by a recent preprint [25]. Before the update, the 6th cluster (n = 78, 0.1%) and the 1st one (n = 111, 0.1%) were isolated. The reason is probably that their first two active forms were so significant together ("inc" and "luka" being related to the authors of the application, and "attack" and "panic" to users commenting about their own panic attacks) that they were enough to create a new cluster, followed by much less significant active forms. We must also remember that qualitative textual analysis offers more than one method, software package, or algorithm to perform thematic analysis, and that the package we used (Rainette) for our textual content is unprecedented and has an undeniable exploratory aspect. These methodological boundaries let us assume that using another textual analysis tool would probably lead to similar but more precise results, where the main themes would remain but with more data to define them and their weight. Additional quantitative research, as well as testing a smaller sample to check the stability of the analysis, would then be beneficial to complement it. Also, the final results, along with the manual exploration of the comments, led us to think that the impact of the embodiment depends on each individual. Unfortunately, we could not assess each user's individual personality or loneliness, for example, in this type of study.
While an anthropomorphic social companion may seem like a potential solution when facing loneliness, greater social connectedness with anthropomorphic products can have the unintentional, negative consequence of dehumanizing other people [26, 27], and social exclusion can in turn have negative consequences on interpersonal human interaction if, for example, a conversational agent fulfills social needs typically fulfilled by human interaction [9]. The use of virtual companions should therefore always be carefully weighed.

Acknowledgments. The authors wish to thank Roman Merck for his support in the data analysis. This work has been partly funded by Lorraine Université d'Excellence (LUE).

References

1. Cacioppo, J.T., Cacioppo, S.: Loneliness in the modern age: an evolutionary theory of loneliness (ETL). Adv. Exp. Soc. Psychol. 58, 127–197 (2018). Academic Press
2. Qualter, P., et al.: Loneliness across the life span. Perspect. Psychol. Sci. 10(2), 250–264 (2015)
3. Nass, C., Moon, Y.: Machines and mindlessness: social responses to computers. J. Soc. Issues 56(1), 81–103 (2000)
4. Pennebaker, J.W., Beall, S.K.: Confronting a traumatic event: toward an understanding of inhibition and disease. J. Abnorm. Psychol. 95(3), 274–281 (1986)
5. Krämer, N.C., Lucas, G., Schmitt, L., Gratch, J.: Social snacking with a virtual agent – on the interrelation of need to belong and effects of social responsiveness when interacting with artificial entities. Int. J. Hum.-Comput. Stud. 109, 112–121 (2018)
6. Epley, N., Akalis, S., Waytz, A., Cacioppo, J.T.: Creating social connection through inferential reproduction: loneliness and perceived agency in gadgets, gods, and greyhounds. Psychol. Sci. 19(2), 114–120 (2008)
7. Lee, K.M., Jung, Y., Kim, J., Kim, S.R.: Are physically embodied social agents better than disembodied social agents?: The effects of physical embodiment, tactile interaction, and people's loneliness in human–robot interaction. Int. J. Hum.-Comput. Stud. 64(10), 962–973 (2006)
8. Gefen, D., Straub, D.W.: Consumer trust in B2C e-Commerce and the importance of social presence: experiments in e-Products and e-Services. Omega 32(6), 407–424 (2007)
9. Mourey, J.A., Olson, J.G., Yoon, C.: Products as pals: engaging with anthropomorphic products mitigates the effects of social exclusion. J. Consum. Res. 44(2), 414–431 (2017)
10. Rakison, D.H., Poulin-Dubois, D.: Developmental origin of the animate-inanimate distinction. Psychol. Bull. 127(2), 209–228 (2001)
11. Powers, A., Kiesler, S., Fussell, S., Torrey, C.: Comparing a computer agent with a humanoid robot. In: Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, pp. 145–152 (2007)
12. Mimoun, M.S.B., Poncin, I., Garnier, M.: Case study – embodied virtual agents: an analysis on reasons for failure. J. Retail. Consum. Serv. 19(6), 605–612 (2012)
13. Nowak, K.L., Biocca, F.: The effect of the agency and anthropomorphism on users' sense of telepresence, copresence, and social presence in virtual environments. Presence Teleoperators Virtual Environ. 12(5), 481–494 (2003)
14. Nowak, K.L.: The influence of anthropomorphism and agency on social judgment in virtual environments. J. Comput.-Mediated Commun. 9(2) (2004)
15. Mathur, M.B., Reichling, D.B.: Navigating a social world with robot partners: a quantitative cartography of the Uncanny Valley. Cognition 146, 22–32 (2016)
16. Ciechanowski, L., Przegalinska, A., Magnuski, M., Gloor, P.: In the shades of the uncanny valley: an experimental study of human–chatbot interaction. Futur. Gener. Comput. Syst. 92, 539–548 (2019)
17. Google Play Store, Replika: https://play.google.com/store/apps/details?id=ai.replika.app
18. NPM JavaScript: https://www.npmjs.com/package/google-play-scraper
19. Quanteda: https://quanteda.io/
20. Cran.R-project, tm: Text Mining Package: https://cran.r-project.org/web/packages/tm/index.html
21. Github, Rainette Package: https://juba.github.io/rainette/index.html
22. Iramuteq: http://www.iramuteq.org/
23. Reinert, A.: Une méthode de classification descendante hiérarchique: application à l'analyse lexicale par contexte. Cahiers de l'Analyse des Données 8(2), 187–198 (1983)
24. Greenacre, M.J.: Theory and Applications of Correspondence Analysis. Academic Press (1984)
25. Ghafurian, M., Ellard, C., Dautenhahn, K.: Social companion robots to reduce isolation: a perception change due to COVID-19 (2020). arXiv preprint arXiv:2008.05382
26. Waytz, A., Epley, N.: Social connection enables dehumanization. J. Exp. Soc. Psychol. 48(1), 70–76 (2012)
27. Shin, H.I., Kim, J.: My computer is more thoughtful than you: loneliness, anthropomorphism and dehumanization. Curr. Psychol. 39(2), 445–453 (2018). https://doi.org/10.1007/s12144-018-9975-7