Determinants Affecting Consumer Trust in Communication With AI Chatbots: The Moderating Effect of Privacy Concerns

Jinjie Li, Lianren Wu, Jiayin Qi, Yuxin Zhang, Zhiyan Wu, Shuaibo Hu
Copyright: © 2023 | Pages: 24
DOI: 10.4018/JOEUC.328089

Abstract

This paper summarizes the factors that influence consumers' trust in AI chatbots and groups them into chatbot-related factors (expertise, anthropomorphism, responsiveness, and ease of use), company-related factors (perceived risk, brand trust, and human support), and consumer-related factors (privacy concerns). The research explores how trust between humans and AI chatbots forms and how consumers' trust in AI chatbots can be promoted. The results show that the chatbot-related factors of expertise, responsiveness, and anthropomorphism positively affect consumers' trust in chatbots. Among the company-related factors, brand trust positively affects consumers' trust in chatbots, while perceived risk negatively affects it. Privacy concerns moderate the effects of the company-related factors. This study deepens the understanding of trust in human-chatbot communication, constructs a basic model of human-AI chatbot trust, and provides insights for e-commerce enterprises seeking to improve chatbots and enhance consumer trust.
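For readers unfamiliar with moderation analysis, one illustrative way to express such a moderating effect is as an interaction term in a trust equation. The specification below is a generic sketch rather than the model actually estimated in the paper (which is not shown in this preview); the variable names are assumptions chosen only to mirror the constructs named in the abstract.

\[
\mathit{Trust} = \beta_0 + \beta_1\,\mathit{BrandTrust} + \beta_2\,\mathit{PrivacyConcerns} + \beta_3\,(\mathit{BrandTrust} \times \mathit{PrivacyConcerns}) + \varepsilon
\]

Under this reading, a significant \(\beta_3\) would indicate that how strongly brand trust transfers to trust in the chatbot depends on the consumer's level of privacy concerns.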

Introduction

Chatbots, one of the most widely used artificial intelligence technologies, have become an integral part of e-commerce user conversations (Balakrishnan & Dwivedi, 2021b). Numerous companies have built chatbots and use them as their preferred channel of interaction and communication with consumers. For example, Apple's “Siri”, Baidu's “Xiaodu”, Xiaomi's “Xiao Ai”, Alibaba's “My Honey”, and Telegram's embedded chatbots are used in many fields for a wide range of purposes. By form of presence, AI chatbots can be divided into virtual agents and physical entities, and virtual chatbots can be further divided into task-oriented and non-task-oriented chatbots. Chatbot input is currently natural language (text, voice, or both) and may in the future include facial expressions, body movements, and other signals (Figure 1). AI chatbots respond like real people, producing conversation or executing commands (Liew & Tan, 2021; Mudofi & Yuspin, 2022).

Figure 1. Classification of AI chatbots

However, do consumers like being served by AI chatbots? The evidence is contradictory (Choi & Drumwright, 2021; Mikalef et al., 2022). On the one hand, many consumers believe that AI simplifies operations and improves service efficiency (Kaplan & Haenlein, 2019); on the other hand, many consumers resent being served by AI chatbots, believing that humans can better understand their needs (Arm Treasure Data, 2019). Why do consumers sometimes accept chatbot services and sometimes resent them? Is chatbot technology not “smart” enough, or do consumers hold a bias against chatbots? This is an increasingly important theoretical and practical question that current service management needs to answer.

It is important to acknowledge that current AI chatbot technology is far from mature, which is a major reason why consumers are reluctant to encounter chatbots in most services (Wei et al., 2022). However, several studies have shown that even when chatbots perform as well as humans, consumers may still dislike their services. For example, Luo et al. (2019) found that when the identity of an AI chatbot is revealed during telephone sales, the success rate of the sale is significantly lower than that of human customer service. Castelo et al. (2019) argue that consumers are reluctant to adopt AI chatbots not because the technology is immature, but because of psychological issues. Consumers who refuse to use chatbots prefer to engage with real people: they fear that chatbots will make mistakes, have limited functionality and answers, cannot “chat” in a friendly manner, lack empathy, and may allow private data to be compromised and used illegally (Van et al., 2019).

So what factors drive consumer attitudes toward AI services? This is not only a question of AI technology but also one of consumer psychology and social context.

The authors argue that consumer trust in AI chatbots is one of the most important factors driving consumer acceptance of AI chatbot services in areas such as e-commerce and customer service. This paper explores the essential reasons why consumers trust AI chatbots and the key factors for understanding that trust. Although these issues have been discussed in the existing literature, they are more often studied from a single dimension (Go & Sundar, 2019; Chen JS et al., 2021). However, consumers' trust in chatbots results from multiple factors and is a complex, multidimensional problem; a reliable study should integrate all dimensions for systematic analysis.
