Introduction
Artificial intelligence (AI) is increasingly used in interactions between customers and chatbots in the service sector (e.g., customer service), as AI-based chatbots can provide 24/7 customer support (Bhardwaj, 2022). Regarding trust in AI-based chatbots, Przegalinska et al. (2019) identify three key aspects that are particularly relevant to the interaction with customers (e.g., in customer service): competence/expertise, anthropomorphization, and the security and protection of customers' personal data. In the context of anthropomorphization, users' (customers') perceived social presence can be shaped by an anthropomorphic appearance (e.g., of an AI-based chatbot) (Morana et al., 2020). Moreover, Souali et al. (2019) suggest that AI-based chatbots may compromise customer security as well as personal integrity, which is why security and traceability aspects of AI-based chatbot use should be given the highest priority from the customer's perspective.
Accordingly, perceived security (and traceability) significantly influences customer trust in AI-based chatbots in customer service (Van der Goot & Pilgrim, 2019), as well as the intention to interact with them. According to Toader et al. (2019), trust is essential, especially in internet communication (e.g., in customer service), for customers to accept the information and suggestions collected and provided by an AI-based chatbot and to use the related services. Benlian and Hess (2011) state that companies endeavor to offer customers a trustworthy interaction option through information technology (IT) functions (such as AI-based chatbots) based on design elements as signals (stimuli), especially on their own websites (e.g., for customer service). For this purpose, customer perceptions must be considered to determine their impact on the willingness to interact (Benlian & Hess, 2011) with an AI-based chatbot with or without trust-supporting design elements as signals (stimuli) in customer service. This motivates the following research question (RQ):
RQ: To what extent do customers' perceived security and traceability, perceived social presence, and trust, with respect to their intention to interact, differ between AI-based chatbots without trust-supporting design elements as signals (stimuli) and AI-based chatbots with trust-supporting design elements as signals (stimuli) in customer service?
Against the background of answering the RQ, the research model approach of Adelmeyer et al. (2018, 2019) and Walter et al. (2014) was considered appropriate because it explains the most important factors underlying the intention to use.
Literature Review and Related Work
Trust Signals in AI-Based Chatbots for Customer Service
Establishing a trust relationship between customers and a company in the interaction with AI-based chatbots in customer service can be stimulated or supported by appropriate (design elements as) signals (stimuli) (Chen et al., 2010). Trust-building signals in the form of transparent (and traceable) interaction, as well as helpful informational cues (e.g., trust seals), are factors companies should address to evoke a sense of personal connection and familiarity in the customer (Einwiller et al., 2000). Moreover, Feine et al. (2019) state that computer systems (AI-based chatbots) can elicit responses from human interaction partners (customers) by using (design elements as) social signals (stimuli). Although AI-based chatbots are now widely applied in practice, for example in customer service, many customers remain skeptical about interacting with these systems due to factors (Sonntag et al., 2022) related to security and traceability, social presence, and trust.