The Relationship Between Humanity Versus Artificial Intelligence Trust and Personality and Locus of Control

Muskan Singh, Sachin Ghai, Rajat Sharma
Copyright: © 2024 | Pages: 14
DOI: 10.4018/979-8-3693-2849-1.ch016

Abstract

As artificial intelligence (AI) becomes increasingly prevalent in various applications, understanding the factors influencing trust in AI-based systems is crucial. This study explores the impact of personality traits, specifically the Big Five Inventory (BFI) traits and locus of control (LOC), on trust behavior in the context of a decision-making card game involving AI-based algorithms. The findings reveal that LOC plays a pivotal role in influencing trust concordance and ratings, independent of BFI traits. Openness, a BFI trait, influenced reaction time specifically for suggestions from human sources. These results underscore the significance of LOC in shaping trust dynamics, extending beyond traditional personality dimensions. The study highlights how individuals, based on their LOC, decide whom to trust, be it human or AI. This chapter contributes to a better understanding of the nuanced factors influencing trust in AI-based algorithms, emphasizing the importance of considering personality characteristics in designing and implementing such systems.
Chapter Preview

1. Introduction

Interpersonal interactions require the establishment and maintenance of trust, a diverse and intricate concept. When social conventions and cognitive abilities are insufficient to help people make well-informed, logical decisions, people turn to trust as a behavioural guide (Abbass, 2019). Trust in technology is precisely the conviction that a system will perform as anticipated. Specialised artificial intelligence (AI) algorithms are a growing and frequently seamless part of our daily lives (Abbass et al., 2016). Although some individuals have faith in AI algorithms to offer guidance on routine life decisions, others do not trust these kinds of technologies.

To mitigate the intricate nature of social structures and the surrounding environment, the author provides a generic model of trust that applies to both humans and robots (Alarcon et al., 2018). According to the author, all AI-based services could be streamlined by including an interface that assesses human trustworthiness and gives users information at a pace and in a format suited to their understanding and actions. This remark appears to suggest that personality qualities are the foundation of trust, as they influence people's trust in both human and AI-based services (Ben-Ner & Halldorsson, 2007).
