An In-Depth Qualitative Interview: The Impact of Artificial Intelligence (AI) on Consent and Transparency

DOI: 10.4018/979-8-3693-3226-9.ch014

Abstract

AI is adversely affecting consent and transparency. Although AI can potentially enhance transparency in decision-making through advanced technology, it is also creating new concerns. This chapter focuses on the impact of AI systems on individuals' ability to provide informed consent for the use of their data, and on the relationship between transparency in AI decision-making processes and issues of accountability and trust. The European Union General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) are discussed for their treatment of consent and transparency within their broader privacy protection frameworks. A qualitative methodology with an in-depth interview design, conducted over a communication and collaboration platform, is applied to explain the connection between AI, consent, and transparency. The research results offer avenues for understanding the challenges of informed consent and the legal and ethical considerations surrounding consent and transparency. Beneficiaries of this research include practitioners, academics, and learners in AI, cybersecurity, and criminology/criminal justice.
Chapter Preview

Introduction

AI-driven innovations and critical technologies of Industry 4.0 revolutionize product and service accessibility (Gruetzemacher & Whittlestone, 2022; Johnson et al., 2022). In the era of rapidly advancing AI, issues surrounding consent and transparency have emerged as critical focal points in discussions encompassing AI, privacy, and cybersecurity (Rodrigues, 2020). The deployment of AI systems often entails collecting and utilizing vast amounts of personal data, which can pose challenges in obtaining informed consent from individuals regarding how their data is being employed (Brundage, 2018). Furthermore, a lack of transparency in AI decision-making processes can undermine trust and raise accountability concerns, as individuals may find themselves in the dark about why certain decisions are made on their behalf (Bodo et al., 2018; Brundage, 2018; Coglianese & Lehr, 2019). As per the findings of Akgül et al. (2023), generalized trust, often referred to as spontaneous sociability, represents the inherent trust individuals place in their fellow members of society at large. This trust forms a foundational element of human relationships and social interactions, playing a pivotal role in fostering positive engagements, influencing personal and professional relationships, and shaping workplace culture. Balancing the advantages of AI's capabilities against the distribution of influence in society is crucial, ensuring that these advances do not undermine personal liberties, individual privacy, and the fair allocation of influence in society (Kerry, 2023). Understanding and addressing these issues is paramount to the responsible development and deployment of AI technologies (Burton, 2022; Demirel, 2022; Simon, 2019). AI is not a solitary technology; it is a convergence of multiple technologies, statistical models, algorithms, and methodologies (Lu & Burton, 2017).

This chapter is one of a series that embarks on a comprehensive exploration of the intricate dynamics between AI, consent, and transparency. The researcher employs a qualitative, in-depth interview methodology to unravel the complexities inherent in this triad, engaging with five distinguished experts in AI, Cybersecurity, and Criminology/Criminal Justice, who have chosen to remain anonymous to ensure candid insights. These in-depth interviews are the cornerstone of the research, allowing the researcher to extract invaluable real-world perspectives and experiences from experts entrenched in these domains (Rutledge & Hogg, 2020; Osborne & Grant-Smith, 2021).

The importance of this research endeavor is underscored by the pressing need to ensure that AI systems respect individuals' privacy, facilitate informed consent, and enhance transparency in their decision-making processes (Ahmed, 2021; Kerry, 2020; Coglianese & Lehr, 2019). Privacy regulations such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) have laid the groundwork for addressing these concerns.

Key Terms in this Chapter

Fairness and Bias Mitigation: Fairness and bias mitigation in the context of AI and machine learning involve concerted efforts to ensure that algorithms and models do not engage in discrimination against individuals or groups based on attributes such as race, gender, age, or other legally protected characteristics. These efforts encompass a range of techniques and strategies designed to diminish and ultimately eradicate biases present in both data and algorithms, promoting equitable outcomes in AI systems.

Consent: Consent refers to the voluntary and informed agreement provided by an individual or data subject for the collection, processing, and use of their personal data. It involves the individual granting explicit permission after being adequately informed about the purposes and scope of data processing, and they have the right to withdraw consent at any time.

Qualitative Research: Qualitative research is a research methodology focused on the accumulation of non-numerical data, including text, images, or narratives, to explore and comprehend social phenomena, human behaviors, and motivations. Its purpose is to provide insights into the context and underlying meaning of various phenomena, often employing techniques such as interviews, observations, or content analysis.

Transparency: Transparency in data handling and AI systems pertains to the practice of making the processes, operations, and decision-making criteria behind data collection, processing, and algorithmic actions visible and comprehensible to individuals and stakeholders. It ensures that individuals can understand and verify how their data is being used, fostering trust, accountability, and the ability to challenge potentially biased or unfair decisions.

In-Depth Interviewing: In-depth interviewing represents a qualitative research approach characterized by conducting comprehensive, open-ended interviews with individuals or subjects. This method is employed to gain profound insights into their perspectives, experiences, and viewpoints regarding a specific subject matter. In-depth interviewing finds application in disciplines such as the social sciences and market research.

General Data Protection Regulation (EU GDPR): The General Data Protection Regulation is a European Union regulation, in effect since May 2018, governing the collection and processing of the personal data of individuals in the EU. It requires a lawful basis, such as freely given and informed consent, for processing personal data, and grants data subjects specific rights, including the right to access their data, request rectification or erasure, and withdraw consent at any time.

AI Algorithms: AI (Artificial Intelligence) algorithms consist of predefined instructions and statistical models that empower computers and machines to execute tasks or render decisions autonomously, without the need for explicit programming. These algorithms find utility across a spectrum of AI applications, encompassing machine learning and deep learning, for the purpose of data processing and the generation of astute responses or forecasts.

California Consumer Privacy Act (CCPA): The California Consumer Privacy Act is state-level privacy legislation enacted in California, USA, with the primary objective of safeguarding the personal information of California residents. This law affords consumers specific rights concerning their personal data, including the right to be informed about data collection, request data deletion, and opt out of data selling practices.
