Navigating the Legal Landscape of AI-Induced Property Damage: A Critical Examination of Existing Regulations and the Quest for Clarity

Akash Bag, Astha Chaturvedi, Sneha, Ruchi Tiwari
Copyright: © 2024 |Pages: 26
DOI: 10.4018/979-8-3693-1565-1.ch011

Abstract

This chapter dissects the proposal for an AI Liability Directive by the European Parliament and the Council. The Directive aims to adapt civil liability rules for artificial intelligence (AI) systems. Two main types of non-contractual liability are scrutinized: fault-based and strict liability. The core of the chapter revolves around the proposed AI Liability Directive. It dissects key provisions and highlights conflicting perspectives from scholars and associations. It also looks at the advantages and disadvantages of these rules and concludes by summarizing its findings and discussing how they might impact future policies related to AI responsibility.

Introduction

Artificial intelligence (AI) is appearing in ever more aspects of daily life, from self-driving cars to predictive analytics. AI has the potential to revolutionize our understanding of the world, yielding benefits such as greater efficiency in factories through automation and robot-assisted hospital treatments (Cataleta, 2020). However, the broad use of AI also has unintended repercussions, such as data breaches, biased decision-making algorithms, and privacy violations. This raises the question of whether regulation can keep pace with rapid technological advancement, since ambiguous rules breed uncertainty and stifle AI’s tremendous benefits and creative potential (Franke, 2019). Liability is a key concern here: it is the legal principle through which harms are handled efficiently, and it becomes increasingly important as the hazards connected with AI systems grow, to ensure that those harmed receive compensation. Yet assigning blame for AI systems is difficult because many moving parts and stakeholders are involved, especially now that such systems can learn and make decisions without human oversight (Karnow, 1996).

Because of this lack of transparency, and because the general public has little insight into how AI systems operate, it is challenging to comprehend how a specific result was reached. As a consequence, it becomes difficult to decide who should be held accountable when AI systems make judgments on their own. Businesses and organizations may exploit this complexity to absolve themselves of liability for harm caused by AI (Karnow, 1996). In theory, victims are entitled to compensation under current tort law for harm brought about by AI. But because AI systems differ from conventional products, it can be very difficult, if not impossible, to demonstrate fault and establish a causal relationship. To address this problem, the European Union (EU) has proposed the AI Liability Directive. The Directive creates a presumption of causality and introduces measures giving victims easier access to evidence when dealing with high-risk AI systems. The aim is to ensure that those who suffer harm due to AI receive the same protection as those harmed by non-AI technology.

The chapter’s primary goal is to examine the difficulties and consequences of enforcing the proposed AI Liability Directive within the European legal system and the European Economic Area (EEA), and to evaluate how well the Directive fits that legal framework. Part 2 covers the legal foundation for artificial intelligence, including legislative building blocks such as the General Data Protection Regulation, the proposed Artificial Intelligence Act, and the Product Liability Directive. Part 3 examines the subject of liability, including strict liability rules and fault-based liability, and addresses the difficulties and drawbacks of applying conventional liability law to AI systems, considering their complexity, multi-actor involvement, growing autonomy, and the ethical concerns they raise. The chapter also discusses the potential advantages of AI liability rules, including how they could foster innovation, increase public confidence in AI systems, and help guarantee that AI operates ethically.

Part 4 takes up the proposed AI Liability Directive itself, thoroughly examining its main provisions alongside a range of viewpoints from scholars and associations. It addresses concerns regarding the Directive's efficacy, its possible influence on innovation, and the need for greater clarity, and it also examines the proposal's potential drawbacks. Part 6 offers some final thoughts, summarizing the main findings and discussing how they may shape AI liability law in the future.

Key Terms in this Chapter

General Data Protection Regulation (GDPR): EU law governing personal data protection, crucial for AI systems using personal data.

AI Liability Directive: Proposed EU rules for assigning responsibility and compensating those harmed by AI systems.

White Paper on AI: EU document defining AI, outlining benefits, and providing policy recommendations for its safe development.

Artificial Intelligence Act: Proposed EU legislation introducing “strict liability” for high-risk AI systems, making developers and producers liable for harm.

Resolution on Civil Liability and AI Liability Rules: European Parliament's proposal to establish clear AI-related liability rules for those involved in AI systems.

Product Liability Directive: EU regulation holding manufacturers accountable for defective products, including AI-related ones.

Digital Services Act (DSA) and E-commerce Directive: Regulations addressing online content, including illegal material and disinformation.
