Modern Smart Cities and Open Research Challenges and Issues of Explainable Artificial Intelligence

Siva Raja Sindiramutty, Chong Eng Tan, Wee Jing Tee, Sei Ping Lau, Sumathi Balakrishnan, Sukhminder Kaur, Husin Jazri, Muhammad Farhan Aslam
Copyright: © 2024 | Pages: 36
DOI: 10.4018/978-1-6684-6361-1.ch015

Abstract

This chapter reviews modern smart cities and the open research challenges and issues of explainable artificial intelligence (XAI). With the advent of XAI, people's lives have improved and the idea of the smart city has emerged. Despite the anticipated advantages, the adoption of AI differs between smart cities, in part because of challenges and issues that can hinder it. This chapter explores the importance of XAI in smart cities, the current state of the art across various XAI applications in smart cities, the open challenges and research issues of XAI in smart cities, case studies and examples, and the evaluation and analysis of XAI models in smart city applications. The open challenges covered include developing novel XAI with ontologies, assurance in ML algorithms, and the scalability of XAI models in smart cities.

Introduction

XAI is a developing field that addresses multiple methods for overcoming the black-box character of models generated by machine learning and for creating interpretations that are understandable to humans. This “black box” represents designs that are far too opaque to read, such as popular deep learning models. The field of machine learning also includes transparent models, such as linear regression and decision trees, that are not especially difficult to interpret. These designs are simpler to grasp since they can reveal information about the connection between the input features and the intended result; complex models are the exception to this rule (Arikan, n.d.; M. Shafiq et al., 2021). Under the principles of XAI, an AI system, its expected consequences, and its flaws are all stated. XAI helps define fairness, model accuracy, and openness, and it supports AI-assisted decision-making. A company must first build confidence and trust before implementing AI technology, and XAI helps a firm select an intelligent strategy for AI construction. As AI advances, people find it challenging to comprehend and follow a system's actions: the entire calculation process is reduced to a “black box” that is difficult to understand. These black-box algorithms are created from data, and no one can fully understand or articulate what is happening inside them, let alone how the AI system arrived at a specific conclusion, not even the data scientists or software engineers who created the system (Explainable AI (XAI), n.d.). There are various advantages to knowing how an AI-enabled technology delivered a specific outcome. XAI can help designers ensure that a system is working as planned, may be necessary to meet regulatory requirements, and may be essential in enabling those affected by a judgement to challenge or change the outcome (Explainable AI | Royal Society, n.d.-b).
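To make the contrast concrete, the following minimal sketch (illustrative only; it assumes Python with scikit-learn, and the smart-city traffic features and data are hypothetical, not from this chapter) shows how an intrinsically interpretable model such as a decision tree exposes the connection between input features and its prediction in a way a deep neural network does not:

    # Minimal sketch: an intrinsically interpretable model whose internals
    # can be inspected directly, unlike a black-box deep network.
    # Assumes scikit-learn; feature names and data are hypothetical.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Hypothetical traffic data: [vehicle_count, avg_speed_kmh, hour_of_day]
    X = np.array([
        [120, 30, 8], [300, 12, 9], [80, 55, 14],
        [250, 15, 17], [60, 60, 22], [310, 10, 18],
    ])
    y = np.array([0, 1, 0, 1, 0, 1])  # 1 = congestion, 0 = free flow

    model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

    # The learned rules are human-readable: each split names a feature
    # and a threshold, so the feature-to-outcome connection is explicit.
    print(export_text(model, feature_names=["vehicle_count", "avg_speed_kmh", "hour_of_day"]))
    print("Feature importances:", model.feature_importances_)

Printed as a small set of if-then rules, such a model can be read and challenged by a non-specialist, which is exactly the property that deep black-box models lack.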

Figure 1. XAI components (Source: Pandian, 2022)

Figure 1 shows the components and models involved in XAI decision-making. An enterprise must fully comprehend AI decision-making mechanisms, with model supervision and accountability, rather than simply relying on them. XAI helps people better comprehend machine learning models, deep learning, and neural networks. ML algorithms are sometimes thought of as opaque “black boxes” that are impossible to comprehend, and the neural networks used in deep learning are among the most difficult components to interpret. Bias, often based on gender, ethnicity, location, or age, has long been an issue in the creation of AI models. Additionally, the effectiveness of an AI algorithm may suffer when the training data differ from the data seen in production. As a result, a company needs to monitor its models regularly to increase AI explainability and to gauge the effect of deploying such methods on the company's bottom line. XAI also supports end-user trust, model auditability, and the productive use of AI, and it reduces the compliance, legal, security, and reputational risks of production AI (Thorn, 2021; Kok et al., 2020). XAI is one of the key elements of responsible AI, a paradigm for implementing AI methods widely in real-world enterprises under a fairness, accountability, and explainability model. To support the responsible deployment of AI, organisations must embed ethical principles in AI programmes and procedures by developing AI systems based on trust and transparency (Arrieta et al., 2019; Lim et al., 2019).
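For models that are not intrinsically transparent, post-hoc tools can support the kind of model monitoring described above. The sketch below (again illustrative; it assumes Python with scikit-learn, and the model, synthetic data, and sensor-feature names are hypothetical) uses permutation importance, a model-agnostic technique that measures how much shuffling each feature degrades performance, as one simple way an enterprise might audit what an opaque model actually relies on:

    # Minimal sketch of model-agnostic, post-hoc explanation via
    # permutation importance. Assumes scikit-learn; data are synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.RandomState(0)
    X = rng.normal(size=(200, 3))  # three hypothetical smart-city sensor features
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # label driven by features 0 and 1

    model = RandomForestClassifier(random_state=0).fit(X, y)

    # Shuffle each feature in turn and measure the drop in accuracy:
    # a large drop means the opaque model depends heavily on that feature.
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for name, score in zip(["energy_use", "traffic_flow", "air_quality"],
                           result.importances_mean):
        print(f"{name}: {score:.3f}")

Regular audits of this kind let a company detect when a deployed model has begun to rely on biased or drifting features, supporting the model supervision and accountability that Figure 1 illustrates.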

Figure 2. Smart city components (Source: Godse, 2022)
