A Comprehensive Review of Trustworthy, Ethical, and Explainable Computer Vision Advancements in Online Social Media

DOI: 10.4018/978-1-6684-8127-1.ch001

Abstract

Responsible, ethical, and trustworthy decision-making powered by the new generation of artificial intelligence (AI) and deep learning (DL) has recently emerged as one of the key societal challenges. This chapter provides a comprehensive review of state-of-the-art methods that have recently emerged in the domain of trustworthiness, fairness, and authenticity of online social media. Furthermore, it discusses open problems, provides examples of other application domains in the realm of computer vision and intelligent computing, recommends bias mitigation strategies, and offers insights into future developments in this key research domain.

1. Introduction

Responsible, ethical, and trustworthy decision-making powered by the new generation of artificial intelligence (AI) and deep learning (DL) has recently emerged as a key societal challenge. Politicians, government officials, industry experts, and end users alike agree that for AI and DL to be used in commercial and public domains, safety mechanisms must be built to protect individuals (“Directive on Automated Decision-Making,” 2019). Recent theoretical, technological, and computational developments have enabled new-generation computing systems to process incredibly large amounts of visual information in a fraction of a second and to learn advanced data patterns in a human-like manner (Zhu et al., 2023). However, while advancements in information processing, data collection, and information visualization have grown exponentially, research on the explainability and trustworthiness of visual systems has remained limited. In recent years, the opaque nature of highly publicized commercial products, such as OpenAI’s ChatGPT and Google’s Gemini chatbots, has highlighted the need to understand the inner workings of such complicated systems. As a result, more than ever, people are turning to societal and regulatory bodies to resolve difficult problems surrounding the ethics, trust, privacy, security, legal, and policy issues of these emerging AI technologies.

In online social media applications, the increasing influence of computer vision techniques underscores the critical importance of explainability and trust. Because these algorithms control the curation and dissemination of content that increasingly shapes public opinion, transparency is paramount. Understanding the inner workings of computer vision systems not only protects individual privacy and improves user experiences but can also help address potential training data biases and verify the authenticity of data.

Furthermore, explainable AI systems are inherently more trustworthy: users can better understand and have confidence in the decisions made by the AI, leading to increased acceptance and adoption of AI technologies. Establishing trust in these AI-driven platforms therefore requires a commitment to explainability, ensuring that users can understand, and hold accountable, the algorithms shaping their online interactions. Striking the balance between innovation and ethical transparency is thus key to creating a digital environment where computer vision can be used to the fullest extent without compromising trust in these systems. This chapter makes the following contributions:

1. Presenting a comprehensive overview of the existing research surrounding trustworthiness, bias, fairness, and explainability in online social media.

2. Exploring diverse application domains within computer vision and intelligent computing in online social media, shedding light on the intricate and multi-dimensional aspects of trust-related issues.

3. Introducing a unified framework for trustworthy computer vision applications, which underscores the significance of ethical data practices, robust data processing, bias mitigation, explainability, accountability, and user-centric design.

4. Proposing a conceptual framework to determine trust factors for computing weighted trust scores that are leveraged to ensure the explainability of the model; this approach applies to a trust-aware recommender system in social media (a minimal code sketch illustrating the idea follows this list).

5. Identifying critical research gaps in the emerging field of explainable AI (XAI) and trustworthy decision-making within computer vision techniques across diverse domains, and outlining open problems that pave the way for future investigations and advancements.
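To make contribution 4 concrete, the following is a minimal Python sketch of how weighted trust scores might be computed, and explained, in a trust-aware recommender. The factor names, weights, and function names are illustrative assumptions for this sketch, not the chapter's actual implementation.

```python
# Minimal sketch of a weighted trust score for a trust-aware
# recommender system. Factor names and weights are illustrative
# assumptions, not the chapter's actual method.
from dataclasses import dataclass


@dataclass
class TrustFactors:
    """Per-item trust signals, each normalized to [0, 1]."""
    source_credibility: float    # reputation of the content source
    content_authenticity: float  # e.g., output of a manipulation detector
    user_feedback: float         # aggregated user reports/ratings
    provenance: float            # completeness of metadata/history

# Hypothetical weights; in practice these would be learned or tuned.
WEIGHTS = {
    "source_credibility": 0.35,
    "content_authenticity": 0.30,
    "user_feedback": 0.20,
    "provenance": 0.15,
}

def trust_score(f: TrustFactors) -> float:
    """Weighted sum of trust factors; higher means more trustworthy."""
    return sum(w * getattr(f, name) for name, w in WEIGHTS.items())

def explain(f: TrustFactors) -> dict:
    """Per-factor contributions, supporting explainability of the score."""
    return {name: w * getattr(f, name) for name, w in WEIGHTS.items()}

if __name__ == "__main__":
    item = TrustFactors(0.9, 0.7, 0.8, 0.5)
    print(f"score = {trust_score(item):.3f}")  # 0.760 for this example
    print(explain(item))
```

Exposing the per-factor contributions, rather than only the aggregate score, is what links the trust computation to explainability: a recommender can rank items by the score while surfacing each factor's contribution to justify a recommendation to the user.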

The remainder of this chapter is organized as follows. First, the Literature Review section presents a comprehensive review of the existing research on the trustworthiness and explainability of computer vision algorithms in online social media. Next, we introduce a unified framework tailored for trustworthy computer vision applications, followed by a new trust-aware computing technique for online social media. The expansive application of computer vision across diverse domains, along with a discussion of open problems and future research directions, is presented in the Applications and Open Problems section. Finally, the insights and findings are synthesized to conclude this book chapter.
