Bane and Boon of Hallucinations in the Context of Generative AI

S. M. Nazmuz Sakib (School of Business and Trade, International MBA Institute, Dhaka International University, Bangladesh)
Copyright: © 2024 | Pages: 299
EISBN13: 9798369373088 | DOI: 10.4018/979-8-3693-2643-5.ch016
This teaching case was retracted

Abstract

The phenomenon of hallucination occurs when generative artificial intelligence systems, such as large language models (LLMs) like ChatGPT, produce outputs that are illogical, factually incorrect, or otherwise untethered from reality. In generative AI, hallucinations can unlock creative potential, yet they also pose challenges for producing accurate and trustworthy outputs; this chapter examines both sides. AI hallucinations arise from a variety of factors. If the training data is insufficient, incomplete, or biased, the model may respond inaccurately to novel situations or edge cases. Moreover, generative AI typically produces content in response to prompts regardless of the model's "understanding" of the subject or the quality of its output.
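
To make that last point concrete, the following minimal sketch (not part of the chapter; it assumes the Hugging Face transformers library and the publicly available GPT-2 checkpoint) shows how a language model fluently completes a factual prompt whether or not it "knows" the answer; sampled completions can contradict each other or the facts.

    # Minimal illustration (assumption: the `transformers` package is installed).
    # A small language model completes a factual prompt several times; the
    # completions are fluent but unverified, so some may be hallucinated.
    from transformers import pipeline, set_seed

    set_seed(42)  # make the sampling reproducible
    generator = pipeline("text-generation", model="gpt2")

    prompt = "The capital of Australia is"
    samples = generator(
        prompt,
        max_new_tokens=15,       # keep completions short
        do_sample=True,          # sample rather than greedy-decode
        temperature=1.0,         # higher temperature -> more varied, riskier outputs
        num_return_sequences=3,  # draw several completions to expose inconsistency
    )

    for i, sample in enumerate(samples, start=1):
        print(f"Sample {i}: {sample['generated_text']}")

Because nothing in the decoding loop checks a completion against an external source of truth, a fluent but wrong answer is as easy for the model to emit as a correct one, which is exactly the gap between fluency and factuality described above.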