Challenges and Limitations of Few-Shot and Zero-Shot Learning


V. Dankan Gowda, Sajja Suneel, P. Ramesh Naidu, S. V. Ramanan, Sampathirao Suneetha
DOI: 10.4018/979-8-3693-1822-5.ch007

Abstract

This chapter explores the complex landscape of few-shot and zero-shot learning, two paradigms essential to the development of AI and machine learning. Few-shot learning, which learns from minimal data, and zero-shot learning, which makes inferences about classes never encountered during training, represent significant advances toward more efficient and adaptive AI systems. Nevertheless, both approaches carry substantial limitations and challenges. The chapter delves into the theoretical foundations of each technique, explaining how they work and the distinct problems they address. It examines the computational restrictions, overfitting, and limits on model generalizability intrinsic to few-shot learning, as well as the semantic gap, domain adaptation problems, and model bias that affect zero-shot learning. By comparing and contrasting the two paradigms, readers can better understand their potential use in different real-world contexts.
Chapter Preview

1. Introduction

In the fast-developing fields of machine learning and artificial intelligence (AI), coping with scarce or missing data has become a central problem, and few-shot learning and zero-shot learning have emerged as two of the most active research topics. Few-shot learning aims to train a model using as little labelled data as possible. Traditional machine learning models require large amounts of labelled data to achieve good accuracy, and collecting such data can be expensive and time consuming. Few-shot learning instead seeks efficiency and adaptability by enabling AI models to make reliable predictions from very few examples (J. Chen, 2023). This is achieved through contemporary techniques such as meta-learning, embedding learning, and transfer learning, which help the model generalize from sparse datasets to new data. Zero-shot learning pushes the frontier of learning from limited data even further: the model must make predictions on classes it never saw during training. During training, the model learns from one set of classes and then transfers that knowledge to entirely different, unseen classes. To draw inferences, it typically relies on auxiliary information such as class descriptions or attributes, requiring no direct experience with the new classes themselves.
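The attribute-based inference described above can be illustrated with a minimal sketch. The class names, hand-picked attribute vectors, and the `zero_shot_classify` helper below are all hypothetical toy examples, not drawn from the chapter: a predictor trained only on seen classes (here, "horse" and "tiger") outputs an attribute vector for a new input, and the input is assigned to whichever class description, seen or unseen, is most similar.

```python
import math

# Toy attribute descriptions: [has_stripes, has_mane, is_domestic].
# "zebra" is the unseen class: no zebra examples exist at training
# time, only this semantic description of what a zebra looks like.
class_attributes = {
    "horse": [0.0, 1.0, 1.0],
    "tiger": [1.0, 0.0, 0.0],
    "zebra": [1.0, 1.0, 0.0],  # never seen during training
}

def cosine(a, b):
    """Cosine similarity between two attribute vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def zero_shot_classify(predicted_attrs, class_attributes):
    """Assign the class whose attribute description is closest
    to the attributes predicted for the input."""
    return max(class_attributes,
               key=lambda c: cosine(predicted_attrs, class_attributes[c]))

# An attribute predictor trained only on horses and tigers reports
# "striped, maned, not domestic" for a new image; the nearest class
# description is the unseen one.
predicted = [0.9, 0.8, 0.1]
print(zero_shot_classify(predicted, class_attributes))  # -> zebra
```

The key point the sketch captures is that the shared attribute space, not labelled examples, is what lets the model reach a class it has never observed.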
