Quantum Computing and the Qubit: The Future of Artificial Intelligence

Sasi P., Gulshan Soni, Amit Kumar Tyagi, Vijayalakshmi Kakulapati, Shyam Mohan J. S., Rabindra Kumar Singh
Copyright © 2023 | Pages: 14
DOI: 10.4018/978-1-6684-6697-1.ch013

Abstract

This chapter features a dissimilarity-based model of study, developed from prior studies and design; no statistical data are included. As the title suggests, it dives into quantum computing and introduces the concept of the qubit. Future papers will cover the advancement of quantum computing operations, the shift to quantum computers, and the development of intelligent algorithms for the new-age machines (referred to here as quantum machines). Knowledge of the evolution of intelligent machines is assumed, and the chapter provides it in the form of an introduction.
Chapter Preview

Introduction

Mimicking the human brain has been a dream for people across the globe for ages. Although conservatives and pessimists believe that creating algorithms that could mimic neuron clusters and their complex interconnectivity is highly improbable, we optimists (assuming the reader is one too) and researchers have been proving that we are a step nearer to the masterpiece every time we make a reasonable advance in the related area. Ever since 1943, when Walter Pitts and Warren McCulloch made the first computer model of the human brain using “threshold logic,” humans have thrived against an impossibility that today is a clearly proven possibility. Henry J. Kelley (n.d.) proposed the continuous Back-Propagation model in 1960; inspired by this model, Stuart Dreyfus later derived a simpler version based on the chain rule.
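The “threshold logic” idea mentioned above can be sketched as a simple function: inputs are summed (with weights) and compared against a threshold, and the unit either fires or stays silent. The weight and threshold values below are illustrative, not taken from the original paper.

```python
def mcculloch_pitts_neuron(inputs, weights, threshold):
    """Fire (return 1) when the weighted sum of inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# A single threshold unit can act as an AND gate: it fires
# only when both inputs are on (weighted sum reaches 2).
def and_gate(a, b):
    return mcculloch_pitts_neuron([a, b], [1, 1], threshold=2)
```

Lowering the threshold to 1 turns the same unit into an OR gate, which is the sense in which threshold logic lets networks of such units compute arbitrary boolean functions.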

Understanding how intelligence works is complicated, yet possible with proper knowledge of mathematical guessing, or probability, as it is termed. For example, if you had only your touch-sensory neurons to observe things around you and respond to them, you could probably tell that something hot touched you, or what shape it has, but you would be unable to determine its material. Here probability enters your brain: if something you touched has the shape of a wine glass, your brain receives the message through the nervous system and guesses that it is made of glass, unless there is other evidence to the contrary. The brain primarily depends on facts and data provided to it in the past. Therefore, it is unimaginable, and often not scalable for your mind, how dark matter looks unless it has been shown to you before.
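The kind of guess described above can be written down with Bayes' rule: combine a prior belief about materials with how likely each material is to produce a wine-glass shape. All the probability values below are made-up illustrative numbers, not data from the chapter.

```python
# Guess an object's material from its shape using Bayes' rule.
# Priors and likelihoods here are purely illustrative values.
prior = {"glass": 0.3, "plastic": 0.5, "metal": 0.2}        # P(material)
likelihood = {"glass": 0.9, "plastic": 0.2, "metal": 0.05}  # P(wine-glass shape | material)

# P(shape) = sum over materials of P(material) * P(shape | material)
evidence = sum(prior[m] * likelihood[m] for m in prior)

# P(material | shape) = P(material) * P(shape | material) / P(shape)
posterior = {m: prior[m] * likelihood[m] / evidence for m in prior}

best_guess = max(posterior, key=posterior.get)  # the brain's "guess"
```

With these numbers the posterior concentrates on glass, mirroring the brain's guess in the text; new evidence (say, the object feels warm to the touch) would simply update the posterior again.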

Figure 1. Often this is one's own brain when it is given a problem outside its scope, however intelligent one might be.

Machines that learn, on the other hand, work in exactly the same way. You train the machine with hundreds or thousands of pieces of data, and it comes to understand what the thing is. Telling the machine only about the size of a wine glass is comparable to a human having a single sensory reception: the machine concludes that everything the size of a wine glass is a wine glass. Imagine how badly it would end up if the machine suggested a blow-torch as a wine glass. Isn't that scary? So, teaching machines properly is the new task: telling them what color to look for, what shape, length, width, and so on. Thinking at such an atomic level for every problem we want the machine to learn is, again, very hard. If we observe closely, no one told you how a water bottle looks; they just told you it is a bottle and that it is used to drink water. It is your brain that works out facts such as: anything that is a hollow shell with a cap is a bottle that can be used to hold water. This conclusion is drawn because your brain gathers data from different kinds of bottles whenever you see or touch one, enough times. Here comes the concept of intelligent retrieval, or deep learning.

Intelligent retrieval enables the machine to learn facts about something presented to it. Yes, the machine learns by itself, using patterns of how things are. After reading enough examples, the machine sketches probability functions to decide whether a given object matches the object it learned. It weights every feature, which amounts to prioritizing which features are more important and which are less telling. For example, the shape of a wine glass has high priority because few objects share that shape, while its height has low priority because there are, obviously, many objects of that height and size. The machine learns that shape matters more when you show it many wine glasses that vary in size. Hence, the more we train the machine, the more it learns and the more accurate it becomes.
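The feature-weighting idea above can be sketched with a toy scorer: a distinctive feature (shape) receives a high weight, while a common one (height) receives a low weight. The weights and similarity values below are assumptions for illustration, not learned from real data.

```python
# Toy weighted-feature matcher: score how closely an object matches
# a learned "wine glass" prototype. The weights stand in for feature
# importance learned from training: shape matters far more than height.
weights = {"shape_match": 0.8, "height_match": 0.2}  # illustrative, sum to 1

def match_score(features):
    """features maps each feature name to a similarity score in [0, 1]."""
    return sum(weights[name] * value for name, value in features.items())

# A wine glass matches on both features; a blow-torch of the same
# height matches only on the low-weight feature, so it scores poorly.
wine_glass = match_score({"shape_match": 1.0, "height_match": 0.9})
blow_torch = match_score({"shape_match": 0.1, "height_match": 0.9})
```

With uniform weights the blow-torch would score much closer to the wine glass, which is exactly the single-feature confusion the chapter warns about; the learned weighting is what separates them.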
