Theoretical Foundations of Deep Resonance Interference Network: Towards Intuitive Learning as a Wave Field Phenomenon

Christophe Thovex
Copyright © 2020 | Pages: 23
DOI: 10.4018/978-1-5225-9742-1.ch015

Abstract

Digital processes for banks, insurance companies, or public services generate big data. Hidden networks and weak signals from fraud activities are sometimes statistically undetectable in the endogenous data of those processes. The organic intelligence of human experts is able to reverse-engineer new fraud scenarios without statistically significant characteristics, but machine learning usually needs to be taught about them or fails at the task. Deep resonance interference network is a multidisciplinary attempt in probabilistic machine learning, inspired by the temporal reversal of waves in finite space, introduced for big data analysis and hidden data mining. It proposes a theoretical alternative to artificial neural networks for deep learning. It is presented along with experimental outcomes related to fraudulent processes generating data statistically similar to legal endogenous data. Results show particular findings, probably due to the systemic nature of the model, which appears closer to reasoning and intuition processes than to the perception processes mainly simulated in deep learning.
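
As a brief reminder of the physical inspiration mentioned above (a standard property of the wave equation, added for context rather than taken from the chapter): the wave equation is invariant under time reversal, which is why a wave field recorded in a finite space and re-emitted in reverse temporal order refocuses onto its source.

\[
\frac{\partial^2 u}{\partial t^2}(x,t) = c^2\,\nabla^2 u(x,t)
\quad\Longrightarrow\quad
\tilde{u}(x,t) := u(x,-t) \ \text{satisfies}\ \frac{\partial^2 \tilde{u}}{\partial t^2}(x,t) = c^2\,\nabla^2 \tilde{u}(x,t),
\]

since the second time derivative is unchanged by the substitution \(t \mapsto -t\).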

Introduction

Intelligence is not what we know, but what we do when we do not know. This statement is commonly attributed to the well-known psychologist Jean Piaget, a biologist and epistemologist who, like the philosopher H. Putnam (Putnam, 1975), stressed the importance of communication and social interactions for the development of language and of human intelligence. According to the historical experiment supposedly conducted by the Holy Roman Emperor Frederick II Hohenstaufen (1194-1250), babies who cannot communicate with other babies and adults lose, after a few years, the cognitive ability to learn language, and therefore cannot acquire normal intellectual capabilities. Hence, human learning is social, and social learning - the action of learning aided by social networks, which are not necessarily digitized - is essential to foster or accelerate the development of intelligence (Bialystok & Poarch, 2014). This forms a first statement.

Nevertheless, according to Corballis (2003), babies start to communicate by signs before being physically able to talk and verbalize thoughts with adults. Babies, and animals such as dolphins and primates, fail the Turing test, an imitation game designed to test the existence of thought in computing machinery (Turing, 1950). Obviously, the Turing test does not detect thought but estimates the ability of computing machinery to interact with natural language skills. It shows how much computing machinery only performs what we know how to order it to perform, as stated by Ada Lovelace and quoted in Turing (1950). This forms a second statement.

Unifying the first and second statements, we may accept the possibility of an intelligence developed with computing machinery without outer, interpretable evidence of its existence - i.e., absence of evidence is not evidence of absence. However, in 2018 we were still unable to tell a machine how to play the imitation game and sustain the Turing test for more than a few minutes, while adult humans cannot fail the test, at least in their native language1. In any case, it reminds us how fair Alan Turing’s conclusion about thinking machines was: “We can only see a short distance ahead, but we can see plenty there that needs to be done” (Turing, 1950).

Since precursor works such as those presented in Pitrat (1990), the hypothesis of a self-programming machine has offered the promise of an Artificial Intelligence (AI), i.e. a genuine AI doing when it does not know, according to Piaget’s definition of intelligence, and performing what we do not know how to order it to perform. J. Pitrat and his general problem solver CAIA opened Pandora’s box, providing mathematical solutions to some specialized problems that were never coded, thanks to meta-knowledge (Pitrat, 2010). Because the machine still does not develop itself and does nothing without human programming, it follows that there might be no AI in machine learning, just artificial knowledge, finally.

Talking machines (i.e., chatbots) process patterns in textual representations for performing operations on natural language. Computer vision processes patterns in images, video, or 3D representations for object classification. Both systems can collaborate, but chatbots “ignore” everything that computer vision “sees”, and conversely. This is where the first and second statements are recalled as a theoretical opening - i.e., (1) human learning is social and (2) a machine only performs what we know how to order it to perform.

Key Terms in this Chapter

Epistemic: From episteme, Greek for “knowledge.” That Greek word is from the verb epistanai, meaning “to know or understand,” a word formed from the prefix epi- (meaning “upon” or “attached to”) and histanai (meaning “to cause to stand”). The study of the nature and grounds of knowledge is called epistemology, and one who engages in such study is an epistemologist.

Orthogonality: Perpendicular reference to a plane or a geometric object. Various objects defined within an orthogonal frame belong to a continuous orthogonality.
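
For concreteness (a standard algebraic restatement added by way of illustration, not a formula from the chapter): two vectors are orthogonal exactly when their inner product vanishes,

\[
\langle u, v \rangle = \sum_i u_i v_i = 0,
\]

e.g., \(u = (1, 0)\) and \(v = (0, 1)\) in the plane.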

Resonance: The intensification and enrichment of a wave in phase with another wave (phase conjunction).
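
As an illustration (a standard superposition identity, not specific to the chapter’s model): two waves of equal amplitude \(A\) and phase difference \(\phi\) sum as

\[
A\sin(\omega t) + A\sin(\omega t + \phi) = 2A\cos\!\left(\tfrac{\phi}{2}\right)\sin\!\left(\omega t + \tfrac{\phi}{2}\right),
\]

so in phase (\(\phi = 0\)) the resulting amplitude doubles to \(2A\): the waves reinforce each other.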

Threshold: Parameter defining minimal or maximal values beyond which data are filtered.
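
A minimal sketch of such filtering in Python (illustrative only; the function name and bounds are assumptions, not from the chapter):

    def threshold_filter(values, low=None, high=None):
        # Hypothetical helper: keep values within [low, high]; None disables a bound.
        kept = []
        for v in values:
            if low is not None and v < low:
                continue  # below the minimal threshold: filtered out
            if high is not None and v > high:
                continue  # above the maximal threshold: filtered out
            kept.append(v)
        return kept

    # Example: keep amplitudes between 0.1 and 0.9
    print(threshold_filter([0.05, 0.3, 0.7, 1.2], low=0.1, high=0.9))  # [0.3, 0.7]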

Greedy Algorithm: An algorithm that explores the whole combinatorial space of a problem, as opposed to other approaches such as random walks, for instance.
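
In the sense used here (exhaustive exploration of the combinatorial space, which differs from the more common textbook meaning of locally optimal choices), a minimal Python sketch might look as follows; the subset-sum task is a hypothetical example chosen for illustration:

    from itertools import combinations

    def best_subset(items, target):
        # Explore every subset (the whole combinatory) and keep the sum closest to target.
        best, best_gap = (), float("inf")
        for r in range(len(items) + 1):
            for subset in combinations(items, r):
                gap = abs(sum(subset) - target)
                if gap < best_gap:
                    best, best_gap = subset, gap
        return best

    print(best_subset([3, 7, 12, 5], 10))  # (3, 7)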

Interference: The weakening and degradation of a wave by another wave in opposite phase.
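
Using the same superposition identity given under Resonance: in opposite phase (\(\phi = \pi\)) the factor \(\cos(\pi/2) = 0\), so

\[
A\sin(\omega t) + A\sin(\omega t + \pi) = 0,
\]

and the two waves cancel each other completely.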

Hidden Variable/Network: Variables and/or networks that are not directly visible but act in the results of a calculus, a computing process, or a network structure.

Bootstrap: A statistical method for creating samples from a population, or a small piece of code for starting a computer. By derivation, an expression for the initialization phase of analytic algorithms.
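
A minimal sketch of the statistical sense of the term in Python (the sample data and replicate count are illustrative assumptions):

    import random

    def bootstrap_means(sample, n_replicates=1000, seed=0):
        # Resample with replacement and collect the mean of each replicate.
        rng = random.Random(seed)
        means = []
        for _ in range(n_replicates):
            replicate = [rng.choice(sample) for _ in sample]  # same size, drawn with replacement
            means.append(sum(replicate) / len(replicate))
        return means

    data = [2.1, 2.5, 1.9, 2.8, 2.4]  # hypothetical observations
    means = bootstrap_means(data)
    print(min(means), max(means))  # rough spread of the estimated mean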

Convex Optimization: Based on convex functions in mathematics (i.e., functions with a well-defined optimal value or set of values), convex optimization makes it possible to find, through a linear progression, the parameters for which the loss is minimal.
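
A minimal sketch of convex optimization by gradient descent in Python (the quadratic loss and step size are illustrative assumptions, not the chapter’s method):

    def gradient_descent(grad, x0, lr=0.1, steps=100):
        # Follow the negative gradient of a convex loss toward its unique minimum.
        x = x0
        for _ in range(steps):
            x -= lr * grad(x)
        return x

    # Convex loss L(x) = (x - 3)^2, whose unique minimum is x = 3
    grad = lambda x: 2.0 * (x - 3.0)
    print(gradient_descent(grad, x0=0.0))  # ~3.0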
