Introduction to Data Mining

DOI: 10.4018/978-1-7998-8350-0.ch001

Abstract

This chapter presents four widely used data mining algorithms and treats four aspects of each: essence, applications, advantages, and disadvantages. The algorithms are neural networks, rule induction, tree algorithms, and neighborhood-based reasoning. The chapter is a basic introduction with an overview of the important issues. It includes links to the relevant algorithmic details and the mathematical treatment of the four selected algorithms, explained in Chapter 2; links to issues of fast and energy-efficient implementations using the Maxeler dataflow technology, explained in Chapter 3; and links to the part on possible applications of the selected algorithms, treated in Chapter 4.
Chapter Preview

Introduction

An important question to answer at the very beginning of this chapter is what the major difference is between data mining and the semantic web. In both cases, the goal is the same: efficient retrieval of knowledge from large databases or from the Internet (in this context, knowledge is defined as a synergistic interaction of data and the links between data). The major difference is in the placement of complexity.

In the case of data mining, data and knowledge are represented using relatively simple mechanisms (typically HTML or derivatives thereof), and no metadata (data about data) are included. Consequently, during the retrieval process, relatively complex algorithms have to be used. This means that the complexity is placed at retrieval-request time, while the complexity at system design time is relatively low: only the code of the algorithm has to be injected into the system.
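As an illustration (a minimal sketch, not taken from the chapter, with a hypothetical corpus and query), the following Python fragment keeps the data as plain HTML with no metadata, so every retrieval request has to parse and score the documents from scratch, i.e., all of the work happens at retrieval-request time:

    # Minimal sketch: data kept as plain HTML with no metadata,
    # so the complexity is spent at retrieval-request time.
    from html.parser import HTMLParser
    from collections import Counter

    class TextExtractor(HTMLParser):
        """Collects the visible text of an HTML document."""
        def __init__(self):
            super().__init__()
            self.words = []
        def handle_data(self, data):
            self.words.extend(data.lower().split())

    def retrieve(documents, query):
        """Query-time scoring: parse and rank every document on each request."""
        query_terms = query.lower().split()
        scores = []
        for doc in documents:
            parser = TextExtractor()
            parser.feed(doc)
            counts = Counter(parser.words)
            scores.append(sum(counts[t] for t in query_terms))
        return max(range(len(documents)), key=lambda i: scores[i])

    # Hypothetical example corpus
    docs = ["<p>solar activity shows periodic spots</p>",
            "<p>ancient alphabets and deciphering</p>"]
    print(retrieve(docs, "solar spots"))   # index of the best-matching document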

In the case of the semantic web, data and knowledge are represented using relatively complex mechanisms (like XML or derivatives thereof), with a lot of metadata included (a byte of data could be accompanied by megabytes of metadata); with the help of quality metadata, even the simplest algorithm could perform the retrieval successfully. Consequently, at knowledge retrieval time, relatively simple algorithms could be used, meaning that the complexity at retrieval-request time is relatively low. This means that the complexity is placed at system design time, when the metadata structures are formed; the larger the metadata, the more work at system design time (Agrawal, 2000; Halbwachs, Caspi, Raymond, & Pilaud, 1991).
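For contrast, a minimal sketch of this placement of complexity (again hypothetical, not from the chapter): the records below carry metadata attributes prepared at system design time, so the retrieval itself reduces to a trivial attribute match:

    # Minimal sketch: metadata-rich representation prepared at design time,
    # so retrieval needs only a simple lookup.
    import xml.etree.ElementTree as ET

    # Hypothetical metadata-rich records
    xml_data = """
    <records>
      <record topic="astronomy" keywords="solar spots periodicity">
        Solar activity shows periodic spots.
      </record>
      <record topic="linguistics" keywords="ancient alphabets deciphering">
        Ancient alphabets and their deciphering.
      </record>
    </records>
    """

    def retrieve(xml_text, topic):
        """Query-time work is a simple attribute match on the metadata."""
        root = ET.fromstring(xml_text)
        return [r.text.strip() for r in root.findall("record")
                if r.get("topic") == topic]

    print(retrieve(xml_data, "astronomy"))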

The above clearly indicates where the stress should lie in textbooks that cover these two tangential subjects: when teaching data mining, the stress should be on algorithms and the related mathematics, as demonstrated in the second part of this book; when teaching the semantic web, the stress should be on tools for the creation and treatment of metadata. Consequently, this book concentrates on algorithms, while the tools are outside its scope. The selection of algorithms presented here is based on their popularity, using Google Scholar as the criterion of popularity. The popularity of a particular data mining algorithm is highly correlated with the possible application domains of that algorithm, as discussed in the third part of this book.

The major issues in data mining are discussed in the literature (Berry & Linoff; Tan, Chawla, Ho, & Bailey, 2012; Hand, 2007; Hall et al., 2009; Fayyad, Piatetsky-Shapiro, & Smyth, 1996; Tan, Steinbach, & Kumar, 2006; Witten & Frank, 2002; Berson & Thearling, 1999). They are: (a) effective uncovering of the hidden knowledge, in conditions when the search space is NP-complete, and (b) effective development of a multidimensional interface that enables easy comprehension of the obtained results. These two major goals are achieved through the interaction of several system software layers: (a) a database at the bottom, followed by (b) layers responsible for artificial intelligence and automated presentation, as sketched below.
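A minimal sketch of this layered interaction, using hypothetical names and data (not from the chapter): a small database layer at the bottom, a simple neighborhood-based reasoning step as the mining layer, and a presentation step that reports the uncovered result:

    # Minimal sketch of the layered structure: database at the bottom,
    # a simple neighborhood-based mining step above it, presentation on top.
    # Table and column names are hypothetical.
    import sqlite3

    # Database layer: a tiny in-memory table of labeled observations
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE observations (x REAL, y REAL, label TEXT)")
    db.executemany("INSERT INTO observations VALUES (?, ?, ?)",
                   [(1.0, 1.1, "A"), (1.2, 0.9, "A"), (5.0, 5.2, "B")])

    # Mining layer: 1-nearest-neighbor reasoning over the stored rows
    def classify(x, y):
        rows = db.execute("SELECT x, y, label FROM observations").fetchall()
        return min(rows, key=lambda r: (r[0] - x) ** 2 + (r[1] - y) ** 2)[2]

    # Presentation layer: report the uncovered label
    print("Predicted label:", classify(1.1, 1.0))   # expected: A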

The important issues are presented using appropriate figures (Jovanovic & Milutinovic, 2002).

The history of data mining did not start in recent years; it started long ago. For example, the researchers who deciphered the ancient alphabets portrayed in Figure 1 used the same data mining algorithms as those described in this book; only, for the processing, they did not use modern computers, but their brains. As indicated in Figure 2, this means that Galileo Galilei and Heinrich Schwabe, who discovered the periodicities related to the Sun's rotation and sunspots, would not have become famous had they been born after the introduction of data mining tools; the same knowledge for which they became famous could be uncovered even with the simplest data mining software working on top of data obtained from a telescope directed at the Sun.

Figure 1. Selected ancient alphabets data mined by the human brain.

Figure 2. Repetitive solar activities that could be effectively uncovered using modern data mining software.

Key Terms in this Chapter

Hidden Knowledge: The output of data mining techniques, in which valuable knowledge is extracted from unstructured data.

Neuron: A basic computational unit in one layer of a neural network.

ALU: Arithmetic logic unit in a processor.

Layer: One layer of a neural network, consisting of many neurons.

Data Mining: A technique for extracting hidden knowledge from data.
