Reference: Tan, P. N., Steinbach, M., & Kumar, V. (2016). Introduction to data mining. Pearson Education, Inc.
Rapid advances in data collection and storage technology have enabled organizations to accumulate vast amounts of data. However, extracting useful
information has proven extremely challenging. Often, traditional data analysis tools and techniques cannot be used because of the massive size of a data
set. Sometimes,the non-traditional nature of the data means that traditional
approachescannot be applied even if the data set is relatively small. In other
situations, the questions that need to be answeredcannot be addressedusing
existing data analysis techniques, and thus, new methods need to be developed.
Data mining is a technology that blends traditional data analysis methods
with sophisticated algorithms for processing large volumes of data. It has also
opened up exciting opportunities for exploring and analyzing new types of
data and for analyzing old types of data in new ways. In this introductory
chapter, we present an overview of data mining and outline the key topics
to be covered in this book. We start with a description of some well-known
applications that require new techniques for data analysis.
Business Point-of-sale data collection (bar code scanners, radio frequency identification (RFID), and smart card technology) has allowed retailers to collect up-to-the-minute data about customer purchases at the checkout counters of their stores. Retailers can utilize this information, along with other business-critical data such as Web logs from e-commerce Web sites and customer service records from call centers, to help them better understand the needs of their customers and make more informed business decisions.
Data mining techniques can be used to support a wide range of business intelligence applications such as customer profiling, targeted marketing, workflow management, store layout, and fraud detection. They can also help retailers answer important business questions such as “Who are the most profitable customers?” “What products can be cross-sold or up-sold?” and “What is the revenue outlook of the company for next year?” Some of these questions motivated the creation of association analysis (Chapters 6 and 7), a new data analysis technique.
Medicine, Science, and Engineering Researchers in medicine, science, and engineering are rapidly accumulating data that is key to important new discoveries. For example, as an important step toward improving our understanding of the Earth’s climate system, NASA has deployed a series of Earth-orbiting satellites that continuously generate global observations of the land
surface, oceans, and atmosphere. However, because of the size and spatiotemporal nature of the data, traditional methods are often not suitable for
analyzing these data sets. Techniques developed in data mining can aid Earth scientists in answering questions such as “What is the relationship between the frequency and intensity of ecosystem disturbances such as droughts and hurricanes and global warming?” “How are land surface precipitation and temperature affected by ocean surface temperature?” and “How well can we predict the beginning and end of the growing season for a region?”
As another example, researchers in molecular biology hope to use the large amounts of genomic data currently being gathered to better understand the structure and function of genes. In the past, traditional methods in molecular biology allowed scientists to study only a few genes at a time in a given experiment. Recent breakthroughs in microarray technology have enabled scientists to compare the behavior of thousands of genes under various situations. Such comparisons can help determine the function of each gene and perhaps isolate the genes responsible for certain diseases. However, the noisy and high-dimensional nature of the data requires new types of data analysis. In addition to analyzing gene array data, data mining can also be used to address other important biological challenges such as protein structure prediction, multiple sequence alignment, the modeling of biochemical pathways, and phylogenetics.
What Is Data Mining?
Data mining is the process of automatically discovering useful information in large data repositories. Data mining techniques are deployed to scour large databases in order to find novel and useful patterns that might otherwise remain unknown. They also provide capabilities to predict the outcome of a future observation, such as predicting whether a newly arrived customer will spend more than $100 at a department store.
Not all information discovery tasks are considered to be data mining. For example, looking up individual records using a database management system or finding particular Web pages via a query to an Internet search engine are tasks related to the area of information retrieval. Although such tasks are important and may involve the use of sophisticated algorithms and data structures, they rely on traditional computer science techniques and obvious features of the data to create index structures for efficiently organizing and retrieving information. Nonetheless, data mining techniques have been used to enhance information retrieval systems.
Data Mining and Knowledge Discovery
Data mining is an integral part of knowledge discovery in databases (KDD), which is the overall process of converting raw data into useful information, as shown in Figure 1.1. This process consists of a series of transformation steps, from data preprocessing to postprocessing of data mining results.
Figure 1.1. The process of knowledge discovery in databases (KDD).
The input data can be stored in a variety of formats (flat files, spreadsheets, or relational tables) and may reside in a centralized data repository
or be distributed across multiple sites. The purpose of preprocessing is
to transform the raw input data into an appropriate format for subsequent
analysis. The steps involved in data preprocessing include fusing data from
multiple sources, cleaning data to remove noise and duplicate observations,
and selecting records and features that are relevant to the data mining task
at hand. Because of the many ways data can be collected and stored, data
preprocessing is perhaps the most laborious and time-consuming step in the
overall knowledge discovery process.
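The preprocessing steps just listed — fusing data from multiple sources, removing noise and duplicates, and selecting relevant features — can be sketched in a few lines. This is a minimal illustration only; the record fields (customer, item, price) are hypothetical, not from the book.

```python
# A minimal sketch of the data preprocessing steps described above.
# The field names and records are purely illustrative.

def preprocess(sources, keep_fields):
    """Fuse records from multiple sources, drop noisy (incomplete) and
    duplicate rows, and keep only features relevant to the mining task."""
    fused = [rec for src in sources for rec in src]            # fuse sources
    cleaned = [r for r in fused if None not in r.values()]     # remove noise
    seen, deduped = set(), []
    for r in cleaned:                                          # remove duplicates
        key = tuple(sorted(r.items()))
        if key not in seen:
            seen.add(key)
            deduped.append(r)
    return [{f: r[f] for f in keep_fields} for r in deduped]   # select features

store_a = [{"customer": 1, "item": "milk", "price": 2.5},
           {"customer": 2, "item": "bread", "price": None}]
store_b = [{"customer": 1, "item": "milk", "price": 2.5}]

clean = preprocess([store_a, store_b], keep_fields=["customer", "item"])
print(clean)  # the duplicate and the incomplete record are dropped
```

Even in this toy form, most of the code is bookkeeping for cleaning and deduplication, which mirrors why preprocessing dominates the effort in real KDD pipelines.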
“Closing the loop” is the phrase often used to refer to the process of integrating data mining results into decision support systems. For example, in business applications, the insights offered by data mining results can be integrated with campaign management tools so that effective marketing promotions can be conducted and tested. Such integration requires a postprocessing step that ensures that only valid and useful results are incorporated into the decision support system. An example of postprocessing is visualization (see Chapter 3), which allows analysts to explore the data and the data mining results from a variety of viewpoints. Statistical measures or hypothesis testing methods can also be applied during postprocessing to eliminate spurious data mining results.
Motivating Challenges
As mentioned earlier, traditional data analysis techniques have often encountered practical difficulties in meeting the challenges posed by new data sets. The following are some of the specific challenges that motivated the development of data mining.
Scalability Because of advances in data generation and collection, data sets with sizes of gigabytes, terabytes, or even petabytes are becoming common.
If data mining algorithms are to handle these massive data sets, then they
must be scalable. Many data mining algorithms employ special search strategies to handle exponential search problems. Scalability may also require the
implementation of novel data structures to access individual records in an efficient manner. For instance, out-of-core algorithms may be necessary when processing data sets that cannot fit into main memory. Scalability can also be
improved by using sampling or developing parallel and distributed algorithms.
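As a concrete illustration of the sampling strategy mentioned above, reservoir sampling is one standard way (not one prescribed by this chapter) to keep a fixed-size uniform sample of a stream that is too large to hold in memory:

```python
import random

def reservoir_sample(stream, k, rng=None):
    """Maintain a uniform random sample of k items from a stream of
    unknown length using only O(k) memory."""
    rng = rng or random.Random(0)  # seeded here for reproducibility
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)          # fill the reservoir first
        else:
            j = rng.randint(0, i)        # keep item with probability k/(i+1)
            if j < k:
                sample[j] = item
    return sample

# A "massive" data set stands in here for one that cannot fit in memory.
sample = reservoir_sample(range(1_000_000), k=100)
print(len(sample))  # 100
```

The key property is that every element of the stream ends up in the reservoir with equal probability, so downstream mining on the sample is unbiased with respect to the full data set.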
High Dimensionality It is now common to encounter data sets with hundreds or thousands of attributes instead of the handful common a few decades
ago. In bioinformatics, progress in microarray technology has produced gene
expression data involving thousands of features. Data sets with temporal
or spatial components also tend to have high dimensionality. For example,
consider a data set that contains measurements of temperature at various
locations. If the temperature measurements are taken repeatedly for an extended period, the number of dimensions (features) increases in proportion to the number of measurements taken. Traditional data analysis techniques that were developed for low-dimensional data often do not work well for such high-dimensional data. Also, for some data analysis algorithms, the computational complexity increases rapidly as the dimensionality (the number of features) increases.
Heterogeneous and Complex Data Traditional data analysis methods often deal with data sets containing attributes of the same type, either continuous or categorical. As the role of data mining in business, science, medicine, and other fields has grown, so has the need for techniques that can handle heterogeneous attributes. Recent years have also seen the emergence of more complex data objects. Examples of such non-traditional types of data include collections of Web pages containing semi-structured text and hyperlinks; DNA data with sequential and three-dimensional structure; and climate data that consists of time series measurements (temperature, pressure, etc.) at various locations on the Earth’s surface. Techniques developed for mining such complex objects should take into consideration relationships in the data, such as temporal and spatial autocorrelation, graph connectivity, and parent-child relationships between the elements in semi-structured text and XML documents.
Data Ownership and Distribution Sometimes, the data needed for an analysis is not stored in one location or owned by one organization. Instead, the data is geographically distributed among resources belonging to multiple entities. This requires the development of distributed data mining techniques. Key challenges faced by distributed data mining algorithms include (1) how to reduce the amount of communication needed to perform the distributed computation, (2) how to effectively consolidate the data mining results obtained from multiple sources, and (3) how to address data security issues.
Non-traditional Analysis The traditional statistical approach is based on a hypothesize-and-test paradigm. In other words, a hypothesis is proposed, an experiment is designed to gather the data, and then the data is analyzed with respect to the hypothesis. Unfortunately, this process is extremely labor-intensive. Current data analysis tasks often require the generation and evaluation of thousands of hypotheses, and consequently, the development of some data mining techniques has been motivated by the desire to automate the process of hypothesis generation and evaluation. Furthermore, the data sets analyzed in data mining are typically not the result of a carefully designed experiment and often represent opportunistic samples of the data, rather than random samples. Also, the data sets frequently involve non-traditional types of data and data distributions.
The Origins of Data Mining
Brought together by the goal of meeting the challenges of the previous section, researchers from different disciplines began to focus on developing more efficient and scalable tools that could handle diverse types of data. This work, which culminated in the field of data mining, built upon the methodology and algorithms that researchers had previously used. In particular, data mining
draws upon ideas, such as (1) sampling, estimation, and hypothesis testing
from statistics and (2) search algorithms, modeling techniques, and learning
theories from artificial intelligence, pattern recognition, and machine learning.
Data mining has also been quick to adopt ideas from other areas, including
optimization, evolutionary computing, information theory, signal processing,
visualization, and information retrieval.
A number of other areas also play key supporting roles. In particular,
database systems are needed to provide support for efficient storage, indexing, and query processing. Techniques from high performance (parallel) computing are often important in addressing the massive size of some data sets. Distributed techniques can also help address the issue of size and are essential
when the data cannot be gathered in one location.
Figure 1.2 shows the relationship of data mining to other areas.
Figure 1.2. Data mining as a confluence of many disciplines.
Data Mining Tasks
Data mining tasks are generally divided into two major categories:
Predictive tasks. The objective of these tasks is to predict the value of a particular attribute based on the values of other attributes. The attribute to be predicted is commonly known as the target or dependent variable, while the attributes used for making the prediction are known as the explanatory or independent variables.
Descriptive tasks. Here, the objective is to derive patterns (correlations, trends, clusters, trajectories, and anomalies) that summarize the underlying relationships in data. Descriptive data mining tasks are often exploratory in nature and frequently require postprocessing techniques to validate and explain the results.
Figure 1.3 illustrates four of the core data mining tasks that are described
in the remainder of this book.
Predictive modeling refers to the task of building a model for the target
variable as a function of the explanatory variables. There are two types of
predictive modeling tasks: classification, which is used for discrete target
variables, and regression, which is used for continuous target variables. For
example, predicting whether a Web user will make a purchase at an online
bookstore is a classification task because the target variable is binary-valued. On the other hand, forecasting the future price of a stock is a regression task because price is a continuous-valued attribute. The goal of both tasks is to learn a model that minimizes the error between the predicted and true values of the target variable. Predictive modeling can be used to identify customers that will respond to a marketing campaign, predict disturbances in the Earth’s ecosystem, or judge whether a patient has a particular disease based on the
results of medical tests.
Example 1.1 (Predicting the Type of a Flower). Consider the task of
predicting a species of flower based on the characteristics of the flower. In
particular, consider classifying an Iris flower as to whether it belongs to one of the following three Iris species: Setosa, Versicolour, or Virginica. To perform this task, we need a data set containing the characteristics of various flowers of these three species. A data set with this type of information is the well-known Iris data set from the UCI Machine Learning Repository. In addition to the species of a flower, this data set contains four other attributes: sepal width, sepal length, petal
length, and petal width. (The Iris data set and its attributes are described
further in Section 3.1.) Figure 1.4 shows a plot of petal width versus petal length for the 150 flowers in the Iris data set. Petal width is broken into the categories low, medium, and high, which correspond to the intervals [0, 0.75), [0.75, 1.75), and [1.75, ∞), respectively. Also, petal length is broken into the categories low, medium, and high, which correspond to the intervals [0, 2.5), [2.5, 5), and [5, ∞), respectively. Based on these categories of petal width and length, the following rules can be derived:
Petal width low and petal length low implies Setosa.
Petal width medium and petal length medium implies Versicolour.
Petal width high and petal length high implies Virginica.
While these rules do not classify all the flowers, they do a good (but not perfect) job of classifying most of the flowers. Note that flowers from the Setosa species are well separated from the Versicolour and Virginica species with respect to petal width and length, but the latter two species overlap somewhat with respect to these attributes.
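The three rules of Example 1.1 can be written down directly as a toy rule-based classifier. This is only a sketch using the cut points given above; the function names are ours, not the book's.

```python
def petal_category(value, cuts):
    """Map a measurement to low/medium/high given two cut points."""
    lo, hi = cuts
    return "low" if value < lo else "medium" if value < hi else "high"

def classify_iris(petal_width, petal_length):
    """Apply the three rules from Example 1.1.
    Returns None when no rule fires, since the rules are not exhaustive."""
    w = petal_category(petal_width, (0.75, 1.75))   # intervals from the text
    l = petal_category(petal_length, (2.5, 5.0))
    if w == "low" and l == "low":
        return "Setosa"
    if w == "medium" and l == "medium":
        return "Versicolour"
    if w == "high" and l == "high":
        return "Virginica"
    return None

print(classify_iris(0.2, 1.4))  # Setosa
print(classify_iris(1.3, 4.5))  # Versicolour
print(classify_iris(2.1, 5.8))  # Virginica
```

The `None` branch makes the incompleteness of the rules explicit: a flower whose width and length fall into different categories is left unclassified, exactly as the text notes for the overlapping species.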
Figure 1.4. Petal width versus petal length for the 150 flowers in the Iris data set (Setosa, Versicolour, and Virginica).
Association analysis is used to discover patterns that describe strongly associated features in the data. The discovered patterns are typically represented in the form of implication rules or feature subsets. Because of the exponential size of its search space, the goal of association analysis is to extract the most interesting patterns in an efficient manner. Useful applications of association analysis include finding groups of genes that have related functionality, identifying Web pages that are accessed together, or understanding the relationships between different elements of Earth’s climate system.
Example 1.2 (Market Basket Analysis). The transactions shown in Table 1.1 illustrate point-of-sale data collected at the checkout counters of a grocery store. Association analysis can be applied to find items that are frequently bought together by customers. For example, we may discover the rule {Diapers} → {Milk}, which suggests that customers who buy diapers also tend to buy milk. This type of rule can be used to identify potential cross-selling opportunities among related items.
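For illustration, the support and confidence of such a rule (the standard rule metrics, treated formally in Chapter 6) can be computed by simple counting over the transactions of Table 1.1:

```python
# Transactions of Table 1.1, each represented as a set of items.
transactions = [
    {"Bread", "Butter", "Diapers", "Milk"},
    {"Coffee", "Sugar", "Cookies", "Salmon"},
    {"Bread", "Butter", "Coffee", "Diapers", "Milk", "Eggs"},
    {"Bread", "Butter", "Salmon", "Chicken"},
    {"Eggs", "Bread", "Butter"},
    {"Salmon", "Diapers", "Milk"},
    {"Bread", "Tea", "Sugar", "Eggs"},
    {"Coffee", "Sugar", "Chicken", "Eggs"},
    {"Bread", "Diapers", "Milk", "Salt"},
    {"Tea", "Eggs", "Cookies", "Diapers", "Milk"},
]

def support(itemset):
    """Fraction of transactions that contain every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(lhs, rhs):
    """Fraction of transactions containing lhs that also contain rhs."""
    return support(lhs | rhs) / support(lhs)

print(support({"Diapers", "Milk"}))       # 0.5
print(confidence({"Diapers"}, {"Milk"}))  # 1.0
```

In this small table, every one of the five transactions containing Diapers also contains Milk, which is why the rule has perfect confidence; the challenge addressed by association analysis algorithms is doing such counting efficiently over an exponential space of candidate itemsets.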
Cluster analysis seeks to find groups of closely related observations so that observations that belong to the same cluster are more similar to each other
Table 1.1. Market basket data.
Transaction ID  Items
1   {Bread, Butter, Diapers, Milk}
2   {Coffee, Sugar, Cookies, Salmon}
3   {Bread, Butter, Coffee, Diapers, Milk, Eggs}
4   {Bread, Butter, Salmon, Chicken}
5   {Eggs, Bread, Butter}
6   {Salmon, Diapers, Milk}
7   {Bread, Tea, Sugar, Eggs}
8   {Coffee, Sugar, Chicken, Eggs}
9   {Bread, Diapers, Milk, Salt}
10  {Tea, Eggs, Cookies, Diapers, Milk}
than observations that belong to other clusters. Clustering has been used to group sets of related customers, find areas of the ocean that have a significant impact on the Earth’s climate, and compress data.
Example 1.3 (Document Clustering). The collection of news articles shown in Table 1.2 can be grouped based on their respective topics. Each article is represented as a set of word-frequency pairs (w, c), where w is a word and c is the number of times the word appears in the article. There are two natural clusters in the data set. The first cluster consists of the first four articles, which correspond to news about the economy, while the second cluster contains the last four articles, which correspond to news about health care. A good clustering algorithm should be able to identify these two clusters based on the similarity between words that appear in the articles.
Table 1.2. Collection of news articles.
Article  Words
1  dollar: 1, industry: 4, country: 2, loan: 3, deal: 2, government: 2
2  machinery: 2, labor: 3, market: 4, industry: 2, work: 3, country: 1
3  job: 5, inflation: 3, rise: 2, jobless: 2, market: 3, country: 2, index: 3
4  domestic: 3, forecast: 2, gain: 1, market: 2, sale: 3, price: 2
5  patient: 4, symptom: 2, drug: 3, health: 2, clinic: 2, doctor: 2
6  pharmaceutical: 2, company: 3, drug: 2, vaccine: 1, flu: 3
7  death: 2, cancer: 4, drug: 3, public: 4, health: 3, director: 2
8  medical: 2, cost: 3, increase: 2, patient: 2, health: 3, care: 1
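The word-based similarity such a clustering algorithm relies on can be illustrated with cosine similarity between word-frequency vectors. The sketch below uses three of the articles from Table 1.2 (one health-care article and two economy articles); cosine similarity is a standard choice here, not one the example mandates.

```python
import math

# Word-frequency vectors for three articles from Table 1.2.
economy1 = {"dollar": 1, "industry": 4, "country": 2, "loan": 3,
            "deal": 2, "government": 2}
economy2 = {"job": 5, "inflation": 3, "rise": 2, "jobless": 2,
            "market": 3, "country": 2, "index": 3}
health   = {"patient": 4, "symptom": 2, "drug": 3, "health": 2,
            "clinic": 2, "doctor": 2}

def cosine(a, b):
    """Cosine similarity of two sparse word-frequency vectors."""
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

# Articles on the same topic share words, so their similarity is higher:
print(cosine(economy1, economy2) > cosine(economy1, health))  # True
```

Because the economy and health-care articles in Table 1.2 share almost no vocabulary, their cosine similarity is near zero, which is exactly the structure a clustering algorithm exploits to recover the two topic clusters.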
Anomaly detection is the task of identifying observations whose characteristics are significantly different from the rest of the data. Such observations are known as anomalies or outliers. The goal of an anomaly detection algorithm is to discover the real anomalies and avoid falsely labeling normal objects as anomalous. In other words, a good anomaly detector must have a high detection rate and a low false alarm rate. Applications of anomaly detection include the detection of fraud, network intrusions, unusual patterns of disease, and ecosystem disturbances.
Example 1.4 (Credit Card Fraud Detection). A credit card company records the transactions made by every credit card holder, along with personal information such as credit limit, age, annual income, and address. Since the
number of fraudulent cases is relatively small compared to the number of
legitimate transactions, anomaly detection techniques can be applied to build
a profile of legitimate transactions for the users. When a new transaction
arrives, it is compared against the profile of the user. If the characteristics of
the transaction are very different from the previously created profile, then the
transaction is flagged as potentially fraudulent.
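One simple way to realize such a profile-based check is to flag transaction amounts that lie far from the user's historical mean. This is only a sketch with made-up amounts and a single feature; real fraud profiles use many more characteristics than transaction amount.

```python
import statistics

# Hypothetical history of one cardholder's transaction amounts.
history = [23.0, 41.5, 18.2, 30.0, 25.4, 38.9, 27.3, 33.1]

mean = statistics.mean(history)
stdev = statistics.stdev(history)

def is_anomalous(amount, threshold=3.0):
    """Flag a transaction whose amount deviates from the user's profile
    by more than `threshold` standard deviations (a z-score test)."""
    return abs(amount - mean) / stdev > threshold

print(is_anomalous(29.0))   # False: consistent with the profile
print(is_anomalous(950.0))  # True: far outside the profile
```

The threshold controls the trade-off the text describes: lowering it raises the detection rate but also the false alarm rate, since more legitimate transactions fall outside the tightened profile.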
Scope and Organization of the Book
This book introduces the major principles and techniques used in data mining from an algorithmic perspective. A study of these principles and techniques is essential for developing a better understanding of how data mining technology can be applied to various kinds of data. This book also serves as a starting point for readers who are interested in doing research in this field.
We begin the technical discussion of this book with a chapter on data (Chapter 2), which discusses the basic types of data, data quality, preprocessing techniques, and measures of similarity and dissimilarity. Although this material can be covered quickly, it provides an essential foundation for data analysis. Chapter 3, on data exploration, discusses summary statistics, visualization techniques, and On-Line Analytical Processing (OLAP). These techniques provide the means for quickly gaining insight into a data set.
Chapters 4 and 5 cover classification. Chapter 4 provides a foundation
by discussing decision tree classifiers and several issues that are important
to all classification: overfitting, performance evaluation, and the comparison
of different classification models. Using this foundation, Chapter 5 describes
a number of other important classification techniques: rule-based systems, nearest-neighbor classifiers, Bayesian classifiers, artificial neural networks, support vector machines, and ensemble classifiers, which are collections of classifiers. The multiclass and imbalanced class problems are also discussed. These topics can be covered independently.
Association analysis is explored in Chapters 6 and 7. Chapter 6 describes the basics of association analysis: frequent itemsets, association rules, and some of the algorithms used to generate them. Specific types of frequent itemsets (maximal, closed, and hyperclique) that are important for data mining are also discussed, and the chapter concludes with a discussion of evaluation measures for association analysis. Chapter 7 considers a variety of more advanced topics, including how association analysis can be applied to categorical and continuous data or to data that has a concept hierarchy. (A concept hierarchy is a hierarchical categorization of objects, e.g., store items, clothing, shoes, sneakers.) This chapter also describes how association analysis can be extended to find sequential patterns (patterns involving order), patterns in graphs, and negative relationships (if one item is present, then the other is not).
Cluster analysis is discussed in Chapters 8 and 9. Chapter 8 first describes the different types of clusters and then presents three specific clustering techniques: K-means, agglomerative hierarchical clustering, and DBSCAN. This is followed by a discussion of techniques for validating the results of a clustering algorithm. Additional clustering concepts and techniques are explored in Chapter 9, including fuzzy and probabilistic clustering, Self-Organizing Maps (SOM), graph-based clustering, and density-based clustering. There is also a discussion of scalability issues and factors to consider when selecting a clustering algorithm.
The last chapter, Chapter 10, is on anomaly detection. After some basic definitions, several different types of anomaly detection are considered: statistical, distance-based, density-based, and clustering-based. Appendices A through E give a brief review of important topics that are used in portions of the book: linear algebra, dimensionality reduction, statistics, regression, and optimization.
The subject of data mining, while relatively young compared to statistics or machine learning, is already too large to cover in a single book. Selected references to topics that are only briefly covered, such as data quality, are provided in the bibliographic notes of the appropriate chapter. References to topics not covered in this book, such as data mining for streams and privacy-preserving data mining, are provided in the bibliographic notes of this chapter.
Bibliographic Notes
The topic of data mining has inspired many textbooks. Introductory textbooks include those by Dunham [10], Han and Kamber [21], Hand et al. [23], and Roiger and Geatz [36]. Data mining books with a stronger emphasis on business applications include the works by Berry and Linoff [2], Pyle [34], and Parr Rud [33]. Books with an emphasis on statistical learning include those by Cherkassky and Mulier [6], and Hastie et al. [24]. Some books with an emphasis on machine learning or pattern recognition are those by Duda et al. [9], Kantardzic [25], Mitchell [31], Webb [41], and Witten and Frank [42]. There are also some more specialized books: Chakrabarti [4] (web mining), Fayyad et al. [13] (collection of early articles on data mining), Fayyad et al. [11] (visualization), Grossman et al. [18] (science and engineering), Kargupta and Chan [26] (distributed data mining), Wang et al. [40] (bioinformatics), and Zaki and Ho [44] (parallel data mining).
There are several conferences related to data mining. Some of the main conferences dedicated to this field include the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), the IEEE International Conference on Data Mining (ICDM), the SIAM International Conference on Data Mining (SDM), the European Conference on Principles and Practice of Knowledge Discovery in Databases (PKDD), and the Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD). Data mining papers can also be found in other major conferences such as the ACM SIGMOD/PODS conference, the International Conference on Very Large Data Bases (VLDB), the Conference on Information and Knowledge Management (CIKM), the International Conference on Data Engineering (ICDE), the International Conference on Machine Learning (ICML), and the National Conference on Artificial Intelligence (AAAI).
Journal publications on data mining include IEEE Transactions on Knowledge and Data Engineering, Data Mining and Knowledge Discovery, Knowledge and Information Systems, Intelligent Data Analysis, Information Systems, and the Journal of Intelligent Information Systems.
There have been a number of general articles on data mining that define the field or its relationship to other fields, particularly statistics. Fayyad et al. [12] describe data mining and how it fits into the total knowledge discovery process. Chen et al. [5] give a database perspective on data mining. Ramakrishnan and Grama [35] provide a general discussion of data mining and present several viewpoints. Hand [22] describes how data mining differs from statistics, as does Friedman [14]. Lambert [29] explores the use of statistics for large data sets and provides some comments on the respective roles of data mining and statistics. Glymour et al. [16] consider the lessons that statistics may have for data mining. Smyth et al. [38] describe how the evolution of data mining is being driven by new types of data and applications, such as those involving streams, graphs, and text. Emerging applications in data mining are considered by Han et al. [20], and Smyth [37] describes some research challenges in data mining. A discussion of how developments in data mining research can be turned into practical tools is given by Wu et al. [43]. Data mining standards are the subject of a paper by Grossman et al. [17]. Bradley [3] discusses how data mining algorithms can be scaled to large data sets.
With the emergence of new data mining applications have come new challenges that need to be addressed. For instance, concerns about privacy breaches as a result of data mining have escalated in recent years, particularly in application domains such as Web commerce and health care. As a result, there is growing interest in developing data mining algorithms that maintain user privacy. Developing techniques for mining encrypted or randomized data is known as privacy-preserving data mining. Some general references in this area include papers by Agrawal and Srikant [1], Clifton et al. [7], and Kargupta et al. [27]. Vassilios et al. [39] provide a survey.
Recent years have witnessed a growing number of applications that rapidly generate continuous streams of data. Examples of stream data include network traffic, multimedia streams, and stock prices. Several issues must be considered when mining data streams, such as the limited amount of memory available, the need for online analysis, and the change of the data over time. Data mining for stream data has become an important area in data mining. Some selected publications are Domingos and Hulten [8] (classification), Giannella et al. [15] (association analysis), Guha et al. [19] (clustering), Kifer et al. [28] (change detection), Papadimitriou et al. [32] (time series), and Law et al. [30] (dimensionality reduction).
[1] R. Agrawal and R. Srikant. Privacy-preserving data mining. In Proc. of 2000 ACM-SIGMOD Intl. Conf. on Management of Data, pages 439-450, Dallas, Texas, 2000. ACM Press.
[2] M. J. A. Berry and G. Linoff. Data Mining Techniques: For Marketing, Sales, and Customer Relationship Management. Wiley Computer Publishing, 2nd edition, 2004.
[3] P. S. Bradley, J. Gehrke, R. Ramakrishnan, and R. Srikant. Scaling mining algorithms to large databases. Communications of the ACM, 45(8):38-43, 2002.
[4] S. Chakrabarti. Mining the Web: Discovering Knowledge from Hypertext Data. Morgan Kaufmann, San Francisco, CA, 2003.
[5] M.-S. Chen, J. Han, and P. S. Yu. Data Mining: An Overview from a Database Perspective. IEEE Transactions on Knowledge and Data Engineering, 8(6):866-883, 1996.
[6] V. Cherkassky and F. Mulier. Learning from Data: Concepts, Theory, and Methods. Wiley Interscience, 1998.
[7] C. Clifton, M. Kantarcioglu, and J. Vaidya. Defining privacy for data mining. In National Science Foundation Workshop on Next Generation Data Mining, pages 126-133, Baltimore, MD, November 2002.
[8] P. Domingos and G. Hulten. Mining high-speed data streams. In Proc. of the 6th Intl. Conf. on Knowledge Discovery and Data Mining, pages 71-80, Boston, Massachusetts, 2000. ACM Press.
[9] R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification. John Wiley & Sons, Inc., New York, 2nd edition, 2001.
[10] M. H. Dunham. Data Mining: Introductory and Advanced Topics. Prentice Hall, 2002.
[11] U. Fayyad, G. G. Grinstein, and A. Wierse, editors. Information Visualization in Data Mining and Knowledge Discovery. Morgan Kaufmann Publishers, San Francisco, CA, September 2001.
[12] U. M. Fayyad, G. Piatetsky-Shapiro, and P. Smyth. From Data Mining to Knowledge Discovery: An Overview. In Advances in Knowledge Discovery and Data Mining, pages 1-34. AAAI Press, 1996.
[13] U. M. Fayyad, G. Piatetsky-Shapiro, P. Smyth, and R. Uthurusamy, editors. Advances in Knowledge Discovery and Data Mining. AAAI/MIT Press, 1996.
[14] J. H. Friedman. Data Mining and Statistics: What's the Connection? Unpublished.
[15] C. Giannella, J. Han, J. Pei, X. Yan, and P. S. Yu. Mining Frequent Patterns in Data Streams at Multiple Time Granularities. In H. Kargupta, A. Joshi, K. Sivakumar, and Y. Yesha, editors, Next Generation Data Mining, pages 191-212. AAAI/MIT, 2003.
[16] C. Glymour, D. Madigan, D. Pregibon, and P. Smyth. Statistical Themes and Lessons for Data Mining. Data Mining and Knowledge Discovery, 1(1):11-28, 1997.
[17] R. L. Grossman, M. F. Hornick, and G. Meyer. Data mining standards initiatives. Communications of the ACM, 45(8):59-61, 2002.
[18] R. L. Grossman, C. Kamath, P. Kegelmeyer, V. Kumar, and R. Namburu, editors. Data Mining for Scientific and Engineering Applications. Kluwer Academic Publishers, 2001.
[19] S. Guha, A. Meyerson, N. Mishra, R. Motwani, and L. O'Callaghan. Clustering Data Streams: Theory and Practice. IEEE Transactions on Knowledge and Data Engineering, 15(3):515-528, May/June 2003.
[20] J. Han, R. B. Altman, V. Kumar, H. Mannila, and D. Pregibon. Emerging scientific applications in data mining. Communications of the ACM, 45(8):54-58, 2002.
[21] J. Han and M. Kamber. Data Mining: Concepts and Techniques. Morgan Kaufmann Publishers, San Francisco, 2001.
[22] D. J. Hand. Data Mining: Statistics and More? The American Statistician, 52(2), 1998.
[23] D. J. Hand, H. Mannila, and P. Smyth. Principles of Data Mining. MIT Press, 2001.
[24] T. Hastie, R. Tibshirani, and J. H. Friedman. The Elements of Statistical Learning: Data Mining, Inference, Prediction. Springer, New York, 2001.
[25] M. Kantardzic. Data Mining: Concepts, Models, Methods, and Algorithms. Wiley-IEEE Press, Piscataway, NJ, 2003.
[26] H. Kargupta and P. K. Chan, editors. Advances in Distributed and Parallel Knowledge Discovery. AAAI Press, September 2002.
[27] H. Kargupta, S. Datta, Q. Wang, and K. Sivakumar. On the Privacy Preserving Properties of Random Data Perturbation Techniques. In Proc. of the IEEE Intl. Conf. on Data Mining, pages 99-106, Melbourne, Florida, December 2003. IEEE Computer Society.
[28] D. Kifer, S. Ben-David, and J. Gehrke. Detecting Change in Data Streams. In Proc. of the 30th VLDB Conf., pages 180-191, Toronto, Canada, 2004. Morgan Kaufmann.
[29] D. Lambert. What Use is Statistics for Massive Data? In ACM SIGMOD Workshop on Research Issues in Data Mining and Knowledge Discovery, pages 54-62, 2000.
[30] M. H. C. Law, N. Zhang, and A. K. Jain. Nonlinear Manifold Learning for Data
Streams. In Proc. of the SIAM Intl. Conf. on Data Mi.ning, Lake Buena Vista, Florida,
April2004. SIAM.
Mitchell. Mach’ine Learning. McGraw-Hill, Boston, MA, 1997.
A. Brockwell, and C. Faloutsos. Adaptive, unsupervised stream minPapadimitriou,
ing. VLDB Journal, 13(3):222 239.2004.
f33l O. Parr Rud. Data Mi,ning Cookbook: Modeling Data for Marleet’ing, Risk and, Customer
Relationship Management John Wiley & Sons, New York, NY, 2001.
[34] D. Pyle. Business Modeling and, Data Mining. Morgan Kaufmann, san FYancisco, cA,
Ramakrishnan and A. Grama. Data Mining: From Serendipity to Science-Guest
Editors’ Introduction. IEEE Computer, S2(8):34 37, 1999.
[36] R. Roiger and M. Geatz. Data Mzni,ng: A Tutorial Based, Primer. Addison-Wesley,
137] P. Smyth. Breaking out of the Black-Box: Research Challenges in Data Mining.
Proc. of the 2001
Knowledg e Discouerg, 2OOL.
Smyth, D. Pregibon, and C. Faloutsos. Data-driven evolution of data mining algo[38]
rithms. Commun’ications of the ACM, 45(8):33-37, 2002.
[39] V. S. Verykios, E. Bertino, I. N. Fovino, L. P. Provenza, Y. Saygin, and Y. Theodoridis.
State-of-the-art in privacy preserving data mining. SIGMOD Record,,33(1):50-57′ 2004.
[40] J. T. L. Wang, M. J. Zaki, H. Toivonen, and D. tr. Shasha, editors. Data Mining
Bi,oi,nformatics. Springer,
[41] A. R. Webb. Statistical Pattern Recogn’iti’on.John Wiley & Sons, 2nd edition,
[42] I.H. Witten
niques with Jaaa Implernentat’ions. Morgan Kaufmann, 1999.
[43] X. Wu, P. S. Yu, and G. Piatetsky-Shapiro. Data Mining: How Research Meets Practical
Development ? Knowledg e and Inf ormati’on Systems, 5 (2) :248-261, 2003.
l44l M. J. Zaki and C.-T. Ho, editors. Large-ScaleParallel Data Mining. Springer, September
1. Discuss whether or not each of the following activities is a data mining task.
(a) Dividing the customers of a company according to their gender.
(b) Dividing the customers of a company according to their profitability.
(c) Computing the total sales of a company.
(d) Sorting a student database based on student identification numbers.
(e) Predicting the outcomes of tossing a (fair) pair of dice.
(f) Predicting the future stock price of a company using historical records.
(g) Monitoring the heart rate of a patient for abnormalities.
(h) Monitoring seismic waves for earthquake activities.
(i) Extracting the frequencies of a sound wave.
2. Suppose that you are employed as a data mining consultant for an Internet
search engine company. Describe how data mining can help the company by
giving specific examples of how techniques, such as clustering, classification,
association rule mining, and anomaly detection can be applied.
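As a concrete (hypothetical) illustration of one technique named in this exercise, the following sketch groups search queries by term overlap, a toy stand-in for clustering. The queries and the 0.25 similarity threshold are invented for illustration and are not from the text.

```python
def jaccard(a, b):
    """Jaccard similarity between the term sets of two queries."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

def group_queries(queries, threshold=0.25):
    """Greedy single-pass clustering: put each query in the first group
    whose seed query is similar enough, otherwise start a new group."""
    groups = []  # each group is a list of queries; groups[i][0] is its seed
    for q in queries:
        for g in groups:
            if jaccard(q, g[0]) >= threshold:
                g.append(q)
                break
        else:
            groups.append([q])
    return groups

queries = [
    "cheap flights to paris",
    "flights to paris in may",
    "python list comprehension",
    "python list sort",
]
print(group_queries(queries))  # two groups: travel queries, python queries
```

Real search-engine clustering would use far richer representations (click logs, embeddings), but the grouping idea is the same.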
3. For each of the following data sets, explain whether or not data privacy is an
important issue.
(a) Census data collected from 1900-1950.
(b) IP addresses and visit times of Web users who visit your Website.
(c) Images from Earth-orbiting satellites.
(d) Names and addresses of people from the telephone book.
(e) Names and email addresses collected from the Web.
This chapter discusses several data-related issues that are important for successful data mining:
The Type of Data Data sets differ in a number of ways. For example, the attributes used to describe data objects can be of different types (quantitative or qualitative) and data sets may have special characteristics; e.g., some data sets contain time series or objects with explicit relationships to one another. Not surprisingly, the type of data determines which tools and techniques can be used to analyze the data. Furthermore, new research in data mining is often driven by the need to accommodate new application areas and their new types of data.
The Quality of the Data Data is often far from perfect. While most data mining techniques can tolerate some level of imperfection in the data, a focus on understanding and improving data quality typically improves the quality of the resulting analysis. Data quality issues that often need to be addressed include the presence of noise and outliers; missing, inconsistent, or duplicate data; and data that is biased or, in some other way, unrepresentative of the phenomenon or population that the data is supposed to describe.
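These quality issues can be made concrete with a small sketch of three routine checks: missing values, duplicate records, and simple z-score outliers. The records, field names, and the 1.5 threshold are invented assumptions, not from the text.

```python
import statistics

records = [
    {"id": 1, "age": 34},
    {"id": 2, "age": None},   # missing value
    {"id": 3, "age": 33},
    {"id": 4, "age": 35},
    {"id": 4, "age": 35},     # duplicate record
    {"id": 5, "age": 210},    # almost certainly a data entry error
]

# 1. Missing values
missing = [r for r in records if r["age"] is None]

# 2. Duplicate records (same id and age seen before)
seen, duplicates = set(), []
for r in records:
    key = (r["id"], r["age"])
    if key in seen:
        duplicates.append(r)
    seen.add(key)

# 3. Simple z-score outlier check on the non-missing ages
ages = [r["age"] for r in records if r["age"] is not None]
mu, sigma = statistics.mean(ages), statistics.stdev(ages)
outliers = [a for a in ages if abs(a - mu) / sigma > 1.5]

print(len(missing), len(duplicates), outliers)  # 1 1 [210]
```

A z-score cutoff is only one crude heuristic; later chapters treat outlier detection far more carefully.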
Preprocessing Steps to Make the Data More Suitable for Data Mining Often, the raw data must be processed in order to make it suitable for analysis. While one objective may be to improve data quality, other goals focus on modifying the data so that it better fits a specified data mining technique or tool. For example, a continuous attribute, e.g., length, may need to be transformed into an attribute with discrete categories, e.g., short, medium, or long, in order to apply a particular technique. As another example, the number of attributes in a data set is often reduced because many techniques are more effective when the data has a relatively small number of attributes.
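The discretization step just described can be sketched in a few lines; the cut points 5 and 10 are arbitrary assumptions chosen only to illustrate the mapping from a continuous length to the categories short, medium, and long.

```python
def discretize_length(length, cuts=(5.0, 10.0)):
    """Map a continuous length to a discrete category.

    The cut points are assumed values for illustration; in practice they
    would come from domain knowledge or a binning procedure."""
    if length < cuts[0]:
        return "short"
    if length < cuts[1]:
        return "medium"
    return "long"

lengths = [2.3, 7.1, 14.8]
print([discretize_length(x) for x in lengths])  # ['short', 'medium', 'long']
```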
Analyzing Data in Terms of Its Relationships One approach to data analysis is to find relationships among the data objects and then perform the remaining analysis using these relationships rather than the data objects themselves. For instance, we can compute the similarity or distance between pairs of objects and then perform the analysis (clustering, classification, or anomaly detection) based on these similarities or distances. There are many such similarity or distance measures, and the proper choice depends on the type of data and the particular application.
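A minimal sketch of this relationship-based approach computes pairwise Euclidean distances and then works only with those distances, here to find each object's nearest neighbor. The three two-attribute points are invented examples.

```python
import math

points = [(1.0, 1.0), (1.5, 1.2), (9.0, 8.5)]

def euclidean(p, q):
    """Euclidean distance between two equal-length attribute vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def nearest_neighbor(i):
    """Index of the point closest to points[i], excluding itself."""
    others = [j for j in range(len(points)) if j != i]
    return min(others, key=lambda j: euclidean(points[i], points[j]))

# The first two points are close to each other; the third is far from both
print([nearest_neighbor(i) for i in range(len(points))])  # [1, 0, 1]
```

Swapping in a different measure (cosine, Jaccard, edit distance) changes nothing downstream, which is exactly why this decoupling is useful.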
Example 2.1 (An Illustration of Data-Related Issues). To further illustrate the importance of these issues, consider the following hypothetical situation. You receive an email from a medical researcher concerning a project that you are eager to work on.
I’ve attached the data file that I mentioned in my previous email.
Each line contains the information for a single patient and consists
of five fields. We want to predict the last field using the other fields.
I don’t have time to provide any more information about the data
since I’m going out of town for a couple of days, but hopefully that
won’t slow you down too much. And if you don’t mind, could we
meet when I get back to discuss your preliminary results? I might
invite a few other members of my team.
Thanks and see you in a couple of days.
Despite some misgivings, you proceed to analyze the data. The first few
rows of the file are as follows:

012 232 33.5 0 10.7
020 721 16.9 2 210.1
027 165 24.0 0 427.6
A brief look at the data reveals nothing strange. You put your doubts aside and start the analysis. There are only 1000 lines, a smaller data file than you had hoped for, but two days later, you feel that you have made some progress. You arrive for the meeting, and while waiting for others to arrive, you strike up a conversation with a statistician who is working on the project. When she learns that you have also been analyzing the data from the project, she asks if you would mind giving her a brief overview of your results.
Statistician: So, you got the data for all the patients?
Data Miner: Yes. I haven’t had much time for analysis, but I
do have a few interesting results.
Statistician: Amazing. There were so many data issues with
this set of patients that I couldn’t do much.
Data Miner: Oh? I didn’t hear about any possible problems.
Statistician: Well, first there is field 5, the variable we want to
predict. It’s common knowledge among people who analyze
this type of data that results are better if you work with the
log of the values, but I didn’t discover this until later. Was it
mentioned to you?
Data Miner: No.
Statistician: But surely you heard about what happened to field
4? It’s supposed to be measured on a scale from 1 to 10, with
0 indicating a missing value, but because of a data entry
error, all 10’s were changed into 0’s. Unfortunately, since
some of the patients have missing values for this field, it’s
impossible to say whether a 0 in this field is a real 0 or a 10.
Quite a few of the records have that problem.
Data Miner: Interesting. Were there any other problems?
Statistician: Yes, fields 2 and 3 are basically the same, but I
assume that you probably noticed that.
Data Miner: Yes, but these fields were only weak predictors of
field 5.
Statistician: Anyway, given all those problems, I’m surprised
you were able to accomplish anything.
Data Miner: True, but my results are really quite good. Field 1
is a very strong predictor of field 5. I’m surprised that this
wasn’t noticed before.
Statistician: What? Field 1 is just an identification number.
Data Miner: Nonetheless, my results speak for themselves.
Statistician: Oh, no! I just remembered. We assigned ID
numbers after we sorted the records based on field 5. There is
a strong connection, but it’s meaningless. Sorry.
Although this scenario represents an extreme situation, it emphasizes the importance of “knowing your data.” To that end, this chapter will address each of the four issues mentioned above, outlining some of the basic challenges and standard approaches.
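The spurious “prediction” in the dialogue can be reproduced in a few lines: if ID numbers are assigned after sorting the records on the target field, the ID orders the target perfectly while carrying no real information. All values below are invented for illustration.

```python
targets = [4.1, 9.7, 2.3, 7.5, 5.0]  # field 5 for five hypothetical patients

# IDs assigned AFTER sorting on the target, as the statistician describes
ids = [sorted(targets).index(t) + 1 for t in targets]

# The "model": a larger ID always means a larger target value
pairs = sorted(zip(ids, targets))
monotone = all(a[1] < b[1] for a, b in zip(pairs, pairs[1:]))
print(ids, monotone)  # [2, 5, 1, 4, 3] True
```

On this data the ID is a flawless predictor of field 5, yet it would predict nothing for a new patient, which is precisely the pitfall the dialogue warns about.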
Types of Data
A data set can often be viewed as a collection of data objects. Other names for a data object are record, point, vector, pattern, event, case, sample, observation, or entity. In turn, data objects are described by a number of attributes that capture the basic characteristics of an object, such as the mass of a physical object or the time at which an event occurred. Other names for an attribute are variable, characteristic, field, feature, or dimension.
Example 2.2 (Student Information). Often, a data set is a file, in which the objects are records (or rows) in the file and each field (or column) corresponds to an attribute. For example, Table 2.1 shows a data set that consists of student information. Each row corresponds to a student and each column is an attribute that describes some aspect of a student, such as grade point average (GPA) or identification number (ID).
Table 2.1. A sample data set containing student information, with attributes such as Grade Point Average (GPA).
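The record-based view of Example 2.2 might be sketched as follows: each data object (student) is a row, each attribute a named field, and selecting one attribute across all objects yields a column. The student values here are invented, not taken from Table 2.1.

```python
students = [
    {"id": "1034262", "year": "Senior",    "gpa": 3.24},
    {"id": "1052663", "year": "Sophomore", "gpa": 3.51},
    {"id": "1082246", "year": "Freshman",  "gpa": 3.62},
]

# Selecting the GPA attribute across all objects gives a column
gpas = [s["gpa"] for s in students]
print(max(gpas))  # 3.62
```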
Although record-based data sets are common, either in flat files or relational database systems, there are other important types of data sets and systems for storing data. In Section 2.1.2, we will discuss some of the types of data sets that are commonly encountered in data mining. However, we first consider attributes.
Attributes and Measurement

In this section we address the issue of describing data by considering what types of attributes are used to describe data objects. We first define an attribute, then consider what we mean by the type of an attribute, and finally describe the types of attributes that are commonly encountered.
What Is an Attribute?
We start with a more detailed definition of an attribute.
Definition 2.1. An attribute is a property or characteristic of an object that may vary, either from one object to another or from one time to another.

For example, eye color varies from person to person, while the temperature of an object varies over time. Note that eye color is a symbolic attribute with a small number of possible values {brown, black, blue, green, hazel, etc.}, while temperature is a numerical attribute with a potentially unlimited number of values.
At the most basic level, attributes are not about numbers or symbols. However, to discuss and more precisely analyze the characteristics of objects, we assign numbers or symbols to them. To do this in a well-defined way, we need a measurement scale.
Definition 2.2. A measurement scale is a rule (function) that associates
a numerical or symbolic value with an attribute of an object.
Formally, the process of measurement is the application of a measurement scale to associate a value with a particular attribute of a specific object. While this may seem a bit abstract, we engage in the process of measurement all the time. For instance, we step on a bathroom scale to determine our weight, we classify someone as male or female, or we count the number of chairs in a room to see if there will be enough to seat all the people coming to a meeting. In all these cases, the “physical value” of an attribute of an object is mapped to a numerical or symbolic value.
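Definition 2.2 can be illustrated with a small sketch in which two different measurement scales, one numerical and one symbolic, are applied to the same underlying attribute. The objects and the 170 cm cutoff are invented assumptions for illustration.

```python
people = [{"name": "A", "height_cm": 172},
          {"name": "B", "height_cm": 159}]

def height_in_cm(person):
    """A measurement scale: associates a NUMBER with the height attribute."""
    return person["height_cm"]

def height_category(person):
    """A different scale for the same attribute: SYMBOLIC values only."""
    return "tall" if person["height_cm"] >= 170 else "short"

print([height_in_cm(p) for p in people],
      [height_category(p) for p in people])  # [172, 159] ['tall', 'short']
```

The same attribute, height, yields different values under different scales, which is why the next section distinguishes properties of the attribute from properties of the values used to measure it.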
With this background, we can now discuss the type of an attribute, a
concept that is important in determining if a particular data analysis technique
is consistent with a specific type of attribute.
The Type of an Attribute
It should be apparent from the previous discussion that the properties of an attribute need not be the same as the properties of the values used to measure it. In other words, the values used to represent an attribute may have properties that are not properties of the attribute itself, and vice versa. This is illustrated with two examples.
Example 2.3 (Employee Age and ID Number). Two attributes that might be associated with an employee are ID and age (in years). Both of these attributes can be represented as integers. However, while it is reasonable to talk about the average age of an employee, it makes no sense to talk about the average employee ID. Indeed, the only aspect of employees that we want to capture with the ID attribute is that they are distinct. Consequently, the only valid operation for employee IDs is to test whether they are equal. There is no hint of this limitation, however, when integers are used to represent the employee ID attribute. For the age attribute, the properties of the integers used to represent age are very much the properties of the attribute. Even so, the correspondence is not complete since, for example, ages have a maximum, while integers do not.
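Example 2.3's point can be sketched directly: averaging ages is meaningful, while the only valid operation on employee IDs is an equality test, even though both are stored as integers. The employee values are invented.

```python
employees = [{"emp_id": 10354, "age": 29},
             {"emp_id": 20971, "age": 41},
             {"emp_id": 10354, "age": 29}]  # same ID appears twice

# Meaningful: arithmetic on the age attribute
avg_age = sum(e["age"] for e in employees) / len(employees)

# The only meaningful operation on IDs: equality (here, duplicate detection)
same = employees[0]["emp_id"] == employees[2]["emp_id"]

print(avg_age, same)  # 33.0 True
```

Nothing in the integer type itself prevents `sum` over IDs; it is our knowledge of the attribute, not the representation, that rules it out.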
Example 2.4 (Length of Line Segments). Consider Figure 2.1, which shows some objects (line segments) and how the length attribute of these objects can be mapped to numbers in two different ways. Each successive line segment, going from the top to the bottom, is formed by appending the topmost line segment to itself. Thus, the second line segment from the top is formed by appending the topmost line segment to itself twice, the third line segment from the top is formed by appending the topmost line segment to itself three times, and so forth. In a very real (physical) sense, all the line segments are multiples of the first. This fact is captured by the measurements on the right-hand side of the figure, but not by those on the left-hand side. More specifically, the measurement scale on the left-hand side captures only the ordering of the length attribute, while the scale on the right-hand side captures both the ordering and additivity properties. Thus, an attribute can be measured in a way that does not capture all the properties of the attribute.
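Example 2.4 can be checked mechanically: the same segments measured on a ratio-like scale preserve additivity, while an ordinal scale preserves only the ordering. The specific measurement values below are assumptions standing in for the two columns of Figure 2.1.

```python
ratio_scale   = [1, 2, 3, 4, 5]  # segment k measures k copies of the first
ordinal_scale = [1, 3, 4, 7, 8]  # same ordering, but additivity is lost

def is_additive(measurements):
    """True if segment k+1 measures exactly (k+1) times the first segment."""
    return all(m == (k + 1) * measurements[0]
               for k, m in enumerate(measurements))

additive     = is_additive(ratio_scale)
ordered      = ordinal_scale == sorted(ordinal_scale)
additive_ord = is_additive(ordinal_scale)

print(additive, ordered, additive_ord)  # True True False
```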
The type of an attribute should tell us what properties of the attribute are
reflected in the values used to measure it. Knowing the type of an attribute
is important because it tells us which properties of the measured values are
consistent with the underlying properties of the attribute, and therefore, it
allows us to avoid foolish actions, such as computing the average employee ID.
Note that it is common to refer to the type of an attribute as the type of a
measurement scale.
Figure 2.1. A mapping of the lengths of line segments to numbers on two different scales of measurement.
The Different Types of Attributes

A useful (and simple) way to specify the type of an attribute is to identify the properties of numbers that correspond to underlying properties of the attribute. For example, an attribute such as length has many of the properties of numbers. It makes sense to compare and order objects by length, as well as to talk about the differences and ratios of length. The following properties (operations) of numbers are typically used to describe attributes.
1. Distinctness = and ≠
2. Order <, ≤, >, and ≥
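Since the list above is cut off, the following sketch assumes the standard pairing of these operations with attribute types: distinctness applies to all attributes, while order additionally applies to ordinal (and richer) attributes. The mapping is an assumption for illustration, not a quotation from the text.

```python
# Which numeric operations are meaningful for which attribute type
OPS = {
    "nominal": {"=", "!="},                # distinctness only
    "ordinal": {"=", "!=", "<", ">"},      # distinctness plus order
}

def valid(attribute_type, op):
    """True if the operation is meaningful for the given attribute type."""
    return op in OPS.get(attribute_type, set())

print(valid("nominal", "="),   # True: equality test on eye color is fine
      valid("nominal", "<"),   # False: 'blue < brown' is meaningless
      valid("ordinal", "<"))   # True: 'short < medium' makes sense
```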