
UNIT-3: Data Warehousing and Data Mining (DWDM)
Why do we need Data Mining?
The volume of information we have to handle is increasing every day, coming from business transactions, scientific data, sensor data, pictures, videos, etc. So we need a system capable of extracting the essence of the available information and automatically generating reports, views, or summaries of the data for better decision-making.

Why is Data Mining used in Business?


Data mining is used in business to make better managerial decisions by:
 Automatic summarization of data
 Extracting essence of information stored.
 Discovering patterns in raw data.

What is Data Mining?


Data Mining is defined as extracting information from huge sets of data. In other words, we can say that data mining is the procedure of mining knowledge from data. The information or knowledge extracted in this way can be used for any of the following applications −
 Market Analysis
 Fraud Detection
 Customer Retention
 Production Control
 Science Exploration

Some people treat data mining as a synonym for Knowledge Discovery from Data (KDD), while others view data mining as an essential step in the process of knowledge discovery. Here is the list of steps involved in the knowledge discovery process −

 Data Cleaning − In this step, noise and inconsistent data are removed.
 Data Integration − In this step, multiple data sources are combined.
 Data Selection − In this step, data relevant to the analysis task are retrieved from the
database.
 Data Transformation − In this step, data is transformed or consolidated into forms
appropriate for mining by performing summary or aggregation operations.
 Data Mining − In this step, intelligent methods are applied in order to extract data
patterns.
 Pattern Evaluation − In this step, data patterns are evaluated.
 Knowledge Presentation − In this step, knowledge is represented.


What kinds of data can be mined?


1. Flat Files
2. Relational Databases
3. Data Warehouse
4. Transactional Databases
5. Multimedia Databases
6. Spatial Databases
7. Time Series Databases
8. World Wide Web(WWW)

1. Flat Files
 Flat files are defined as data files in text form or binary form with a structure that
can be easily extracted by data mining algorithms.
 Data stored in flat files have no relationships or paths among themselves; for example, if a relational database is exported to flat files, there are no relations between the tables.
 Flat files are described by a data dictionary. E.g., a CSV file.
 Application: Used in Data Warehousing to store data, Used in carrying data to and
from server, etc.
2. Relational Databases
 A Relational database is defined as the collection of data organized in tables with
rows and columns.
 Physical schema in Relational databases is a schema which defines the structure of
tables.
 Logical schema in Relational databases is a schema which defines the relationship
among tables.
 The standard query language for relational databases is SQL.
 Application: Data Mining, ROLAP model, etc.


3. Data Warehouse
 A data warehouse is defined as a collection of data integrated from multiple sources that supports querying and decision making.
 There are three types of data warehouses: Enterprise Data Warehouse, Data Mart, and Virtual Warehouse.
 Two approaches can be used to integrate and update data in a data warehouse: the Query-driven Approach and the Update-driven Approach.
 Application: Business decision making, Data mining, etc.
4. Transactional Databases
 Transactional databases are a collection of data organized by time stamps, date,
etc to represent transaction in databases.
 This type of database has the capability to roll back or undo its operation when a
transaction is not completed or committed.
 Highly flexible system where users can modify information without changing any
sensitive information.
 Follows ACID property of DBMS.
 Application: Banking, Distributed systems, Object databases, etc.
5. Multimedia Databases
 Multimedia databases consist of audio, video, image, and text media.
 They can be stored on Object-Oriented Databases.
 They are used to store complex information in pre-specified formats.
 Application: Digital libraries, video-on demand, news-on demand, musical
database, etc.
6. Spatial Database
 Store geographical information.
 Stores data in the form of coordinates, topology, lines, polygons, etc.
 Application: Maps, Global positioning, etc.
7. Time-series Databases
 Time series databases contain stock exchange data and user logged activities.
 Handles array of numbers indexed by time, date, etc.
 It requires real-time analysis.
 Application: eXtremeDB, Graphite, InfluxDB, etc.
8. WWW
 WWW refers to the World Wide Web, a collection of documents and resources (audio, video, text, etc.) that are identified by Uniform Resource Locators (URLs), linked by HTML pages, and accessed via web browsers over the Internet.
 It is the most heterogeneous repository as it collects data from multiple resources.
 It is dynamic in nature as Volume of data is continuously increasing and changing.
 Application: Online shopping, Job search, Research, studying, etc.

What kinds of Patterns can be mined?


On the basis of the kind of data to be mined, there are two categories of functions involved
in Data Mining −
a) Descriptive
b) Classification and Prediction


a) Descriptive Function
The descriptive function deals with the general properties of data in the database.
Here is the list of descriptive functions −
1. Class/Concept Description
2. Mining of Frequent Patterns
3. Mining of Associations
4. Mining of Correlations
5. Mining of Clusters

1. Class/Concept Description
Class/Concept refers to the data to be associated with classes or concepts. For example, in a company, the classes of items for sale include computers and printers, and concepts of customers include big spenders and budget spenders. Such descriptions of a class or a concept are called class/concept descriptions. These descriptions can be derived in the following two ways −
 Data Characterization − This refers to summarizing the data of the class under study. This class under study is called the Target Class.
 Data Discrimination − This refers to comparing the target class with one or more predefined contrasting groups or classes.

2. Mining of Frequent Patterns


Frequent patterns are those patterns that occur frequently in transactional data. Here is
the list of kind of frequent patterns −
 Frequent Item Set − It refers to a set of items that frequently appear together, for
example, milk and bread.
 Frequent Subsequence − A sequence of patterns that occur frequently such as
purchasing a camera is followed by memory card.
 Frequent Sub Structure − Substructure refers to different structural forms, such as
graphs, trees, or lattices, which may be combined with item-sets or subsequences.
3. Mining of Associations
Associations are used in retail sales to identify items that are frequently purchased together. Association mining refers to the process of uncovering relationships among data and determining association rules.
For example, a retailer generates an association rule showing that 70% of the time milk is sold with bread and only 30% of the time biscuits are sold with bread.

4. Mining of Correlations
It is a kind of additional analysis performed to uncover interesting statistical correlations between associated attribute-value pairs or between two item sets, in order to analyze whether they have a positive, negative, or no effect on each other.

5. Mining of Clusters
Cluster refers to a group of similar kinds of objects. Cluster analysis refers to forming groups of objects that are very similar to each other but are highly different from the objects in other clusters.


b) Classification and Prediction
Classification is the process of finding a model that describes the data classes or
concepts. The purpose is to be able to use this model to predict the class of objects whose class
label is unknown. This derived model is based on the analysis of sets of training data. The
derived model can be presented in the following forms −

1. Classification (IF-THEN) Rules


2. Prediction
3. Decision Trees
4. Mathematical Formulae
5. Neural Networks
6. Outlier Analysis
7. Evolution Analysis

The list of functions involved in these processes is as follows −


1. Classification − It predicts the class of objects whose class label is unknown. Its objective is to find a derived model that describes and distinguishes data classes or concepts. The derived model is based on the analysis of a set of training data, i.e., data objects whose class labels are known.

2. Prediction − It is used to predict missing or unavailable numerical data values rather


than class labels. Regression Analysis is generally used for prediction. Prediction can
also be used for identification of distribution trends based on available data.

3. Decision Trees − A decision tree is a structure that includes a root node, branches, and
leaf nodes. Each internal node denotes a test on an attribute, each branch denotes the
outcome of a test, and each leaf node holds a class label.

4. Mathematical Formulae – Data can be mined by using some mathematical formulas.

5. Neural Networks − Neural networks represent a brain metaphor for information


processing. These models are biologically inspired rather than an exact replica of how
the brain actually functions. Neural networks have been shown to be very promising
systems in many forecasting applications and business classification applications due
to their ability to “learn” from the data.

6. Outlier Analysis − Outliers may be defined as the data objects that do not comply with
the general behavior or model of the data available.

7. Evolution Analysis − Evolution analysis refers to the description and modeling of regularities or trends for objects whose behavior changes over time.


Data Mining Task Primitives


 We can specify a data mining task in the form of a data mining query.
 This query is input to the system.
 A data mining query is defined in terms of data mining task primitives.

Note − These primitives allow us to communicate in an interactive manner with the data
mining system. Here is the list of Data Mining Task Primitives −
 Set of task relevant data to be mined.
 Kind of knowledge to be mined.
 Background knowledge to be used in discovery process.
 Interestingness measures and thresholds for pattern evaluation.
 Representation for visualizing the discovered patterns.

Which Technologies are used in data mining?

1. Statistics:
 Statistics uses mathematical analysis to express representations, model, and summarize empirical data or real-world observations.
 Statistical analysis involves a collection of methods, applicable to large amounts of data, used to draw conclusions and report trends.
2. Machine learning
 Arthur Samuel defined machine learning as a field of study that gives computers the ability to learn without being explicitly programmed.
 When new data are entered into the computer, machine learning algorithms allow the learned model to grow or change accordingly.
 In machine learning, an algorithm is constructed to predict the data from the available
database (Predictive analysis).
 It is related to computational statistics.


The four types of machine learning are:


a. Supervised learning
 It is based on classification.
 It is also called inductive learning. In this method, the desired outputs are included in the training dataset.
b. Unsupervised learning
 Unsupervised learning is based on clustering. Clusters are formed on the basis of
similarity measures and desired outputs are not included in the training dataset.
c. Semi-supervised learning
 Semi-supervised learning includes some desired outputs in the training dataset to generate the appropriate functions. This method avoids the need for a large number of labeled examples (i.e., desired outputs).
d. Active learning
 Active learning is a powerful approach for analyzing data efficiently.
 The algorithm is designed so that it chooses which data points need desired outputs, and a user (oracle) supplies those labels, so the user plays an important role in this type of learning.
3. Information retrieval
 Information retrieval deals with uncertain representations of the semantics of objects (text, images).
For example: finding relevant information in a large document collection.

4. Database systems and data warehouse


 Databases are used for the purpose of recording data, as well as for data warehousing.
 Online Transaction Processing (OLTP) uses databases for day-to-day transaction purposes.
 Data warehouses are used to store historical data, which helps in taking strategic decisions for the business.
 A data warehouse is used for Online Analytical Processing (OLAP), which helps to analyze the data.
5. Pattern Recognition:
Pattern recognition is the automated recognition of patterns and regularities in data.
Pattern recognition is closely related to artificial intelligence and machine learning, together
with applications such as data mining and knowledge discovery in databases (KDD), and is
often used interchangeably with these terms.
6. Visualization:
It is the process of presenting the data in a clear and understandable way, without requiring the reading of raw numbers, by displaying the results in the form of pie charts, bar graphs, statistical representations, and other graphical forms.
7. Algorithms:
To perform data mining techniques, we have to design efficient algorithms.
8. High Performance Computing:
High Performance Computing most generally refers to the practice of aggregating
computing power in a way that delivers much higher performance than one could get out of a
typical desktop computer or workstation in order to solve large problems in science,
engineering, or business.


Are all patterns interesting?

 Typically the answer is No − only a small fraction of the patterns potentially generated would actually be of interest to a given user.
 What makes patterns interesting?
 A pattern is interesting if it is (1) easily understood by humans, (2) valid on new or test data with some degree of certainty, (3) potentially useful, and (4) novel.
 A pattern is also interesting if it validates a hypothesis that the user sought to confirm.
Data Mining Applications
Here is the list of areas where data mining is widely used −
 Financial Data Analysis
 Retail Industry
 Telecommunication Industry
 Biological Data Analysis
 Other Scientific Applications
 Intrusion Detection
Financial Data Analysis
The financial data in the banking and financial industry are generally reliable and of high quality, which facilitates systematic data analysis and data mining. For example:
 Loan payment prediction and customer credit policy analysis.
 Detection of money laundering and other financial crimes.
Retail Industry
Data mining has great application in the retail industry because it collects large amounts of data on sales, customer purchasing history, goods transportation, consumption, and services. It is natural that the quantity of data collected will continue to expand rapidly because of the increasing ease, availability, and popularity of the web.
Telecommunication Industry
Today the telecommunication industry is one of the most rapidly emerging industries, providing various services such as fax, pager, cellular phone, internet messenger, images, e-mail, web data transmission, etc. Due to the development of new computer and communication technologies, the telecommunication industry is expanding quickly. This is the reason why data mining has become very important for helping to understand the business.
Biological Data Analysis
In recent times, we have seen a tremendous growth in the field of biology such as
genomics, proteomics, functional Genomics and biomedical research. Biological data mining
is a very important part of Bioinformatics.
Other Scientific Applications
The applications discussed above tend to handle relatively small and homogeneous data
sets for which the statistical techniques are appropriate. Huge amount of data have been
collected from scientific domains such as geosciences, astronomy, etc.
A large amount of data sets is being generated because of the fast numerical simulations
in various fields such as climate and ecosystem modeling, chemical engineering, fluid
dynamics, etc.


Intrusion Detection
Intrusion refers to any kind of action that threatens the integrity, confidentiality, or availability of network resources. In this world of connectivity, security has become a major issue. The increased usage of the internet and the availability of tools and tricks for intruding into and attacking networks have prompted intrusion detection to become a critical component of network administration.
Major Issues in data mining:
Data mining is a dynamic and fast-expanding field with great strengths. The major issues can be divided into five groups:
a) Mining Methodology
b) User Interaction
c) Efficiency and scalability
d) Diverse Data Types Issues
e) Data mining society
a) Mining Methodology:
It refers to the following kinds of issues −
 Mining different kinds of knowledge in databases − Different users may be interested in different kinds of knowledge. Therefore, it is necessary for data mining to cover a broad range of knowledge discovery tasks.
 Mining knowledge in multidimensional space − When searching for knowledge in large datasets, we can explore the data in multidimensional space.
 Handling noisy or incomplete data − Data cleaning methods are required to handle noise and incomplete objects while mining data regularities. If data cleaning methods are not used, the accuracy of the discovered patterns will be poor.
 Pattern evaluation − Not all of the patterns discovered are interesting; some may represent common knowledge or lack novelty, so interestingness measures are needed to evaluate them.
b) User Interaction:
 Interactive mining of knowledge at multiple levels of abstraction − The data mining
process needs to be interactive because it allows users to focus the search for patterns,
providing and refining data mining requests based on the returned results.
 Incorporation of background knowledge − To guide discovery process and to
express the discovered patterns, the background knowledge can be used. Background
knowledge may be used to express the discovered patterns not only in concise terms
but at multiple levels of abstraction.
 Data mining query languages and ad hoc data mining − Data Mining Query
language that allows the user to describe ad hoc mining tasks, should be integrated
with a data warehouse query language and optimized for efficient and flexible data
mining.
 Presentation and visualization of data mining results − Once the patterns are discovered, they need to be expressed in high-level languages and visual representations. These representations should be easily understandable.


c) Efficiency and scalability


There can be performance-related issues such as the following −
 Efficiency and scalability of data mining algorithms − In order to effectively extract information from the huge amount of data in databases, data mining algorithms must be efficient and scalable.
 Parallel, distributed, and incremental mining algorithms − Factors such as the huge size of databases, the wide distribution of data, and the complexity of data mining methods motivate the development of parallel and distributed data mining algorithms. These algorithms divide the data into partitions which are processed in parallel, and the results from the partitions are then merged. Incremental algorithms update the existing mining results when the database is updated, without mining the entire data again from scratch.
d) Diverse Data Types Issues
 Handling of relational and complex types of data − The database may contain complex data objects, multimedia data objects, spatial data, temporal data, etc. It is not possible for one system to mine all these kinds of data.
 Mining information from heterogeneous databases and global information systems − The data are available at different data sources on a LAN or WAN. These data sources may be structured, semi-structured, or unstructured. Therefore, mining knowledge from them adds challenges to data mining.
e) Data Mining and Society
 Social impacts of data mining – With data mining penetrating our everyday lives, it
is important to study the impact of data mining on society.
 Privacy-preserving data mining – Data mining will help scientific discovery, business management, economic recovery, and security protection, but it also poses the risk of disclosing personal information, so methods that preserve privacy while still producing valid mining results are needed.
 Invisible data mining – we cannot expect everyone in society to learn and master data
mining techniques. More and more systems should have data mining functions built
within so that people can perform data mining or use data mining results simply by
mouse clicking, without any knowledge of data mining algorithms.
Data Objects and Attribute Types:
Data Object:
A data object represents a real-world entity.
Attribute:
An attribute can be seen as a data field that represents a characteristic or feature of a data object. For a customer object, attributes can be customer ID, address, etc. The attribute types can be represented as follows −
1. Nominal Attributes – related to names: The values of a nominal attribute are names of things, some kind of symbols. Values of nominal attributes represent some category or state, and that is why nominal attributes are also referred to as categorical attributes.
Example:
Attribute Values
Colors Black, Green, Brown, Red


2. Binary Attributes: Binary data has only 2 values/states. For Example yes or no,
affected or unaffected, true or false.
i) Symmetric: Both values are equally important (Gender).
ii) Asymmetric: Both values are not equally important (Result).

3. Ordinal Attributes: An ordinal attribute contains values that have a meaningful sequence or ranking (order) between them, but the magnitude between successive values is not actually known; the order of the values shows what is important but does not indicate how much more important it is.
Attribute Values
Grade O, S, A, B, C, D, F

4. Numeric: A numeric attribute is quantitative because it is a measurable quantity, represented in integer or real values. Numeric attributes are of 2 types.
i. An interval-scaled attribute has values whose differences are interpretable, but the attribute does not have a true reference point, or zero point. Data can be added and subtracted on an interval scale but cannot be meaningfully multiplied or divided. Consider temperature in degrees Centigrade: if one day's temperature is twice the value of another day's, we cannot say that the first day is twice as hot as the second.
ii. A ratio-scaled attribute is a numeric attribute with a fixed zero point. If a measurement is ratio-scaled, we can speak of a value as being a multiple (or ratio) of another value. The values are ordered, we can compute the differences between values, and the mean, median, mode, interquartile range, and five-number summary can be given.

5. Discrete: Discrete data have finite values; they can be numerical or categorical. These attributes have a finite or countably infinite set of values.
Example:
Attribute Values
Profession Teacher, Businessman, Peon
ZIP Code 521157, 521301

6. Continuous: Continuous data have an infinite number of possible values and are typically of float type. There can be many values between 2 and 3.
Example:
Attribute Values
Height 5.4, 5.7, 6.2, etc.,
Weight 50, 65, 70, 73, etc.,


Basic Statistical Descriptions of Data:


Basic Statistical descriptions of data can be used to identify properties of the data and
highlight which data values should be treated as noise or outliers.

1. Measuring the central Tendency:


There are many ways to measure the central tendency.
a) Mean: Let x1, x2, ..., xN be a set of N observed values or observations of X. The most common numeric measure of the "center" of a set of data is the (arithmetic) mean:

Mean = x̄ = (x1 + x2 + ... + xN) / N

Sometimes, each value xi in the set may be associated with a weight wi, for i = 1, 2, ..., N. Then the mean is as follows:

Mean = x̄ = (w1x1 + w2x2 + ... + wNxN) / (w1 + w2 + ... + wN)

This is called the weighted arithmetic mean or weighted average.
Median: The median is the middle value among all values, sorted in increasing order.
For an odd number N of values, the median is the ((N + 1)/2)th value.
For an even number N of values, the median is the average of the (N/2)th and ((N/2) + 1)th values.
Mode:
 The mode is another measure of central tendency.
 Datasets with one, two, or three modes are respectively called unimodal, bimodal,
and trimodal.
 A dataset with two or more modes is multimodal. If each value occurs only once, then there is no mode.
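As a quick illustration, the three measures of central tendency can be computed with Python's standard statistics module (a minimal sketch; the sample values are made up):

```python
import statistics

data = [30, 36, 47, 50, 52, 52, 56, 60, 63, 70, 70, 110]

mean = statistics.mean(data)        # arithmetic mean
median = statistics.median(data)    # middle value (average of the two middle values for even N)
modes = statistics.multimode(data)  # all most-frequent values; here the data is bimodal

weights = [1] * len(data)           # with equal weights the weighted mean equals the ordinary mean
weighted_mean = sum(w * x for w, x in zip(weights, data)) / sum(weights)

print(mean, median, modes, weighted_mean)
```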

2. Measuring the Dispersion of data:


 The dispersion of the data can be measured using the range, quantiles, quartiles, percentiles, and the interquartile range.
 The kth percentile of a set of data in numerical order is the value xi having the property that k percent of the data entries lie at or below xi.
 The measures of data dispersion: Range, Five-number summary, Interquartile range
(IQR), Variance and Standard deviation


Five Number Summary:


This contains five values: Minimum, Q1 (the 25th percentile), Median, Q3 (the 75th percentile), and Maximum. These five numbers are represented graphically as a boxplot.
 Boxplot: the data is represented with a box.
 The ends of the box are at the first and third quartiles, i.e., the height of the box is the IQR.
 The median is marked by a line within the box.
 Whiskers: two lines outside the box extend to the Minimum and Maximum.
 To show outliers, the whiskers are extended to the extreme low and high observations only if these values are less than 1.5 * IQR beyond the quartiles.
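A minimal NumPy sketch of the five-number summary and the 1.5 * IQR outlier rule (the sample values are hypothetical); plotting libraries such as matplotlib draw the corresponding boxplot directly from the raw values:

```python
import numpy as np

values = np.array([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34])

minimum = values.min()
q1, median, q3 = np.percentile(values, [25, 50, 75])
maximum = values.max()
iqr = q3 - q1                                   # interquartile range

# Observations beyond 1.5 * IQR from the quartiles are treated as outliers in a boxplot.
outliers = values[(values < q1 - 1.5 * iqr) | (values > q3 + 1.5 * iqr)]

print(minimum, q1, median, q3, maximum, iqr, outliers)
```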

Variance and Standard Deviation:


Let x1, x2, ..., xN be a set of N observed values or observations of X. The variance is

σ² = (1/N) Σ_{i=1..N} (xi − x̄)²,  where x̄ is the mean.

 Standard Deviation (σ) is the square root of the variance σ².


 σ measures spread about the mean and should be used only when the mean is chosen
as the measure of center.
 σ =0 only when there is no spread, that is, when all observations have the same value.
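A short NumPy sketch of the population variance and standard deviation defined above (ddof=0 divides by N, matching the formula; the values are made up):

```python
import numpy as np

x = np.array([30, 36, 47, 50, 52, 52, 56, 60, 63, 70, 70, 110], dtype=float)

variance = np.var(x, ddof=0)    # sigma^2 = (1/N) * sum((x_i - mean)^2)
std_dev = np.sqrt(variance)     # sigma, same as np.std(x, ddof=0)

print(variance, std_dev)
```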

3. Graphical Displays of Basic Statistical data:


There are many types of graphs for the display of data summaries and distributions, such as:
 Bar charts
 Pie charts
 Line graphs
 Boxplot
 Histograms
 Quantile plots, Quantile - Quantile plots
 Scatter plots

The data values can represent as Bar charts, pie charts, Line graphs, etc.


Quantile plots:
 A quantile plot is a simple and effective way to have a first look at a univariate data
distribution.
 Plots quantile information
 For data xi sorted in increasing order, fi indicates that approximately 100·fi% of the data are below or equal to the value xi
 Note that
 the 0.25 quantile corresponds to quartile Q1,
 the 0.50 quantile is the median, and
 the 0.75 quantile is Q3.

Quantile - Quantile plots:


In statistics, a Q-Q plot is a probability plot, a graphical method for comparing two probability distributions by plotting their quantiles against each other.

Histograms or frequency histograms:


 A univariate graphical method
 Consists of a set of rectangles that reflect the counts or frequencies of the classes
present in the given data
 If the attribute is categorical, then one rectangle is drawn for each known value of the attribute, and the resulting graph is more commonly referred to as a bar chart.
 If the attribute is numeric, the term histogram is preferred.


Scatter Plot:
 Scatter plot
 Is one of the most effective graphical methods for determining whether there appears to be a relationship, clusters of points, or outliers between two numerical attributes.
 Each pair of values is treated as a pair of coordinates and plotted as a point in the plane

Data Visualization:
Visualization is the use of computer graphics to create visual images which aid in the
understanding of complex, often massive representations of data.
 Categorization of visualization methods:
a) Pixel-oriented visualization techniques
b) Geometric projection visualization techniques
c) Icon-based visualization techniques
d) Hierarchical visualization techniques
e) Visualizing complex data and relations
a) Pixel-oriented visualization techniques
 For a data set of m dimensions, create m windows on the screen, one for each
dimension
 The m dimension values of a record are mapped to m pixels at the corresponding
positions in the windows
 The colors of the pixels reflect the corresponding values


 To save space and show the connections among multiple dimensions, space filling is
often done in a circle segment

b) Geometric projection visualization techniques


Visualization of geometric transformations and projections of the data
Methods
 Direct visualization
 Scatterplot and scatterplot matrices
 Landscapes
 Projection pursuit technique: Help users find meaningful projections of
multidimensional data
 Prosection views
 Hyperslice
 Parallel coordinates


c) Icon-based visualization techniques


Visualization of the data values as features of icons
Typical visualization methods
 Chernoff Faces
 Stick Figures
General techniques
 Shape coding: Use shape to represent certain information encoding
 Color icons: Use color icons to encode more information
 Tile bars: Use small icons to represent the relevant feature vectors in
document retrieval

d) Hierarchical visualization techniques


Visualization of the data using a hierarchical partitioning into subspaces
Methods
 Dimensional Stacking
 Worlds-within-Worlds
 Tree-Map
 Cone Trees
 InfoCube

(Figures omitted: examples of the InfoCube and Worlds-within-Worlds techniques.)


e) Visualizing complex data and relations


Visualizing non-numerical data: text and social networks
Tag cloud: visualizing user-generated tags
 The importance of tag is represented by font size/color
Besides text data, there are also methods to visualize relationships, such as visualizing social networks.

Similarity and Dissimilarity


Distance or similarity measures are essential for solving many pattern recognition problems such as classification and clustering. Various distance/similarity measures are available in the literature to compare two data distributions. As the name suggests, a similarity measure quantifies how close two distributions (or data objects) are. For multivariate data, more complex summary methods are developed to answer this question.
Similarity Measure
 Numerical measure of how alike two data objects are.
 Often falls between 0 (no similarity) and 1 (complete similarity).
Dissimilarity Measure
 Numerical measure of how different two data objects are.
 Range from 0 (objects are alike) to ∞ (objects are different).
Proximity refers to a similarity or dissimilarity.
Similarity/Dissimilarity for Simple Attributes
Here, p and q are the attribute values for two data objects.


Common Properties of Dissimilarity Measures


Distance, such as the Euclidean distance, is a dissimilarity measure and has some well-known properties:
1. d(p, q) ≥ 0 for all p and q, and d(p, q) = 0 if and only if p = q,
2. d(p, q) = d(q,p) for all p and q,
3. d(p, r) ≤ d(p, q) + d(q, r) for all p, q, and r, where d(p, q) is the distance (dissimilarity)
between points (data objects), p and q.
A distance that satisfies these properties is called a metric. Following is a list of several common distance measures used to compare multivariate data. We will assume that the attributes are all continuous.

a) Euclidean Distance
Assume that we have measurements xik, i = 1, … , N, on variables k = 1, … , p (also
called attributes).
The Euclidean distance between the ith and jth objects is

d(i, j) = sqrt( Σ_{k=1..p} (xik − xjk)² )

for every pair (i, j) of observations.

The weighted Euclidean distance is

d(i, j) = sqrt( Σ_{k=1..p} wk (xik − xjk)² )

If the scales of the attributes differ substantially, standardization is necessary.


b) Minkowski Distance
The Minkowski distance is a generalization of the Euclidean distance.
With the measurements xik, i = 1, ..., N, k = 1, ..., p, the Minkowski distance is

d(i, j) = ( Σ_{k=1..p} |xik − xjk|^λ )^(1/λ)

where λ ≥ 1. It is also called the Lλ metric.


λ = 1 : L1 metric, Manhattan or City-block distance.
λ = 2 : L2 metric, Euclidean distance.
λ → ∞ : L∞ metric, Supremum distance.

Note that λ and p are two different parameters. Dimension of the data matrix remains
finite.

c) Mahalanobis Distance
Let X be an N × p matrix. Then the ith row of X is xi = (xi1, xi2, ..., xip).

The Mahalanobis distance between the ith and jth objects is

d(i, j) = sqrt( (xi − xj) ∑⁻¹ (xi − xj)ᵀ )

where ∑ is the p × p sample covariance matrix.
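A small NumPy sketch of the distance measures above, written directly from the formulas (the two sample vectors and the data matrix used for the covariance are arbitrary):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 0.0, 3.0])

def minkowski(p, q, lam):
    """L-lambda metric; lam=1 gives Manhattan, lam=2 gives Euclidean."""
    return np.sum(np.abs(p - q) ** lam) ** (1.0 / lam)

euclidean = minkowski(x, y, 2)
manhattan = minkowski(x, y, 1)
supremum = np.max(np.abs(x - y))          # limit as lambda -> infinity

def mahalanobis(p, q, cov):
    """Mahalanobis distance given a sample covariance matrix."""
    diff = p - q
    return np.sqrt(diff @ np.linalg.inv(cov) @ diff)

# Covariance estimated from a small, made-up N x p data matrix X.
X = np.array([[1.0, 2.0, 3.0], [4.0, 0.0, 3.0], [2.0, 2.0, 2.0], [3.0, 1.0, 4.0]])
d_mahal = mahalanobis(x, y, np.cov(X, rowvar=False))

print(euclidean, manhattan, supremum, d_mahal)
```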

Common Properties of Similarity Measures


Similarities have some well known properties:

1. s(p, q) = 1 (or maximum similarity) only if p = q,


2. s(p, q) = s(q, p) for all p and q, where s(p, q) is the similarity between data
objects, p and q.
Similarity Between Two Binary Variables
The above similarity or distance measures are appropriate for continuous variables. However,
for binary variables a different approach is necessary.

Simple Matching and Jaccard Coefficients

Let n1,1 be the number of attributes for which p and q are both 1, n1,0 the number for which p is 1 and q is 0, n0,1 the number for which p is 0 and q is 1, and n0,0 the number for which both are 0. Then:

Simple matching coefficient = (n1,1 + n0,0) / (n1,1 + n1,0 + n0,1 + n0,0).

Jaccard coefficient = n1,1 / (n1,1 + n1,0 + n0,1).
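A minimal sketch of both coefficients for two binary vectors (made-up values), following the counts defined above:

```python
def binary_similarity(p, q):
    """Return (simple matching coefficient, Jaccard coefficient) for two 0/1 vectors."""
    n11 = sum(1 for a, b in zip(p, q) if a == 1 and b == 1)
    n10 = sum(1 for a, b in zip(p, q) if a == 1 and b == 0)
    n01 = sum(1 for a, b in zip(p, q) if a == 0 and b == 1)
    n00 = sum(1 for a, b in zip(p, q) if a == 0 and b == 0)
    smc = (n11 + n00) / (n11 + n10 + n01 + n00)
    jaccard = n11 / (n11 + n10 + n01) if (n11 + n10 + n01) else 0.0
    return smc, jaccard

p = [1, 0, 0, 0, 0, 0, 1, 0, 0, 1]
q = [0, 0, 0, 0, 0, 0, 1, 0, 0, 1]
print(binary_similarity(p, q))   # (0.9, 0.666...)
```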



DATA PREPROCESSING
1. Preprocessing
Real-world databases are highly susceptible to noisy, missing, and inconsistent data due to their typically huge size (often several gigabytes or more) and their likely origin from multiple, heterogeneous sources. Low-quality data will lead to low-quality mining results, so we apply data preprocessing techniques.
Data Preprocessing Techniques
* Data cleaning can be applied to remove noise and correct inconsistencies in the data.
* Data integration merges data from multiple sources into a coherent data store, such as a data warehouse.
* Data reduction can reduce the data size by aggregating, eliminating redundant features, or clustering, for instance. These techniques are not mutually exclusive; they may work together.
* Data transformations, such as normalization, may be applied.
Need for preprocessing
 Incomplete, noisy, and inconsistent data are commonplace properties of large real-world databases and data warehouses.
 Incomplete data can occur for a number of reasons:
 Attributes of interest may not always be available
 Relevant data may not be recorded due to misunderstanding, or because of equipment
malfunctions.
 Data that were inconsistent with other recorded data may have been deleted.
 Missing data, particularly for tuples with missing values for some attributes, may
need to be inferred.
 The data collection instruments used may be faulty.
 There may have been human or computer errors occurring at data entry.
 Errors in data transmission can also occur.
 There may be technology limitations, such as limited buffer size for coordinating
synchronized data transfer and consumption.
 Data cleaning routines work to "clean" the data by filling in missing values, smoothing noisy data, identifying or removing outliers, and resolving inconsistencies.
 Data integration is the process of integrating multiple databases, cubes, or files. Some attributes representing a given concept may have different names in different databases, causing inconsistencies and redundancies.
 Data transformation operations, such as normalization and aggregation, are additional data preprocessing procedures that contribute toward the success of the mining process.
 Data reduction obtains a reduced representation of the data set that is much smaller in volume, yet produces the same (or almost the same) analytical results.
2. DATA CLEANING
Real-world data tend to be incomplete, noisy, and inconsistent. Data cleaning (or data cleansing) routines attempt to fill in missing values, smooth out noise while identifying outliers, and correct inconsistencies in the data.
Missing Values
Many tuples have no recorded value for several attributes, such as customer income, so we need methods to fill in the missing values for these attributes.
The following methods are useful for filling in missing values:
1. Ignore the tuple: This is usually done when the class label is missing (assuming the mining task involves classification). This method is not very effective unless the tuple contains several attributes with missing values. It is especially poor when the percentage of missing values per attribute varies considerably.
2. Fill in the missing values manually: This approach is time-consuming and may not be feasible for a large data set with many missing values.
3. Use a global constant to fill in the missing value: Replace all missing attribute values by the same constant, such as a label like "unknown" or -∞.
4. Use the attribute mean to fill in the missing value: For example, suppose that the average income of customers is $56,000. Use this value to replace the missing values for income.
5. Use the most probable value to fill in the missing value: This may be determined with regression, inference-based tools using a Bayesian formalism, or decision tree induction. For example, using the other customer attributes in the data set, a decision tree can be constructed to predict the missing values for income.
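A short pandas sketch of a few of these strategies; the column names and values are hypothetical:

```python
import pandas as pd

df = pd.DataFrame({
    "customer_id": [1, 2, 3, 4],
    "income": [45000, None, 61000, None],
    "class_label": ["low", "high", None, "low"],
})

# 1. Ignore the tuple (drop rows whose class label is missing).
df_labeled = df.dropna(subset=["class_label"])

# 3. Use a global constant to fill in the missing value.
df_const = df.fillna({"class_label": "unknown"})

# 4. Use the attribute mean to fill in the missing value.
df_mean = df.fillna({"income": df["income"].mean()})

print(df_labeled, df_const, df_mean, sep="\n\n")
```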
Noisy Data
Noise is a random error or variance in a measured variable. Noise is removed using data smoothing techniques.
Binning: Binning methods smooth a sorted data value by consulting its "neighborhood", that is, the values around it. The sorted values are distributed into a number of "buckets" or "bins". Because binning methods consult the neighborhood of values, they perform local smoothing.
Sorted data for price (in dollars): 3,7,14,19,23,24,31,33,38.
Example 1: Partition into (equal-frequency) bins:
Bin 1: 3,7,14
Bin 2: 19,23,24
Bin 3: 31,33,38
In the above method the data for price are first sorted and then partitioned into equal-
frequency bins of size 3.
Smoothing by bin means:
Bin 1: 8,8,8
Bin 2: 22,22,22
Bin 3: 34,34,34
In the smoothing by bin means method, each value in a bin is replaced by the mean value of the bin. For example, the mean of the values 3, 7, and 14 in Bin 1 is 8 [(3 + 7 + 14)/3].
Smoothing by bin boundaries:
Bin 1: 3,3,14
Bin 2: 19,24,24
Bin 3: 31,31,38
In smoothing by bin boundaries, the minimum and maximum values in a given bin are identified as the bin boundaries. Each bin value is then replaced by the closest boundary value.

In general, the larger the width, the greater the effect of the smoothing. Alternatively, bins may be equal-width, where the interval range of values in each bin is constant.
Example 2: Remove the noise in the following data using smoothing techniques:
Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34
Partition into equal-frequency (equi-depth) bins:
Bin 1: 4, 8,9,15
Bin 2: 21,21,24,25
Bin 3: 26,28,29,34
Smoothing by bin means:
Bin 1: 9,9,9,9
Bin 2: 23,23,23,23
Bin 3: 29,29,29,29
Smoothing by bin boundaries:
Bin 1: 4, 4,4,15
Bin 2: 21,21,25,25
Bin3: 26,26,26,34
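A minimal Python sketch of equal-frequency binning with smoothing by bin means and by bin boundaries, reproducing Example 1 above:

```python
def equal_frequency_bins(sorted_values, bin_size):
    return [sorted_values[i:i + bin_size] for i in range(0, len(sorted_values), bin_size)]

def smooth_by_means(bins):
    return [[round(sum(b) / len(b))] * len(b) for b in bins]

def smooth_by_boundaries(bins):
    # Replace each value by the closer of the bin's minimum or maximum.
    return [[min(b) if v - min(b) <= max(b) - v else max(b) for v in b] for b in bins]

prices = [3, 7, 14, 19, 23, 24, 31, 33, 38]        # already sorted
bins = equal_frequency_bins(prices, 3)

print(bins)                        # [[3, 7, 14], [19, 23, 24], [31, 33, 38]]
print(smooth_by_means(bins))       # [[8, 8, 8], [22, 22, 22], [34, 34, 34]]
print(smooth_by_boundaries(bins))  # [[3, 3, 14], [19, 24, 24], [31, 31, 38]]
```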
Regression: Data can be smoothed by fitting the data to a function, such as with regression. Linear regression involves finding the "best" line to fit two attributes (or variables), so that one attribute can be used to predict the other. Multiple linear regression is an extension of linear regression where more than two attributes are involved and the data are fit to a multidimensional surface.
Clustering: Outliers may be detected by clustering, where similar values are organized into groups, or "clusters". Intuitively, values that fall outside of the set of clusters may be considered outliers.

Inconsistent Data
Inconsistencies exist in the data stored in transactions. Inconsistencies occur due to errors during data entry, functional dependencies between attributes, and missing values. The inconsistencies can be detected and corrected either manually or with knowledge engineering tools.
Data cleaning as a process
a) Discrepancy detection
b) Data transformations
a) Discrepancy detection
The first step in data cleaning is discrepancy detection. It uses knowledge of metadata and examines the following kinds of rules for detecting discrepancies.
Unique rules − each value of the given attribute must be different from all other values for that attribute.
Consecutive rules − there can be no missing values between the lowest and highest values for the attribute, and all values must also be unique.
Null rules − specify the use of blanks, question marks, special characters, or other strings that may indicate the null condition.
Discrepancy detection Tools:
 Data scrubbing tools − use simple domain knowledge (e.g., knowledge of postal addresses and spell-checking) to detect errors and make corrections in the data.
 Data auditing tools − analyze the data to discover rules and relationships, and detect data that violate such conditions.
b) Data transformations
This is the second step in data cleaning as a process. After detecting discrepancies, we
need to define and apply (a series of) transformations to correct them.
Data Transformations Tools:
 Data migration tools − allow simple transformations to be specified, such as replacing the string "gender" with "sex".
 ETL (Extraction/Transformation/Loading) tools − allow users to specify transformations through a graphical user interface (GUI).
3. Data Integration
Data mining often requires data integration, the merging of data from multiple stores into a coherent data store, as in data warehousing. These sources may include multiple databases, data cubes, or flat files.
Issues in Data Integration
a) Schema integration & object matching.
b) Redundancy.
c) Detection & Resolution of data value conflict
a) Schema Integration & Object Matching
Schema integration and object matching can be tricky because the same entity can be represented in different forms in different tables. This is referred to as the entity identification problem. Metadata can be used to help avoid errors in schema integration. The metadata may also be used to help transform the data.
b) Redundancy:
Redundancy is another important issue. An attribute (such as annual revenue, for instance) may be redundant if it can be "derived" from another attribute or set of attributes. Inconsistencies in attribute or dimension naming can also cause redundancies in the resulting data set. Some redundancies can be detected by correlation analysis and covariance analysis.
For nominal data, we use the χ² (chi-square) test.
For numeric attributes, we can use the correlation coefficient and covariance.
χ² correlation analysis for nominal data:
For nominal data, a correlation relationship between two attributes, A and B, can be discovered by a χ² (chi-square) test. Suppose A has c distinct values, namely a1, a2, a3, ..., ac, and B has r distinct values, namely b1, b2, b3, ..., br. The data tuples are described by a contingency table.
The 2 value is computed as
𝟐
𝒓 𝒐𝒊𝒋−𝒆𝒊𝒋
 2
= 𝒄𝒊=𝟏 𝒋=𝟏 𝒆𝒊𝒋

Where oij is the observed frequency of the joint event (Ai,Bj) and eij is the expected
frequency of (Ai,Bj), which can computed as
𝒄𝒐𝒖𝒏𝒕 𝑨=𝒂𝒊 𝑿𝒄𝒐𝒖𝒏𝒕(𝑩=𝒃𝒋)
eij = 𝒏
For example, consider the following contingency table:

            Male   Female   Total
Fiction      250      200     450
Non_fiction   50     1000    1050
Total        300     1200    1500

e11 = count(male) × count(fiction) / n = (300 × 450) / 1500 = 90
e12 = count(male) × count(non_fiction) / n = (300 × 1050) / 1500 = 210
e21 = count(female) × count(fiction) / n = (1200 × 450) / 1500 = 360
e22 = count(female) × count(non_fiction) / n = (1200 × 1050) / 1500 = 840
Observed (and expected) frequencies:

            Male        Female        Total
Fiction     250 (90)    200 (360)      450
Non_fiction  50 (210)   1000 (840)    1050
Total       300         1200          1500

For 2 computation, we get


(250−90) 2 (50−210) 2 200 −360 2 (1000 −840) 2
2 = + + +
90 210 360 840
= 284.44 + 121.90 + 71.11 + 30.48 = 507.93
For this 2 × 2 table, the degrees of freedom are (2 − 1)(2 − 1) = 1. For 1 degree of freedom, the χ² value needed to reject the hypothesis (of independence) at the 0.001 significance level is 10.828 (from the statistics table). Since our computed value is greater than this, we can conclude that the two attributes are strongly correlated for the given group of people.
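The same worked example as a short NumPy sketch, computing the expected frequencies and the χ² statistic directly from the observed contingency table:

```python
import numpy as np

observed = np.array([[250, 200],     # fiction:     male, female
                     [50, 1000]])    # non_fiction: male, female

n = observed.sum()
row_totals = observed.sum(axis=1, keepdims=True)   # 450, 1050
col_totals = observed.sum(axis=0, keepdims=True)   # 300, 1200

expected = row_totals @ col_totals / n             # [[90, 360], [210, 840]]
chi_square = ((observed - expected) ** 2 / expected).sum()

print(expected)
print(chi_square)   # about 507.93 > 10.828, so the attributes are correlated
```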
Correlation Coefficient for Numeric Data:
For numeric attributes, we can evaluate the correlation between two attributes, A and B, by computing the correlation coefficient:

rA,B = Σ_{i=1..n} (ai − Ā)(bi − B̄) / (n σA σB) = ( Σ_{i=1..n} ai·bi − n·Ā·B̄ ) / (n σA σB)

where n is the number of tuples, Ā and B̄ are the respective mean values, and σA and σB are the respective standard deviations of A and B.

The covariance between A and B is defined as

Cov(A, B) = Σ_{i=1..n} (ai − Ā)(bi − B̄) / n
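A quick NumPy sketch for two numeric attributes (the sample arrays are hypothetical); note that np.cov divides by n − 1 by default, while the formula above divides by n:

```python
import numpy as np

a = np.array([6.0, 5.0, 4.0, 3.0, 2.0])
b = np.array([20.0, 10.0, 14.0, 5.0, 5.0])

cov_population = np.mean((a - a.mean()) * (b - b.mean()))   # divides by n, as in the formula above
r = cov_population / (a.std() * b.std())                    # Pearson correlation coefficient

print(cov_population, r)
print(np.corrcoef(a, b)[0, 1])   # same correlation computed by NumPy
```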

c) Detection and Resolution of Data Value Conflicts.


A third important issue in data integration is the detection and resolution of data value conflicts. For example, for the same real-world entity, attribute values from different sources may differ. This may be due to differences in representation, scaling, or encoding.
For instance, a weight attribute may be stored in metric units in one system and British imperial units in another. For a hotel chain, the price of rooms in different cities may involve not only different currencies but also different services (such as free breakfast) and taxes. An attribute in one system may be recorded at a lower level of abstraction than the "same" attribute in another.
Careful integration of the data from multiple sources can help to reduce and avoid redundancies and inconsistencies in the resulting data set. This can help to improve the accuracy and speed of the subsequent mining process.

4. Data Reduction:
Obtain a reduced representation of the data set that is much smaller in volume but yet
produces the same (or almost the same) analytical results.
Why data reduction? — A database/data warehouse may store terabytes of data.
Complex data analysis may take a very long time to run on the complete data set.


Data reduction strategies


4.1.Data cube aggregation
4.2.Attribute Subset Selection
4.3.Numerosity reduction — e.g., fit data into models
4.4.Dimensionality reduction - Data Compression

Data cube aggregation:


For example, suppose the data consist of AllElectronics sales per quarter for the years 2014 to 2017. You are, however, interested in the annual sales rather than the totals per quarter. Thus, the data can be aggregated so that the resulting data summarize the total sales per year instead of per quarter.

Quarterly sales:

Quarter     2014  2015  2016  2017
Quarter 1    200   210   320   230
Quarter 2    400   440   480   420
Quarter 3    480   480   540   460
Quarter 4    560   580   680   640

Aggregated annual sales:

Year   Sales
2014    1640
2015    1710
2016    2020
2017    1750
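A pandas sketch of the same roll-up from quarterly to annual sales (the DataFrame simply mirrors the table above):

```python
import pandas as pd

sales = pd.DataFrame({
    "year":    [2014] * 4 + [2015] * 4 + [2016] * 4 + [2017] * 4,
    "quarter": ["Q1", "Q2", "Q3", "Q4"] * 4,
    "sales":   [200, 400, 480, 560,
                210, 440, 480, 580,
                320, 480, 540, 680,
                230, 420, 460, 640],
})

# Aggregate (roll up) from quarter level to year level.
annual = sales.groupby("year", as_index=False)["sales"].sum()
print(annual)    # 2014: 1640, 2015: 1710, 2016: 2020, 2017: 1750
```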

Attribute Subset Selection


Attribute subset selection reduces the data set size by removing irrelevant or redundant
attributes (or dimensions). The goal of attribute subset selection is to find a minimum set of
attributes. It reduces the number of attributes appearing in the discovered patterns, helping to
make the patterns easier to understand.
For n attributes, there are 2^n possible subsets. An exhaustive search for the optimal subset of attributes can be prohibitively expensive, especially as n and the number of data classes increase. Therefore, heuristic methods that explore a reduced search space are commonly used for attribute subset selection. These methods are typically greedy in that, while searching through attribute space, they always make what looks to be the best choice at the time. Their strategy is to make a locally optimal choice in the hope that this will lead to a globally optimal solution. Many other attribute evaluation measures can be used, such as the information gain measure used in building decision trees for classification.

Techniques for heuristic methods of attribute sub set selection


 Stepwise forward selection
 Stepwise backward elimination
 Combination of forward selection and backward elimination
 Decision tree induction

1. Stepwise forward selection: The procedure starts with an empty set of attributes as the
reduced set. The best of original attributes is determined and added to the reduced set. At each
subsequent iteration or step, the best of the remaining original attributes is added to the set.
2. Stepwise backward elimination: The procedure starts with full set of attributes. At each step,
it removes the worst attribute remaining in the set.
3. Combination of forward selection and backward elimination: The stepwise forward
selection and backward elimination methods can be combined so that, at each step, the
procedure selects the best attribute and removes the worst from among the remaining attributes.
4. Decision tree induction: Decision tree induction constructs a flowchart-like structure where each internal node denotes a test on an attribute, each branch corresponds to an outcome of the test, and each leaf node denotes a class prediction. At each node, the algorithm chooses the "best" attribute to partition the data into individual classes. A tree is constructed from the given data. All attributes that do not appear in the tree are assumed to be irrelevant; the set of attributes appearing in the tree form the reduced subset of attributes. A threshold measure is used as the stopping criterion. (A sketch of the greedy forward-selection loop is given after this list.)
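A minimal sketch of the greedy forward-selection loop; score(subset) stands for any attribute evaluation measure (e.g., cross-validated accuracy or information gain) and is assumed to be supplied by the caller:

```python
def forward_selection(attributes, score, max_attrs=None):
    """Stepwise forward selection: start empty, repeatedly add the attribute that
    most improves the score, and stop when no attribute improves it."""
    selected, remaining = [], list(attributes)
    best_score = float("-inf")
    while remaining and (max_attrs is None or len(selected) < max_attrs):
        candidate, candidate_score = None, best_score
        for attr in remaining:
            s = score(selected + [attr])
            if s > candidate_score:
                candidate, candidate_score = attr, s
        if candidate is None:        # no attribute improves the current score
            break
        selected.append(candidate)
        remaining.remove(candidate)
        best_score = candidate_score
    return selected
```

Stepwise backward elimination is the mirror image: start from the full set and repeatedly remove the attribute whose removal hurts the score the least.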

Numerosity Reduction:
Numerosity reduction is used to reduce the data volume by choosing alternative, smaller
forms of the data representation
Techniques for Numerosity reduction:
 Parametric - In this model only the data parameters need to be stored, instead of the
actual data. (e.g.,) Log-linear models, Regression

 Nonparametric – This method stores reduced representations of data include
histograms, clustering, and sampling

Parametric model
1. Regression
 Linear regression
 In linear regression, the data are modeled to fit a straight line. For example, a random variable Y (called a response variable) can be modeled as a linear function of another random variable X (called a predictor variable), with the equation Y = αX + β.
 The variance of Y is assumed to be constant. The coefficients α and β (called regression coefficients) specify the slope of the line and the Y-intercept, respectively (see the sketch after this list).
 Multiple- linear regression
 Multiple linear regression is an extension of (simple) linear regression, allowing a
response variable Y, to be modeled as a linear function of two or more predictor
variables.
2. Log-Linear Models
 Log-Linear Models can be used to estimate the probability of each point in a
multidimensional space for a set of discretized attributes, based on a smaller subset
of dimensional combinations.
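A minimal NumPy sketch of the regression case described above: instead of storing all the (X, Y) pairs, only the fitted coefficients α and β are kept (the sample data is made up):

```python
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])     # predictor variable
Y = np.array([2.1, 4.3, 5.9, 8.2, 9.8])     # response variable

alpha, beta = np.polyfit(X, Y, deg=1)       # least-squares fit of Y = alpha*X + beta

# The two coefficients replace the raw data; values are reconstructed on demand.
Y_hat = alpha * X + beta
print(alpha, beta, Y_hat)
```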
Nonparametric Model
1. Histograms
A histogram for an attribute A partitions the data distribution of A into disjoint subsets,
or buckets. If each bucket represents only a single attribute-value/frequency pair, the buckets
are called singleton buckets.
Ex: The following data are a list of prices of commonly sold items at AllElectronics. The numbers have been sorted:
1, 1, 5, 5, 5, 5, 5, 8, 8, 10, 10, 10, 10, 12, 14, 14, 14, 15, 15, 15, 15, 15, 18, 18, 18, 18, 18, 18, 18, 18, 20, 20, 20, 20, 20, 20, 21, 21, 21, 21, 21, 25, 25, 25, 25, 25, 28, 28, 30, 30, 30

There are several partitioning rules including the following:


Equal-width: The width of each bucket range is uniform

 Equal-frequency (or equi-depth): the frequency of each bucket is constant (each bucket holds roughly the same number of contiguous data samples)
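A NumPy sketch contrasting the two partitioning rules on the price data listed above (the bucket counts are chosen for illustration):

```python
import numpy as np

prices = np.array([1, 1, 5, 5, 5, 5, 5, 8, 8, 10, 10, 10, 10, 12, 14, 14, 14,
                   15, 15, 15, 15, 15, 18, 18, 18, 18, 18, 18, 18, 18,
                   20, 20, 20, 20, 20, 20, 21, 21, 21, 21, 21,
                   25, 25, 25, 25, 25, 28, 28, 30, 30, 30])

# Equal-width: bucket ranges of uniform width (3 buckets of width 10).
counts, edges = np.histogram(prices, bins=3, range=(1, 31))
print(edges, counts)

# Equal-frequency (equi-depth): bucket boundaries at quantiles, so the counts
# are roughly constant (ties in the data can unbalance them somewhat).
eq_freq_edges = np.quantile(prices, [0.0, 1/3, 2/3, 1.0])
print(eq_freq_edges, np.histogram(prices, bins=eq_freq_edges)[0])
```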

2. Clustering
Clustering techniques consider data tuples as objects. They partition the objects into
groups or clusters, so that objects within a cluster are similar to one another and dissimilar to
objects in other clusters. Similarity is defined in terms of how close the objects are in space,
based on a distance function. The quality of a cluster may be represented by its diameter, the
maximum distance between any two objects in the cluster. Centroid distance is an alternative
measure of cluster quality and is defined as the average distance of each cluster object from the
cluster centroid.
3. Sampling:
Sampling can be used as a data reduction technique because it allows a large data set to be represented by a much smaller random sample (or subset) of the data. Suppose that a large data set D contains N tuples. One possible sample is a Simple Random Sample WithOut Replacement (SRSWOR) of size n: this is created by drawing n of the N tuples from D (n < N), where the probability of drawing any tuple in D is 1/N, i.e., all tuples are equally likely to be sampled.
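A tiny sketch of simple random sampling without replacement using Python's standard library; the list D stands in for the N tuples of the data set:

```python
import random

D = list(range(1, 101))      # pretend these are the N = 100 tuples of data set D
n = 10

srswor = random.sample(D, n)                     # without replacement: no tuple drawn twice
srswr = [random.choice(D) for _ in range(n)]     # with replacement, shown for comparison

print(srswor)
print(srswr)
```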


Dimensionality Reduction:
In dimensionality reduction, data encoding or transformations are applied so as to obtain a reduced or "compressed" representation of the original data.
Dimension Reduction Types
 Lossless - If the original data can be reconstructed from the compressed data without any
loss of information
 Lossy - If the original data can be reconstructed from the compressed data with loss of
information, then the data reduction is called lossy.
Effective methods in lossy dimensional reduction
a) Wavelet transforms
b) Principal components analysis.
a) Wavelet transforms:
The discrete wavelet transform (DWT) is a linear signal processing technique that, when
applied to a data vector, transforms it to a numerically different vector, of wavelet coefficients.
The two vectors are of the same length. When applying this technique to data reduction, we consider each tuple as an n-dimensional data vector, that is, X = (x1, x2, ..., xn), depicting n measurements made on the tuple from n database attributes.
For example, all wavelet coefficients larger than some user-specified threshold can be retained, and all other coefficients are set to 0. The resulting data representation is therefore very sparse, so operations that can take advantage of data sparsity are computationally very fast if performed in wavelet space.
The number next to a wavelet name is the number of vanishing moments of the wavelet; this is a set of mathematical relationships that the coefficients must satisfy and is related to the number of coefficients.

1. The length, L, of the input data vector must be an integer power of 2. This condition
can be met by padding the data vector with zeros as necessary (L >=n).
2. Each transform involves applying two functions
 The first applies some data smoothing, such as a sum or weighted average.
 The second performs a weighted difference, which acts to bring out the detailed
features of data.
3. The two functions are applied to pairs of data points in X, that is, to all pairs of
measurements (X2i , X2i+1). This results in two sets of data of length L/2. In general,

these represent a smoothed or low-frequency version of the input data and high
frequency content of it, respectively.
4. The two functions are recursively applied to the sets of data obtained in the previous
loop, until the resulting data sets obtained are of length 2.
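A minimal sketch of the pairwise smoothing/differencing idea using the (unnormalized) Haar wavelet, the simplest case; the input length is assumed to already be a power of 2:

```python
def haar_dwt(x):
    """Recursively apply pairwise averaging (smoothing) and differencing to x.
    Returns the final average followed by the detail coefficients."""
    if len(x) == 1:
        return list(x)
    averages = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]   # low-frequency part
    details = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]    # high-frequency part
    return haar_dwt(averages) + details

data = [2, 2, 0, 2, 3, 5, 4, 4]      # length 8 = 2^3 (pad with zeros if necessary)
print(haar_dwt(data))                # compact list of wavelet coefficients
```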

b) Principal components analysis


Suppose that the data to be reduced consist of tuples or data vectors described by n attributes or dimensions. Principal components analysis, or PCA (also called the Karhunen-Loeve, or K-L, method), searches for k n-dimensional orthogonal vectors that can best be used to represent the data, where k <= n. PCA combines the essence of the attributes by creating an alternative, smaller set of variables. The initial data can then be projected onto this smaller set.
The basic procedure is as follows:
 The input data are normalized.
 PCA computes k orthonormal vectors that provide a basis for the normalized input
data. These are unit vectors that each point in a direction perpendicular to the others.
 The principal components are sorted in order of decreasing significance or strength.

For example, PCA may find principal components Y1 and Y2 for a given set of data originally mapped to the axes X1 and X2. This information helps identify groups or patterns within the data. The sorted axes are such that the first axis shows the most variance among the data, the second axis shows the next highest variance, and so on.
 The size of the data can be reduced by eliminating the weaker components.
Advantage of PCA
 PCA is computationally inexpensive
 Multidimensional data of more than two dimensions can be handled by reducing the
problem to two dimensions.
 Principal components may be used as inputs to multiple regression and cluster analysis.
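
A minimal sketch of the PCA procedure above, assuming NumPy is available (the function name is an assumption, not from the notes): normalize the input, compute orthonormal component vectors, sort them by decreasing variance, and project the data onto the k strongest components.

import numpy as np

def pca(X, k):
    X = np.asarray(X, dtype=float)
    Xc = X - X.mean(axis=0)                    # normalize (center) the input data
    cov = np.cov(Xc, rowvar=False)             # covariance matrix of the n attributes
    eigvals, eigvecs = np.linalg.eigh(cov)     # orthonormal basis vectors
    order = np.argsort(eigvals)[::-1]          # sort by decreasing significance (variance)
    components = eigvecs[:, order[:k]]         # keep only the k strongest components
    return Xc @ components                     # project the data onto the smaller set

data = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9],
                 [1.9, 2.2], [3.1, 3.0], [2.3, 2.7]])
print(pca(data, k=1))   # each 2-D tuple is reduced to a single principal component score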


5. Data Transformation and Discretization


Data transformation is the process of converting data from one format or structure into
another format or structure.
In data transformation, the data are transformed or consolidated into forms appropriate for
mining. Strategies for data transformation include the following:
1. Smoothing, which works to remove noise from the data. Techniques include binning,
regression, and clustering.
2. Attribute construction (or feature construction), where new attributes are constructed and
added from the given set of attributes to help the mining process.
3. Aggregation, where summary or aggregation operations are applied to the data. For
example, the daily sales data may be aggregated so as to compute monthly and annual total
amounts. This step is typically used in constructing a data cube for data analysis at multiple
abstraction levels (see the sketch after this list).
4. Normalization, where the attribute data are scaled so as to fall within a smaller range, such
as -1.0 to 1.0, or 0.0 to 1.0.
5. Discretization, where the raw values of a numeric attribute (e.g., age) are replaced by
interval labels (e.g., 0–10, 11–20, etc.) or conceptual labels (e.g., youth, adult, senior). The
labels, in turn, can be recursively organized into higher-level concepts, resulting in a concept
hierarchy for the numeric attribute. Figure 3.12 shows a concept hierarchy for the attribute
price. More than one concept hierarchy can be defined for the same attribute to accommodate
the needs of various users.
6. Concept hierarchy generation for nominal data, where attributes such as street can be
generalized to higher-level concepts, like city or country. Many hierarchies for nominal
attributes are implicit within the database schema and can be automatically defined at the
schema definition level.
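
As a small illustration of strategy 3 (aggregation), and assuming pandas is available, daily sales can be rolled up to monthly and annual totals; the column names and values below are hypothetical:

import pandas as pd

daily = pd.DataFrame({
    "date": pd.date_range("2024-01-01", periods=365, freq="D"),
    "sales": [100] * 365,                    # hypothetical daily sales amounts
})

# Aggregate daily amounts into monthly and annual totals.
monthly = daily.groupby(daily["date"].dt.to_period("M"))["sales"].sum()
annual = daily.groupby(daily["date"].dt.to_period("Y"))["sales"].sum()
print(monthly.head())
print(annual)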

5.1 Data Transformation by Normalization:


The measurement unit used can affect the data analysis. For example, changing
measurement units from meters to inches for height, or from kilograms to pounds for weight,
may lead to very different results.
For distance-based methods, normalization helps prevent attributes with initially large
ranges (e.g., income) from outweighing attributes with initially smaller ranges (e.g., binary
attributes). It is also useful when given no prior knowledge of the data.
There are many methods for data normalization. We study min-max normalization, z-score
normalization, and normalization by decimal scaling. For our discussion, let A be a
numeric attribute with n observed values, v1, v2, ..., vn.


a) Min-max normalization performs a linear transformation on the original data. Suppose that
minA and maxA are the minimum and maximum values of an attribute, A. Min-max normalization
maps a value, vi, of A to vi' in the range [new_minA, new_maxA] by computing

    vi' = ((vi - minA) / (maxA - minA)) * (new_maxA - new_minA) + new_minA
Min-max normalization preserves the relationships among the original data values. It will
encounter an "out-of-bounds" error if a future input case for normalization falls outside of the
original data range for A.
Example: Min-max normalization. Suppose that the minimum and maximum values for the
attribute income are $12,000 and $98,000, respectively. We would like to map income
to the range [0.0, 1.0]. By min-max normalization, a value of $73,600 for income is transformed
to

    (73,600 - 12,000) / (98,000 - 12,000) * (1.0 - 0.0) + 0.0 = 0.716

b) Z-Score Normalization
The values for an attribute, A, are normalized based on the mean (i.e., average) and standard
deviation of A. A value, vi, of A is normalized to vi' by computing

    vi' = (vi - Ā) / σA

where Ā and σA are the mean and standard deviation, respectively, of attribute A.
Example: z-score normalization. Suppose that the mean and standard deviation of the values
for the attribute income are $54,000 and $16,000, respectively. With z-score normalization, a
value of $73,600 for income is transformed to

    (73,600 - 54,000) / 16,000 = 1.225

c) Normalization by Decimal Scaling:


Normalization by decimal scaling normalizes by moving the decimal point of values of
attribute A. The number of decimal points moved depends on the maximum absolute value of
A. A value, vi, of A is normalized to vi' by computing

    vi' = vi / 10^j

where j is the smallest integer such that max(|vi'|) < 1.

Example: Decimal scaling. Suppose that the recorded values of A range from -986 to 917.
The maximum absolute value of A is 986. To normalize by decimal scaling, we therefore divide
each value by 1,000 (i.e., j = 3) so that -986 normalizes to -0.986 and 917 normalizes to 0.917.
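
The three normalization methods can be sketched in a few lines of Python with NumPy (an illustration only; the function names are assumptions, not from the notes):

import numpy as np

def min_max(v, new_min=0.0, new_max=1.0):
    v = np.asarray(v, dtype=float)
    return (v - v.min()) / (v.max() - v.min()) * (new_max - new_min) + new_min

def z_score(v):
    v = np.asarray(v, dtype=float)
    return (v - v.mean()) / v.std()

def decimal_scaling(v):
    v = np.asarray(v, dtype=float)
    j = int(np.floor(np.log10(np.abs(v).max()))) + 1   # smallest j with max(|v'|) < 1
    return v / (10 ** j)

income = np.array([12000.0, 54000.0, 73600.0, 98000.0])
print(min_max(income))                              # 73,600 maps to roughly 0.716
print(decimal_scaling(np.array([-986.0, 917.0])))   # -0.986 and 0.917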


5.2. Data Discretization


a) Discretization by binning:
Binning is a top-down splitting technique based on a specified number of bins.
For example, attribute values can be discretized by applying equal-width or equal-frequency
binning, and then replacing each bin value by the bin mean or median, as in smoothing
by bin means or smoothing by bin medians, respectively. These techniques can be applied
recursively to the resulting partitions to generate concept hierarchies.
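
A minimal sketch of equal-width binning with smoothing by bin means (illustrative only; the helper name is an assumption):

import numpy as np

def smooth_by_bin_means(values, n_bins):
    v = np.asarray(values, dtype=float)
    edges = np.linspace(v.min(), v.max(), n_bins + 1)            # equal-width bin boundaries
    bins = np.clip(np.digitize(v, edges[1:-1]), 0, n_bins - 1)   # bin index of each value
    means = np.array([v[bins == b].mean() for b in range(n_bins)])
    return means[bins]                                           # replace each value by its bin mean

prices = [4, 8, 15, 21, 21, 24, 25, 28, 34]
print(smooth_by_bin_means(prices, n_bins=3))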

b) Discretization by Histogram Analysis:


Like binning, histogram analysis is an unsupervised discretization technique because
it does not use class information. A histogram partitions the values of an attribute, A, into
disjoint ranges called buckets or bins.
In an equal-width histogram, for example, the values are partitioned into equal-size
partitions or ranges (e.g., for price, where each bucket has a width of $10). With an equal-
frequency histogram, the values are partitioned so that, ideally, each partition contains the same
number of data tuples.
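
An equal-frequency partitioning can be sketched with quantiles (an assumption for illustration, not the notes' own example):

import numpy as np

def equal_frequency_edges(values, n_buckets):
    v = np.asarray(values, dtype=float)
    # Bucket boundaries chosen so that, ideally, each bucket holds the same number of tuples.
    return np.quantile(v, np.linspace(0.0, 1.0, n_buckets + 1))

prices = [4, 8, 15, 21, 21, 24, 25, 28, 34]
print(equal_frequency_edges(prices, n_buckets=3))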

c) Discretization by Cluster, Decision Tree, and Correlation Analyses


Cluster analysis is a popular data discretization method. A clustering algorithm can be
applied to discretize a numeric attribute, A, by partitioning the values of A into clusters or
groups.
Techniques to generate decision trees for classification can be applied to discretization.
Such techniques employ a top-down splitting approach. Unlike the other methods mentioned so
far, decision tree approaches to discretization are supervised; that is, they make use of class label
information.
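
A sketch of cluster-based discretization, assuming scikit-learn is available; the cluster assigned to each value acts as its interval label (the sample values are hypothetical):

import numpy as np
from sklearn.cluster import KMeans

ages = np.array([13, 15, 16, 19, 20, 21, 25, 30, 33, 35, 40, 45, 52, 70]).reshape(-1, 1)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(ages)   # partition values of attribute A
print(km.labels_)                                # cluster (interval) label for each value
print(np.sort(km.cluster_centers_.ravel()))      # one representative value per interval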

Concept Hierarchy Generation for Nominal Data


Nominal attributes have a finite (but possibly large) number of distinct values, with no
ordering among the values. Examples include geographic location, job category, and item type.
Manual definition of concept hierarchies can be a tedious and time-consuming task for
a user or a domain expert. Fortunately, many hierarchies are implicit within the database schema
and can be automatically defined at the schema definition level. The concept hierarchies can be
used to transform the data into multiple levels of granularity.
1. Specification of a partial ordering of attributes explicitly at the schema level by
users or experts: A user or expert can easily define a concept hierarchy by specifying
a partial or total ordering of the attributes at the schema level.
2. Specification of a portion of a hierarchy by explicit data grouping: In a large
database, it is unrealistic to define an entire concept hierarchy by explicit value
enumeration. For example, after specifying that province and country form a hierarchy
at the schema level, a user could define some intermediate levels manually.
3. Specification of a set of attributes, but not of their partial ordering: A user may
specify a set of attributes forming a concept hierarchy, but omit to explicitly
state their partial ordering. The system can then try to automatically generate the
attribute ordering so as to construct a meaningful concept hierarchy (see the sketch
after this list).
4. Specification of only a partial set of attributes: Sometimes a user can be careless
when defining a hierarchy, or have only a vague idea about what should be included
in a hierarchy. Consequently, the user may have included only a small subset of the
relevant attributes in the hierarchy specification.
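
For case 3, one common heuristic (used here as a hedged sketch, not a definitive algorithm) is to place the attribute with the most distinct values at the lowest level of the hierarchy and the one with the fewest at the top. All names and sample rows below are hypothetical:

def order_by_distinct_values(table, attributes):
    # Count distinct values per attribute; more distinct values -> lower hierarchy level.
    counts = {a: len({row[a] for row in table}) for a in attributes}
    return sorted(attributes, key=lambda a: counts[a], reverse=True)

rows = [
    {"street": "MG Road",     "city": "Ghaziabad", "province": "Uttar Pradesh", "country": "India"},
    {"street": "GT Road",     "city": "Ghaziabad", "province": "Uttar Pradesh", "country": "India"},
    {"street": "Mall Road",   "city": "Kanpur",    "province": "Uttar Pradesh", "country": "India"},
    {"street": "Park Street", "city": "Kolkata",   "province": "West Bengal",   "country": "India"},
]
# Prints ['street', 'city', 'province', 'country'] (lowest level first).
print(order_by_distinct_values(rows, ["country", "city", "street", "province"]))

This ordering is only a heuristic; a user should still be able to review and adjust the generated hierarchy.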

 Data cleaning routines attempt to fill in missing values, smooth out noise while
identifying outliers, and correct inconsistencies in the data.
 Data integration combines data from multiple sources to form a coherent data store.
The resolution of semantic heterogeneity, metadata, correlation analysis, tuple
duplication detection, and data conflict detection contribute to smooth data integration.
 Data reduction techniques obtain a reduced representation of the data while minimizing
the loss of information content. These include methods of dimensionality reduction,
numerosity reduction, and data compression.
 Data transformation routines convert the data into appropriate forms for mining. For
example, in normalization, attribute data are scaled so as to fall within a small range
such as 0.0 to 1.0. Other examples are data discretization and concept hierarchy
generation.
 Data discretization transforms numeric data by mapping values to interval or concept
labels. Such methods can be used to automatically generate concept hierarchies
for the data, which allows for mining at multiple levels of granularity.

1.7 Major issues in data mining
Major issues in data mining are mining methodology, user interaction, performance, and diverse data types. These issues
are introduced below:
Mining methodology and user interaction issues:
Mining different kinds of knowledge in databases:

Because different users can be interested in different kinds of knowledge, data mining should cover a wide spectrum of
data analysis and knowledge discovery tasks, which may use the same database in different ways and require the
development of numerous data mining techniques.
Interactive mining of knowledge at multiple levels of abstraction:

Interactive mining allows users to focus the search for patterns, providing and refining data mining requests based on
returned results, and to view data and discovered patterns at multiple granularities and from different angles.
Incorporation of background knowledge:

Background knowledge or domain knowledge guides the discovery process in concise terms at different levels of
abstraction, and can also speed up a data mining process or help judge the interestingness of discovered patterns.
Data mining query languages and ad hoc data mining :

High-level data mining query languages need to be developed to allow users to describe ad hoc data mining tasks by
facilitating the specification of the relevant sets of data for analysis, the domain knowledge, the kinds of knowledge to be
mined, and the conditions and constraints to be enforced on the discovered patterns.
Presentation and visualization of data mining results:

Using visual representations or other expressive forms, the knowledge can be easily understood and directly used by
humans. This requires the system to adopt expressive knowledge representation techniques, such as trees, tables, rules,
graphs, charts, crosstabs, matrices, or curves.
Handling noisy or incomplete data:


Noisy or incomplete data may confuse the mining process, causing the constructed knowledge model to overfit the data,
which in turn results in poor accuracy of the discovered patterns. Data cleaning methods and data analysis methods
that can handle noise are required, as well as outlier mining methods for the discovery and analysis of exceptional cases.
Pattern evaluation – the interestingness problem:

A data mining system can uncover thousands of patterns. Many of the patterns discovered may be uninteresting to the
given user, either because they represent common knowledge or lack novelty. Several challenges remain regarding the
development of techniques to assess the interestingness of discovered patterns, particularly with regard to subjective
measures that estimate the value of patterns with respect to a given user class, based on user beliefs or expectations. The
use of interestingness measures or user-specified constraints to guide the discovery process and reduce the search space
is another active area of research.
Performance issues:
These include the efficiency, scalability, and parallelization of data mining algorithms.
Efficiency and scalability of data mining algorithms:

In order to effectively extract information from a huge amount of data in databases, data mining algorithms must be
efficient and scalable, and the running time of a data mining algorithm must be predictable and acceptable.
Parallel, distributed, and incremental mining algorithms:

The huge size of databases, the wide distribution of data, and the high cost and computational complexity of data mining
methods lead to the development of parallel and distributed data mining algorithms. Moreover, incremental data mining
algorithms incorporate database updates without having to mine the entire data again from scratch.

Issues relating to the diversity of database types


Handling of relational and complex types of data:

It is unrealistic to expect one system to mine all kinds of data, given the diversity of data types and the different goals of
data mining. Specific data mining systems should be constructed for mining specific kinds of data. Therefore, one may
expect to have different data mining systems for different kinds of data.
Mining information from heterogeneous databases and global information systems:

Data mining may help disclose high-level data regularities in multiple heterogeneous databases that are unlikely to be
discovered by simple query systems, and may improve information exchange and interoperability in heterogeneous
databases. Web mining, which uncovers interesting knowledge about web contents, web structures, web usage, and web
dynamics, has become a very challenging and fast-evolving field in data mining.
Data mining systems rely on databases to supply raw data for input, and this raises problems in that databases tend to be
dynamic, incomplete, noisy, and large. Other problems arise as a result of the adequacy and relevance of the information
stored. The above issues are considered major requirements and challenges for the further evolution of data mining
technology.
