
Data Mining:

Concepts and Techniques


(3rd ed.)

— Chapter 3 —
Jiawei Han, Micheline Kamber, and Jian Pei
University of Illinois at Urbana-Champaign &
Simon Fraser University
©2011 Han, Kamber & Pei. All rights reserved.
1
Chapter 3: Data Preprocessing

■ Data Preprocessing: An Overview


■ Data Quality
■ Major Tasks in Data Preprocessing
■ Data Cleaning
■ Data Integration

■ Data Reduction
■ Data Transformation and Data Discretization
■ Summary

2
Data Quality: Why Preprocess the Data?
■ Measures for data quality: A multidimensional view
■ Accuracy: correct or wrong, accurate or not
■ E.g., due to human or computer error, limited buffer size, etc.

■ Completeness: not recorded, unavailable, …


■ Consistency: some modified but some not, dangling, …
■ Timeliness: timely update?
■ Believability: how much are the data trusted to be correct?
■ Interpretability: how easily can the data be understood?

3
Major Tasks in Data Preprocessing
■ Data cleaning
■ Fill in missing values, smooth noisy data, identify or remove outliers,
and resolve inconsistencies

■ Data integration
■ Integration of multiple databases, data cubes, or files
■ E.g., Customer_ID in one source vs. C_ID in another
■ FIRST NAME, MIDDLE NAME, LAST NAME

4
Major Tasks in Data Preprocessing
■ Data reduction
■ Dimensionality reduction
■ To obtain a reduced or compressed representation of the data
■ E.g., wavelet transforms, PCA, attribute subset selection, attribute construction
■ Numerosity reduction
■ Replace the data by alternative, smaller representations
■ Data compression
■ Data transformation and data discretization
■ Normalization
■ E.g., scaled to fall within the range [0, 1]
■ Concept hierarchy generation
■ E.g., the age attribute replaced by higher-level concepts such as youth, adult, senior

5
Forms of data preprocessing

6
Chapter 3: Data Preprocessing

■ Data Preprocessing: An Overview


■ Data Quality
■ Major Tasks in Data Preprocessing
■ Data Cleaning
■ Data Integration

■ Data Reduction
■ Data Transformation and Data Discretization
■ Summary

7
Data Cleaning
■ Data in the Real World Is Dirty: Lots of potentially incorrect data,
e.g., due to faulty instruments, human or computer error, or transmission errors
■ incomplete: lacking attribute values, lacking certain attributes of
interest, or containing only aggregate data
■ e.g., Occupation=“ ” (missing data)
■ noisy: containing noise, errors, or outliers
■ e.g., Salary=“−10” (an error)
■ inconsistent: containing discrepancies in codes or names, e.g.,
■ Age=“42”, Birthday=“03/07/2010”
■ Was rating “1, 2, 3”, now rating “A, B, C”

■ discrepancy between duplicate records

■ Intentional (e.g., disguised missing data)


■ Jan. 1 as everyone’s birthday?

8
Incomplete (Missing) Data

■ Data is not always available


■ E.g., many tuples have no recorded value for several
attributes, such as customer income in sales data
■ Missing data may be due to
■ equipment malfunction
■ inconsistent with other recorded data and thus deleted
■ data not entered due to misunderstanding
■ certain data may not be considered important at the
time of entry
■ history or changes of the data not registered
■ Missing data may need to be inferred
9
How to Handle Missing Data?
1. Ignore the tuple:
■ usually done when class label is missing (when doing
classification)
■ not effective when the % of missing values per attribute
varies considerably

Developer Experience Salary


Java 1 20000
Python 4
Java 1.5 25000
Python 2 40000
Python 4 80000
2
10
How to Handle Missing Data?
2. Fill in the missing value manually:
■ tedious + infeasible?

Student Pr. Present


80 P
25 P
34 P
31
0 A
78 P

11
How to Handle Missing Data?
3. Fill it in automatically with
■ a global constant : e.g., “unknown”, a new class?!
■ the attribute mean (normal/symmetric distribution)
■ median (asymmetric distribution)

Developer Experience Salary


Java 1 20000
Python 4
Java 1.5 25000
Python 2 40000
Python 4 80000
2

12
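Illustration (not from the slides): a minimal Python sketch, assuming the pandas library, that re-creates the developer/salary table above as a hypothetical DataFrame and fills the missing salary with the attribute mean or median, plus the class-based (per-developer-language) mean used on the next slide.

    import pandas as pd

    # Hypothetical developer/salary data from the slide; None marks the missing value.
    df = pd.DataFrame({
        "Developer":  ["Java", "Python", "Java", "Python", "Python"],
        "Experience": [1, 4, 1.5, 2, 4],
        "Salary":     [20000, None, 25000, 40000, 80000],
    })

    mean_fill   = df["Salary"].fillna(df["Salary"].mean())    # suits symmetric distributions
    median_fill = df["Salary"].fillna(df["Salary"].median())  # suits skewed distributions

    # Smarter: fill using the mean of samples belonging to the same class (here, the same language)
    class_fill = df.groupby("Developer")["Salary"].transform(lambda s: s.fillna(s.mean()))

    print(mean_fill.tolist(), median_fill.tolist(), class_fill.tolist())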
How to Handle Missing Data?
■ Fill it in automatically with
■ the attribute mean for all samples belonging to the
same class: smarter
■ the most probable value: inference-based such as
Bayesian formula or decision tree

Developer Experience Salary


Java 1 20000
Python 4
Java 1.5 25000
Python 2 40000
Python 4 80000
Java 2
13
Noisy Data
■ Noise: random error or variance in a measured variable
■ Incorrect attribute values may be due to
■ faulty data collection instruments

■ data entry problems

■ data transmission problems

■ technology limitation

■ inconsistency in naming convention

■ Other data problems which require data cleaning


■ duplicate records

■ incomplete data

■ inconsistent data

14
How to Handle Noisy Data?

■ Binning
■ first sort data and

partition into
(equal-frequency) bins
■ smooth by bin means,

smooth by bin median,


smooth by bin
boundaries, etc.

15
How to Handle Noisy Data?

■ Regression
■ smooth by fitting the data into regression functions

■ Linear Regression (best-fit line for two attributes)


■ Multiple linear regression (more than two attributes)

■ Clustering
■ detect and remove outliers

■ Combined computer and human inspection


■ detect suspicious values and check by human (e.g.,

deal with possible outliers)

16
Data Cleaning as a Process
■ But data cleaning is a big job.

■ What about data cleaning as a process?

■ How exactly does one proceed in tackling this task?

■ Are there any tools out there to help?

■ Two steps
■ Discrepancy detection

■ Data Transformation

17
Data Cleaning as a Process
1. Data discrepancy detection
■ Use metadata (e.g., domain, range, dependency, distribution)

■ Check field overloading

■ Check uniqueness rule

■ Consecutive rule (e.g. cheque number)

■ Null rule

■ Use commercial tools

■ Data scrubbing: use simple domain knowledge (e.g.,

postal code, spell-check) to detect errors and make


corrections
■ Data auditing: by analyzing data to discover rules and
relationships and to detect violators (e.g., correlation and
clustering to find outliers)

18
Data Cleaning as a Process
2. Data transformation
■ Data migration and integration

■ Data migration tools: allow transformations to be specified


■ E.g., replace the string “gender” by “sex”

■ ETL (Extraction/Transformation/Loading) tools: allow


users to specify transformations through a graphical user
interface

■ Integration of the two processes


■ Iterative and interactive

19
Chapter 3: Data Preprocessing

■ Data Preprocessing: An Overview


■ Data Quality
■ Major Tasks in Data Preprocessing
■ Data Cleaning
■ Data Integration

■ Data Reduction
■ Data Transformation and Data Discretization
■ Summary

20
Data Integration
■ Data integration:
■ Combines data from multiple sources into a coherent store

1. Entity identification problem


2. Redundancy and Correlation Analysis
3. Tuple duplication
4. Data value conflict detection and resolution

21
1. Entity identification problem:
■ How can equivalent real-world entities from multiple data sources
be matched up?
■ Schema integration: e.g., A.cust-id ≡ B.cust-#
■ Integrate metadata from different sources

■ Object matching:
■ Identify real world entities from multiple data sources
■ e.g., Bill Clinton = William Clinton

■ Attribute values from different sources may differ, so metadata is used to resolve them
■ Possible reasons: different representations, different scales
■ E.g., in one system a discount is applied to the whole order amount, while in another each individual line item is discounted

22
2. Redundancy and Correlation Analysis

■ Redundant data occur often when integration of multiple


databases
■ Object identification: The same attribute or object
may have different names in different databases
■ Derivable data: One attribute may be a “derived”
attribute in another table, e.g., annual revenue
■ Redundant attributes may be able to be detected by
correlation analysis and covariance analysis
■ Careful integration of the data from multiple sources may
help reduce/avoid redundancies and inconsistencies and
improve mining speed and quality
23
Correlation Analysis (Nominal Data)
■ Χ² (chi-square) test: Χ² = Σ (observed − expected)² / expected, summed over all cells of the contingency table
■ The larger the Χ2 value, the more likely the variables are
related
■ The cells that contribute the most to the Χ2 value are
those whose actual count is very different from the
expected count
■ Correlation does not imply causality
■ # of hospitals and # of car thefts in a city are correlated
■ Both are causally linked to the third variable: population

24
Chi-Square Calculation: An Example
(Nominal Data)
Contingency table
Play chess Not play chess Sum (row)
Like science fiction 250(90) 200(360) 450

Not like science fiction 50(210) 1000(840) 1050

Sum(col.) 300 1200 1500

■ Χ² (chi-square) calculation (numbers in parentheses are expected counts, calculated based on the data distribution in the two categories):
Χ² = (250 − 90)²/90 + (50 − 210)²/210 + (200 − 360)²/360 + (1000 − 840)²/840 ≈ 507.93
■ Since 507.93 far exceeds the critical value for 1 degree of freedom (10.83 at the 0.001 significance level), it shows that like_science_fiction and play_chess are correlated in the group
25
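Illustration (not from the slides): the same Χ² arithmetic as above, written out as a short Python check using the observed and expected counts from the contingency table.

    # Observed and expected counts from the contingency table above
    observed = [250, 200, 50, 1000]
    expected = [90, 360, 210, 840]

    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    print(round(chi2, 2))   # ~507.93, far above the 1-df critical value, so the attributes are correlated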
Correlation Analysis (Numeric Data)

■ Correlation coefficient (also called Pearson’s product moment coefficient):
rA,B = Σ(ai − Ā)(bi − B̄) / (n σA σB) = (Σ(aibi) − n Ā B̄) / (n σA σB)
■ where n is the number of tuples, Ā and B̄ are the respective means of A and B, σA and σB are the respective standard deviations of A and B, and Σ(aibi) is the sum of the AB cross-product.
■ If rA,B > 0, A and B are positively correlated (A’s values increase as B’s do). The higher the value, the stronger the correlation.
■ rA,B = 0: independent; rA,B < 0: negatively correlated

26
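Illustration (not from the slides): a NumPy sketch of the correlation coefficient defined above, using the stock-price values that appear later in the covariance example; np.corrcoef is shown only as a cross-check.

    import numpy as np

    # Illustrative attribute values (the stock prices from the covariance example)
    a = np.array([2.0, 3.0, 5.0, 4.0, 6.0])
    b = np.array([5.0, 8.0, 10.0, 11.0, 14.0])

    n = len(a)
    # r = (sum(ai*bi) - n*mean(A)*mean(B)) / (n * sigma_A * sigma_B), population std (ddof=0)
    r = (np.sum(a * b) - n * a.mean() * b.mean()) / (n * a.std() * b.std())
    print(round(r, 3), round(np.corrcoef(a, b)[0, 1], 3))   # both give the same value (~0.941)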
Visually Evaluating Correlation

Scatter plots
showing the
similarity from
–1 to 1.

27
Covariance (Numeric Data)
■ Covariance is similar to correlation:
Cov(A,B) = E[(A − Ā)(B − B̄)] = Σ(ai − Ā)(bi − B̄) / n
Correlation coefficient: rA,B = Cov(A,B) / (σA σB)
■ where n is the number of tuples, Ā and B̄ are the respective mean or expected values of A and B, and σA and σB are the respective standard deviations of A and B.

■ Positive covariance: If CovA,B > 0, then A and B both tend to be larger


than their expected values.
■ Negative covariance: If CovA,B < 0 then if A is larger than its expected
value, B is likely to be smaller than its expected value.
■ Independence: CovA,B = 0
■ Variance is a special case of covariance (the covariance of an attribute with itself)
28
Co-Variance: An Example

■ It can be simplified in computation as Cov(A,B) = E(A·B) − Ā·B̄, i.e., Σ(aibi)/n − Ā·B̄

■ Suppose two stocks A and B have the following values in one week:
(2, 5), (3, 8), (5, 10), (4, 11), (6, 14).

■ Question: If the stocks are affected by the same industry trends, will
their prices rise or fall together?

■ E(A) = (2 + 3 + 5 + 4 + 6)/ 5 = 20/5 = 4

■ E(B) = (5 + 8 + 10 + 11 + 14) /5 = 48/5 = 9.6

■ Cov(A,B) = (2×5+3×8+5×10+4×11+6×14)/5 − 4 × 9.6 = 4


■ Thus, A and B rise together since Cov(A, B) > 0.
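Illustration (not from the slides): the covariance example above redone in NumPy using the simplified form E(A·B) − E(A)·E(B); np.cov with bias=True gives the same population covariance.

    import numpy as np

    a = np.array([2, 3, 5, 4, 6])      # stock A prices over one week
    b = np.array([5, 8, 10, 11, 14])   # stock B prices

    cov = np.mean(a * b) - a.mean() * b.mean()   # E(AB) - E(A)E(B)
    print(round(cov, 2))                         # 4.0 -> positive, so A and B tend to rise together
    # np.cov(a, b, bias=True)[0, 1] returns the same population covariance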
Covariance and Correlation
■ Covariance:
■ Relation between X and Y (Direction)

■ Correlation
■ How strongly X and Y are related?

30


3. Tuple Duplication
■ Due to denormalized tables

4. Data value conflict detection and


resolution
■ Detection and resolution of data value conflicts
■ E.g., height in cm vs. height in inches/feet
■ Temperature in °C vs. °F
■ Differences due to level of abstraction


■ “Total_sales” at branch level

■ “Total_sales” at region level

31


Chapter 3: Data Preprocessing

■ Data Preprocessing: An Overview


■ Data Quality
■ Major Tasks in Data Preprocessing
■ Data Cleaning
■ Data Integration

■ Data Reduction
■ Data Transformation and Data Discretization
■ Summary

32
Data Reduction

■ Data reduction: Obtain a reduced representation


of the data set that is much smaller in volume but
yet produces the same (or almost the same)
analytical results

■ Why data reduction? — A database/data


warehouse may store terabytes of data.
■ Complex data analysis may take a very long time
to run on the complete data set.
■ Curse of dimensionality

33
Data Reduction 1: Dimensionality Reduction
■ Curse of dimensionality
■ When dimensionality increases, data becomes increasingly sparse
■ Density and distance between points, which are critical to clustering and outlier
analysis, become less meaningful
■ The possible combinations of subspaces will grow exponentially

■ Dimensionality reduction
■ Avoid the curse of dimensionality
■ Help eliminate irrelevant features and reduce noise
■ Reduce time and space required in data mining
■ Allow easier visualization

34
Data Reduction Strategies

■ Data reduction strategies


■ Dimensionality reduction, e.g., remove unimportant
attributes
■ Wavelet transforms

■ Principal Components Analysis (PCA)

■ Feature subset selection, feature creation

■ Numerosity reduction (some simply call it: Data


Reduction)
■ Regression and Log-Linear Models

■ Histograms, clustering, sampling

■ Data cube aggregation

■ Data compression

35
What Is Wavelet Transform?
■ Signal processing technique that, when applied to a
data vector X, transforms it to a numerically different
vector X’ of wavelet coefficients.
■ Both are of the same length
■ Data are transformed to preserve relative distance
between objects at different levels of resolution
■ Store only a small fraction of the strongest of the
wavelet coefficients
■ Used for image compression

36
Wavelet Decomposition
■ Wavelets: A math tool for space-efficient hierarchical
decomposition of functions
■ S = [2, 2, 0, 2, 3, 5, 4, 4] can be transformed to Ŝ =
[2¾, −1¼, ½, 0, 0, −1, −1, 0] (i.e., [2.75, −1.25, 0.5, 0, 0, −1, −1, 0])
■ Compression: many small detail coefficients can be
replaced by 0’s, and only the significant coefficients are
retained

37
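Illustration (not from the slides): a minimal sketch of the pairwise averaging/differencing (Haar-style) decomposition described above; it reproduces the coefficients for S = [2, 2, 0, 2, 3, 5, 4, 4].

    def haar_decompose(signal):
        """Recursive pairwise averaging (smoothing) and differencing (detail)."""
        coeffs = []
        data = list(signal)
        while len(data) > 1:
            averages = [(data[i] + data[i + 1]) / 2 for i in range(0, len(data), 2)]
            details  = [(data[i] - data[i + 1]) / 2 for i in range(0, len(data), 2)]
            coeffs = details + coeffs    # finer-level details end up to the right
            data = averages
        return data + coeffs             # overall average first, then details coarse-to-fine

    print(haar_decompose([2, 2, 0, 2, 3, 5, 4, 4]))
    # [2.75, -1.25, 0.5, 0.0, 0.0, -1.0, -1.0, 0.0]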
Wavelet Transformation
■ Method:
■ Length, L, must be an integer power of 2 (padding with 0’s, when
necessary)
■ Each transform has 2 functions: smoothing (averaging) and difference
■ Applied to pairs of data, resulting in two sets of data of length L/2
■ The two functions are applied recursively until the desired length is reached

38
Why Wavelet Transform?
■ Effective removal of outliers
■ Insensitive to noise, insensitive to input order

■ Efficient
■ Complexity O(N)

■ Only applicable to low dimensional data

39
Principal Component Analysis (PCA)
■ Find a projection that captures the largest amount of variation in data
■ The original data are projected onto a much smaller space, resulting
in dimensionality reduction.
■ We find the eigenvectors of the covariance matrix, and these
eigenvectors define the new space

x2

x1 40
Principal Component Analysis (Steps)
■ Given N data vectors from n-dimensions, find k ≤ n orthogonal vectors
(principal components) that can be best used to represent data
■ Normalize input data: Each attribute falls within the same range
■ Compute k orthonormal (unit) vectors, i.e., principal components
■ Each input data (vector) is a linear combination of the k principal
component vectors
■ The principal components are sorted in order of decreasing
“significance” or strength
■ Since the components are sorted, the size of the data can be
reduced by eliminating the weak components, i.e., those with low
variance (i.e., using the strongest principal components, it is
possible to reconstruct a good approximation of the original data)
■ Works for numeric data only

41
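Illustration (not from the slides): a NumPy sketch of the PCA steps listed above (center the data, take eigenvectors of the covariance matrix, keep the strongest component) on made-up 2-D points; the data values are assumptions for demonstration only.

    import numpy as np

    # Hypothetical 2-D data
    X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2],
                  [3.1, 3.0], [2.3, 2.7], [2.0, 1.6], [1.0, 1.1]])

    Xc = X - X.mean(axis=0)                 # normalize: zero-mean each attribute
    cov = np.cov(Xc, rowvar=False)          # covariance matrix of the attributes
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvectors define the new space
    order = np.argsort(eigvals)[::-1]       # sort components by decreasing variance
    pc = eigvecs[:, order[:1]]              # keep only the strongest component (k = 1)
    X_reduced = Xc @ pc                     # project the data onto it
    print(X_reduced.shape)                  # (8, 1): dimensionality reduced from 2 to 1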
Attribute Subset Selection
■ Another way to reduce dimensionality of data
■ Redundant attributes
■ Duplicate much or all of the information contained in
one or more other attributes
■ E.g., purchase price of a product and the amount of
GST paid
■ Irrelevant attributes
■ Contain no information that is useful for the data
mining task at hand
■ E.g., students' ID is often irrelevant to the task of
predicting students' GPA

42
Heuristic methods in Attribute Selection

43
Heuristic Search in Attribute Selection

■ There are 2^d possible attribute combinations of d attributes


■ Typical heuristic attribute selection methods:
■ Best single attribute under the attribute independence

assumption: choose by significance tests


■ Best step-wise forward selection:

■ The best single-attribute is picked first

■ Then the next best attribute conditioned on the first is added, ...

■ Step-wise attribute elimination:

■ Repeatedly eliminate the worst attribute

■ Best combined attribute selection and elimination

■ Optimal branch and bound:

■ Use attribute elimination and backtracking

44
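Illustration (not from the slides): a sketch of best step-wise forward selection as described above. The score_subset function and the toy scoring rule are hypothetical placeholders for whatever evaluation (e.g., a significance test or cross-validated accuracy) the task actually uses.

    def forward_selection(attributes, score_subset, k):
        """Greedy step-wise forward selection of up to k attributes."""
        selected = []
        while len(selected) < k:
            remaining = [a for a in attributes if a not in selected]
            if not remaining:
                break
            # Pick the attribute whose addition gives the best score
            best = max(remaining, key=lambda a: score_subset(selected + [a]))
            selected.append(best)
        return selected

    # Toy usage: the score is simply how many "useful" attributes the subset contains
    useful = {"income", "age"}
    print(forward_selection(["income", "age", "student_id", "zipcode"],
                            lambda subset: len(set(subset) & useful), k=2))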
Attribute Creation (Feature Generation)
■ Create new attributes (features) that can capture the
important information in a data set more effectively than
the original ones

■ E.g., add an area attribute based on the height and width of the shape

45
2. Numerosity Reduction

46
Data Reduction 2: Numerosity Reduction
■ Reduce data volume by choosing alternative, smaller
forms of data representation
■ Parametric methods (e.g., regression)
■ Assume the data fits some model,

■ estimate model parameters,

■ store only the parameters, and discard the data

(except possible outliers)


■ Ex.: Log-linear models—obtain a value at a point in m-D space as the product of values on appropriate marginal subspaces
■ Non-parametric methods
■ Do not assume models

■ Major families: histograms, clustering, sampling, …


47
Parametric Data Reduction: Regression and
Log-Linear Models
■ Linear regression
■ Data modeled to fit a straight line

■ Often uses the least-square method to fit the line

■ Multiple regression
■ Allows a response variable Y to be modeled as a

linear function of multidimensional feature vector


■ Log-linear model
■ Approximates discrete multidimensional probability

distributions

48
Regression Analysis

[Figure: scatter of points with a fitted line y = x + 1; a point (X1, Y1) and its fitted value Y1’ on the line]

■ Regression analysis: A collective name for techniques for the modeling and analysis of numerical data consisting of values of a dependent variable (also called response variable or measurement) and of one or more independent variables (aka. explanatory variables or predictors)
■ The parameters are estimated so as to give a “best fit” of the data
■ Most commonly the best fit is evaluated by using the least squares method, but other criteria have also been used
■ Used for prediction (including forecasting of time-series data), inference, hypothesis testing, and modeling of causal relationships

49
Regress Analysis and Log-Linear Models
■ Linear regression: Y = w X + b
■ Two regression coefficients, w and b, specify the line and are to be
estimated by using the data at hand
■ Estimated using the least squares criterion on the known values Y1, Y2, …,
and X1, X2, …
■ Multiple regression: Y = b0 + b1 X1 + b2 X2
■ Many nonlinear functions can be transformed into the above
■ Log-linear models:
■ Approximate discrete multidimensional probability distributions
■ Estimate the probability of each point (tuple) in a multi-dimensional
space for a set of discretized attributes, based on a smaller subset
of dimensional combinations
■ Useful for dimensionality reduction and data smoothing
50
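Illustration (not from the slides): fitting Y = wX + b with the least squares criterion in NumPy, using made-up (x, y) pairs that roughly follow the y = x + 1 line from the earlier figure.

    import numpy as np

    # Hypothetical observations roughly following y = x + 1
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 2.9, 4.2, 5.0, 5.9])

    # Closed-form least-squares estimates of slope w and intercept b
    w = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    b = y.mean() - w * x.mean()
    print(round(w, 3), round(b, 3))    # slope ~0.97, intercept ~1.11, i.e. close to y = x + 1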
Histogram Analysis
■ Partitioning rules:
■ Equal-width: equal bucket range
■ Equal-frequency (or equal-depth)

51
Clustering
■ Partition data set into clusters based on similarity, and
store cluster representation (e.g., centroid and diameter)
only
■ Can have hierarchical clustering and be stored in
multi-dimensional index tree structures
■ There are many choices of clustering definitions and
clustering algorithms
■ Cluster analysis will be studied in depth in Chapter 10

52
Sampling

■ Sampling: obtaining a small sample s to represent the


whole data set N
■ Allow a mining algorithm to run in complexity that is
potentially sub-linear to the size of the data
■ Key principle: Choose a representative subset of the data
■ Simple random sampling may have very poor
performance in the presence of skew
■ Develop adaptive sampling methods, e.g., stratified
sampling

53
Types of Sampling

■ Simple random sample without replacement (SRSWOR) of size s
■ Once an object is selected, it is removed from the population
■ Simple random sample with replacement (SRSWR) of size s
■ A selected object is not removed from the population

54
Sampling: With or without Replacement

[Figure: raw data sampled by SRSWOR (simple random sample without replacement) and by SRSWR (simple random sample with replacement)]
55
Types of Sampling

■ Cluster sample: the tuples are grouped into mutually disjoint “clusters”, and a simple random sample of the clusters is drawn
56
Types of Sampling

■ Stratified sampling:
■ Partition the data set, and draw samples from each
partition (proportionally, i.e., approximately the same
percentage of the data)
■ Used in conjunction with skewed data

57
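Illustration (not from the slides): a sketch of SRSWOR, SRSWR, and proportional stratified sampling using Python's random module; the 80/20 "young"/"senior" split is a made-up skewed population.

    import random

    random.seed(42)
    data = list(range(100))                       # the full data set N

    srswor = random.sample(data, 10)              # without replacement: no repeats
    srswr  = random.choices(data, k=10)           # with replacement: repeats possible

    # Stratified sampling: draw the same percentage from each stratum (here, a skewed split)
    strata = {"young": data[:80], "senior": data[80:]}
    stratified = [x for group in strata.values()
                  for x in random.sample(group, max(1, len(group) // 10))]
    print(len(srswor), len(srswr), len(stratified))   # 10, 10, 8 + 2 = 10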
Sampling: Stratified Sampling

[Figure: raw data partitioned into strata, with a proportional sample drawn from each stratum to form the stratified sample]

58
Data Cube Aggregation

■ The lowest level of a data cube (base cuboid)


■ The aggregated data for an individual entity of interest
■ Multiple levels of aggregation in data cubes
■ Further reduce the size of data to deal with
■ Reference appropriate levels
■ Use the smallest representation which is enough to
solve the task
■ Queries regarding aggregated information should be
answered using data cube, when possible

59
Data Aggregation

60
Data Cube Aggregation

61
Data Reduction 3: Data Compression
■ String compression
■ There are extensive theories and well-tuned algorithms

■ Typically lossless, but only limited manipulation is

possible without expansion

■ Audio/video compression
■ Typically lossy compression, with progressive refinement

■ Sometimes small fragments of signal can be

reconstructed without reconstructing the whole

■ Dimensionality and numerosity reduction may also be


considered as forms of data compression
62
Data Compression

[Figure: lossless compression maps the original data to compressed data and back exactly; lossy compression recovers only an approximation of the original data]
63
Chapter 3: Data Preprocessing

■ Data Preprocessing: An Overview


■ Data Quality
■ Major Tasks in Data Preprocessing
■ Data Cleaning
■ Data Integration

■ Data Reduction
■ Data Transformation and Data Discretization
■ Summary

64
Data Transformation
■ A function that maps the entire set of values of a given attribute to a
new set of replacement values s.t. each old value can be identified
with one of the new values
■ Methods
■ Smoothing: Remove noise from data
■ Attribute/feature construction
■ New attributes constructed from the given ones
■ Aggregation: Summarization, data cube construction
■ Normalization: Scaled to fall within a smaller, specified range
■ min-max normalization
■ z-score normalization
■ normalization by decimal scaling
■ Discretization: Concept hierarchy climbing 65
Normalization
■ Min-max normalization: to [new_minA, new_maxA]
v’ = (v − minA) / (maxA − minA) × (new_maxA − new_minA) + new_minA
■ Ex. Let income range $12,000 to $98,000 be normalized to [0.0, 1.0].
Then $73,000 is mapped to (73,000 − 12,000) / (98,000 − 12,000) ≈ 0.709
■ Z-score normalization (μ: mean, σ: standard deviation):
v’ = (v − μ) / σ
■ Ex. Let μ = 54,000, σ = 16,000. Then, e.g., $73,000 is mapped to (73,000 − 54,000) / 16,000 ≈ 1.19
■ Normalization by decimal scaling:
v’ = v / 10^j, where j is the smallest integer such that Max(|v’|) < 1
■ Ex. A range from −986 to 917: the maximum absolute value is 986, so j = 3 and v’ = −986/1000 = −0.986 (and 917 becomes 0.917)
66
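Illustration (not from the slides): the three normalizations above applied to the income example (range $12,000–$98,000, μ = 54,000, σ = 16,000) and the decimal-scaling example.

    def min_max(v, lo, hi, new_lo=0.0, new_hi=1.0):
        return (v - lo) / (hi - lo) * (new_hi - new_lo) + new_lo

    def z_score(v, mu, sigma):
        return (v - mu) / sigma

    def decimal_scaling(v, max_abs):
        j = len(str(int(max_abs)))      # j = number of digits in the largest absolute value
        return v / (10 ** j)

    print(round(min_max(73000, 12000, 98000), 3))   # ~0.709
    print(round(z_score(73000, 54000, 16000), 3))   # ~1.188
    print(decimal_scaling(-986, 986))               # -0.986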
Discretization
■ Three types of attributes
■ Nominal—values from an unordered set, e.g., color, profession
■ Ordinal—values from an ordered set, e.g., military or academic
rank
■ Numeric—real numbers, e.g., integer or real numbers
■ Discretization: Divide the range of a continuous attribute into intervals
■ Interval labels can then be used to replace actual data values
■ Reduce data size by discretization
■ Supervised vs. unsupervised
■ Split (top-down) vs. merge (bottom-up)
■ Discretization can be performed recursively on an attribute
■ Prepare for further analysis, e.g., classification

67
Data Discretization Methods
■ Typical methods: All the methods can be applied recursively
■ Binning
■ Top-down split, unsupervised
■ Histogram analysis
■ Top-down split, unsupervised
■ Clustering analysis (unsupervised, top-down split or
bottom-up merge)
■ Decision-tree analysis (supervised, top-down split)
■ Correlation (e.g., χ2) analysis (unsupervised, bottom-up
merge)

68
Simple Discretization: Binning
■ Equal-width (distance) partitioning
■ Divides the range into N intervals of equal size: uniform grid
■ if A and B are the lowest and highest values of the attribute, the
width of intervals will be: W = (B –A)/N.
■ The most straightforward, but outliers may dominate presentation
■ Skewed data is not handled well

■ Equal-depth (frequency) partitioning


■ Divides the range into N intervals, each containing approximately
same number of samples
■ Good data scaling
■ Managing categorical attributes can be tricky

69
Binning Methods for Data Smoothing
❑ Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26,
28, 29, 34
* Partition into equal-frequency (equi-depth) bins:
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34
* Smoothing by bin means:
- Bin 1: 9, 9, 9, 9
- Bin 2: 23, 23, 23, 23
- Bin 3: 29, 29, 29, 29
* Smoothing by bin boundaries:
- Bin 1: 4, 4, 4, 15
- Bin 2: 21, 21, 25, 25
- Bin 3: 26, 26, 26, 34

70
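Illustration (not from the slides): a short Python sketch that reproduces the equal-frequency binning and the smoothing by bin means and bin boundaries shown above.

    prices = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]   # already sorted

    n_bins = 3
    size = len(prices) // n_bins
    bins = [prices[i * size:(i + 1) * size] for i in range(n_bins)]

    # Smoothing by bin means: replace every value by its bin's (rounded) mean
    by_means = [[round(sum(b) / len(b))] * len(b) for b in bins]
    # Smoothing by bin boundaries: replace every value by the closer of min/max of its bin
    by_boundaries = [[min(b) if v - min(b) <= max(b) - v else max(b) for v in b] for b in bins]

    print(bins)           # [[4, 8, 9, 15], [21, 21, 24, 25], [26, 28, 29, 34]]
    print(by_means)       # [[9, 9, 9, 9], [23, 23, 23, 23], [29, 29, 29, 29]]
    print(by_boundaries)  # [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]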
Discretization by Classification &
Correlation Analysis
■ Classification (e.g., decision tree analysis)
■ Supervised: Given class labels, e.g., cancerous vs. benign
■ Using entropy to determine split point (discretization point)
■ Top-down, recursive split
■ Details to be covered in Chapter 7
■ Correlation analysis (e.g., Chi-merge: χ2-based discretization)
■ Supervised: use class information
■ Bottom-up merge: find the best neighboring intervals (those
having similar distributions of classes, i.e., low χ2 values) to merge
■ Merge performed recursively, until a predefined stopping condition

71
Concept Hierarchy Generation
■ Concept hierarchy organizes concepts (i.e., attribute values)
hierarchically and is usually associated with each dimension in a data
warehouse
■ Concept hierarchies facilitate drilling and rolling in data warehouses to
view data in multiple granularity
■ Concept hierarchy formation: Recursively reduce the data by collecting
and replacing low level concepts (such as numeric values for age) by
higher level concepts (such as youth, adult, or senior)
■ Concept hierarchies can be explicitly specified by domain experts
and/or data warehouse designers
■ Concept hierarchy can be automatically formed for both numeric and
nominal data. For numeric data, use discretization methods shown.

72
Concept Hierarchy Generation
for Nominal Data
■ Specification of a partial/total ordering of attributes
explicitly at the schema level by users or experts
■ street < city < state < country
■ Specification of a hierarchy for a set of values by explicit
data grouping
■ Price-range grouping, or grouping by location, e.g., North India, South India, etc.


■ Specification of only a partial set of attributes
■ E.g., only street < city, not others

■ Automatic generation of hierarchies (or attribute levels) by


the analysis of the number of distinct values
■ E.g., for a set of attributes: { street, city, state, country}
73
Automatic Concept Hierarchy Generation
■ Some hierarchies can be automatically generated based on
the analysis of the number of distinct values per attribute in
the data set
■ The attribute with the most distinct values is placed at
the lowest level of the hierarchy
■ Exceptions, e.g., weekday, month, quarter, year

country 15 distinct values

province_or_state 365 distinct values

city 3567 distinct values

street 674,339 distinct values


74
Chapter 3: Data Preprocessing

■ Data Preprocessing: An Overview


■ Data Quality
■ Major Tasks in Data Preprocessing
■ Data Cleaning
■ Data Integration

■ Data Reduction
■ Data Transformation and Data Discretization
■ Summary

75
Summary
■ Data quality: accuracy, completeness, consistency, timeliness,
believability, interpretability
■ Data cleaning: e.g. missing/noisy values, outliers
■ Data integration from multiple sources:
■ Entity identification problem
■ Remove redundancies
■ Detect inconsistencies
■ Data reduction
■ Dimensionality reduction
■ Numerosity reduction
■ Data compression
■ Data transformation and data discretization
■ Normalization
■ Concept hierarchy generation

76
Examples

78
79
