
02-DataQuality Compressed

The document discusses data quality issues in data science, highlighting problems such as missing data, entity resolution, and unit mismatches. It emphasizes the importance of understanding data quality from different perspectives, including statistical, database, and domain expert views. Additionally, it outlines various data quality problems, their causes, and potential solutions, while stressing the need for a multi-disciplinary approach to effectively address these challenges.


Introduction to Data Science

Data Quality
DB-hard Queries
Company_Name Address Market Cap
Google Googleplex, Mtn. View, CA $210Bn
Intl. Business Machines Armonk, NY $200Bn
Microsoft Redmond, WA $250Bn

SELECT Market_Cap
FROM Companies
WHERE Company_Name = 'Apple'

Number of Rows: 0

Problem:
Missing Data
DB-hard Queries
Company_Name Address Market Cap
Google Googleplex, Mtn. View, CA $210Bn
Intl. Business Machines Armonk, NY $200Bn
Microsoft Redmond, WA $250Bn

SELECT Market_Cap
FROM Companies
WHERE Company_Name = 'IBM'

Number of Rows: 0

Problem:
Entity Resolution
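One common mitigation is approximate matching: "IBM" and "Intl. Business Machines" are the same entity under different names, which plain SQL equality cannot see. A minimal sketch using Python's difflib; the alias table and similarity threshold are illustrative assumptions, not from the slides:

```python
from difflib import SequenceMatcher

companies = {
    "Google": "$210Bn",
    "Intl. Business Machines": "$200Bn",
    "Microsoft": "$250Bn",
}
ALIASES = {"IBM": "Intl. Business Machines"}  # hand-curated synonyms

def similarity(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def resolve(name, threshold=0.6):
    """Return the canonical company name for `name`, or None."""
    if name in companies:
        return name
    if name in ALIASES:
        return ALIASES[name]
    best = max(companies, key=lambda c: similarity(name, c))
    return best if similarity(name, best) >= threshold else None

print(resolve("IBM"))    # Intl. Business Machines (via alias table)
print(resolve("Apple"))  # None: genuinely missing data, not an ER failure
```

Note the two failure modes stay distinct: "IBM" is an entity-resolution problem, "Apple" is missing data.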
DB-hard Queries
Company_Name Address Market Cap
Google Googleplex, Mtn. View $210
Intl. Business Machines Armonk, NY $200
Microsoft Redmond, WA €250
Sally’s Lemonade Stand Alameda,CA $260

SELECT MAX(Market_Cap)
FROM Companies

Number of Rows: 1

Problem:
Unit Mismatch
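MAX over mixed currencies is meaningless; the values must be normalized to one unit first. A sketch under illustrative assumptions: the exchange rate is invented, and nothing in the data itself reveals that Sally's figure is plain dollars rather than billions, so that scale mismatch remains uncaught:

```python
EUR_TO_USD = 1.1  # assumed fixed rate, not from the table

rows = [
    ("Google", "$210"),
    ("Intl. Business Machines", "$200"),
    ("Microsoft", "€250"),
    ("Sally's Lemonade Stand", "$260"),  # plain dollars, but who would know?
]

def to_usd(value):
    """Normalize a currency string to USD before aggregating."""
    if value.startswith("$"):
        return float(value[1:])
    if value.startswith("€"):
        return float(value[1:]) * EUR_TO_USD
    raise ValueError(f"unrecognized currency: {value!r}")

normalized = {name: to_usd(v) for name, v in rows}
print(max(normalized, key=normalized.get))  # Microsoft: €250 -> $275
```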
WHO’S CALLING WHOSE DATA DIRTY?
Dirty Data
• The Statistics View:
• There is a process that produces data
• Any dataset is a sample of the output of that
process
• Results are probabilistic
• You can correct bias in your sample
Dirty Data
• The Database View:
• I got my hands on this data set
• Some of the values are missing, corrupted, wrong,
duplicated
• Results are absolute (relational model)
• You get a better answer by improving the quality
of the values in your dataset
Dirty Data
• The Domain Expert’s View:
• This Data Doesn’t look right
• This Answer Doesn’t look right
• What happened?
Dirty Data
• The Data Scientist’s View:
• Some Combination of all of the above
Data Quality Problems
• Data is dirty on its own
• Data sets are clean but integration (i.e., combining them) messes them up
• Data sets are clean but suffer “bit rot”
– Old data loses its value over time
• Any combination of the above

Big Picture: Where can Dirty Data Arise?

Integrate
Clean

Extract
Transform
Load
Example Data Quality Problems
T.Das|97336o8327|24.95|Y|-|0.0|1000
Ted J.|973-360-8779|2000|N|M|NY|1000

• Can we interpret the data?
– What do the fields mean?
– What is the key? The measures?
• Data glitches
– Typos, multiple formats, missing / default values
• Metadata and domain expertise
– Field three is Revenue. In dollars or cents?
– Field seven is Usage. Is it censored?
• Field 4 is a censored flag. How to handle censored data?

Adapted from Ted Johnson’s SIGMOD 2003 Tutorial
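The point of the slide can be made concrete by parsing the two records with a guessed schema. The field names below are assumptions based on the notes above; fields 5 and 6 cannot even be named from the data alone (field 6 is "0.0" in one row and "NY" in the other):

```python
raw = [
    "T.Das|97336o8327|24.95|Y|-|0.0|1000",
    "Ted J.|973-360-8779|2000|N|M|NY|1000",
]
# Guessed field names: only "revenue", "censored", and "usage" are hinted
# at by the slide; "field5"/"field6" are unrecoverable without metadata.
FIELDS = ["name", "phone", "revenue", "censored", "field5", "field6", "usage"]

records = [dict(zip(FIELDS, line.split("|"))) for line in raw]

# Glitches surface as soon as the fields are typed:
for r in records:
    if not r["phone"].replace("-", "").isdigit():
        print("bad phone:", r["phone"])  # '97336o8327': letter o, not zero
```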


Data Glitches (a telco view)
• Systemic changes to data which are external to
the recorded process.
– Changes in data layout / data types
• Integer becomes string, fields swap positions, etc.
– Changes in scale / format
• Dollars vs. euros
– Temporary reversion to defaults
• Failure of a processing step
– Missing and default values
• Application programs do not handle NULL values well …
– Gaps in time series
• Especially when records represent incremental changes.

Adapted from Ted Johnson’s SIGMOD 2003 Tutorial


Dirty Data Problems
• From Stanford Data Integration Course:
1) Parsing text into fields (separator issues)
2) Naming conventions (ER): NYC vs. New York
3) Missing required field (e.g., key field)
4) Different representations (2 vs. Two)
5) Fields too long (get truncated)
6) Primary key violation (from unstructured-to-structured conversion, or during integration)
7) Redundant records (exact match or other)
8) Formatting issues, esp. dates
9) Licensing/privacy issues that keep you from using the data as you would like
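Several of these problems are mechanically detectable. A sketch of automated checks for items 3, 7, and 8 from the list; the records and field names are illustrative:

```python
import re

rows = [
    {"id": "17", "city": "NYC", "date": "2023-01-05"},
    {"id": "",   "city": "New York", "date": "01/05/2023"},
    {"id": "17", "city": "NYC", "date": "2023-01-05"},  # exact duplicate
]

problems = []
seen = set()
for i, r in enumerate(rows):
    if not r["id"]:                                        # 3) missing key field
        problems.append((i, "missing key"))
    if not re.fullmatch(r"\d{4}-\d{2}-\d{2}", r["date"]):  # 8) date format
        problems.append((i, "nonstandard date"))
    key = tuple(sorted(r.items()))                         # 7) redundant records
    if key in seen:
        problems.append((i, "duplicate record"))
    seen.add(key)

print(problems)
```

Item 2 (NYC vs. New York) is harder: it needs entity resolution, not a per-row check.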
Typical problems
• Why are so many of our customers in Beverly
Hills, CA and Schenectady, NY?
Numeric Outliers

Adapted from Joe Hellerstein’s 2012 CS 194 Guest Lecture
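The Beverly Hills / Schenectady "customers" are a classic default-value glitch: 90210 and 12345 are well-known placeholder ZIP codes. Plain numeric outliers can be flagged with the interquartile range; a minimal sketch (the sample data and the conventional 1.5 multiplier are illustrative):

```python
def iqr_outliers(xs, k=1.5):
    """Return values outside [Q1 - k*IQR, Q3 + k*IQR]."""
    s = sorted(xs)
    def quantile(q):
        # Linear interpolation between order statistics.
        pos = q * (len(s) - 1)
        lo = int(pos)
        hi = min(lo + 1, len(s) - 1)
        return s[lo] + (pos - lo) * (s[hi] - s[lo])
    q1, q3 = quantile(0.25), quantile(0.75)
    spread = k * (q3 - q1)
    return [x for x in xs if x < q1 - spread or x > q3 + spread]

print(iqr_outliers([12, 14, 13, 15, 14, 13, 999]))  # [999]
```

Default values like 90210 are better caught by frequency analysis than by magnitude: they are not numerically extreme, just suspiciously common.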


Schema and Data Integration

Which problems does integration exacerbate?
Which problems does schema-on-write help?
[Diagram: a mediated schema connected via semantic mappings to wrappers over the individual sources]
Courtesy of Alon Halevy


Data Cleaning Makes Everything Okay?
The appearance of a hole in the earth's ozone layer over Antarctica, first detected in 1976, was so unexpected that scientists didn't pay attention to what their instruments were telling them; they thought their instruments were malfunctioning.
(National Center for Atmospheric Research)
In fact, the data were rejected as unreasonable by data quality control algorithms.
How Clean is “clean-enough”?
• How much cleaning is too much?
• Answers are likely to be:
• domain-specific
• data source-specific
• application-specific
• user-specific
• all of the above?
How to split between shared and
application-specific cleaning?
[Infographic from insightsquared.com]

DATA QUALITY
Meaning of Data Quality (1)
• Generally, you have a problem if the data doesn't mean what you think it does, or should
– Data not up to spec: garbage in, glitches, etc.
– You don't understand the spec: complexity, lack of metadata.
• Many sources and manifestations
– As we have discussed
• Data quality problems are expensive and pervasive
– DQ problems cost hundreds of billions of dollars each year.
– Resolving data quality problems is often the biggest effort in a data mining study.

Adapted from Ted Johnson’s SIGMOD 2003 Tutorial


Conventional Definition of Data Quality
• Accuracy
– The data was recorded correctly.
• Completeness
– All relevant data was recorded.
• Uniqueness
– Entities are recorded once.
• Timeliness
– The data is kept up to date.
• Special problems in federated data: time consistency.
• Consistency
– The data agrees with itself.

Adapted from Ted Johnson’s SIGMOD 2003 Tutorial


Problems …
• Unmeasurable
– Accuracy and completeness are extremely difficult,
perhaps impossible to measure.
• Context independent
– No accounting for what is important. E.g., if you are
computing aggregates, you can tolerate a lot of
inaccuracy.
• Incomplete
– What about interpretability, accessibility, metadata,
analysis, etc.
• Vague
– The conventional definitions provide no guidance
towards practical improvements of the data.

Adapted from Ted Johnson’s SIGMOD 2003 Tutorial


Finding a modern definition
• We need a definition of data quality which
– Reflects the use of the data
– Leads to improvements in processes
– Is measurable (we can define metrics)

• First, we need a better understanding of how


and where data quality problems occur
– The data quality continuum

Adapted from Ted Johnson’s SIGMOD 2003 Tutorial


Meaning of Data Quality (2)
• There are many types of data, which have
different uses and typical quality problems
– Federated data
– High dimensional data
– Descriptive data
– Longitudinal data
– Streaming data
– Web (scraped) data
– Numeric vs. categorical vs. text data

Adapted from Ted Johnson’s SIGMOD 2003 Tutorial


Meaning of Data Quality (3)
• There are many uses of data
– Operations
– Aggregate analysis
– Customer relations …
• Data Interpretation: the data is useless if we don't know all of the rules behind the data.
• Data Suitability: Can you get the answer from the available data?
– Use of proxy data
– Relevant data is missing

Adapted from Ted Johnson’s SIGMOD 2003 Tutorial


The Data Quality Continuum
• Data and information are not static; they flow in a data collection and usage process
– Data gathering
– Data delivery
– Data storage
– Data integration
– Data retrieval
– Data mining/analysis

Adapted from Ted Johnson’s SIGMOD 2003 Tutorial


Data Gathering
• How does the data enter the system?
• Sources of problems:
– Manual entry
– No uniform standards for content and formats
– Parallel data entry (duplicates)
– Approximations, surrogates – SW/HW
constraints
– Measurement errors.

Adapted from Ted Johnson’s SIGMOD 2003 Tutorial


Solutions
• Potential Solutions:
– Preemptive:
• Process architecture (build in integrity checks)
• Process management (reward accurate data entry,
data sharing, data stewards)
– Retrospective:
• Cleaning focus (duplicate removal, merge/purge,
name & address matching, field value
standardization)
• Diagnostic focus (automated detection of
glitches).

Adapted from Ted Johnson’s SIGMOD 2003 Tutorial
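One retrospective fix named above, field value standardization, can be sketched for phone numbers: canonicalize formats so that merge/purge and name-and-address matching can compare records. The target format is an illustrative choice:

```python
import re

def canonical_phone(raw):
    """Canonicalize a US phone number, or return None for diagnostic review."""
    digits = re.sub(r"\D", "", raw)          # strip punctuation and spaces
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]                  # drop country code
    if len(digits) != 10:
        return None                          # flag rather than guess
    return f"{digits[:3]}-{digits[3:6]}-{digits[6:]}"

print(canonical_phone("(973) 360-8779"))  # 973-360-8779
print(canonical_phone("97336o8327"))      # None: the letter 'o' leaves 9 digits
```

Returning None instead of guessing keeps the cleaning step honest: unfixable values go to the diagnostic pile, per the slide's split between cleaning and diagnostic focus.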


Internet of Things has Special Problems
• RFID data has many dropped readings
• Typically, use a smoothing filter to interpolate

SELECT DISTINCT tag_id
FROM RFID_stream [RANGE 5 sec]
GROUP BY tag_id

[Diagram: raw readings pass through a smoothing filter to produce smoothed output over time]
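A minimal sliding-window smoothing filter in the spirit of the query above: report a tag as present if it was seen within the last `window` time units, interpolating over dropped readings. The data and window choice mirror the RANGE 5 sec example:

```python
def smooth(readings, window=5):
    """readings: (time, tag_id or None) pairs; returns (time, present-set) pairs."""
    last_seen = {}
    out = []
    for t, tag in readings:
        if tag is not None:
            last_seen[tag] = t
        # A tag counts as present if seen at least once inside the window.
        out.append((t, {g for g, ts in last_seen.items() if t - ts <= window}))
    return out

# Tag 'A' drops out at t=3 and t=4 (typical RFID dropped readings),
# but the smoothed output still reports it present.
print(smooth([(1, "A"), (2, "A"), (3, None), (4, None), (5, "A")]))
```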
Physical Data Cleaning

Restock every time inventory goes below 5


Tracking Superman @ home?
Ubisense tracking data from Ryan Appierspach
He walks through walls; he flies across the room…
Too much cleaning and you lose detail.
Adding Quality Assessment
Data Delivery
• Destroying or mutilating information by
inappropriate pre-processing
– Inappropriate aggregation
– Nulls converted to default values
• Loss of data:
– Buffer overflows
– Transmission problems
– No checks

Adapted from Ted Johnson’s SIGMOD 2003 Tutorial


Solutions
• Build reliable transmission protocols
– Use a relay server
• Verification
– Checksums, verification parser
– Do the uploaded files fit an expected pattern?
• Relationships
– Are there dependencies between data streams and
processing steps
• Interface agreements
– Data quality commitment from the data stream
supplier.

Adapted from Ted Johnson’s SIGMOD 2003 Tutorial
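The checksum bullet can be made concrete: the supplier publishes a digest alongside each uploaded file, and the receiver recomputes it before loading. A minimal sketch with an invented payload:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Digest used to verify a delivered file end-to-end."""
    return hashlib.sha256(data).hexdigest()

payload = b"T.Das|9733608327|24.95\n"
published = sha256_of(payload)   # shipped with the file by the supplier

received = payload[:-2]          # simulate truncation in transit
if sha256_of(received) != published:
    print("checksum mismatch: reject before load")
```

This catches buffer overflows and transmission problems, but not semantic damage done before the checksum was computed; that is what the "expected pattern" verification parser is for.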


Data Storage
• You get a data set. What do you do with it?
• Problems in physical storage
– Can be an issue, but terabytes are cheap.
• Problems in logical storage
– Poor metadata.
• Data feeds are often derived from application programs or
legacy data sources. What does it mean?
– Inappropriate data models.
• Missing timestamps, incorrect normalization, etc.
– Ad-hoc modifications.
• Structure the data to fit the GUI.
– Hardware / software constraints.
• Data transmission via Excel spreadsheets, Y2K

Adapted from Ted Johnson’s SIGMOD 2003 Tutorial


Solutions
• Metadata
– Document and publish data specifications.
• Planning
– Assume that everything bad will happen.
– Can be very difficult.
• Data exploration
– Use data browsing and data mining tools to examine
the data.
• Does it meet the specifications you assumed?
• Has something changed?

Adapted from Ted Johnson’s SIGMOD 2003 Tutorial


Data Integration
• Combine data sets (acquisitions, across departments).
• Common source of problems
– Heterogenous data : no common key, different field formats
• Approximate matching
– Different definitions
• What is a customer: an account, an individual, a family, …
– Time synchronization
• Does the data relate to the same time periods? Are the time
windows compatible?
– Legacy data
• IMS, spreadsheets, ad-hoc structures
– Sociological factors
• Reluctance to share – loss of power.

Adapted from Ted Johnson’s SIGMOD 2003 Tutorial


Solutions
• Commercial Tools
– Significant body of research in data integration
– Many tools for address matching, schema mapping
are available.
• Data browsing and exploration
– Many hidden problems and meanings : must extract
metadata.
– View before and after results : did the integration go
the way you thought?

Adapted from Ted Johnson’s SIGMOD 2003 Tutorial


Data Retrieval
• Exported data sets are often a view of the actual
data. Problems occur because:
– Source data not properly understood.
– Need for derived data not understood.
– Just plain mistakes.
• Inner join vs. outer join
• Understanding NULL values
• Computational constraints
– E.g., too expensive to give a full history, we'll supply a snapshot.
• Incompatibility
– EBCDIC? Unicode?

Adapted from Ted Johnson’s SIGMOD 2003 Tutorial


Data Mining and Analysis
• What are you doing with all this data anyway?
• Problems in the analysis.
– Scale and performance
– Confidence bounds?
– Black boxes and dart boards
– Attachment to models
– Insufficient domain expertise
– Casual empiricism

Adapted from Ted Johnson’s SIGMOD 2003 Tutorial


Solutions
• Data exploration
– Determine which models and techniques are
appropriate, find data bugs, develop domain expertise.
• Continuous analysis
– Are the results stable? How do they change?
• Accountability
– Make the analysis part of the feedback loop.

Adapted from Ted Johnson’s SIGMOD 2003 Tutorial


Data Quality Constraints
• Many data quality problems can be captured by
static constraints based on the schema.
– Nulls not allowed, field domains, foreign key
constraints, etc.
• Many others are due to problems in workflow,
and can be captured by dynamic constraints
– E.g., orders above $200 are processed by Biller 2
• The constraints follow an 80-20 rule
– A few constraints capture most cases, thousands of
constraints to capture the last few cases.
• Constraints are measurable. Data Quality
Metrics?

Adapted from Ted Johnson’s SIGMOD 2003 Tutorial
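The two constraint families above can be sketched directly: static (schema) checks look at one row in isolation, while dynamic (workflow) checks encode process rules like the Biller 2 example. Field names and rules here are illustrative:

```python
orders = [
    {"id": 1, "amount": 150.0, "biller": "Biller 1"},
    {"id": 2, "amount": 350.0, "biller": "Biller 1"},  # breaks workflow rule
    {"id": 3, "amount": None,  "biller": "Biller 2"},  # breaks NOT NULL
]

def static_violations(row):
    """Schema-level constraints: nulls, field domains."""
    if row["amount"] is None:
        yield "amount must not be NULL"
    elif row["amount"] < 0:
        yield "amount must be non-negative"

def dynamic_violations(row):
    """Workflow constraints, evaluated against the process rules."""
    amt = row["amount"]
    if amt is not None and amt > 200 and row["biller"] != "Biller 2":
        yield "orders above $200 must be processed by Biller 2"

violations = [(o["id"], msg)
              for o in orders
              for msg in [*static_violations(o), *dynamic_violations(o)]]
print(violations)
```

Because constraints evaluate to counts of violations, they give exactly the measurable quantity the next slide asks for.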


Data Quality Metrics
• We want a measurable quantity
– Indicates what is wrong and how to improve
– Realize that DQ is a messy problem, no set of
numbers will be perfect
• Types of metrics
– Static vs. dynamic constraints
– Operational vs. diagnostic
• Metrics should be directionally correct with an
improvement in use of the data.
• A very large number of metrics are possible
– Choose the most important ones.

Adapted from Ted Johnson’s SIGMOD 2003 Tutorial


Examples of Data Quality Metrics
• Conformance to schema
– Evaluate constraints on a snapshot.
• Conformance to business rules
– Evaluate constraints on changes in the database.
• Accuracy
– Perform inventory (expensive), or use proxy (track
complaints). Audit samples?
• Accessibility
• Interpretability
• Glitches in analysis
• Successful completion of end-to-end process
Adapted from Ted Johnson’s SIGMOD 2003 Tutorial
Technical Approaches
• We need a multi-disciplinary approach to attack
data quality problems
– No one approach solves all problems
• Process management
– Ensure proper procedures
• Statistics
– Focus on analysis: find and repair anomalies in data.
• Database
– Focus on relationships: ensure consistency.
• Metadata / domain expertise
– What does it mean? Interpretation
Adapted from Ted Johnson’s SIGMOD 2003 Tutorial
Data Types
Ben Shneiderman, 1996
• 1D (sequences)
• 2D (maps)
• 3D (shaped)
• Temporal
• nD (relational)
• Trees (hierarchical)
• Networks (graphs)
• Others?

The Eyes Have It: A Task by Data Type Taxonomy for Information Visualization [Shneiderman, 96]
Tamara Munzner, 2013
Semantics vs. Types
• Data Semantics: the real-world meaning
e.g., company name, day of the month, person height, etc.
• Data Type: interpretation in terms of scales of measurement
e.g., quantity or category, sensible mathematical operations, data structure, etc.
[Diagram: scales of measurement: Nominal and Ordinal are categorical/qualitative; Interval and Ratio are quantitative]

On the theory of scales and measurements [S.Stevens, 46]


Data Types
• Nominal (Categorical) (N)
Are = or ≠ to other values
Apples, Oranges, Bananas, ...
• Ordinal (O)
Obey a < relationship
Small, medium, large
• Quantitative (Q)
Can do arithmetic on them
10 inches, 23 inches, etc.

On the theory of scales and measurements [S.Stevens, 46]


Data Types
• Q - Interval (location of zero arbitrary)
Dates: Jan 19; Location: (Lat, Long)
Like a geometric point. Cannot compare directly.
Only differences (i.e., intervals) can be compared
• Q - Ratio (zero fixed)
Measurements: Length, Mass, Temp, ...
Origin is meaningful, can measure ratios & proportions
Like a geometric vector, origin is meaningful

On the theory of scales and measurements [S.Stevens, 46]


Data Types
• N - Nominal (labels)
Operations: =,≠
• O - Ordinal (ordered)
Operations:=,≠,>,<
• Q - Interval (location of zero arbitrary)
Operations:=,≠,>,<,+,− (distance)
• Q - Ratio (zero fixed)
Operations:=,≠,>,<,+,−, , (proportions)

On the theory of scales and measurements [S.Stevens, 46]
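The scale hierarchy is exactly a table of permitted operations, which makes it easy to encode and check. A direct sketch of the list above ("diff" standing for subtraction, "ratio" for division):

```python
# Stevens's scales as sets of permitted operations.
SCALES = {
    "nominal":  {"eq"},
    "ordinal":  {"eq", "lt"},
    "interval": {"eq", "lt", "diff"},
    "ratio":    {"eq", "lt", "diff", "ratio"},
}

def allowed(scale, op):
    """Is operation `op` meaningful for data on scale `scale`?"""
    return op in SCALES[scale]

print(allowed("ordinal", "lt"))      # True: small < large is meaningful
print(allowed("interval", "ratio"))  # False: "Jan 19 is twice Jan 12" is nonsense
```

Each scale's set contains the one above it, mirroring the nominal → ordinal → interval → ratio hierarchy.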


Semantics
• Items have attributes (aka features)
• Each attribute is Quantitative, Nominal, or Ordinal
• Nominal / Ordinal = Dimensions: describe the data, independent variables
• Quantitative = Measures: numbers to be analyzed, dependent variables
Data vs. Conceptual Model
• Data Model: Low-level description of the data
Set with operations, e.g., floats with +, -, /, *
• Conceptual Model: Mental construction
Includes semantics, supports reasoning

[Table: data model vs. conceptual model, e.g., 1D floats ↔ temperature]
Data vs. Conceptual Model
• From data model...
32.5, 54.0, -17.3, … (floats)
• using conceptual model...
Temperature
• to data type
Continuous to 4 significant figures (Q)
Hot, warm, cold (O)
Burned vs. not burned (N)

Based on slide from Munzner


Data Dimensions
Univariate Data

http://www.smartmoney.com/marketmap/

Based on slide from M.Agrawala


Bivariate Data
Scatterplot is common

Based on slide from M.Agrawala


Trivariate Data
Do NOT use 3D scatterplots!

Based on slide from M.Agrawala


Trivariate Data
Map the third dimension to some other visual attribute

Based on slide from M.Agrawala


Multivariate Data
Give each attribute its own display (small multiples)

Based on slide from M.Agrawala


Multivariate Data
Representations
Data Reduction
• Filtering: Eliminate some items or attributes
e.g., select range of interest, zoom in, remove outliers, etc.
• Aggregation: Represent a group of elements by a new derived element
e.g., take average, min, max, count, sum
Attribute aggregation a.k.a. dimensionality reduction
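The two reduction operations can be sketched on plain records; the field names and range of interest below are illustrative:

```python
from collections import defaultdict

readings = [
    {"sensor": "a", "temp": 21.5}, {"sensor": "a", "temp": 22.0},
    {"sensor": "b", "temp": 19.0}, {"sensor": "b", "temp": 400.0},  # glitch
]

# Filtering: eliminate items outside the range of interest.
kept = [r for r in readings if 0 <= r["temp"] <= 60]

# Aggregation: represent each group by a new derived element (its mean).
groups = defaultdict(list)
for r in kept:
    groups[r["sensor"]].append(r["temp"])
means = {s: sum(v) / len(v) for s, v in groups.items()}
print(means)  # {'a': 21.75, 'b': 19.0}
```

Note the order matters: aggregating before filtering would fold the 400.0 glitch into sensor b's mean, an instance of the "inappropriate aggregation" problem from the data delivery slide.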
