BRM CH-6

Chapter Six discusses the processes of data processing, analysis, and interpretation essential for research. It outlines key operations such as editing, coding, classification, tabulation, data screening, and data entry, followed by methods of data analysis including descriptive and inferential statistics. The chapter emphasizes the importance of careful interpretation to avoid errors and draw valid conclusions from the data.


Chapter Six

DATA PROCESSING, ANALYSIS, AND INTERPRETATION
6.1 Introduction
 The data, after collection, have to be processed and analyzed in accordance with the outline laid down for the purpose at the time of developing the research plan.
• After processing and analyzing the data, the researcher has to accomplish the task of drawing inferences, followed by report writing.
6.2 Data Processing Operations
• Data remain in raw form unless and until they are processed and analyzed.
• Processing is the procedure by which the collected data are organized so that further analysis and interpretation of the data become easy.
• Data processing operations are the prerequisites for data
analysis. These include:
♠ Editing
♠ Coding
♠ Classification
♠ Tabulation
♠ Data Screening
♠ Data Entry
1. Editing

• It is the process of examining the collected raw data to detect errors and omissions and to correct them where possible.
• Editing involves a careful scrutiny of the completed questionnaires.
• It is done to assure that the data are accurate, consistent with other facts gathered, and uniformly entered.
• It also ensures that the data are as complete as possible and well arranged to facilitate coding and tabulation.
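
As a minimal sketch of how such editing checks might be automated, assuming the completed questionnaires have already been captured in a pandas DataFrame; the columns age, employed, and hours_per_week are hypothetical, not taken from the chapter:

import pandas as pd

# Hypothetical completed questionnaires: age in years, employment status,
# and weekly working hours.
responses = pd.DataFrame({
    "age": [34, 27, None, 41],
    "employed": ["yes", "no", "yes", "yes"],
    "hours_per_week": [40, 15, 38, None],
})

# Flag omissions (blank answers).
omissions = responses[responses.isnull().any(axis=1)]

# Flag answers that are inconsistent with other facts gathered: a respondent
# who reports being unemployed but also reports weekly working hours.
inconsistent = responses[(responses["employed"] == "no") &
                         (responses["hours_per_week"] > 0)]

print(omissions)
print(inconsistent)
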
2. Coding

• It refers to the process of assigning numerals or other symbols to responses.
• Each respondent is given a code number for identification.
• Responses are coded so that they can be put into a limited number of categories or classes.
• The classes should be appropriate to the research problem.
• They must also possess the characteristic of exhaustiveness (i.e., a class for every data item).
• Coding must keep uni-dimensionality, i.e., every class is defined in terms of only one concept.
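
A minimal sketch of coding in Python, assuming responses to a hypothetical five-point satisfaction question; the code book below is illustrative, not taken from the chapter:

import pandas as pd

# Hypothetical survey answers to the question "How satisfied are you?"
answers = pd.Series(["satisfied", "very satisfied", "neutral",
                     "dissatisfied", "satisfied"])

# Code book: every possible response gets exactly one numeral (exhaustive,
# and uni-dimensional, since all codes refer to the single concept "satisfaction").
code_book = {
    "very dissatisfied": 1,
    "dissatisfied": 2,
    "neutral": 3,
    "satisfied": 4,
    "very satisfied": 5,
}

coded = answers.map(code_book)
print(coded.tolist())   # [4, 5, 3, 2, 4]
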
3. Classification

 A large volume of raw data must be reduced into homogeneous groups to get meaningful relationships.
• Classification is the process of arranging data in groups or classes on the basis of common characteristics.
• Data having a common characteristic are placed in one class.
• In this way the entire data set gets divided into a number of groups or classes (themes).
Two ways of classification

a) Classification according to attributes: data are classified based on common descriptive or internal characteristics (such as literacy, sex, honesty, etc.).
 Refers to qualitative phenomena which cannot be measured quantitatively.
b) Classification according to class intervals: data are classified based on numerical characteristics.
 Refers to quantitative phenomena which can be measured in some statistical units.
 Examples include data relating to income, production, age, weight, etc.
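
A minimal sketch of classification according to class intervals, assuming pandas is available; the income figures and the interval width are hypothetical:

import pandas as pd

# Hypothetical monthly incomes collected from respondents.
income = pd.Series([1200, 3400, 2650, 5100, 800, 4300, 2900, 3750])

# Classify according to class intervals of width 1500.
bins = [0, 1500, 3000, 4500, 6000]
labels = ["0-1500", "1501-3000", "3001-4500", "4501-6000"]
income_class = pd.cut(income, bins=bins, labels=labels)

# Each observation now belongs to exactly one homogeneous group.
print(income_class.value_counts().sort_index())
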
Objectives of classification

• To organize data into concise, logical and intelligible form.
• To make the similarities and dissimilarities between various classes clear.
• To facilitate comparison between various classes of data.
• To help the researcher in understanding the significance of various classes of data.
• To facilitate analysis and the formulation of generalizations.


4. Tabulation

 Tabulation is the next step after classification. It is the process of summarizing raw data and displaying them in compact form (i.e., in the form of statistical tables) for further analysis.
• It is an orderly arrangement of data in columns and rows.
Importance of tabulation:
 It conserves space and reduces explanatory and descriptive statements to a minimum.
 It facilitates the process of comparison.
 It facilitates the summation of items and the detection of errors and omissions.
 It provides a basis for various statistical computations.
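
A minimal sketch of tabulation in Python, assuming pandas; the payment-method responses are hypothetical:

import pandas as pd

# Hypothetical raw responses on preferred payment method.
raw = pd.Series(["cash", "card", "cash", "mobile", "cash", "card",
                 "mobile", "cash"])

# Tabulate: counts and percentages arranged in rows and columns.
table = pd.DataFrame({
    "frequency": raw.value_counts(),
    "percent": (raw.value_counts(normalize=True) * 100).round(1),
})
print(table)
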
5. Data Screening

 Immediately following data collection, but prior to data entry, the researcher should carefully screen all data for accuracy.
 It helps to ensure the accuracy and completeness of the data.
 The promptness of these procedures is very important, because research staff may still be able to re-contact study participants to address any omissions, errors, or inaccuracies.
 Using computerized assessment instruments is efficient and highly preferred.
Cont’d
Computerized assessments can be programmed:
 To accept only responses within certain ranges,
 To check for blank fields or skipped items,
 To identify inconsistencies between responses, and
 To electronically transfer the data into a permanent database.
Screening helps to make certain that:
– responses are legible and understandable,
– responses are within an acceptable range,
– responses are complete, and
– all of the necessary information has been included.
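
A minimal sketch of such programmed screening checks, assuming pandas; the respondent_id, age, and satisfaction fields and their acceptable ranges are hypothetical:

import pandas as pd

# Hypothetical collected questionnaire data.
data = pd.DataFrame({
    "respondent_id": [101, 102, 103],
    "age": [29, 134, 45],          # 134 falls outside the acceptable range
    "satisfaction": [4, None, 5],  # blank field / skipped item
})

# Accept only responses within certain ranges.
out_of_range = data[(data["age"] < 18) | (data["age"] > 99)]

# Check for blank fields or skipped items.
skipped = data[data["satisfaction"].isnull()]

# Flagged cases can then be resolved by re-contacting the respondents.
print(out_of_range[["respondent_id", "age"]])
print(skipped[["respondent_id", "satisfaction"]])
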
6. Data Entry

 After the data have been screened for completeness and accuracy, they have to be entered into a well-structured database.
• One way of ensuring the accuracy of data entry is double entry.
• In the double-entry procedure, data are entered into the database twice and then compared to determine whether there are any discrepancies.
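
A minimal sketch of the double-entry comparison, assuming both entry passes are held in pandas DataFrames; the records are hypothetical:

import pandas as pd

# Hypothetical first and second entries of the same questionnaires.
entry_1 = pd.DataFrame({"id": [1, 2, 3], "age": [25, 31, 47], "score": [3, 4, 2]})
entry_2 = pd.DataFrame({"id": [1, 2, 3], "age": [25, 13, 47], "score": [3, 4, 2]})

# Compare the two entries cell by cell; any discrepancy points to a keying
# error that must be checked against the original questionnaire.
discrepancies = entry_1.compare(entry_2)
print(discrepancies)   # row 1: age entered as 31 in one pass and 13 in the other
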
Cont’d

 Many database programs can be used for data entry.
• Some of them include Microsoft Excel, Microsoft Access, SPSS, etc.
• These databases can be set up to reject information that does not meet the preset criteria.
• Data entry using the above tools will substantially reduce the time spent on data cleaning.
6.3 Data Analysis

• Analysis means studying the tabulated material in order to determine inherent facts or meanings.
• It involves breaking down existing complex factors into simpler parts and putting the parts together in new arrangements for the purpose of interpretation.
• Statistical processing is required in order to perform a scientific analysis.
• Analysis of data means making raw data meaningful.


Need for Data Analysis

The analysis of data serves the following main functions:
♠ To make the raw data meaningful,
♠ To test the null hypothesis,
♠ To obtain significant results,
♠ To draw inferences or make generalizations, and
♠ To estimate parameters.


Statistical Analysis of Data:

 Statistics is the body of mathematical techniques or processes for gathering, describing, organizing and interpreting numerical data.
• It is, thus, a basic tool of measurement and research.
• Research may involve two types of statistical analysis:
1. Descriptive Statistical Analysis, and
2. Inferential Statistical Analysis.


1. Descriptive Statistical Analysis

• It is concerned with the numerical description of a particular group observed.
• The data describe one group, and that one group only; no other variables are taken care of.
• It provides valuable information about the nature of a particular group or class.
• It helps to summarize data either numerically or graphically.
Common Descriptive Techniques are:
 Percentages: Percentages are a popular method of displaying a distribution.
 Percentages are powerful in making comparisons.
 Percentages simplify the data by reducing all numbers to a range of 0 to 100.
 Frequency Tables: One of the most common ways to describe a single variable is with a frequency distribution.
 A frequency distribution can be depicted in two ways: as a table or as a graph.
 If the frequency distribution is depicted in the form of a table, we call it a frequency table.
Cont’d

• Contingency Tables: A contingency table shows the relationship between two variables in tabular form.
• Contingency tables are especially used in the Chi-square test.
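
A minimal sketch of building a contingency table and feeding it into a Chi-square test, assuming pandas and SciPy; the gender and product-preference data are hypothetical:

import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical responses: gender and product preference.
df = pd.DataFrame({
    "gender": ["F", "M", "F", "M", "F", "M", "F", "M"],
    "prefers": ["A", "A", "B", "A", "B", "B", "A", "A"],
})

# Contingency table: relationship between the two variables in tabular form.
table = pd.crosstab(df["gender"], df["prefers"])
print(table)

# The same table feeds directly into a Chi-square test of independence.
chi2, p_value, dof, expected = chi2_contingency(table)
print(chi2, p_value)
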

• Graphs and Diagrams: Diagrams and graphs are among the methods which simplify the complexity of quantitative data and make them easily intelligible.
• They present dry and uninteresting statistical facts in the shape of attractive and appealing pictures.
• They have a more lasting effect on the human mind than figures do.
The following graphs are commonly used to represent data:
 Line graphs or charts
 Bar graphs
 Circle charts or pie diagrams
 Pictograms
 Histograms
 Cumulative frequency polygons (ogives)
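
As a minimal illustration of one of these graph types, a bar graph drawn with matplotlib; the categories and frequencies are hypothetical:

import matplotlib.pyplot as plt

# Hypothetical frequency table to be displayed graphically.
categories = ["cash", "card", "mobile"]
frequencies = [4, 2, 2]

# Bar graph of the frequencies.
plt.bar(categories, frequencies)
plt.xlabel("Payment method")
plt.ylabel("Frequency")
plt.title("Preferred payment method")
plt.show()
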


• In descriptive analysis, univariate analysis, bivariate analysis and multivariate analysis can be carried out.
1. Univariate analysis: involves describing the distribution of a single variable, including its central tendency and dispersion.
2. Bivariate analysis: is one of the simplest forms of quantitative (statistical) analysis.
 It involves the analysis of two variables (often denoted as X and Y), for the purpose of determining the empirical relationship between them.
Cont’d
3. Multivariate analysis: multiple relations between multiple variables are examined simultaneously.
 Multivariate analysis (MVA) is based on the statistical principle of multivariate statistics, which involves observation and analysis of more than one statistical outcome variable at a time.
 In design and analysis, the technique is used to perform trade studies across multiple dimensions while taking into account the effects of all variables on the responses of interest.
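
A minimal sketch contrasting univariate and bivariate analysis in Python, assuming pandas; the advertising and sales figures are hypothetical:

import pandas as pd

# Hypothetical data on two variables, X = advertising spend, Y = sales.
df = pd.DataFrame({
    "advertising": [10, 12, 9, 15, 11, 14],
    "sales": [100, 115, 90, 140, 108, 132],
})

# Univariate analysis: distribution of a single variable (central tendency, dispersion).
print(df["sales"].mean(), df["sales"].std())

# Bivariate analysis: empirical relationship between the two variables.
print(df["advertising"].corr(df["sales"]))
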
Tools and Statistical Methods for Descriptive Analysis
• In descriptive statistics we develop certain indices and measures of the raw data. These include:
• Measures of Central Tendency: estimates such as the mean, median, mode, geometric mean, and harmonic mean.
• Measures of Dispersion: the most common measures of dispersion are the range and the standard deviation.
• They can be used to compare the variability of two statistical series.
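
A minimal sketch computing the most common of these measures with Python's standard statistics module; the exam scores are hypothetical:

import statistics

# Hypothetical exam scores.
scores = [62, 70, 70, 75, 81, 88, 94]

# Measures of central tendency.
print(statistics.mean(scores))      # arithmetic mean
print(statistics.median(scores))
print(statistics.mode(scores))

# Measures of dispersion.
print(max(scores) - min(scores))    # range
print(statistics.stdev(scores))     # sample standard deviation
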
Cont’d
• Measures of Skewness and Kurtosis: A fundamental task in many statistical analyses is to characterize the location and variability of a data set.
• Skewness is a measure of symmetry, or more precisely, the lack of symmetry.
• A distribution, or data set, is symmetric if it looks the same to the left and right of the center point.
• Kurtosis is a measure of whether the data are peaked or flat relative to a normal distribution.
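
A minimal sketch of these two measures using SciPy; the income values are hypothetical and deliberately right-skewed:

from scipy.stats import skew, kurtosis

# Hypothetical right-skewed data (a few large incomes pull the tail out).
incomes = [1200, 1300, 1350, 1400, 1500, 1600, 1700, 9000]

print(skew(incomes))       # > 0 indicates a longer right tail (lack of symmetry)
print(kurtosis(incomes))   # excess kurtosis relative to a normal distribution
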
2. Inferential Statistical Analysis

• Inferential statistics deals with forecasting, estimating or judging some results about the universe based on some units selected from the universe.
• This selection process is called sampling. Inferential statistics facilitates estimation of some population values known as parameters.
• It also deals with testing of hypotheses to determine with what validity the conclusions are drawn.
• A sample characteristic (computed from the data collected on the sample) is called a statistic.
• The corresponding population characteristic being estimated is called a parameter.
• The primary purpose of research is to discover principles that have universal application.
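
A minimal sketch of an inferential test, assuming SciPy; the sample of satisfaction scores and the hypothesized population mean of 3.5 are hypothetical:

from scipy.stats import ttest_1samp

# Hypothetical sample of customer-satisfaction scores drawn from the universe.
sample = [3.8, 4.1, 3.5, 4.4, 3.9, 4.2, 3.7, 4.0]

# Test the null hypothesis that the population mean satisfaction equals 3.5.
t_stat, p_value = ttest_1samp(sample, popmean=3.5)
print(t_stat, p_value)

# A small p-value (e.g., below 0.05) would lead us to reject the null
# hypothesis and infer that the population mean differs from 3.5.
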
6.5 Interpretation

• Interpretation refers to the technique of drawing inferences from the collected facts of the study.
• It is a search for the broader and more abstract meaning of the research findings.
• If the interpretation is not done very carefully, misleading conclusions may be drawn.
• The interpreter must be creative with ideas, and he or she should be free from bias and prejudice.
Fundamental principles of interpretation

• Sound interpretation involves willingness on the part of the interpreter to see what is in the data.
• Sound interpretation requires that the interpreter knows something more than the mere figures.
• Sound interpretation demands logical thinking.
• Clear and simple language is necessary for communicating the interpretation.
Cont’d
Errors of interpretation
• The errors of interpretation can be classified into two
groups.
1. Errors due to false generalizations: these errors occur when:
(i) unwarranted conclusions are drawn from the facts available,
(ii) conclusions are drawn from an argument running from effect to cause,
(iii) comparisons are made between two sets of data with unequal bases,
(iv) conclusions are drawn from data irrelevant to the problem, and
(v) false generalizations and faulty statistical methods are used.
Cont’d
2. Errors due to misuse of statistical measures: these errors occur when:
(i) conclusions are based only on what is true on average,
(ii) percentages are used for comparisons when the total numbers are different,
(iii) index numbers are used without proper care, and
(iv) a casual (chance) correlation is treated as a real correlation.
The End
Thank You!
