Research Methods Notes
Types of Research:
There are different types of research. The basic ones are as follows.
Descriptive Versus Analytical:
Descriptive research consists of surveys and fact-finding enquiries of different types. The main
objective of descriptive research is to describe the state of affairs as it exists at present; it is
widely used in social science and business research. The most distinguishing feature of this
method is that the researcher has no control over the variables; he/she can only report what is
happening or what has happened. The majority of ex post facto research projects are used for
descriptive studies in which the researcher attempts to examine phenomena such as the
frequency of purchases, shopping habits, etc. Despite the inability of the researchers to control
the variables, ex post facto studies may also comprise attempts by them to discover the causes of
the selected problem. The methods of research adopted in conducting descriptive research are
survey methods of all kinds, including correlational and comparative methods.
In analytical research, on the other hand, the researcher has to use facts or information that are
already available, and analyze them to make a critical evaluation of the subject.
Applied Versus Fundamental: Research can also be applied or fundamental in nature. An
attempt to find a solution to an immediate problem encountered by a firm, an industry, a business
organization, or society is known as applied research. Researchers engaged in such research aim
at drawing conclusions about a concrete social or business problem.
On the other hand, fundamental research is mainly concerned with generalizations and with the
formulation of a theory: "gathering knowledge for knowledge's sake is termed pure or basic
research" (Young in Kothari, 1988). Research relating to pure mathematics or concerning
some natural phenomenon is an instance of fundamental research. Likewise, studies focusing on
human behaviour also fall under the category of fundamental research.
Thus, while the principal objective of applied research is to find a solution to some pressing
practical problem, the objective of basic research is to find information with a broad base of
application and add to the already existing organized body of scientific knowledge.
Definition of Schedule
The schedule is a proforma which contains a list of questions filled in by the research
workers or enumerators, specially appointed for the purpose of data collection.
Enumerators go to the informants with the schedule, ask them the questions from the set
in sequence, and record the replies in the space provided.
There are certain situations, where the schedule is distributed to the respondents, and the
enumerators assist them in answering the questions.
Sampling Methods
Sampling is a process used in statistical analysis in which a predetermined number of
observations are taken from a larger population.
Sampling plays a major role in research: it is one of the most important factors determining
the accuracy of your research/survey result. If anything goes wrong with your sample, it will
be directly reflected in the final result.
A sample is a subset of the population. The process of selecting a sample is known as
sampling, and the number of elements in the sample is the sample size.
There are many sampling techniques, which are grouped into two categories:
Probability Sampling
Non- Probability Sampling
Probability Sampling
This sampling technique uses randomization to make sure that every element of the
population gets a known, non-zero chance of being part of the selected sample. It is also known
as random sampling.
Simple Random Sampling
Stratified sampling
Systematic sampling
Cluster Sampling
Simple Random Sampling: Every element has an equal chance of being selected as part of the
sample. It is used when we don't have any prior information about the target population.
For example: random selection of 20 students from a class of 50 students. Each student
has an equal chance of being selected; here the probability of inclusion is 20/50 = 0.4.
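The class-of-50 example above can be reproduced with Python's standard library (the student numbers are placeholders):

```python
import random

random.seed(42)  # fixed seed so the draw is reproducible

population = list(range(1, 51))         # 50 students, numbered 1..50
sample = random.sample(population, 20)  # draw 20 without replacement

print(len(sample))       # 20
print(len(set(sample)))  # 20 -- no student is picked twice
# Each student's chance of ending up in the sample is 20/50 = 0.4
```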
Stratified Sampling
This technique divides the elements of the population into small subgroups (strata) based on
similarity, in such a way that the elements within a group are homogeneous and heterogeneous
with respect to the other subgroups. Elements are then randomly selected from each of these
strata. We need prior information about the population to create the subgroups.
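A proportionate stratified draw can be sketched as below; the strata names and sizes are invented for illustration:

```python
import random

random.seed(0)

# Hypothetical population grouped into strata by year of study
strata = {
    "first_year":  [f"F{i}" for i in range(60)],
    "second_year": [f"S{i}" for i in range(30)],
    "third_year":  [f"T{i}" for i in range(10)],
}

def stratified_sample(strata, fraction):
    """Randomly draw the same fraction from every stratum."""
    sample = []
    for name, members in strata.items():
        k = round(len(members) * fraction)
        sample.extend(random.sample(members, k))
    return sample

picked = stratified_sample(strata, 0.2)
print(len(picked))  # 12 + 6 + 2 = 20 elements
```

Because every stratum contributes the same fraction, the sample keeps the population's proportions by construction.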
Cluster Sampling
The entire population is divided into clusters or sections, and then the clusters are randomly
selected. All the elements of a selected cluster are used for sampling.
Clusters are identified using details such as age, sex, location, etc.
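A single-stage cluster draw, where whole clusters are chosen at random and every member of a chosen cluster is used, might look like this sketch (the cluster names and members are invented):

```python
import random

random.seed(1)

# Hypothetical clusters keyed by location
clusters = {
    "north": ["n1", "n2", "n3"],
    "south": ["s1", "s2"],
    "east":  ["e1", "e2", "e3", "e4"],
    "west":  ["w1", "w2"],
}

# Randomly pick 2 whole clusters...
chosen = random.sample(list(clusters), 2)

# ...and take every element of each chosen cluster
sample = [member for name in chosen for member in clusters[name]]
print(chosen, sample)
```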
Non-Probability Sampling
It does not rely on randomization. This technique is more reliant on the researcher's ability to
select elements for a sample. This type of sampling is also known as non-random sampling.
Convenience Sampling
Purposive Sampling
Quota Sampling
Referral /Snowball Sampling
Convenience Sampling
Here the samples are selected based on availability. This method is used when obtaining a
sample by other means is difficult or costly, so samples are selected based on convenience.
For example: researchers prefer this during the initial stages of survey research, as it is
quick and easy to deliver results.
Purposive Sampling
This is based on the intention or the purpose of the study. Only those elements which suit the
purpose of our study best will be selected from the population.
For example: if we want to understand the thought process of people who are interested in
pursuing a master's degree, the selection criterion would be a question such as "Are you
interested in a master's?", and all the people who respond with a "No" will be excluded
from our sample.
Quota Sampling
This type of sampling depends on some pre-set standard. It selects a representative
sample from the population. The proportion of a characteristic or trait in the sample should be
the same as in the population. Elements are selected until exact proportions of certain types of
data are obtained, or sufficient data in different categories is collected.
For example: If our population has 45% females and 55% males then our sample should
reflect the same percentage of males and females.
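The 45%/55% example above translates into a quota-filling loop like the following sketch (the respondent stream is invented):

```python
# Target counts taken from the population proportions (45% female, 55% male)
quota = {"female": 45, "male": 55}   # per 100 respondents

def fill_quota(stream, quota):
    """Accept respondents from an incoming stream until each quota is met."""
    counts = {group: 0 for group in quota}
    sample = []
    for respondent, group in stream:
        if counts[group] < quota[group]:
            counts[group] += 1
            sample.append(respondent)
        if counts == quota:
            break
    return sample, counts

# Toy stream alternating genders; the ids are hypothetical
stream = [(f"r{i}", "female" if i % 2 == 0 else "male") for i in range(300)]
sample, counts = fill_quota(stream, quota)
print(counts)  # {'female': 45, 'male': 55}
```

Once a group's quota is full, further respondents from that group are simply skipped.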
Referral /Snowball Sampling
This technique is used in situations where the population is completely unknown or
rare.
We take the help of the first element that we select for the sample and ask that respondent
to recommend other elements who fit the description of the sample needed.
This referral technique goes on, increasing the size of the sample like a snowball.
E.g., surveying people with a rare chronic disease.
Tabulating is a way of processing information or data by putting it in a table. This doesn't mean
the kind of table you eat off of, though. It refers to a table, or chart, with rows and columns.
When tabulating, you might have to make calculations.
What is simple tabulation?
The process of placing classified data into tabular form is known as tabulation. A table is
a symmetric arrangement of statistical data in rows and columns. Rows are horizontal
arrangements whereas columns are vertical arrangements. Tabulation may be simple, double or
complex depending upon the type of classification.
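Simple tabulation of classified data can be done with Python's standard library; the survey answers here are invented:

```python
from collections import Counter

# Hypothetical classified responses
responses = ["agree", "disagree", "agree", "neutral", "agree", "disagree"]

table = Counter(responses)

# Print as a two-column table: category | frequency
for category, freq in table.most_common():
    print(f"{category:10s} | {freq}")
# agree      | 3
# disagree   | 2
# neutral    | 1
```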
Interpretation of Data: Data analysis and interpretation is the process of assigning meaning to
the collected information and determining the conclusions, significance, and implications of the
findings.
What is the significance of data interpretation?
Data interpretation refers to the process of critiquing and determining the significance of
important information, such as survey results, experimental findings, observations or
narrative reports. Interpreting data is an important critical thinking skill that helps you
comprehend text books, graphs and tables.
The varying scales include:
Nominal Scale: non-numeric categories that cannot be ranked or compared
quantitatively. Variables are mutually exclusive and exhaustive.
Ordinal Scale: categories that are mutually exclusive and exhaustive but with a logical
order. Quality ratings and agreement ratings are examples of ordinal scales (i.e., good,
very good, fair, etc., or agree, strongly agree, disagree, etc.).
Interval Scale: a measurement scale where data is grouped into categories with an order and
equal distances between the categories. The zero point is arbitrary.
Ratio Scale: contains the features of all three, plus a true (absolute) zero point, so ratios of
values are meaningful.
Qualitative Data Interpretation
Qualitative data analysis can be summed up in one word: categorical. With qualitative
analysis, data is not described through numerical values or patterns, but through the use
of descriptive context (i.e., text). These techniques include:
Observations: detailing behavioral patterns that occur within an observation group.
Documents: much like how patterns of behavior can be observed, different types of
documentation resources can be coded and divided based on the type of material they
contain.
Interviews: one of the best collection methods for narrative data. Enquiry responses can
be grouped by theme, topic or category.
Quantitative Data Interpretation
More often than not, quantitative interpretation involves the use of statistical measures such as
the mean, median and standard deviation. Let us quickly review the most common statistical terms:
Mean: a mean represents a numerical average for a set of responses. When dealing with a
data set (or multiple data sets), a mean will represent a central value of a specific set of
numbers.
Standard deviation: reveals the distribution of the responses around the mean and
describes the degree of consistency within the responses.
Frequency distribution: when using a survey, for example, frequency distribution
determines the number of times a specific ordinal-scale response appears (i.e., agree,
strongly agree, disagree, etc.). Frequency distribution is extremely useful in determining
the degree of consensus among data points.
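All three terms can be computed with Python's built-in statistics module; the responses below are invented 5-point ratings:

```python
import statistics
from collections import Counter

# Hypothetical 5-point Likert responses coded 1..5
responses = [4, 5, 4, 3, 5, 4, 2, 4]

mean = statistics.mean(responses)    # central value: 31/8 = 3.875
stdev = statistics.stdev(responses)  # sample standard deviation around the mean
freq = Counter(responses)            # frequency distribution of each response

print(mean)   # 3.875
print(freq)   # Counter({4: 4, 5: 2, 3: 1, 2: 1})
```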
What is the meaning of data analysis in research?
The process of evaluating data using analytical and logical reasoning to examine each
component of the data provided. Data from various sources is gathered, reviewed, and
then analyzed to form some sort of finding or conclusion.
Levels of Measurement Scales
The level of measurement refers to the relationship among the values that are assigned to the
attributes, feelings or opinions for a variable.
Typically, there are four levels of measurement scales or methods of assigning numbers:
(a) Nominal scale,
(b) Ordinal scale,
(c) Interval scale, and
(d) Ratio scale.
Nominal Scale is the crudest among all measurement scales but it is also the simplest scale. In
this scale the different scores on a measurement simply indicate different categories. The
nominal scale does not express any values or relationships between variables.
Example: Nominal scales are used for labeling variables, without any quantitative value.
Notice that nominal categories are mutually exclusive (no overlap) and none of them have any
numerical significance. A good way to remember all of this is that "nominal" sounds a lot like
"name", and nominal scales are kind of like names or labels.
Ordinal Scale involves the ranking of items along the continuum of the characteristic being
scaled. In this scale, the items are classified according to whether they have more or less of a
characteristic.
The main characteristic of the ordinal scale is that the categories have a logical or ordered
relationship. This type of scale permits the measurement of degrees of difference,
but not the specific amount of difference (i.e., how much more or less of the characteristic).
COMPARATIVE SCALES
In comparative scaling, the respondent is asked to compare one object with another. The
comparative scales can further be divided into the following four types of scaling techniques:
(a) Paired Comparison Scale,
(b) Rank Order Scale,
(c) Constant Sum Scale, and
(d) Q-sort Scale.
Paired Comparison Scale: This is a comparative scaling technique in which a respondent is
presented with two objects at a time and asked to select one object according to some criterion.
The data obtained are ordinal in nature.
For example, there are four types of cold drinks: Coke, Pepsi, Sprite, and Limca.
The respondents can prefer Pepsi to Coke, or Coke to Sprite, etc.
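Tallying one respondent's paired-comparison answers for the four drinks can be sketched as follows; the recorded preferences are invented:

```python
from collections import Counter
from itertools import combinations

brands = ["Coke", "Pepsi", "Sprite", "Limca"]

# With 4 objects there are 4*3/2 = 6 pairs to present
pairs = list(combinations(brands, 2))
print(len(pairs))  # 6

# Hypothetical winner chosen by the respondent for each of the 6 pairs
choices = ["Pepsi", "Coke", "Coke", "Pepsi", "Pepsi", "Sprite"]

# Counting wins yields an ordinal ranking, as the text notes
wins = Counter(choices)
ranking = [brand for brand, _ in wins.most_common()]
print(ranking)  # ['Pepsi', 'Coke', 'Sprite'] (Limca won no pair)
```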
NON-COMPARATIVE SCALES
In non-comparative scaling respondents need only evaluate a single object. Their evaluation is
independent of the other object which the researcher is studying.
The non-comparative scaling techniques can be further divided into:
(a) Continuous Rating Scale, and
(b) Itemized Rating Scale.
Continuous Rating Scales: This is very simple and highly useful. In a continuous rating scale,
the respondent rates the object by placing a mark at the appropriate position on a continuous line
that runs from one extreme of the criterion variable to the other.
Example: Question: How would you rate the TV advertisement as a guide for buying?
The itemized rating scales can be in the form of: (a) graphic, (b) verbal, or (c) numeric, as shown
below:
Likert Scale: The Likert scale is extremely popular for measuring attitudes because the method
is simple to administer. With the Likert scale, the respondents indicate their own attitudes by
checking how strongly they agree or disagree with carefully worded statements that range from
very positive to very negative towards the attitudinal object. Respondents generally choose from
five alternatives (say strongly agree, agree, neither agree nor disagree, disagree, strongly
disagree).
A Likert scale may include a number of items or statements. A disadvantage of the Likert scale
is that it takes longer to complete than other itemized rating scales, because respondents have to
read each statement.
Despite this disadvantage, the scale has several advantages: it is easy to construct, administer
and use.
Semantic Differential Scale: This is a seven-point rating scale with end points associated with
bipolar labels (such as good and bad, complex and simple) that have semantic meaning. It can be
used to find whether a respondent has a positive or negative attitude towards an object. It has
been widely used in comparing brands, products and company images. It has also been used to
develop advertising and promotion strategies and in a new product development study.
Stapel Scale: The Stapel scale was originally developed to measure the direction and intensity of
an attitude simultaneously. Modern versions of the Stapel scale place a single adjective as a
substitute for the semantic differential when it is difficult to create pairs of bipolar adjectives.
The modified Stapel scale places a single adjective in the centre of an even number of numerical
values.
Sources of Error in Measurement
Measurement should be precise and unambiguous in an ideal research study. This objective,
however, is often not met in entirety. As such, the researcher must be aware of the
sources of error in measurement. The following are the possible sources of error in measurement.
Respondent: At times the respondent may be reluctant to express strong negative feelings, or it
is just possible that he may have very little knowledge but may not admit his ignorance. Transient
factors like fatigue, boredom, anxiety, etc. may limit the ability of the respondent to respond
accurately and fully.
Situation: Situational factors may also come in the way of correct measurement. Any condition
which places a strain on interview can have serious effects on the interviewer-respondent
rapport. For instance, if someone else is present, he can distort responses by joining in or merely
by being present. If the respondent feels that anonymity is not assured, he may be reluctant to
express certain feelings.
Measurer: The interviewer can distort responses by rewording or reordering questions. His
behavior, style and looks may encourage or discourage certain replies from respondents. Careless
mechanical processing may distort the findings. Errors may also creep in because of incorrect
coding, faulty tabulation and/or statistical calculations, particularly in the data-analysis stage.
Instrument: Error may arise because of the defective measuring instrument. The use of complex
words, beyond the comprehension of the respondent, ambiguous meanings, poor printing,
inadequate space for replies, response choice omissions, etc. are a few things that make the
measuring instrument defective and may result in measurement errors. Another type of
instrument deficiency is the poor sampling of the universe of items of concern. Researcher must
know that correct measurement depends on successfully meeting all of the problems listed
above. He must, to the extent possible, try to eliminate, neutralize or otherwise deal with all the
possible sources of error so that the final results may not be contaminated.
Research Design: The most important step after defining the research problem is preparing the
research design. The research design helps to decide upon issues like what, when, where, how
much, by what means, etc. With
regard to an enquiry or a research study, a research design is the arrangement of conditions for
collection and analysis of data in a manner that aims to combine relevance to the research
purpose with economy in procedure. In fact, research design is the conceptual structure within
which research is conducted; it constitutes the blueprint for the collection, measurement and
analysis of data (Selltiz et al, 1962). Thus, research design provides an outline of what the
researcher is going to do in terms of framing the hypothesis, its operational implications and the
final data analysis. Specifically, the research design highlights decisions which include:
The nature of the study
The purpose of the study
The location where the study would be conducted
The nature of data required
From where the required data can be collected
What time period the study would cover
The type of sample design that would be used
The techniques of data collection that would be used
The methods of data analysis that would be adopted and
The manner in which the report would be prepared
What is Research Design?
A framework or blueprint for conducting the Major research project.
Specifies the details of the procedures necessary for obtaining information needed to
structure or solve the Major research problem.
A research design lays the foundation for conducting the research.
The task that follows the defining of the research problem is the preparation of the design of the
research project, popularly known as the "research design".
Decisions regarding what, where, when, how much, by what means concerning an inquiry
or a research study constitute a research design.
Meaning of research design-
A research design is the arrangement of conditions for collection and analysis of data in a
manner that aims to combine relevance to the research purpose with economy in procedure.
Research design has following parts:
Sampling design
Observational design
Statistical design
Operational design
Sampling design: deals with the methods of selecting the items to be observed for the study.
Observational design: relates to the conditions under which the observations are to be made.
Statistical design: concerns the question of how the information and data gathered are to be
analyzed.
Operational design: deals with the techniques by which the procedures specified in the
sampling, statistical and observational designs can be carried out.
Qualitative versus Quantitative Research:
Qualitative Research provides insights into the problem or helps to develop ideas or hypotheses
for potential quantitative research. It is also used to uncover trends in thought and opinions, and
to dive deeper into the problem. Qualitative data collection methods vary, using unstructured or
semi-structured techniques. Some common methods include focus groups (group discussions),
individual interviews, and participation/observations. The sample size is typically small, and
respondents are selected to fulfill a given quota.
Quantitative Research is used to quantify attitudes, opinions, behaviors, and other defined
variables, and to generalize results from a larger sample. It uses measurable data to formulate
facts and uncover patterns in research. Quantitative data collection methods are much more
structured than qualitative methods; they include various forms of surveys (online surveys, paper
surveys, mobile surveys and kiosk surveys), face-to-face interviews, telephone interviews,
longitudinal studies, website interceptors, online polls, and systematic observations.
Types of Research Design
There are different types of research designs. They may be broadly
categorized as:
Exploratory Research Design;
Descriptive and Diagnostic Research Design; and
Hypothesis-Testing Research Design.
Construct Validity
This refers to the degree to which inferences can legitimately be made from the
operationalizations in your study to the constructs on which those operationalizations are based.
To establish construct validity you must first provide evidence that your data supports the
theoretical structure. You must also show that you control the operationalization of the construct;
in other words, show that your theory has some correspondence with reality.
Convergent Validity: the degree to which an operation is similar to other operations it
should theoretically be similar to.
Discriminant Validity: whether a scale adequately differentiates, or does not differentiate,
between groups that should or should not differ based on theoretical reasons or previous
research.
Nomological Network: a representation of the constructs of interest in a study, their
observable manifestations, and the interrelationships among and between these.
According to Cronbach and Meehl, a nomological network has to be developed for a
measure in order for it to have construct validity.
Multitrait-Multimethod Matrix: six major considerations when examining construct
validity, according to Campbell and Fiske. These include evaluations of convergent
validity and discriminant validity. The others are trait method unit, multi-method/trait,
truly different methodology, and trait characteristics.
Internal Validity
This refers to the extent to which the independent variable can accurately be stated to produce
the observed effect. If the effect of the dependent variable is only due to the independent
variable(s) then internal validity is achieved. This is the degree to which a result can be
manipulated.
Statistical Conclusion Validity
A determination of whether a relationship or co-variation exists between cause and effect
variables. Requires ensuring adequate sampling procedures, appropriate statistical tests, and
reliable measurement procedures. This is the degree to which a conclusion is credible or
believable.
External Validity
This refers to the extent to which the results of a study can be generalized beyond the samp le.
Which is to say that you can apply your findings to other people and settings. Think of this as
the degree to which a result can be generalized.
Criterion-Related Validity
Can alternately be referred to as Instrumental Validity. The accuracy of a measure is
demonstrated by comparing it with a measure that has been demonstrated to be valid. In other
words, correlations with other measures that have known validity. For this to work you must
know that the criterion has been measured well. And be aware that appropriate criteria do not
always exist. What you are doing is checking the performance of your operationalization against
a criterion. The criterion you use as a standard of judgment accounts for the different approaches
you would use:
Predictive Validity: the operationalization's ability to predict what it theoretically should
be able to predict; the extent to which a measure predicts expected outcomes.
Concurrent Validity: the operationalization's ability to distinguish between groups it
theoretically should be able to distinguish between. This is where a test correlates well
with a measure that has been previously validated.
When we look at validity in survey data, we are asking whether the data represents what we
think it should represent. We depend on the respondents' mindset and attitude to give us
valid data; in other words, we depend on them to answer all questions honestly and
conscientiously. We also depend on whether they are able to answer the questions that we ask.
When questions are asked that the respondent cannot comprehend or understand then the data
does not tell us what we think it does.
Types of Design:
There are four main types of quantitative research designs: descriptive, correlational, quasi-
experimental and experimental. The differences between the four types primarily relate to the
degree to which the researcher designs for control of the variables in the experiment. Following
is a brief description of each type of quantitative research design.
A Descriptive Design seeks to describe the current status of a variable or phenomenon. The
researcher does not begin with a hypothesis, but typically develops one after the data is collected.
Data collection is mostly observational in nature.
A Correlational Design explores the relationship between variables using statistical analyses.
However, it does not look for cause and effect and therefore, is also mostly observational in
terms of data collection.
A Quasi-Experimental Design (often referred to as Causal-Comparative) seeks to establish a
cause-effect relationship between two or more variables. The researcher does not randomly
assign groups and does not manipulate the independent variable. Identified control groups
exposed to the variable are studied, and the results are compared with results from groups not
exposed to the variable.
Experimental Designs, often called true experimentation, use the scientific method to establish
cause-effect relationship among a group of variables in a research study. Researchers make an
effort to control for all variables except the one being manipulated (the independent variable).
The effects of the independent variable on the dependent variable are collected and analyzed for
a relationship.
UNIT-V REPORT WRITING
There are two main types of reports:
Informational
Analytical
An Analytical Report:
Provides information
Analyses information
Draws conclusions from the information
Recommends action on the basis of the information.
An Informational Report:
Provides information
Does not analyse information
Does not recommend action.
For general topics, such as the impacts of privatization of the media, it is likely that you will
write analytical reports. For lab reports you would more likely write an informational report
on the findings of an experiment you have conducted.
The typical structure of a report includes most, if not all, of the following sections.
Refer to your unit outline and your tutor for clarification on what sections you will need
to include in your report
A typical report will include:
A Title Page
An Abstract
A Table of Contents (this must be included
if the report is longer than 10 pages)
Acknowledgements (if required)
An Introduction
The Discussion, or body, of the report (the content)
Your Conclusion
Any Recommendations
An Appendix or Appendices
And your Reference list.
Title Page:
The title page will contain:
The report title, which clearly states the topic of the report
Full details of the person or persons for whom the report is intended
Full details of the person or persons who prepared the report
Date of the presentation of the report (or the date submitted if you are not presenting it).
Abstract:
The abstract is one of the most important components of the report. It will be read by vastly
more people than those who will read the whole report, and needs to provide enough
information to invite the audience to read on.
Although the audience will read this first, you should leave the writing of your abstract as the
last step. This will allow you to summarise the content of your report in a concise and clear
format.
Depending on the length of your report, an abstract is usually no longer than 10% of the
paper, or 100-200 words.
An abstract aims to:
Provide a brief overview of the whole report
Give concise, complete, specific and self-sufficient information that can be easily
understood
Offer recommendations for executives and managers to base their decisions on.
Introduction:
Your introduction will:
Provide background information on the topic
State the purpose of the report
Indicate the scope, including limitations
Outline the methods used to gather information
Clarify key terms
Inform the reader of what your report will cover
Give the reader a preview of how the information will be presented.
It will also include your literature review of any publications you have used for your report.
For tips on how to write a literature review, see Grammarly's post on How to Write a
Literature Review.
Content:
The content of your report will depend on its purpose.
Your report should contain primary sources if possible (such as observations and
interviews), as well as secondary sources to provide explanations of theory and
background.
You should further detail the methods of your investigation, including what you did and
why, and any issues encountered in the process.
In the body content you will explain the findings gathered from your research, and discuss the
implications they hold.
Remember to separate your key ideas and concepts into clear headings and subheadings, so
that you break up your report into digestible pieces of information for the reader.
Conclusion:
Your conclusion will be a summary of the key points you have raised in your discussion.
In this, you will need to:
Contextualise your observations, findings, and analyses
Remind the reader what you have informed them in the body content (i.e. what you
researched, what you discovered, what implications or problems this raises)
Do NOT include new information here
Recommendations:
Think of this as an action plan for how to resolve or improve the issue.
Try to make your recommendations as realistic as possible.
Conclusion: A clear and concise conclusion to the study. It briefly re-states how well the study
design met the study's aims, emphasises the major findings and the implications of the findings
as addressed in the discussion section, briefly re-caps any faults or limitations covered in full in
the discussion section, and, if applicable, suggests future research directions.
Recommendations (if applicable): Summarises and lists recommendations in order of
importance. They might also be numbered.
References: An alphabetical list of references. Start on a new page and attach to the end of the
report, before the appendices.
Appendices: Relevant and necessary material not included elsewhere, e.g., a copy of
questionnaires or survey forms; participant consent forms; large tables referred to but not
included in the body of the report; raw data. Start each appendix on a new page.