Business Research Notes
Abhishek Sharma
Contact : 7859826868
Introduction
1. Meaning of Research
Research is a scientific and systematic search for pertinent information on a specific topic. It is just
like a search for truth and knowledge. The English dictionary meaning of research is “a careful
investigation or inquiry especially through search for new facts in any branch of knowledge.” In
research work, information about a subject is collected by deliberate effort and presented in a new
form after thorough analysis.
Research is an academic activity. It is a movement from the known to the unknown, which may be
called a discovery. Different definitions of research are given by the experts.
According to Redman and Mory, “Research is a systematized effort to gain new knowledge.”
D. Slesinger and M. Stephenson define research as, “the manipulation of things, concepts or
symbols for the purpose of generalizing to extend, correct or verify knowledge, whether that
knowledge aids in construction of theory or in the practice of an art.”
According to P.M. Cook, “Research is an honest, exhaustive, intelligent searching for facts and their
meanings or implications with reference to a given problem.”
J.M. Francis Rumel defines, “Research is an endeavour to discover, develop and verify knowledge.”
Clifford Woody defines, “Research is a careful enquiry or examination in seeking facts or principles, a
diligent investigation to ascertain something.”
Objectives
The main purpose of research is to discover answers to meaningful questions through scientific
procedures and systematic effort. Truths which are still hidden can be brought to light by research.
Research objectives fall into the following broad groupings:
1. To gain familiarity or to achieve new insights into a phenomenon. This is known as Exploratory
or Formulative Research studies.
2. To describe the accurate characteristics of a particular individual, situation or a group. This is
known as Descriptive Research studies.
3. To determine the frequency with which something occurs or with which it is associated with
other things. This is known as Diagnostic Research studies.
4. To test a hypothesis of a causal relationship between variables. Such studies are known as
Hypothesis-testing Research studies.
Characteristics of Research
Research Methods
All the methods used by the researcher during the course of studying his research problem are
called Research Methods. Methods of research may be classified from different points of view. The
scope of research across the functional areas of business management is outlined below:
(i) Production Management: Research performs an important function in product development,
diversification, introducing a new product, product improvement, process technologies, choosing a site,
new investment, etc.
(ii) Personnel Management: Research works well for job redesign, organization restructuring,
development of motivational strategies and organizational development.
(iii) Marketing Management: Research performs an important part in the choice and size of the target
market, and in understanding consumer behavior with regard to attitudes, lifestyle, and influences of
the target market. It is the primary tool in determining price policy, selection of channel of distribution
and development of sales strategies, product mix, promotional strategies, etc.
(iv) Financial Management: Research can be useful for portfolio management, distribution of
dividend, capital raising, hedging and looking after fluctuations in foreign currency and product cycles.
(v) Materials Management: It is utilized in choosing the supplier, making make-or-buy decisions, and
selecting negotiation strategies.
(vi) General Management: It contributes greatly in developing standards, objectives, long-term
goals, and growth strategies.
To perform well in a complex environment, you will have to be equipped with an understanding of
scientific methods and a way of integrating them into decision making. You will have to understand
what good research means and how to conduct it. Since the complexity of the business environment
has amplified, there is a commensurate rise in the number and power of the instruments to carry out
research. There is certainly more knowledge in all areas of management.
We have now started to develop much better theories. The computer has given us a quantum leap
in the capability to handle complex problems, and new techniques of quantitative analysis take
advantage of this power. Communication and measurement techniques have also improved. These
developments reinforce each other and are having a substantial impact on business management.
Business research helps decision makers shift from intuitive information gathering to organized and
objective study. Even though researchers in different functional fields may examine different
phenomena, they are comparable to one another because they make use of similar research
techniques. Research is the fountain of knowledge for the sake of knowledge, and it is a crucial source
of guidelines for solving various business problems. Thus, we can say that the scope of business
research is enormous.
When you conduct business research, you gather relevant information that you can use to make your
business better. For instance, in marketing research, you identify your target market and its needs.
You have to offer something that the market needs, or you will not be able to sell anything! Never
enter into a business unless you have everything planned out. Running a business can get
complicated, especially if you lack knowledge. If you conduct thorough business research, you can
learn the basics and put them to good use.
EXPLORATORY RESEARCH
Exploratory research, as the name implies, intends merely to explore the research questions and does
not intend to offer final and conclusive solutions to existing problems. This type of research is usually
conducted to study a problem that has not been clearly defined yet.
Conducted in order to determine the nature of the problem, exploratory research is not intended to
provide conclusive evidence, but helps us to have a better understanding of the problem. When
conducting exploratory research, the researcher ought to be willing to change his/her direction as a
result of revelation of new data and new insights.
Exploratory research design does not aim to provide the final and conclusive answers to the research
questions, but merely explores the research topic with varying levels of depth. It has been noted that
“exploratory research is the initial research, which forms the basis of more conclusive research. It can
even help in determining the research design, sampling methodology and data collection method”.
Exploratory research “tends to tackle new problems on which little or no previous research has been
done”. Unstructured interviews are the most popular primary data collection method for exploratory
studies. Examples of exploratory research include:
● A study into the role of social networking sites as an effective marketing communication
channel
● An investigation into the ways of improvement of quality of customer services within hospitality
sector in London
● An assessment of the role of corporate social responsibility on consumer behaviour in
pharmaceutical industry in the USA
DESCRIPTIVE RESEARCH
Descriptive research focuses on throwing more light on current issues through a process of data
collection. Descriptive studies are used to describe the behavior of a sample population. In descriptive
research, only one variable (anything that has a quantity or quality that varies) is required to conduct a
study. The three main purposes of descriptive research are describing, explaining and validating the
findings. For example, research conducted to know whether top-level management leaders in the 21st
century possess the moral right to receive a huge sum of money from the company profit.
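To illustrate, here is a minimal Python sketch of the kind of single-variable summary a descriptive study produces; the ratings below are hypothetical, invented for the example.

import statistics

# Hypothetical 5-point agreement ratings from a sample of respondents.
ratings = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]

# Descriptive research condenses the variable into summary measures.
print("n      =", len(ratings))
print("mean   =", statistics.mean(ratings))
print("median =", statistics.median(ratings))
print("mode   =", statistics.mode(ratings))
print("stdev  =", round(statistics.stdev(ratings), 2))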
EXPLANATORY RESEARCH
Explanatory research or causal research is conducted to understand the impact of certain changes in
existing standard procedures. Conducting experiments is the most popular form of causal research.
For example, research conducted to understand the effect of rebranding on customer loyalty.
Comparative analysis
4. Unit of Analysis
Units of Analysis are the objects of study within a research project. In sociology, the most common
units of analysis are individuals, groups, social interactions, organizations and institutions, and social
and cultural artifacts. In many cases, a research project can require multiple units of analysis.
Identifying your units of analysis is an important part of the research process. Once you have identified
a research question, you will have to select your units of analysis as part of the process of deciding on
a research method and how you will operationalize that method. Let’s review the most common units
of analysis and why a researcher might choose to study them.
4.1. Individuals
Individuals are the most common units of analysis within sociological research. This is the case
because the core problem of sociology is understanding the relationships between individuals and
society, so we routinely turn to studies composed of individual people in order to refine our
understanding of the ties that bind individuals together into a society. Taken together, information about
individuals and their personal experiences can reveal patterns and trends that are common to society
or particular groups within it, and can provide insight into social problems and their solutions.
For example, researchers at the University of California-San Francisco found through interviews with
individual women who have had abortions that the vast majority of women do not regret the choice to
terminate the pregnancy. Their findings indicate that a common argument against access to abortion,
that women will suffer undue emotional distress and regret if they have an abortion, is based on myth
rather than fact.
4.2. Organizations
Organizations differ from groups in that they are considered more formal and, well, organized ways of
collecting people together around specific goals and norms. Organizations take many forms, including
corporations, religious congregations and whole systems like the Catholic Church, judicial systems,
police departments, and social movements, for example.
Social scientists who study organizations might be interested in, for example, how corporations like
Apple, Amazon, and Walmart impact various aspects of social and economic life, like how we shop
and what we shop for, and what work conditions have become normal and/or problematic within the
U.S. labor market. Sociologists who study organizations might also be interested in comparing different
examples of similar organizations to reveal the nuanced ways in which they operate, and the values
and norms that shape those operations.
4.3. Groups
Sociologists are keenly interested in social ties and relationships, which means that they often study
groups of people, be they large or small. Groups can be anything from romantic couples to families, to
people who fall into particular racial or gender categories, to friend groups, to whole generations of
people (think Millennials and all the attention they get from social scientists). By studying groups
sociologists can reveal how social structure and forces affect whole categories of people on the basis
of race, class, or gender, for example.
Sociologists have done this in pursuit of understanding a wide range of social phenomena and
problems; for example, one study found that living in a racist place is linked to Black people having
worse health outcomes than white people, and another examined the gender gap across different
nations to find out which are better or worse at advancing and protecting the rights of women and
girls.
Observational Data
Observational data are captured through observation of a behavior or activity. They are collected
using methods such as human observation, open-ended surveys, or the use of an instrument or
sensor to monitor and record information, such as the use of sensors to observe noise levels at the
Minneapolis/St. Paul airport. Because observational data are captured in real time, they would be
very difficult or impossible to re-create if lost.
Experimental Data
Experimental data are collected through active intervention by the researcher to produce and
measure change, or to create difference when a variable is altered. Experimental data typically allow
the researcher to determine a causal relationship and are typically projectable to a larger population.
This type of data is often reproducible, but reproducing it can be expensive.
Simulation Data
Simulation data are generated by imitating the operation of a real-world process or system over
time using computer test models, for example to predict weather conditions, economic outcomes,
chemical reactions, or seismic activity. This method is used to try to determine what would, or
could, happen under certain conditions. The test model used is often as important as, or even more
important than, the data generated from the simulation.
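To make this concrete, below is a minimal Python sketch of generating simulation data, assuming a hypothetical single-server coffee-shop queue; the arrival and service rates are illustrative assumptions, not empirical values.

import random

def simulate_day(n_customers=100, arrival_rate=1.0, service_rate=1.2):
    """Imitate one day's queue operation; return the mean waiting time."""
    clock = 0.0        # time at which the server next becomes free
    total_wait = 0.0
    arrival = 0.0
    for _ in range(n_customers):
        arrival += random.expovariate(arrival_rate)       # next arrival time
        start = max(arrival, clock)                       # service begins when free
        total_wait += start - arrival                     # time spent waiting
        clock = start + random.expovariate(service_rate)  # service ends
    return total_wait / n_customers

random.seed(42)
runs = [simulate_day() for _ in range(1000)]   # the simulation data
print("estimated mean wait:", round(sum(runs) / len(runs), 3))

Note that the test model here (the queue logic and its assumed rates) matters as much as the data it generates: change the assumptions and the simulated conditions change with them.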
5. Conception
The first step in the measurement process is to define the concepts we are studying. Researchers
generate concepts by generalizing from particular facts. Concepts are based on our experiences.
Concepts can be based on real phenomena and are a generalized idea of something of meaning.
Examples of concepts include common demographic measures: Income, Age, Education Level,
Number of Siblings. Concepts can be measured in two ways:
● Direct Observation: We can measure someone’s weight or height. And, we can record the
color of their hair or eyes.
● Indirect Observation: We can use a questionnaire in which respondents provide answers to
our questions about gender, income, age, attitudes, and behaviors.
6. Constructs
Constructs are measured with multiple variables. Constructs exist at a higher level of
abstraction than concepts. Justice, Beauty, Happiness, and Health are all constructs.
Constructs are considered latent variables because they cannot be directly observed or
measured. Typical constructs in marketing research include Brand Loyalty, Purchase Intent,
and Customer Satisfaction. Constructs are the basis of working hypotheses.
Brand loyalty is a construct that marketing researchers study often, and it can be
measured using a variety of measures.
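As an illustration, the sketch below scores a multi-item construct and checks its internal consistency with Cronbach's alpha; the three items and all responses are hypothetical, invented for the example.

import numpy as np

# Hypothetical 5-point responses: rows are respondents, columns are three
# brand-loyalty items (repeat purchase intent, recommendation, preference).
items = np.array([
    [5, 4, 5],
    [3, 3, 4],
    [4, 4, 4],
    [2, 3, 2],
    [5, 5, 4],
])

# Composite construct score: the mean of the items for each respondent.
loyalty_score = items.mean(axis=1)

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance).
k = items.shape[1]
item_vars = items.var(axis=0, ddof=1).sum()
total_var = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars / total_var)

print("composite scores:", loyalty_score)
print("Cronbach's alpha:", round(alpha, 2))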
7. Attributes
An attribute refers to the quality of a characteristic. The theory of attributes
deals with qualitative characteristics, which cannot be measured quantitatively
in a direct way. Therefore, attributes need slightly different kinds of statistical
treatment than variables do. Attributes refer to the characteristics of the item
under study, like the habit of smoking or drinking; ‘smoking’ and ‘drinking’ are
both examples of attributes. The researcher should note that these techniques
require statistical knowledge and are used to a wide extent in the theory of attributes.
In the theory of attributes, the researcher puts more emphasis on quality rather than on quantity.
Since statistical techniques deal with quantitative measurements, qualitative data is converted into
quantitative data in the theory of attributes.
There are certain representations that are made in the theory of attributes. The population in the theory
of attributes is divided into two classes, namely the negative class and the positive class. The positive
class signifies that the attribute is present in that particular item under study, and this class in the
theory of attributes is represented as A, B, C, etc. The negative class signifies that the attribute is not
present in that particular item under study, and this class in the theory of attributes is represented as α,
β, etc.
Two attributes can be combined by writing the corresponding letters together (such as AB), which
denotes the presence of both attributes in an item. Dividing the population by an attribute into its
positive and negative classes is termed dichotomous classification. The number of observations
allocated to a class is known as its class frequency. Class frequencies are symbolically denoted by
bracketing the attribute terminology: (B), for example, stands for the class frequency of the attribute B.
Class frequencies also have orders: a class specified by n attributes is a class of the nth order. For
example, (B) refers to a class frequency of the first order, while (AB) refers to a class frequency of the
second order.
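A minimal Python sketch, using hypothetical smoking (A) and drinking (B) data, shows how class frequencies are counted and how the observed (AB) can be compared with the value expected if the attributes were independent.

# Hypothetical data: each person either has or lacks attributes A and B.
people = [
    {"A": True,  "B": True},
    {"A": True,  "B": False},
    {"A": False, "B": True},
    {"A": False, "B": False},
    {"A": True,  "B": True},
    {"A": False, "B": True},
]

N = len(people)
A = sum(p["A"] for p in people)               # first-order class (A)
B = sum(p["B"] for p in people)               # first-order class (B)
AB = sum(p["A"] and p["B"] for p in people)   # second-order class (AB)
neg_A = N - A                                 # negative class (alpha)

print(f"N={N}, (A)={A}, (B)={B}, (alpha)={neg_A}, (AB)={AB}")
# A and B are independent when (AB) is close to (A)(B)/N.
print("expected (AB) under independence:", A * B / N)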
8. Variables
Variables are measurements that are free to vary. Variables can be divided into independent variables
and dependent variables. A dependent variable changes in response to changes in the independent
variable or variables.
A variable can be transformed into a constant when the researcher decides to control the variable by
reducing its expression to a single value. Suppose a researcher is conducting a test of consumers’
taste preference for three brands of frozen pizza. There are a number of variables in this test:
(1) respondents’ ratings of the taste of the three pizza brands;
(2) the manner in which each pizza is presented, including the type of plates and tablecloths used; and
(3) the manner in which each brand is prepared. To get an accurate measure of the first
variable, the researcher will hold the second and third variables constant. By serving all three pizzas
on the same kind of plates with the table dressed in the same manner, preparing the pizzas in identical
ways, and serving them at identical temperatures, the researcher controls for these variables. In doing
so, the researcher has removed, or controlled for, the effect of the second and third variables on
respondents’ taste preferences.
9. Hypotheses
A different meaning of the term hypothesis is used in formal logic, to denote the antecedent of a
proposition; thus in the proposition “If P, then Q”, P denotes the hypothesis (or antecedent); Q can be
called a consequent. P is the assumption in a (possibly counterfactual) What If question.
The adjective hypothetical, meaning “having the nature of a hypothesis”, or “being assumed to exist as
an immediate consequence of a hypothesis”, can refer to any of these meanings of the term
“hypothesis”.
Research Process
1. Overview
Research Process involves identifying, locating, assessing, and analyzing the information you need to
support your research question, and then developing and expressing your ideas. These are the same
skills you need any time you write a report, proposal, or put together a presentation.
Library research involves the step-by-step process used to gather information in order to write your
paper, create a presentation, or complete a project. As you progress from one step to the next, it is
often necessary to rethink, revise, add additional material or even adjust your topic. Much will depend
on what you discover during your research.
The research process can be broken down into seven steps, making it more manageable and easier to
understand. This module will give you an idea of what’s involved at each step in order to give you a
better overall picture of where you are in your research, where you will be going, and what to expect at
each step.
The first stage is to develop a clear and precise understanding of the research problem, to permit
effective conduct of the research process. It is very important to analyse the problems to conduct the
research effectively. In this scenario, a veteran market researcher wants to enter into the business of
operating a coffee shop and the problem is to identify the potential market and to find the appropriate
outlet and product mix for the products and services of the business. The determination of product line
and the price to be charged for the product is the identified problem. At the same time, the business is
also facing problems with the positioning of the shop in the relevant market.
At times, the first step determines the nature of the last step to be undertaken. If subsequent
procedures have not been taken into account in the early stages, serious difficulties may arise which
may even prevent the completion of the study. One should remember that the various steps involved in
a research process are not mutually exclusive; nor are they separate and distinct.
They do not necessarily follow each other in any specific order and the researcher has to be constantly
anticipating at each step in the research process the requirements of the subsequent steps. However,
the following order concerning various steps provides a useful procedural guideline regarding the
research process:
For this purpose, abstracting and indexing journals and published or unpublished bibliographies
are the first place to go. Academic journals, conference proceedings, government reports, books,
etc., must be tapped depending on the nature of the problem. In this process, it should be remembered
that one source will lead to another. Earlier studies, if any, which are similar to the study in hand
should be carefully examined. A good library will be a great help to the researcher at this stage.
In other words, the function of research design is to provide for the collection of relevant evidence with
minimal expenditure of effort, time and money. But how all this can be achieved depends mainly on
the research purpose. Research purposes may be grouped into four categories, viz.,
● Exploration,
● Description,
● Diagnosis, and
● Experimentation
5. Determining sample design: All the items under consideration in any field of inquiry constitute a
‘universe’ or ‘population’. A complete enumeration of all the items in the ‘population’ is known
as a census inquiry. It can be presumed that in such an inquiry, when all the items are covered,
no element of chance is left and the highest accuracy is obtained. But in practice this may not be
true.
Even the slightest element of bias in such an inquiry will get larger and larger as the number of
observations increases. Moreover, there is no way of checking the element of bias or its extent except
through a resurvey or use of sample checks. Besides, this type of inquiry involves a great deal of time,
money and energy, and census inquiry is not possible in practice under many circumstances. For
instance, blood testing is done only on a sample basis. Hence, quite often we select only a few items
from the universe for our study purposes. The items so selected constitute what is technically called a
sample.
The researcher must decide the way of selecting a sample or what is popularly known as the sample
design. In other words, a sample design is a definite plan determined before any data are actually
collected for obtaining a sample from a given population. Thus, the plan to select 12 of a city’s 200
drugstores in a certain way constitutes a sample design. Samples can be either probability samples or
non-probability samples.
With probability samples each element has a known probability of being included in the sample but the
non-probability samples do not allow the researcher to determine this probability. Probability samples
are those based on simple random sampling, systematic sampling, stratified sampling, cluster/area
sampling whereas non-probability samples are those based on convenience sampling, judgment
sampling and quota sampling techniques.
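The following minimal Python sketch illustrates three of these probability designs on the hypothetical 200-drugstore population mentioned above; the downtown/suburban strata are invented for the example.

import random

random.seed(1)
population = list(range(1, 201))   # drugstores numbered 1..200
n = 12                             # desired sample size

# Simple random sampling: every store has an equal chance of selection.
srs = random.sample(population, n)

# Systematic sampling: a random start, then every k-th store.
k = len(population) // n
start = random.randrange(k)
systematic = population[start::k][:n]

# Stratified sampling: sample proportionally within each stratum
# (a hypothetical split into 80 downtown and 120 suburban stores).
downtown, suburban = population[:80], population[80:]
stratified = (random.sample(downtown, round(n * len(downtown) / len(population)))
              + random.sample(suburban, round(n * len(suburban) / len(population))))

print("simple random:", sorted(srs))
print("systematic:   ", systematic)
print("stratified:   ", sorted(stratified))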
6. Collecting the data: In dealing with any real-life problem it is often found that data at hand are
inadequate, and hence it becomes necessary to collect data that are appropriate. There are
several ways of collecting the appropriate data, which differ considerably in the context of money
costs, time and other resources at the disposal of the researcher.
Primary data can be collected either through experiment or through survey. If the researcher conducts
an experiment, he observes some quantitative measurements, or the data, with the help of which he
examines the truth contained in his hypothesis.
7. Execution of the project: Execution of the project is a very important step in the research
process. If the execution of the project proceeds on correct lines, the data to be collected would
be adequate and dependable. The researcher should see that the project is executed in a
systematic manner and in time. If the survey is to be conducted by means of structured
questionnaires, data can be readily machine-processed. In such a situation, questions as well
as the possible answers may be coded. If the data are to be collected through interviewers,
arrangements should be made for proper selection and training of the interviewers.
8. Analysis of data: After the data have been collected, the researcher turns to the task of
analyzing them. The analysis of data requires a number of closely related operations, such as
the establishment of categories, the application of these categories to raw data through coding,
tabulation and then drawing statistical inferences. The unwieldy data should necessarily be
condensed into a few manageable groups and tables for further analysis. Thus, the researcher
should classify the raw data into some purposeful and usable categories. The coding operation is
usually done at this stage, through which the categories of data are transformed into symbols
that may be tabulated and counted, as the sketch below illustrates.
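This minimal Python sketch codes hypothetical open-ended responses into purposeful categories and tabulates their frequencies; the responses and codebook are invented for the example.

from collections import Counter

raw_responses = ["too costly", "expensive", "poor service", "slow staff",
                 "pricey", "rude staff", "expensive", "poor service"]

# Coding: map each raw answer to a category symbol that can be counted.
codebook = {"too costly": "PRICE", "expensive": "PRICE", "pricey": "PRICE",
            "poor service": "SERVICE", "slow staff": "SERVICE",
            "rude staff": "SERVICE"}
coded = [codebook[r] for r in raw_responses]

# Tabulation: condense the coded data into a frequency table.
for category, freq in Counter(coded).most_common():
    print(f"{category:8s} {freq}")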
9. Hypothesis-testing: After analyzing the data as stated above, the researcher is in a position to
test the hypotheses, if any, he had formulated earlier. Do the facts support the hypotheses, or do
they happen to be contrary? This is the usual question which should be answered while testing
hypotheses. Various tests, such as the chi-square test, t-test and F-test, have been developed by
statisticians for this purpose. The hypotheses may be tested through the use of one or more of
such tests, depending upon the nature and object of the research inquiry. Hypothesis-testing will
result in either accepting the hypothesis or rejecting it. If the researcher had no hypotheses to
start with, generalizations established on the basis of data may be stated as hypotheses to be
tested by subsequent researches in times to come.
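For example, here is a minimal sketch of a two-sample t-test on hypothetical taste ratings for two brands; the figures are illustrative only, and the scipy library is required.

from scipy import stats

brand_a = [7.1, 6.8, 7.4, 6.9, 7.2, 7.0, 6.7, 7.3]
brand_b = [6.2, 6.5, 6.0, 6.4, 6.6, 6.1, 6.3, 6.2]

# H0: the two brands have equal mean ratings.
t_stat, p_value = stats.ttest_ind(brand_a, brand_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Conventional decision rule at the 5% level of significance.
print("reject H0" if p_value < 0.05 else "fail to reject H0")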
10. Generalizations and interpretation: If a hypothesis is tested and upheld several times, it
may be possible for the researcher to arrive at a generalization, i.e., to build a theory. As a matter
of fact, the real value of research lies in its ability to arrive at certain generalizations. If the
researcher had no hypothesis to start with, he might seek to explain his findings on the basis of
some theory; this is known as interpretation. The process of interpretation may quite often trigger
off new questions, which in turn may lead to further researches.
11. Preparation of the report or the thesis: Finally, the researcher has to prepare the report of
what has been done.
Basic research can be descriptive, explanatory or exploratory, though most basic research is
explanatory. Basic research creates new ideas, new principles and new theories that are not
immediately applied in practical life. Later, however, this basic research feeds into applied research,
where scientists draw on it for practical use.
Field Studies
Field studies involve collecting data outside of an experimental or lab setting. This type of data
collection is most often done in natural settings or environments and can be done in a variety of ways
for various disciplines. Field studies are known to be expensive and time-consuming; however, the
amount and diversity of the data collected can be invaluable.
Field studies collect original or unconventional data via face-to-face interviews, surveys, or direct
observation. This research technique is usually treated as an initial form of research because the data
collected are specific only to the purpose for which they were gathered; therefore, they are not
applicable to the general public.
(i) Direct Observation
In this method, the data are collected via observation of subjects in a natural environment. The
behavior or outcome of the situation is not interfered with in any way by the researcher. The
advantage of direct observation is that it offers contextual data on people, situations, interactions and
the surroundings. This method of field research is widely used in a public setting or environment, but
not in a private environment, as that raises an ethical dilemma.
(ii) Participant Observation
In this method of field research, the researcher is deeply involved in the research process, not just
purely as an observer, but also as a participant. This method too is conducted in a natural environment,
but the only difference is that the researcher gets involved in the discussions and can mould the
direction of the discussions. In this method, researchers live in a comfortable environment with the
participants of the research, to make them comfortable and open up to in-depth discussions.
(iii) Ethnography
Ethnography is an expanded observation of social research and social perspective and the cultural
values of an entire social setting. In ethnography, entire communities are observed objectively. For
example, if a researcher would like to understand how an Amazon tribe lives and operates, he/she
may choose to observe them or live amongst them and silently observe their day-to-day behavior.
(iv) Qualitative Interviews
Qualitative interviews consist of questions asked directly to the research subjects. The qualitative
interviews could be either informal and conversational, semi-structured, standardized and open-ended,
or a mix of the above three. This provides a wealth of data to the researcher to sort through, and it
also helps collect relational data. This method of field research can use a mix of one-on-one
interviews, focus groups and text analysis.
(v) Case Study
A case study is an in-depth analysis of a person, situation or event. This method may look difficult to
operate; however, it is one of the simplest ways of conducting research, as it involves a deep dive into
the data and a thorough understanding of the data collection methods and the inferences drawn.
Due to the nature of field research, the magnitude of timelines and costs involved, field research can
be very tough to plan, implement and measure. Some basic steps in the management of field research
are:
1. Build the Right Team: To be able to conduct field research, having the right team is important.
The roles of the researcher and any ancillary team members must be defined, along with the
tasks they have to carry out and the relevant milestones. It also helps if upper management is
invested in the field research and its success.
2. Recruiting People for the Study: The success of the field research depends on the people
on whom the study is being conducted. Using sampling methods, it is important to select the
people who will be a part of the study.
3. Data Collection Methodology: As discussed at length above, data collection methods for
field research are varied. They could be a mix of surveys, interviews, case studies and
observation. All these methods, and the milestones for each, have to be chalked out at the
outset. For example, in the case of a survey, it is important that the survey is designed and
tested even before the research begins.
4. Site Visit: A site visit is important to the success of the field research and it is always
conducted outside of traditional locations and in the actual natural environment of the
respondent/s. Hence, planning a site visit along with the methods of data collection is important.
5. Data Analysis: Analysis of the data that is collected is important to validate the premise of the
field research and decide the outcome of the field research.
6. Communicating Results: Once the data are analyzed, it is important to communicate the results
to the stakeholders of the research so that the findings can be acted upon.
The logic of the experimental method is that it is a controlled environment which enables the
scientist to measure precisely the effects of independent variables on dependent variables, thus
establishing cause and effect relationships. This in turn enables them to make predictions about
how the dependent variable will act in the future.
The laboratory experiment is commonly used in psychology, where experiments are used to
measure the effects of sleep loss and alcohol on concentration and reaction time, as well as in some
more ethically dubious experiments designed to measure the effects of media violence on children
and the responses of people to authority figures.
However, laboratory experiments are less common in sociology, and the classic example here is
Milgram’s Obedience Experiment, which illustrates both the advantages and the disadvantages of the
laboratory experiment in sociology.
The data are usually obtained through the use of standardized procedures, whose
purpose is to ensure that each respondent answers the questions on a level
playing field, avoiding biased opinions that could influence the outcome of the research
or study. A survey involves asking people for information through a questionnaire,
which can be distributed on paper, although with the arrival of new technologies it is
more common to distribute them using digital media such as social networks, email, QR
codes or URLs.
Characteristics of a Survey
The need to observe or research facts about a situation leads us to conduct a survey.
As we mentioned at the beginning, a survey is a method of gathering information.
First, a sample, also referred to as audience, is needed which should consist of a series
of survey respondents data with required demographic characteristics, who can
relevantly answer your survey questions and provide the best insights. Better the
quality of your survey sample, better will be your response quality and better your
insights.
Surveys come in many different forms and have a variety of purposes, but they have
common underlying characteristics. The basic characteristics of a survey are:
● Determining sample size: Once you have determined your sample, the total
number of individuals in that particular sample is the sample size. Selecting a
sample size depends on the end objective of your research study (a worked
sample-size sketch follows this list).
● Types of sampling: There are two essential types of sampling methods:
probability sampling and non-probability sampling. Sampling is conducted at the
discretion of the researcher using one of these two methods.
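One common way to compute a sample size for proportions is Cochran's formula. This minimal Python sketch assumes a 95% confidence level and a 5% margin of error, which are conventional but not mandatory choices.

import math

def cochran_sample_size(z=1.96, p=0.5, e=0.05, population=None):
    """n = z^2 * p * (1 - p) / e^2, with optional finite-population correction."""
    n = (z ** 2) * p * (1 - p) / (e ** 2)
    if population is not None:
        n = n / (1 + (n - 1) / population)   # finite-population correction
    return math.ceil(n)

print("infinite population:", cochran_sample_size())                  # 385
print("population of 2000: ", cochran_sample_size(population=2000))   # 323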
Observational data in such research may be gathered through nonparticipant or
participant observation. Both approaches create new data, while archival research
involves the analysis of data that already exist: a hypothesis is generated and then
tested by analyzing data that have already been collected. This is a useful approach
when one has access to large amounts of information collected over long periods of
time. Such databases are available, for example, in longitudinal research that collects
information from the same individuals over many years.
Special qualitative analysis software tools like ATLAS.ti help the researcher to catalog,
penetrate and analyze the data generated (or, in archival research, found) in a given
research project. All forms of observational or field research benefit extensively from
the special capabilities of a dedicated data analysis tool like ATLAS.ti.
The main advantages of studies using existing data are speed and economy. A research question that
might otherwise require much time and money to investigate can sometimes be answered rapidly and
inexpensively.
Studies using existing data or specimens also have disadvantages. The selection of the population to
study, which data to collect, the quality of data gathered, and how variables were measured and
recorded are all predetermined. The existing data may have been collected from a population that is
not ideal (e.g., men only rather than men and women), the measurement approach may not be what
the investigator would prefer (history of hypertension, a dichotomous historical variable, in place of
actual blood pressure), and the quality of the data may be poor (frequent missing or incorrect values).
Important confounders and outcomes may not have been measured or recorded.
ANCILLARY STUDIES
Research using secondary data takes advantage of the fact that most of the data needed to answer a
research question are already available. In an ancillary study, the investigator adds one or several
measurements to an existing study to answer a different research question.
Ancillary studies have many of the advantages of secondary data analysis with fewer constraints.
They are both inexpensive and efficient, and the investigator can design a few key ancillary
measurements specifically to answer the research question. Ancillary studies can be added to any
type of study, including cross-sectional and case–control studies, but large prospective cohort studies
and randomized trials are particularly well suited.
Ancillary studies have the problem that the measurements may be most informative when added
before the study begins, and it may be difficult for an outsider to identify studies in the planning phase.
Even when a variable was not measured at baseline, however, a single measurement during or at the
end of a trial can produce useful information.
SYSTEMATIC REVIEWS
Systematic reviews identify a set of completed studies that address a particular research question,
and evaluate the results of these studies to arrive at conclusions about a body of research. In contrast
to other approaches to reviewing the literature, a systematic review uses a well-defined approach to
identify all relevant studies, display the characteristics and results of eligible studies, and, when
appropriate, calculate a summary estimate of the overall results. The statistical aspects of a
systematic review (calculating summary effect estimates and variance, statistical tests of
heterogeneity, and statistical estimates of publication bias) are called meta-analysis.
A systematic review can be a great opportunity for a new investigator. Although it takes a surprising
amount of time and effort, a systematic review generally does not require substantial financial or other
resources.
5. Longitudinal studies
Longitudinal studies employ continuous or repeated measures to follow particular individuals over
prolonged periods of time, often years or decades. They are generally observational in nature, with
quantitative and/or qualitative data being collected on any combination of exposures and outcomes,
without any external influence being applied. This study type is particularly useful for evaluating the
relationship between risk factors and the development of disease, and the outcomes of treatments
over different lengths of time. Similarly, because data are collected for given individuals within a
predefined group, appropriate statistical testing may be employed to analyze change over time for the
group as a whole, or for particular individuals.
In contrast, cross-sectional analysis is another study type that may analyze multiple variables at a
given instance, but it provides no information about the influence of time on the variables measured,
being static by its very nature. It is thus generally less valid for examining cause-and-effect
relationships. Nonetheless, cross-sectional studies require less time to set up, and may be
considered for preliminary evaluations of association prior to embarking on cumbersome
longitudinal studies.
Longitudinal research may take numerous different forms. It is generally observational but may also
be experimental. Some of these forms are briefly discussed below:
(i) Repeated cross-sectional studies where study participants are largely or entirely different on each
sampling occasion;
(ii) Prospective studies where the same participants are followed over a period of time. These may
include:
● Cohort panels wherein some or all individuals in a defined population with similar exposures or
outcomes are considered over time;
● Representative panels where data is regularly collected for a random sample of a population;
● Linked panels wherein data collected for other purposes is tapped and linked to form
individual-specific datasets.
(iii) Retrospective studies, which are designed after at least some participants have already
experienced events that are of relevance, with data on potential exposures in the identified cohort
being collected and examined retrospectively.
Longitudinal cohort studies, particularly when conducted prospectively in their pure form, offer
numerous benefits. These include:
(i) The ability to identify and relate events to particular exposures, and to further define these
exposures with regard to presence, timing and chronicity;
(ii) Following change over time in particular individuals within the cohort;
(iii) Excluding recall bias in participants, by collecting data prospectively and prior to knowledge of a
possible subsequent event occurring; and
(iv) The ability to correct for the “cohort effect”, that is, allowing for analysis of the individual time
components of cohort (range of birth dates), period (current time), and age (at point of
measurement), and to account for the impact of each individually.
Numerous challenges are implicit in this study design, particularly by virtue of its occurring over
protracted time periods. We briefly consider these below:
(i) Incomplete and interrupted follow-up of individuals, and attrition with loss to follow-up over time; with
notable threats to the representative nature of the dynamic sample if potentially resulting from a
particular exposure or occurrence that is of relevance;
(ii) Difficulty in separation of the reciprocal impact of exposure and outcome, in view of the potentiation
of one by the other; and particularly wherein the induction period between exposure and occurrence is
prolonged;
(iii) The potential for inaccuracy in conclusion if adopting statistical techniques that fail to account for
the intra-individual correlation of measures, and;
(iv) Generally increased temporal and financial demands associated with this approach.
Conclusions
Longitudinal methods may provide a more comprehensive approach to research that allows an
understanding of the degree and direction of change over time. One should carefully consider the cost
and time implications of embarking on such a project, whilst ensuring complete clarity in design and
process, particularly in view of the protracted nature of such an endeavour, and noting the
peculiarities to be considered at the interpretation stage.
6. Panel studies
A study that provides longitudinal data on a group of people, households, employers, or other social
unit, termed ‘the panel’, about which information is collected over a period of months, years, or
decades. Two of the most common types of panel are age-cohorts, people within a common age-band,
and groups with some other date-specific common experience, such as people graduating from
university, having a first child, or migrating to another country in a given year or band of years. Another
type is the nationally representative cross-sectional sample of households or employers that is
interviewed at regular intervals over a period of years. Because data relate to the same social units,
change is measured more reliably than in regular cross-sectional studies, and sample sizes can be
correspondingly smaller (often under 500), while remaining nationally representative, as long as
non-response and sample attrition are kept within bounds. These are the key problems for panel
studies, as initial samples are eroded by deaths, migration, fatigue with the study, and other causes.
Another problem is that people become experienced interviewees, leading to response bias. For
example, they may report ‘no change’ since the previous interview, so as to avoid detailed questioning
on changes that have in fact occurred.
Data are usually collected through interview surveys with respondents in the panel, with other
informants (such as parents, doctors), with their spouses and other members of their household. With
the respondent’s permission, data from administrative records may be added, such as information from
educational or medical records, which are usually more precise than the respondent’s recollection. A
panel element is sometimes added to regular cross-sectional surveys, and rotating sample designs are
a hybrid between panel study and regular survey.
(a) If mini-samples of a given population are studied by single contacts and differences in the results
are noted from one period to another, one cannot know whether these differences are due to
differences in the samples surveyed. If, on the other hand, the sample surveyed during each period
includes the same persons or groups, as in the panel technique, the variations or shifts in the results
may be attributed with certitude to a real change in the phenomena studied.
For example, the full effect of a campaign cannot be ascertained through a sequence of polls taken on
different people; such polls show only majority changes.
They conceal minor changes which tend to cancel out one another and sometimes even major
changes if these are nullified by opposing trends. Most importantly, they neither indicate who is
changing nor do they follow the vagaries of the individual voter along the path of his vote, to discover
the relative effects of various other influential factors on his final voting verdict.
(b) Data secured from the same persons over a period of time, affording a detailed picture of the
factors involved in bringing about shifts in opinions or attitudes, can be obtained for everyone in the
panel. An analysis of the charted profile of individuals in a panel may afford the researcher an insight
into causal relationships.
(c) The information collected about each person from time to time tends to be deeper and more
voluminous than that obtained in single contacts. It is possible, despite certain limitations to build up an
inclusive case history of each panel member.
(d) Provided, of course, that the group constituting the panel is cooperative, it may well be possible to
set up experimental situations which expose all members of the panel to a certain influence and thus
enable the effectiveness of this influence to be measured.
(e) It has been the experience of researchers that the members of a panel learn to open out and
unload their feelings in the course of repeated interviews, and so valuable comments and
elaboration of points made by them can be secured.
Whereas the first interview may elicit only ‘yes’ or ‘no’ responses from the respondents, the repeated
interviews or measurements spread over a continuum of time may elicit from them elaborate
responses in so far as they might have thought deeply about the problem after the first administration.
On first contact, the informants may be suspicious of the investigator and may have little familiarity with
the problem.
The problems raised by the panel procedure are often sufficient to offset the gains attendant upon it.
We may briefly discuss the limitations of the panel technique.
(a) The loss of panel members presents a formidable problem for the researcher. People change their
locale, become ill, or die or are subjected to other influences which make it necessary for them to drop
out of the panel. Thus, the panel that was initially intended as a representative sample of the
population may subsequently become unrepresentative.
The losses in the membership of the panel may be occasioned by the loss of interest among the panel
members or a change in attitude toward the panel idea. Not infrequently, the enthusiasm of the panel
members dies down after the first or the second interview.
(b) Paul Lazarsfeld has pointed out that the members of a panel develop a ‘critical set’ and hence
cease to be representative of the general public. The panel invariably has an educational effect.
It tends to dramatize and increase one’s interest in otherwise unobserved elements and to heighten
one’s awareness of things and events around him. Hence the mere fact of participation in the panel
may change a person’s attitudes and opinions.
(c) Once the members of a panel have expressed an attitude or opinion they tend to try to be
consistent and stick to it. Thus, panel members as compared to the general public are less likely to
change. Thus, the panel may misrepresent the population.
(d) The detailed records are available for the most stationary elements of the population. Of course,
the mobile groups of a community belong to the panel for a shorter time. Panels composed of the
same persons for many years will gradually become panels of old people and eventually die out.
A panel study, however, is not always feasible. One of the difficulties is that the events or thoughts may
already be long past by the time the researcher begins. Moreover, memory is not always reliable,
and the respondents may be inclined to ‘construct’ these past events not so much from their fading
memories as from their personalized theory about their past.
7. Questionnaire studies
The questionnaire is the most popular and widely used tool for collecting primary data. It suits any
kind of research problem. In today’s marketing research activities, the questionnaire has become an
indispensable tool; it is used not only in the marketing field, but in all types of social research projects.
The task of composing a questionnaire may be considered more an art than a science. It needs a
great deal of experience, expertise, and creativity.
Types of Questionnaire
1. On the basis of structure and disguise, there are four types of questionnaire:
(i) Structured Undisguised
This type of questionnaire involves structured and undisguised questions, and response is limited to
certain options. Structured means that the answers to the questions are predetermined; respondents
have to select an answer from the given list of answers. Undisguised means the purpose of the
questions is not hidden; they are asked directly, so respondents can know what the researcher wants
to know. For example, there are four products a, b, c, and d, and customers are asked to select the
most preferred product.
(ii) Structured Disguised
Structured means the answers to the questions to be asked are determined in advance. Disguised
means an indirect way of asking questions; customers do not know the exact purpose/intention of a
question but can answer it easily. For example: which of the following products is more harmful? Why?
(iii) Unstructured Disguised
Here, the response is not fixed; respondents have full freedom to answer the question. Disguised
means something is hidden. For example: which motorbike is more risky? Why?
(iv) Unstructured Undisguised
Here the response is not fixed and the purpose of the questions is not hidden; respondents answer
freely to direct, open-ended questions.
2. On the basis of the method of communication, there are three types:
(i) Personal Interview Questionnaire: This questionnaire is prepared to be administered in a
personal interview. It may involve more questions, including indirect questions which require
explanation or clarification.
(ii) Telephone Questionnaire: This questionnaire is prepared to collect information via telephone.
Obviously, such a questionnaire involves a limited number of short and simple questions.
(iii) Mail Questionnaire: This questionnaire is meant for the mail survey. It is sent to respondents
with a request to return it duly filled. It also involves short and simple questions, although it may
contain a larger number of them.
3. On the basis of administration, there are two types:
(i) Interviewer-administered Questionnaire
When this questionnaire is administered, the presence of both interviewer and respondent is
essential. Questions are asked of respondents one by one, and their responses are recorded either in
the same questionnaire or in a separate form. For example, pictures or cartoons may be shown to
respondents, who are asked to describe them. Personal interviews and telephone surveys are based
on this type of questionnaire.
(ii) Self-administered Questionnaire
It is given to the respondent by the interviewer to fill in her/his answers; this is possible even in the
absence of the interviewer. This type of questionnaire is used for the mail survey.
4. On the basis of the variety of questions included, there are two types:
(i) Uniform Questionnaire: This questionnaire involves a certain number of questions of the same
type only; it may involve only dichotomous questions, multiple-choice questions, or otherwise. It has
limited utility.
(ii) Mixed Questionnaire: Obviously, this involves a variety of questions of different categories. It is
the more popular form of questionnaire.
1. Measurement
Measurement is important in research. It aims to ascertain the dimension, quantity, or capacity of
the behaviors or events that researchers want to explore. According to Maxim (1999), measurement is
a process of mapping empirical phenomena using a system of numbers.
Basically, the events or phenomena that researchers are interested in can be thought of as existing in
a domain. Measurement links the events in the domain to events in another space, which is called the
range. In other words, researchers can measure certain events in a certain range, and the range
consists of a scale. Thus, researchers can interpret the data with quantitative conclusions, which leads
to more accurate and standardized outcomes. Without measurement, researchers cannot interpret
data accurately and systematically.
Quantitative Measurements
Quantitative measurements produce numerical data, quantities that can be analyzed statistically.
Qualitative Measurements
Qualitative measurements are ways of gaining a deeper understanding of a topic. Researchers who
are looking for the meanings behind certain phenomena, or who are investigating a new topic about
which little is known, use qualitative measures. Qualitative measures are often contrasted with
quantitative measures. Both are complex methods of research; however, qualitative measures typically
deal with textual data or words, while quantitative measures analyze numerical data or statistics.
1.1. Definition
Measurement is the process of observing and recording the observations that are
collected as part of a research effort. There are two major issues that will be
considered here.
First, you have to understand the fundamental ideas involved in measuring. Here we
consider two major measurement concepts. In Levels of Measurement, I explain
the meaning of the four major levels of measurement: nominal, ordinal, interval and
ratio. Then we move on to the reliability of measurement, including consideration of
true score theory and a variety of reliability estimators.
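A minimal Python sketch, with invented variables, illustrating the four levels; the level of a variable determines which statistics are meaningful for it.

variables = {
    "eye_color":     ("nominal",  ["brown", "blue", "green"]),  # labels only
    "satisfaction":  ("ordinal",  [1, 2, 3, 4, 5]),             # ordered, unequal gaps
    "temperature_c": ("interval", [18.5, 21.0, 19.2]),          # equal gaps, no true zero
    "income":        ("ratio",    [42000, 58000, 31000]),       # true zero, ratios valid
}

for name, (level, example) in variables.items():
    print(f"{name:14s} level={level:8s} example={example}")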
Research Design
The sketch of how research should be conducted can be prepared using a research
design. Hence, a market research study will be carried out on the basis of the
research design.
The design of a research topic is used to explain the type of research (experimental,
survey, correlational, semi-experimental, review) and also its sub-type (experimental
design, research problem, descriptive case-study). There are three main sections of
research design: Data collection, measurement, and analysis.
The type of research problem an organization is facing will determine the research
design, and not vice-versa. Variables, the designated tools to gather information, how
those tools will be used to collect and analyze data, and other factors are decided in
the research design on the basis of the research technique chosen.
An impactful research design usually creates minimum bias in data and increases trust
in the collected and analyzed research information. The research design which produces
the least margin of error in experimental research can be touted as the best. The
essential elements of research design are:
Validity: There are multiple measuring tools available for research design, but valid
measuring tools are those which help a researcher in gauging results according to the
objective of the research and nothing else. The questionnaire developed from such a
research design will then be valid.
How to write a research methodology
In your thesis or dissertation, you will have to discuss the methods you used to do your research.
The methodology or methods section explains what you did and how you did it, allowing readers to
evaluate the reliability and validity of the research.
● Begin by introducing your overall approach: what research problem or question did you
investigate, and what kind of data did you need to answer it?
○ Quantitative methods (e.g. surveys) are best for measuring, ranking and categorizing.
○ Qualitative methods (e.g. interviews) are best for description, interpretation and
exploration.
● Depending on your discipline and approach, you might also begin with a discussion of the
rationale behind your methodology:
○ Why is this the most suitable approach to answering your research questions?
○ What are the criteria for validity and reliability in this type of research?
In quantitative research, you might aim for a rigorously designed study with a representative
sample and controlled variables that can be replicated in other contexts. In qualitative research,
you might aim for contextual real-world knowledge about the behaviors, social structures and
shared beliefs of a specific group of people. If your approach is interpretive, you will need to
reflect on your position as researcher, taking into account how your participation and perception
might have influenced the results.
Next, give full details of the methods you used to conduct the research. Outline the tools,
procedures and materials you used to gather data, and the criteria you used to select
participants or sources.
Quantitative methods
Surveys
● Describe where, when and how the survey was conducted.
○ How did you design the questions and what form did they take (e.g. multiple choice, rating scale)?
○ Did you conduct surveys by phone, mail, online or in person, and how long did participants have to respond?
● You might want to include the full questionnaire as an appendix so that your reader can see exactly what data was collected.
Experiments
Give full details of the tools, techniques and procedures you used to conduct the experiment.
○ How did you design the experiment?
Existing data
Explain how you gathered and selected material (such as publications or archival data) for inclusion in your analysis.
○ What criteria did you use to select material (e.g. date range)?
In the example below, respondents had to answer with a 7-point Likert scale. The aim was to conduct the survey with 350 customers of Company X on the company premises in The Hague from 4-8 July 2017 between 11:00 and 15:00. A customer was defined as a person who had purchased a product from Company X on the day of questioning. Participants were given 5 minutes to fill in the survey anonymously, and 408 customers responded. Because not all surveys were fully completed, 371 survey results were included in the analysis.
Qualitative methods
● Interviews
○ How long were the interviews and how were they recorded?
● Participant observation
○ How long did you spend conducting the research and where was it located?
○ How did you record your data (e.g. audiovisual recordings, note-taking)?
● Existing data
○ Explain how you selected case study materials (such as texts or images) for the focus of your analysis.
In order to gain a better insight into the possibilities for improvement of the product range, interviews were conducted with returning customers from the main target group of Company X. A returning customer was defined as someone who usually bought products at least twice a week from Company X. The surveys were used to select participants who belonged to the target group (20-45 years old). Interviews were conducted in a small office next to the cash register. Several interviews were also filmed with consent. One interviewee preferred not to be filmed.
Unidimensionality means that a scale measures just one underlying dimension or construct. Examples of unidimensional measures include:
● Height of people.
● Weight of cars.
● IQ.
● Volume of liquid.
Unidimensionality can also refer to measuring a single ability, attribute, construct, or skill. For
example, a unidimensional mathematical test would be designed to measure only mathematical ability
(and not, say, grasp of English grammar, knowledge of sports, or other non-mathematical subjects or
concepts).
Some concepts (like height or weight) are obviously unidimensional. Others can be forced into a
unidimensional status by narrowing the idea into a single, measurable construct. For example,
self-worth is a psychological concept that has many layers of complexity and can be different for
different situations (at home, at a party, at work, at your wedding). However, you can narrow the
concept by making a simple line that has “low self worth” on the left and “high self worth” on the right.
The term scaling comes from psychometrics, where abstract concepts (“objects”) are assigned
numbers according to a rule (Trochim, 2006). For example, you may want to quantify a person’s
attitude to global warming. You could assign a “1” to “doesn’t believe in global warming”, a 10 to “firmly
believes in global warming” and a scale of 2 to 9 for attitudes in between. You can also think of
“scaling” as the fact that you’re essentially scaling down the data (i.e. making it simpler by creating
lower-dimensional data). Data that is scaled down in dimension keeps similar properties. For example,
two data points that are close together in high-dimensional space will also be close together in
low-dimensional space (Martinez, 2005). The “multidimensional” part is due to the fact that you aren’t
limited to two dimensional graphs or data. Three-dimensional, four-dimensional and higher plots are
possible.
2. Measurement Scales
A measurement scale, in statistical analysis, indicates the type of information provided by numbers. Each of the
four scales (i.e., nominal, ordinal, interval, and ratio) provides a different type of information.
Measurement refers to the assignment of numbers in a meaningful way, and understanding
measurement scales is important to interpreting the numbers assigned to people, objects, and events.
2.1. Nominal
Nominal Scales
In nominal scales, numbers, such as driver’s license numbers and product serial numbers, are used to
name or identify people, objects, or events. Gender is an example of a nominal measurement in which
a number (e.g., 1) is used to label one gender, such as males, and a different number (e.g., 2) is used
for the other gender, females. Numbers do not mean that one gender is better or worse than the other;
they simply are used to classify persons. In fact, any other numbers could be used, because they do
not represent an amount or a quality. It is impossible to use word names with certain statistical
techniques, but numerals can be used in coding systems. For example, fire departments may wish to
examine the relationship between gender (where male = 1, female = 2) and performance on
physical-ability tests (with numerical scores indicating ability).
2.2. Ordinal
Ordinal Scales
In ordinal scales, numbers represent rank order and indicate the order of quality or quantity, but they
do not provide an amount of quantity or degree of quality. Usually, the number 1 means that the person
(or object or event) is better than the person labeled 2; person 2 is better than person 3, and so
forth—for example, to rank order persons in terms of potential for promotion, with the person assigned
the 1 rating having more potential than the person assigned a rating of 2. Such ordinal scaling does not, however, indicate how much more potential the top-rated person has over the person assigned a rating of 2, and there may be very little difference between 1 and 2 here. When ordinal measurement is used
(rather than interval measurement), certain statistical techniques are applicable (e.g., Spearman’s rank
correlation).
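Since ordinal data support rank-based statistics, the following sketch computes Spearman's rank correlation with SciPy; the two assessors' promotion-potential ranks are invented for illustration.

from scipy.stats import spearmanr

# Invented promotion-potential ranks assigned by two assessors (1 = highest).
assessor_a = [1, 2, 3, 4, 5, 6]
assessor_b = [2, 1, 3, 5, 4, 6]

rho, p_value = spearmanr(assessor_a, assessor_b)
print(f"Spearman's rho = {rho:.2f} (p = {p_value:.3f})")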
2.3. Interval
Interval Scale
In interval scales, numbers form a continuum and provide information about the amount of difference,
but the scale lacks a true zero. The differences between adjacent numbers are equal or known. If zero
is used, it simply serves as a reference point on the scale but does not indicate the complete absence
of the characteristic being measured. The Fahrenheit and Celsius temperature scales are examples of
interval measurement. In those scales, 0 °F and 0 °C do not indicate an absence of temperature.
2.4. Ratio
Ratio Scales
Ratio scales have all of the characteristics of interval scales as well as a true zero, which refers to
complete absence of the characteristic being measured. Physical characteristics of persons and
objects can be measured with ratio scales, and, thus, height and weight are examples of ratio
measurement. A score of 0 means there is complete absence of height or weight. A person who is 1.2
metres (4 feet) tall is two-thirds as tall as a 1.8-metre- (6-foot-) tall person. Similarly, a person weighing
45.4 kg (100 pounds) is two-thirds as heavy as a person who weighs 68 kg (150 pounds).
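A small sketch makes the interval/ratio distinction concrete: ratios are meaningful for weight, which has a true zero, but not for Celsius temperature, whose zero is arbitrary. The numbers echo the examples above.

# Ratio scale: weight has a true zero, so ratios are meaningful.
weight_a, weight_b = 45.4, 68.0            # kg
print(weight_a / weight_b)                 # ~0.67: "two-thirds as heavy" is valid

# Interval scale: the Celsius zero is arbitrary, so ratios are NOT meaningful.
temp_a, temp_b = 10.0, 20.0                # degrees Celsius
print(temp_a / temp_b)                     # 0.5, but 20 C is not "twice as hot" as 10 C

# Converting to Fahrenheit changes the ratio, showing it carries no meaning.
def to_fahrenheit(c):
    return c * 9 / 5 + 32
print(to_fahrenheit(temp_a) / to_fahrenheit(temp_b))  # 50/68 ~ 0.74, not 0.5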
Rating scale is defined as a closed-ended survey question used to represent respondent feedback in a comparative form for specific features/products/services. It is one of the most established
question types for online and offline surveys where survey respondents are expected to rate an
attribute or feature. Rating scale is a variant of the popular multiple-choice question which is widely
used to gather information that provides relative information about a specific topic.
Researchers use a rating scale in research when they intend to associate a qualitative measure with
the various aspects of a product or feature. Generally, this scale is used to evaluate the performance of
a product or service, employee skills, customer service performances, processes followed for a
particular goal etc. Rating scale survey question can be compared to a checkbox question but rating
scale provides more information than merely Yes/No.
Broadly speaking, rating scales can be divided into two categories: Ordinal and Interval Scales.
An ordinal scale is a scale that depicts the answer options in an ordered manner. The difference between two answer options may not be calculable, but the answer options will always be in a certain innate order. Parameters such as attitude or feedback can be presented using an ordinal scale.
An interval scale is a scale where not only is the order of the answer variables established but the
magnitude of difference between each answer variable is also calculable. Absolute or true zero value
is not present in an interval scale. Temperature in Celsius or Fahrenheit is the most popular example of
an interval scale. Net Promoter Score, Likert Scale, Bipolar Matrix Table are some of the most effective
types of interval scale.
There are four primary types of rating scales which can be suitably used in an online survey:
(i) Graphic Rating Scale: Graphic rating scale indicates the answer options on a scale of 1-3, 1-5,
etc. Likert Scale is a popular graphic rating scale example. Respondents can select a particular option
on a line or scale to depict rating. This rating scale is often implemented by HR managers to conduct employee evaluation.
(ii) Numerical Rating Scale: A numerical rating scale has numbers as answer options, and not every number necessarily corresponds to a characteristic or meaning. For instance, a Visual Analog Scale or a Semantic Differential Scale can be presented using a numerical rating scale.
(iii) Descriptive Rating Scale: In a descriptive rating scale, each answer option is elaborately explained for the respondents. A numerical value is not always related to the answer options in the descriptive rating scale. Certain surveys, for example a customer satisfaction survey, need to describe all the answer options in detail so that every customer has thoroughly explained information about what is expected from the survey.
(iv) Comparative Rating Scale: Comparative rating scale, as the name suggests, expects
respondents to answer a particular question in terms of comparison, i.e. on the basis of relative
measurement or keeping other organizations/products/features as a reference.
RANKING SCALE
A ranking scale is a survey question tool that measures people’s preferences by asking them to rank
their views on a list of related items. Using these scales can help your business establish what matters
and what doesn’t matter to either external or internal stakeholders. You could use ranking scale
questions to evaluate customer satisfaction or to assess ways to motivate your employees, for
example. Ranking scales can be a source of useful information, but they do have some disadvantages.
Businesses typically use ranking scales when they want to establish preferences or levels of
importance in a group of items. A respondent completing a scale with five items, for example, will
assign a number 1 through 5 to each individual one. Typically, the number 1 goes to the item that is
most important to the respondent; the number 5 goes to the one that is of least importance. In some
cases, scales do not force respondents to rank all items, asking them to choose their top three out of
the five, for example. Online surveys may remove the need to key in numbers, allowing respondents to
drag and drop items into order.
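A minimal sketch of how ranking-scale responses might be aggregated, assuming the common convention that rank 1 is most important; the items and responses are invented.

# Each respondent ranks five items, 1 = most important ... 5 = least important.
# The items and responses are invented.
items = ["price", "quality", "service", "brand", "delivery"]
rankings = [
    [1, 2, 3, 4, 5],
    [2, 1, 4, 5, 3],
    [1, 3, 2, 5, 4],
]

# Lower mean rank = higher overall importance to respondents.
mean_rank = {item: sum(r[i] for r in rankings) / len(rankings)
             for i, item in enumerate(items)}
for item, rank in sorted(mean_rank.items(), key=lambda kv: kv[1]):
    print(f"{item}: mean rank {rank:.2f}")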
3. Thurstone Scaling
Although there are technically three scales, when people refer to the “Thurstone Scale” they’re
usually talking about the method of equal-appearing intervals. It’s called “Equal appearing
intervals” because when you choose the items for your test (see Step 6 below), you’re picking
items equally spaced apart.
● The method of successive intervals: this method is more challenging to implement than
equal-appearing intervals.
● The method of paired comparisons: requires twice as many judgments as the equal-appearing intervals method and can quickly become very time-consuming.
The three methods differ in their construction, but still result in the same Agree/Disagree quiz
given to respondents.
Step 1: Develop a large number of agree/disagree statements for a topic. For example, if you
wanted to find out people’s attitudes towards immigrants, your statements might include:
Step 2: Have a panel of judges rate the items on a scale of 1 to 11 for how favorable each item is towards the topic (in this case, immigration). The lowest score (1) should indicate an extremely unfavorable attitude and the highest score (11) an extremely favorable attitude. Note that you do not want the judges to agree or disagree with the statements; you want them to rate the statements on how effective they would be at uncovering attitudes.
Step 3: Find the median score and interquartile range (IQR) for each item. If you have 50
items, you should have 50 median scores and 50 IQRs.
Step 4: Sort the table in ascending order (smallest to largest) by median. In other words, the 1s should be at the top of the table and the 11s should be at the bottom.
Step 5: For each set of medians (i.e. 1s, 2s, 3s) sort the IQRs in descending order (largest to smallest).
The figure below shows a partial table with the data sorted according to ascending medians
with their respective, descending IQRs.
Step 6: Select your final scale items using the table you created in Step 4 and 5. For example,
you might choose one item from each median value.
You want the statements with the most agreement between judges. For each median value,
this is the item with the lowest interquartile range. This is a “Rule of Thumb”: you don’t have to
choose this item. If you decide it’s poorly worded or ambiguous, choose the item above it (with
the next lowest IQR).
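Steps 3 to 6 can be sketched with pandas: compute each statement's median and IQR across judges, sort, and keep the statement with the smallest IQR within each median value. The judge ratings below are invented.

import pandas as pd

# Rows = judges, columns = statements; ratings on the 1-11 favourability scale.
# All ratings below are invented for illustration.
ratings = pd.DataFrame({
    "item_1": [2, 3, 2, 4, 3],
    "item_2": [9, 10, 9, 8, 10],
    "item_3": [6, 5, 7, 6, 6],
    "item_4": [6, 4, 8, 9, 5],   # same median as item_3, but judges disagree more
})

# Step 3: median and interquartile range (IQR) for each item.
summary = pd.DataFrame({
    "median": ratings.median(),
    "iqr": ratings.quantile(0.75) - ratings.quantile(0.25),
})

# Steps 4-5: order by median, then by agreement (IQR) within each median.
summary = summary.sort_values(["median", "iqr"])
print(summary)

# Step 6 rule of thumb: within each median value, keep the item whose judges
# agreed most, i.e. the one with the smallest IQR.
selected = summary.groupby("median").head(1)
print(selected)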
4. Likert Scale
Various kinds of rating scales have been developed to measure attitudes directly (i.e. the person knows their attitude is being studied). The most widely used is the Likert Scale.
Likert (1932) developed the principle of measuring attitudes by asking people to respond to a series of
statements about a topic, in terms of the extent to which they agree with them, and so tapping into the
cognitive and affective components of attitudes.
Likert-type or frequency scales use fixed choice response formats and are designed to measure
attitudes or opinions (Bowling, 1997; Burns, & Grove, 1997). These ordinal scales measure levels of
agreement/disagreement.
A Likert-type scale assumes that the strength/intensity of experience is linear, i.e. on a continuum from
strongly agree to strongly disagree, and makes the assumption that attitudes can be measured.
Respondents may be offered a choice of five to seven or even nine pre-coded responses with the
neutral point being neither agree nor disagree.
In its final form, the Likert Scale is a five (or seven) point scale which is used to allow the individual to
express how much they agree or disagree with a particular statement.
For example
I believe that ecological questions are the most important issues facing human beings today.
Each of the five (or seven) responses would have a numerical value which would be used to measure
the attitude under investigation.
(i) Agreement
● Strongly Agree
● Agree
● Undecided
● Disagree
● Strongly Disagree
(ii) Frequency
● Very Frequently
● Frequently
● Occasionally
● Rarely
● Never
(iii) Importance
● Very Important
● Important
● Moderately Important
● Of Little Importance
● Unimportant
(iv) Likelihood
Because Likert data are ordinal, the usual advice for analysis is to:
(i) Summarize using a median or a mode (not a mean); the mode is probably the most suitable for
easy interpretation.
(ii) Display the distribution of observations in a bar chart (it can't be a histogram, because the data are not continuous). A short sketch of both steps follows.
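Here is a minimal Python sketch of both steps, using invented responses on a 5-point agreement scale; the frequency counts printed at the end are exactly what a bar chart would display.

from collections import Counter
from statistics import median, mode

# Invented responses on a 5-point agreement scale
# (1 = Strongly Disagree ... 5 = Strongly Agree).
responses = [4, 5, 4, 3, 2, 4, 5, 4, 3, 4, 1, 4]

print("mode:", mode(responses))      # (i) most common category: easiest to interpret
print("median:", median(responses))  # also acceptable for ordinal data

# (ii) Frequency distribution; these counts are what a bar chart would plot
# (not a histogram, since the categories are discrete).
counts = Counter(sorted(responses))
for category, n in counts.items():
    print(f"{category}: {'#' * n}")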
Critical Evaluation
Likert Scales have the advantage that they do not expect a simple yes / no answer from the
respondent, but rather allow for degrees of opinion, and even no opinion at all. Therefore quantitative
data is obtained, which means that the data can be analyzed with relative ease.
However, like all surveys, the validity of Likert scale attitude measurement can be compromised due to social desirability. This means that individuals may lie to put themselves in a positive light. For example, if a Likert scale were measuring discrimination, who would admit to being racist?
Offering anonymity on self-administered questionnaires should further reduce social pressure, and
thus may likewise reduce social desirability bias.
Paulhus (1984) found that more desirable personality characteristics were reported when people were asked to write their names, addresses and telephone numbers on their questionnaires than when they were told not to put identifying information on the questionnaire.
5. Semantic Differential Scale
The Semantic Differential Scale is a seven-point rating scale used to derive the respondent's attitude towards a given object or event by asking him to select an appropriate position on a scale between two bipolar adjectives (such as "warm" or "cold", "powerful" or "weak", etc.).
For example, the respondent might be asked to rate the following five attributes of Shoppers Stop by choosing a position on the scale between the adjectives that best describes what Shoppers Stop really means to him.
The respondent will place a mark anywhere between the two extreme adjectives, representing his attitude towards the object. In the above example, Shoppers Stop is evaluated as organized, cold, modern, reliable and simple.
Sometimes the negative adjectives are placed on the right and sometimes on the left side of a scale.
This is done to control the tendency of the respondents, especially those with either very positive or
negative attitudes, to mark the right or left-hand sides of a scale without reading the labels.
The items on a semantic differential scale can be scored on either a numerical range of -3 to +3 or 1
to 7. The data obtained are analyzed through profile analysis. In profile analysis, the means and
medians of the scale values are found out and then are compared by plotting or statistical analysis.
Through this method, it is possible to compare the overall similarities and differences among the
objects.
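A sketch of a simple profile analysis with pandas, assuming invented semantic-differential scores (1 to 7) for two stores on the five attributes from the example above; comparing the two mean-score columns shows where the objects differ.

import pandas as pd

# Invented semantic-differential scores (1..7) for two stores on the five
# bipolar attributes from the example above.
scores = pd.DataFrame({
    "attribute": ["organized", "warm-cold", "modern", "reliable", "simple"] * 2,
    "store": ["Shoppers Stop"] * 5 + ["Competitor"] * 5,
    "score": [6, 3, 6, 5, 5, 4, 5, 4, 4, 3],
})

# Profile analysis: mean score per attribute for each object; comparing the
# columns (or plotting them as line profiles) shows similarities and differences.
profile = scores.pivot_table(index="attribute", columns="store",
                             values="score", aggfunc="mean")
print(profile)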
The versatility of the semantic differential scale increases its application in the marketing research. It is
widely used in comparing the brand, company image, and product. It also helps in developing an
advertising campaign and promotional strategies in new product development studies.
6. Paired comparison
Definition: The Paired Comparison Scaling is a comparative scaling technique wherein
the respondent is shown two objects at the same time and is asked to select one according
to the defined criterion. The resulting data are ordinal in nature.
The paired comparison scaling is often used when the stimulus objects are physical products. The comparison data so obtained can be analyzed in either of two ways. First,
the researcher can compute the percentage of respondents who prefer one object over
another by adding the matrices for each respondent, dividing the sum by the number of
respondents and then multiplying it by 100. Through this method, all the stimulus objects
can be evaluated simultaneously.
Second, under the assumption of transitivity (which implies that if brand X is preferred to
Brand Y, and brand Y to brand Z, then brand X is preferred to brand Z) the paired
comparison data can be converted into a rank order. To determine the rank order, the
researcher identifies the number of times the object is preferred by adding up all the
matrices.
The paired comparison method is effective when the number of objects is limited, because it requires direct comparison; with a large number of stimulus objects the comparison becomes cumbersome. Also, if there is a violation of the assumption of transitivity, the order in which the objects are placed may bias the results.
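A minimal sketch of both analyses, assuming invented preference matrices where cell [i, j] = 1 means the respondent preferred brand i to brand j.

import numpy as np

brands = ["X", "Y", "Z"]

# One matrix per respondent: cell [i, j] = 1 if brand i was preferred to brand j.
# The choices below are invented.
respondents = [
    np.array([[0, 1, 1], [0, 0, 1], [0, 0, 0]]),  # X > Y > Z
    np.array([[0, 1, 1], [0, 0, 0], [0, 1, 0]]),  # X > Z > Y
    np.array([[0, 1, 1], [0, 0, 1], [0, 0, 0]]),  # X > Y > Z
]

# First analysis: percentage of respondents preferring row brand over column brand.
total = sum(respondents)
print(total / len(respondents) * 100)

# Second analysis: under transitivity, each brand's number of "wins" gives a rank order.
wins = total.sum(axis=1)
print("rank order:", [brands[i] for i in np.argsort(-wins)])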
Reliability
Test-Retest Reliability
When researchers measure a construct that they assume to be consistent across time, then the scores they obtain should also be consistent across time. Test-retest reliability is the extent to which this is actually the case. For example, intelligence is generally thought to be consistent across time. A person who is highly intelligent today will be highly intelligent next week. This means that any good measure of intelligence should produce roughly the same scores for this individual next week as it does today. Clearly, a measure that produces highly inconsistent scores over time cannot be a very good measure of a construct that is supposed to be consistent.
Assessing test-retest reliability requires using the measure on a group of people at one time, using it again on the same group of people at a later time, and then looking at the test-retest correlation between the two sets of scores. This is typically done by graphing the data in a scatterplot and computing Pearson's r. Figure 5.2 shows the correlation between two sets of scores of several university students on the Rosenberg Self-Esteem Scale, administered two times, a week apart. Pearson's r for these data is +.95. In general, a test-retest correlation of +.80 or greater is considered to indicate good reliability.
Figure 5.2 Test-Retest Correlation Between Two Sets of Scores of Several College Students on the
Rosenberg Self-Esteem Scale, Given Two Times a Week Apart
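A sketch of the test-retest computation with NumPy; the scores are invented and are not the Figure 5.2 data.

import numpy as np

# Invented self-esteem scores for the same ten students, one week apart
# (not the Figure 5.2 data).
time_1 = np.array([22, 25, 18, 30, 27, 20, 24, 29, 21, 26])
time_2 = np.array([23, 24, 19, 29, 28, 21, 23, 30, 20, 27])

r = np.corrcoef(time_1, time_2)[0, 1]
print(f"test-retest r = {r:.2f}")  # +.80 or greater is conventionally "good"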
Again, high test-retest correlations make sense when the construct being measured is assumed to be consistent
over time, which is the case for intelligence, self-esteem, and the Big Five personality dimensions. But other
constructs are not assumed to be stable over time. The very nature of mood, for example, is that it changes. So a
measure of mood that produced a low test-retest correlation over a period of a month would not be a cause for
concern.
Internal Consistency
A second kind of reliability is internal consistency, which is the consistency of people's responses across the items on a multiple-item measure. In general, all the
items on such measures are supposed to reflect the same underlying construct, so people’s scores on those items
should be correlated with each other. On the Rosenberg Self-Esteem Scale, people who agree that they are a
person of worth should tend to agree that that they have a number of good qualities. If people’s responses to the
different items are not correlated with each other, then it would no longer make sense to claim that they are all
measuring the same underlying construct. This is as true for behavioural and physiological measures as for
self-report measures. For example, people might make a series of bets in a simulated game of roulette as a
measure of their level of risk seeking. This measure would be internally consistent to the extent that individual
participants’ bets were consistently high or low across trials.
Like test-retest reliability, internal consistency can only be assessed by collecting and analyzing data. One approach is to look at a split-half correlation. This involves splitting the items into two sets, such as the first and second halves of the items or the even- and
odd-numbered items. Then a score is computed for each set of items, and the relationship between the two sets of
scores is examined. For example, Figure 5.3 shows the split-half correlation between several university students’
scores on the even-numbered items and their scores on the odd-numbered items of the Rosenberg Self-Esteem
Scale. Pearson’s r for these data is +.88. A split-half correlation of +.80 or greater is generally considered good
internal consistency.
Figure 5.3 Split-Half Correlation Between Several College Students’ Scores on the Even-Numbered
Items and Their Scores on the Odd-Numbered Items of the Rosenberg Self-Esteem Scale
Perhaps the most common measure of internal consistency used by researchers in psychology is a statistic called Cronbach's α (the Greek letter alpha). Conceptually, α is the mean of all possible split-half correlations for a set of items. For
example, there are 252 ways to split a set of 10 items into two sets of five. Cronbach’s α would be the mean of
the 252 split-half correlations. Note that this is not how α is actually computed, but it is a correct way of
interpreting the meaning of this statistic. Again, a value of +.80 or greater is generally taken to indicate good
internal consistency.
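A sketch of both internal-consistency estimates on an invented response matrix: a split-half correlation of even- versus odd-numbered item totals, and Cronbach's α computed from the standard item-variance formula α = (k / (k − 1)) × (1 − Σ item variances / variance of total scores).

import numpy as np

# Invented responses: rows = 50 respondents, columns = 10 scale items (1..5).
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(50, 1))                      # a "true score" per person
items = np.clip(base + rng.integers(-1, 2, size=(50, 10)), 1, 5)

# Split-half: correlate totals on odd-numbered items with totals on even-numbered items.
odd = items[:, 0::2].sum(axis=1)    # items 1, 3, 5, 7, 9
even = items[:, 1::2].sum(axis=1)   # items 2, 4, 6, 8, 10
print("split-half r:", round(np.corrcoef(odd, even)[0, 1], 2))

# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                       / items.sum(axis=1).var(ddof=1))
print("Cronbach's alpha:", round(alpha, 2))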
Interrater Reliability
Many behavioural measures involve significant judgment on the part of an observer or a rater. Inter-rater reliability is the extent to which different observers are consistent in their judgments. For example, if you were interested
in measuring university students’ social skills, you could make video recordings of them as they interacted with
another student whom they are meeting for the first time. Then you could have two or more observers watch the
videos and rate each student’s level of social skills. To the extent that each participant does in fact have some
level of social skills that can be detected by an attentive observer, different observers’ ratings should be highly
correlated with each other. Inter-rater reliability would also have been measured in Bandura’s Bobo doll study. In
this case, the observers’ ratings of how many acts of aggression a particular child committed while playing with
the Bobo doll should have been highly positively correlated. Interrater reliability is often assessed using
Cronbach’s α when the judgments are quantitative or an analogous statistic called Cohen’s κ (the Greek letter
kappa) when they are categorical.
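A sketch of inter-rater agreement on categorical judgments using scikit-learn's cohen_kappa_score; the two observers' codings are invented.

from sklearn.metrics import cohen_kappa_score

# Two observers categorize the same 12 behaviours (invented codings).
rater_1 = ["aggressive", "neutral", "aggressive", "neutral", "neutral", "aggressive",
           "neutral", "aggressive", "neutral", "neutral", "aggressive", "neutral"]
rater_2 = ["aggressive", "neutral", "aggressive", "aggressive", "neutral", "aggressive",
           "neutral", "aggressive", "neutral", "neutral", "neutral", "neutral"]

kappa = cohen_kappa_score(rater_1, rater_2)
print(f"Cohen's kappa = {kappa:.2f}")  # agreement corrected for chance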
Validity
Validity is the extent to which the scores from a measure represent the variable they are intended to. But how do
researchers make this judgment? We have already considered one factor that they take into account—reliability.
When a measure has good test-retest reliability and internal consistency, researchers should be more confident
that the scores represent what they are supposed to. There has to be more to it, however, because a measure can
be extremely reliable but have no validity whatsoever. As an absurd example, imagine someone who believes
that people’s index finger length reflects their self-esteem and therefore tries to measure self-esteem by holding a
ruler up to people’s index fingers. Although this measure would have extremely good test-retest reliability, it
would have absolutely no validity. The fact that one person’s index finger is a centimetre longer than another’s
would indicate nothing about which one had higher self-esteem.
Discussions of validity usually divide it into several distinct “types.” But a good way to interpret these types is
that they are other kinds of evidence—in addition to reliability—that should be taken into account when judging
the validity of a measure. Here we consider three basic kinds: face validity, content validity, and criterion
validity.
Face Validity
Face validity is the extent to which a measurement method appears "on its face" to measure the construct of interest. Most
people would expect a self-esteem questionnaire to include items about whether they see themselves as a person
of worth and whether they think they have good qualities. So a questionnaire that included these kinds of items
would have good face validity. The finger-length method of measuring self-esteem, on the other hand, seems to
have nothing to do with self-esteem and therefore has poor face validity. Although face validity can be assessed
quantitatively—for example, by having a large sample of people rate a measure in terms of whether it appears to
measure what it is intended to—it is usually assessed informally.
Face validity is at best a very weak kind of evidence that a measurement method is measuring what it is supposed
to. One reason is that it is based on people’s intuitions about human behaviour, which are frequently wrong. It is
also the case that many established measures in psychology work quite well despite lacking face validity. The
Minnesota Multiphasic Personality Inventory-2 (MMPI-2) measures many personality characteristics and
disorders by having people decide whether each of its 567 statements applies to them, where many of the statements do not have any obvious relationship to the construct that they measure. For example, the items
“I enjoy detective or mystery stories” and “The sight of blood doesn’t frighten me or make me sick” both
measure the suppression of aggression. In this case, it is not the participants’ literal answers to these questions
that are of interest, but rather whether the pattern of the participants’ responses to a series of questions matches
those of individuals who tend to suppress their aggression.
Content Validity
Content validity is the extent to which a measure "covers" the construct of interest. For example, if a researcher conceptually
defines test anxiety as involving both sympathetic nervous system activation (leading to nervous feelings) and
negative thoughts, then his measure of test anxiety should include items about both nervous feelings and negative
thoughts. Or consider that attitudes are usually defined as involving thoughts, feelings, and actions toward
something. By this conceptual definition, a person has a positive attitude toward exercise to the extent that he or
she thinks positive thoughts about exercising, feels good about exercising, and actually exercises. So to have
good content validity, a measure of people’s attitudes toward exercise would have to reflect all three of these
aspects. Like face validity, content validity is not usually assessed quantitatively. Instead, it is assessed by
carefully checking the measurement method against the conceptual definition of the construct.
Criterion Validity
Criterion validity is the extent to which people's scores on a measure are correlated with other variables (known as criteria) that one would expect them to be correlated with. For example, people's scores on a new measure of test
anxiety should be negatively correlated with their performance on an important school exam. If it were found that
people’s scores were in fact negatively correlated with their exam performance, then this would be a piece of
evidence that these scores really represent people’s test anxiety. But if it were found that people scored equally
well on the exam regardless of their test anxiety scores, then this would cast doubt on the validity of the measure.
A criterion can be any variable that one has reason to think should be correlated with the construct being
measured, and there will usually be many of them. For example, one would expect test anxiety scores to be
negatively correlated with exam performance and course grades and positively correlated with general anxiety
and with blood pressure during an exam. Or imagine that a researcher develops a new measure of physical risk
taking. People’s scores on this measure should be correlated with their participation in “extreme” activities such
as snowboarding and rock climbing, the number of speeding tickets they have received, and even the number of
broken bones they have had over the years. When the criterion is measured at the same time as the construct, criterion validity is referred to as concurrent validity; however, when the criterion is measured at some point in the future (after the construct has been measured), it is referred to as predictive validity.
Convergent Validity
Convergent validity is the extent to which scores on a measure are correlated with measures of conceptually similar variables. Assessing convergent validity requires collecting data using the measure. Researchers John Cacioppo and Richard Petty did this when they created their self-report Need for Cognition Scale to measure how much people value and engage in thinking (Cacioppo & Petty, 1982). In a series of studies, they showed that people's scores
were positively correlated with their scores on a standardized academic achievement test, and that their scores
were negatively correlated with their scores on a measure of dogmatism (which represents a tendency toward
obedience). In the years since it was created, the Need for Cognition Scale has been used in literally hundreds of
studies and has been shown to be correlated with a wide variety of other variables, including the effectiveness of
an advertisement, interest in politics, and juror decisions (Petty, Briñol, Loersch, & McCaslin, 2009)
Discriminant Validity
Discriminant validity, on the other hand, is the extent to which scores on a measure are not correlated with measures of variables that
are conceptually distinct. For example, self-esteem is a general attitude toward the self that is fairly stable over
time. It is not the same as mood, which is how good or bad one happens to be feeling right now. So people’s
scores on a new measure of self-esteem should not be very highly correlated with their moods. If the new
measure of self-esteem were highly correlated with a measure of mood, it could be argued that the new measure
is not really measuring self-esteem; it is measuring mood instead.
When they created the Need for Cognition Scale, Cacioppo and Petty also provided evidence of discriminant
validity by showing that people’s scores were not correlated with certain other variables. For example, they found
only a weak correlation between people’s need for cognition and a measure of their cognitive style—the extent to
which they tend to think analytically by breaking ideas into smaller parts or holistically in terms of “the big
picture.” They also found no correlation between people’s need for cognition and measures of their test anxiety
and their tendency to respond in socially desirable ways. All these low correlations provide evidence that the
measure is reflecting a conceptually distinct construct.
8. Sampling
In research terms a sample is a group of people, objects, or items that are taken from a larger
population for measurement. The sample should be representative of the population to ensure
that we can generalise the findings from the research sample to the population as a whole.
What is the purpose of sampling? To draw conclusions about populations from samples, we
must use inferential statistics, to enable us to determine a population’s characteristics by
directly observing only a portion (or sample) of the population. We sample the population for many reasons, as a complete census is usually not practical and almost never economical. There would also be difficulties measuring whole populations because:
• The large size of many populations
• Inaccessibility of some of the population - Some populations are so difficult to get access to
that only a sample can be used. E.g. prisoners, people with severe mental illness, disaster
survivors etc. The inaccessibility may be associated with cost or time or just access.
• Destructiveness of the observation- Sometimes the very act of observing the desired
characteristic of the product destroys it for the intended use. Good examples of this occur in
quality control. E.g. to determine the quality of a fuse and whether it is defective, it must be
destroyed. Therefore if you tested all the fuses, all would be destroyed.
• Accuracy and sampling - A sample may be more accurate than the total study population. A
badly identified population can provide less reliable information than a carefully obtained
sample.
8.1. Steps
Defining the population of interest is the first step in the sampling process for business research. In general, the target population is defined in terms of element, sampling unit, extent, and time frame. The definition should be in line
with the objectives of the research study. For example, if a kitchen appliances firm wants to conduct a survey to ascertain the demand for its microwave ovens, it may define the population as 'all women above the age of 20 who cook' (assuming that very few men cook). However, this definition is too broad and will include every household in the country in the population that is to be covered by the survey. Therefore the definition can be further refined at the sampling unit level, that is, all women above the age of 20 who cook and whose monthly household income exceeds Rs. 20,000. This reduces the target population size and makes the research more
focused. The population definition can be refined further by specifying the area from where the researcher has to
draw his sample, that is, households located in Hyderabad.
A well defined population reduces the probability of including the respondents who do not fit the research
objective of the company. For example, if the population is defined as all women above the age of 20, the researcher may end up taking the opinions of a large number of women who cannot afford to buy a microwave oven.
Once the definition of the population is clear, a researcher should decide on the sampling frame. A sampling frame is the list of elements from which the sample may be drawn. Continuing with the microwave oven example, an ideal sampling frame would be a database that contains all the households that have a monthly income above Rs. 20,000. However, in practice it is difficult to get an exhaustive sampling frame that exactly fits the requirements of a particular research study. In general, researchers use easily available sampling frames like telephone directories and lists of credit card and mobile phone users. Various private players provide databases developed along various demographic and economic variables. Sometimes, maps and aerial pictures are also used as sampling frames. Whatever may be the case, an ideal sampling frame is one that covers the entire population and lists the names of its elements only once.
A sampling frame error pops up when the sampling frame does not accurately represent the total population or when some elements of the population are missing. Another drawback in the sampling frame is over-representation: a telephone directory can over-represent names/households that have two or more connections.
A sampling unit is a basic unit that contains a single element or a group of elements of the population to be
sampled. In this case, a household becomes a sampling unit and all women above the age of 20 years living in
that particular house become the sampling elements. If it is possible to identify the exact target audience of the
business research, every individual element would be a sampling unit. This would present a case of primary
sampling unit. However, a convenient and better means of sampling would be to select households as the
sampling unit and interview all females above 20 years, who cook. This would present a case of secondary
sampling unit.
The sampling method outlines the way in which the sample units are to be selected. The choice of the sampling
method is influenced by the objectives of the business research, availability of financial resources, time
constraints, and the nature of the problem to be investigated. All sampling methods can be grouped under two
distinct heads, that is, probability and non-probability sampling.
The sample size plays a crucial role in the sampling process. There are various ways of classifying the
techniques used in determining the sample size. A couple that hold primary importance and are worth mentioning are whether the technique deals with fixed or sequential sampling and whether its logic is based on
traditional or Bayesian methods. In non-probability sampling procedures, the allocation of budget, thumb rules
and number of sub groups to be analyzed, importance of the decision, number of variables, nature of analysis,
incidence rates, and completion rates play a major role in sample size determination. In the case of probability
sampling, however, formulas are used to calculate the sample size after the levels of acceptable error and level
of confidence are specified. The details of the various techniques used to determine the sample size will be
explained at the end of the chapter.
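As one illustration of such a formula, the standard sample-size calculation for estimating a proportion is n = z²p(1 − p) / e²; the sketch below assumes 95% confidence (z = 1.96), a ±5% acceptable error, and the conservative choice p = 0.5.

import math

def sample_size(z: float, p: float, e: float) -> int:
    """Sample size for estimating a proportion: n = z^2 * p * (1 - p) / e^2."""
    return math.ceil(z ** 2 * p * (1 - p) / e ** 2)

# 95% confidence (z = 1.96), +/-5% acceptable error, p = 0.5 (most conservative).
print(sample_size(z=1.96, p=0.5, e=0.05))  # 385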
In this step, the specifications and decisions regarding the implementation of the research process are outlined.
Suppose, blocks in a city are the sampling units and the households are the sampling elements. This step
outlines the modus operandi of the sampling plan in identifying houses based on specified characteristics. It
includes issues like: How is the interviewer going to take a systematic sample of the houses? What should the interviewer do when a house is vacant? What is the recontact procedure for respondents who were unavailable?
All these and many other questions need to be answered for the smooth functioning of the research process.
These are guidelines that would help the researcher in every step of the process. As the interviewers and their co-workers will be on field duty most of the time, a proper specification of the sampling plans would make their work easy and they would not have to revert to their seniors when faced with operational problems.
This is the final step in the sampling process, where the actual selection of the sample elements is carried out.
At this stage, it is necessary that the interviewers stick to the rules outlined for the smooth implementation of the
business research. This step involves implementing the sampling plan to select the sample required for the survey.
8.2. Types
Sampling methods fall under two heads: probability and non-probability sampling. Probability sampling offers the following advantages (a short sampling sketch follows the list):
● Reduce Sample Bias: Using the probability sampling method, the bias in the
sample derived from a population is negligible to non-existent. The selection of
the sample largely depicts the understanding and the inference of the
researcher. Probability sampling leads to higher quality data collection as the
population is appropriately represented by the sample.
● Diverse Population: When the population is large and diverse, it is important
to have adequate representation so that the data is not skewed towards one
demographic. For example, if Square would like to understand the people that could use their point-of-sale devices, a survey conducted from a sample of people across the US from different industries and socio-economic backgrounds helps.
● Create an Accurate Sample: Probability sampling helps the researchers plan
and create an accurate sample. This helps to obtain well-defined data.
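A minimal sketch of the simplest probability method, simple random sampling, where every element in the frame has an equal chance of selection; the frame here is a toy list.

import random

# Toy sampling frame; in practice this is the full list of population elements.
frame = [f"household_{i}" for i in range(1, 1001)]

random.seed(42)                      # reproducible illustration
sample = random.sample(frame, k=50)  # every household has an equal chance
print(sample[:5])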
There are 4 types of non-probability sampling, which explain the purpose of this sampling method in a better manner:
● Convenience sampling: the sample is formed from elements that are easiest for the researcher to access, for example people who happen to be available at a particular place and time.
● Judgmental or purposive sampling: the sample is formed at the discretion of the researcher, based on the purpose of the study. For example, a researcher may want to understand the thought process of people who are interested in studying for their master's degree. The selection criterion will be: "Are you interested in studying for Masters in …?" and those who respond with a "No" will be excluded from the sample.
● Snowball sampling: Snowball sampling is a sampling method that is used in
studies which need to be carried out to understand subjects which are difficult
to trace. For example, it will be extremely challenging to survey shelterless
people or illegal immigrants. In such cases, using the snowball theory,
researchers can track a few of that particular category to interview and results
will be derived on that basis. This sampling method is implemented in
situations where the topic is highly sensitive and not openly discussed, such as conducting surveys to gather information about HIV/AIDS. Not many victims will
readily respond to the questions but researchers can contact people they might
know or volunteers associated with the cause to get in touch with the victims
and collect information.
● Quota sampling: In quota sampling, selection of members happens on the basis of a pre-set standard. In this case, as a sample is formed on the basis of specific attributes, the created sample will have the same attributes that are found in the total population. It is an extremely quick method of collecting samples.
Time Taken:
● Probability sampling takes a longer time to conduct, since the research design defines the selection parameters in advance.
● Non-probability sampling is quick, since neither the sample nor the selection criteria are defined in advance.
Secondary Data
Secondary data are data that are taken from research works already done by somebody else and used for the purposes of the present research. The reason why secondary data are being increasingly
used in research is that published statistics are now available covering diverse fields so that an
investigator finds required data readily available to him in many cases. For certain studies like stock
price behavior, interest and exchange rate scenario, etc. only secondary data are used.
There are two broad categories of secondary data – internal secondary data and external secondary
data.
1. Internal secondary data: Internal (secondary) data refers to information that already exists within the
company in which the research problem arises. For instance, in many companies, salesmen routinely
record and report their sales. Examples of secondary data include records of sales, budgets, advertising
and promotion expenditures, previous marketing research studies and similar reports. Use of such
secondary data can help the marketing manager analyse the effect of the different elements of the
marketing mix, develop a marketing plan, make budget and sales territory allocations, and, in general,
help in managerial decision making.
2. External secondary data: External (secondary) data refers to information which is collected by a source
external to the firm (whose major purpose is not the solution of the particular research problem facing the
firm). There are three major categories of external data:
1. Government sources and publications
2. Business reference sources
3. Commercial agencies
1. Personal Documents
These documents are recorded by individuals. An individual may record his views and thoughts about various problems without knowing that these documents may, at a later date, become a subject or source of study.
Personal documents may be categorized or divided under the following heads for the convenience of
the study:
1. Life History: Life history, generally speaking, contains all kinds of biographical material. From the point of view of personal documents, only an autobiography which contains descriptions of and views about social and personal events is a life history. It may be further classified under the following three sub-heads: spontaneous autobiography, voluntary autobiography or self-record, and compiled life history.
2. Diaries: Many people keep diaries in which they record the daily events of their life and their feelings and reactions relating to those events. Some of these diaries are also published later on. Diaries are the most important source for knowing the life history of a person, provided they have been written continuously over long periods.
3. Letters: Letters also provide useful and reliable material on many social problems. They throw light upon
more intimate aspects of an event, and clarify the stand taken by a person regarding it. They are helpful
in giving an idea of the attitudes of a person and the trend of his mind. The validity of letters is beyond all
doubt and they should be accepted as prima facie proof of the attitude of the writer. In such social
problems as love, marriage or divorce the letters can supply much revealing information.
4. Memoirs: Some people write memoirs of their travels, important events of their life and other significant phenomena that they come across. These memoirs provide useful material for the study of many a social phenomenon. Memoirs are different from diaries in the sense that they describe only some events and are more elaborate than the diary. Memoirs of travellers have provided us with useful information regarding the language, social customs, religious faiths, culture and many other social aspects of the people they visited.
2. Public Documents
Public documents are quite different from personal documents. They deal with matters of wider public interest. Public documents may be divided into the following two categories:
1. Unpublished Records: Unpublished records give matters of public interest not available to people in
published form. Everybody cannot have access to them. Proceedings of meetings, notings on files, memoranda, etc., form the category of unpublished records. These records are said to be reliable: since there is no fear of their being made public, the writers give out their views clearly.
2. Published Records: Published records are available to people for investigation and perusal. Reports of surveys and enquiries and other such documents fall under this category.
The data contained in these documents are considered by some people to be quite reliable, because the collecting agency knows that it will be difficult to test them. Others are of the view that, if the data are to be published, the collecting or publishing agency does some window dressing, as a result of which the accuracy is sometimes compromised.
Now most of the information that is available to people and researchers in regard to social problems is to be found in the form of reports. The reports published by the Government are considered more
dependable. On the other hand some people think that the reports that are published by certain
individuals and agencies are more dependable and reliable.
● Journals and Magazines: Journals and magazines are important public documents containing a wide variety of information which can be usefully utilized in social research. Most of this information is quite reliable. Letters to the editors published in various magazines and journals are an important source of information.
● Newspapers: Newspapers publish news, discussion on contemporary issues, reports of meetings and
conferences, essays and articles on living controversies and the letters of the readers to the editors. All
this is an important source of information for different kinds of social research.
● Other Sources: Besides the above mentioned public documents, film, television, radio and public
speeches etc., are other important sources of information. They supply useful information about contemporary issues. The investigator, however, should be capable of sorting out the reliable material and distinguishing it from the unreliable material advanced by these sources.
Advantages of using available documents as secondary data include:
1. Provides an insight into the total situation: The purpose of using available materials is to explore the nature of the data and the subject in order to get an insight into the total situation. While looking for the data he requires, the researcher may uncover many more available data than are often assumed to exist, which contributes significantly to the unfolding of hidden information.
2. Helps in the formulation of hypotheses: The use of documentary sources sometimes helps in the formulation of research hypotheses. While an investigator may have one or two hypotheses which he
might have deduced from theory, the study of available materials may suggest further hypotheses. If a
research idea or hypotheses can be formulated in such a manner that the available recorded material
bears on the question, the use of such material becomes possible.
3. Helps in testing the hypotheses: The available records may also help in testing the hypotheses.
4. Provides supplementary information: Available documents may be used to supplement or to check
information gathered specifically for the purposes of a given investigation. For example, if one has drawn a random sample of a small group in order to interview individuals, the accuracy of one's sample could
be checked by comparing socio-economic data of the sample, like income, education standard, caste,
family size etc., with the same data of the most recent census or with available data in local Government
offices.
Limitations of secondary data include:
1. Collected for a specific purpose: Data are often collected with a specific purpose in mind, a purpose
that may produce deliberate or unintentional bias. Thus, secondary sources must be evaluated carefully.
The fact that secondary data were collected originally for particular purposes may produce other
problems. Category definitions, particular measures or treatment effects may not be the most appropriate
for the purpose at hand.
2. Old data: Secondary data are, by definition, old data. Thus, the data may not be particularly timely for some purposes.
3. Aggregation of data in Inappropriate Unit: Seldom are secondary data available at the individual
observation level. This means that the data are aggregated in some form, and the unit of aggregation
may be inappropriate for a particular purpose.
4. Authenticity: The authenticity of some secondary sources of data is doubtful.
5. Context change: Secondary data refer to a given situation. As situations change, the data lose their
contextual validity.