Business Research Notes

This document discusses the meaning, objectives, and scope of business research. It defines research as a systematic search for knowledge involving careful investigation and collection of new information. The main objectives of research are to gain insights, describe characteristics, determine frequencies, and test hypotheses. Business research can help with production, personnel, marketing, financial, and general management. It assists decision-making by lowering uncertainty and risk. The document also distinguishes between exploratory research, which explores problems without definitive solutions, descriptive research, which describes characteristics, and explanatory research, which aims to explain relationships between variables.


Mr. Abhishek Sharma
Contact : 7859826868

Introduction

1. Meaning of Research
Research is a scientific and systematic search for information about a specific topic. It is, in essence, a search for truth and knowledge. The English dictionary meaning of research is "a careful
investigation or inquiry, especially through search for new facts in any branch of knowledge."
In research work, information about a subject is collected through deliberate effort and, after thorough analysis, presented in a new form.

Research is an academic activity. It is a movement from the known to the unknown, which may be
called a discovery. Different definitions of research have been given by experts.

According to Redman and Mory, “Research is a systematized effort to gain new knowledge.”

D. Slesinger and M. Stephenson define research as "the manipulation of things, concepts or
symbols for the purpose of generalizing to extend, correct or verify knowledge, whether that
knowledge aids in the construction of theory or in the practice of an art."

According to P.M. Cook, “Research is an honest, exhaustive, intelligent searching for facts and their
meanings or implications with reference to a given problem.”

J. Francis Rummel defines, "Research is an endeavour to discover, develop and verify knowledge."

Clifford Woody defines, "Research is a careful enquiry or examination in seeking facts or principles; a
diligent investigation to ascertain something."

Objectives

The main purpose of research is to discover answers to meaningful questions through scientific
procedures and systematic effort. Hidden truths which have not yet been discovered can be brought to
light by research.

The main objectives of Research are

1. To gain familiarity with a phenomenon or to achieve new insights into it. This is known as
Exploratory or Formulative Research.
2. To describe accurately the characteristics of a particular individual, situation or group. This is
known as Descriptive Research.
3. To determine the frequency with which something occurs or with which it is associated with
other things. This is known as Diagnostic Research.
4. To test a hypothesis of a causal relationship between variables. Such studies are known as
Hypothesis-testing Research studies.

Characteristics of Research

1. Research is directed towards the solution of a problem.


2. Research gathers new knowledge or data from primary sources.
3. Research is based upon observable experience or experimental evidence.
4. Research is logical and objective, applying every possible test to verify the data collected and
the procedures employed.
5. Research is expert, systematic and accurate investigation.
6. Research demands accurate observation and description.
7. Research requires patience and courage. The researcher should courageously face the
unpleasant consequences of his findings, if any.
8. Research is highly purposive. It deals with a significant problem which must be solved.
9. Research is carefully recorded and reported. Everything must be carefully defined and
described in detail.
10. Research activity is characterized by carefully designed procedures which are to be analyzed
thoroughly.

Research Methods

All the methods used by the researcher during the course of studying his research problem are called
Research Methods. Methods of research may be classified from different points of view.

These are:

1. The field to which they are applied: Education, Philosophy, Psychology.
2. Purpose: description, prediction, determination of status and causes.
3. Place where the research is conducted: in the field or in the laboratory.
4. Application: pure research or applied research.
5. Data-gathering devices employed: testing, rating scales, questionnaires, etc.
6. Character of the data collected: objective, subjective, quantitative, and qualitative.
7. Forms of thinking: deductive and inductive.
8. Control of factors: controlled and uncontrolled.

2. Scope of Business Research


Business Research is described as the systematic and objective procedure for producing information
for help in making business decisions. Business research should be objective, which means that the
information found needs to be detached and impersonal instead of biased. Research facilitates the
managerial decision process for all aspects of a business.
By lowering the uncertainty of decisions, it cuts down on the risk of making incorrect decisions.
Research should be an aid to managerial judgment but not a replacement for it.

Scope of Business Research Includes the Following Areas

(i) Production Management: Research performs an important function in product development,
diversification, introduction of a new product, product improvement, process technologies, choosing a site,
new investment, etc.

(ii) Personnel Management: Research works well for job redesign, organization restructuring,
development of motivational strategies and organizational development.

(iii) Marketing Management: Research performs an important part in the choice and size of the target market,
and in understanding consumer behaviour with regard to attitudes, lifestyle, and influences of the target market. It is the
primary tool in determining price policy, selection of channels of distribution and development of sales
strategies, product mix, promotional strategies, etc.

(iv) Financial Management: Research can be useful for portfolio management, distribution of
dividends, capital raising, hedging and looking after fluctuations in foreign currency and product cycles.

(v) Materials Management: It is utilized in choosing suppliers, making decisions relevant to
make or buy, as well as in selecting negotiation strategies.

(vi) General Management: It contributes greatly to developing standards, objectives, long-term
goals, and growth strategies.

To perform well in a complex environment, you will have to be equipped with an understanding of
scientific methods and a way of integrating them into decision making. You will have to understand
what good research means and how to conduct it. As the complexity of the business environment
has increased, there has been a commensurate rise in the number and power of the instruments used to carry out
research. There is certainly more knowledge in all areas of management.

We have now started to develop much better theories. The computer has given us a quantum leap
in the capability to handle difficult problems. New techniques of quantitative analysis utilize this power.
Communication and measurement techniques have also been improved. These developments
reinforce each other and are having a substantial impact on business management.

Business research helps decision makers shift from intuitive information gathering to organized and
objective study. Even though researchers in different functional fields may examine different
phenomena, they are comparable to each other simply because they make use of similar research
techniques. Research is the fountain of knowledge for the sake of knowledge and it is a crucial source
of providing guidelines for solving various business issues. Thus, we can say that the scope of
business research is enormous.

3. Purpose of Research - Exploration, Description, Explanation


Purpose of Research
It can be hard to pin down the exact purpose of business research; it will always depend on the situation
and on the person conducting the study. Generally speaking, the purpose of business research is to ensure
future success. Whenever a person or a group enters the market, their aim is to earn considerable
profits. Well, almost all businesses want to earn money, right? Unless yours is a non-profit
organization, the primary purpose of researching the market is to generate more sales and income.

When you conduct business research, you will be gathering relevant information that you can use to
make your business better. For instance, in marketing research, you will be identifying your target
market and its needs. You have to offer something that the market needs or you will not be able to sell
anything! Never enter into a business unless you have everything planned out. Running a business
can get complicated, especially if you lack knowledge. If you conduct thorough business research,
you can learn the basics and put them to good use.

EXPLORATORY RESEARCH

Exploratory research, as the name implies, intends merely to explore the research questions and does
not intend to offer final and conclusive solutions to existing problems. This type of research is usually
conducted to study a problem that has not been clearly defined yet.

Conducted in order to determine the nature of the problem, exploratory research is not intended to
provide conclusive evidence, but helps us to have a better understanding of the problem. When
conducting exploratory research, the researcher ought to be willing to change his/her direction as a
result of revelation of new data and new insights.

Exploratory research design does not aim to provide the final and conclusive answers to the research
questions, but merely explores the research topic with varying levels of depth. It has been noted that
“exploratory research is the initial research, which forms the basis of more conclusive research. It can
even help in determining the research design, sampling methodology and data collection method”.
Exploratory research “tends to tackle new problems on which little or no previous research has been
done”. Unstructured interviews are the most popular primary data collection method with exploratory
studies.

Examples of Exploratory Research Design

● A study into the role of social networking sites as an effective marketing communication
channel
● An investigation into the ways of improvement of quality of customer services within hospitality
sector in London
● An assessment of the role of corporate social responsibility on consumer behaviour in
pharmaceutical industry in the USA

DESCRIPTIVE RESEARCH

Descriptive research focuses on throwing more light on current issues through a process of data
collection. Descriptive studies are used to describe the behavior of a sample population. In descriptive
research, only one variable (anything that has quantity or quality that varies) is required to conduct a
study. The three main purposes of descriptive research are describing, explaining and validating the
findings. For example, consider research conducted to find out whether top-level management leaders in the 21st
century possess the moral right to receive a huge sum of money from the company profit.

EXPLANATORY RESEARCH

Explanatory research or causal research is conducted to understand the impact of certain changes in
existing standard procedures. Conducting experiments is the most popular form of causal research.
For example, research conducted to understand the effect of rebranding on customer loyalty.

Comparative analysis

                         Exploratory Research       Descriptive Research       Explanatory Research

Research approach used   Unstructured               Structured                 Highly structured

Conducted through        Asking research questions  Asking research questions  Using research hypotheses

When is it conducted?    Early stages of            Later stages of            Later stages of
                         decision making            decision making            decision making

4. Unit of Analysis
Units of Analysis are the objects of study within a research project. In sociology, the most common
units of analysis are individuals, groups, social interactions, organizations and institutions, and social
and cultural artifacts. In many cases, a research project can require multiple units of analysis.

Identifying your units of analysis is an important part of the research process. Once you have identified
a research question, you will have to select your units of analysis as part of the process of deciding on
a research method and how you will operationalize that method. Let’s review the most common units
of analysis and why a researcher might choose to study them.

4.1. Individuals

Individuals are the most common units of analysis within sociological research. This is the case
because the core problem of sociology is understanding the relationships between individuals and
society, so we routinely turn to studies composed of individual people in order to refine our
understanding of the ties that bind individuals together into a society. Taken together, information about
individuals and their personal experiences can reveal patterns and trends that are common to society
or particular groups within it, and can provide insight into social problems and their solutions.

For example, researchers at the University of California-San Francisco found through interviews with
individual women who have had abortions that the vast majority of women do not ever regret the
choice to terminate the pregnancy. Their findings prove that a common right-wing argument against
access to abortion–that women will suffer undue emotional distress and regret if they have an
abortion–is based on myth rather than fact.

4.2. Organizations

Organizations differ from groups in that they are considered more formal and, well, organized ways of
collecting people together around specific goals and norms. Organizations take many forms, including
corporations, religious congregations and whole systems like the Catholic Church, judicial systems,
police departments, and social movements, for example.

Social scientists who study organizations might be interested in, for example, how corporations like
Apple, Amazon, and Walmart impact various aspects of social and economic life, like how we shop
and what we shop for, and what work conditions have become normal and/or problematic within the
U.S. labor market. Sociologists who study organizations might also be interested in comparing different
examples of similar organizations to reveal the nuanced ways in which they operate, and the values
and norms that shape those operations.

4.3. Groups

Sociologists are keenly interested in social ties and relationships, which means that they often study
groups of people, be they large or small. Groups can be anything from romantic couples to families, to
people who fall into particular racial or gender categories, to friend groups, to whole generations of
people (think Millennials and all the attention they get from social scientists). By studying groups
sociologists can reveal how social structure and forces affect whole categories of people on the basis
of race, class, or gender, for example.

Sociologists have done this in pursuit of understanding a wide range of social phenomena and
problems, for example a study showing that living in a racist place leads to Black people
having worse health outcomes than white people, or a study that examined the gender gap across
different nations to find out which are better or worse at advancing and protecting the rights of women
and girls.

4.4. Data series


Data may be grouped into four main types based on methods for collection: observational,
experimental, simulation, and derived. The type of research data you collect may affect the way
you manage that data. For example, data that is hard or impossible to replace (e.g. the recording
of an event at a specific time and place) requires extra backup procedures to reduce the risk of
data loss. Or, if you will need to combine data points from different sources, you will need to follow
best practices to prevent data corruption.

Observational Data
Observational data are captured through observation of a behavior or activity. It is collected using
methods such as human observation, open-ended surveys, or the use of an instrument or sensor
to monitor and record information -- such as the use of sensors to observe noise levels at the
Mpls/St Paul airport. Because observational data are captured in real time, it would be very difficult
or impossible to re-create if lost.

Experimental Data
Experimental data are collected through active intervention by the researcher to produce and
measure change or to create difference when a variable is altered. Experimental data typically
allow the researcher to determine a causal relationship and are typically projectable to a larger
population. This type of data is often reproducible, but reproducing it can be expensive.

Simulation Data
Simulation data are generated by imitating the operation of a real-world process or system over
time using computer test models, for example to predict weather conditions, economic conditions,
chemical reactions, or seismic activity. This method is used to try to determine what would, or
could, happen under certain conditions. The test model used is often as important as, or even more
important than, the data generated from the simulation.

Derived / Compiled Data


Derived data involves using existing data points, often from different data sources, to create new
data through some sort of transformation, such as an arithmetic formula or aggregation. For
example, combining area and population data from the Twin Cities metro area to create population
density data. While this type of data can usually be replaced if lost, it may be very time-consuming
(and possibly expensive) to do so.
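The derived-data example above can be sketched in a few lines; the figures used are illustrative placeholders, not actual Twin Cities census values:

```python
# Derived data: combine existing data points (area, population) into a new
# data point (density) via an arithmetic transformation.
# NOTE: the figures below are invented placeholders, not real census data.
metro = {"population": 3_000_000, "area_sq_mi": 8_000}

def population_density(population, area):
    """Return people per square mile."""
    return population / area

density = population_density(metro["population"], metro["area_sq_mi"])
print(density)  # 375.0
```

If the derived file were lost, it could be rebuilt from the source data, but only at the cost of re-running the transformation, which is the point the text makes about replaceability.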

5. Conception

The first step in the measurement process is to define the concepts we are studying. Researchers
generate concepts by generalizing from particular facts. Concepts are based on our experiences.
Concepts can be based on real phenomena and are a generalized idea of something of meaning.
Examples of concepts include common demographic measures: income, age, education level, number
of siblings.

We can measure concepts through direct and indirect observations:

● Direct Observation: We can measure someone's weight or height, and we can record the
color of their hair or eyes.
● Indirect Observation: We can use a questionnaire in which respondents provide answers to
our questions about gender, income, age, attitudes, and behaviors.

6. Constructs

Constructs are measured with multiple variables. Constructs exist at a higher level of
abstraction than concepts. Justice, Beauty, Happiness, and Health are all constructs.
Constructs are considered latent variables because they cannot be directly observed or
measured. Typical constructs in marketing research include Brand Loyalty, Purchase Intent,
and Customer Satisfaction. Constructs are the basis of working hypotheses.

Brand loyalty is a construct that marketing researchers study often. Brand loyalty can be
measured using a variety of measures:

● Number of items purchased in the past


● Monetary value of past purchases
● Frequency of past purchase occasions
● The likelihood of future purchases
● The likelihood of recommending the brand to a friend or family member
● The likelihood of switching to a competitive brand
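The measures listed above can be combined into a single construct score. A minimal sketch, assuming invented indicator values and a simple equal-weight scheme (not a standard marketing-research formula):

```python
# Sketch: combining multiple measured variables into one brand-loyalty
# construct score. The variable names, values, and equal-weight scheme
# are illustrative assumptions.
indicators = {
    "past_purchases": 0.8,        # each indicator normalized to 0..1
    "monetary_value": 0.6,
    "purchase_frequency": 0.7,
    "future_purchase_intent": 0.9,
    "recommend_likelihood": 0.8,
    "switching_resistance": 0.5,  # 1 - likelihood of switching brands
}

def loyalty_score(ind):
    """Equal-weight average of the indicator variables."""
    return sum(ind.values()) / len(ind)

print(round(loyalty_score(indicators), 2))  # 0.72
```

In practice each indicator would come from a measurement scale, and the weights would be chosen (or estimated) rather than assumed equal.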

An attribute is a single feature or dimension of a construct.

Measurement: Measurement is the assignment of numbers or symbols to phenomena.


Measurement requires a scale. A scale provides a range of values—a yardstick—that
corresponds to the presence of the properties of the concept under investigation. A scale
provides the rules that associate values on the scale to the concept we are studying.

7. Attributes
An attribute refers to the quality of a characteristic. The theory of attributes
deals with qualitative characteristics that are analysed by means of
quantitative measurements. The attribute therefore needs a slightly different
kind of statistical treatment from that applied to variables. Attributes refer to
characteristics of the item under study, like the habit of smoking or
drinking; so 'smoking' and 'drinking' are both examples of an attribute.
The researcher should note that the techniques involved require statistical knowledge and are used
to a wide extent in the theory of attributes.
In the theory of attributes, the researcher puts more emphasis on quality (rather than on quantity).
Since the statistical techniques deal with quantitative measurements, qualitative data is converted into
quantitative data in the theory of attributes.
There are certain representations that are made in the theory of attributes. The population in the theory
of attributes is divided into two classes, namely the negative class and the positive class. The positive
class signifies that the attribute is present in that particular item under study, and this class in the
theory of attributes is represented as A, B, C, etc. The negative class signifies that the attribute is not
present in that particular item under study, and this class in the theory of attributes is represented as α,
β, etc.

Combining the letters under consideration (such as AB) denotes the joint presence of the two
attributes.

This division of the data by the presence or absence of attributes is termed dichotomous classification. The number of
observations allocated to a class is known as its class frequency. Class
frequencies are denoted symbolically by bracketing the attribute terminology: (B), for example,
stands for the class frequency of the attribute B. Class frequencies also have orders: a class
defined by n attributes is a class of the nth order. For example, (AB) refers to a class frequency
of the second order in the theory of attributes.
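Counting class frequencies of this kind is easy to sketch; the sample data below and the attribute labels (A = smoking, B = drinking) are invented for illustration:

```python
# Sketch: class frequencies in the theory of attributes.
# Each item is recorded as possessing attribute A and/or B;
# True = positive class (A, B), False = negative class (alpha, beta).
# The sample data are made up for illustration.
sample = [
    {"A": True,  "B": True},
    {"A": True,  "B": False},
    {"A": False, "B": True},
    {"A": False, "B": False},
    {"A": True,  "B": True},
]

def freq(data, **conditions):
    """Class frequency: freq(data, A=True) -> (A); freq(data, A=True, B=True) -> (AB)."""
    return sum(all(row[k] == v for k, v in conditions.items()) for row in data)

print(freq(sample, A=True))           # (A), a first-order class frequency
print(freq(sample, A=True, B=True))   # (AB), a second-order class frequency
print(freq(sample, A=False))          # (alpha), the negative class of A
```

Note that (A) plus (alpha) always equals the total number of observations, since every item falls into exactly one of the two classes.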

8. Variables

Variables are measurements that are free to vary. Variables can be divided into independent variables
and dependent variables. A dependent variable changes in response to changes in the independent
variable or variables.

A variable can be transformed into a constant when the researcher decides to control the variable by
reducing its expression to a single value. Suppose a researcher is conducting a test of consumers’
taste preference for three brands of frozen pizza. There are a number of variables in this test:

(1) respondents' ratings of the taste of each brand of pizza,

(2) the manner in which each pizza is presented, including the type of plates and tablecloths used, and

(3) the manner in which each brand is prepared. To get an accurate measure of the first
variable, respondents' ratings of the taste of the three pizza brands, the researcher will hold the
second and third variables constant. By serving all three pizzas on the same kind of plates with the
table dressed in the same manner, preparing the pizzas in identical ways, and serving them at identical
temperatures, the researcher controls for these variables. In doing so, the researcher has removed, or
controlled for, the effect of the second and third variables on respondents' taste preferences.
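The pizza test can be sketched as data: the variable of interest varies by brand while the other variables are reduced to a single value across all trials. All values below are invented for illustration:

```python
# Sketch: controlling variables by reducing them to a single value.
# Presentation and serving conditions are constants; only brand and
# rating vary. The brands and ratings are invented.
CONSTANT_CONDITIONS = {"plate": "white ceramic", "serving_temp_c": 65}

trials = [
    {"brand": "Brand A", "rating": 7, **CONSTANT_CONDITIONS},
    {"brand": "Brand B", "rating": 5, **CONSTANT_CONDITIONS},
    {"brand": "Brand C", "rating": 8, **CONSTANT_CONDITIONS},
]

# Because plate and temperature take one value everywhere, differences
# in ratings cannot be attributed to them.
assert len({(t["plate"], t["serving_temp_c"]) for t in trials}) == 1

best = max(trials, key=lambda t: t["rating"])["brand"]
print(best)  # Brand C
```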

9. Hypothesis

A hypothesis (plural hypotheses) is a proposed explanation for a phenomenon. For a hypothesis to be


a scientific hypothesis, the scientific method requires that one can test it. Scientists generally base
scientific hypotheses on previous observations that cannot satisfactorily be explained with the
available scientific theories. Even though the words “hypothesis” and “theory” are often used
synonymously, a scientific hypothesis is not the same as a scientific theory. A working hypothesis is a
provisionally accepted hypothesis proposed for further research, in a process beginning with an
educated guess or thought.

A different meaning of the term hypothesis is used in formal logic, to denote the antecedent of a
proposition; thus in the proposition “If P, then Q”, P denotes the hypothesis (or antecedent); Q can be
called a consequent. P is the assumption in a (possibly counterfactual) What If question.

The adjective hypothetical, meaning “having the nature of a hypothesis”, or “being assumed to exist as
an immediate consequence of a hypothesis”, can refer to any of these meanings of the term
“hypothesis”.
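As a toy illustration of what it means for a hypothesis to be testable, the sketch below tests the hypothesis "the coin is fair" against invented data (60 heads in 100 flips) using an exact two-sided binomial p-value; the example is mine, not taken from the text above:

```python
import math

# Sketch: testing the hypothesis "the coin is fair" (p = 0.5).
# We observe 60 heads in 100 flips (invented data) and compute an
# exact two-sided binomial p-value in pure Python.

def binom_pmf(k, n, p=0.5):
    """Probability of exactly k successes in n trials."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def two_sided_p(heads, flips):
    """Sum the probabilities of all outcomes at least as extreme as observed."""
    observed = binom_pmf(heads, flips)
    return sum(binom_pmf(k, flips) for k in range(flips + 1)
               if binom_pmf(k, flips) <= observed)

p_value = two_sided_p(60, 100)
# Reject the fairness hypothesis at the 5% level only if p_value < 0.05.
print(round(p_value, 3))
```

Here the p-value is just above 0.05, so the (invented) evidence is suggestive but not quite enough to reject fairness at the conventional 5% level, a reminder that a hypothesis test gives a decision rule, not a proof.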

Research Process

1. Overview
Research Process involves identifying, locating, assessing, and analyzing the information you need to
support your research question, and then developing and expressing your ideas. These are the same
skills you need any time you write a report, proposal, or put together a presentation.

Library research involves the step-by-step process used to gather information in order to write your
paper, create a presentation, or complete a project. As you progress from one step to the next, it is
often necessary to rethink, revise, add additional material or even adjust your topic. Much will depend
on what you discover during your research.

The research process can be broken down into seven steps, making it more manageable and easier to
understand. This module will give you an idea of what’s involved at each step in order to give you a
better overall picture of where you are in your research, where you will be going, and what to expect at
each step.

2. Problem identification and definition


PROBLEM IDENTIFICATION AND DEFINITION

The first stage is to develop a clear and precise understanding of the research problem, to permit
effective conduct of the research process. It is very important to analyse the problems to conduct the
research effectively. In this scenario, a veteran market researcher wants to enter into the business of
operating a coffee shop and the problem is to identify the potential market and to find the appropriate
outlet and product mix for the products and services of the business. The determination of product line
and the price to be charged for the product is the identified problem. At the same time, the business is
also facing problems with the positioning of the shop in the relevant market.

3. Selection of basic research methods


Steps involved in Research Process in Research Methodology

At times, the first step determines the nature of the last step to be undertaken. If subsequent
procedures have not been taken into account in the early stages, serious difficulties may arise which
may even prevent the completion of the study. One should remember that the various steps involved in
a research process are not mutually exclusive; nor are they separate and distinct.

They do not necessarily follow each other in any specific order and the researcher has to be constantly
anticipating at each step in the research process the requirements of the subsequent steps. However,
the following order concerning various steps provides a useful procedural guideline regarding the
research process:

● Formulating the research problem.


● Extensive literature survey.
● Developing the hypothesis.
● Preparing the research design.
● Determining sample design.
● Collecting the data.
● Execution of the project.
● Analysis of data.
● Hypothesis testing.
● Generalizations and interpretation, and
● Preparation of the report or presentation of the results, i.e., formal write-up of conclusions
reached.
1. Formulating the research problem: There are two types of research problems, viz., those
which relate to states of nature and those which relate to relationships between variables. At
the very outset the researcher must single out the problem he wants to study, i.e., he must
decide the general area of interest or aspect of a subject-matter that he would like to inquire
into. Initially the problem may be stated in a broad general way and then the ambiguities, if any,
relating to the problem be resolved. Then, the feasibility of a particular solution has to be
considered before a working formulation of the problem can be set up. The formulation of a
general topic into a specific research problem thus constitutes the first step in a scientific
enquiry. Essentially two steps are involved in formulating the research problem, viz.,
understanding the problem thoroughly, and rephrasing the same into meaningful terms from an
analytical point of view.
2. Extensive literature survey: Once the problem is formulated, a brief summary of it should be
written down. It is compulsory for a research worker writing a thesis for a Ph.D. degree to write
a synopsis of the topic and submit it to the necessary Committee or the Research Board for
approval. At this juncture the researcher should undertake an extensive literature survey connected
with the problem.

For this purpose, the abstracting and indexing journals and published or unpublished bibliographies
are the first place to go to. Academic journals, conference proceedings, government reports, books,
etc., must be tapped depending on the nature of the problem. In this process, it should be remembered
that one source will lead to another. The earlier studies, if any, which are similar to the study in hand
should be carefully studied. A good library will be a great help to the researcher at this stage.

3. Development of working hypotheses: After the extensive literature survey, the researcher should
state in clear terms the working hypothesis or hypotheses. A working hypothesis is a tentative
assumption made in order to draw out and test its logical or empirical consequences. As such,
the manner in which research hypotheses are developed is particularly important, since they
provide the focal point for research.
4. Preparing the research design: The research problem having been formulated in clear cut
terms, the researcher will be required to prepare a research design, i.e., he will have to state
the conceptual structure within which research would be conducted. The preparation of such a
design facilitates research to be as efficient as possible yielding maximal information.

In other words, the function of research design is to provide for the collection of relevant evidence with
minimal expenditure of effort, time and money. But how all these can be achieved depends mainly on
the research purpose. Research purposes may be grouped into four categories, viz.,

● Exploration,
● Description,
● Diagnosis, and
● Experimentation
5. Determining sample design: All the items under consideration in any field of inquiry constitute
a ‘universe’ or ‘population’. A complete enumeration of all the items in the ‘population’ is known
as a census inquiry. It can be presumed that in such an inquiry, when all the items are covered,
no element of chance is left and the highest accuracy is obtained. But in practice this may not be
true.

Even the slightest element of bias in such an inquiry will get larger and larger as the number of
observations increases. Moreover, there is no way of checking the element of bias or its extent except
through a resurvey or the use of sample checks. Besides, this type of inquiry involves a great deal of
time, money and energy. Not only this, a census inquiry is not possible in practice under many
circumstances. For instance, blood testing is done only on a sample basis. Hence, quite often we select
only a few items from the universe for our study purposes. The items so selected constitute what is
technically called a sample.

The researcher must decide the way of selecting a sample or what is popularly known as the sample
design. In other words, a sample design is a definite plan determined before any data are actually
collected for obtaining a sample from a given population. Thus, the plan to select 12 of a city’s 200
drugstores in a certain way constitutes a sample design. Samples can be either probability samples or
non-probability samples.

With probability samples each element has a known probability of being included in the sample but the
non-probability samples do not allow the researcher to determine this probability. Probability samples
are those based on simple random sampling, systematic sampling, stratified sampling, cluster/area
sampling whereas non-probability samples are those based on convenience sampling, judgment
sampling and quota sampling techniques.
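The distinction can be sketched in a few lines of Python; the 12-of-200-drugstores frame echoes the example above, while the two-strata split is purely hypothetical:

```python
import random

# Sampling frame echoing the text's example: 200 drugstores, numbered 1..200
population = list(range(1, 201))
sample_size = 12

# Simple random sampling: each element has an equal, known chance of selection
simple_random = random.sample(population, sample_size)

# Systematic sampling: every k-th element after a random start
k = len(population) // sample_size          # k = 16 here
start = random.randrange(k)
systematic = population[start::k][:sample_size]

# Stratified sampling: divide the frame into strata and sample each one
# (a hypothetical 100/100 split, standing in for e.g. urban vs. suburban stores)
strata = [population[:100], population[100:]]
stratified = [item for stratum in strata
              for item in random.sample(stratum, sample_size // len(strata))]
```

Non-probability designs such as convenience or judgment sampling have no such known inclusion probabilities, which is why no analogous formula can be written for them.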

6. Collecting the data: In dealing with any real-life problem it is often found that data at hand are
inadequate, and hence it becomes necessary to collect data that are appropriate. There are
several ways of collecting the appropriate data, which differ considerably in terms of money
costs, time and other resources at the disposal of the researcher.

Primary data can be collected either through experiment or through survey. If the researcher conducts
an experiment, he observes some quantitative measurements, or the data, with the help of which he
examines the truth contained in his hypothesis.

7. Execution of the project: Execution of the project is a very important step in the research
process. If the execution of the project proceeds on correct lines, the data to be collected would
be adequate and dependable. The researcher should see that the project is executed in a
systematic manner and in time. If the survey is to be conducted by means of structured
questionnaires, data can be readily machine-processed. In such a situation, questions as well
as the possible answers may be coded. If the data are to be collected through interviewers,
arrangements should be made for proper selection and training of the interviewers.
8. Analysis of data: After the data have been collected, the researcher turns to the task of
analyzing them. The analysis of data requires a number of closely related operations such as
establishment of categories, the application of these categories to raw data through coding,
tabulation and then drawing statistical inferences. The unwieldy data should necessarily be
condensed into a few manageable groups and tables for further analysis. Thus, researcher
should classify the raw data into some purposeful and usable categories. Coding operation is
usually done at this stage through which the categories of data are transformed into symbols
that may be tabulated and counted.
9. Hypothesis-testing: After analyzing the data as stated above, the researcher is in a position to
test the hypotheses, if any, he had formulated earlier. Do the facts support the hypotheses, or do
they happen to be contrary? This is the usual question which should be answered while testing
hypotheses. Various tests, such as the chi-square test, t-test and F-test, have been developed by
statisticians for the purpose. The hypotheses may be tested through the use of one or more of
such tests, depending upon the nature and object of the research inquiry. Hypothesis-testing will
result in either accepting the hypothesis or rejecting it. If the researcher had no hypotheses to
start with, generalizations established on the basis of the data may be stated as hypotheses to be
tested by subsequent research in times to come.
10. Generalizations and interpretation: If a hypothesis is tested and upheld several times, it
may be possible for the researcher to arrive at generalization, i.e., to build a theory. As a matter
of fact, the real value of research lies in its ability to arrive at certain generalizations. If the
researcher had no hypothesis to start with, he might seek to explain his findings on the basis of
some theory. It is known as interpretation. The process of interpretation may quite often trigger
off new questions which in turn may lead to further researches.
11. Preparation of the report or the thesis: Finally, the researcher has to prepare the report of
what has been done.
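Steps 8 and 9 (classifying coded data into a table, then testing a hypothesis against it) can be illustrated with a hand-computed chi-square test of independence; the 2x2 counts below are invented for the sketch:

```python
# Coded survey responses cross-tabulated into a 2x2 contingency table
# (rows = two respondent groups, columns = two answer categories; invented data)
observed = [[30, 20],
            [20, 30]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
total = sum(row_totals)

# Expected count under independence: E_ij = (row total * column total) / grand total
expected = [[row_totals[i] * col_totals[j] / total for j in range(2)]
            for i in range(2)]

# Chi-square statistic: sum over cells of (O - E)^2 / E
chi_sq = sum((observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
             for i in range(2) for j in range(2))

# With 1 degree of freedom, the 5% critical value is about 3.841;
# here chi_sq = 4.0, so independence would be rejected at that level
```

In practice a library routine (e.g. a chi-square function from a statistics package) would replace the hand computation, but the arithmetic is exactly what such routines perform.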

3.1. Field Study


Basic research is research conducted to discover or refine fundamental knowledge. It is also called
pure research or fundamental research.

The basic purpose of this research is to expand knowledge.

Basic research can be descriptive, explanatory or exploratory; most often it is explanatory.

Basic research creates new ideas, principles and theories that are not immediately applied in
practical life. Later, however, this basic research feeds into applied research, where scientists draw
on it to put it to use in practical life.

Field Studies

Field studies involve collecting data outside of an experimental or lab setting. This type of data
collection is most often done in natural settings or environments and can be done in a variety of ways
for various disciplines. Field studies are known to be expensive and time-consuming; however, the
amount and diversity of the data collected can be invaluable.

Field studies collect original or unconventional data via face-to-face interviews, surveys, or direct
observation. This research technique is usually treated as an initial form of research because the data
collected are specific to the purpose for which they were gathered, and therefore may not generalize
to the wider public.

Methods of Field Research

Field research is typically conducted using five distinct methods. They are:

(i) Direct Observation

In this method, data are collected by observing subjects in a natural environment. The
behavior or outcome of a situation is not interfered with in any way by the researcher. The
advantage of direct observation is that it offers contextual data on people, situations, interactions and
the surroundings. This method of field research is widely used in a public setting or environment but
not in a private environment, as that raises an ethical dilemma.

(ii) Participant Observation

In this method of field research, the researcher is deeply involved in the research process, not just
purely as an observer, but also as a participant. This method too is conducted in a natural environment
but the only difference is the researcher gets involved in the discussions and can mould the direction of
the discussions. In this method, researchers live alongside the participants of the research, to make
them comfortable and willing to open up to in-depth discussions.

(iii) Ethnography

Ethnography is an expanded observation of social research and social perspective and the cultural
values of an entire social setting. In ethnography, entire communities are observed objectively. For
example, if a researcher would like to understand how an Amazon tribe lives their life and operates,
he/she may choose to observe them or live amongst them and silently observe their day-to-day
behavior.

(iv) Qualitative Interviews

Qualitative interviews are questions asked directly of the research subjects. The
qualitative interviews could be informal and conversational, semi-structured, standardized and
open-ended, or a mix of all three. This provides a wealth of data to the researcher that they
can sort through. It also helps collect relational data. This method of field research can use a mix of
one-on-one interviews, focus groups and text analysis.

(v) Case Study

A case study is an in-depth analysis of a person, situation or event. This method may look
difficult to operate; however, it is one of the simplest ways of conducting research, as it involves a
deep dive into the data collection methods and a thorough understanding and interpretation of the data.

Steps in Conducting Field Research

Due to the nature of field research, the magnitude of timelines and costs involved, field research can
be very tough to plan, implement and measure. Some basic steps in the management of field research
are:

1. Build the Right Team: To be able to conduct field research, having the right team is important.
The role of the researcher and any ancillary team members is very important, and defining the
tasks they have to carry out, with relevant milestones, is important. It is also important that
upper management is invested in the field research for its success.
2. Recruiting People for the Study: The success of the field research depends on the people
the study is being conducted on. Using sampling methods, it is important to select the
people who will be part of the study.
3. Data Collection Methodology: As discussed at length above, data collection methods for
field research are varied. They could be a mix of surveys, interviews, case studies and
observation. All these methods, and the milestones for each, have to be chalked out at the
outset. For example, in the case of a survey, it is important that the survey design is created
and tested even before the research begins.
4. Site Visit: A site visit is important to the success of the field research and it is always
conducted outside of traditional locations and in the actual natural environment of the
respondent/s. Hence, planning a site visit along with the methods of data collection is important.
5. Data Analysis: Analysis of the data that is collected is important to validate the premise of the
field research and decide the outcome of the field research.
6. Communicating Results: Once the data is analyzed, it is important to communicate the results
to the stakeholders of the research so that they can be actioned upon.

3.2. Laboratory study


Laboratory experiments take place in controlled environments and are the main method used in
the natural sciences such as Physics, Chemistry and Biology. There are numerous experiments
which have been designed to test numerous scientific theories about the temperatures at which
various substances freeze or melt, or how different chemicals react when they are combined under
certain conditions.

The logic of the experimental method is that it is a controlled environment which enables the
scientist to measure precisely the effects of independent variables on dependent variables, thus
establishing cause and effect relationships. This in turn enables them to make predictions about
how the dependent variable will act in the future.

For a general introduction to the key features of experiments and the experimental method
(including key terms such as hypothesis and dependent and independent variables) and some of
their advantages please see this post: experiments in sociology: an introduction.

The laboratory experiment is commonly used in psychology, where experiments are used to
measure the effects of sleep loss and alcohol on concentration and reaction time, as well as in some
more ethically dubious experiments designed to measure the effects of media violence on children
and the responses of people to authority figures.

However, they are less common in sociology, so this post draws on the example of Milgram’s
Obedience Experiment to illustrate the advantages and disadvantages of laboratory experiments in
sociology.

3.3. Survey method


A Survey is defined as a research method used for collecting data from a pre-defined
group of respondents to gain information and insights on various topics of interest.
Surveys have a variety of purposes and can be carried out in many ways depending on
the methodology chosen and the objectives to be achieved.

The data is usually obtained through the use of standardized procedures, whose
purpose is to ensure that each respondent answers the questions on a level
playing field, to avoid biased opinions that could influence the outcome of the research
or study. A survey involves asking people for information through a questionnaire,
which can be distributed on paper, although with the arrival of new technologies it is

more common to distribute them using digital media such as social networks, email, QR
codes or URLs.

Characteristics of a Survey
The need to observe or research facts about a situation leads us to conduct a survey.
As we mentioned at the beginning, a survey is a method of gathering information.

So, what do you need to conduct a survey?

First, a sample, also referred to as an audience, is needed; it should consist of
respondents with the required demographic characteristics who can
relevantly answer your survey questions and provide the best insights. The better the
quality of your survey sample, the better your response quality and the better your
insights.

Surveys come in many different forms and have a variety of purposes, but they have
common underlying characteristics. The basic characteristics of a survey are:

Sample and Sample Determination


A sample is a selection of respondents from a population in such a manner that the
sample represents the total population as closely as possible.

The characteristics of a survey sample are:

● Determining sample size: Once you have determined your sample, the total
number of individuals in that particular sample is the sample size. Selecting a
sample size depends on the end objective of your research study. The sample
should consist of respondents with the required demographic
characteristics who can relevantly answer your survey questions and provide
the best insights.
● Types of sampling: There are two essential types of sampling methods, they
are probability sampling and non-probability sampling. Although sampling is

conducted at the discretion of the researcher, the two methods used
are:

○ Probability sampling: Probability sampling is a sampling method


where the respondent is selected based on the theory of probability.
The major characteristic of this method is that each individual in a
population has an equal chance of being selected.
○ Non-probability sampling: Non-probability sampling is a sampling
method where the researcher selects a sample of respondents purely
on the basis of their own discretion or judgment. There is no predefined
selection method.
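The "determining sample size" point above is often made concrete with Cochran's formula for estimating a proportion. The formula itself is not given in the text, so this is a standard-formula sketch rather than the author's own method:

```python
import math

def cochran_sample_size(z: float = 1.96, p: float = 0.5, e: float = 0.05) -> int:
    """Cochran's formula: n = z^2 * p * (1 - p) / e^2
    z: z-score for the desired confidence level (1.96 for ~95%)
    p: expected proportion (0.5 is the most conservative choice)
    e: desired margin of error
    """
    return math.ceil(z * z * p * (1 - p) / (e * e))

# 385 respondents for a +/-5% margin of error at 95% confidence
n = cochran_sample_size()
```

Tightening the margin of error or raising the confidence level increases the required sample size, which is why the end objective of the study drives this choice.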

3.4. Observational method


Observational research (or field research) is a type of correlational (i.e.,
non-experimental) research in which a researcher observes ongoing behavior. There
are a variety of types of observational research, each of which has both strengths and
weaknesses. These types are organized below by the extent to which an experimenter
intrudes upon or controls the environment.
Observational research is particularly prevalent in the social sciences and in marketing.
It is a social research technique that involves the direct observation of phenomena in
their natural setting. This differentiates it from experimental research in which a
quasi-artificial environment is created to control for spurious factors, and where at least
one of the variables is manipulated as part of the experiment. It is typically divided into
naturalistic (or “nonparticipant”) observation, and participant observation. Case studies
and archival research are special types of observational research. Naturalistic (or
nonparticipant) observation has no intervention by a researcher. It is simply studying
behaviors that occur naturally in natural contexts, unlike the artificial environment of a
controlled laboratory setting. Importantly, in naturalistic observation, there is no attempt
to manipulate variables. It permits measuring what behavior is really like. However, its
typical limitations are an inability to explore the actual causes of behaviors
and the impossibility of determining whether a given observation is truly representative of what
normally occurs.
In participant observation, the researcher intervenes in the environment. Most
commonly, this refers to inserting himself/herself as a member of a group, aimed at
observing behavior that otherwise would not be accessible. Also, behaviors remain
relatively natural, thereby giving the measurements high external validity. Case Studies
are a type of observational research that involve a thorough descriptive analysis of a
single individual, group, or event. They can be designed along the lines of both

nonparticipant and participant observation. Both approaches create new data, while
archival research involves the analysis of data that already exist. A hypothesis is
generated and then tested by analyzing data that have already been collected. This is a
useful approach when one has access to large amounts of information collected over
long periods of time. Such databases are available, for example, in longitudinal
research that collects information from the same individuals over many years.
Special qualitative analysis software tools like ATLAS.ti help the researcher to catalog,
penetrate and analyze the data generated (or, in archival research, found) in a given
research project. All forms of observational or field research benefit extensively from
the special capabilities of a dedicated data analysis tool like ATLAS.ti.

4. Existing data based research


Many research questions can be answered quickly and efficiently using data or specimens that have
already been collected. There are three general approaches to using these existing resources.
Secondary data analysis is the use of existing data to investigate research questions other than the
main ones for which the data were originally gathered. Ancillary studies add one or more
measurements to a study, often in a subset of the participants, to answer a separate research
question. Systematic reviews combine the results of multiple previous studies of a given research
question, often including calculation of a summary estimate of effect that has greater precision than the
individual study estimates. Making creative use of existing data and specimens is a fast and effective
way for new investigators with limited resources to begin to answer important research questions, gain
valuable experience in a research area, and sometimes have a publishable finding in a short time
frame.

■ ADVANTAGES AND DISADVANTAGES

The main advantages of studies using existing data are speed and economy. A research question that
might otherwise require much time and money to investigate can sometimes be answered rapidly and
inexpensively.

Studies using existing data or specimens also have disadvantages. The selection of the population to
study, which data to collect, the quality of data gathered, and how variables were measured and
recorded are all predetermined. The existing data may have been collected from a population that is
not ideal (e.g., men only rather than men and women), the measurement approach may not be what
the investigator would prefer (history of hypertension, a dichotomous historical variable, in place of
actual blood pressure), and the quality of the data may be poor (frequent missing or incorrect values).
Important confounders and outcomes may not have been measured or recorded.

ANCILLARY STUDIES

Research using secondary data takes advantage of the fact that most of the data needed to answer a
research question are already available. In an ancillary study, the investigator adds one or several
measurements to an existing study to answer a different research question.

Ancillary studies have many of the advantages of secondary data analysis with fewer constraints.

They are both inexpensive and efficient, and the investigator can design a few key ancillary
measurements specifically to answer the research question. Ancillary studies can be added to any
type of study, including cross-sectional and case–control studies, but large prospective cohort studies
and randomized trials are particularly well suited.

Ancillary studies have the problem that the measurements may be most informative when added
before the study begins, and it may be difficult for an outsider to identify studies in the planning phase.
Even when a variable was not measured at baseline, however, a single measurement during or at the
end of a trial can produce useful information.

SYSTEMATIC REVIEWS

Systematic reviews identify a set of completed studies that address a particular research question,
and evaluate the results of these studies to arrive at conclusions about a body of research. In contrast
to other approaches to reviewing the literature, a systematic review uses a well-defined approach to
identify all relevant studies, display the characteristics and results of eligible studies, and, when
appropriate, calculate a summary estimate of the overall results. The statistical aspects of a
systematic review (calculating summary effect estimates and variance, statistical tests of
heterogeneity, and statistical estimates of publication bias) are called meta-analysis.

A systematic review can be a great opportunity for a new investigator. Although it takes a surprising
amount of time and effort, a systematic review generally does not require substantial financial or other
resources.
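The summary-estimate idea can be sketched with fixed-effect inverse-variance weighting, the simplest meta-analytic pooling rule; the three study results below are invented for illustration:

```python
import math

# Hypothetical effect estimates (e.g., log odds ratios) and their standard
# errors from three completed studies of the same research question
effects = [0.40, 0.25, 0.55]
std_errs = [0.20, 0.15, 0.30]

# Fixed-effect inverse-variance weighting: each study is weighted by 1/SE^2,
# so more precise studies contribute more to the pooled estimate
weights = [1 / se ** 2 for se in std_errs]
summary = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
summary_se = math.sqrt(1 / sum(weights))

# The pooled standard error is smaller than any individual study's,
# i.e., the summary estimate has greater precision
```

A full meta-analysis would add heterogeneity tests and publication-bias checks, as the text notes, but the precision gain comes from this weighting step.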

5. Longitudinal studies
Longitudinal studies employ continuous or repeated measures to follow particular individuals over
prolonged periods of time, often years or decades. They are generally observational in nature, with
quantitative and/or qualitative data being collected on any combination of exposures and outcomes,
without any external influence being applied. This study type is particularly useful for evaluating the
relationship between risk factors and the development of disease, and the outcomes of treatments
over different lengths of time. Similarly, because data is collected for given individuals within a
predefined group, appropriate statistical testing may be employed to analyze change over time for the
group as a whole, or for particular individuals.
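Analyzing change over time for the group as a whole can be sketched with a paired t statistic on repeated measures; the five baseline/follow-up pairs below are invented:

```python
import math

# Hypothetical repeated measures: the same five individuals scored
# at baseline and again at follow-up (e.g., a risk-factor score)
baseline = [5.1, 6.0, 4.8, 5.5, 6.2]
follow_up = [5.9, 6.4, 5.5, 6.1, 6.8]

# Within-individual change is the quantity of interest
diffs = [f - b for b, f in zip(baseline, follow_up)]
n = len(diffs)
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))

# Paired t statistic: mean change relative to its standard error
t_stat = mean_d / (sd_d / math.sqrt(n))

# With n - 1 = 4 degrees of freedom, |t| > 2.776 is significant at the 5% level
```

Pairing each person with themselves is what a cross-sectional snapshot cannot do; it removes between-individual variation from the comparison.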

In contrast, cross-sectional analysis is another study type that may analyze multiple variables at a
given instance, but provides no information with regards to the influence of time on the variables
measured—being static by its very nature. It is thus generally less valid for examining cause-and-effect
relationships. Nonetheless, cross-sectional studies require less time to be set up, and may be
considered for preliminary evaluations of association prior to embarking on cumbersome
longitudinal-type studies.

Longitudinal study designs

Longitudinal research may take numerous different forms. It is generally observational; however, it
may also be experimental. Some of these forms are briefly discussed below:

(i) Repeated cross-sectional studies where study participants are largely or entirely different on each
sampling occasion;
(ii) Prospective studies where the same participants are followed over a period of time. These may
include:

● Cohort panels wherein some or all individuals in a defined population with similar exposures or
outcomes are considered over time;
● Representative panels where data is regularly collected for a random sample of a population;
● Linked panels wherein data collected for other purposes is tapped and linked to form
individual-specific datasets.

(iii) Retrospective studies are designed after at least some participants have already experienced
events that are of relevance; with data for potential exposures in the identified cohort being collected
and examined retrospectively.

Advantages of Longitudinal Study

Longitudinal cohort studies, particularly when conducted prospectively in their pure form, offer
numerous benefits. These include:

(i) The ability to identify and relate events to particular exposures, and to further define these
exposures with regards to presence, timing and chronicity;

(ii) Establishing sequence of events;

(iii) Following change over time in particular individuals within the cohort;

(iv) Excluding recall bias in participants, by collecting data prospectively and prior to knowledge of a
possible subsequent event occurring, and;

(v) Ability to correct for the “cohort effect”—that is allowing for analysis of the individual time
components of cohort (range of birth dates), period (current time), and age (at point of
measurement)—and to account for the impact of each individually.

Disadvantages of Longitudinal Study

Numerous challenges are implicit in the study design, particularly by virtue of its occurring over
protracted time periods. We briefly consider these below:

(i) Incomplete and interrupted follow-up of individuals, and attrition with loss to follow-up over time; with
notable threats to the representative nature of the dynamic sample if potentially resulting from a
particular exposure or occurrence that is of relevance;

(ii) Difficulty in separation of the reciprocal impact of exposure and outcome, in view of the potentiation
of one by the other; and particularly wherein the induction period between exposure and occurrence is
prolonged;

(iii) The potential for inaccuracy in conclusion if adopting statistical techniques that fail to account for
the intra-individual correlation of measures, and;
(iv) Generally-increased temporal and financial demands associated with this approach.

Conclusions

Longitudinal methods may provide a more comprehensive approach to research that allows an
understanding of the degree and direction of change over time. One should carefully consider the cost
and time implications of embarking on such a project, whilst ensuring complete and proven clarity in
design and process, particularly in view of the protracted nature of such an endeavour; and noting the
peculiarities for consideration at the interpretation stage.

6. Panel studies
A study that provides longitudinal data on a group of people, households, employers, or other social
unit, termed ‘the panel’, about which information is collected over a period of months, years, or
decades. Two of the most common types of panel are age-cohorts, people within a common age-band,
and groups with some other date-specific common experience, such as people graduating from
university, having a first child, or migrating to another country in a given year or band of years. Another
type is the nationally representative cross-sectional sample of households or employers that is
interviewed at regular intervals over a period of years. Because data relate to the same social units,
change is measured more reliably than in regular cross-sectional studies, and sample sizes can be
correspondingly smaller (often under 500), while remaining nationally representative, as long as
non-response and sample attrition are kept within bounds. These are the key problems for panel
studies, as initial samples are eroded by deaths, migration, fatigue with the study, and other causes.
Another problem is that people become experienced interviewees, leading to response bias. For
example, they may report ‘no change’ since the previous interview, so as to avoid detailed questioning
on changes that have in fact occurred.

Data are usually collected through interview surveys with respondents in the panel, with other
informants (such as parents, doctors), with their spouses and other members of their household. With
the respondent’s permission, data from administrative records may be added, such as information from
educational or medical records, which are usually more precise than the respondent’s recollection. A
panel element is sometimes added to regular cross-sectional surveys, and rotating sample designs are
a hybrid between panel study and regular survey.

Advantages of Panel Studies

(a) If mini-samples of a given population are studied by single contacts and differences in the results
noted from one period to another, one cannot know whether these differences are due to differences in
the samples surveyed. If the sample surveyed during each period includes the same persons or groups,
as in the panel technique, the variations or shifts in the results may be attributed with certitude to a
real change in the phenomena studied.

For example, full effect of a campaign cannot be ascertained through sequence of polls taken on
different people. They show only majority changes.

They conceal minor changes which tend to cancel out one another and sometimes even major
changes if these are nullified by opposing trends. Most importantly, they neither indicate who is
changing nor do they follow the vagaries of the individual voter along the path of his vote, to discover
the relative effects of various other influential factors on his final voting verdict.

(b) Data secured from the same persons over a period of time, affording a detailed picture of the
factors involved in bringing about shifts in opinions or attitudes, can be secured for everyone in the
panel. An analysis of the charted profile of individuals in a panel may afford the researcher an insight
into the causal relationships.

(c) The information collected about each person from time to time tends to be deeper and more
voluminous than that obtained in single contacts. It is possible, despite certain limitations, to build up
an inclusive case history of each panel member.

(d) Provided, of course, that the group constituting the panel is cooperative, it may well be possible to
set up experimental situations which expose all members of the panel to a certain influence and thus
enable the effectiveness of this influence to be measured.

(e) It has been the experience of researchers that the members of a panel learn to open up and
unload their feelings in the course of repeated interviews, and so valuable comments and
elaborations of points made by them can be secured.

Whereas the first interview may elicit only ‘yes’ or ‘no’ responses from the respondents, the repeated
interviews or measurements spread over a continuum of time may elicit from them elaborate
responses in so far as they might have thought deeply about the problem after the first administration.
On first contact, the informants may be suspicious of the investigator and may have little familiarity with
the problem.

Limitations of the Panel Studies

The problems raised by the panel procedure are often sufficient to offset the gains attendant upon it.
We may briefly discuss the limitations of the panel technique.

(a) The loss of panel members presents a formidable problem for the researcher. People change their
locale, become ill, or die or are subjected to other influences which make it necessary for them to drop
out of the panel. Thus, the panel that was initially intended as a representative sample of the
population may subsequently become unrepresentative.

The losses in the membership of the panel may be occasioned by the loss of interest among the panel
members or a change in attitude toward the panel idea. Not infrequently, the enthusiasm of the panel
members dies down after the first or the second interview.

(b) Paul Lazarsfeld has pointed out that the members of a panel develop a ‘critical set’ and hence
cease to be representatives of the general public. The panel invariably has an educational effect.

It tends to dramatize and increase one's interest in otherwise unobserved elements and to heighten
one's awareness of things and events around him. Hence the mere fact of participation in the panel
may change a person's attitudes and opinions.

(c) Once the members of a panel have expressed an attitude or opinion, they tend to try to be
consistent and stick to it. Thus, panel members, as compared to the general public, are less likely to
change, and the panel may misrepresent the population.
(d) The detailed records are available for the most stationary elements of the population. Of course,
the mobile groups of a community belong to the panel for a shorter time. Panels composed of the
same persons for many years will gradually become panels of old people and eventually die out.

A panel study, however, is not always feasible. One of the difficulties is that the events or thoughts may
already be long past by the time the researcher begins. Moreover, memory is not always reliable, and
respondents may be inclined to 'construct' these past events not so much from their fading memories
as from a personalized theory about their past.

7. Questionnaire studies
The questionnaire is the most popular and widely used tool for collecting primary data. It suits almost
any kind of research problem. In today's marketing research activities, the questionnaire has become
an indispensable tool. It is used not only in the marketing field but in all types of social research projects.

Steps in Questionnaire Design

The task of composing a questionnaire may be considered more an art than a science. It needs a great
deal of experience, expertise, and creativity.

1. Determine the Data to be Collected.
2. Determine the Method to be Used for Data Collection.
3. Evaluate the Contents of the Questions.
4. Decide on the Type of Questions and Response Format.
5. Decide on the Wording of Questions.
6. Decide on the Questionnaire Structure or Physical Format.
7. Pretest, Review, and Prepare the Final Draft.

Types of Questionnaire

Questionnaires can be classified on the basis of several criteria, as stated below.

1. On the basis of structure and disguise, there are four types of questionnaire:

(i) Structured Undisguised

This type of questionnaire involves structured and undisguised questions, with responses limited to
certain options. Structured means that the answers to the questions are predetermined: respondents
have to select an answer from a given list. Undisguised means that the questions are asked directly,
without hiding their purpose, so respondents can tell what the researcher wants to know. For example,
given four products a, b, c, and d, customers are asked to select the most preferred product.

(ii) Unstructured Undisguised


Unstructured means free-form questions are asked; responses are not limited to certain answers only,
and respondents have full freedom to answer the question. In short, the answers are not decided in
advance. For example, a customer is asked to name the most preferred product in a particular category.

(iii) Structured Disguised

Structured means the answers to the questions are determined in advance. Disguised means an
indirect way of asking questions: customers do not know the exact purpose/intention of a question but
can answer it easily. For example: which of the following products is more harmful? Why?

(iv) Unstructured Disguised

Here, the response is not fixed. Respondents have full freedom to answer the question. Disguised
means something hidden. For example, which motorbike is more risky? Why?

2. On the basis of use/purpose of questionnaire, there are three types of questionnaire

(i) Questionnaire for Personal Interview

This questionnaire is prepared for administration during a personal interview. It may involve more
questions, including indirect questions that require explanation or clarification.

(ii) Questionnaire for Telephone Survey

This questionnaire is prepared to collect information via telephone. Obviously, such questionnaire
involves a limited number of short and simple questions.

(iii) Questionnaire for Mail Survey

This questionnaire is meant for the mail survey. It is sent to respondents with a request to return it duly
filled in. It also involves short and simple questions, though it may contain more of them than a
telephone questionnaire.

3. On the basis of administration, there can be two types of questionnaire:

(i) Interviewer-administered Questionnaire

When this questionnaire is administered, the presence of both interviewer and respondent is
essential. Questions are put to respondents one by one, and their responses are recorded either in
the same questionnaire or on a separate form. For example, a picture or cartoon is shown to
respondents, who are asked to describe it. Personal interviews and telephone surveys are based on
this type of questionnaire.
(ii) Self-administered Questionnaire

It is given to the respondent by the interviewer to fill in his/her answers, which is possible even in the
absence of the interviewer. This type of questionnaire is used for mail surveys.

4. On the basis of type of questions, there can be two types of questionnaire:

(i) Simple Questionnaire

This questionnaire involves a certain number of questions of a single type, for example only
dichotomous or only multiple-choice questions. It has limited utility.

(ii) Multiple Questionnaires

Obviously, it involves a variety of questions. Such a questionnaire consists of questions of different
categories and is the more popular form.

Measurement | Sampling | Hypothesis Testing

1. Measurement
Measurement is important in research. It aims to ascertain the dimension, quantity, or capacity of
the behaviours or events that researchers want to explore. According to Maxim (1999), measurement is
a process of mapping empirical phenomena onto a system of numbers.

Basically, the events or phenomena that interest researchers can be regarded as a domain.
Measurement links events in this domain to events in another space, called the range; in other words,
researchers measure certain events on a certain range, and the range consists of a scale. Thus,
researchers can interpret the data in quantitative terms, which leads to more accurate and
standardized outcomes. Without measurement, researchers cannot interpret data accurately and
systematically.
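Maxim's mapping idea can be sketched in a few lines of code. The document itself contains no code, so the response categories and the coding rule below are illustrative assumptions:

```python
# Sketch of measurement as a mapping from a domain of empirical events
# to a range of numbers, in the sense of the Maxim (1999) definition above.
# The response categories and number assignments are illustrative only.

def measure(event, coding_rule):
    """Map an empirical event (domain) to a number (range)."""
    return coding_rule[event]

# Coding rule: each observable response category gets one number.
coding_rule = {"agree": 1, "neutral": 2, "disagree": 3}

observations = ["agree", "disagree", "agree", "neutral"]
scores = [measure(e, coding_rule) for e in observations]
print(scores)  # [1, 3, 1, 2]
```

Once events are mapped onto a scale in this way, the resulting numbers can be summarized and compared systematically.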

Quantitative Measurements

Quantitative measurement is a numerical description of events or characteristics. For example, the
description "There are three birds in the nest" includes a numerical measurement of the birds.
Quantitative measurement enables researchers to make comparisons between events or
characteristics. For example, researchers who want to know who the tallest person in a family is can
measure everyone's height in centimetres and compare all the family members.

Qualitative Measurements

Qualitative measurements are ways of gaining a deeper understanding of a topic. Researchers who
are looking to find the meanings behind certain phenomenon or are investigating a new topic about
which little is known, use qualitative measures. Qualitative measures are often contrasted with
quantitative measures. Both are complex methods of research, however, qualitative measures typically
deal with textual data or words while quantitative measures analyze numerical data or statistics.

1.1. Definition

Measurement is the process of observing and recording the observations that are
collected as part of a research effort. There are two major issues that will be
considered here.
First, you have to understand the fundamental ideas involved in measuring. Here we
consider two major measurement concepts. In Levels of Measurement, I explain
the meaning of the four major levels of measurement: nominal, ordinal, interval and
ratio. Then we move on to the reliability of measurement, including consideration of
true score theory and a variety of reliability estimators.

Second, you have to understand the different types of measures that
you might use in social research. We consider four broad categories
of measurement. Survey research includes the design and
implementation of interviews and questionnaires. Scaling involves
consideration of the major methods of developing and implementing
a scale. Qualitative research provides an overview of the broad range
of non-numerical measurement approaches. And unobtrusive
measures present a variety of measurement methods that don't
intrude on or interfere with the context of the research.

1.2. Designing and writing items


Research design is defined as a framework of methods and techniques chosen by a
researcher to combine various components of research in a reasonably logical manner
so that the research problem is efficiently handled. It provides insights about “how” to
conduct research using a particular methodology. Every researcher has a list of
research questions which need to be assessed – this can be done with research
design.

The sketch of how research should be conducted can be prepared using research
design. Hence, the market research study will be carried out on the basis of research
design.

The design of a research topic is used to explain the type of research (experimental,
survey, correlational, semi-experimental, review) and also its sub-type (experimental
design, research problem, descriptive case-study). There are three main sections of
research design: Data collection, measurement, and analysis.

The type of research problem an organization is facing will determine the research
design, not vice versa. The variables, the tools designated to gather information, how
those tools will be used to collect and analyze data, and other factors are decided in
the research design on the basis of the chosen research technique.

An impactful research design usually creates minimal bias in the data and increases
trust in the collected and analyzed research information. The research design that
produces the least margin of error in experimental research can be touted as the best.
The essential elements of research design are:

1. Accurate purpose statement of research design


2. Techniques to be implemented for collecting details for research
3. Method applied for analyzing collected details
4. Type of research methodology
5. Probable objections for research
6. Settings for research study
7. Timeline
8. Measurement of analysis

Research Design Characteristics


There are four key characteristics of research design:
Neutrality: The results projected in the research design should be neutral and free
from bias. Obtain opinions about the final evaluated scores and conclusions from
multiple individuals, and consider those who agree with the derived results.

Reliability: If research is conducted repeatedly, the researcher involved expects
similar results to be obtained every time. The research design should indicate
how the research questions can be formed to ensure the standard of the obtained
results, and this can happen only when the research design is reliable.

Validity: There are multiple measuring tools available for research design but valid
measuring tools are those which help a researcher in gauging results according to the
objective of research and nothing else. The questionnaire developed from this research
design will be then valid.

Generalization: The outcome of research design should be applicable to a population


and not just a restricted sample. Generalization is one of the key characteristics of
research design.

Types of Research Design


A researcher must have a clear understanding of the various types of research design
to select which type of research design to implement for a study. Research design can
be broadly classified into quantitative and qualitative research design.

Qualitative Research Design: Qualitative research is implemented in cases where a
relationship between collected data and observation cannot be established on the basis of
mathematical calculations. Theories related to a naturally existing phenomenon are instead
explored through non-numerical inquiry. Researchers rely on a qualitative research design
where they are expected to conclude "why" a particular theory exists along with "what"
respondents have to say about it.

Quantitative Research Design: Quantitative research is implemented in cases where


it is important for a researcher to have statistical conclusions to collect actionable
insights. Numbers provide a better perspective to make important business decisions.
Quantitative research design is important for the growth of any organization because
any conclusion drawn on the basis of numbers and analysis will only prove to be
effective for the business.

Further, research design can be divided into five types –

1. Descriptive Research Design: In a descriptive research design, a researcher is
solely interested in describing the situation or case under his/her research study. It is a
theory-based research design created by gathering, analyzing, and presenting the
collected data. By implementing an in-depth research design such as this, a researcher
can provide insights into the why and how of research.

2. Experimental Research Design: Experimental research design is used to establish
a relationship between the cause and effect of a situation. It is a causal research design
in which the effect of the independent variable on the dependent variable is
observed. For example, the effect of an independent variable such as price on a
dependent variable such as customer satisfaction or brand loyalty is monitored. It is a
highly practical research design method, as it contributes towards solving a problem at
hand. The independent variables are manipulated to monitor the changes they produce
in the dependent variable. It is often used in the social sciences to observe human
behaviour by analyzing two groups and the effect of one group on the other.

3. Correlational Research Design: Correlational research is a non-experimental


research design technique which helps researchers to establish a relationship between
two closely connected variables. Two different groups are required to conduct this
research design method. There is no assumption while evaluating a relationship
between two different variables and statistical analysis techniques are used to calculate
the relationship between them.

Correlation between two variables is concluded using a correlation coefficient, whose
value ranges between -1 and +1. A correlation coefficient towards +1 indicates a
positive relationship between the variables, and one towards -1 indicates a negative
relationship between the two variables.
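As a sketch of how the coefficient behaves, Pearson's formula can be computed in plain Python. The paired data below (advertising spend vs. sales) are invented for illustration:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient; its value lies between -1 and +1."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired observations: advertising spend and sales per period.
ads = [1, 2, 3, 4, 5, 6]
sales = [2, 4, 5, 4, 6, 7]
r = pearson_r(ads, sales)
print(round(r, 3))  # close to +1: a positive relationship
```

A value near +1 here reflects that sales tend to rise with advertising spend in this invented sample; real studies would use a statistical package and a significance test.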

4. Diagnostic Research Design: In the diagnostic research design, a researcher is


inclined towards evaluating the root cause of a specific topic. Elements that contribute
towards a troublesome situation are evaluated in this research design method.

There are three parts of diagnostic research design:

● Inception of the issue


● Diagnosis of the issue
● Solution for the issue

5. Explanatory Research Design: In an explanatory research design, the researcher's
ideas and thoughts are key, as it is primarily dependent on their personal inclination
about a particular topic. Explanations of unexplored aspects of a subject are provided,
along with details about the what, how, and why of the research questions.

How to Write a Research Methodology
In your thesis or dissertation, you will have to discuss the methods you used to do your research.

The methodology or methods section explains what you did and how you did it, allowing readers to

evaluate the reliability and validity of the research. It should include:

● The type of research you did

● How you collected your data

● How you analyzed your data

● Any tools or materials you used in the research

● Your rationale for choosing these methods



● Step 1: Explain your methodological approach


Begin by introducing your overall approach to the research. What research problem

or question did you investigate, and what kind of data did you need to answer it?
○ Quantitative methods (e.g. surveys) are best for measuring, ranking, categorizing,

identifying patterns and making generalizations

○ Qualitative methods (e.g. interviews) are best for describing, interpreting,

contextualizing, and gaining in-depth insight into specific concepts or phenomena

○ Mixed methods allow for a combination of numerical measurement and in-depth

exploration

● Depending on your discipline and approach, you might also begin with a discussion

of the rationale and assumptions underpinning your methodology.


○ Was your aim to address a practical or a theoretical research problem?

○ Why is this the most suitable approach to answering your research questions?

○ Is this a standard methodology in your field or does it require justification?

○ Were there any ethical or philosophical considerations?

○ What are the criteria for validity and reliability in this type of research?

● In a quantitative experimental study, you might aim to produce generalizable

knowledge about the causes of a phenomenon. Valid research requires a carefully

designed study with a representative sample and controlled variables that can be

replicated by other researchers.

In a qualitative ethnographic case study, you might aim to produce contextual

real-world knowledge about the behaviors, social structures and shared beliefs of a

specific group of people. As this methodology is less controlled and more

interpretive, you will need to reflect on your position as researcher, taking into

account how your participation and perception might have influenced the results.

Step 2: Describe your methods of data collection


Once you have introduced your overall methodological approach, you should give

full details of the methods you used to conduct the research. Outline the tools,

procedures and materials you used to gather data, and the criteria you used to

select participants or sources.

Quantitative methods

Surveys

Describe where, when and how the survey was conducted.


○ How did you design the questions and what form did they take (e.g. multiple choice,

rating scale)?

○ What sampling method did you use to select participants?

○ Did you conduct surveys by phone, mail, online or in person, and how long did

participants have to respond?

○ What was the sample size and response rate?

● You might want to include the full questionnaire as an appendix so that your reader

can see exactly what data was collected.

Experiments

Give full details of the tools, techniques and procedures you used to conduct the

experiment.
○ How did you design the experiment?

○ How did you recruit participants?

○ How did you manipulate and measure the variables?

○ What tools or technologies did you use in the experiment?

● In experimental research, it is especially important to give enough detail for another

researcher to reproduce your results.

Existing data

Explain how you gathered and selected material (such as publications or archival

data) for inclusion in your analysis.


○ Where did you source the material?

○ How was the data originally produced?

○ What criteria did you use to select material (e.g. date range)?

● Quantitative methods example

The survey consisted of 5 multiple-choice questions and 10 questions that the

respondents had to answer with a 7-point Likert scale. The aim was to conduct the

survey with 350 customers of Company X on the company premises in The Hague

from 4-8 July 2017 between 11:00 and 15:00. A customer was defined as a person

who had purchased a product from Company X on the day of questioning.

Participants were given 5 minutes to fill in the survey anonymously, and 408

customers responded. Because not all surveys were fully completed, 371 survey

results were included in the analysis.
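The counts in this example translate into simple rates. A minimal sketch using only the figures quoted above:

```python
# Figures from the Company X survey example above.
responded = 408   # customers who filled in the survey
completed = 371   # fully completed surveys included in the analysis

completion_rate = completed / responded
print(f"completion rate: {completion_rate:.1%}")  # about 90.9%
```

Reporting such rates alongside the sample size lets readers judge how much non-response might bias the results.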

Qualitative methods

Interviews or focus groups

Describe where, when and how the interviews were conducted.


○ How did you find and select participants?

○ How many people took part?

○ What form did the interviews take (structured, semi-structured, unstructured)?

○ How long were the interviews and how were they recorded?

● Participant observation

Describe where, when and how you conducted the observation.


○ What group or community did you observe and how did you gain access to them?

○ How long did you spend conducting the research and where was it located?

○ How did you record your data (e.g. audiovisual recordings, note-taking)?

● Existing data

Explain how you selected case study materials (such as texts or images) for the

focus of your analysis.


○ What type of materials did you analyze?

○ How did you collect and select them?

● Qualitative methods example

In order to gain a better insight into the possibilities for improvement of the product

range, semi-structured interviews were conducted with 8 returning customers from

the main target group of Company X. A returning customer was defined as someone

who usually bought products at least twice a week from Company X. The surveys

were used to select participants who belonged to the target group (20-45 years old).

Interviews were conducted in a small office next to the cash register, and lasted

approximately 20 minutes each. Answers were recorded by note-taking, and seven

interviews were also filmed with consent. One interviewee preferred not to be filmed.

1.3. Uni-dimensional and Multidimensional scales


“Unidimensionality” is used to describe a specific type of measurement scale. A unidimensional
measurement scale has only one (“uni”) dimension. In other words, it can be represented by a
single number line. Some examples of simple, unidimensional scales:

● Height of people.
● Weight of cars.
● IQ.
● Volume of liquid.

Unidimensionality can also refer to measuring a single ability, attribute, construct, or skill. For
example, a unidimensional mathematical test would be designed to measure only mathematical ability
(and not, say, grasp of English grammar, knowledge of sports, or other non-mathematical subjects or
concepts).

Some concepts (like height or weight) are obviously unidimensional. Others can be forced into a
unidimensional status by narrowing the idea into a single, measurable construct. For example,
self-worth is a psychological concept that has many layers of complexity and can be different for
different situations (at home, at a party, at work, at your wedding). However, you can narrow the
concept by making a simple line that has “low self worth” on the left and “high self worth” on the right.

Multidimensional scaling is a visual representation of distances or dissimilarities between sets of


objects. “Objects” can be colors, faces, map coordinates, political persuasion, or any kind of real or
conceptual stimuli (Kruskal and Wish, 1978). Objects that are more similar (or have shorter distances)
are closer together on the graph than objects that are less similar (or have longer distances). As well
as interpreting dissimilarities as distances on a graph, MDS can also serve as a dimension reduction
technique for high-dimensional data (Buja et al., 2007).

The term scaling comes from psychometrics, where abstract concepts (“objects”) are assigned
numbers according to a rule (Trochim, 2006). For example, you may want to quantify a person’s
attitude to global warming. You could assign a “1” to “doesn’t believe in global warming”, a 10 to “firmly
believes in global warming” and a scale of 2 to 9 for attitudes in between. You can also think of
“scaling” as the fact that you’re essentially scaling down the data (i.e. making it simpler by creating
lower-dimensional data). Data that is scaled down in dimension keeps similar properties. For example,
two data points that are close together in high-dimensional space will also be close together in
low-dimensional space (Martinez, 2005). The “multidimensional” part is due to the fact that you aren’t
limited to two dimensional graphs or data. Three-dimensional, four-dimensional and higher plots are
possible.

2. Measurement Scales
A measurement scale, in statistical analysis, is the type of information provided by numbers. Each of
the four scales (i.e., nominal, ordinal, interval, and ratio) provides a different type of information.
Measurement refers to the assignment of numbers in a meaningful way, and understanding
measurement scales is important for interpreting the numbers assigned to people, objects, and events.

2.1. Nominal
Nominal Scales

In nominal scales, numbers, such as driver’s license numbers and product serial numbers, are used to
name or identify people, objects, or events. Gender is an example of a nominal measurement in which
a number (e.g., 1) is used to label one gender, such as males, and a different number (e.g., 2) is used
for the other gender, females. Numbers do not mean that one gender is better or worse than the other;
they simply are used to classify persons. In fact, any other numbers could be used, because they do
not represent an amount or a quality. It is impossible to use word names with certain statistical
techniques, but numerals can be used in coding systems. For example, fire departments may wish to
examine the relationship between gender (where male = 1, female = 2) and performance on
physical-ability tests (with numerical scores indicating ability).
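A short sketch of why nominal codes only classify: with the male = 1 / female = 2 coding from the text, the most frequent category (the mode) is meaningful, while an arithmetic mean of the codes would not be. The sample data are invented:

```python
from statistics import mode

# Nominal coding from the text: male = 1, female = 2 (labels, not amounts).
genders = [1, 2, 2, 1, 2, 2]

# The mode is a valid nominal-scale statistic: it reports the most frequent
# category. Averaging the codes (e.g. "1.67") would be meaningless here.
print(mode(genders))  # 2, i.e. the "female" category occurs most often
```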

2.2. Ordinal
Ordinal Scales

In ordinal scales, numbers represent rank order and indicate the order of quality or quantity, but they
do not provide an amount of quantity or degree of quality. Usually, the number 1 means that the person
(or object or event) is better than the person labeled 2; person 2 is better than person 3, and so forth.
For example, persons may be rank-ordered in terms of potential for promotion, with the person
assigned the 1 rating having more potential than the person assigned a rating of 2. Such ordinal
scaling does not, however, indicate how much more potential the first person has over the person
rated 2; there may be very little difference between 1 and 2 here. When ordinal measurement is used
(rather than interval measurement), certain statistical techniques are applicable (e.g., Spearman's rank
correlation).
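Spearman's rank correlation, mentioned above as an ordinal-scale technique, can be sketched in plain Python using the difference-of-ranks formula. The ratings below are invented and assumed to contain no ties:

```python
def ranks(values):
    """Rank values from 1 (smallest) upward; no tie handling in this sketch."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho(x, y):
    """Spearman's rho: 1 - 6*sum(d^2) / (n*(n^2 - 1)) for untied data."""
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(x), ranks(y)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical promotion-potential ratings of five employees by two assessors.
assessor_a = [1, 2, 3, 4, 5]
assessor_b = [2, 1, 4, 3, 5]
print(spearman_rho(assessor_a, assessor_b))  # 0.8: strong agreement in rank order
```

Because only the ranks enter the formula, the statistic is appropriate when the underlying numbers carry order but not distance.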

2.3. Interval
Interval Scale
In interval scales, numbers form a continuum and provide information about the amount of difference,
but the scale lacks a true zero. The differences between adjacent numbers are equal or known. If zero
is used, it simply serves as a reference point on the scale but does not indicate the complete absence
of the characteristic being measured. The Fahrenheit and Celsius temperature scales are examples of
interval measurement. In those scales, 0 °F and 0 °C do not indicate an absence of temperature.

2.4. Ratio
Ratio Scales

Ratio scales have all of the characteristics of interval scales as well as a true zero, which refers to
complete absence of the characteristic being measured. Physical characteristics of persons and
objects can be measured with ratio scales, and, thus, height and weight are examples of ratio
measurement. A score of 0 means there is complete absence of height or weight. A person who is 1.2
metres (4 feet) tall is two-thirds as tall as a 1.8-metre- (6-foot-) tall person. Similarly, a person weighing
45.4 kg (100 pounds) is two-thirds as heavy as a person who weighs 68 kg (150 pounds).
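The interval/ratio distinction can be checked numerically: the height ratio from the text is unit-free, whereas a temperature ratio changes when the (arbitrary) zero point changes. A small sketch:

```python
# Ratio scale: height has a true zero, so ratios are meaningful and
# unit-free. From the text: 1.2 m is two-thirds of 1.8 m.
print(1.2 / 1.8)  # ~0.667; converting both heights to feet gives the same ratio

# Interval scale: the Celsius and Fahrenheit zeros are arbitrary reference
# points, so ratios are NOT preserved across the two scales.
c1, c2 = 10.0, 20.0                        # 20 C looks like "twice" 10 C
f1, f2 = c1 * 9 / 5 + 32, c2 * 9 / 5 + 32  # the same temperatures: 50 F and 68 F
print(c2 / c1, f2 / f1)  # 2.0 vs 1.36: the "ratio" depends on the zero point
```

This is why statements like "twice as hot" are invalid on interval scales but "twice as tall" is valid on ratio scales.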

2.5. Ratings and Ranking scales


RATING SCALE

A rating scale is defined as a closed-ended survey question used to represent respondent feedback in
a comparative form for specific features, products, or services. It is one of the most established
question types for online and offline surveys, where survey respondents are expected to rate an
attribute or feature. The rating scale is a variant of the popular multiple-choice question and is widely
used to gather relative information about a specific topic.

Researchers use a rating scale in research when they intend to associate a qualitative measure with
the various aspects of a product or feature. Generally, this scale is used to evaluate the performance of
a product or service, employee skills, customer service performance, processes followed for a
particular goal, etc. A rating scale survey question can be compared to a checkbox question, but a
rating scale provides more information than a mere Yes/No.

Types of Rating Scale

Broadly speaking, rating scales can be divided into two categories: Ordinal and Interval Scales.

An ordinal scale is a scale that depicts the answer options in an ordered manner. The difference
between two answer options may not be calculable, but the answer options will always be in a
certain innate order. Parameters such as attitude or feedback can be presented using an ordinal scale.

An interval scale is a scale where not only is the order of the answer variables established but the
magnitude of difference between each answer variable is also calculable. Absolute or true zero value
is not present in an interval scale. Temperature in Celsius or Fahrenheit is the most popular example of
an interval scale. Net Promoter Score, Likert Scale, Bipolar Matrix Table are some of the most effective
types of interval scale.
There are four primary types of rating scales which can be suitably used in an online survey:

● Graphic Rating Scale


● Numerical Rating Scale
● Descriptive Rating Scale
● Comparative Rating Scale

(i) Graphic Rating Scale: A graphic rating scale indicates the answer options on a scale of 1-3, 1-5,
etc. The Likert scale is a popular graphic rating scale example: respondents select a particular point on
a line or scale to depict their rating, such as a 5-point Likert scale for satisfaction. This rating scale is
often implemented by HR managers to conduct employee evaluations.

(ii) Numerical Rating Scale: A numerical rating scale has numbers as answer options, and each
number need not correspond to a characteristic or meaning. For instance, a Visual Analog Scale or a
Semantic Differential Scale can be presented using a numerical rating scale.

(iii) Descriptive Rating Scale: In a descriptive rating scale, each answer option is elaborately
explained for the respondents. A numerical value is not always related to the answer options in the
descriptive rating scale. There are certain surveys, for example, a customer satisfaction survey, which
needs to describe all the answer options in detail so that every customer has thoroughly explained
information about what is expected from the survey.

(iv) Comparative Rating Scale: Comparative rating scale, as the name suggests, expects
respondents to answer a particular question in terms of comparison, i.e. on the basis of relative
measurement or keeping other organizations/products/features as a reference.

RANKING SCALE

A ranking scale is a survey question tool that measures people’s preferences by asking them to rank
their views on a list of related items. Using these scales can help your business establish what matters
and what doesn’t matter to either external or internal stakeholders. You could use ranking scale
questions to evaluate customer satisfaction or to assess ways to motivate your employees, for
example. Ranking scales can be a source of useful information, but they do have some disadvantages.

Businesses typically use ranking scales when they want to establish preferences or levels of
importance in a group of items. A respondent completing a scale with five items, for example, will
assign a number 1 through 5 to each individual one. Typically, the number 1 goes to the item that is
most important to the respondent; the number 5 goes to the one that is of least importance. In some
cases, scales do not force respondents to rank all items, asking them to choose their top three out of
the five, for example. Online surveys may remove the need to key in numbers, allowing respondents to
drag and drop items into order.
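The aggregation just described can be sketched in Python; the items and respondent rankings below are hypothetical:

```python
# Sketch: aggregating ranking-scale responses (hypothetical data).
# Each respondent assigns ranks 1 (most important) .. 5 (least important).
responses = [
    {"price": 1, "quality": 2, "service": 3, "brand": 4, "delivery": 5},
    {"price": 2, "quality": 1, "service": 4, "brand": 5, "delivery": 3},
    {"price": 1, "quality": 3, "service": 2, "brand": 5, "delivery": 4},
]

items = responses[0].keys()
mean_rank = {item: sum(r[item] for r in responses) / len(responses)
             for item in items}

# Sort by mean rank: the lowest average is the most preferred overall.
overall_order = sorted(mean_rank, key=mean_rank.get)
print(overall_order)
```

Averaging each item's assigned ranks gives the statistical breakdown of preferences mentioned above; a lower mean rank means the item matters more to respondents.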

Advantages of Ranking Scales


Ranking scales give you an insight into what matters to your respondents. Each response to an item
has an individual value, giving results that you can easily average and rank numerically. This can be a
valuable business tool, as it gives a statistical breakdown of your audience’s preferences based on
what you need to know. If you are making business decisions and have various options to choose
from, data from a ranking scale might give you a clearer insight into how to satisfy your audience
based on what is important to them.

3. Thurstone Scaling

A Thurstone scale has a number of “agree” or “disagree” statements. It is a unidimensional
scale to measure attitudes towards people. Developing the scale is time consuming and
relatively complex compared to other scales (like the Likert scale).

Although there are technically three scales, when people refer to the “Thurstone Scale” they’re
usually talking about the method of equal-appearing intervals. It’s called “Equal appearing
intervals” because when you choose the items for your test (see Step 6 below), you’re picking
items equally spaced apart.

The other two variations are:

● The method of successive intervals: this method is more challenging to implement than
equal-appearing intervals.
● The method of paired comparisons: requires twice as many judgments as the equal-appearing
intervals method and can quickly become very time consuming.

The three methods differ in their construction, but still result in the same Agree/Disagree quiz
given to respondents.

Method of Equal-Appearing Intervals

Step 1: Develop a large number of agree/disagree statements for a topic. For example, if you
wanted to find out people’s attitudes towards immigrants, your statements might include:

● Immigrants drain social services.
● Immigrants take jobs away from regular people.
● Immigrants perform low-wage, unpopular tasks.

Step 2: Have a panel of judges rate the items on a scale of 1 to 11 for how favorable each
item is towards the topic (in this case, immigration). The lowest score (1) should indicate an
extremely unfavorable attitude and the highest score (11) should indicate an extremely
favorable attitude. Note that you do not want the judges to agree or disagree with the
statements; you want them to rate the statements on how effective they would be at
uncovering attitudes.

Step 3: Find the median score and interquartile range (IQR) for each item. If you have 50
items, you should have 50 median scores and 50 IQRs.

Step 4: Sort the table in ascending order (smallest to largest) by median. In other words, the 1s
should be at the top of the table and the 11s should be at the bottom.

Step 5: For each set of medians (i.e. 1s, 2s, 3s), sort the IQRs in descending order (largest to
smallest).

The figure below shows a partial table with the data sorted according to ascending medians
with their respective, descending IQRs.

Step 6: Select your final scale items using the table you created in Steps 4 and 5. For example,
you might choose one item from each median value. You want the statements with the most
agreement between judges. For each median value, this is the item with the lowest interquartile
range. This is a rule of thumb: you don't have to choose this item. If you decide it's poorly
worded or ambiguous, choose the item above it (with the next lowest IQR).
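Steps 3 to 6 can be sketched with the Python standard library; the statements and judge ratings below are hypothetical:

```python
# Sketch of Steps 3-6 (hypothetical judge ratings on a 1-11 scale).
from statistics import median, quantiles

ratings = {  # item -> list of judges' favourability ratings
    "Immigrants drain social services.":        [2, 3, 2, 1, 2, 3],
    "Immigrants take jobs from local workers.": [2, 2, 4, 1, 3, 2],
    "Immigrants perform unpopular tasks.":      [6, 5, 6, 7, 6, 5],
}

def iqr(xs):
    q1, _, q3 = quantiles(xs, n=4)  # quartiles of the ratings
    return q3 - q1

# Step 3: median and IQR for every item.
stats = {item: (median(r), iqr(r)) for item, r in ratings.items()}

# Steps 4-6: within each median value, the item with the LOWEST IQR
# shows the most agreement between judges, so prefer it for the scale.
best_per_median = {}
for item, (med, spread) in sorted(stats.items(), key=lambda kv: kv[1]):
    best_per_median.setdefault(med, item)  # first seen = lowest IQR

print(best_per_median)
```

With real data you would have one candidate item per median value from 1 to 11; the tiny dictionary here only illustrates the selection rule.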

4. Likert Scaling


LIKERT SCALE

Various kinds of rating scales have been developed to measure attitudes directly (i.e. the person
knows their attitude is being studied). The most widely used is the Likert Scale.

Likert (1932) developed the principle of measuring attitudes by asking people to respond to a series of
statements about a topic, in terms of the extent to which they agree with them, and so tapping into the
cognitive and affective components of attitudes.

Likert-type or frequency scales use fixed choice response formats and are designed to measure
attitudes or opinions (Bowling, 1997; Burns, & Grove, 1997). These ordinal scales measure levels of
agreement/disagreement.

A Likert-type scale assumes that the strength/intensity of experience is linear, i.e. on a continuum from
strongly agree to strongly disagree, and makes the assumption that attitudes can be measured.
Respondents may be offered a choice of five to seven or even nine pre-coded responses with the
neutral point being neither agree nor disagree.

In its final form, the Likert Scale is a five (or seven) point scale which is used to allow the individual to
express how much they agree or disagree with a particular statement.

For example

I believe that ecological questions are the most important issues facing human beings today.

Strongly agree / agree / don’t know / disagree / strongly disagree

Each of the five (or seven) responses would have a numerical value which would be used to measure
the attitude under investigation.

Likert Scale Examples

(i) Agreement
● Strongly Agree
● Agree
● Undecided
● Disagree
● Strongly Disagree

(ii) Frequency

● Very Frequently
● Frequently
● Occasionally
● Rarely
● Never

(iii) Importance

● Very Important
● Important
● Moderately Important
● Of Little Importance
● Unimportant

(iv) Likelihood

● Almost Always True


● Usually True
● Occasionally True
● Usually Not True
● Almost Never True

How can you analyze data from a Likert Scale?

(i) Summarize using a median or a mode (not a mean); the mode is probably the most suitable for
easy interpretation.
(ii) Display the distribution of observations in a bar chart (it can’t be a histogram, because the data is
not continuous).
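Both points can be illustrated with a short Python sketch over hypothetical responses coded 1 (Strongly Disagree) to 5 (Strongly Agree):

```python
# Sketch: summarising Likert responses (hypothetical data), coded
# 1 = Strongly Disagree .. 5 = Strongly Agree.
from statistics import median, mode
from collections import Counter

responses = [4, 5, 4, 3, 4, 2, 5, 4, 3, 4]

print("median:", median(responses))  # suitable for ordinal data
print("mode:", mode(responses))      # easiest to interpret

# Frequency counts, as would be plotted in a bar chart
# (not a histogram - the categories are discrete, not continuous).
counts = Counter(responses)
for category in sorted(counts):
    print(category, "#" * counts[category])
```

Note that a mean is deliberately avoided: the distances between ordinal categories are not assumed to be equal.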

Critical Evaluation

Likert Scales have the advantage that they do not expect a simple yes / no answer from the
respondent, but rather allow for degrees of opinion, and even no opinion at all. Therefore quantitative
data is obtained, which means that the data can be analyzed with relative ease.

However, like all surveys, the validity of Likert scale attitude measurement can be compromised due to
social desirability. This means that individuals may lie to put themselves in a positive light. For
example, if a Likert scale were measuring discrimination, who would admit to being racist?

Offering anonymity on self-administered questionnaires should further reduce social pressure, and
thus may likewise reduce social desirability bias.

Paulhus (1984) found that more desirable personality characteristics were reported when people were
asked to write their names, addresses and telephone numbers on their questionnaire than when they
were told not to put identifying information on the questionnaire.

5. Semantic differential scaling


SEMANTIC DIFFERENTIAL SCALE

The Semantic Differential Scale is a seven-point rating scale used to derive the respondent’s attitude
towards the given object or event by asking him to select an appropriate position on a scale between
two bipolar adjectives (such as “warm” or “cold”, “powerful” or “weak”, etc.).

For example, the respondent might be asked to rate the following five attributes of Shoppers Stop by
choosing a position on a scale between the adjectives that best describes what Shoppers Stop really
means to him.

The respondent will place a mark anywhere between the two extreme adjectives, representing his
attitude towards the object. In the above example, Shoppers Stop is evaluated as organized, cold,
modern, reliable and simple.

Sometimes the negative adjectives are placed on the right and sometimes on the left side of a scale.
This is done to control the tendency of the respondents, especially those with either very positive or
negative attitudes, to mark the right or left-hand sides of a scale without reading the labels.

The items on a semantic differential scale can be scored on either a numerical range of -3 to +3 or 1
to 7. The data obtained are analyzed through profile analysis. In profile analysis, the means and
medians of the scale values are found out and then are compared by plotting or statistical analysis.
Through this method, it is possible to compare the overall similarities and differences among the
objects.
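The profile analysis described above can be sketched in Python; the stores, attributes, and ratings below are hypothetical:

```python
# Sketch of profile analysis (hypothetical -3..+3 ratings for two stores
# on bipolar attributes such as "disorganized (-3) .. organized (+3)").
from statistics import mean

ratings = {
    "Store A": {"organized": [2, 3, 1, 2], "modern": [1, 2, 2, 3]},
    "Store B": {"organized": [-1, 0, 1, 0], "modern": [3, 2, 3, 2]},
}

# Mean (or median) scale value per attribute gives each object's profile;
# plotting the profiles side by side shows similarities and differences.
profiles = {store: {attr: mean(vals) for attr, vals in attrs.items()}
            for store, attrs in ratings.items()}
print(profiles)
```

Comparing the two profiles attribute by attribute is exactly the kind of similarity/difference comparison the text refers to.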

The versatility of the semantic differential scale increases its application in marketing research. It is
widely used in comparing brands, company images, and products. It also helps in developing
advertising campaigns and promotional strategies in new product development studies.

6. Paired comparison
Definition: The Paired Comparison Scaling is a comparative scaling technique wherein
the respondent is shown two objects at the same time and is asked to select one according
to the defined criterion. The resulting data are ordinal in nature.

The paired comparison scaling is often used when the stimulus objects are physical
products. The comparison data so obtained can be analyzed in either of two ways. First,
the researcher can compute the percentage of respondents who prefer one object over
another by adding the matrices for each respondent, dividing the sum by the number of
respondents, and then multiplying it by 100. Through this method, all the stimulus objects
can be evaluated simultaneously.

Second, under the assumption of transitivity (which implies that if brand X is preferred to
Brand Y, and brand Y to brand Z, then brand X is preferred to brand Z) the paired
comparison data can be converted into a rank order. To determine the rank order, the
researcher identifies the number of times the object is preferred by adding up all the
matrices.

The paired comparison method is effective when the number of objects is limited, because
it requires direct comparison; with a large number of stimulus objects the comparison
becomes cumbersome. Also, if the assumption of transitivity is violated, the order in which
the objects are placed may bias the results.
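Both analyses can be sketched in Python; the brands and preference matrices below are hypothetical:

```python
# Sketch: analysing paired-comparison data (hypothetical brands X, Y, Z).
# matrix[a][b] = 1 means the respondent preferred a over b.
respondents = [
    {"X": {"Y": 1, "Z": 1}, "Y": {"X": 0, "Z": 1}, "Z": {"X": 0, "Y": 0}},
    {"X": {"Y": 1, "Z": 0}, "Y": {"X": 0, "Z": 1}, "Z": {"X": 1, "Y": 0}},
]
brands = ["X", "Y", "Z"]
n = len(respondents)

# First analysis: % of respondents preferring each object over another.
pct = {a: {b: 100 * sum(r[a][b] for r in respondents) / n
           for b in brands if b != a} for a in brands}
print(pct["X"]["Y"])  # everyone preferred X to Y

# Second analysis: total "wins" across all matrices give a rank order
# (valid under the transitivity assumption).
wins = {a: sum(r[a][b] for r in respondents for b in brands if b != a)
        for a in brands}
ranked = sorted(brands, key=wins.get, reverse=True)
print(ranked)
```

The first computation mirrors "add the matrices, divide by the number of respondents, multiply by 100"; the second mirrors "count how often each object is preferred" to derive the rank order.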

7. Reliability and Validity Scale



Reliability

Reliability refers to the consistency of a measure. Psychologists consider three types of
consistency: over time (test-retest reliability), across items (internal consistency), and
across different researchers (inter-rater reliability).

Test-Retest Reliability

When researchers measure a construct that they assume to be consistent across time, then the scores they obtain
should also be consistent across time. Test-retest reliability is the extent to which this is actually the case. For
example, intelligence is generally thought to be consistent across time. A person who is highly intelligent today
will be highly intelligent next week. This means that any good measure of intelligence should produce roughly
the same scores for this individual next week as it does today. Clearly, a measure that produces highly
inconsistent scores over time cannot be a very good measure of a construct that is supposed to be consistent.

Assessing test-retest reliability requires using the measure on a group of people at one time, using it again on the
same group of people at a later time, and then looking at the test-retest correlation between the two sets of
scores. This is typically done by graphing the data in a scatterplot and computing Pearson’s r. Figure 5.2 shows
the correlation between two sets of scores of several university students on the Rosenberg Self-Esteem Scale,
administered two times, a week apart. Pearson’s r for these data is +.95. In general, a test-retest correlation of
+.80 or greater is considered to indicate good reliability.

Figure 5.2 Test-Retest Correlation Between Two Sets of Scores of Several College Students on the
Rosenberg Self-Esteem Scale, Given Two Times a Week Apart

Again, high test-retest correlations make sense when the construct being measured is assumed to be consistent
over time, which is the case for intelligence, self-esteem, and the Big Five personality dimensions. But other
constructs are not assumed to be stable over time. The very nature of mood, for example, is that it changes. So a
measure of mood that produced a low test-retest correlation over a period of a month would not be a cause for
concern.

Internal Consistency

A second kind of reliability is internal consistency, which is the consistency of people’s responses across the
items on a multiple-item measure. In general, all the
items on such measures are supposed to reflect the same underlying construct, so people’s scores on those items
should be correlated with each other. On the Rosenberg Self-Esteem Scale, people who agree that they are a
person of worth should tend to agree that they have a number of good qualities. If people’s responses to the
different items are not correlated with each other, then it would no longer make sense to claim that they are all
measuring the same underlying construct. This is as true for behavioural and physiological measures as for
self-report measures. For example, people might make a series of bets in a simulated game of roulette as a
measure of their level of risk seeking. This measure would be internally consistent to the extent that individual
participants’ bets were consistently high or low across trials.

Like test-retest reliability, internal consistency can only be assessed by collecting and analyzing data. One
approach is to look at a split-half correlation. This involves splitting the items into two sets, such as the first and second halves of the items or the even- and
odd-numbered items. Then a score is computed for each set of items, and the relationship between the two sets of
scores is examined. For example, Figure 5.3 shows the split-half correlation between several university students’
scores on the even-numbered items and their scores on the odd-numbered items of the Rosenberg Self-Esteem
Scale. Pearson’s r for these data is +.88. A split-half correlation of +.80 or greater is generally considered good
internal consistency.

Figure 5.3 Split-Half Correlation Between Several College Students’ Scores on the Even-Numbered
Items and Their Scores on the Odd-Numbered Items of the Rosenberg Self-Esteem Scale

Perhaps the most common measure of internal consistency used by researchers in psychology is a statistic called
Cronbach’s α (the Greek letter alpha). Conceptually, α is the mean of all possible split-half correlations for a set of items. For
example, there are 252 ways to split a set of 10 items into two sets of five. Cronbach’s α would be the mean of
the 252 split-half correlations. Note that this is not how α is actually computed, but it is a correct way of
interpreting the meaning of this statistic. Again, a value of +.80 or greater is generally taken to indicate good
internal consistency.
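As the text notes, α is not actually computed by averaging split-half correlations; the standard variance formula is easy to sketch. The item responses below are hypothetical:

```python
# Sketch: Cronbach's alpha from an item-response matrix (hypothetical
# data), using the standard variance formula
#   alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
from statistics import pvariance

items = [  # respondent x item matrix (k = 4 items)
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 4, 5],
    [3, 2, 3, 3],
    [4, 4, 4, 5],
]
k = len(items[0])

item_vars = [pvariance(col) for col in zip(*items)]
total_var = pvariance([sum(row) for row in items])

alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 2))  # +.80 or greater is taken as good consistency
```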

Interrater Reliability

Many behavioural measures involve significant judgment on the part of an observer or a rater. Inter-rater
reliability is the extent to which different observers are consistent in their judgments. For example, if you were interested
in measuring university students’ social skills, you could make video recordings of them as they interacted with
another student whom they are meeting for the first time. Then you could have two or more observers watch the
videos and rate each student’s level of social skills. To the extent that each participant does in fact have some
level of social skills that can be detected by an attentive observer, different observers’ ratings should be highly
correlated with each other. Inter-rater reliability would also have been measured in Bandura’s Bobo doll study. In
this case, the observers’ ratings of how many acts of aggression a particular child committed while playing with
the Bobo doll should have been highly positively correlated. Interrater reliability is often assessed using
Cronbach’s α when the judgments are quantitative or an analogous statistic called Cohen’s κ (the Greek letter
kappa) when they are categorical.
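Cohen's κ for two raters making categorical judgments can be sketched as follows; the clips and category codes below are hypothetical:

```python
# Sketch: Cohen's kappa for two raters making categorical judgments
# (hypothetical "A"ggressive / "N"ot-aggressive codings of 10 clips).
from collections import Counter

rater1 = ["A", "A", "N", "A", "N", "N", "A", "A", "N", "A"]
rater2 = ["A", "A", "N", "N", "N", "N", "A", "A", "N", "A"]
n = len(rater1)

# Observed agreement: proportion of clips coded identically.
p_o = sum(a == b for a, b in zip(rater1, rater2)) / n

# Chance agreement: sum over categories of the product of each
# rater's marginal proportions.
c1, c2 = Counter(rater1), Counter(rater2)
p_e = sum(c1[cat] * c2[cat] for cat in c1) / n ** 2

kappa = (p_o - p_e) / (1 - p_e)
print(round(kappa, 2))
```

κ corrects the raw agreement rate for the agreement that would occur by chance alone, which is why it is preferred over simple percent agreement for categorical judgments.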

Validity

Validity is the extent to which the scores from a measure represent the variable they are intended to. But how do
researchers make this judgment? We have already considered one factor that they take into account—reliability.
When a measure has good test-retest reliability and internal consistency, researchers should be more confident
that the scores represent what they are supposed to. There has to be more to it, however, because a measure can
be extremely reliable but have no validity whatsoever. As an absurd example, imagine someone who believes
that people’s index finger length reflects their self-esteem and therefore tries to measure self-esteem by holding a
ruler up to people’s index fingers. Although this measure would have extremely good test-retest reliability, it
would have absolutely no validity. The fact that one person’s index finger is a centimetre longer than another’s
would indicate nothing about which one had higher self-esteem.

Discussions of validity usually divide it into several distinct “types.” But a good way to interpret these types is
that they are other kinds of evidence—in addition to reliability—that should be taken into account when judging
the validity of a measure. Here we consider three basic kinds: face validity, content validity, and criterion
validity.

Face Validity

Face validity is the extent to which a measurement method appears “on its face” to measure the construct of interest. Most
people would expect a self-esteem questionnaire to include items about whether they see themselves as a person
of worth and whether they think they have good qualities. So a questionnaire that included these kinds of items
would have good face validity. The finger-length method of measuring self-esteem, on the other hand, seems to
have nothing to do with self-esteem and therefore has poor face validity. Although face validity can be assessed
quantitatively—for example, by having a large sample of people rate a measure in terms of whether it appears to
measure what it is intended to—it is usually assessed informally.

Face validity is at best a very weak kind of evidence that a measurement method is measuring what it is supposed
to. One reason is that it is based on people’s intuitions about human behaviour, which are frequently wrong. It is
also the case that many established measures in psychology work quite well despite lacking face validity. The
Minnesota Multiphasic Personality Inventory-2 (MMPI-2) measures many personality characteristics and
disorders by having people decide whether each of over 567 different statements applies to them—where many
of the statements do not have any obvious relationship to the construct that they measure. For example, the items
“I enjoy detective or mystery stories” and “The sight of blood doesn’t frighten me or make me sick” both
measure the suppression of aggression. In this case, it is not the participants’ literal answers to these questions
that are of interest, but rather whether the pattern of the participants’ responses to a series of questions matches
those of individuals who tend to suppress their aggression.

Content Validity

Content validity is the extent to which a measure “covers” the construct of interest. For example, if a researcher conceptually
defines test anxiety as involving both sympathetic nervous system activation (leading to nervous feelings) and
negative thoughts, then his measure of test anxiety should include items about both nervous feelings and negative
thoughts. Or consider that attitudes are usually defined as involving thoughts, feelings, and actions toward
something. By this conceptual definition, a person has a positive attitude toward exercise to the extent that he or
she thinks positive thoughts about exercising, feels good about exercising, and actually exercises. So to have
good content validity, a measure of people’s attitudes toward exercise would have to reflect all three of these
aspects. Like face validity, content validity is not usually assessed quantitatively. Instead, it is assessed by
carefully checking the measurement method against the conceptual definition of the construct.

Criterion Validity

Criterion validity is the extent to which people’s scores on a measure are correlated with other variables (known as
criteria) that one would expect them to be correlated with. For example, people’s scores on a new measure of test
anxiety should be negatively correlated with their performance on an important school exam. If it were found that
people’s scores were in fact negatively correlated with their exam performance, then this would be a piece of
evidence that these scores really represent people’s test anxiety. But if it were found that people scored equally
well on the exam regardless of their test anxiety scores, then this would cast doubt on the validity of the measure.

A criterion can be any variable that one has reason to think should be correlated with the construct being
measured, and there will usually be many of them. For example, one would expect test anxiety scores to be
negatively correlated with exam performance and course grades and positively correlated with general anxiety
and with blood pressure during an exam. Or imagine that a researcher develops a new measure of physical risk
taking. People’s scores on this measure should be correlated with their participation in “extreme” activities such
as snowboarding and rock climbing, the number of speeding tickets they have received, and even the number of
broken bones they have had over the years. When the criterion is measured at the same time as the construct,
criterion validity is referred to as concurrent validity; however, when the criterion is measured at some point in
the future (after the construct has been measured), it is referred to as predictive validity (because scores on the
measure have “predicted” a future outcome).


Criteria can also include other measures of the same construct. For example, one would expect new measures of
test anxiety or physical risk taking to be positively correlated with existing measures of the same constructs. This
is known as convergent validity.

Assessing convergent validity requires collecting data using the measure. Researchers John Cacioppo and
Richard Petty did this when they created their self-report Need for Cognition Scale to measure how much people
value and engage in thinking (Cacioppo & Petty, 1982). In a series of studies, they showed that people’s scores
were positively correlated with their scores on a standardized academic achievement test, and that their scores
were negatively correlated with their scores on a measure of dogmatism (which represents a tendency toward
obedience). In the years since it was created, the Need for Cognition Scale has been used in literally hundreds of
studies and has been shown to be correlated with a wide variety of other variables, including the effectiveness of
an advertisement, interest in politics, and juror decisions (Petty, Briñol, Loersch, & McCaslin, 2009).

Discriminant Validity

Discriminant validity, on the other hand, is the extent to which scores on a measure are not correlated with measures of variables that
are conceptually distinct. For example, self-esteem is a general attitude toward the self that is fairly stable over
time. It is not the same as mood, which is how good or bad one happens to be feeling right now. So people’s
scores on a new measure of self-esteem should not be very highly correlated with their moods. If the new
measure of self-esteem were highly correlated with a measure of mood, it could be argued that the new measure
is not really measuring self-esteem; it is measuring mood instead.

When they created the Need for Cognition Scale, Cacioppo and Petty also provided evidence of discriminant
validity by showing that people’s scores were not correlated with certain other variables. For example, they found
only a weak correlation between people’s need for cognition and a measure of their cognitive style—the extent to
which they tend to think analytically by breaking ideas into smaller parts or holistically in terms of “the big
picture.” They also found no correlation between people’s need for cognition and measures of their test anxiety
and their tendency to respond in socially desirable ways. All these low correlations provide evidence that the
measure is reflecting a conceptually distinct construct.

8. Sampling

In research terms a sample is a group of people, objects, or items that are taken from a larger
population for measurement. The sample should be representative of the population to ensure
that we can generalise the findings from the research sample to the population as a whole.
What is the purpose of sampling? To draw conclusions about populations from samples, we
must use inferential statistics, which enable us to determine a population’s characteristics by
directly observing only a portion (or sample) of the population. We take a sample rather than
measuring the whole population because a full census is usually not practical and almost never
economical. There are also difficulties in measuring whole populations because:
• The large size of many populations
• Inaccessibility of some of the population - Some populations are so difficult to get access to
that only a sample can be used. E.g. prisoners, people with severe mental illness, disaster
survivors etc. The inaccessibility may be associated with cost or time or just access.
• Destructiveness of the observation- Sometimes the very act of observing the desired
characteristic of the product destroys it for the intended use. Good examples of this occur in
quality control. E.g. to determine the quality of a fuse and whether it is defective, it must be
destroyed. Therefore if you tested all the fuses, all would be destroyed.
• Accuracy and sampling - A sample may be more accurate than the total study population. A
badly identified population can provide less reliable information than a carefully obtained
sample.

8.1. Steps

Steps in Sampling Process


An operational sampling process can be divided into seven steps as given below:

1. Defining the target population.
2. Specifying the sampling frame.
3. Specifying the sampling unit.
4. Selection of the sampling method.
5. Determination of sample size.
6. Specifying the sampling plan.
7. Selecting the sample.

1. Defining the Target Population:

Defining the population of interest for business research is the first step in the sampling process. In general, the
target population is defined in terms of element, sampling unit, extent, and time frame. The definition should be
in line with the objectives of the research study. For example, if a kitchen appliances firm wants to conduct a
survey to ascertain the demand for its micro ovens, it may define the population as ‘all women above the age of
20 who cook (assuming that very few men cook)’. However, this definition is too broad and will include every
household in the country in the population that is to be covered by the survey. Therefore the definition can be
further refined and defined at the sampling unit level, that is, all women above the age of 20 who cook and whose
monthly household income exceeds Rs.20,000. This reduces the target population size and makes the research
more focused. The population definition can be refined further by specifying the area from where the researcher
has to draw his sample, that is, households located in Hyderabad.
A well defined population reduces the probability of including respondents who do not fit the research
objective of the company. For example, if the population is defined as all women above the age of 20, the
researcher may end up taking the opinions of a large number of women who cannot afford to buy a micro oven.

2. Specifying the Sampling Frame:

Once the definition of the population is clear, a researcher should decide on the sampling frame. A sampling
frame is the list of elements from which the sample may be drawn. Continuing with the micro oven example, an
ideal sampling frame would be a database that contains all the households that have a monthly income above
Rs.20,000. However, in practice it is difficult to get an exhaustive sampling frame that exactly fits the
requirements of a particular research study. In general, researchers use easily available sampling frames like
telephone directories and lists of credit card and mobile phone users. Various private players provide databases
developed along various demographic and economic variables. Sometimes, maps and aerial pictures are also
used as sampling frames. Whatever may be the case, an ideal sampling frame is one that covers the entire
population and lists each of its elements only once.

A sampling frame error pops up when the sampling frame does not accurately represent the total population or
when some elements of the population are missing. Another drawback in the sampling frame is
over-representation: a telephone directory can over-represent households that have two or more
connections.

3. Specifying the Sampling Unit:

A sampling unit is a basic unit that contains a single element or a group of elements of the population to be
sampled. In this case, a household becomes a sampling unit and all women above the age of 20 years living in
that particular house become the sampling elements. If it is possible to identify the exact target audience of the
business research, every individual element would be a sampling unit. This would present a case of primary
sampling unit. However, a convenient and better means of sampling would be to select households as the
sampling unit and interview all females above 20 years, who cook. This would present a case of secondary
sampling unit.

4. Selection of the Sampling Method:

The sampling method outlines the way in which the sample units are to be selected. The choice of the sampling
method is influenced by the objectives of the business research, availability of financial resources, time
constraints, and the nature of the problem to be investigated. All sampling methods can be grouped under two
distinct heads, that is, probability and non-probability sampling.

5. Determination of Sample Size:

The sample size plays a crucial role in the sampling process. There are various ways of classifying the
techniques used in determining the sample size. A couple that hold primary importance and are worth
mentioning are whether the technique deals with fixed or sequential sampling and whether its logic is based on
traditional or Bayesian methods. In non-probability sampling procedures, the allocation of budget, thumb rules
and number of sub groups to be analyzed, importance of the decision, number of variables, nature of analysis,
incidence rates, and completion rates play a major role in sample size determination. In the case of probability
sampling, however, formulas are used to calculate the sample size after the acceptable level of error and the
level of confidence are specified. The details of the various techniques used to determine the sample size are
explained at the end of the chapter.

Mr. Abhishek Sharma
Contact : 7859826868

6. Specifying the Sampling Plan:

In this step, the specifications and decisions regarding the implementation of the research process are outlined.
Suppose, blocks in a city are the sampling units and the households are the sampling elements. This step
outlines the modus operandi of the sampling plan in identifying houses based on specified characteristics. It
includes issues like: How is the interviewer going to take a systematic sample of the houses? What should the
interviewer do when a house is vacant? What is the recontact procedure for respondents who were unavailable?
All these and many other questions need to be answered for the smooth functioning of the research process.
These are guidelines that would help the researcher in every step of the process. As the interviewers and their
co-workers will be on field duty most of the time, a proper specification of the sampling plan would make their
work easy and they would not have to revert to their seniors when faced with operational problems.

7. Selecting the Sample:

This is the final step in the sampling process, where the actual selection of the sample elements is carried out.
At this stage, it is necessary that the interviewers stick to the rules outlined for the smooth implementation of the
business research. This step involves implementing the sampling plan to select the sample required for the
survey.

8.2. Types

Types of Sampling: Probability Sampling Methods


Probability sampling is a sampling technique in which samples from a larger population
are chosen using a method based on the theory of probability. This sampling method
considers every member of the population and forms samples on the basis of a fixed
process. For example, in a population of 1000 members, each member will
have a 1/1000 chance of being selected to be a part of a sample. It gets rid of bias in the
population and gives a fair chance to all members to be included in the sample.

There are 4 types of probability sampling techniques:

● Simple Random Sampling: One of the best probability sampling techniques,
which helps in saving time and resources, is the Simple Random Sampling
method. It is a trustworthy method of obtaining information where every single
member of a population is chosen randomly, merely by chance, and each
individual has the exact same probability of being chosen to be a part of a
sample.
For example, in an organization of 500 employees, if the HR team
decides on conducting team building activities, it is highly likely that
they would prefer picking chits out of a bowl. In this case, each of the
500 employees has an equal opportunity of being selected.
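The chit-drawing example above maps directly onto Python's standard library. This is a minimal sketch; the employee IDs are hypothetical:

```python
import random

random.seed(42)  # fixed seed so the draw is reproducible

# Hypothetical population: 500 employees identified by ID
employees = [f"EMP-{i:03d}" for i in range(1, 501)]

# Draw 20 employees; every employee has the same 20/500 chance
team = random.sample(employees, k=20)

print(len(team))       # 20
print(len(set(team)))  # 20 -- sampling is without replacement
```

`random.sample` draws without replacement, which matches picking chits out of a bowl without putting them back.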
● Cluster Sampling: Cluster sampling is a method where the researchers divide
the entire population into sections or clusters that represent a population.
Clusters are identified and included in a sample on the basis of defining
demographic parameters such as age, location, sex etc. which makes it
extremely easy for a survey creator to derive effective inference from the
feedback.
For example, if the government of the United States wishes to evaluate
the number of immigrants living in the Mainland US, they can divide it
into clusters on the basis of states such as California, Texas, Florida,
Massachusetts, Colorado, Hawaii, etc. This way of conducting a survey
will be more effective as the results will be organized into states and
provide insightful immigration data.
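One common variant, one-stage cluster sampling, can be sketched as: randomly select whole clusters, then include every element of each selected cluster. The state clusters and respondent IDs below are invented for illustration:

```python
import random

random.seed(7)

# Hypothetical clusters: state -> list of respondent IDs
clusters = {
    "California": [f"CA-{i}" for i in range(100)],
    "Texas":      [f"TX-{i}" for i in range(80)],
    "Florida":    [f"FL-{i}" for i in range(60)],
    "Colorado":   [f"CO-{i}" for i in range(40)],
}

# One-stage cluster sampling: randomly select 2 clusters,
# then include every element of each selected cluster
chosen_states = random.sample(list(clusters), k=2)
sample = [person for state in chosen_states for person in clusters[state]]

print(chosen_states, len(sample))
```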
● Systematic Sampling: Using the systematic sampling method, members of a
sample are chosen at regular intervals from a population. It requires selection of a
starting point for the sample and sample size that can be repeated at regular
intervals. This type of sampling method has a predefined interval and hence
this sampling technique is the least time-consuming.
For example, a researcher intends to collect a systematic sample of
500 people in a population of 5000. Each element of the population will
be numbered from 1-5000 and every 10th individual will be chosen to
be a part of the sample (Total population/ Sample Size = 5000/500 =
10).
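The worked example (interval = 5000/500 = 10) can be sketched as:

```python
import random

population = list(range(1, 5001))  # elements numbered 1-5000
n = 500                            # desired sample size

k = len(population) // n           # sampling interval: 5000 / 500 = 10
start = random.randrange(k)        # random starting point in the first interval
sample = population[start::k]      # every 10th element thereafter

print(k, len(sample))  # 10 500
```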

● Stratified Random Sampling: Stratified random sampling is a method where the
population is divided into smaller groups that don't overlap but together
represent the entire population. While sampling, these groups can be
organized and a sample drawn from each group separately.
For example, a researcher looking to analyze the characteristics of
people belonging to different annual income divisions will create strata
(groups) according to annual family income, such as: less than $20,000,
$21,000 to $30,000, $31,000 to $40,000, $41,000 to $50,000, etc.
People belonging to different income groups can then be observed to
draw conclusions about which income strata have which characteristics.
Marketers can analyze which income groups to target and which ones to
eliminate in order to create a roadmap that would definitely bear fruitful
results.
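Proportionate stratified sampling, where each stratum contributes in proportion to its size, can be sketched as follows. The income bands and household IDs are invented for illustration:

```python
import random

random.seed(1)

# Hypothetical strata: income band -> list of household IDs
strata = {
    "<$20k":     [f"A{i}" for i in range(200)],
    "$21k-$30k": [f"B{i}" for i in range(300)],
    "$31k-$40k": [f"C{i}" for i in range(400)],
    "$41k-$50k": [f"D{i}" for i in range(100)],
}

total = sum(len(v) for v in strata.values())  # 1000 households overall
sample_size = 100                             # 10% overall sample

# Draw from each stratum separately, in proportion to its population share
sample = {
    band: random.sample(members, k=round(sample_size * len(members) / total))
    for band, members in strata.items()
}

for band, picks in sample.items():
    print(band, len(picks))  # 20, 30, 40, 10
```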

Use of the Probability Sampling Method


There are multiple uses of the probability sampling method. They are:

● Reduce Sample Bias: Using the probability sampling method, the bias in the
sample derived from a population is negligible to non-existent. The selection of
the sample does not rest on the understanding or inference of the
researcher. Probability sampling leads to higher quality data collection as the
population is appropriately represented by the sample.
● Diverse Population: When the population is large and diverse, it is important
to have adequate representation so that the data is not skewed towards one
demographic. For example, if Square would like to understand the people that
could use their point-of-sale devices, a survey conducted on a sample of people
across the US from different industries and socio-economic backgrounds helps.
● Create an Accurate Sample: Probability sampling helps the researchers plan
and create an accurate sample. This helps to obtain well-defined data.

Types of Sampling: Non-probability Sampling Methods


The non-probability method is a sampling method that involves collecting feedback
on the basis of a researcher's or statistician's sample selection capabilities and not on a
fixed selection process. In most situations, the output of a survey conducted with a
non-probability sample leads to skewed results, which may not totally represent the
desired target population. But there are situations, such as the preliminary stages of
research or where there are cost constraints for conducting research, where
non-probability sampling will be much more effective than the other type.

There are 4 types of non-probability sampling, which explain the purpose of this
sampling method in a better manner:

● Convenience sampling: This method is dependent on the ease of access to
subjects, such as surveying customers at a mall or passers-by on a busy street.
It is usually termed convenience sampling, as it's carried out on the basis of
how easy it is for a researcher to get in touch with the subjects. Researchers
have nearly no authority over selecting elements of the sample and it’s purely
done on the basis of proximity and not representativeness. This non-probability
sampling method is used when there are time and cost limitations in collecting
feedback. In situations where there are resource limitations such as the initial
stages of research, convenience sampling is used.
For example, startups and NGOs usually conduct convenience
sampling at a mall to distribute leaflets of upcoming events or
promotion of a cause – they do that by standing at the entrance of the
mall and giving out pamphlets randomly.
● Judgmental or Purposive Sampling: In judgmental or purposive sampling,
the sample is formed at the discretion of the researcher, purely considering the
purpose of the study along with an understanding of the target audience. Also known
as deliberate sampling, the participants are selected solely on the basis of
research requirements, and elements that do not fit the purpose are kept
out of the sample. For instance, suppose researchers want to understand the
thought process of people who are interested in studying for their master’s
degree. The selection criteria will be: “Are you interested in studying for
Masters in …?” and those who respond with a “No” will be excluded from the
sample.
● Snowball sampling: Snowball sampling is a sampling method that is used in
studies which need to be carried out to understand subjects which are difficult
to trace. For example, it will be extremely challenging to survey shelterless
people or illegal immigrants. In such cases, using the snowball theory,
researchers can track a few of that particular category to interview and results
will be derived on that basis. This sampling method is implemented in
situations where the topic is highly sensitive and not openly discussed such as
conducting surveys to gather information about HIV/AIDS. Not many victims will
readily respond to the questions but researchers can contact people they might
know or volunteers associated with the cause to get in touch with the victims
and collect information.
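The referral chain described above can be sketched as a breadth-first walk over a referral network. The network below (a volunteer who refers participants, who in turn refer others) is entirely hypothetical:

```python
from collections import deque

# Hypothetical referral network: who can put the researcher in touch with whom
referrals = {
    "volunteer": ["p1", "p2"],
    "p1": ["p3"],
    "p2": ["p4", "p5"],
    "p3": [], "p4": [], "p5": [],
}

# Start from one known contact and follow referrals outward (snowballing)
seeds = ["volunteer"]
sample, queue = [], deque(seeds)
seen = set(seeds)
while queue:
    person = queue.popleft()
    sample.append(person)
    for contact in referrals.get(person, []):
        if contact not in seen:   # avoid interviewing the same person twice
            seen.add(contact)
            queue.append(contact)

print(sample)  # ['volunteer', 'p1', 'p2', 'p3', 'p4', 'p5']
```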
● Quota sampling: In quota sampling, members are selected on the basis of a
pre-set standard. In this case, as a sample is formed on the basis of specific
attributes, the created sample will have the same attributes that are found in
the total population. It is an extremely quick method of collecting samples.
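A quota fill can be sketched as: accept respondents in whatever order they arrive until each pre-set quota is full. The quotas and the respondent stream here are invented:

```python
# Hypothetical pre-set quotas per attribute value
quotas = {"male": 3, "female": 3}
filled = {"male": 0, "female": 0}
sample = []

# Respondents arrive in whatever order is convenient (not random selection)
arrivals = [("R1", "male"), ("R2", "male"), ("R3", "female"), ("R4", "male"),
            ("R5", "male"), ("R6", "female"), ("R7", "female"), ("R8", "female")]

for respondent, group in arrivals:
    if filled[group] < quotas[group]:  # accept only while the quota is open
        sample.append(respondent)
        filled[group] += 1

print(sample)  # ['R1', 'R2', 'R3', 'R4', 'R6', 'R7']
```

R5 and R8 are turned away because their group's quota is already full, which is what makes the final sample mirror the pre-set attribute mix.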

Use of the Non-Probability Sampling Method


There are multiple uses of the non-probability sampling method. They are:

● Create a hypothesis: The non-probability sampling method is used to create a
hypothesis when limited to no prior information is available. This method helps
with immediate return of data and helps to build a base for any further
research.
● Exploratory research: This sampling technique is widely used when
researchers aim at conducting qualitative research, pilot studies or exploratory
research.
● Budget and time constraints: The non-probability method is used when there are
budget and time constraints and some preliminary data has to be collected.
Since the survey design is not rigid, it is easier to pick respondents at random
and have them take the survey or questionnaire.

Difference between Probability Sampling and Non-Probability Sampling Methods

We have looked at the different types of sampling methods above and their subtypes.
To encapsulate the whole discussion, the major differences between probability
sampling methods and non-probability sampling methods are as below:

Definition
● Probability: Probability sampling is a sampling technique in which samples from a larger population are chosen using a method based on the theory of probability.
● Non-probability: Non-probability sampling is a sampling technique in which the researcher selects samples based on the subjective judgment of the researcher rather than random selection.

Alternatively Known As
● Probability: Random sampling method.
● Non-probability: Non-random sampling method.

Population Selection
● Probability: The population is selected randomly.
● Non-probability: The population is selected arbitrarily.

Market Research
● Probability: The research is conclusive in nature.
● Non-probability: The research is exploratory in nature.

Sample
● Probability: Since there is a method for deciding the sample, the population demographics are conclusively represented.
● Non-probability: Since the sampling method is arbitrary, the population demographics representation is almost always skewed.

Time Taken
● Probability: Takes a longer time to conduct since the research design defines the selection parameters before the market research study begins.
● Non-probability: This type of sampling method is quick since neither the sample nor the selection criteria of the sample are predefined.

Results
● Probability: This type of sampling is entirely unbiased and hence the results are unbiased too and conclusive.
● Non-probability: This type of sampling is entirely biased and hence the results are biased too, rendering the research speculative.

Hypothesis
● Probability: There is an underlying hypothesis before the study begins, and the objective of this method is to prove the hypothesis.
● Non-probability: The hypothesis is derived after conducting the research study.

8.3. Sample Size Decision


Sample size is a count of the individual samples or observations in any statistical setting, such as
a scientific experiment or a public opinion survey. Though a relatively straightforward concept,
choice of sample size is a critical determination for a project. Too small a sample yields unreliable
results, while an overly large sample demands a good deal of time and resources.

TL;DR (Too Long; Didn't Read)


Sample size is a direct count of the number of samples measured or observations being made.

The Definition of Sample Size


Sample size measures the number of individual samples measured or observations used in a
survey or experiment. For example, if you test 100 samples of soil for evidence of acid rain, your
sample size is 100. If an online survey returned 30,500 completed questionnaires, your sample
size is 30,500. In statistics, sample size is generally represented by the variable "n".

Calculation of Sample Size


To determine the sample size needed for an experiment or survey, researchers take a number of
desired factors into account. First, the total size of the population being studied must be
considered -- a survey that is looking to draw conclusions about all of New York state, for example,
will need a much larger sample size than one specifically focused on Rochester. Researchers will
also need to consider the margin of error, the reliability that the data collected is generally
accurate; and the confidence level, the probability that your margin of error is accurate. Finally,
researchers must take into account the standard deviation they expect to see in the data.
Standard deviation measures how much individual pieces of data vary from the average data
measured. For instance, soil samples from one park will likely have a much smaller standard
deviation in their nitrogen content than soils collected from across a whole county.
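The factors above (confidence level, margin of error, expected variability) feed into standard formulas. The text does not give a specific formula here, so treat the following as an illustrative assumption: Cochran's widely used formula for estimating a proportion, n = z^2 * p(1-p) / e^2.

```python
import math

def cochran_n(confidence_z: float, p: float, margin_of_error: float) -> int:
    """Cochran's sample-size formula for estimating a proportion.

    confidence_z    -- z-score for the confidence level (1.96 for 95%)
    p               -- expected proportion (0.5 is the most conservative)
    margin_of_error -- acceptable error as a fraction (0.05 = +/-5%)
    """
    n = (confidence_z ** 2) * p * (1 - p) / margin_of_error ** 2
    return math.ceil(n)  # round up: a fractional respondent is not possible

# 95% confidence, maximum variability, +/-5% margin of error
print(cochran_n(1.96, 0.5, 0.05))  # 385
```

Using p = 0.5 maximizes p(1-p) and therefore gives the most conservative (largest) sample size when the true proportion is unknown.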

Dangers of Small Sample Size


Large sample sizes are needed for a statistic to be accurate and reliable, especially if its findings
are to be extrapolated to a larger population or group of data. Say you were conducting a survey
about exercise and interviewed five people, two of whom said they run a marathon annually. If you
take this survey to represent the population of the country as a whole, then according to your
research, 40 percent of people run at least one marathon annually -- an unexpectedly high
percentage. The smaller your sample size, the more likely outliers -- unusual pieces of data -- are
to skew your findings.

Sample Size and Margin of Error


The sample size of a statistical survey is also directly related to the survey's margin of error.
Margin of error is a percentage that expresses the probability that the data received is accurate.
For example, in a survey about religious beliefs, the margin of error is the percentage of
responders who can be expected to provide the same answer if the survey was repeated. To
determine the margin of error, divide 1 by the square root of the sample size, and then multiply by
100 to get a percentage. For instance, a sample size of 2,400 will have a margin of error of 2.04
percent.
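The rule of thumb above (divide 1 by the square root of the sample size, then multiply by 100) reproduces the 2.04 percent figure directly:

```python
import math

def margin_of_error_pct(n: int) -> float:
    """Rough margin of error (in percent) for a sample of size n."""
    return 1 / math.sqrt(n) * 100

print(round(margin_of_error_pct(2400), 2))  # 2.04
```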

8.4. Secondary data sources

Secondary Data Sources for Research

Secondary data are data that are taken from research works already done by somebody and used for
the purpose of the research data collection. The reason why secondary data are being increasingly
used in research is that published statistics are now available covering diverse fields so that an
investigator finds required data readily available to him in many cases. For certain studies like stock
price behavior, interest and exchange rate scenario, etc. only secondary data are used.

There are two broad categories of secondary data – internal secondary data and external secondary
data.

1. Internal secondary data: Internal (secondary) data refers to information that already exists within the
company in which the research problem arises. For instance, in many companies, salesmen routinely
record and report their sales. Examples of secondary data include records of sales, budgets, advertising
and promotion expenditures, previous marketing research studies and similar reports. Use of such
secondary data can help the marketing manager analyse the effect of the different elements of the
marketing mix, develop a marketing plan, make budget and sales territory allocations, and, in general,
help in managerial decision making.
2. External secondary data: External (secondary) data refers to information which is collected by a source
external to the firm (whose major purpose is not the solution of the particular research problem facing the
firm). There are three major categories of external data;
1. Government sources and publications
2. Business reference sources
3. Commercial agencies

Sources of Secondary Data


Secondary data are obtained from personal documents and public documents.

1. Personal Documents
These documents are recorded by individuals. An individual may record his views and thoughts
about various problems without knowing that these documents may, at a later date, form a subject
or source of study.

Personal documents may be categorized or divided under the following heads for the convenience of
the study:

1. Life History: Life history, generally speaking, contains all kinds of biographical material; from the point
of view of personal documents, only an autobiography which contains descriptions and views about social
and personal events is a life history. It may be further classified under the following three sub-heads:
Spontaneous Autobiography, Voluntary Autobiography or self-record, and Compiled life history.
2. Diaries: Many people keep diaries in which they record the daily events of their life and their feelings
and reactions relating to those events. Some of these diaries are also published later on. Diaries are the
most important source of knowing the life history of a person, if they have been written continuously over
long periods.
3. Letters: Letters also provide useful and reliable material on many social problems. They throw light upon
more intimate aspects of an event, and clarify the stand taken by a person regarding it. They are helpful
in giving an idea of the attitudes of a person and the trend of his mind. The validity of letters is beyond all
doubt and they should be accepted as prima facie proof of the attitude of the writer. In such social
problems as love, marriage or divorce the letters can supply much revealing information.
4. Memoirs: Some people write memoirs of their travels, important events of their life and other significant
phenomena that they come across. These memoirs provide useful material in the study of many a social
phenomenon. Memoirs are different from diaries in the sense that they describe only some events and are
more elaborate than the diary. Memoirs of travelers have provided us with useful information regarding
the language, social customs, religious faiths, culture and many other social aspects of the people they
visited.

2. Public Documents
Public documents are quite different from personal documents. They deal with matters of public
interest. Public documents may be divided into the following two categories:

1. Unpublished Records: Unpublished records give matters of public interest not available to people in
published form. Everybody cannot have access to them. Proceedings of meetings, notings on files,
memoranda, etc., form the category of unpublished records. These records are said to be reliable:
since there is no fear of their being made public, the writers give out their views clearly.
2. Published Records: Published records are available to people for investigation and perusal. Survey
reports, reports of survey enquiries and such other documents fall under this category.

The data contained in these documents are considered by some people as quite reliable because the
collecting agency knows that it will be difficult to test, while others are of the view that if the data are
to be published, the collecting or publishing agency does some window dressing, as a result of which
the accuracy is sometimes doubtful.

Now most of the information that is available to people and researchers in regard to social problems is
to be found in the form of reports. The reports published by the Government are considered more
dependable. On the other hand some people think that the reports that are published by certain
individuals and agencies are more dependable and reliable.

● Journals and Magazines: Journals and magazines are important public documents containing a wide
variety of information which can be usefully utilized in social research. Most of this information is
quite reliable. Letters to the editors published in various magazines and journals are an important
source of information.
● Newspapers: Newspapers publish news, discussion on contemporary issues, reports of meetings and
conferences, essays and articles on living controversies and the letters of the readers to the editors. All
this is an important source of information for different kinds of social research.
● Other Sources: Besides the above mentioned public documents, film, television, radio and public
speeches etc., are other important sources of information. They supply useful information about
contemporary issues. The investigator, however, should be capable of sorting out the reliable material
and distinguishing it from the unreliable material advanced by these sources.

Advantages of Secondary Sources of Data


The following merits are usually claimed for using secondary data sources.

1. Provides an insight into the total situation: The purpose of using available materials is to explore the
nature of the data and the subjects to get an insight into the total situation. While looking for the data
required by the researcher, he may uncover many more available data than are often assumed to exist,
and hence contribute significantly to the unfolding of hidden information.
2. Helps in the formulation of hypothesis: The use of documentary sources sometimes helps in the
formulation of research hypotheses. While an investigator may have one or two hypotheses which he
might have deduced from theory, the study of available materials may suggest further hypotheses. If a
research idea or hypotheses can be formulated in such a manner that the available recorded material
bears on the question, the use of such material becomes possible.
3. Helps in testing the hypotheses: The available records may also help in testing the hypothesis.
4. Provides supplementary information: Available documents may be used to supplement or to check
information gathered specifically for the purposes of a given investigation. For example, if one has drawn
a random sample of a small group in order to interview individuals, the accuracy of one's sample could
be checked by comparing socio-economic data of the sample, like income, education standard, caste,
family size etc., with the same data of the most recent census or with available data in local Government
offices.

Disadvantages of Secondary Sources of Data


The following are the demerits of using secondary data sources for research purposes.

1. Collected for a specific purpose: Data are often collected with a specific purpose in mind, a purpose
that may produce deliberate or unintentional bias. Thus, secondary sources must be evaluated carefully.
The fact that secondary data were collected originally for particular purposes may produce other
problems. Category definitions, particular measures or treatment effects may not be the most appropriate
for the purpose at hand.
2. Old data: Secondary data are, by definition, old data. Thus, the data may not be particularly timely for
some purposes.
3. Aggregation of data in Inappropriate Unit: Seldom are secondary data available at the individual
observation level. This means that the data are aggregated in some form, and the unit of aggregation
may be inappropriate for a particular purpose.
4. Authenticity: The authenticity of some secondary sources of data is doubtful.
5. Context change: Secondary data refer to a given situation. As situations change, the data lose their
contextual validity.
