BRM 2024 New
Dr. RENJITH. R
Assistant Professor
Dept. of Commerce
Prajyoti Niketan College, Pudukad
Research
Research is a scientific and systematic search for pertinent
information on a specific topic. It is a systematic method
of finding solutions to a problem.
Features of Research
1. Systematic
2. Business research has a clear objective purpose
3. It is multidisciplinary
4. Covers all the regions in which the business operates
5. It helps in judging local problems of the environment
6. Accurately determines the cost or profitability of a business
7. Flexibility
8. Business research is always focused on demand
9. Business research tends to have a time limit.
Role of business research in decision making
1. Product analysis:- It helps to find a product that meets customer demand.
2. Market analysis:- Knowing the stage of the business cycle the market is currently in helps to determine the price points at which the product can be sold.
3. Financial analysis:- A financial analysis determines the cost of each production item used to produce goods and services.
4. Competitor analysis:- Identifying the companies that have the best production methods, understanding how they create a competitive advantage, or deciding whether to buy a competitor.
5. Growth analysis:- Forecasting the growth and direction of the current industry or market.
Business Research Methods
Research methods are all the methods or techniques that are used for conducting research.
1. Operational research
2. Case studies
3. Statistical data
4. Surveys and focus groups
5. Interview design
6. Listening
7. Questionnaires and questioning
Purpose or Objectives of Business Research
1. Job seekers
2. Investors
3. B2B providers
4. Philanthropic organizations
5. Companies
a. Testing new products
b. Ensuring adequate distribution
c. Measuring advertising effectiveness
d. Studying competition
Methods of Exploratory Research
• In-depth Interviews
• Observations
• Experience survey
• Case Studies
• Literature Reviews
• Pilot Studies
Uses of Exploratory Research
• Guides Research: Provides a clear focus and direction for the study.
• Framework for Data Analysis: Helps in designing the study and determining
the methods of data collection and analysis.
• Predictive Value: Offers predictions that can be tested, thereby contributing
to the body of knowledge.
• Basis for Theory Testing: Enables the testing and validation of theoretical
frameworks.
• It prevents blind research.
Research Design
The research design is the blueprint or framework for carrying out the research study. It indicates the plan constituted in order to give the necessary direction to the research study.
According to Bernard S. Phillips, “the research design constitutes the blueprint for the collection, measurement and analysis of data. It aids the scientist in the allocation of his limited resources by posing crucial choices”.
Features of Research Design
4. Time Frame and Budget: The research design includes a timeline for
completing various stages of the study and outlines the budget, ensuring that the
research is feasible and resources are adequately allocated.
5. It is a framework for specifying the relationships among the variables that are going to be studied.
6. It also mentions the boundaries of the research activities and enables the researcher to channel his energies into the right work.
Essential concepts concerning a research design
1. Dependent variable: The dependent variable is the factor that is measured or observed to assess the effect of the independent variable.
Example: Does the amount of study time affect students' test scores?
Independent Variable: Amount of study time.
Dependent Variable: Test scores of the students.
2.Extraneous variable
An extraneous variable is a variable that is not the focus of the study but could
influence the results if not controlled. Controlling for extraneous variables is crucial
in research to ensure that the observed effects are due to the independent variable
and not other factors, thereby increasing the validity of the study's conclusions.
Example of Extraneous Variables
Scenario: A company conducts a study to evaluate the effect of a new training program on employee productivity.
Independent Variable: The new training program (whether employees receive the
training or not).
Dependent Variable: Employee productivity (measured by the number of units
produced per hour).
Possible Extraneous Variables:
1. Experience Level : Employees with varying levels of experience may
naturally have different productivity levels, regardless of the training.
3. Control: The term control is used when we design the study so as to minimize the effects of extraneous variables.
4. Confounded relationship: If the dependent variable is not free from the influence of extraneous variables, such a relationship is known as a confounded relationship.
5. Experimental and control groups: When a group is exposed to some special condition in an experimental research, it is termed the experimental group; if the group is exposed to the usual conditions, it is termed the control group.
6. Treatments: Treatments are the different conditions under which the groups are put.
7. Experiment: An experiment is a process of examining the truth of a statistical
hypothesis relating to some research problem.
8. Experimental units: These are the predetermined plots where different
treatments are applied.
Stages/Steps in Research Design
1. Selection of a problem
2. Identifying the research gap:- Research gap is a question that has not been
answered by any of the existing studies.
3. Ascertain the nature of the study:- statistical, comparative or experimental etc.
4. Setting objectives and hypothesis
5. Geographical area to be covered
6. Socio cultural context of the study
7. Identifying variables:- A variable is a characteristic or attribute of an individual, group or the environment that is of interest in a research study.
8. Period of study
9. Dimensions of the study
10. Basis for selecting data
11. Technique of the study
12. Document the Research Design and Implement the Research Design
13. Review and Revise as Necessary
Types of Research Design
1. Exploratory research design
2. Conclusive research design
a. Descriptive research design
b. Causal research design
Scales of Measurement
Nominal Scale
The nominal scale is the most basic level of measurement. It categorizes data without any quantitative value or rank order. The numbers or labels assigned to categories do not have any intrinsic meaning other than to identify different groups. It is mainly used in surveys and ex post facto research when data is being classified by major subgroups of the population.
Characteristics:
• Labels or categories.
• No intrinsic order.
• No numerical value associated.
• Mode is the measure of central tendency
Examples:
• Customer segments (e.g., "VIP," "Regular," "Occasional").
Ordinal Scale
An ordinal scale organizes data into categories that have a meaningful order or
ranking, but the intervals between the ranks are not uniform or known. This scale
allows for relative comparisons but does not provide precise differences between
ranks.
Characteristics:
• Data is ranked or ordered.
• Intervals between ranks are not equal.
• No absolute zero point.
• Median is the measure of central tendency
Examples:
• Customer satisfaction ratings (e.g., "Very Satisfied," "Satisfied," "Neutral," "Dissatisfied").
Interval Scale
The interval scale builds on the ordinal scale by offering meaningful and equal
intervals between measurements. However, it lacks a true zero point, meaning
that ratios (e.g., "twice as much") are not meaningful.
Characteristics:
• Equal intervals between points.
• No true zero.
• Can measure the difference between items but not the absolute quantity.
• Mean is the measure of central tendency
Examples:
• Temperature in degrees Celsius or Fahrenheit (the difference between 20°C and 30°C is the
same as between 30°C and 40°C, but 0°C is not an absolute absence of temperature).
Ratio Scale
The ratio scale is the highest level of measurement. It possesses all the
characteristics of the interval scale, but with a meaningful absolute zero point,
which allows for a wide range of statistical operations, including meaningful ratios
(e.g., "twice as much").
Characteristics:
• Equal intervals between values.
• A true zero point exists, indicating an absence of the variable being measured.
• Allows for meaningful ratios and a full range of mathematical operations.
Examples:
• Annual revenue (e.g., $0 revenue indicates no earnings, and $100 million is twice as much as
$50 million).
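The level of measurement decides which summary statistic is meaningful: the mode for nominal data, the median for ordinal data and the mean for interval or ratio data. The following is a minimal Python sketch of this idea; the customer segments, satisfaction ratings and revenue figures are made up for illustration.

    import statistics

    # Nominal data: customer segments -- only the mode is meaningful
    segments = ["VIP", "Regular", "Regular", "Occasional", "Regular"]
    print(statistics.mode(segments))        # Regular

    # Ordinal data: satisfaction ranks (1 = Dissatisfied ... 4 = Very Satisfied) -- median
    ratings = [4, 3, 3, 2, 1, 3, 4]
    print(statistics.median(ratings))       # 3

    # Interval/ratio data: annual revenue in $ million -- mean (and meaningful ratios for ratio data)
    revenue = [50, 100, 75, 25]
    print(statistics.mean(revenue))         # 62.5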
Features of a Good Measurement Tool
1. Reliability:-
a)Consistency: The tool should provide consistent results over repeated trials or
time, assuming the variable being measured remains unchanged.
c)Internal Consistency: Multiple items within the tool that measure the same
concept should give consistent responses.
Features of good measurement tool…
2. Validity:-
a)Content Validity: The tool must measure all aspects of the concept being
studied and cover the full domain.
b)Construct Validity: The tool should actually measure the theoretical concept it
is intended to assess.
c)Criterion Validity: The tool’s measurements should correlate well with
established standards or benchmarks.
• Concurrent Validity: The measurement tool’s results align with other assessments taken at the
same time.
• Predictive Validity: The tool should predict future outcomes related to the concept being
measured.
Features of good measurement tool…
3. Measurability
4. Unidimensionality
5. Linearity
6. Practicability
7. Accuracy
Approaches related to the construction of scales
1. Arbitrary approach:- The researcher develops a scale on an ad hoc basis. He collects a number of statements which he believes are unambiguous and suitable to a given topic.
2. Consensus approach:- statements are selected by a panel of judges.
Differential scales.
3. Item analysis approach:- The Item Analysis Approach evaluates each item
based on its ability to differentiate between individuals with high and low total
scores, including only the items that best pass this discrimination test in the
final scale. Likert scales.
4. Cumulative approach:- This scale contains a series of statements to which the respondents express their agreement or disagreement. The special feature of these scales is that they are cumulative in nature. Example: Guttman's scale.
5. Factor analysis approach:- Factor scales are designed by intercorrelating items to determine their degree of interdependence.
Scaling Techniques
I. Rating Scales:- are used for measuring the attitudes and the intensity of
attitudes.
Following are the types of rating scales.
1. Dichotomous scale:- It is used to elicit a Yes or No answer.
2. Category scale:- It uses multiple items to elicit a single response (e.g., place of residence: Pudukad, Thrissur, Thiruvananthapuram).
3. Likert scale:- it is termed as summated instrument scale. This means that the
items making up a Likert scale are summed to produce a total score. It
consists of number of statements which expresses either a positive or
negative attitude towards the object of interest. (Strongly agree, agree,
neutral, disagree, strongly disagree)
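Because the Likert scale is summated, each respondent's item scores are added to obtain a total attitude score, with negatively worded items reverse-scored first. Below is a minimal Python sketch; the five responses and the position of the negatively worded item are hypothetical.

    # One respondent's answers to five Likert items (5 = strongly agree ... 1 = strongly disagree)
    responses = [5, 4, 2, 5, 3]

    # Negatively worded items must be reverse-scored before summing (position assumed here)
    reverse_items = {2}                       # the third statement is negatively worded
    scored = [6 - r if i in reverse_items else r for i, r in enumerate(responses)]

    total_score = sum(scored)                 # summated Likert score for this respondent
    print(total_score)                        # 21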
Scaling Techniques..
6. A Graphic Rating Scale :-is a type of rating scale where respondents rate
their opinion, feeling, or performance on a continuous line or graphical
representation, typically between two endpoints representing opposite extremes
(e.g., "Poor" to "Excellent"). Respondents mark a point along the line that best
represents their level of agreement or satisfaction.
7. Fixed or constant sum scale:- Here the respondents are asked to distribute a given number of points across various items. For example, respondents may be asked to indicate the importance of five aspects in selecting a toilet soap by allotting points: colour, smell, size, shape and quality of foam.
II. Ranking scale
1. Paired comparison:- A Paired Comparison Scale is a type of measurement scale
where respondents are presented with two items at a time and asked to choose
which one they prefer or which is more important based on a specific criterion. Each
item is compared to every other item in the set, and the frequency of selections is
used to rank the items.
Example: In a product evaluation, respondents might be asked to choose between
"Brand A" and "Brand B" based on quality, then between "Brand A" and "Brand C,"
and so on for all possible pairs.
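One simple way to turn such paired choices into a ranking is to count how often each item is preferred across all pairs. A minimal Python sketch follows; the brands and the recorded choices are hypothetical.

    from collections import Counter

    # Each tuple records one paired comparison: (pair shown, brand the respondent chose)
    choices = [
        (("Brand A", "Brand B"), "Brand A"),
        (("Brand A", "Brand C"), "Brand C"),
        (("Brand B", "Brand C"), "Brand C"),
    ]

    brands = ["Brand A", "Brand B", "Brand C"]
    wins = Counter({b: 0 for b in brands})          # start every brand with zero wins
    wins.update(winner for _, winner in choices)

    # Rank the brands by how often they were preferred
    for brand, count in wins.most_common():
        print(brand, count)                         # Brand C 2, Brand A 1, Brand B 0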
II. Ranking scale…
2. Comparative Scale :- A Comparative Scale is a type of scale where
respondents evaluate two or more items directly against each other, rather than
independently.
Example: In a comparative scale survey, respondents may be asked to rank various smartphone brands (e.g., Brand A, Brand B, Brand C) from most preferred to least preferred based on features like design or performance.
3. A Forced Choice Scale is a type of rating scale where respondents are
required to choose between two or more specific options, without the possibility
of selecting a neutral or middle option. This forces the respondent to make a
definitive decision, eliminating indecision or neutrality, and is often used to
uncover clear preferences or attitudes.
● Example: In an employee evaluation, a forced choice question might ask,
"Which statement best describes your colleague?" with options like "Works
well under pressure" or "Completes tasks on time," requiring the respondent
to choose one even if both seem true.
Data: - Data refers to raw facts, figures, or information collected for analysis,
reference, or decision-making. It can represent quantities, qualities, observations,
or measurements about various phenomena, which may be collected in different
forms such as numbers, text, images, or audio.
Classification of data
1. Primary data:- Data collected for the first time.
2. Secondary data:- Data that have already been collected, tabulated and presented in some form by someone else for some other purpose. It may be internal secondary data or external secondary data.
Sources of data
1. Primary sources: the researcher directly collects data that have not been previously collected.
2. Secondary sources: already published
3. Other classification:
a) Documentary sources:
1. Individual document- diary , life history, letters etc.
2. Public document- newspapers, journals and magazines,
published records, statistics and historical documents.
b) Personal sources
c) Library sources
Methods of collecting primary data
1. Observation
2. Experimentation
3. Simulation
4. Interview
5. Use of telephone
6. Panel method
7. Mail survey
8. Projective technique
9. Sociometry
10. Focus group discussion
11. Content analysis
Advantages of the Interview Method
1. In-depth Information: Interviews allow for the collection of detailed, rich data,
especially on complex or sensitive topics.
2. Flexibility: Semi-structured and unstructured interviews allow researchers to
adapt questions based on the respondent’s answers, enabling deeper
exploration.
3. Clarification: Interviewers can ask follow-up questions or clarify answers in
real-time, improving the accuracy of the data collected.
4. Non-Verbal Insights: In face-to-face or video interviews, the interviewer can
observe body language and other non-verbal cues to gain additional insights.
5. Personalization: Interviews can be tailored to each respondent, allowing for
a more personalized and relevant conversation.
Disadvantages of the Interview Method
1. Time-Consuming: Conducting and transcribing interviews can be labor-intensive
and slow, especially with large samples.
2. Costly: Interviews, particularly face-to-face ones, can be expensive in terms of
both time and money.
3. Interviewer Bias: The interviewer’s own biases or behavior may influence the
respondent's answers, leading to less reliable data.
4. Limited Generalizability: Interviews often involve smaller samples, which may
not be representative of the broader population.
5. Analysis Complexity: Qualitative data from interviews can be harder to analyze
systematically compared to structured surveys, especially when responses are
diverse or lengthy.
Questionnaire
A questionnaire is a research instrument consisting of a series of questions
designed to gather information from respondents. It is commonly used in surveys
to collect data on attitudes, opinions, behaviors, or demographic information from
individuals or groups. Questionnaires can be administered through various
formats, such as paper forms, online surveys, or face-to-face interviews.
Features:
1. Simplicity
2. Reliability
3. Validity
4. Flexibility
5. Cost effective
6. Structured Format
Objectives of a Questionnaire
1. Data Collection: The primary goal of a questionnaire is to collect accurate and relevant data
from respondents about their views, experiences, or behaviors.
2. Understanding Attitudes and Opinions: Questionnaires are often used to gather opinions,
attitudes, and perceptions on specific issues or topics, such as customer satisfaction or
employee engagement.
3. Measuring Behavior: They can measure past or current behaviors, such as shopping habits,
product usage, or health practices.
4. Exploring Relationships: Questionnaires help explore relationships between different
variables, such as age and purchasing habits or income and lifestyle choices.
5. Gathering Demographic Information: Researchers use questionnaires to collect
demographic data such as age, gender, education, income, and location.
6. Standardized Data Collection: Questionnaires ensure standardized data collection, making
it easier to compare responses across large groups or different segments.
Questionnaire Design Process
Designing a questionnaire is a critical process that requires careful planning to
ensure that it meets the research objectives and yields valid and reliable data.
Below are the key steps in the questionnaire design process.
1. Define the Objectives: Clearly define the purpose of the questionnaire. What
specific information do you want to collect, and how will it help achieve the
overall research goals
2. Identify the Target Audience: Determine who the respondents will be, such
as customers, employees, students, or a specific demographic group. The
language and structure of the questionnaire should match the target audience’s
understanding and needs.
3. Decide on the Type of Questions: Choose the appropriate types of
questions to gather the required data. This includes deciding between open-
ended or close-ended questions, and the use of multiple-choice, Likert scales,
ranking, etc.
Questionnaire Design Process…
4. Structure the Questionnaire: Organize the questions in a logical sequence. The
flow should be easy to follow, starting with general questions and moving toward
more specific ones.
5. Decide on the Format and Layout: Make the questionnaire visually appealing
and easy to read, with a clear structure and format.
6. Pilot Testing: Conduct a trial run of the questionnaire with a small group of
respondents to identify any issues with question clarity, structure, or the overall flow.
7. Revise and Finalize the Questionnaire: After pilot testing, revise the
questionnaire based on feedback. Make necessary adjustments to improve clarity,
remove redundant questions, and ensure all questions align with the research
objectives.
8. Administer the Questionnaire: Choose the method of distribution (online, in-
person, phone, or paper) and administer the questionnaire to the target respondents.
Schedule
A schedule is a structured form or set of questions used by an interviewer to collect data in a face-to-face
or telephonic interaction with respondents. Unlike a questionnaire, where respondents fill out answers on
their own, a schedule is administered by the interviewer, who records the respondent's answers during
the interview.
Purposes of a Schedule
1. Facilitating Data Collection: Schedules are used to systematically gather data through direct
interaction between the interviewer and the respondent. This ensures that all relevant information is
collected in a structured and consistent manner.
2. Clarifying Complex Questions: In cases where questions are complex or technical, an interviewer
using a schedule can explain and clarify the questions to ensure the respondent fully understands,
leading to more accurate responses.
3. Improving Response Rates: Since an interviewer is present to guide the respondent, schedules
typically result in higher response rates compared to self-administered questionnaires. The
interviewer can encourage participation and ensure the completion of all questions.
4. Capturing Non-Verbal Cues: When conducting face-to-face interviews using schedules, the
interviewer can observe and record non-verbal cues such as body language, tone, and facial
expressions, which can provide additional insights into the respondent's answers.
5. Ensuring Consistency: Schedules help standardize data collection across respondents by using a
pre-determined set of questions, ensuring that all interviewees are asked the same questions in the
same way. This consistency helps in reducing bias and improves the reliability of the data.
Types of Schedules
1. Structured Schedule: A pre-defined set of questions with fixed wording and
sequence, leaving little flexibility for the interviewer. It is similar to a structured
questionnaire but conducted in person.
2. Unstructured Schedule: Contains more open-ended questions or topics that
the interviewer can explore based on the respondent’s answers, allowing for
flexibility during the interaction.
3. Semi-Structured Schedule: Combines elements of both structured and
unstructured schedules, where there is a set of key questions, but the
interviewer has some flexibility to probe deeper into certain topics.
4. Rating Schedule: Designed to obtain responses based on a numerical or
descriptive rating scale, where respondents provide quantitative feedback
(e.g., rating a service on a scale from 1 to 5).
5. Interview schedule
6. Observation schedule: Used to record observations
Difference Between Schedule and Questionnaire
1. Presence of Interviewer: A schedule requires the presence of an interviewer for face-to-face or telephonic data collection, whereas a questionnaire needs no interviewer and respondents fill it out on their own.
2. Response Clarification: With a schedule the interviewer can clarify questions, explain terms and guide the respondent if needed, whereas with a questionnaire respondents must interpret the questions themselves, with no guidance from an interviewer.
3. Data Accuracy: A schedule gives higher accuracy due to real-time clarification and personal interaction, whereas a questionnaire carries a risk of inaccurate responses if respondents misunderstand questions.
4. Cost and Time: A schedule is more expensive and time-consuming due to the need for interviewers and scheduling face-to-face interactions, whereas a questionnaire is more cost-effective and quicker, especially for large sample sizes, since no interviewer is needed.
Sampling plan
2. Law of Inertia of Large Numbers (Law of Large Numbers): The Law of Inertia
of Large Numbers is a statistical principle that states that as the size of a sample
increases, the average of the sample becomes closer to the true average (or mean)
of the population. The larger the sample, the more reliable the results and the
smaller the deviation from the population parameter.
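A small simulation illustrates the principle: as the sample size grows, the sample mean settles closer to the population mean. The Python sketch below is only illustrative and draws from a synthetic population whose true mean is assumed to be 50.

    import random

    random.seed(1)
    POPULATION_MEAN = 50      # assumed true mean of the synthetic population

    def sample_mean(n):
        # Draw n observations from the population and return their average
        sample = [random.gauss(POPULATION_MEAN, 10) for _ in range(n)]
        return sum(sample) / n

    for n in (10, 100, 10000):
        print(n, round(sample_mean(n), 2))   # the sample mean moves toward 50 as n increases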
Data processing stages
1. Editing of Data: Editing involves reviewing and correcting the collected data
to ensure that it is accurate, consistent, and complete before analysis. This is a
crucial step in data processing, as it helps identify and fix errors or
inconsistencies that may have occurred during data collection.
Objectives of Editing:
1. Detect and Correct Errors: Identify mistakes like incomplete responses,
inconsistencies, or outliers.
2. Ensure Accuracy: Verify that the data aligns with the expected values or
formats.
3. Consistency Check: Ensure that responses are logically consistent across
the dataset.
4. Remove Irrelevant Data: Identify and remove data that is not useful for the
study.
Stages of editing: 1. Field editing 2. Central editing
Data processing stages…
Analysis of data means studying the tabulated material in order to determine inherent
facts.
Analysis can be classified into two.
1. Descriptive analysis
2. Inferential analysis
Descriptive analysis
Descriptive analysis refers to the transformation of raw data into a form that will facilitate easy understanding and interpretation. It describes the nature of the phenomenon under study.
It consists of three types of analysis:
A) Unidimensional (univariate) analysis
B) Bivariate analysis
C) Multivariate analysis
The statistical tools used include correlation analysis, regression analysis, ANOVA and the chi-square test.
Critical Region: The critical region is the range of values for which you would reject the
null hypothesis.
It is determined by the level of significance and the distribution of the test statistic.
Parametric tests assume that the data follows a specific probability distribution, often the
normal distribution. Parametric tests are more suitable for data with interval or ratio
scales.
I. T-test
A t-test is used to compare means when the sample size is small or the population standard deviation is unknown; commonly used forms include the one-sample t-test, the independent-samples (two-sample) t-test and the paired t-test. Each of these t-tests serves a specific purpose and can help you draw conclusions about the differences or similarities between groups or populations based on sample data.
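As an illustration, an independent-samples t-test can compare the mean output of a trained and an untrained group of employees. The sketch below uses the scipy library and made-up data; it is not tied to any particular study.

    from scipy import stats

    # Units produced per hour by two independent groups (hypothetical data)
    trained = [42, 45, 47, 44, 46, 43]
    untrained = [40, 41, 39, 42, 40, 38]

    t_stat, p_value = stats.ttest_ind(trained, untrained)
    print(round(t_stat, 2), round(p_value, 4))
    # If p_value is below the chosen significance level (e.g. 0.05),
    # reject the null hypothesis that the two group means are equal.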
II. Z-test
A Z-test is a statistical hypothesis test that's used to determine whether the mean of a sample is significantly different from
a known population mean when you have a large enough sample size. It's similar to the t-test, but the Z-test is typically
employed when you have a sample size large enough for the Central Limit Theorem to apply, allowing you to assume that
the sample mean follows a normal distribution.
Z-tests are particularly useful when dealing with large sample sizes because they rely on the properties of the normal
distribution, making calculations simpler compared to t-tests. However, t-tests are often preferred when dealing with
smaller sample sizes or when the population standard deviation is unknown and must be estimated from the sample data.
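A one-sample Z-test of this kind can be computed directly from the formula z = (sample mean - population mean) / (sigma / sqrt(n)). The Python sketch below assumes a known population mean of 100, a known population standard deviation of 15 and a hypothetical sample of 64 observations with mean 104.

    import math
    from scipy.stats import norm

    mu0, sigma = 100, 15           # known population mean and standard deviation
    sample_mean, n = 104, 64       # hypothetical sample mean and sample size

    z = (sample_mean - mu0) / (sigma / math.sqrt(n))
    p_value = 2 * norm.sf(abs(z))         # two-tailed p-value
    critical = norm.ppf(0.975)            # critical value for a 5% level of significance

    print(round(z, 2), round(p_value, 4))
    print(abs(z) > critical)              # True: z falls in the critical region, so reject H0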
III. F-test
The F-test is a statistical test used to compare the variances or the ratio of variances between two or more
groups or samples. It is commonly used in analysis of variance (ANOVA) and regression analysis. There
are two main types of F-tests:
1. One-Way ANOVA (One-Factor ANOVA): This type of F-test is used to compare the means of three
or more independent (unrelated) groups to determine if there are any statistically significant differences
between them.
2. Two-Way ANOVA (Two-Factor ANOVA): This extension of the one-way ANOVA is used to analyze
the influence of two independent variables (factors) on a single dependent variable. It helps answer
questions like "Do both gender and age significantly affect income levels?" Two-way ANOVA assesses not
only the main effects of each factor but also the interaction effect between the factors.
3. Regression Analysis (F-Test for Regression Models): In regression analysis, the F-test is used to
determine the overall significance of a regression model. It assesses whether the regression model, which
includes multiple independent variables, explains a significant portion of the variance in the dependent
variable. In this context, the F-test is often used to compare a full model (with predictors) to a null model
(without predictors) to see if the predictors collectively contribute significantly to explaining the variance in
the dependent variable.
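For the one-way case, the F-test can be run with scipy's f_oneway function. The sketch below compares three hypothetical training methods; the scores are invented for illustration.

    from scipy.stats import f_oneway

    # Hypothetical test scores from three independent training methods
    method_a = [78, 82, 85, 80, 79]
    method_b = [72, 75, 70, 74, 73]
    method_c = [88, 90, 85, 87, 91]

    f_stat, p_value = f_oneway(method_a, method_b, method_c)
    print(round(f_stat, 2), round(p_value, 6))
    # A small p-value suggests that at least one group mean differs significantly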
Correlation
Correlation is a statistical measure that quantifies the extent to which two variables are related or associated
with each other. In simpler terms, it helps you understand if there is a relationship between two variables and,
if so, the nature of that relationship. Correlation does not imply causation, meaning that even if two variables
are correlated, it doesn't necessarily mean that changes in one variable cause changes in the other.
There are several types of correlation measures, including:
1. Pearson Correlation Coefficient (r): Measures the strength and direction of a linear relationship between
two continuous variables. It ranges from -1 (perfect negative correlation) to 1 (perfect positive correlation),
with 0 indicating no linear correlation.
2. Spearman Rank Correlation Coefficient (ρ or rs): Measures the strength and direction of a monotonic
relationship between two variables. It's used when the relationship isn't necessarily linear or when dealing with
ordinal data.
Correlation Coefficient (r): Pearson's correlation coefficient, denoted as "r," quantifies the strength and
direction of the linear relationship between X and Y. The value of r ranges from -1 to 1:
r = 1: Perfect positive linear relationship
r = -1: Perfect negative linear relationship
r ≈ 0: Little to no linear relationship
Interpretation: You interpret the value of r as follows:
r > 0: Indicates a positive linear relationship (as X increases, Y tends to increase).
r < 0: Indicates a negative linear relationship (as X increases, Y tends to decrease).
r = 0: Suggests little to no linear relationship between the variables.
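Both coefficients can be computed with scipy. The sketch below uses hypothetical monthly advertising spend (X) and sales (Y) figures purely for illustration.

    from scipy.stats import pearsonr, spearmanr

    x = [10, 12, 15, 18, 20, 25]     # hypothetical advertising spend
    y = [40, 44, 50, 58, 61, 72]     # hypothetical sales

    r, p_r = pearsonr(x, y)          # strength of the linear relationship
    rho, p_rho = spearmanr(x, y)     # strength of the monotonic (rank) relationship

    print(round(r, 3), round(rho, 3))   # values close to +1 indicate a strong positive association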
Non-parametric Tests
Non-parametric tests make fewer assumptions about the distribution of the data. They are
distribution-free or have fewer distributional assumptions.
Techniques of interpretation
1. Measures of Central Tendency
Mean
The mean is the sum of all the values in a dataset divided by the number of values. It represents the average of the dataset and is most commonly used for quantitative data.
Median
The median is the middle value in a dataset when the numbers are arranged in
ascending or descending order. If the dataset has an odd number of values, the
median is the middle value; if it has an even number, the median is the average
of the two middle values.
Example:
For the dataset: 5, 10, 15, 20, 25
Median = 15 (middle value)
For the dataset: 5, 10, 15, 20
Median = (10 + 15) / 2 = 12.5
Mode
The mode is the value that appears most frequently in a dataset. A dataset can
have one mode (unimodal), more than one mode (bimodal or multimodal), or no
mode at all if all values are unique.
Example:
For the dataset: 5, 10, 10, 15, 20, 20, 20, 25
Mode = 20 (appears three times)
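Python's statistics module computes these measures directly. The sketch below simply re-uses the example datasets given above.

    import statistics

    data = [5, 10, 10, 15, 20, 20, 20, 25]

    print(statistics.mean(data))      # 15.625
    print(statistics.median(data))    # 17.5 (average of the two middle values, 15 and 20)
    print(statistics.mode(data))      # 20 (appears three times)

    print(statistics.median([5, 10, 15, 20, 25]))   # 15
    print(statistics.median([5, 10, 15, 20]))       # 12.5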
Geometric Mean
The geometric mean is a measure of central tendency used primarily for sets of
positive numbers, especially when dealing with growth rates, percentages, or
ratios. It is the nth root of the product of n values in a dataset.
Harmonic Mean
The harmonic mean is another measure of central tendency, commonly used for
datasets involving rates, ratios, or where the reciprocal of the data is meaningful
(e.g., speed, rates). It gives the reciprocal of the arithmetic mean of the
reciprocals of the values
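Both means are available in Python's statistics module. In the sketch below, the geometric mean averages hypothetical yearly growth factors and the harmonic mean averages speeds over equal distances; all figures are made up.

    import statistics

    # Geometric mean of yearly growth factors (10%, 20% and 5% growth)
    growth_factors = [1.10, 1.20, 1.05]
    print(statistics.geometric_mean(growth_factors))   # about 1.115, i.e. roughly 11.5% average growth

    # Harmonic mean of speeds over equal distances (km/h)
    speeds = [60, 40]
    print(statistics.harmonic_mean(speeds))            # 48.0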
2. Measures of Dispersion
2. Variance: The variance measures how much the values in a dataset differ
from the mean. It is the average of the squared differences from the mean, giving
more weight to values further from the mean.
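A minimal Python sketch of the variance (and its square root, the standard deviation) for a small hypothetical dataset:

    import statistics

    data = [5, 10, 15, 20, 25]          # hypothetical values, mean = 15

    print(statistics.pvariance(data))   # 50.0, the average of the squared deviations from the mean
    print(statistics.pstdev(data))      # about 7.07, the square root of the variance
    print(statistics.variance(data))    # 62.5, the sample variance (divides by n - 1)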
1. Index Numbers
Index numbers are statistical measures that express changes in a variable or
group of variables over time relative to a base value. They are often used to track
changes in economic indicators such as prices, production, sales, or other
economic factors. Index numbers provide a way to compare how these variables
change over time, making them useful for analyzing trends, inflation, cost-of-
living adjustments, and other economic phenomena.
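A simple aggregate price index, for example, expresses the total of current-year prices as a percentage of the total of base-year prices. The Python sketch below uses invented prices for a small basket of items.

    # Hypothetical base-year and current-year prices for a basket of items
    base_prices = {"rice": 40, "milk": 25, "fuel": 80}
    current_prices = {"rice": 50, "milk": 30, "fuel": 100}

    # Simple aggregate price index = (sum of current prices / sum of base prices) * 100
    index = sum(current_prices.values()) / sum(base_prices.values()) * 100
    print(round(index, 1))   # 124.1 -- prices are about 24% higher than in the base year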
2. Inferential Analysis
Research Report
References: list only the sources that are directly cited or quoted within the text of a
research paper or report. These sources are explicitly referenced to support facts,
arguments, or data presented in the work.
Bibliography is a broader list that includes all the sources consulted for the research,
whether they were cited directly in the text or not. It can contain references to
background readings, additional materials, or works related to the subject that were
influential in the research process.
Citation is the practice of giving credit to the original sources of information, ideas, or
data that you have used in your work. It involves acknowledging the authors,
researchers, or organizations whose work has contributed to your research, paper, or
report. Citations help avoid plagiarism, demonstrate research integrity, and allow
readers to verify the sources of the information.
APA Style vs. MLA Style of Writing
APA (American Psychological Association) and MLA (Modern Language
Association) are two widely used citation styles, each with its own guidelines for
formatting academic papers, citing sources, and organizing references. They are
used in different academic disciplines and serve to provide consistency in how
sources are cited and presented.
The APA style is primarily used in the social sciences, such as psychology,
sociology, education, business, and health sciences. It focuses on the date of
publication to give credit to recent and credible research, which is often critical in
these fields.
Example: Jones, R. (2019). The effects of remote work on employee well-being.
Journal of Occupational Health Psychology, 24(3), 213-227.
https://doi.org/10.1037/ocp0000170.
The MLA style is primarily used in the humanities, such as literature, arts,
philosophy, and languages. MLA focuses more on the author and the page number
rather than the date of publication, which is often less important in these fields.
Example: Jones, Rebecca. “The Effects of Remote Work on Employee Well-Being.”
Journal of Occupational Health Psychology, vol. 24, no. 3, 2019, pp. 213-227.
Stages of Report Writing
Writing a report involves a systematic process that ensures clarity, organization, and
thoroughness. Each stage is essential to producing a well-structured and effective
report that communicates information accurately. Here are the main stages of report
writing.
1. Understanding the report: Clearly define the purpose of the report and understand
who the intended audience is.
2. Gathering data
3. Make the overall report format
4. Make a detailed outline
5. Drafting of the report: first draft (Write the first version of the report, focusing on
content and structure without worrying too much about grammar and style.),
second draft and final draft
6. Editing the final draft
7. Documentation
Ethics in Research
Research ethics refers to the set of moral principles that guide researchers in
conducting their work responsibly and with integrity. Ethical research practices are
essential for ensuring the credibility of the research, protecting participants' rights,
and contributing positively to society. Adhering to these ethical principles helps
prevent harm and promotes trust in the research process.
Ethical principles:
1. Honesty
2. Objectivity: Minimize bias
3. Integrity: Researchers should be honest and transparent in all aspects of their
work, from data collection to reporting findings. Misleading or falsifying data is a
serious breach of research ethics.
4. Confidentiality and Privacy: Researchers must protect the privacy of
participants and maintain confidentiality regarding the information provided by
them.
5. Social responsibility
6. Respect intellectual property
7. Openness