Unit 13 - Research Methods and Statistics
Unit 13 - Research Methods and Statistics
Statistics
Variables
● Something that can be changed or varied
● They help determine if changes to one thing result in changes to another.
Categorical Variables: Variable data points that are placed into qualitatively different
different categories.
Measured variables: data points that appear along a scale and are separated from one
another and where the scale can be discrete or continuous
In experimental studies the independent variable is the categorical variable and the dependent
variable is the measured variable.
Types of Variables
1. Independent variable: manipulated by the experimenter
2. Dependent variable: measured and is influenced by the independent variable
3. Extraneous variable: may have an impact on the relationship between the IV and DV
4. Confounding variable: cannot be controlled.
a. Has an impact on the DV
b. Therefore, makes it difficult to determine if the results are due to the influence
of the independent variable, confounding variables or an interaction between
the two.
What is Kurtosis?
Kurtosis is a statistical measure used to describe the degree to which scores cluster in the tails
or the peak of a frequency distribution. The peak is the tallest part of the distribution, and the
tails are the ends of the distribution.
Mesokurtic: Distributions that are moderate in breadth and curves with a medium peaked
height.
Leptokurtic: More values in the distribution tails and more values close to the mean (i.e.
sharply peaked with heavy tails)
Platykurtic: Fewer values in the tails and fewer values close to the mean (i.e. the curve has a
flat peak and has more dispersed scores with lighter tails).
Negative values of kurtosis indicate that a distribution is flat and has thin tails. Platykurtic
distributions have negative kurtosis values. Normally platykurtic.
Positive values of kurtosis indicate that a distribution is peaked and possess thick tails.
Leptokurtic distributions have positive kurtosis values. Normally leptopkurtic.
When kurtosis is equal to 0, the distribution is mesokurtic. This means the kurtosis is the
same as the normal distribution, it is mesokurtic (medium peak).
The null hypothesis states that there is no relationship between the two variables being
studied (one variable does not affect the other). It states the results are due to chance and are
not significant in terms of supporting the idea being investigated. Thus, the null hypothesis
assumes that whatever you are trying to prove did not happen.
The alternative hypothesis is the one you would believe if the null hypothesis is concluded to
be untrue. The alternative hypothesis states that the independent variable did affect the
dependent variable, and the results are significant in terms of supporting the theory being
investigated (i.e. not due to chance).
Measures of Dispersion
In the previous section we have discussed about measures of central tendency. By knowing
only the mean, median or mode, it is not possible to have a complete picture of a set of data.
Average does not tell us about how the score or measurements are arranged in relation to the
center.
It is possible that two sets of data with equal mean or median may differ in terms of their
variability.
Therefore, it is essential to know how far these observations are scattered from each other or
from the mean. Measures of these variations are known as the 'measures of dispersion'.
The most commonly used measures of dispersion are range, average deviation, quartile
deviation, variance and standard deviation.
Range: Range is one of the simplest measures of dispersion. It is designated by 'R'. The range
is defined as the difference between the largest score and the smallest score in the
distribution.
It is known as distance between the highest and the lowest scores in a distribution. It gives the
two extreme values of the variable but no information about the values in between the
extreme values. A large value of range indicates greater dispersion while a smaller value
indicates lesser dispersion among the scores.
Quartile deviation is the difference between the 75% and 25% scores of a distribution. 75th
percentile is the score which keeps 75% score below itself and 25th percentile is the score
which keeps 25% scores below itself.
Interquartile range is defined as the difference between the 25th and 75th percentile (also
called the first and third quartile). Hence the interquartile range describes the middle 50% of
observations.
Standard deviation is the most stable index of variability. In the computations of average
deviation, the signs of deviation of the observations from the mean were not considered.
In order to avoid this discrepancy, instead of the actual values of the deviations we consider
the squares of deviations, and the outcome is known as variance.
Further, the square root of this variance is known as standard deviation and designated as SD.
Thus, standard deviation is the square root of the mean of the squared deviations of the
individual observations from the mean.
When you perform a statistical test a p-value helps you determine the significance of your
results in relation to the null hypothesis. The level of statistical significance is often
expressed as a p-value between 0 and 1. The smaller the p-value, the stronger the evidence
that you should reject the null hypothesis. A p-value less than 0.05 (typically ≤ 0.05) is
statistically significant. It indicates strong evidence against the null hypothesis, as there is
less than a 5% probability the null is correct (and the results are random). Therefore, we
reject the null hypothesis, and accept the alternative hypothesis. However, this does not mean
that there is a 95% probability that the research hypothesis is true. The p-value is conditional
upon the null hypothesis being true is unrelated to the truth or falsity of the research
hypothesis. A p-value higher than 0.05 (> 0.05) is not statistically significant and indicates
strong evidence for the null hypothesis. This means we retain the null hypothesis and reject
the alternative hypothesis. You should note that you cannot accept the null hypothesis, we can
only reject the null or fail to reject it.
A statistically significant result cannot prove that a research hypothesis is correct (as this
implies 100% certainty). Instead, we may state our results “provide support for” or “give
evidence for” our research hypothesis (as there is still a slight probability that the results
occurred by chance and the null hypothesis was correct – e.g. less than 5%).
What is Skewness?
In probability theory and statistics, skewness is a measure of the asymmetry of the probability
distribution of a real-valued random variable about its mean. The skewness value can be
positive, zero, negative, or undefined.
It is the degree of distortion from the symmetrical bell curve or the normal distribution. It
measures the lack of symmetry in data distribution. It differentiates extreme values in one
versus the other tail. A symmetrical distribution will have a skewness of 0.
Types of Skewness
Positive Skewness means when the tail on the right side of the distribution is longer or fatter.
The mean and median will be greater than the mode.
Negative Skewness is when the tail of the left side of the distribution is longer or fatter than
the tail on the right side. The mean and median will be less than the mode.
If the skewness is between -0.5 and 0.5, the data are fairly symmetrical.
If the skewness is between -1 and -0.5(negatively skewed) or between 0.5 and 1(positively
skewed), the data are moderately skewed.
If the skewness is less than -1(negatively skewed) or greater than 1(positively skewed), the
data are highly skewed.
Parametric Tests:
Correlation
Correlation
It is a way of understanding the relation between two variables. (so, we basically run this to
see if there is a relationship between two variables)
- With this, we study only relationship - not the effect/impact
- There is no IV or DV in correlation.
- Once a relationship is established then there are multiple other statistical tools that we
used to see whether there is an effect (mediating effect or moderating effect) or an
impact between the variables.
Variables vary at the same level. We can determine whether one variable is related to another
by seen whether scores on the two variables covary, that is, whether the vary together
● Eg: variable A ↑, variable B ↓ - in equal measures
*normal distribution doesn’t mean that you won’t have outliers at all (there are outliers there
also)
Scatter Plots
Correlations between quantitative variables are often presented using scatterplots. Each point
in the scatter plot represents one person's score on both variables.
Taking all the points into account, one can see that people under more stress tend to have
more physical symptoms.
This is a good example of a positive relationship, in which higher scores on one variable tend
to be associated with higher scores on the other.
A negative relationship is one in which higher scores on one variable tend to be associated
with lower scores on the other.
KENDALL'S TAU (T) -
The range of tau is from - 1.00 to + 1.00. The interpretation of tau is based on the sign and
the value of coefficient.
The dichotomous variable is the one that can be divided into two sharply distinguished or
mutually exclusive categories.
Some examples are, male-female, rural-urban, Indian-American, diagnosed with illness and
not diagnosed with illness, Experimental group and Control Group, etc.
These are the truly dichotomous variables for which no underlying continuous distribution
can be assumed.
Point Biserial Correlation (rpb) is Pearson's Product moment correlation between one truly
dichotomous variable and other
The dichotomous variable is the one that can be divided into two sharply distinguished or
mutually exclusive categories.
Some examples are, male-female, rural-urban, Indian-American, diagnosed with illness and
not diagnosed with illness, Experimental group and Control Group, etc.
These are the truly dichotomous variables for which no underlying continuous distribution
can be assumed.
Point Biserial Correlation (rpb) is Pearson's Product moment correlation between one truly
dichotomous variable and other
When both the variables are dichotomous, then the Pearson's correlation calculated is called
as Phi Coefficient (Ф).
For example, let us say that you have to compute correlation between gender and ownership
of the property.
The ownership of property can be measured as either the person owns a property and the
person do not own property.
Now if you compute the Pearson's correlation between these two variables is called as Phi
Coefficient (ф ). Both the variables take value of either of 0 or 1.
If the two variables are measured in a more refined way, then the continuous distribution will
result. For example, attitude to females and attitude towards liberalisation are two variables to
be correlated.
Then the correlation between these two variables can be computed using Tetrachoric
correlation (rtet).
Null & Alternative Hypothesis
Hypothesis
A hypothesis is an appropriate explanation that relates to the set of facts that can be tested by
certain further investigations.
Two types:
1. Null Hypothesis
2. Alternate hypothesis
A research generally starts with a problem. Hypothesis provides the researcher(s) with some
specific restatements and clarifications of the research problem. It is an assumption based on
the existing literature or facts
Null hypothesis
- Generally denoted as H0
- States the exact opposite of what an investigator or an experimenter predicts or
expects.
- It basically defines the statement which states that there is no exact or actual
relationship between the variables. - we assume that there will be no outcomes
- Here, no bias is involved. (hence it it typically preferred)
Alternate hypothesis
- Generally denoted as H1
- Makes a statement that suggests or advises a potential result or an outcome that an
investigator or the researcher may expect. (typically used when there is a strong
relationship between the variables)
- Has been categorised into two categories:
1. Directional alternative hypothesis
2. Non-directional alternative hypothesis
Answer: 1st option
Extra
When a curve is presented:
1. hightest value is towards the right side - tail 2
2. lowest value is towards the left side - tail 1
3. moderate/average is the middle
H0: fr = fa (friends and family have equal understanding/no difference in understanding me)
- Non-directional hypothesis because it assumes that there is no difference between
the direction of tail 1 as well as the direction of tail 2 - there is no significant
difference between the two curves, specifically the tails.
- Both curves are identical or almost similar
- Also called two tailed hypothesis because we have to compare both the tails along
with the middle portion of the curve to draw a conclusion
HA: fr ≠ fa
- An alternative of the null hypothesis
- Chosen when null hypothesis is rejected, therefore it is an alternative hypothesis
- Also called two tailed hypothesis because we are comparing tail 1 and 2 in both
cases
- It is bi-directional as the hypothesis indicates a direction towards tail 1 and tail 2
Statistical significance
- It refers to the unlikelihood that mean differences observed in the sample have
occurred due to sampling error. - It is not ideally possible to have our sample
represent an entire population and subject them to a test, hence even in a large sample
size, there might still be the possibility of an error.
- Given a large enough sample, one might still find statistical significance.
- It rules out the possibility of any sort of sampling error.
- With a larger sample size, there is high likelihood of statistical significance. (because
statistical significance is always calculated based on the sample size by the software
---- because the larger the sample is the difference for each data point is being shown,
therefore, when every data point has a minute change also, it will be regarded as
statistically significant)
*Whatever results we get by testing a particular sample, we tend to generalise those to the
entire general population.
- By using the term significant, we aren't automatically implying that the data is
important.
- It is simply a term that is used to assess whether the evidence against the H0 has
reached the standard set by α only. (we usually set the confidence interval at 95%
(sometimes 99%) level, i.e. P ≤ 0.05, so, α = 0.05)
When we say that it (the result/data) is statistically significant, what we mean is that we have
a preset α level, i.e. 0.05. So what we imply is that, according to this level, by testing the H0,
we found that this particular difference is statistically significant.
- For example: Significance level at 0.05 is often expressed by the statement: “The
results were significant at (P< 0.05)”, where P stands for the P-value. If P is less than
0.05, the result is statistically significant.
- The P-value is helpful in providing basic information rather than a statement of
significance, because we can then assess significance at any level we choose.
- So, it is providing some information about the sample at a level that we choose as
researchers. It is actually not conveying the strength or the meaningfulness of the
relationship or the difference. It’s a purely statistical measure.
Practical significance
Statistical significance alone can be misleading because it’s influenced by sample size. (it
alone can not predict any kind of generalizability)
- Increasing the sample size always makes it more likely to find a statistically
significant effect, no matter how small the effect truly is in the real world. But does
that actually mean that the relationship is strong & meaningful?
- Measuring effect sizes is a way in which we can assess whether the values or the
results are actually practically significant.
- In contrast, effect sizes are independent of the sample size. Only the data is used to
calculate the effect sizes. So, this actually measures the strength or the
meaningfulness of the relationship.
- When a difference is statistically significant, it does not necessarily mean that it is big,
important or helpful. It simply means that you can be confident that there is a
difference.
- Effect size (usually denoted by Cohen’s d) is a measure of the strength of the
relationship between 2 variables.
- In research settings, it is not only helpful to know whether results have a statistically
significant effect, but also the magnitude of any observed effects.
- To know if an observed difference is not only statistically significant but also
important or meaningful, you will need to calculate its effect size.
- When H0 (“no effect” or “no difference”) is rejected at the usual levels (α = 0.05 or α
= 0.01), there is a good evidence that an effect is present. However, that effect may be
extremely minor or trivial.
- When a larger sample size is accessible, even tiny deviations from the H0 will be
significant.
- Today, whenever there’s an intervention/treatment provided to the experimental
group, it is important to mention the effect size and to report any sort of effect that the
experimenter observed in the participants or in the sample. For example, if the
participants in the experimental group report that their stress levels were reduced due
to the intervention, this acts as evidence that the intervention provided to the
experimental group has had an effect on the participants and this further contributes to
the research being conducted.
Extra:
- 0.1 to 0.3 is low effect size
- 0.3 to 0.5 is moderate effect size
- and ≥ 0.5 is high effect size
- H0 = null hypothesis
- HA = alternative hypothesis
--------------------------------------------------------------
Significant study results vary based on context. Can significant study results ever be
translated into recommendations for the general population?
We know so far that:
1. Tests of statistical significance rarely tell us about the importance of a research result.
2. Effect size tells us about magnitude of difference, which is important, but it is difficult
for practice-oriented practitioners to comprehend.
What can we do, otherwise, to make some sense out of the statistics? We may not do effect
sizes all the time. Does that mean that if you get statistically significant results, you discard
them or you disbelieve them? NO.
Solutions? (basically how to make sense of the statistically significant data in such a
situation?)
There are several solutions to help translate statistically significant data into results that may
be practically applied to real life situations:
1. Comparison of a sample to a meaningful reference group (standardised proper norms)
2. Confidence limits (helps establish generalisability of your result(s))
3. Know when to detect faulty data (one way of assessing this is to see whether the
sample is representative of the population -- think -- “am I trying to communicate
something which can be easily generalized or something which is specific to my
sample or study?”)
- Certain standard levels of significance are often used such as 10%, 5%, and 1%.
- The 5% level (α= 0.05) is particularly common. (but in intervention it is always better
to supplement that with effect sizes)
- Significance at the 5% level is still a widely accepted criterion for meaningful
evidence in research work.
- Basically, p-value can be 0.04 or 0.051. If it is 0.04, you will mention it as statistically
significant, while the latter as not significant. So, actually the difference is very
minimal but just based on that particular value that you have set, you interpret the data
accordingly. So, it’s not a very reliable measure in terms of understanding the
effectiveness or the practical significance of your data.
- If the results are not statistically significant, it does not mean that they are
meaningless. We just have to further evaluate to land on a conclusion (to confirm the
finding(s)).
- So any scientific enquiry has its own meaning, in its own form. The only way is, the
more you structure it, the better it becomes. The more generalizable it is, the more
meaningful it is.
● Offer relatively more control and use precise statistical procedures for
analysis.
● Variables are more controlled
● These allow the interpretation of results with much more confidence.
● Here, we control most of the variables which we think might affect the
cause-effect relationship between the IV and DV.
In a between-subjects design:
- Also known as an independent measures design or classic ANOVA design
- Individuals receive only one of the possible levels of an experimental treatment.
- In one might use matched pairs within the between-subjects design to make sure that
each treatment group contains the same variety of test subjects in the same
proportions.
In a within-subjects design:
- Also known as a repeated measures design
- Every individual receives each of the experimental treatments consecutively, and their
responses to each treatment are measured.
- Within-subjects or repeated measures refer to an experimental design where an effect
emerges over time, and individual responses are measured over time in order to
measure this effect as it emerges.
D. Factorial
i. Can have between participants, within participants and mixed model
Effects :
Qualitative Research :
Qualitative research can be defined as a type of scientific research that tries to bridge the gap
of incomplete information, systematically collects evidence, produces findings and thereby
seeks answer to a problem or question.
It is widely used in collecting and understanding specific information about the behaviour,
opinion, values and other social aspects of a particular community, culture or population.
An example of a qualitative research can be studying the concepts of spiritual development
amongst college students.
3. Historical method: This method helps in understanding and analysing causal relationships.
Data related to the occurrence of an event is collected and evaluated in order to understand
the reasons behind occurrence of such events.
It helps in testing hypothesis concerning cause, effects and trends of events that may help to
explain present events and anticipate future events as well.
4. Grounded theory: This approach involves an active participation of the researcher in the
activities of the group, culture or the community under study.
The data regarding the required information is collected with the help of observation._
It is generally used in generating or developing theories. This means that the ground theorists
can not only work upon generation of new theories, they can test or elaborate previously
grounded theories.
5. Phenomenology: In this method, behavioural phenomena are explained with the help of
conscious experience of events, without using any theory, calculations or assumptions from
other disciplines.
The concept can be best understood with the help of one of the studies that was done in which
patients were asked to describe about caring and uncaring nurses in hospitals Creswell, 1998.
The patients explained those nurses to be caring who show their existential presence and not
mere their physical presence. The existential presence of caring nurses referred to the positive
response showed by them to the patient's request.
The relaxation, comfort and security that the client expresses both physically and mentally
are an immediate and direct result of the client's stated and unstated needs being heard and
responded to by the nurse.
7. ACTION RESEARCH
Action research is "learning by doing" - a group of people identify a problem, do something
to resolve it, see how successful their efforts were, and if not satisfied, try again. While this is
the essence of the approach, there are other key attributes of action research that differentiate
it from common problem-solving activities that we all engage in every day.
What separates this type of research from general professional practices, consulting, or daily
problem-solving is the emphasis on scientific study, which is to say the researcher studies the
problem systematically and ensures the intervention is informed by theoretical
considerations. Much of the researcher's time is spent on refining the methodological tools to
suit the exigencies of the situation, and on collecting, analyzing, and presenting data on an
ongoing, cyclical basis.
Action research became popular in the 1940s. Kurt Lewin (1946) was influential in spreading
action research. He came interested in helping social workers improve their practice.
Participatory action research (PAR) is a special kind of action research in which there is
collaboration between the study participants and the researcher in all steps of the study.
8. NARRATIVE RESEARCH
Narrative research aims to explore and conceptualize human experience as it is represented in
textual form. Aiming for an in-depth exploration of the meanings people assign to their
experiences, narrative researchers work with small samples of participants to obtain rich and
free-ranging discourse.
The emphasis is on storied experience. Generally, this takes the form of interviewing people
around the topic of interest, but it might also involve the analysis of written documents.
Narrative psychology is concerned with the structure, content, and function of the stories that
we tell each other and ourselves in social interaction. It accepts that we live in a storied world
and that we interpret the actions of others and ourselves through the stories we exchange.
Through narratives we not only shape the world and ourselves but they are shaped for us
through narrative.
People who have undergone some sort of trauma provide instances of how they often try to
make sense of the events that they are going through by creating stories or narratives.
A narrative is essentially a written or spoken account of connected events with an underlying
time dimension.
Designing an Experiment
● logic behind it
● how it needs to be measured
● Concrete
● Structured
● Specific
● And which can be studied
● shouldn’t be ambiguous
The primary purposes of formulating the problem in question form are to ensure that the
researcher has
To translate the research question into an experimental hypothesis, we need to define the
main variables and make predictions about how they are related. This involves extensive
research and identification of the independent variables and dependent variables.
3. The variability in extraneous variables other than the independent and dependent
variable are controlled, or minimized.
➢ We need to be aware of the presence of the extraneous variables/factors that will
affect the study and the variables being studied in the research
➢ In an experiment, an extraneous variable is any variable that we’re not
investigating that can potentially affect the outcomes of our research study.
■ If left uncontrolled, extraneous variables can lead to inaccurate conclusions about
the relationship between independent and dependent variables.
■ Extraneous variables can threaten the internal validity of your study by providing
alternative explanations for your results.
■ In an experiment, you manipulate an independent variable to study its effects on
a dependent variable.
➢ In research that investigates a potential cause-and-effect relationship, a
confounding variable is an unmeasured third variable that influences both the
supposed cause and the supposed effect.
■ Confounding variables (a.k.a. confounders or confounding factors) are a type of
extraneous variable that are related to a study’s independent and dependent
variables.
■ A variable must meet two conditions to be a confounder:
● It must be correlated with the independent variable. This may be a causal
relationship, but it does not have to be.
● It must be causally related to the dependent variable.
One should ensure that the hypothesis written is a specific, testable hypothesis that addresses
the research question.
Developing a Hypothesis
1. Writing a hypothesis begins with a research question that we want to answer. The
question should be focused, specific, and researchable within the constraints of our
project.
2. Construct a conceptual framework to identify which variables you will study and what
you think the relationships are between them.
Choice of design is based on the relative merits for a specific research question
● Randomization
Researchers have focused on four validities to help assess whether an experiment is sound
1. Internal validity : Does changing the IV in two similar experimental conditions cause
a direct change in DV in each condition?
a. Only if the IV results in significant changes in the DV can we establish a
cause - effect relationship.
b. The way an experiment is conducted supports the conclusion that the
independent variable caused observed differences in the dependent variable. -
This proves that the experiment and it’s findings have a strong relevance.
2. External validity : Do the situation and behaviours measured simulate real life
settings?
a. This is applicable in laboratory settings; will the results of a laboratory test be
relevant in a non-laboratory, real life setting - how do we generalise it to an
actual setting.
b. The way a study is conducted supports generalizing the results to people and
situations beyond those actually studied. - what were the variables controlled.
c. If it is a complete laboratory experiment that is artificial to real life, the
findings will not stay true in a real life setting.
3. Construct validity : Do the variables adequately measure the construct being studied?
a. This refers to the quality of the experiment’s manipulations.
b. Whether what we have measured is actually what we intend to measure
and whether the variables have been operationalised well.
c. Ex: Is the number of words read in one minute an accurate measure of reading
ability?
4. Statistical validity : Does the data demonstrate whether the difference or relationship
that was predicted was found?
a. To design a statistically valid experiment, thinking about the statistical tests at
the beginning of the design will help ensure the results can be believed
b. Have used the right test/method of analysis to help establish the cause and
effect relationship.
All the types of validity are ideally not possible in a single study.
Recruiting participants
● Every experimental study consists of participants drawn from a subject pool : An
established group of people who have agreed to be contacted about participating in
research studies.
● Researchers selecting humans as their research participants must decide on the
inclusion and exclusion criteria for their participants.
● What type of participants do you need? Ex: Users of public transport , A general
undergraduate population , persons undergoing therapy for mental health issues etc.
● What resources do I have to recruit participants? Are participants recruited based on
convenience and availability?
● The point of having a well-defined selection rule is to avoid bias in the selection of
participants.
● Alternatively, if your strategy is to post a research study on the Internet and have
participants log on to the Web site and complete the study, you could post the study on
one of several Web sites that specialize in advertising research opportunities.
● One of these sites is hosted by the Social Psychology Network, http://
www.socialpsychology.org/addstudy.htm, and another is hosted by the American
Psychological Society, http://psych.hanover.edu/research/exponnet.html.
● After identifying the target participant population, the researcher must select
individual participants from that group through random sampling.
● The number of participants needed to test the hypothesis adequately must be decided.
This decision is based on:
○ the design of the study,
○ the variability of the data, and
○ the type of statistical procedure to be used.
● As the number of participants within a study increases, the ability of our statistical
tests to detect an effect of the independent variable increases; that is, the power of the
statistical test increases.
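The link between sample size and statistical power can be illustrated with a small simulation. The sketch below is a hypothetical example (the group sizes, effect size, and function names are illustrative, not from the source): it repeatedly draws a control group and a treatment group whose means differ by half a standard deviation, and counts how often a simple two-sample z-test detects that difference at the .05 level.

```python
import random
import statistics

def detection_rate(n, effect=0.5, trials=500, critical_z=1.96, seed=42):
    """Fraction of simulated experiments in which a two-sample z-test
    (known SD = 1 in both groups) detects a true mean difference of
    `effect` standard deviations. This fraction estimates power."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        control = [rng.gauss(0.0, 1.0) for _ in range(n)]
        treatment = [rng.gauss(effect, 1.0) for _ in range(n)]
        diff = statistics.mean(treatment) - statistics.mean(control)
        se = (1.0 / n + 1.0 / n) ** 0.5   # standard error of the difference
        if abs(diff / se) > critical_z:   # |z| > 1.96 corresponds to p < .05
            hits += 1
    return hits / trials

small = detection_rate(n=10)    # few participants per group: low power
large = detection_rate(n=100)   # many participants per group: high power
print(small, large)
```

With only 10 participants per group the true effect is detected in a minority of the simulated experiments; with 100 per group it is detected in most of them, which is exactly the increase in power described above.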
Sampling
● In research terms a sample is a group of people, objects, or items that are taken from a
larger population for measurement.
● The sample should be representative of the population to ensure that we can
generalise the findings from the research sample to the population as a whole.
● Probability sampling involves random selection, allowing you to make strong
statistical inferences about the whole group.
● Non-probability sampling involves non-random selection based on convenience or
other criteria, allowing you to easily collect data.
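The contrast between the two approaches can be sketched in a few lines of Python (the subject pool, its size, and the sample size here are hypothetical, purely for illustration):

```python
import random

# Hypothetical subject pool of 500 registered volunteers.
population = [f"student_{i}" for i in range(1, 501)]

# Probability sampling: every member has an equal, known chance of
# selection, so findings can be generalised to the whole pool.
rng = random.Random(7)
probability_sample = rng.sample(population, 50)

# Non-probability (convenience) sampling: take whoever is easiest to
# reach, e.g. the first 50 names on the list. Quick, but potentially biased.
convenience_sample = population[:50]

print(len(probability_sample), len(convenience_sample))
```

`random.sample` gives every member of the pool the same chance of being chosen, whereas slicing off the first 50 names is convenient but systematically over-represents whoever happens to appear at the front of the list.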
Probability Sampling
In psychology research, sampling methods are crucial for gathering data that accurately
represents a larger population. Probability and non-probability sampling are two primary
approaches used to select participants for a study, each with distinct advantages and
limitations.
Advantages:
Results are more likely to be generalizable to the larger population, increasing external
validity.
Limitations:
Not always feasible due to logistical constraints or when a sampling frame (list of the entire
population) is unavailable.
Non-Probability Sampling
Advantages:
Convenient and cost-effective, especially when access to the entire population is difficult.
Useful for exploratory research or when specific types of participants are needed.
Limitations:
May result in a biased sample that doesn't represent the entire population accurately.
In psychology, the choice between probability and non-probability sampling often depends on
the research goals, available resources, and the nature of the study. For instance, if the aim is
to generalize findings to a larger population, probability sampling is preferred. However, in
cases where specific types of participants are needed or access to the entire population is
challenging, non-probability sampling might be more practical.
Apparatus and Instruments
● Researchers must identify how the independent variable conditions will be presented
(e.g., via computer-based software) and how the dependent variable will be measured
(e.g., recording the speed, accuracy, or recall of the subject's verbal responses).
● In some experimental studies, the presentation and manipulation of the independent
variable requires the active participation of the investigator, and the measurement of
the dependent variable involves the administration of a variety of psychological
assessment instruments.
Instructions
● “What should be included in the instructions?” and “How should they be presented?”
● Instructions must include
○ a clear description of the research purpose (or a disguised purpose, if the
design requires one)
● Instructions should be clear, simple, unambiguous, and specific, without being
overly complex.
● Include “warm-up” trials as part of your instructions. These are pretest trials that are
similar to those the participant would complete in the actual study. They are included
to ensure that the research participant understands the instructions and the way they
are to respond.
● Instructions requesting that the research participant “pay attention,” “relax,” or
“ignore distractions” are probably ineffective because research participants are
constrained by other factors that limit their ability to adhere to the commands.
● Instructions sometimes request that the participants perform several operations at the
same time. If this is not possible, then they will choose one of the possible operations
to perform, and the experimenter will not know which choice was made (e.g., respond
with both speed and accuracy).
● Vague instructions (e.g., instructions telling the participants to imagine, guess, or
visualize something) allow the participants to place their own interpretations on the
task.