Psy Chapter 2.2

Created by Turbolearn AI

Experimental Methods

What is an Experiment?
An experiment is a research method in which the researcher manipulates
one or more variables and measures their effect on one or more outcome
variables.

Key Components of an Experiment


Independent Variable: The variable controlled and manipulated by the
experimenter.
Dependent Variable: The variable measured to assess the effect of the
independent variable.
Control Group: A group of participants who do not receive the experimental
treatment, used to establish a baseline for comparison.
Random Assignment: The process of assigning participants to experimental or
control groups in a way that ensures each participant has an equal chance of
being assigned to any group.

Experimental Design
A good experimental design features:

Random assignment of participants to groups


Appropriate control groups
Control of situational variables
Carefully selected independent and dependent variables
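To make random assignment concrete, the procedure can be sketched in a few lines of Python. This is an illustrative sketch only; the participant labels and group names are invented for the example, not taken from the chapter:

```python
import random

def randomly_assign(participants, groups=("experimental", "control")):
    """Shuffle the participants, then deal them into groups round-robin,
    so each participant has an equal chance of landing in any group."""
    shuffled = list(participants)  # copy so the caller's list is untouched
    random.shuffle(shuffled)
    assignment = {g: [] for g in groups}
    for i, person in enumerate(shuffled):
        assignment[groups[i % len(groups)]].append(person)
    return assignment

random.seed(42)  # fixed seed only so the demo is reproducible
result = randomly_assign([f"P{n}" for n in range(1, 9)])
print({g: len(members) for g, members in result.items()})
```

Dealing round-robin after a shuffle keeps the group sizes balanced, while the shuffle supplies the randomness that makes the groups comparable.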

Example: Cyberbullying Study

| Variable | Description |
| --- | --- |
| Independent variable | Number of bystanders (2 or 5,025) |
| Dependent variables | Personal responsibility, likelihood of intervention |
| Control group | Participants who viewed a cyberbullying scenario with 2 bystanders |

Limitations of Experiments
Artificiality: Experiments can be somewhat artificial, which may affect the
generalizability of the results.
Ethical Challenges: Making a laboratory experiment more realistic can raise
ethical challenges.
Operationalization: Independent and dependent variables must be defined and
implemented in a concrete fashion.

Operationalization of Variables
Operationalization is the process of defining a variable in practical terms,
so that it can be measured or manipulated.

Examples of operationalization:

Measuring physical aggression by the length and loudness of a sound blast


Measuring aggression by the amount of hot sauce a participant chooses to
administer to another participant
Measuring physical fights among preschoolers

Meta-Analyses
A meta-analysis is a statistical analysis of many previous experiments on
the same topic, used to provide a clearer picture of the results.

Challenges of meta-analyses:

Publication Bias: Published studies may not be representative of all the work
done on a particular problem.
File Drawer Problem: Journals may be more likely to publish studies that
demonstrate significant effects of an independent variable, leaving null
results unpublished in researchers' file drawers.

The Importance of Multiple Perspectives

When studying the effects of video games on aggression, most research has involved
single participants playing alone. However, what happens when we consider the
social perspective and look at the effects of video games on groups of people playing
together? Research has shown that playing video games cooperatively is associated
with less subsequent aggressive behavior, regardless of whether the game played
was violent or not.

The Limitations of Single-Perspective Research


Most studies of video games and aggression use participants playing alone
Failing to consider multiple perspectives can lead to misleading results
Adding the social perspective can provide a more comprehensive understanding
of the effects of video games on behavior

Testing the Effects of Food Additives on Children's Hyperactivity
The double-blind procedure is the gold standard for demonstrating the objective
effects of any substance, whether a food additive, medication, or recreational drug.
This procedure requires a placebo, an inactive substance that cannot be distinguished
from a real, active substance.

The Double-Blind Procedure

| Term | Description |
| --- | --- |
| Double-blind | Neither the participants nor the researchers know whether a participant has received the real substance or a placebo |
| Placebo | An inactive substance that cannot be distinguished from the real, active substance |

The Importance of the Double-Blind Procedure


"The double-blind procedure controls for the effects of participants'
expectations and researchers' biases, ensuring that the results are due to
the substance being tested rather than external factors."

The Effects of Food Additives on Children's Hyperactivity


Double-blind, placebo-controlled studies have shown that common food
additives in fruit drinks can increase hyperactivity in children
(McCann et al., 2007)
The results of this study have implications for attention deficit hyperactivity
disorder (ADHD)

How Do We Study the Effects of Time?


Psychological scientists frequently ask questions about normal behaviors related to
age. To answer these questions, researchers use various techniques, including cross-
sectional and longitudinal studies.

Cross-Sectional Studies
Gather groups of people of varying ages and assess their behavior
Can introduce cohort effects, or the generational effects of having been born at
a particular point in history

Longitudinal Studies
Observe a group of individuals over a long period
Can provide a more accurate understanding of age-related changes in behavior,
but is expensive and time-consuming

How Do We Draw Conclusions from Data?


Asking the right questions and collecting good information are only the beginning of
good science. Once we have collected our results, or data, we need to figure out what
those data mean for our hypotheses and theories.

The Importance of Valid and Reliable Measures


"A valid measure actually measures what it is supposed to measure. A
reliable measure is consistent and produces the same results under the
same conditions."

Types of Reliability

| Type of Reliability | Description |
| --- | --- |
| Test-retest reliability | The consistency of a measure over time |
| Interrater reliability | The consistency of a measure across different observers |
| Inter-method reliability | The positive correlation of several approaches to measuring a feature in an individual |
| Internal consistency | The positive correlation of measures within a single test |

Assessing the Validity of a Measure


Correlate the measure with other existing, established measures of the same
concept
Example: The WFX-FIT is a job requirement for wildland firefighters and is
supposed to measure a firefighter's ability to carry out the extremely physically
demanding tasks of the job. We would therefore expect scores on this test to
correlate with other established measures of physical fitness, such as the Police
Officers Physical Abilities Test (POPAT).

Validity and Reliability

Validity and reliability are two crucial concepts in research. Validity refers to the
extent to which a measure accurately measures what it is supposed to measure.
Reliability, on the other hand, refers to the consistency of a measure.

"A measure cannot be valid without also being reliable, but a measure can
be reliable without being valid."

For example, a bathroom scale that consistently reports an incorrect weight has
reliability but not validity.

Descriptive Statistics

Descriptive statistics help us organize individual bits of data into meaningful patterns
and summaries. They provide a way to summarize and communicate information
about a sample.

Frequency Distributions
Frequency distributions show the number of times each value or category occurs in a
dataset. They can be illustrated using bar charts or histograms.

| Field of Study | Number of Students Enrolled |
| --- | --- |
| Business, management, and public administration | 150,000 |
| Health and related fields | 100,000 |
| Social and behavioral sciences and law | 80,000 |
| Humanities | 60,000 |
| Architecture, engineering, and related technologies | 50,000 |
| Physical and life sciences and technology | 40,000 |
| Education | 30,000 |
| Mathematics, computer, and information sciences | 20,000 |
| Visual and performing arts and communication technologies | 10,000 |
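A frequency distribution like the one above can be built with Python's `collections.Counter`. The small sample of majors below is invented purely for illustration:

```python
from collections import Counter

# a small invented sample of students' fields of study
majors = ["Business", "Health", "Business", "Humanities", "Health", "Business"]

freq = Counter(majors)                   # counts how often each category occurs
for field, count in freq.most_common():  # sorted from most to least frequent
    print(f"{field}: {count}")
```

Sorting categories by count, as `most_common()` does, mirrors how a bar chart of a frequency distribution is usually drawn.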

Central Tendency
Central tendency refers to the middle or typical value of a dataset. There are three
measures of central tendency:

| Measure | Definition |
| --- | --- |
| Mean | The numerical average of a set of scores |
| Median | The middle value of a dataset when it is sorted in order |
| Mode | The most frequently occurring value in a dataset |
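Python's `statistics` module implements all three measures. The scores below are invented, with one extreme value included to show how the mean and median can disagree:

```python
from statistics import mean, median, mode

scores = [2, 3, 3, 5, 7, 8, 100]  # invented data with one extreme score

print(mean(scores))    # average is pulled upward by the outlier (~18.3)
print(median(scores))  # middle value of the sorted data: 5
print(mode(scores))    # most frequently occurring value: 3
```

Because the outlier drags the mean far above most of the scores, the median is often the better summary of "typical" for skewed data such as salaries.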

Page 6
Created by Turbolearn AI

Variance
Variance refers to the spread or dispersion of a dataset; the standard
deviation is the square root of the variance. A smaller standard deviation
indicates that most scores are clustered around the mean, while a larger
standard deviation indicates that scores are spread out.

The mean of these five salaries is $106,800, so each difference below is the
salary minus $106,800.

| Salary | Difference from Mean | Squared Difference |
| --- | --- | --- |
| $40,000 | -$66,800 | 4,462,240,000 |
| $45,000 | -$61,800 | 3,819,240,000 |
| $47,000 | -$59,800 | 3,576,040,000 |
| $52,000 | -$54,800 | 3,003,040,000 |
| $350,000 | +$243,200 | 59,146,240,000 |
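The table's calculation can be checked with Python's `statistics` module, using the same five salaries:

```python
from statistics import mean, pstdev, pvariance

salaries = [40_000, 45_000, 47_000, 52_000, 350_000]

m = mean(salaries)                          # mean salary: 106,800
squared = [(s - m) ** 2 for s in salaries]  # squared differences from the mean

print(m)
print(pvariance(salaries))  # average of the squared differences (population variance)
print(pstdev(salaries))     # population standard deviation: square root of the variance
```

The single $350,000 salary dominates the variance, which is why the standard deviation (over $120,000) comes out larger than four of the five salaries themselves.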

The Normal Curve


The normal curve is a symmetrical distribution of scores that is commonly observed
in many natural phenomena. It has several important features:

Symmetry: equal numbers of scores occur above and below the mean
Shape: most scores occur near the mean
Standard Deviation: 68% of the population falls within one standard deviation
of the mean, 95% falls within two standard deviations, and 99.7% falls within
three standard deviations

Descriptive Statistics with Multiple Variables

Normal Distribution
A normal distribution is a probability distribution that is symmetric about the mean,
showing that data near the mean are more frequent in occurrence than data far from
the mean.

"A normal distribution is a distribution of data points that is symmetric
and follows a bell-shaped curve, where the majority of the data points are
clustered around the mean."

The normal distribution is often described by its mean (μ) and standard deviation (σ).
For example, an IQ score with a mean of 100 and a standard deviation of 15 would
result in:

| Score Range | Percentage of Test Takers |
| --- | --- |
| 85-115 | 68% |
| 70-130 | 95% |
| Above 130 | 2.5% |
| Below 70 | 2.5% |
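The 68% and 95% figures can be derived from the normal curve itself, using the error function from Python's `math` module (a standard identity: the proportion within k standard deviations of the mean is erf(k/√2)):

```python
from math import erf, sqrt

def within_k_sd(k):
    """Proportion of a normal distribution lying within k standard
    deviations of the mean: Phi(k) - Phi(-k) = erf(k / sqrt(2))."""
    return erf(k / sqrt(2))

for k in (1, 2, 3):
    print(f"within {k} SD: {within_k_sd(k):.1%}")  # roughly 68%, 95%, 99.7%
```

For IQ scores with a mean of 100 and a standard deviation of 15, k = 1 corresponds exactly to the 85-115 range in the table.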

Scatterplots
A scatterplot is a graphical representation of the relationship between two variables.
Each dot on the plot represents the intersection of the two variables.

| Variable 1 | Variable 2 |
| --- | --- |
| GRE Quantitative Score | Number of Conference Presentations |

For example, a study examined the relationship between GRE quantitative scores and
the number of conference presentations given by biomedical graduate students. The
scatterplot showed no relationship between the two variables.

Correlation Coefficients
A correlation coefficient is a statistical measure that describes the strength and
direction of the relationship between two variables. Correlation coefficients can range
from -1.00 to +1.00.

| Correlation Coefficient | Interpretation |
| --- | --- |
| -1.00 or +1.00 | Perfect negative or positive correlation |
| 0 | No systematic relationship |
| Between 0 and +1.00, or between 0 and -1.00 | Positive or negative correlation, but not perfect |

A correlation coefficient of -1.00 or +1.00 indicates a perfect negative or
positive correlation, where all data points follow the pattern. A value between
0 and +1.00, or between 0 and -1.00, indicates the direction of a correlation,
but the relationship is not perfect.
