Psych Assessment Unit V

Reliability


Psych Assessment

Unit 4: Reliability

Reliability (consistency/dependability)
- the degree of consistency of a test.

Reliability Coefficient
- an index of reliability; a proportion that indicates the ratio between the true score variance on a test and the total variance.
 Total Variance = True Variance + Error Variance

True Variance
- variance from true differences; assumed to be stable. Because true differences are assumed to be stable, they are presumed to yield consistent scores on repeated administrations of the same test as well as on equivalent forms of tests.

Error Variance
- variance from other factors unrelated to the variable being tested (e.g., test administration, location of testing, scoring and interpretation, etc.)
- GOAL: minimize the variance attributed to error.
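As a quick numeric sketch of the ratio above (the variance values here are made up purely for illustration):

```python
# Reliability as the ratio of true-score variance to total variance.
# The variance values below are hypothetical, chosen for illustration.
true_variance = 8.0
error_variance = 2.0

total_variance = true_variance + error_variance  # 10.0
reliability = true_variance / total_variance

print(reliability)  # 0.8
```

A reliability of 0.8 means 80% of the observed score variance reflects true differences, and the remaining 20% is error.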

Reliability Estimates

How to estimate reliability?

1. Test-Retest Reliability Estimate (Error in Test Administration)
- If you measure the same variable at two different times and get similar results, it suggests that the instrument is reliable.
- Obtained by correlating pairs of scores from the same people on two different administrations of the same test.
- Appropriate when evaluating the reliability of a test that purports to measure something that is relatively stable over time, such as a personality trait.

Coefficient of Stability
- When the interval between testings is greater than six months, the estimate of test-retest reliability is often referred to as the coefficient of stability.

2. Parallel-Forms and Alternate-Forms Reliability Estimates (Error in Test Construction and Test Administration)
- Alternate forms: different versions of the test.
- Parallel forms: different versions of the test, but more similar to each other than alternate forms are.
- The degree of relationship between various forms of a test can be evaluated by means of the coefficient of equivalence.
- Main advantage: minimizes the effect of memory for the content of a previously administered form of the test.
- Disadvantage: developing alternate forms of a test can be time-consuming and expensive.
A cheaper alternative to alternate forms is to estimate reliability from a single administration of a single test. Logically enough, this is referred to as an internal consistency estimate of reliability, or as an estimate of inter-item consistency.
- used to describe how well the items within a test or measure are consistent with each other.

3. Split-Half Reliability Estimate
- a method used to assess the internal consistency of a test by dividing the test into two equivalent halves that are administered once, and then correlating the scores from these halves.

How to split the test?


a. Odd-Even
b. Random

- Because the test is divided in half, each half is shorter than the whole test, and shorter tests tend to yield lower reliability estimates. To correct for this, use the Spearman-Brown formula.

Spearman-Brown Formula
- allows a test developer or user to estimate internal consistency reliability from a correlation of two halves of a test. For a test split in half: r_SB = 2 * r_hh / (1 + r_hh), where r_hh is the correlation between the two halves.
- Longer tests tend to be more reliable, though this isn't always the case. The extra test items should ideally be comparable to the original items in content and range of difficulty.

4. Other Methods of Estimating Internal Consistency
 Inter-item consistency refers to the degree of correlation among all the items on a scale.
 An index of inter-item consistency, in turn, is useful in assessing the homogeneity of the test.
 Homogeneity: the degree to which the items in a scale are unifactorial, i.e., measure a single trait or aspect. Individuals who score similarly on a homogeneous test are likely to possess similar abilities in the area being tested.
 Heterogeneity: a heterogeneous test is composed of items that measure more than one trait.

a. Coefficient alpha – by Cronbach (1951)
- widely used as a measure of reliability, in part because it requires only one administration of the test.
- recommended for tests with non-dichotomous items (e.g., Likert scales).
- may be thought of as the mean of all possible split-half correlations, corrected by the Spearman-Brown formula.
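A sketch of coefficient alpha from a single administration, using the standard formula alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores); the Likert-style responses are hypothetical:

```python
from statistics import pvariance

# Hypothetical 5-point Likert responses: rows = respondents,
# columns = 4 items.
scores = [
    [4, 5, 4, 4],
    [3, 3, 3, 4],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
]

k = len(scores[0])  # number of items

# Variance of each item (columns) and of the total scores (rows).
item_variances = [pvariance(item) for item in zip(*scores)]
total_variance = pvariance([sum(row) for row in scores])

alpha = (k / (k - 1)) * (1 - sum(item_variances) / total_variance)
print(round(alpha, 2))  # 0.93
```

The items here vary together across respondents, so alpha comes out high; items that disagreed with one another would push it down.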

b. The Kuder-Richardson formulas
- recommended for tests with dichotomous items (e.g., yes/no); the best known is KR-20.
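A sketch of KR-20, using the formula KR-20 = (k / (k - 1)) * (1 - sum(p*q) / variance of total scores), where p is the proportion answering each item positively and q = 1 - p; the response data are hypothetical:

```python
from statistics import pvariance

# Hypothetical yes(1)/no(0) responses: rows = examinees,
# columns = 5 items.
scores = [
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 1],
]

k = len(scores[0])  # number of items
n = len(scores)     # number of examinees

# For each item: p = proportion of 1s, q = 1 - p.
sum_pq = 0.0
for item in zip(*scores):
    p = sum(item) / n
    sum_pq += p * (1 - p)

total_variance = pvariance([sum(row) for row in scores])
kr20 = (k / (k - 1)) * (1 - sum_pq / total_variance)
print(round(kr20, 2))  # 0.79
```

For dichotomous items, p * q is simply the item's variance, so KR-20 is coefficient alpha specialized to 1/0 scoring.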
