
COMPAS (software)

From Wikipedia, the free encyclopedia

Correctional Offender Management Profiling for Alternative Sanctions (COMPAS)[1] is a case management and decision support tool developed and owned by Northpointe (now Equivant), used by U.S. courts to assess the likelihood of a defendant becoming a recidivist.[2][3]

COMPAS has been used by the U.S. states of New York, Wisconsin, California, Florida's Broward County, and other jurisdictions.[4]

Risk assessment


The COMPAS software uses an algorithm to assess potential recidivism risk. Northpointe created risk scales for general and violent recidivism, and for pretrial misconduct. According to the COMPAS Practitioner's Guide, the scales were designed using behavioral and psychological constructs "of very high relevance to recidivism and criminal careers."[5]

Pretrial release risk scale
Pretrial risk is a measure of the potential for an individual to fail to appear and/or to commit new felonies while on release. According to the research that informed the creation of the scale, "current charges, pending charges, prior arrest history, previous pretrial failure, residential stability, employment status, community ties, and substance abuse" are the most significant indicators affecting pretrial risk scores.[5]
General recidivism scale
The General recidivism scale is designed to predict new offenses committed upon release, after the COMPAS assessment is given. The scale uses an individual's criminal history and associates, drug involvement, and indications of juvenile delinquency.[6]
Violent recidivism scale
The violent recidivism score is meant to predict violent offenses following release. The scale uses data or indicators that include a person's "history of violence, history of non-compliance, vocational/educational problems, the person's age-at-intake and the person's age-at-first-arrest."[7]

The violent recidivism risk scale is calculated as follows:

$s = a(-w) + a_{\text{first}}(-w) + h_{\text{violence}}\,w + v_{\text{edu}}\,w + h_{\text{nc}}\,w$

where $s$ is the violent recidivism risk score, $w$ is a weight multiplier, $a$ is the current age, $a_{\text{first}}$ is the age at first arrest, $h_{\text{violence}}$ is the history of violence, $v_{\text{edu}}$ is the vocational education scale, and $h_{\text{nc}}$ is the history of noncompliance. The weight, $w$, is "determined by the strength of the item's relationship to person offense recidivism that we observed in our study data."[8]
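Because the published formula is a weighted linear combination, the arithmetic itself is simple. The following Python sketch is illustrative only: the weight values and inputs are invented placeholders, since Northpointe's actual item weights are derived from its proprietary study data and are not public.

```python
# Illustrative sketch of a weighted linear score of the form described above.
# The weights and inputs are invented placeholders, not Northpointe's values.

def violent_recidivism_score(age, age_at_first_arrest, history_of_violence,
                             vocational_education, history_of_noncompliance,
                             weights):
    """Weighted sum: the age terms enter negatively, the other items positively."""
    return (age * -weights["age"]
            + age_at_first_arrest * -weights["age_first"]
            + history_of_violence * weights["violence"]
            + vocational_education * weights["voc_ed"]
            + history_of_noncompliance * weights["noncompliance"])

# Hypothetical weights and inputs, for illustration only.
example_weights = {"age": 0.5, "age_first": 0.3, "violence": 1.2,
                   "voc_ed": 0.8, "noncompliance": 1.0}
print(violent_recidivism_score(age=30, age_at_first_arrest=19,
                               history_of_violence=4, vocational_education=6,
                               history_of_noncompliance=2,
                               weights=example_weights))
```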

Critiques

The introduction of AI and algorithms into courtrooms is often motivated by a desire to counter human cognitive biases such as the hungry judge effect.[9]

In July 2016, the Wisconsin Supreme Court ruled that COMPAS risk scores can be considered by judges during sentencing, but the scores must be accompanied by warnings describing the tool's "limitations and cautions."[4]

A general critique of the use of proprietary software such as COMPAS is that, because the algorithms it uses are trade secrets, they cannot be examined by the public or by affected parties, which may violate due process. Additionally, simple, transparent, and more interpretable algorithms (such as linear regression) have been shown to make predictions approximately as well as the COMPAS algorithm.[10][11][12]
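As an illustration of this critique, the kind of transparent baseline that such studies compare against can be written in a few lines. The sketch below fits a plain logistic regression on two generic features; the data are synthetic and the feature names (`age`, `priors_count`) are assumptions for illustration, not COMPAS's actual inputs.

```python
# Illustrative sketch of a simple, fully interpretable two-feature model of the
# kind that studies have compared against COMPAS. All data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
age = rng.integers(18, 70, size=n)
priors_count = rng.poisson(2, size=n)
# Synthetic outcome: younger defendants with more priors re-offend more often.
p = 1 / (1 + np.exp(-(-0.04 * (age - 40) + 0.4 * priors_count - 0.5)))
reoffended = rng.binomial(1, p)

X = np.column_stack([age, priors_count])
model = LogisticRegression().fit(X, reoffended)

# The fitted model is fully transparent: two coefficients and an intercept.
print("coefficients:", model.coef_[0], "intercept:", model.intercept_[0])
print("training accuracy:", model.score(X, reoffended))
```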

Another general criticism of machine-learning-based algorithms is that, because they are data-dependent, biased data will likely yield biased results.[13]

Specifically, COMPAS risk assessments have been argued to violate 14th Amendment Equal Protection rights on the basis of race, on the grounds that the algorithms are racially discriminatory, result in disparate treatment, and are not narrowly tailored.[14]

Accuracy


In 2016, Julia Angwin was co-author of a ProPublica investigation of the algorithm.[15] The team found that "blacks are almost twice as likely as whites to be labeled a higher risk but not actually re-offend," whereas COMPAS "makes the opposite mistake among whites: They are much more likely than blacks to be labeled lower-risk but go on to commit other crimes."[15][10][16] They also found that only 20 percent of people predicted to commit violent crimes actually went on to do so.[15]
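The disparity described above is a difference in group-wise error rates: the rate of being labeled higher risk without re-offending (false positives) and the rate of being labeled lower risk while going on to re-offend (false negatives). A minimal sketch of how such rates are computed from predictions and observed outcomes follows; the records are invented for illustration and do not come from ProPublica's data.

```python
# Illustrative sketch: per-group false positive and false negative rates, the
# quantities at issue in the ProPublica analysis. The records below are invented.
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True, False), ("A", True, True), ("A", False, False), ("A", True, False),
    ("B", False, True), ("B", False, False), ("B", True, True), ("B", False, True),
]

counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
for group, predicted_high, reoffended in records:
    c = counts[group]
    if reoffended:
        c["pos"] += 1
        if not predicted_high:
            c["fn"] += 1  # labeled lower risk but re-offended
    else:
        c["neg"] += 1
        if predicted_high:
            c["fp"] += 1  # labeled higher risk but did not re-offend

for group, c in sorted(counts.items()):
    fpr = c["fp"] / c["neg"] if c["neg"] else float("nan")
    fnr = c["fn"] / c["pos"] if c["pos"] else float("nan")
    print(f"group {group}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")
```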

In a letter, Northpointe criticized ProPublica's methodology, stating: "[The company] does not agree that the results of your analysis, or the claims being made based upon that analysis, are correct or that they accurately reflect the outcomes from the application of the model."[15]

Another team, at Community Resources for Justice, a criminal justice think tank, published a rebuttal of the investigation's findings.[17] Among several objections, the CRJ rebuttal concluded that ProPublica's results "contradict several comprehensive existing studies concluding that actuarial risk can be predicted free of racial and/or gender bias."[17]

A subsequent study showed that the COMPAS software is somewhat more accurate than individuals with little or no criminal justice expertise, yet less accurate than groups of such individuals.[18] The study found that: "On average, they got the right answer 63 percent of their time, and the group's accuracy rose to 67 percent if their answers were pooled. COMPAS, by contrast, has an accuracy of 65 percent."[10]

Researchers from the University of Houston found that COMPAS does not conform to group fairness criteria and produces various kinds of unfair outcomes across sex- and race-based demographic groups.[19]

Further reading

  • Northpointe (15 March 2015). "A Practitioner's Guide to COMPAS Core" (PDF).
  • Angwin, Julia; Larson, Jeff (2016-05-23). "Machine Bias". ProPublica. Retrieved 2019-11-21.
  • Flores, Anthony; Lowenkamp, Christopher; Bechtel, Kristin. "False Positives, False Negatives, and False Analyses" (PDF). Community Resources for Justice. Retrieved 2019-11-21.
  • Sample COMPAS Risk Assessment


References

  1. ^ "DOC COMPAS". Retrieved 2023-04-04.
  2. ^ Sam Corbett-Davies, Emma Pierson, Avi Feller and Sharad Goel (October 17, 2016). "A computer program used for bail and sentencing decisions was labeled biased against blacks. It's actually not that clear". The Washington Post. Retrieved January 1, 2018.
  3. ^ Aaron M. Bornstein (December 21, 2017). "Are Algorithms Building the New Infrastructure of Racism?". Nautilus. No. 55. Retrieved January 2, 2018.
  4. ^ a b Kirkpatrick, Keith (2017-01-23). "It's not the algorithm, it's the data". Communications of the ACM. 60 (2): 21–23. doi:10.1145/3022181. S2CID 33993859.
  5. ^ a b Northpointe 2015, p. 27.
  6. ^ Northpointe 2015, p. 26.
  7. ^ Northpointe 2015, p. 28.
  8. ^ Northpointe 2015, p. 29.
  9. ^ Chatziathanasiou, Konstantin (May 2022). "Beware the Lure of Narratives: "Hungry Judges" Should Not Motivate the Use of "Artificial Intelligence" in Law". German Law Journal. 23 (4): 452–464. doi:10.1017/glj.2022.32. ISSN 2071-8322. S2CID 249047713.
  10. ^ a b c Yong, Ed (2018-01-17). "A Popular Algorithm Is No Better at Predicting Crimes Than Random People". Retrieved 2019-11-21.
  11. ^ Angelino, Elaine; Larus-Stone, Nicholas; Alabi, Daniel; Seltzer, Margo; Rudin, Cynthia (June 2018). "Learning Certifiably Optimal Rule Lists for Categorical Data". Journal of Machine Learning Research. 18 (234): 1–78. arXiv:1704.01701. Retrieved 2023-07-20.
  12. ^ Robin A. Smith. Opening the lid on criminal sentencing software. Duke Today, 19 July 2017
  13. ^ O'Neil, Cathy (2016). Weapons of Math Destruction. Crown. p. 87. ISBN 978-0553418811.
  14. ^ Thomas, C.; Nunez, A. (2022). "Automating Judicial Discretion: How Algorithmic Risk Assessments in Pretrial Adjudications Violate Equal Protection Rights on the Basis of Race". Law & Inequality. 40 (2): 371–407. doi:10.24926/25730037.649.
  15. ^ a b c d Angwin, Julia; Larson, Jeff (2016-05-23). "Machine Bias". ProPublica. Retrieved 2019-11-21.
  16. ^ Israni, Ellora (2017-10-26). "When an Algorithm Helps Send You to Prison (Opinion)". The New York Times. Retrieved 2019-11-21.
  17. ^ a b Flores, Anthony; Lowenkamp, Christopher; Bechtel, Kristin. "False Positives, False Negatives, and False Analyses" (PDF). Community Resources for Justice. Retrieved 2019-11-21.
  18. ^ Dressel, Julia; Farid, Hany (2018-01-17). "The accuracy, fairness, and limits of predicting recidivism". Science Advances. 4 (1): eaao5580. Bibcode:2018SciA....4.5580D. doi:10.1126/sciadv.aao5580. PMC 5777393. PMID 29376122.
  19. ^ Gursoy, Furkan; Kakadiaris, Ioannis A. (2022-11-28). "Equal Confusion Fairness: Measuring Group-Based Disparities in Automated Decision Systems". 2022 IEEE International Conference on Data Mining Workshops (ICDMW). IEEE. pp. 137–146. arXiv:2307.00472. doi:10.1109/ICDMW58026.2022.00027. ISBN 979-8-3503-4609-1. S2CID 256669476.







