Option B: Error Rates
3. Applying Rawls to COMPAS
Rawls's approach to justice, developed through the veil of ignorance and the difference principle, would arguably prioritise balancing error rates across groups in the COMPAS case, even if this comes at the cost of maximising overall accuracy.
From the perspective of the veil of ignorance, decision-makers evaluating the algorithm would not know their race, gender, or likelihood of being classified as high or low risk. In this position, they would prioritise fairness in how errors are distributed across groups, ensuring that no one group is disproportionately burdened by false positives (being labelled high risk when not dangerous) or false negatives (being labelled low risk when actually dangerous). A system that produces more false positives for one group would likely be rejected as unjust under Rawls’s framework, as individuals behind the veil would seek protection against unequal treatment that could harm them unfairly.
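The error-rate disparities described above can be made concrete. The sketch below, on purely hypothetical toy data (not the actual COMPAS dataset), computes false positive and false negative rates separately for each group; the group labels, data, and function names are illustrative assumptions.

```python
# Sketch: per-group error rates on hypothetical COMPAS-style labels.
# y_true is 1 if the defendant actually reoffended, y_pred is 1 if the
# tool labelled them high risk; "group" is a hypothetical attribute.

def error_rates(y_true, y_pred):
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    fpr = fp / negatives if negatives else 0.0  # false positive rate
    fnr = fn / positives if positives else 0.0  # false negative rate
    return fpr, fnr

def rates_by_group(y_true, y_pred, group):
    # Restrict the labels to each group and compute its error rates.
    rates = {}
    for g in set(group):
        yt = [t for t, gg in zip(y_true, group) if gg == g]
        yp = [p for p, gg in zip(y_pred, group) if gg == g]
        rates[g] = error_rates(yt, yp)
    return rates

# Hypothetical toy data: group "a" bears all the false positives,
# while group "b" bears a false negative instead.
y_true = [0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1]
y_pred = [1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0]
group = ["a"] * 6 + ["b"] * 6

print(rates_by_group(y_true, y_pred, group))
```

On this toy data, group "a" has a false positive rate of 0.5 and group "b" a rate of 0.0: exactly the kind of unequal distribution of burdens that, on the reading above, parties behind the veil would reject.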
The difference principle reinforces this focus. It requires that inequalities within a system work to the benefit of the least advantaged. In the context of COMPAS, a higher rate of false positives for certain groups (e.g., Black defendants) disproportionately harms individuals who may already face systemic disadvantage. Rawls's theory would oppose such an imbalance, as it exacerbates the burdens on the least advantaged rather than alleviating them.
By contrast, maximising overall accuracy may prioritise efficiency at the expense of fairness. If achieving high accuracy involves tolerating disparities in error rates, this would be incompatible with Rawls's principles, as it allows one group to bear a greater share of harm. Rawls's approach would instead demand that the algorithm balance error rates across groups, even if this somewhat reduces overall predictive accuracy, as that trade-off aligns with the commitment to fairness and the protection of those in vulnerable positions.
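The trade-off described above can be illustrated numerically. The sketch below contrasts two hypothetical classifiers on the same toy data (again, illustrative assumptions, not the real COMPAS tool): one maximises accuracy but concentrates its false positives on group "a"; the other equalises false positive rates at a small cost in overall accuracy.

```python
# Sketch: a hypothetical accuracy/error-rate-balance trade-off.

def accuracy(y_true, y_pred):
    # Fraction of predictions that match the true labels.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def fpr(y_true, y_pred, group, g):
    # False positive rate within one group: predicted positives
    # among that group's actual negatives.
    negs = [p for t, p, gg in zip(y_true, y_pred, group) if gg == g and t == 0]
    return sum(negs) / len(negs)

# Hypothetical toy data: six defendants per group.
y_true = [0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1]
group = ["a"] * 6 + ["b"] * 6

# Classifier 1: 11/12 correct, but its only false positive hits group "a".
pred_max_acc = [1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1]

# Classifier 2: 10/12 correct, with one false positive in each group,
# so the false positive rates are equal across groups.
pred_balanced = [1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1]

print(accuracy(y_true, pred_max_acc))   # higher accuracy, unequal FPR
print(accuracy(y_true, pred_balanced))  # lower accuracy, equal FPR
print(fpr(y_true, pred_max_acc, group, "a"), fpr(y_true, pred_max_acc, group, "b"))
print(fpr(y_true, pred_balanced, group, "a"), fpr(y_true, pred_balanced, group, "b"))
```

On this toy data the balanced classifier gives up one correct prediction (accuracy falls from 11/12 to 10/12) in exchange for equal false positive rates, which is the shape of the choice Rawls's principles would, on this reading, resolve in favour of balance.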