An AI loan officer makes decisions, but is it fair? Discover hidden bias, measure disparate impact, and fix it: the same issues that made the Apple Card and Amazon's hiring AI infamous!
If historical data reflects past discrimination, the AI learns and perpetuates it, even if "protected" features are removed.
Zip code correlates with race. Job title correlates with gender. Removing the protected feature isn't enough!
If approval rates differ significantly between groups with equal qualifications, the model has disparate impact bias.
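Disparate impact is commonly measured as the ratio of approval rates between the less and more favoured groups, compared against the 80% (four-fifths) threshold. A sketch with hypothetical approval outcomes:

```python
def disparate_impact_ratio(outcomes_a, outcomes_b):
    """Ratio of the lower group approval rate to the higher one (1 = approved)."""
    rate_a = sum(outcomes_a) / len(outcomes_a)
    rate_b = sum(outcomes_b) / len(outcomes_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical equally qualified applicants; group B is approved far less often.
group_a = [1, 1, 1, 1, 0, 1, 1, 1, 1, 1]   # 90% approved
group_b = [1, 0, 1, 0, 0, 1, 0, 1, 0, 1]   # 50% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.56, below the 0.8 threshold
```

A ratio below 0.8 flags the model for disparate impact under the four-fifths rule.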
Re-sampling, re-weighting, fairness constraints, or post-processing can reduce bias, but often at some accuracy cost.
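One of these mitigations, re-weighting, can be sketched in a few lines. This follows the reweighing idea of Kamiran & Calders: each training row gets a weight that makes group membership and outcome look statistically independent. The data below is invented for illustration:

```python
from collections import Counter

def reweighing(groups, labels):
    """Weight each (group, label) pair by P(group)*P(label) / P(group, label),
    so that group and label appear independent to a weighted learner."""
    n = len(groups)
    p_group = Counter(groups)
    p_label = Counter(labels)
    p_joint = Counter(zip(groups, labels))
    return [
        (p_group[g] / n) * (p_label[y] / n) / (p_joint[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Hypothetical training data: group "b" was historically approved less often.
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
labels = [1, 1, 1, 0, 1, 0, 0, 0]

weights = reweighing(groups, labels)
# Approved "b" rows get weight > 1, approved "a" rows get weight < 1,
# so a learner trained on the weighted data no longer sees approval
# as tied to group membership.
```

The weights can be passed to most learners (e.g. a `sample_weight` argument) without changing the model itself, which is why re-weighting is a popular pre-processing fix.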
You audited an AI system for bias and applied fairness fixes!
Using a feature that correlates with a protected attribute (e.g. zip code → race). The protected feature is removed, but the bias remains.
When a neutral policy produces significantly different outcomes for different groups. Illegal in many countries.
If the less favoured group's approval rate is below 80% of the favoured group's, it counts as disparate impact (the "four-fifths rule").
Making a model fairer often reduces raw accuracy. Society must decide which trade-off is acceptable.
Systematically testing AI systems for bias across protected groups. Increasingly required by regulators worldwide.
Apple Card gender bias (2019), Amazon hiring AI (2018), COMPAS recidivism tool, healthcare allocation algorithms.