โš–๏ธ AI Ethics ยท Fairness

Who Gets the Loan?

An AI loan officer makes decisions — but is it fair? Discover hidden bias, measure disparate impact, and fix it. The same issues that made Apple Card and Amazon's hiring AI infamous!

๐Ÿ‘๏ธ Watch the AI Decide
๐Ÿ” Find the Bias
๐Ÿ“Š Measure Fairness
๐Ÿ”ง Fix the Bias
๐Ÿ† Badge

How AI Bias Happens

📊

Biased Training Data

If historical data reflects past discrimination, the AI learns and perpetuates it — even if "protected" features are removed.

🔗

Proxy Features

Zip code correlates with race. Job title correlates with gender. Removing the protected feature isn't enough! (See the sketch after these cards.)

📏

Disparate Impact

If approval rates differ significantly between equally qualified groups, the model exhibits disparate impact.

🔧

Fairness Fixes

Re-sampling, re-weighting, fairness constraints, or post-processing can reduce bias — but often at some accuracy cost.
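To make the first two cards concrete, here is a minimal sketch on synthetic data (every number, weight, and feature name below is invented for illustration). The protected attribute is dropped before training, yet a correlated proxy feature lets the model reproduce the historical bias anyway:

```python
# Sketch: proxy discrimination on synthetic data (all values invented).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Protected attribute: group 0 (advantaged) vs group 1 (disadvantaged).
group = rng.integers(0, 2, n)

# Proxy: a neighbourhood score strongly correlated with group
# because of historical inequality.
neighbourhood = rng.normal(np.where(group == 0, 1.0, -1.0), 0.5)

# True qualification is distributed identically across groups.
qualification = rng.normal(0.0, 1.0, n)

# Historical approvals rewarded the proxy, not just qualification:
# this is the biased training data.
approved = qualification + 1.5 * neighbourhood + rng.normal(0.0, 1.0, n) > 0

# Train WITHOUT the protected attribute -- only qualification + proxy.
X = np.column_stack([qualification, neighbourhood])
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

# The bias survives feature removal: approval rates differ by group.
for g in (0, 1):
    print(f"group {g}: approval rate = {pred[group == g].mean():.2f}")
```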

⚖️
Wizzy the AI Tutor
Meet the AI Loan Officer! 🤖 It reviews loan applications and makes approve/deny decisions. Watch it process these 12 applications. The AI uses income, credit score, loan amount, and employment status. Does anything look suspicious? Pay attention to the outcomes!

Step 1 — Watch the AI Decide

Click "Run All Decisions" to see how the AI processes each application.
⚖️
Wizzy the AI Tutor
🔍 Something is wrong. Look carefully at the approval rates by neighbourhood. Two applicants with identical income and credit scores get different decisions — one from South Mumbai, one from Dharavi. Can you see the hidden proxy discrimination?

Step 2 — Spot the Hidden Bias

🔴 Denied (suspicious cases)

🟢 Approved (comparison cases)

🚨 Bias Detector
Approval rate — Affluent areas: —
Approval rate — Working-class areas: —
The AI never directly uses neighbourhood as a feature — but income and credit score are correlated with neighbourhood due to historical inequality. This is called proxy discrimination.
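Under the hood, a bias detector like this is just a group-by: bucket the decisions by neighbourhood and compare approval rates. A minimal sketch, with a made-up applications table:

```python
# What the Bias Detector computes: approval rate per neighbourhood.
# The applications table is invented for illustration.
import pandas as pd

apps = pd.DataFrame({
    "neighbourhood": ["South Mumbai", "Dharavi", "South Mumbai",
                      "Dharavi", "South Mumbai", "Dharavi"],
    "approved":      [True, True, True, False, True, False],
})

rates = apps.groupby("neighbourhood")["approved"].mean()
print(rates)
# South Mumbai 1.00 vs Dharavi 0.33 -- a gap like this, when the model
# never saw "neighbourhood" as an input, is the signature of proxy
# discrimination.
```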
⚖️
Wizzy the AI Tutor
Let's measure the bias precisely! The Disparate Impact Ratio divides the less favoured group's approval rate by the favoured group's. A ratio below 0.8 (the "80% rule") is treated as evidence of illegal discrimination in many jurisdictions. Which features are causing the most bias? (You can compute the ratio yourself; see the sketch below.)

Step 3 — Quantify the Unfairness

Feature Importance (what the AI actually uses):
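The ratio Wizzy describes is a single division. A minimal sketch (the two rates are illustrative, not the demo's actual numbers):

```python
# The disparate impact ratio (DIR) behind the 80% rule.
def disparate_impact_ratio(rate_less_favoured, rate_favoured):
    """Approval rate of the less favoured group divided by that of
    the favoured group."""
    return rate_less_favoured / rate_favoured

ratio = disparate_impact_ratio(rate_less_favoured=0.45, rate_favoured=0.75)
print(f"DIR = {ratio:.2f}")  # DIR = 0.60
print("FAILS the 80% rule" if ratio < 0.8 else "passes the 80% rule")
```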

⚖️
Wizzy the AI Tutor
Now let's fix the bias! Try different fairness strategies and see how they affect both fairness AND accuracy. This is the real challenge — improving fairness usually costs some accuracy. What's more important to you? (One strategy is sketched in code below.)

Step 4 — Apply Fairness Fixes

โŒ Before Fix

โœ… After Fix

Click a fairness strategy to apply it and see the before/after comparison.
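To make the trade-off concrete, here is a sketch of one strategy, post-processing with group-specific decision thresholds, on synthetic data like the earlier proxy example (the data, the 1.5 bias weight, and the 0.25 threshold are all invented for illustration):

```python
# One fairness fix, sketched: post-processing with group-specific
# decision thresholds. Data and thresholds are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                                     # protected attribute
neighbourhood = rng.normal(np.where(group == 0, 1.0, -1.0), 0.5)  # proxy feature
qualification = rng.normal(0.0, 1.0, n)
approved = qualification + 1.5 * neighbourhood + rng.normal(0.0, 1.0, n) > 0

X = np.column_stack([qualification, neighbourhood])
scores = LogisticRegression().fit(X, approved).predict_proba(X)[:, 1]

def report(pred, label):
    gap = pred[group == 0].mean() - pred[group == 1].mean()
    acc = (pred == approved).mean()  # accuracy vs the *biased* history
    print(f"{label}: approval gap = {gap:+.2f}, accuracy = {acc:.2f}")

# Before: one shared threshold for everyone.
report(scores > 0.5, "before fix")

# After: a lower threshold for the disadvantaged group narrows the gap.
# (0.25 is hand-picked here; a real audit would tune it on held-out data.)
report(scores > np.where(group == 0, 0.5, 0.25), "after fix ")
# The trade-off: the gap shrinks, but accuracy against the biased
# historical labels drops.
```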
⚖️
Wizzy the AI Tutor
🎊 You're now an AI Fairness Auditor! You identified proxy discrimination, measured disparate impact, and applied fairness constraints. These are the same skills used by AI ethics teams at Google, Microsoft, and governments worldwide. Responsible AI needs people like you!
⚖️

AI Ethics Badge!

You audited an AI system for bias and applied fairness fixes!

โš–๏ธ WhizzStep AI Lab
This certifies that
Student Name
has completed AI Fairness & Ethics Audit Training
AI Ethics Auditor
Bias Detective
Fairness Engineer
whizzstep.in

Key Concepts Mastered

Proxy Discrimination

🔗 Hidden Bias

Using a feature correlated with a protected attribute (e.g. zip code → race). Dropping the protected feature removes the label, not the bias.

Disparate Impact

📊 Unequal Outcomes

When a neutral policy produces significantly different outcomes for different groups. Can be unlawful in many jurisdictions even without discriminatory intent.

80% Rule

๐Ÿ“ Legal Threshold

If the less favoured group's approval rate is below 80% of the favoured group's, it counts as disparate impact.

Fairness-Accuracy

โš–๏ธ The Trade-off

Making a model fairer often reduces raw accuracy. Society must decide which trade-off is acceptable.

Algorithmic Auditing

๐Ÿ” Checking AI

Systematically testing AI systems for bias across protected groups. Increasingly required by regulators worldwide.

Real Cases

📰 It Has Happened

Apple Card gender bias (2019), Amazon hiring AI (2018), the COMPAS recidivism tool (2016), healthcare allocation algorithms.