The project will focus on how neural networks make loan approval decisions and how biases can emerge in those decisions. We will use datasets with features such as credit score, employment status, income, age, race, and gender. We will train neural networks on that data to predict loan approvals and compare performance across demographic groups to identify disparities. We could apply fairness metrics such as demographic parity and equal opportunity to measure bias in the model’s decisions, and we could also experiment with techniques such as re-sampling and fairness constraints to reduce that bias (illustrative sketches of both steps follow the goals below). Finally, we will discuss the ethical implications of our findings and consider how even small biases in models can scale up in real-world systems and affect people’s lives.

Goals:
- Identify and quantify bias
- Evaluate the impact of bias mitigation techniques
- Assess ethical implications
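
As a concrete illustration of the fairness metrics mentioned above, the following is a minimal sketch of how demographic parity and equal opportunity gaps could be computed from model outputs. The array names (y_true, y_pred, group) and the toy data are illustrative assumptions, not the project's actual dataset or code.

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Absolute gap in positive-prediction (approval) rates between two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

def equal_opportunity_diff(y_true, y_pred, group):
    """Absolute gap in true-positive rates (approval rate among applicants
    who should have been approved) between two groups."""
    tpr_a = y_pred[(group == 0) & (y_true == 1)].mean()
    tpr_b = y_pred[(group == 1) & (y_true == 1)].mean()
    return abs(tpr_a - tpr_b)

# Toy example with made-up binary predictions and a binary group indicator.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_diff(y_pred, group))        # 0.25
print(equal_opportunity_diff(y_true, y_pred, group)) # ~0.33
```

A gap of zero on either metric would indicate parity between the two groups on that criterion; comparing both metrics matters because a model can satisfy one while violating the other.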
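
Similarly, the re-sampling mitigation step could look roughly like the sketch below, which oversamples each (group, outcome) combination to the same size before retraining. The function name and interface are hypothetical; the actual approach will depend on the dataset and tooling we end up using.

```python
import numpy as np

def resample_by_group(X, y, group, seed=0):
    """Oversample each (group, outcome) cell to the size of the largest cell
    so groups and outcomes are equally represented in the training set.
    Assumes every (group, outcome) combination appears at least once."""
    rng = np.random.default_rng(seed)
    cells = [(g, label) for g in np.unique(group) for label in np.unique(y)]
    target = max(((group == g) & (y == label)).sum() for g, label in cells)
    indices = []
    for g, label in cells:
        members = np.where((group == g) & (y == label))[0]
        indices.append(rng.choice(members, size=target, replace=True))
    indices = np.concatenate(indices)
    return X[indices], y[indices], group[indices]
```

Balancing outcomes across groups in the training data targets demographic parity fairly directly; adding fairness constraints during training would be an alternative mitigation we can compare it against.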