Techniques to Reduce Bias in Training Datasets: A Survey in Fairness in Artificial Intelligence
  • Author(s): Vivek Santhosh Rai
  • Paper ID: 1710379
  • Page: 19-20
  • Published Date: 02-09-2025
  • Published In: Iconic Research And Engineering Journals
  • Publisher: IRE Journals
  • e-ISSN: 2456-8880
  • Volume/Issue: Volume 9 Issue 3 September-2025
Abstract

Artificial Intelligence (AI) and Machine Learning (ML) systems are increasingly used in sensitive areas such as healthcare, recruitment, finance, and law enforcement. However, the fairness of these systems is often questioned because of biases in their training datasets. These biases arise from issues such as sampling errors and historical prejudice, and they can propagate through algorithms and produce unfair or discriminatory outcomes. This survey reviews current methods for reducing dataset bias in ML models. It divides these methods into pre-processing, in-processing, and post-processing approaches and compares their strengths and weaknesses. The paper highlights recent developments in fairness-focused ML and offers insights into the trade-offs between model performance and fairness. It concludes with a discussion of future research directions toward fairer and more transparent AI systems.
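As a concrete illustration of the pre-processing category mentioned above, the sketch below implements reweighing, a standard pre-processing technique in the fairness literature: each (protected group, label) pair receives a sample weight so that, after weighting, group membership and outcome are statistically independent in the training data. The function name and the toy data are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of reweighing, a pre-processing bias-mitigation technique.
# Weight for a sample with group g and label y: P(g) * P(y) / P(g, y),
# which upweights under-represented (group, label) combinations and
# downweights over-represented ones.
from collections import Counter

def reweigh(groups, labels):
    """Return one weight per sample: P(g) * P(y) / P(g, y)."""
    n = len(groups)
    pg = Counter(groups)                 # marginal counts per protected group
    py = Counter(labels)                 # marginal counts per label
    pgy = Counter(zip(groups, labels))   # joint counts per (group, label) pair
    return [
        (pg[g] / n) * (py[y] / n) / (pgy[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Hypothetical toy data: group "a" is over-represented among positive labels.
groups = ["a", "a", "a", "b", "b", "b"]
labels = [1, 1, 1, 1, 0, 0]
weights = reweigh(groups, labels)
# Samples from the over-represented ("a", 1) pair get weights below 1,
# while the rare ("b", 1) pair is upweighted.
```

A downstream classifier would then use these weights during training (e.g. via a `sample_weight` argument), which addresses dataset imbalance without modifying the learning algorithm itself.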

Keywords

Artificial Intelligence, Bias Mitigation, Dataset Fairness, Ethical Machine Learning, Responsible AI

Citations

IRE Journals:
Vivek Santhosh Rai "Techniques to Reduce Bias in Training Datasets: A Survey in Fairness in Artificial Intelligence" Iconic Research And Engineering Journals Volume 9 Issue 3 2025 Page 19-20

IEEE:
V. S. Rai, "Techniques to Reduce Bias in Training Datasets: A Survey in Fairness in Artificial Intelligence," Iconic Research And Engineering Journals, vol. 9, no. 3, pp. 19-20, 2025.