Manuscript submitted April 24, 2025; accepted July 14, 2025; published September 19, 2025
Abstract—Artificial Intelligence (AI) models are deployed across a wide variety of sectors, yet their decision-making often reflects societal biases that mirror existing social inequalities. This review examines how bias forms in AI systems, the consequences of unfair decisions, and strategies for mitigating these problems. Following PRISMA guidelines, the study systematically reviews the existing scholarly literature. Three primary themes emerge: biases originating in the data, biases introduced through algorithmic design and control, and ethical concerns encountered during deployment. The analysis sets the stage for future research that prioritizes fairness-aware AI models, along with autonomous governance frameworks and interdisciplinary methods for bias reduction.
Keywords—Artificial Intelligence (AI) bias, fairness in AI, algorithmic discrimination, machine learning ethics, decision-making, AI governance, bias mitigation
Cite: Dinesh Deckker, Subhashini Sumanasekara, "Bias in AI Models: Origins, Impact, and Mitigation Strategies," Journal of Advances in Artificial Intelligence, vol. 3, no. 3, pp. 234-247, 2025. doi: 10.18178/JAAI.2025.3.3.234-247
Copyright © 2025 by the authors. This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.