Thursday, January 15, 2026

AI: Ethical and Legal Implications of Algorithmic Bias in Artificial Intelligence


With the growing use and development of artificial intelligence (AI) across many areas of everyday life, many may think that the concept of human bias is no longer relevant. The idea of having a tool that makes decisions instead of humans, with little to no risk of biased results, is quite appealing, especially if it means humanity no longer needs to worry about making wrong or, more accurately, subjective (biased) decisions. Artificial intelligence has become deeply incorporated into our daily lives, from personalized recommendations on streaming platforms to sophisticated algorithms guiding financial investments. Its integration into critical sectors such as healthcare, transportation, and law enforcement underscores the transformative power of AI technologies. In healthcare, for example, AI systems are already being used to assist in medical diagnoses. Topol (2019) highlights how deep neural networks used in Indian hospitals to interpret key chest X-ray findings have demonstrated accuracy comparable to that of four radiologists.

Many people believe that AI systems, being free from human emotions and bias, provide objective and neutral decisions. This widespread trust in AI’s objectivity arises from the perception that machines operate solely on data and algorithms, leaving human emotions and stereotypes out entirely. Many assume that by removing human judgment from decision-making processes, we eliminate the space for bias. Even though this looks great on paper, the reality is far more complex. After all, algorithmic systems are created by none other than humans and are trained on datasets that are products of human work. Whether we like it or not, these datasets contain a certain level of bias.

Miceli, Posada, and Yang, as cited in Nah, Luo, and Joo (2024), emphasize that these biases are not accidental but rather originate from power imbalances among the data workers, developers, and corporate stakeholders involved in data creation and production. As a result, AI systems cannot completely avoid bias and may unintentionally perpetuate or even exacerbate these imbalances.

This phenomenon, known as algorithmic bias, raises important ethical and legal questions, especially given that AI already significantly impacts key areas of human life such as healthcare, transportation, and finance (Nićin et al., 2024). In transportation and logistics, for instance, the introduction of AI has transformed decision-making, leading to improvements in efficiency, safety, and sustainability through applications such as predictive analytics, optimized routing, enhanced security, real-time tracking, and smarter inventory management. However, the presence of bias in AI systems raises ethical concerns, as it can result in unfair treatment, discrimination, and loss of trust in technological systems. Legally, it challenges current anti-discrimination frameworks, necessitating a reassessment of how laws govern autonomous systems.

This paper focuses on the ethical and legal impacts of bias in artificial intelligence. It explores the origins of this bias, its effects on society, and the risks it brings. It begins with an exploration of the origins of algorithmic bias, examining the data and human factors that contribute to this phenomenon. We then analyze the ethical and societal consequences, drawing on ethical theories such as utilitarianism and justice. Next, the paper examines anti-discrimination rules, along with ethical guidelines and legal frameworks, to evaluate their effectiveness in promoting transparency and fairness in AI development. The goal is to ensure that AI contributes positively to society while minimizing potential negative impacts, even though this is a challenging task. By addressing these challenges, we call for efforts to ensure that artificial intelligence becomes a tool that promotes fairness and equality, rather than reinforcing existing inequalities.

Algorithmic bias

Algorithmic bias emerges from various data-related issues that influence the development and functioning of AI systems. Historical data bias, deeply rooted in systemic inequalities, skews training datasets and, consequently, the outcomes of AI decision-making. Bellamy et al. (2018) argue that bias manifests as a systematic error, where unwanted biases lead to privileged groups receiving undue advantages, while underprivileged groups face systematic disadvantages. For instance, disparities measured by statistical parity difference (which quantifies the gap in favorable outcomes between privileged and unprivileged groups) and disparate impact (the ratio of these outcomes) highlight the extent of such inequalities. In addition, sampling bias exacerbates inaccuracies when datasets fail to represent the diversity of the populations they aim to model, causing decision outcomes to misalign with reality. Furthermore, flaws in data collection and labeling embed these biases more deeply, reinforcing pre-existing inequalities within datasets. Addressing these challenges requires comprehensive tools like the AI Fairness 360 toolkit, which offers metrics and algorithms for detecting, understanding, and mitigating such biases. As Bellamy et al. (2018) emphasize, this toolkit simplifies the transformation of raw, inconsistent data into structured formats, enabling detailed and standardized analysis. These approaches underscore the critical need for proactive evaluation of data sources and the implementation of measures to address inherent biases, thereby fostering fairness and inclusivity in AI systems.
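To make these two metrics concrete, the following minimal sketch computes statistical parity difference and disparate impact from a vector of binary decisions and group labels. It is an illustrative example in plain Python rather than the AI Fairness 360 implementation itself; the variable names and toy data are assumptions made for demonstration.

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """P(favorable | unprivileged) - P(favorable | privileged); 0 indicates parity."""
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def disparate_impact(y_pred, group):
    """P(favorable | unprivileged) / P(favorable | privileged); 1 indicates parity."""
    return y_pred[group == 0].mean() / y_pred[group == 1].mean()

# Toy decisions: 1 = favorable outcome (e.g., loan approved); group 1 = privileged.
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0, 1, 0])
group  = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])

print(statistical_parity_difference(y_pred, group))  # 0.2 - 0.8 = -0.6
print(disparate_impact(y_pred, group))               # 0.2 / 0.8 = 0.25
```

In this toy dataset the unprivileged group receives the favorable outcome far less often, which both metrics flag: the difference is well below zero and the ratio well below one.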

Data factors contributing to bias

Factors related to data are major contributors to bias in AI systems. From the way the target outcome is represented to the way datasets are handled, biases may originate at many different levels. When setting the goal, proxy variables are frequently used instead of direct information because precise characteristics are hard to obtain. However, this substitution comes with built-in constraints, because proxy attributes (e.g., using zip codes as proxies for race) can encode historical or societal biases. The training data regularly carries problems such as unseen cases, mismatched distributions between training and production data, and manipulated or stale data. These asymmetries may skew model outcomes, because models trained on biased or incomplete data struggle to generalize to real-world applications. Historical data, a common foundation for training, inevitably embeds past prejudices, perpetuating existing inequities in model predictions. Ultimately, the failure to detect or address irrelevant correlations within datasets further exacerbates these challenges, highlighting the importance of rigorous data review and monitoring practices in mitigating bias (Roselli et al., 2019).
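As a hedged illustration of the proxy problem, the short simulation below uses entirely synthetic data (the 85% alignment rate and all variable names are invented for illustration) to show that decisions driven by a proxy attribute, such as a neighborhood indicator, still split along protected-group lines even when the protected attribute is never given to the model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic world: a protected attribute and a correlated proxy (e.g., neighborhood).
protected = rng.integers(0, 2, n)                                  # 0/1 protected group
proxy = np.where(rng.random(n) < 0.85, protected, 1 - protected)   # aligned 85% of the time

# The proxy alone largely reconstructs the protected attribute.
print(f"proxy matches protected attribute: {(proxy == protected).mean():.0%}")

# If decisions are driven by the proxy ("approve if in the favored neighborhood"),
# outcomes still differ sharply between protected groups.
decision = proxy
print(f"approval rate, group 1: {decision[protected == 1].mean():.0%}")
print(f"approval rate, group 0: {decision[protected == 0].mean():.0%}")
```

Dropping the protected attribute from the feature set is therefore not enough; the correlation has to be found and handled in the data itself.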

Historical bias

Historical bias refers to biases and socio-technical failures that already exist in the world and that can influence the data generation process no matter how well sampling and feature selection are performed. An example is a 2018 image search for women CEOs that returned relatively few images of women. At that time, only about 5% of Fortune 500 CEOs were female, so the results showed proportionately more men, reflecting a real-world fact. Whether search algorithms should mirror such realities has been much debated. (Mehrabi et al., 2021)

Population bias

Population bias occurs when the usage patterns or demographics of a platform’s users differ from those of the originally intended population, rendering the data unrepresentative. A typical example is that women more frequently use popular sites such as Pinterest, Facebook, and Instagram, while men are more active on popular forums such as Reddit or Twitter. The example illustrates how user demographics shape the data, and further examples and statistics document variation in social media use along the lines of gender, race, ethnicity, and parental education level. (Mehrabi et al., 2021)

Self-Selection bias

Self-selection bias is a specific type of sampling or selection bias in which people decide to take part in a study or survey on their own. A typical example might be an opinion poll measuring enthusiasm for a given political candidate, where the most passionate supporters tend to participate more, and results are skewed in favor of their preferences. (Mehrabi et al., 2021)
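A small simulation (with invented numbers) makes the self-selection effect concrete: if supporters of a candidate are more likely to answer the poll than everyone else, the polled support overstates the true support.

```python
import numpy as np

rng = np.random.default_rng(42)
population = 100_000
true_support = 0.40                           # 40% of the population supports the candidate

supports = rng.random(population) < true_support
# Assumed response rates: enthusiastic supporters answer the poll far more often.
respond_prob = np.where(supports, 0.60, 0.20)
responds = rng.random(population) < respond_prob

print(f"true support:   {true_support:.0%}")
print(f"polled support: {supports[responds].mean():.0%}")   # roughly 67%, inflated by self-selection
```

The arithmetic behind the inflation is simply P(support | responded) = (0.40 × 0.60) / (0.40 × 0.60 + 0.60 × 0.20) ≈ 0.67.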

Social Bias

Social bias arises when people’s judgments are swayed by what they believe others think or do. For example, a reviewer who intended to rate something low may, after seeing overwhelmingly positive reviews, adjust the score upward, assuming the initial judgment was too harsh. (Mehrabi et al., 2021)

Behavioral bias

Behavioral bias stems from variations in user behavior across different platforms, contexts, or datasets. For example, differences in how emojis are represented on various platforms can alter how people interpret and react to messages, sometimes even leading to miscommunication. (Mehrabi et al., 2021)

Temporal bias

Temporal bias arises from changes in user behavior, or in the population itself, over time. A typical example occurs on Twitter when users who initially attach a certain hashtag to get their topic discussed later continue the discussion without the hashtag, reflecting a change in behavior over time. (Mehrabi et al., 2021)

Content production bias

Content production bias is driven mainly by structural, lexical, semantic, or syntactic variations in the content created by users. For example, research has shown that language use differs between genders and age groups, as well as across countries and communities. Such variation underscores the diversity in how content is produced. (Mehrabi et al., 2021)

The outlined subparagraphs provide detailed insights into the various types of biases affecting artificial intelligence systems, using the framework by Mehrabi et al. to explain their nature and impact. Each type of bias—historical, population, self-selection, social, behavioral, temporal, and content production—is clearly defined and supported with examples, illustrating how societal, demographic, or contextual factors can lead to skewed or misrepresented data. This segmentation not only aids in understanding the distinct characteristics of each bias but also highlights the importance of addressing them during AI development. Together, these subparagraphs establish a comprehensive foundation for examining how biases infiltrate different stages of data generation and usage, ultimately distorting AI outputs and carrying significant societal implications.

Human factors contributing to bias

Human factors play a significant role in contributing to algorithmic bias, as developers’ personal biases and assumptions influence AI systems. “Understanding each research contribution, how, when, and why to use it is challenging even for experts in algorithmic fairness” (Bellamy et al., 2018). This highlights how developers’ decision-making and their subjective judgments can shape fairness outcomes. Bias is also introduced through design choices that unintentionally create disparities. For example, “there is no one best metric relevant for all contexts. It must be chosen carefully, based on subject matter expertise and worldview” (Bellamy et al., 2018). This indicates that even technical decisions, such as metric selection, can inadvertently perpetuate bias. However, the cited work does not provide explicit details about the influence of corporate stakeholders or power asymmetries, leaving a gap in the discussion of broader structural factors affecting fairness.
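The claim that “there is no one best metric” can be illustrated with a toy example (all numbers invented): the very same predictions can satisfy statistical parity while violating equal opportunity, i.e., equal true-positive rates across groups.

```python
import numpy as np

# Two groups of 10 individuals; y_true is the deserved outcome, y_pred the model's decision.
group  = np.array([0] * 10 + [1] * 10)
y_true = np.array([1] * 8 + [0] * 2 + [1] * 2 + [0] * 8)
y_pred = np.array([1] * 5 + [0] * 5 + [1] * 5 + [0] * 5)

for g in (0, 1):
    selection_rate = y_pred[group == g].mean()
    tpr = y_pred[(group == g) & (y_true == 1)].mean()   # true-positive rate
    print(f"group {g}: selection rate = {selection_rate:.2f}, TPR = {tpr:.2f}")

# Both groups are selected at a 0.50 rate, so statistical parity holds,
# yet the TPR is 5/8 ≈ 0.62 for group 0 and 1.00 for group 1,
# so equal opportunity is violated by the same predictions.
```

Which of these criteria matters more depends on the context and worldview of whoever deploys the system, which is exactly why metric selection is itself a value-laden design choice.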

Ethical and societal implications

Ethical implications

AI algorithms raise ethical questions because of their potential to be biased to a certain degree, with effects that reach far beyond purely technical matters. They bring moral concerns about fairness, justice, and accountability, exposing weaknesses in how automated systems handle these values. It is therefore important to define and understand these values in relation to automated systems. Evaluating algorithmic bias from ethical perspectives such as utilitarianism or theories of justice can contribute to an understanding of its wider effects and implications, while highlighting its potential impact on society and decision-making processes. These perspectives promote deeper reflection on prioritizing ethics in AI development and use, and they underline the need for active steps to fix these issues. This reflection underscores the necessity of extensive ethical guidelines that align the purposes of AI systems with human values and aspirations.

Utilitarianism Perspective

From a utilitarian perspective, an ethical AI system is one that ensures the greatest happiness and the least suffering for everyone involved. Utilitarian ethics seeks the greatest good for the largest number of people, which makes fairness and inclusion in algorithms essential. Biased AI systems cause unfair distribution of resources and opportunities, which lowers overall well-being in society. For example, a biased algorithm used in hiring and training could unfairly favor candidates based on gender or ethnicity, creating ongoing negative effects for individuals, organizations, and society as a whole. These biases do not just lead to economic problems; they also harm the mental health of people who are unfairly excluded. This further reduces overall well-being, increases unhappiness, and worsens social divides, which is a serious ethical problem. With this in mind, the likely solution is deliberate effort to ensure fairness in AI systems, such as using more diverse datasets and applying methods to reduce bias. These are utilitarian steps that help ensure AI improves social well-being (Floridi & Cowls, 2019).

Justice and Fairness Theories

In his theory of justice, John Rawls emphasizes equal basic liberties, together with structuring social and economic inequalities to the benefit of the least advantaged. Applied to AI, the same principle requires designing and implementing algorithms in ways that produce fairness and equity. In practice, however, algorithms frequently fail to uphold such principles, embedding and amplifying existing societal inequalities. For example, AI recruitment tools trained on historical data often treat the patterns they have learned as correct and thereby propagate discriminatory treatment of candidates from minority groups. This infringes on the Rawlsian principle of fair equality of opportunity and erodes public trust in AI systems. The lack of transparency in many AI decision-making processes further complicates the ethical issues involved. Efforts to bring AI systems into line with theories of justice and fairness include creating and prioritizing regulatory frameworks that attend to equal treatment and moral rights, developing explainable AI systems, and implementing robust accountability measures (Binns, 2018).

Societal Impact

Algorithmic bias in AI affects our society in many ways, and it strongly shapes how people view, trust, and use artificial intelligence. A closer look at its effects reveals unfair treatment, weakened trust in AI systems, and the risk of deepening existing inequalities and socially constructed differences, all of which push public perception in a negative direction. Because AI can worsen these inequalities, we must consider the role of those who create and manage AI systems and hold them to their responsibility to make these systems fair and inclusive. This also brings up the need to think carefully about how AI systems are designed and tested to avoid unintended harm. Only by naming these potential problems can we do something about them and build the technologies so that they benefit everyone, because if they benefit only a certain group, they are very likely to cause harm to someone else.

Reinforcement of Existing Inequalities

Generative AI has the potential to both exacerbate and mitigate existing socioeconomic inequalities. If not carefully managed, these systems may amplify social divides by perpetuating stereotypes and limiting opportunities for marginalized groups. For instance, automated credit-scoring systems may disadvantage applicants from underserved communities by relying on biased financial histories, further restricting their access to economic resources. In education, AI-driven personalized learning tools could widen the achievement gap if they are less effective for students from diverse backgrounds due to biased training data. These examples illustrate how AI, despite its potential for innovation, can inadvertently reinforce existing inequalities. Addressing these issues requires a proactive approach to designing and deploying AI systems, including rigorous testing for bias, inclusive design practices, and ongoing evaluation of their societal impact (Roy, 2017).

Erosion of Trust in Technology

Distrust of technology, especially among users of AI, originates from the inherent bias within these systems and their lack of transparency. While AI is supposed to improve decision-making, it often carries societal biases “learned” from the data it is trained on. The situation is worsened by the fact that most machine learning systems are “black boxes,” delivering results without a clear explanation of how decisions are made, leaving much ambiguity and doubt in users’ minds about their fairness or reliability. These AI systems are usually built on opaquely operating technologies, often with little coordination and few ethical rules to keep them accountable, which damages their credibility in vital social concerns like healthcare and justice. Designing fairer, more transparent, and better documented AI is the way to win back trust. Trustworthy AI minimizes risks, raises public confidence, and reduces these challenges. (Choung et al., 2022)
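One common way to open up such a black box, at least partially, is post-hoc explanation. The sketch below uses permutation importance, a model-agnostic technique that measures how much a model's test accuracy drops when each feature is shuffled; the synthetic dataset and model choice are assumptions made purely for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic classification task standing in for a real decision problem.
X, y = make_classification(n_samples=1000, n_features=5, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record how much held-out accuracy degrades.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: mean importance = {imp:.3f}")
```

Such a summary does not fully explain individual decisions, but it gives stakeholders a first, auditable view of which inputs actually drive the model, which is one concrete step toward the transparency discussed above.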

The rapid advancement of artificial intelligence (AI) has led to the integration of AI systems into various aspects of society, raising both opportunities and ethical concerns. As these technologies grow in complexity and influence, establishing legal and ethical frameworks becomes increasingly critical. These frameworks aim to ensure that AI systems operate transparently, fairly, and accountably, safeguarding societal interests while fostering innovation. Among the key components of these frameworks are promoting transparency in AI systems and ensuring fairness in their development and deployment. Each of these areas presents distinct challenges and requires targeted strategies for effective implementation.

Transparency in AI entails making the decision-making processes of these systems understandable and traceable. This is not only an ethical imperative but also a practical necessity for fostering trust among users and stakeholders. Meanwhile, fairness addresses the need to mitigate biases that may inadvertently harm specific groups, ensuring equitable outcomes across diverse applications. By addressing these two core principles, legal and ethical frameworks aim to establish AI as a socially beneficial tool rather than a source of inequality or confusion.

Accountability Mechanisms

The accountability ecosystem brings together diverse stakeholders—corporations, industry players, civil society, and governments—to address the challenges of accountability in AI, particularly concerning algorithmic bias. This framework assigns each group specific mechanisms, such as internal audits and external accreditations, to promote ethical AI use and transparency. Addressing algorithmic bias involves clearly identifying which algorithms to focus on, categorizing biases (e.g., gender bias), and setting appropriate metrics for fairness. A notable example comes from a case study in the gambling industry, where predictive models improved through innovative methods like ensemble modeling, which reduced disparities. This approach underscores the importance of integrating oversight and technical solutions, highlighting how interconnected processes within the accountability ecosystem can strengthen ethical AI practices across various levels. (Percy et al., 2021)

The interplay between legal requirements and practical implementation highlights a significant divergence in addressing algorithmic fairness. Traditional non-discrimination laws, particularly in EU contexts, are structured to address individual cases of discrimination retrospectively, which poses enforcement challenges in identifying and proving algorithmic biases. On the other hand, practical approaches in computer science focus on proactive bias mitigation during the design stage, utilizing fairness metrics and technical solutions. However, these methods face inherent limitations in addressing normative decisions and require clear guidance on selecting suitable fairness criteria for varied social contexts. The AI Act attempts to bridge these gaps by mandating fairness interventions at the model design stage, shifting non-discrimination responsibilities to earlier stages of AI system development. While the Act seeks to operationalize fairness through technical requirements, it necessitates interdisciplinary collaboration to reconcile the broader societal and technical challenges inherent in implementing fairness standards. (Deck et al., 2024)

Mitigation of AI bias

Mitigating bias in artificial intelligence means addressing problems that arise during the design, model training, and implementation of an AI system. The aim is to reduce potential discrimination or unequal treatment stemming from inherent biases in the data, the algorithms, or the decision-making frameworks themselves. Mindful of the different phases in which bias creeps in, much past work has focused on systematic evaluation and ongoing improvement of these technologies as the route to fairer AI.

Mitigating algorithmic bias is fundamental to achieving fairness and inclusivity in AI systems. Bias is often present in the data on which AI models are trained; those biases reflect societal inequalities, and that is what leads to unfair outcomes. A multi-pronged approach is needed to improve data diversity, refine algorithms during development, and ensure fairer outputs. Other emerging proactive measures include introducing bias awareness during system design and establishing clear accountability mechanisms. Collaboration with affected communities and making the public more familiar with AI technologies are also part of these strategies for achieving fair AI development. In the long run, all of these strategies help ensure that AI technologies are applied responsibly and inclusively. (Wang, 2022)
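As one hedged example of a pre-processing intervention of this kind, the sketch below computes sample weights in the spirit of the well-known reweighing technique: each (group, label) cell is weighted by its expected probability divided by its observed probability, so that group membership and the favorable outcome become statistically independent in the weighted training data. The simplified formula, toy data, and function name are illustrative assumptions, not a drop-in replacement for a library implementation.

```python
import numpy as np

def reweighing_weights(group, y):
    """Weight each sample by P(group) * P(label) / P(group, label) so that
    group and outcome are independent in the weighted training data."""
    weights = np.empty(len(y), dtype=float)
    for g in np.unique(group):
        for label in np.unique(y):
            cell = (group == g) & (y == label)
            expected = (group == g).mean() * (y == label).mean()
            observed = cell.mean()              # cells assumed non-empty in this toy data
            weights[cell] = expected / observed
    return weights

# Toy training data: the favorable label (1) is rarer in group 0.
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y     = np.array([0, 0, 0, 1, 1, 1, 1, 0])

print(reweighing_weights(group, y))
# Under-represented cells (e.g., group 0 with label 1) receive weights above 1,
# over-represented cells receive weights below 1; the result can be passed to any
# learner that accepts per-sample weights.
```

Comparable reweighing steps exist in fairness toolkits, but even this bare-bones version shows the core idea: correct the training distribution before the model ever sees it.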

Conclusion

As AI technology becomes more deeply embedded in society, the moral and legal challenges brought about by its use must be addressed. Algorithmic bias, whether introduced through data or through human factors, deepens inequalities, destroys trust, and causes unfair outcomes. Careful consideration should therefore be given to ensuring that AI systems are built on principles of transparency, fairness, and accountability.

Fairness and inclusiveness must be embedded in the standards for developing AI, while laws themselves need to be reformed to deal with the one-of-a-kind challenges posed by decision-making without human intervention. Various measures, including fairness audits and explainable AI, can help ensure trustworthiness and equity in outcomes. For instance, fairness audits can evaluate AI systems to identify harmful biases before deployment, ensuring they do not amplify existing societal inequalities, and explainable AI allows users and stakeholders to understand how decisions are made, fostering trust and accountability.
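In practice, a fairness audit can start as a simple automated pre-deployment gate. The sketch below (the function name, data, and use of the commonly cited four-fifths rule of thumb as the threshold are illustrative assumptions) flags a model for review when the unprivileged group's favorable-outcome rate falls below 80% of the privileged group's rate.

```python
import numpy as np

def audit_disparate_impact(y_pred, group, threshold=0.8):
    """Flag the model when disparate impact falls below the chosen threshold
    (0.8 mirrors the informal four-fifths rule)."""
    rate_unpriv = y_pred[group == 0].mean()
    rate_priv = y_pred[group == 1].mean()
    ratio = rate_unpriv / rate_priv
    return ratio, ratio >= threshold

# Held-out decisions checked before deployment.
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0, 1, 1])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

ratio, passed = audit_disparate_impact(y_pred, group)
print(f"disparate impact = {ratio:.2f} -> {'PASS' if passed else 'REVIEW REQUIRED'}")
# Here the ratio is 0.40 / 0.80 = 0.50, so the audit would send the model back for review.
```

A real audit would of course examine several metrics, subgroups, and time periods, but even a check this small makes bias a release-blocking concern rather than an afterthought.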

AI could help society a great deal, but realizing that potential takes deliberate effort to uncover and correct existing biases. Without such efforts, the risks of widening inequalities and eroding public trust will only grow. A fair and accountable AI could become a tool not for perpetuating divisions but for building equality and trust. If done right, AI could transform not only industries but also the way we approach fairness and justice in society.

List of references

  • Nah, S., Luo, J., & Joo, J. (2024). Mapping scholarship on algorithmic bias: Conceptualization, empirical results, and ethical concerns. International Journal of Communication, 18, 548-569.
  • Topol, E. J. (2019). High-performance medicine: the convergence of human and artificial intelligence. Nature Medicine, 25(1), 44-56. https://doi.org/10.1038/s41591-018-0300-7
  • Nićin, V., Nićin, S., & Mirkov, M. (2024). Impact of AI Technologies on Operations of Small and Medium Transport Businesses. Communications – Scientific Letters of the University of Zilina, 26(3), E12-24. https://doi.org/10.26552/com.C.2024.038
  • Bellamy, R.K., Dey, K., Hind, M., Hoffman, S.C., Houde, S., Kannan, K., Lohia, P.K., Martino, J., Mehta, S., Mojsilovic, A., Nagar, S., Ramamurthy, K.N., Richards, J.T., Saha, D., Sattigeri, P., Singh, M., Varshney, K.R., & Zhang, Y. (2018). AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias. ArXiv, abs/1810.01943.
  • Roselli, D., Matthews, J., & Talagala, N. (2019). Managing bias in AI. In Companion Proceedings of the 2019 World Wide Web Conference (pp. 539-544). https://doi.org/10.1145/3308560.3317590
  • Floridi, L., & Cowls, J. (2019). A Unified Framework of Five Principles for AI in Society. Harvard Data Science Review, 1(1). https://doi.org/10.1162/99608f92.8cd550d1
  • Binns, R. (2018). Fairness in Machine Learning: Lessons from Political Philosophy. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, Proceedings of Machine Learning Research, 81, 149-159. https://proceedings.mlr.press/v81/binns18a.html
  • Roy, M. (2017). Cathy O’Neil. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown Publishers, 2016. 272p. Hardcover, $26 (ISBN 978-0553418811). College & Research Libraries, 78(3), 403. https://doi.org/10.5860/crl.78.3.403
  • Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6), 1-35. https://doi.org/10.1145/3457607
  • Wang, K. (2022). Mitigation of Algorithmic Bias to Improve AI Fairness. https://doi.org/10.25776/8ktt-kk62
  • Choung, H., David, P., & Ross, A. (2022). Trust in AI and Its Role in the Acceptance of AI Technologies. International Journal of Human–Computer Interaction, 39(9), 1727–1739. https://doi.org/10.1080/10447318.2022.2050543
  • Percy, C., Dragicevic, S., Sarkar, S., & Garcez, A. S. D. (2021). Accountability in AI: From Principles to Industry-specific Accreditation. arXiv [Cs.CY]. Retrieved from http://arxiv.org/abs/2110.09232