by
Muhammad Siraj Khan
CyJurII Theorist
2 August 2025
Keywords: Human Rights, AI & Law, Algorithmic Discrimination, Bias.
Introduction
The rise of digital governance and artificial intelligence (AI) has not merely affected human decision-making processes; AI systems now shape them as contributors or, at times, co-adjudicators. Government functionaries and other organizations rely heavily on data processed by AI algorithms and formulate their future policies accordingly. It is argued, and to a certain extent fairly so, that AI, as an automated system operating without emotion, promises neutrality and objectivity in performing its tasks. Yet, reviewing its behavior, some legal experts view it as both a symbol of progress and a site of peril. Once heralded as a solution to the unpredictability of human decision-making, algorithmic systems were imagined to usher in an era of fairness, consistency, and impartiality. But as these systems permeate our legal institutions, public services, financial sectors, and hiring processes, a new and deeply troubling pattern has emerged: algorithmic discrimination, a quiet, often invisible erosion of human rights through code.
What Is Algorithmic Discrimination?
Algorithmic discrimination refers to outcomes produced by AI or automated decision-making systems that systematically disadvantage individuals based on protected characteristics such as race, gender, religion, disability, or socio-economic status. These outcomes are not typically the result of deliberate intent. Rather, they are the consequence of historical bias embedded in training datasets, flawed model assumptions, inadequate oversight, and lack of diversity among developers (Barocas & Selbst, 2016).
Unlike traditional discrimination, which can be contested in courtrooms or confronted in the public square, algorithmic discrimination often hides in black-box models, mathematical logic, and proprietary codebases. Its harm is diffuse, cumulative, and frequently untraceable to a single decision point, which makes it all the more insidious.
Real-World Consequences: From Credit to Crime
Numerous examples illustrate how algorithms, if not monitored thoroughly, can entrench existing social inequalities. To give a better idea of how AI algorithms may cause discrimination, some real-life examples are presented in the following paragraphs.
In a landmark study, Buolamwini and Gebru (2018) revealed that commercial facial analysis systems misclassified darker-skinned women at error rates as high as 35%, whereas the error rates for lighter-skinned men were below 1%. Such discriminatory behavior can have serious consequences: in 2020, Robert Williams, a Black man from Detroit, was wrongfully arrested on the basis of a facial recognition mismatch, a case that sparked widespread outcry and drew the attention of the American Civil Liberties Union (ACLU). He sued the department concerned, and the resulting case, Robert Williams v. Detroit Police Department, remains a pivotal example of algorithm-driven rights violations.
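To make the idea of a disparate error rate concrete, the following is a minimal sketch, in Python, of the kind of per-group audit underlying such findings: collect the system's predictions alongside ground truth, group them by demographic cohort, and compare error rates. The records, group labels, and values below are hypothetical placeholders, not the Gender Shades data.

```python
from collections import defaultdict

# Hypothetical audit records: (demographic_group, true_label, predicted_label).
# These rows are illustrative placeholders, not data from any real system.
records = [
    ("darker_skinned_women", "female", "male"),
    ("darker_skinned_women", "female", "female"),
    ("darker_skinned_women", "female", "male"),
    ("lighter_skinned_men", "male", "male"),
    ("lighter_skinned_men", "male", "male"),
    ("lighter_skinned_men", "male", "male"),
]

errors = defaultdict(int)
totals = defaultdict(int)
for group, truth, predicted in records:
    totals[group] += 1
    if predicted != truth:
        errors[group] += 1

# A large gap between these per-group rates is exactly the kind of
# disparity the Gender Shades study reported.
for group in totals:
    print(f"{group}: error rate {errors[group] / totals[group]:.0%} "
          f"over {totals[group]} samples")
```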
Another example is 'predictive policing', prevalent in some US states. Prosecution departments, and in some states even judges, use tools that predict the future behavior of criminal offenders. If an offender's risk score is high, the laws of some states allow the court to consider it as a factor in sentencing. Lum & Isaac (2016) have noted that such tools may disproportionately direct law enforcement to communities of color, not because of present criminal activity but because of historic over-policing, a feedback loop that reinforces racial bias. The well-known case of Loomis v. Wisconsin (2016) is instructive here:
Eric Loomis was convicted of an offence, and it emerged that in sentencing him the trial court had also treated him as a potential future offender. This finding was based on COMPAS, software used by the prosecution to make such predictions. Loomis challenged the decision in the Wisconsin Supreme Court, arguing that, because he had no access to the software's mathematical formulas, he could not adequately defend himself, and that this defeated his right to due process. It is worth noting here that an accused has the right to confront the evidence presented against him at trial. The software's manufacturer, a private company, maintained before the court that the scoring algorithm was a trade secret and hence strictly confidential. The Wisconsin Supreme Court nevertheless upheld the use of the proprietary risk-assessment algorithm, despite the challenge that the defendant had no meaningful way to contest the tool's assessment, which had influenced his sentencing. The ruling was controversial, especially given that the COMPAS system had been found to carry significant racial bias (Angwin et al., 2016, ProPublica).
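The feedback loop Lum and Isaac describe can be illustrated with a small simulation. The sketch below is a toy model under stated assumptions, not their actual analysis: it gives two districts identical true crime rates but seeds one with more historical arrest records. Because patrols follow past records and new records follow patrols, the data gap persists and widens even though behavior in the two districts is the same.

```python
import random

random.seed(0)

# Toy assumptions: both districts have the SAME underlying crime rate,
# but district A starts with more recorded incidents due to historic
# over-policing. All numbers here are illustrative.
true_crime_rate = 0.1
recorded = {"A": 60, "B": 40}   # hypothetical historical arrest counts
patrols_per_round = 100

for _ in range(10):
    total = sum(recorded.values())
    for district in list(recorded):
        # Patrols are allocated in proportion to past recorded incidents...
        patrols = round(patrols_per_round * recorded[district] / total)
        # ...and crime is only recorded where officers patrol, so the
        # district with more history accumulates more new records.
        new_records = sum(random.random() < true_crime_rate
                          for _ in range(patrols))
        recorded[district] += new_records

# District A's recorded "crime" pulls further ahead of B's, although
# the true crime rates were identical throughout.
print(recorded)
```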
Eubanks (2018) has noted that automated lending systems have been shown to penalize applicants from minority zip codes, even when controlling for income and employment history. These systems often rely on proxies like education level, neighborhood, or social media behavior, which correlate with race and class.
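A minimal sketch of this proxy effect, using invented numbers, is shown below: the scoring rule never sees the protected attribute, yet because ZIP code correlates with group membership, approval rates diverge sharply between groups with identical incomes.

```python
# Hypothetical applicants: the model is never shown the "group" field.
applicants = [
    {"group": "minority", "zip_code": "11111", "income": 55_000},
    {"group": "minority", "zip_code": "11111", "income": 60_000},
    {"group": "majority", "zip_code": "22222", "income": 55_000},
    {"group": "majority", "zip_code": "22222", "income": 60_000},
]

# Historical default rates by ZIP code, themselves shaped by decades of
# segregation and redlining, act as the proxy (illustrative figures).
zip_risk = {"11111": 0.30, "22222": 0.05}

def approve(applicant):
    # The rule looks only at the proxy, never at race or class directly.
    return zip_risk[applicant["zip_code"]] < 0.10

for group in ("minority", "majority"):
    pool = [a for a in applicants if a["group"] == group]
    rate = sum(approve(a) for a in pool) / len(pool)
    print(f"{group}: approval rate {rate:.0%}")  # 0% vs 100% on equal incomes
```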
Human Rights in the Algorithmic Age
International human rights law provides a robust foundation for addressing discrimination. Article 7 of the Universal Declaration of Human Rights (1948) affirms that "All are equal before the law and are entitled without any discrimination to equal protection of the law." Similarly, Article 26 of the International Covenant on Civil and Political Rights (ICCPR) mandates states to ensure equality and prohibit discrimination in law and in practice. But the digital turn challenges the enforceability of these guarantees. When decision-making power is outsourced to private algorithms, operated by opaque platforms, and shaped by market logic, the mechanisms of accountability weaken. It is for these reasons that courts, particularly in developed nations, are now grappling with the question of how constitutional protections extend to algorithmic processes.
The UN Human Rights Council (2021) has warned that such technologies may "undermine the right to privacy, the right to equality and non-discrimination, and even the right to life and liberty." The Council has called for regulatory frameworks that promote algorithmic transparency, human oversight, and rights-based design.
Toward a Legal and Ethical Response
There is now growing concern about how such challenges should be addressed. It is suggested that, to respond to this growing crisis, legal scholars and policymakers should pursue the following lines of action:
Transparency Mandates: Laws should require companies and governments to disclose the logic, data sources, and impact assessments of algorithms used in sensitive decision-making areas such as criminal justice, healthcare, and employment. The European Union's AI Act and Digital Services Act may be regarded as pioneering steps in this direction.
Algorithmic Impact Assessments (AIAs): Just as the principle of Environmental Impact Assessment has evolved and been incorporated into international instruments and domestic legislation, it has been suggested that an Algorithmic Impact Assessment be made a mandatory precondition of approval, so that any potential discriminatory effects of a system are identified before its deployment in the field (a minimal example of such a check is sketched after this list).
An Opportunity for Appeal and Human Oversight: Decision systems must include avenues for meaningful human oversight and appeal, especially when life-altering outcomes are at stake. This practice has already been adopted by some major social media organizations.
Development of Accountability Mechanisms: Just as individuals in our societies can sue for defamation or employment discrimination, legal regimes must evolve to allow for algorithmic accountability. This includes creating legal standing for those harmed by automated decisions, even when the harm emerges from a statistical pattern rather than a single act of prejudice.
Diversity in Development: Because values and ethical standards are not uniform across the globe, diverse and interdisciplinary teams should be deployed in the development of AI, helping to ensure that systems reflect a broad set of values and lived experiences.
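As a sketch of the quantitative check mentioned under Algorithmic Impact Assessments above, the snippet below applies the 'four-fifths' disparate impact ratio, a threshold borrowed from US employment discrimination practice. The audit counts and the 0.8 cut-off are illustrative assumptions, not a prescribed AIA standard.

```python
def disparate_impact_ratio(outcomes):
    """outcomes maps group -> (favorable_decisions, total_decisions)."""
    rates = {g: fav / tot for g, (fav, tot) in outcomes.items()}
    return min(rates.values()) / max(rates.values())

# Hypothetical pre-deployment audit counts for a hiring model.
audit = {"group_a": (45, 100), "group_b": (28, 100)}

ratio = disparate_impact_ratio(audit)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the conventional four-fifths threshold (an assumption here)
    print("Potential adverse impact: refer to human review before deployment.")
```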
In conclusion, the digital age has not diminished the urgency of human rights; it has simply transformed the terrain on which they are contested. Today, the courtroom is just as likely to be a line of code as a bench of judges. And while algorithms may lack intent, they wield immense power, power that must be brought under check and scrutiny, both ethical and legal. It is argued that because AI carries no intent, an act of bias or discrimination cannot be attributed to it. This is, in my humble opinion, a superficial view of reality. As we move deeper into the age of artificial intelligence, one truth remains constant: technology is not neutral. It is a creature of the human mind, trained and governed by humans, and it ultimately mirrors the world we have unless we demand that it help build the world we want. The future of human rights will depend not only on state actors and civil society, but also on how we choose to govern algorithms, and on whether we are willing to hold the invisible accountable.
References
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine Bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
Barocas, S., & Selbst, A. D. (2016). Big Data's Disparate Impact. California Law Review, 104(3), 671–732.
Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research, 81, 1–15.
Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin's Press.
Lum, K., & Isaac, W. (2016). To Predict and Serve? Significance, 13(5), 14–19.
United Nations. (1948). Universal Declaration of Human Rights, Article 7.
United Nations Human Rights Council. (2021). The Right to Privacy in the Digital Age: Report of the United Nations High Commissioner for Human Rights.
Case Law Referred
Robert Williams v. Detroit Police Department, ACLU (2020)
Loomis v. Wisconsin, 881 N.W.2d 749 (Wis. 2016)