Indicators of Human Rights Risks in Automated Decision-Making Systems

Authors

    Elham Torkaman, Department of International Law, University of Arak, Arak, Iran
    Kaveh Ranjbaran *, Department of Political Science, University of Arak, Arak, Iran. kaveh.ranjbaran39@yahoo.com

Keywords:

Automated decision-making systems, human rights risks, algorithmic bias, transparency, accountability, qualitative research, AI governance

Abstract

This study aims to identify key indicators of human rights risks inherent in automated decision-making systems in order to inform governance frameworks and safeguard fundamental rights. A qualitative research design was employed, involving semi-structured interviews with 23 participants in Tehran who possessed expertise or experience in automated decision-making and human rights. Sampling continued until theoretical saturation was reached. Data were transcribed verbatim and analyzed thematically using NVivo software to extract the main themes, subthemes, and concepts relating to human rights risk indicators. Three primary themes emerged: (1) Legal and Regulatory Indicators, highlighting gaps in compliance, accountability, transparency, ethical guidelines, and oversight mechanisms; (2) Technical and Algorithmic Risks, including algorithmic bias and discrimination, error and inaccuracy, automation limitations, security vulnerabilities, and accountability gaps; and (3) Social and Human Impact Factors, focusing on access and inclusion, psychological effects, threats to fundamental rights, and user awareness. Together, these indicators illustrate the multidimensional nature of human rights risks in automated decision-making systems and underscore the need for integrated legal, technical, and social safeguards. The study highlights the complexity of protecting human rights in automated decision-making contexts and the necessity of comprehensive indicators for assessing and mitigating such risks. Developing robust governance mechanisms that address transparency, fairness, privacy, accountability, and inclusivity is critical. The findings provide a foundational framework for policymakers, technologists, and civil society to guide ethical AI deployment and strengthen human rights protections in digital environments.




Published

2024-04-01

Submitted

2024-02-12

Revised

2024-03-16

Accepted

2024-03-29

How to Cite

Torkaman, E., & Ranjbaran, K. (2024). Indicators of Human Rights Risks in Automated Decision-Making Systems. Journal of Human Rights, Law, and Policy, 2(2), 1-8. https://jhrlp.com/index.php/jhrlp/article/view/31
