Algorithmic inequalities
high-risk conduct for human rights
Abstract
This article analyses the inequalities that arise from, or are reinforced by, artificial intelligence models whose main justification lies in the principle of predictive accuracy, and some of the responses that can be expected from the legal system to address them. This issue has mainly been approached through the right to privacy and data protection, on the one hand, or through the development of ethical codes, on the other. Here I adopt two different perspectives. The first is the risk-based approach, the framework in which the European regulation of artificial intelligence has recently been situated from a regulatory point of view; it examines the obligations linked to practices considered high-risk. The second is the perspective of anti-discrimination law, considering both the potential and the limits of this area of law. The aim of this article is to show that, for anti-discrimination law to respond to the most controversial practices of algorithmic models, it must overcome the binary vision of anti-discrimination theory and be interpreted as a legal theory oriented towards transforming social reality.
Copyright (c) 2022 Bartolomé de las Casas Human Rights Institute
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
The Gregorio Peces-Barba Human Rights Institute retains copyright of the published articles, reviews and news, and the source must be cited in any partial or total reproduction.
The documents are published under the Creative Commons 4.0 license: Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND 4.0).
Derechos y Libertades does not charge any fees for receiving, processing or publishing articles submitted by authors.
Funding data
Conselleria de Cultura, Educación y Ciencia, Generalitat Valenciana
Grant numbers GVPROMETEO2018-156