Dialnet


A review of bias and fairness in artificial intelligence

    1. [1] Universidad Politécnica de Madrid, Madrid, Spain

    2. [2] Universidade do Minho, Braga (São José de São Lázaro), Portugal

  • Published in: IJIMAI, ISSN-e 1989-1660, Vol. 9, No. 1, 2024, pp. 5-17
  • Language: English
  • Abstract
    • Automating decision systems has introduced hidden biases into the use of artificial intelligence (AI). Consequently, explaining these decisions and assigning responsibility for them has become a challenge, and a new field of research on algorithmic fairness has emerged. In this area, detecting biases and mitigating them is essential to ensure fair, discrimination-free decisions. This paper makes three contributions: (1) a categorization of biases and how they are associated with the different phases of an AI model’s development (including the data-generation phase); (2) a review of fairness metrics for auditing data and the AI models trained on it (considering model-agnostic approaches to fairness); and (3) a novel taxonomy of procedures for mitigating biases in the different phases of an AI model’s development (pre-processing, training, and post-processing), together with transversal actions that help to produce fairer models.
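As a minimal sketch of the kind of auditing the abstract refers to, the following computes one widely used, model-agnostic fairness metric, statistical parity difference (the gap in positive-prediction rates between two groups defined by a protected attribute). This is an illustrative assumption, not the specific metric set proposed in the paper; all names and data below are hypothetical.

```python
def statistical_parity_difference(y_pred, protected):
    """P(y_hat = 1 | group 0) - P(y_hat = 1 | group 1).

    A value of 0 indicates parity between the two groups;
    larger magnitudes indicate a larger disparity.
    """
    group0 = [y for y, g in zip(y_pred, protected) if g == 0]
    group1 = [y for y, g in zip(y_pred, protected) if g == 1]
    rate0 = sum(group0) / len(group0)
    rate1 = sum(group1) / len(group1)
    return rate0 - rate1

# Hypothetical binary predictions for eight individuals in two groups.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(statistical_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```

Because the metric only needs the model's predictions and the protected attribute, it can audit any classifier after the fact, which is what makes it model-agnostic.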

