Summary of "Bias test to keep algorithms ethical"

Matt Reynolds

  • Computers are getting ethical. A new approach for testing whether algorithms contain hidden biases aims to prevent automated systems from perpetuating human discrimination. Matt Kusner and Chris Russell are part of a team that has developed a framework to identify and eliminate algorithmic bias. A fair algorithm, the team says, is one that makes the same decision about an individual regardless of demographic background. The team therefore maps out the variables in a data set and tests how each might skew the decision-making process; where there is evidence of bias, the researchers find a way to remove or compensate for it. They demonstrated the approach on data about police stops. First, the team considered all variables, including the skin color and appearance of the people stopped. Then they considered only data points tied to actual criminality, such as whether the person was found to be carrying a weapon and whether they were arrested. The comparison showed that police generally saw black and Hispanic men as more criminal than they did white men, stopping them more often, so a machine-learning analysis of these events might deduce that criminality is correlated with skin color. A sketch of this two-step test appears below.
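The following is a minimal sketch of the idea described above, not the authors' actual framework: it trains a model on all variables (including a demographic attribute), probes whether flipping that attribute changes the decision, and then retrains on only the criminality-related variables to compensate. The synthetic data, feature names, and the choice of logistic regression are all illustrative assumptions.

```python
# Illustrative sketch only: synthetic data and a flip-the-attribute probe,
# assumed for this example; not the team's published method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic stop records: one demographic attribute plus two variables
# tied to actual criminality (weapon found, arrest made).
protected = rng.integers(0, 2, n)   # demographic group (0 or 1)
weapon = rng.integers(0, 2, n)      # person was carrying a weapon
arrest = rng.integers(0, 2, n)      # person was arrested

# Biased historical labels: the stop decision depends partly on group
# membership, mimicking the skew the article describes.
stopped = ((weapon | arrest) | (protected & (rng.random(n) < 0.3))).astype(int)

# Step 1: model trained on ALL variables, demographic attribute included.
X_all = np.column_stack([protected, weapon, arrest])
m_all = LogisticRegression().fit(X_all, stopped)

# Fairness probe: flip the demographic attribute and count how often the
# decision changes. A fair model decides the same way regardless.
X_flip = X_all.copy()
X_flip[:, 0] = 1 - X_flip[:, 0]
changed = np.mean(m_all.predict(X_all) != m_all.predict(X_flip))
print(f"decisions that change when the attribute is flipped: {changed:.1%}")

# Step 2: compensate by training only on the criminality-related variables,
# so the demographic attribute cannot directly drive the decision.
X_fair = np.column_stack([weapon, arrest])
m_fair = LogisticRegression().fit(X_fair, stopped)
```

Dropping the protected attribute, as in step 2, removes only its direct influence; proxy variables correlated with it can still leak bias, which is why the article's broader framework tests every variable for such effects.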

