Bias in Artificial Intelligence (AI) occurs when an AI model reflects prejudices such as sexism or racism. This bias can be obvious, like a biometric surveillance system that exclusively flags people of color as potential criminals, or it can be subtle, surfacing in a single word choice that feels violating.
Such bias often stems from the data used to train the AI. Datasets may lack diversity, labeling instructions may be too narrow, or data annotators’ personal biases may influence outcomes. In this way, AI systems extend real-world prejudices and reinforce hierarchies of power rooted in colonialism.
Context also matters; for instance, an AI tasked with recommending job candidates might favor applicants based on generalized biases instead of objective qualifications. It is therefore important to build AI datasets for the specific contexts in which they will be used, and to audit a system's outputs in that context; a minimal audit of this kind is sketched below.
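As a purely illustrative sketch of such an audit, the snippet below compares how often a hiring recommender selects candidates from each demographic group and flags large gaps using the common "four-fifths" heuristic. The column names, threshold, and toy data are assumptions for the example, not details of any real system.

```python
# Minimal sketch of a selection-rate audit for a hiring recommender.
# Column names ("group", "recommended") and the 0.8 threshold are
# illustrative assumptions, not taken from any specific system.
import pandas as pd

def selection_rate_audit(df: pd.DataFrame, group_col: str = "group",
                         outcome_col: str = "recommended",
                         threshold: float = 0.8) -> pd.DataFrame:
    """Compare each group's selection rate to the best-off group's rate.

    A ratio below `threshold` (the common "four-fifths" heuristic) flags
    a possible disparate impact worth investigating in context.
    """
    rates = df.groupby(group_col)[outcome_col].mean()   # share recommended per group
    ratios = rates / rates.max()                        # relative to the best-off group
    return pd.DataFrame({
        "selection_rate": rates,
        "ratio_to_best": ratios,
        "flagged": ratios < threshold,
    })

# Toy example: group B is recommended far less often than group A.
candidates = pd.DataFrame({
    "group": ["A"] * 50 + ["B"] * 50,
    "recommended": [1] * 30 + [0] * 20 + [1] * 12 + [0] * 38,
})
print(selection_rate_audit(candidates))
```

A check like this only surfaces a statistical gap; deciding whether that gap reflects unfair bias still requires the contextual and community judgment discussed below.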
Addressing these invisible power systems is challenging, especially with complex AI models that are difficult to retrain. Developing ethical, diverse training datasets is crucial, as is engaging diverse communities, particularly those without direct experience of these systems, to envision their own experiences of fairness in AI. In this way, AI governance can become more inclusive, equitable, and representative.
Sources
Ruha Benjamin – Link to Video
Dr. Martin Pérez Comisso. “Introduction to Socio-Technical Systems?” Masters in Design for Responsible Artificial Intelligence, 15 Oct 2024, Elisava, Online.