Socio-technical Systems

Technology and society are inextricably linked, forming a single system of networked, interconnected components. These networks include not only visible elements like hardware, software, and infrastructure, but also less visible factors such as impacts on society, culture, and human rights. A Socio-Technical System (STS) refers to this integrated approach, considering both the technical and social dimensions. […]

AI Model

AI models are designed to detect patterns in large datasets using algorithms, with their output depending on the quality and type of data they are trained on. For example, Large Language Models (LLMs) take text as input and generate text-based responses based on learned patterns. Training an AI involves feeding vast amounts of labeled or […]
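As a rough illustration of the idea, the sketch below trains a tiny text classifier on hypothetical labeled examples. The data and names are illustrative, and this is not how production LLMs are built, but it shows how a model's output depends on the patterns in its training data.

```python
# Minimal sketch: training a small model on labeled data so it can
# detect patterns and predict labels for new inputs. This is a toy
# illustration of the general idea, not how an LLM is actually built.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical labeled training data: text inputs with sentiment labels.
texts = ["great product", "terrible service", "love it", "awful experience"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

# Turn raw text into numeric features the model can learn from.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(texts)

# Fit the model: it learns patterns linking word features to labels.
model = LogisticRegression()
model.fit(features, labels)

# The model's output depends entirely on what it saw during training.
print(model.predict(vectorizer.transform(["love this product"])))  # expected: [1]
```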

RLHF (Reinforcement Learning from Human Feedback)

To align the output of Large Language Models (LLMs) with users’ values and expectations, a human-in-the-loop feedback process is essential. This process involves a team of evaluators who review the model’s output across various tasks and use cases, providing rankings based on criteria like helpfulness, fairness, and clarity. These rankings can range from “best” to […]
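As a hedged sketch of how such rankings can become a training signal, the snippet below computes a simple pairwise preference loss for a hypothetical reward model that scores two responses. The function name and scores are illustrative, not a specific RLHF library API.

```python
# Minimal sketch of how human rankings can be turned into a training
# signal (a pairwise preference loss), assuming a reward model that
# scores responses. Names and values here are illustrative only.
import math

def preference_loss(score_preferred, score_rejected):
    """Bradley-Terry style loss: lower when the preferred response
    is scored higher than the rejected one."""
    return -math.log(1 / (1 + math.exp(-(score_preferred - score_rejected))))

# Hypothetical reward-model scores for two responses a human evaluator ranked.
print(preference_loss(2.0, 0.5))   # small loss: model agrees with the ranking
print(preference_loss(0.5, 2.0))   # large loss: model disagrees with the ranking
```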

Model Card

AI model documentation, also known as a “Model Card,” provides essential information about a model’s characteristics, performance, and limitations. The purpose of a Model Card is to make the AI model transparent, accessible, and understandable. A well-crafted Model Card is written in clear language and typically covers several key areas. As Google explains: […]
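As an illustrative sketch only, a Model Card can also be captured as structured data. The fields and values below are common examples (intended use, limitations, metrics), not the exact list the article enumerates.

```python
# A minimal sketch of a model card captured as structured data. The
# fields and values shown are hypothetical examples, not a standard schema.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    model_name: str
    model_version: str
    intended_use: str
    limitations: str
    training_data: str
    evaluation_metrics: dict = field(default_factory=dict)

card = ModelCard(
    model_name="example-sentiment-classifier",   # hypothetical model
    model_version="1.0",
    intended_use="Sentiment analysis of short English product reviews.",
    limitations="Not evaluated on non-English text or long documents.",
    training_data="Public product reviews (illustrative description).",
    evaluation_metrics={"accuracy": 0.91, "f1": 0.89},  # illustrative numbers
)
print(card.model_name, card.evaluation_metrics)
```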

ELIZA Effect

The ELIZA Effect is often associated with the first symbolic AI chatbot, developed in 1966 by Joseph Weizenbaum. The program was designed to simulate a psychotherapist, with the user typing their thoughts and feelings into a computer. The system would then respond with therapist-style questions based on changing the end or the beginning […]
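A minimal sketch of the underlying technique, assuming a single hand-written pattern, is shown below. It illustrates reflective pattern matching, not Weizenbaum's original program.

```python
# A toy illustration of the ELIZA idea: simple pattern matching that
# reflects the user's own words back as a question.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text):
    # Swap first-person words for second-person ones ("my" -> "your").
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(user_input):
    match = re.match(r"i feel (.*)", user_input, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    return "Please tell me more."

print(respond("I feel anxious about my job"))
# -> "Why do you feel anxious about your job?"
```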

Scaling Problem

AI models are growing ever larger, driven by the belief that bigger is better. However, research shows that larger models are not always more effective. For example, a small model might perform poorly due to noise (irrelevant or mislabeled data), causing it to hallucinate or make incorrect predictions. Simply increasing the model’s size doesn’t resolve this issue; […]
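As a rough, assumption-laden sketch of this point, the snippet below corrupts a share of training labels and fits a smaller and a larger model on the same noisy data. The dataset and model choices are illustrative only.

```python
# A rough sketch: label noise in the training data limits what a model
# can learn from it, and adding capacity does not remove the noise.
# Compare how much (or how little) the extra capacity helps here.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Corrupt 30% of the training labels to simulate noisy, mislabeled data.
rng = np.random.default_rng(0)
flip = rng.random(len(y_train)) < 0.3
y_noisy = np.where(flip, 1 - y_train, y_train)

for n_trees in (10, 500):  # "small" vs "large" model capacity
    model = RandomForestClassifier(n_estimators=n_trees, random_state=0)
    model.fit(X_train, y_noisy)
    print(n_trees, "trees, test accuracy:", model.score(X_test, y_test))
```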

“attention”

Attention in Machine Learning (ML) refers to a mechanism that allows a model to focus selectively on certain parts of the input data, rather than treating all elements equally. This helps the model prioritize the most relevant information for the task. For example, in the sentence “Cat on a mat,” the attention mechanism might focus […]
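A minimal sketch of scaled dot-product attention, using random toy embeddings for the tokens of “Cat on a mat,” is shown below; the embedding size and values are illustrative.

```python
# Scaled dot-product attention with NumPy, on toy random embeddings.
# It shows how the mechanism assigns different weights to different
# input positions instead of treating all tokens equally.
import numpy as np

tokens = ["Cat", "on", "a", "mat"]
d = 8                                    # toy embedding size
rng = np.random.default_rng(0)
Q = rng.normal(size=(len(tokens), d))    # queries
K = rng.normal(size=(len(tokens), d))    # keys
V = rng.normal(size=(len(tokens), d))    # values

# Attention weights: softmax of scaled query-key similarity scores.
scores = Q @ K.T / np.sqrt(d)
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)

# Each output position is a weighted mix of the values; the weights show
# which tokens the model "attends to" for each position.
output = weights @ V
print(np.round(weights, 2))   # rows sum to 1; larger entries = more focus
```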

algorithmic accountability

Algorithmic accountability means that an algorithm is accompanied by documentation outlining its decision-making process, evaluated against key criteria: Transparency, Auditability, Ethical Considerations, Risk Mitigation, Accountability Mechanisms, and Stakeholder Engagement. These criteria are crucial in high-stakes domains such as criminal justice, hiring and employment, healthcare, finance and credit, and government decision-making. However, holding algorithms accountable raises complex ethical questions. […]

Ethical protocols

Ethical protocols are sets of guidelines designed to establish best practices for ensuring that artificial intelligence (AI) systems are developed, deployed, and governed according to ethical criteria. These criteria often include: […] While the primary goal may be social impact and responsibility, it’s important to recognize that the underlying motivations can vary. In some cases, ethical […]

Explainability

Effective documentation of artificial intelligence (AI) models is essential for transparency and accountability. To be truly effective, the documentation must clearly explain how the model makes decisions and the social impacts of those decisions, including considerations of fairness, risks, security, and transparency. This documentation should be written in a way that is accessible to various […]
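As one illustrative sketch of surfacing “how the model makes decisions” for such documentation, the snippet below uses permutation feature importance from scikit-learn on a standard dataset. The model and dataset are stand-ins; real documentation would pair the numbers with plain-language interpretation for its audience.

```python
# One way to gather material on "how the model makes decisions":
# permutation feature importance. The dataset and model are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# How much does test accuracy drop when each feature is shuffled?
result = permutation_importance(model, X_test, y_test, n_repeats=5, random_state=0)
for name, score in sorted(
    zip(data.feature_names, result.importances_mean), key=lambda p: -p[1]
)[:5]:
    print(f"{name}: {score:.3f}")   # the five most influential features
```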