AI models are designed to detect patterns in large datasets using algorithms, and their output depends on the quality and type of data they are trained on. For example, Large Language Models (LLMs) take text as input and generate text-based responses based on learned patterns. Training an AI model involves feeding it vast amounts of labeled or unlabeled data so it can recognize patterns and make predictions. An image classifier, for instance, learns to distinguish between categories (e.g., “cat” vs. “not cat”) by processing thousands of labeled images to identify key features.
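The "cat vs. not cat" idea above can be sketched as a toy supervised classifier. This is an illustrative simplification, not real image processing: instead of pixels, each "image" is reduced to a single invented feature score, so the core idea (learning a decision rule from labeled examples) stays visible. All names and numbers here are made up for illustration.

```python
# Toy sketch of supervised learning: "cat" vs. "not cat".
# Each example is (feature_value, label); the feature is an invented
# stand-in for whatever a real classifier would extract from an image.

labeled_data = [
    (0.9, "cat"), (0.8, "cat"), (0.7, "cat"),
    (0.2, "not cat"), (0.3, "not cat"), (0.1, "not cat"),
]

def train_threshold(data):
    """Learn a decision threshold: the midpoint between class averages."""
    cats = [x for x, label in data if label == "cat"]
    others = [x for x, label in data if label == "not cat"]
    return (sum(cats) / len(cats) + sum(others) / len(others)) / 2

def classify(x, threshold):
    """Apply the learned rule to a new, unseen example."""
    return "cat" if x >= threshold else "not cat"

threshold = train_threshold(labeled_data)
print(classify(0.85, threshold))  # high feature score -> "cat"
print(classify(0.15, threshold))  # low feature score  -> "not cat"
```

A real classifier would learn thousands of parameters from pixel data rather than one threshold from one feature, but the structure is the same: labeled examples in, a decision rule out.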
AI models improve over time through iterative refinement, often using trial and error to adjust internal parameters. In Natural Language Processing (NLP), this enables models to generate more coherent, contextually relevant responses. AI can be categorized into three main learning methods: Supervised Learning (using labeled data), Unsupervised Learning (finding patterns in data without labels), and Reinforcement Learning (learning through rewards and penalties). Advanced approaches, such as deep learning, employ neural networks with multiple layers to process complex data.
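The "iterative refinement" described above can be shown in miniature: a minimal sketch, assuming a single parameter `w` and invented data, where the model repeatedly nudges `w` to reduce its prediction error (gradient descent on squared error). Real models adjust millions or billions of parameters the same basic way.

```python
# Minimal sketch of iterative parameter refinement: fit y = w * x by
# repeatedly adjusting w in the direction that reduces the average
# squared prediction error. Data and learning rate are invented.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # underlying rule: y = 2x

w = 0.0              # initial guess for the parameter
learning_rate = 0.05

for step in range(200):
    # Average gradient of the squared error (w*x - y)^2 with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad  # adjust the parameter against the error

print(round(w, 3))  # w has converged close to the true value 2.0
```

Each pass through the loop is one refinement step: predict, measure the error, adjust. Training a deep network repeats exactly this cycle across many layers of parameters.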
Despite being described as “intelligent,” AI does not possess human intelligence; it cannot think or reason in the human sense. Furthermore, the concept of “intelligence” carries negative cultural connotations, often tied to colonial narratives of superiority. Responsible AI design must address the opacity of AI model decision-making and critically examine the metaphors of intelligence in AI discourse.
Reference Sources:
Cassie Kozyrkov. Making Friends with Machine Learning (MFML): Full Applied AI Lectures.
Eryk Salvaggio. “A Critical Intro to NLP & LLMs.” Masters in Design for Responsible Artificial Intelligence, Elisava, 5 Nov. 2024, online.