AI Interaction Design · Human-Centred AI · Explainability · Consent Design · Vulnerable Users · EU AI Act

Building Trust in AI Interactions

Design proposals for helping vulnerable users navigate AI with confidence. From elderly banking to children's education.

Explore the full project at trustbridge.design
Abstract representation of trust between humans and AI systems
4 Case Studies · 3 Vulnerable Groups · EU AI Act Art. 13 Aligned · WCAG AA Accessible

Today's AI interfaces are designed for tech-savvy users. This leaves millions of elderly people, children, and other vulnerable groups exposed to harm.

Through four case studies I designed and documented the interaction layer that determines whether AI builds trust or destroys it. The cases share a common finding: the technology was rarely the problem. The problem was the moment of contact between the system and the person it was supposed to serve.

The Current Risks

Cognitive Overload

Complex AI interfaces overwhelm elderly users, leading to anxiety, errors, and eventual abandonment of essential digital services.

Opacity of Decisions

When AI makes decisions about finances or education without explanation, users lose trust and may respond with poor decisions of their own.

Inappropriate Reliance

Children may over-trust AI homework assistants, hindering critical thinking development and creating dependency.

Privacy Vulnerabilities

Vulnerable users often unknowingly share sensitive information with AI systems lacking proper safeguards.

"The people most affected by AI decisions are almost never the people who designed, trained, or tested the system. The gap between those two groups is where trust breaks."

Four Interaction Principles

Each case study was developed through a combination of interface prototyping, research framework design, and testing with representative users. The Senior-First Design Principle ran through all four: if a 72-year-old who didn't grow up with smartphones can understand the interaction, the design passes. If they can't, the system is not accessible; it is biased.

1. Legibility

Every AI output must be understandable by the person it affects — not just the person who built it. Plain language, separated decisions, and visible reasoning are not nice-to-haves. They are the interaction.

2. Deliberate Friction

Speed is the enemy of informed consent. Interactions designed to be skipped — cookie banners, pre-checked boxes, one-tap agreements — are not interactions. They are the removal of choice. Good design slows people down at exactly the right moment.

3. Separated Decisions

Bundled choices are no choice at all. Each data use, each AI recommendation, each consent clause must stand alone — so that agreement means something and refusal is genuinely possible.

4. Human Override

Every AI interaction must have a visible, accessible exit. A way to say no. A way to ask a human. A way to understand before deciding. GDPR Article 22 requires this for automated decisions — good design makes it feel natural, not defensive.

The Case Studies

Deep dives into solving trust challenges for vulnerable user groups. Each case includes interface designs, interview frameworks, and evidence of impact.

AI Banking for Older Adults

Building confidence in digital finance

73% of adults over 65 express anxiety about AI-powered banking

The interaction failure isn’t the AI — it’s that recommendations arrive without explanation, in interfaces that assume confidence the user doesn’t have.

Interface design mockup for AI Banking for Older Adults
Progressive Disclosure · Trust Indicators · Family Co-Pilot Mode · Adaptive Pacing
Design response: Explanation-first interfaces, visible confidence levels, human escalation at every step.
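
To make "visible confidence levels" concrete: a minimal TypeScript sketch of translating a raw model score into the kind of plain-language indicator this case proposes. The names and thresholds are illustrative, not the TrustBridge prototype code.

```typescript
// Illustrative only: maps a raw model confidence score to a
// plain-language trust indicator with a human-escalation flag.
type TrustIndicator = {
  label: string;        // what the user sees on the card
  explanation: string;  // one plain-language sentence, no jargon
  escalate: boolean;    // surface the "talk to a person" option prominently
};

function toTrustIndicator(confidence: number): TrustIndicator {
  if (confidence >= 0.9) {
    return {
      label: "High confidence",
      explanation: "This suggestion matches patterns we see very often.",
      escalate: false,
    };
  }
  if (confidence >= 0.6) {
    return {
      label: "Moderate confidence",
      explanation: "This is probably right, but worth double-checking.",
      escalate: false,
    };
  }
  return {
    label: "Low confidence",
    explanation: "We are not sure. Please talk to a person before acting.",
    escalate: true, // human escalation at every step, per the design response
  };
}
```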

AI Education for Children

Homework help that teaches, not just answers

68% of parents worry children use AI to complete work rather than learn

The interaction problem is that AI homework tools are designed to satisfy the question, not develop the thinking behind it.

Interface design mockup for AI Education for Children
Scaffolded Assistance · Learning Verification · Parent Dashboards · Age-Appropriate Boundaries
Design response: Scaffolded assistance — AI asks questions instead of giving answers, with checkpoints before each step.
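
What "scaffolded assistance" means in practice, as a hedged TypeScript sketch: the tutor holds a sequence of questions with checkpoints and never advances until the child has done the thinking. Field names here are hypothetical.

```typescript
// Hypothetical scaffold policy: questions first, answers never handed over.
interface ScaffoldStep {
  prompt: string;      // the question the AI asks the child
  checkpoint: string;  // what the child must demonstrate before moving on
}

const exampleScaffold: ScaffoldStep[] = [
  { prompt: "What is the question asking you to find?",
    checkpoint: "Child restates the problem in their own words." },
  { prompt: "Which method have you used on problems like this before?",
    checkpoint: "Child names a strategy, even a wrong one." },
  { prompt: "Try the first step yourself. What did you get?",
    checkpoint: "Child attempts a step before any hint appears." },
];

// The tutor only ever exposes the next question, not the solution.
function nextPrompt(steps: ScaffoldStep[], completed: number): string | null {
  return completed < steps.length ? steps[completed].prompt : null;
}
```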

AI Data Consent

Informed choices, not dark patterns

91% of users tap "I Agree" without reading what they're agreeing to

Consent forms average 4,200 words at a postgraduate reading level, placed at the moment of highest impatience. The interaction is designed to fail.

Interface design mockup for AI Data Consent
Deliberate Friction · Grade 7 Plain Language · Separated Decisions · Equal Yes and No
Design response: AI consent companion — translates each clause, separates each decision, progress-tracks completion.
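
The separated-consent model, sketched as a data structure. A minimal TypeScript illustration, assuming a front end that renders one clause at a time; these types are not the shipped schema.

```typescript
// Refusal is a first-class state with the same weight as agreement.
type Decision = "accepted" | "declined" | "pending";

interface ConsentClause {
  id: string;
  original: string;      // the legal text as written
  plainLanguage: string; // the Grade 7 translation shown to the user
  decision: Decision;
}

// Progress-tracked completion: how much of the form has been decided.
function progress(clauses: ConsentClause[]): number {
  if (clauses.length === 0) return 1;
  const decided = clauses.filter(c => c.decision !== "pending").length;
  return decided / clauses.length;
}

// No bundling, no pre-checked boxes: every clause needs an explicit choice.
function canSubmit(clauses: ConsentClause[]): boolean {
  return clauses.every(c => c.decision !== "pending");
}
```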

AI Health Dialogue

Explainability for patients & clinicians

2-way: explainability needed in both directions, between patient and clinician

Clinical AI generates scores and recommendations neither patient nor clinician can fully explain to each other. The consultation becomes a performance of understanding rather than shared decision-making.

Interface design mockup for AI Health Dialogue
Patient Plain Language · Clinician Reasoning · Shared Brief · Override Controls
Design response: Bidirectional explainability — plain language for patients, reasoning transparency for clinicians, shared pre-consultation brief.
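
One way the shared pre-consultation brief could be shaped: a single artifact with two registers, sketched below in TypeScript. Field names are illustrative, not the project's schema.

```typescript
// One finding, two audiences, one shared document.
interface SharedBrief {
  finding: string;        // what the model flagged
  patientView: string;    // plain-language meaning, plus questions to ask
  clinicianView: {
    drivingFeatures: string[]; // the inputs that moved the score
    confidence: number;        // uncertainty made discussable, not hidden
  };
  overridden: boolean;    // clinician judgment can replace the output
}
```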

The Interaction Layer

Across all four cases, the same structural solution emerged: an interaction layer that sits between the AI system and the human user, translating outputs into something the person can actually act on.

  • Legibility Layer: AI outputs rewritten at Grade 7 reading level, with source reasoning visible on request. Not a summary — a translation.
  • Friction Architecture: Deliberate slowdowns at high-stakes moments. Progress indicators. Confirmation steps. A “pause and ask someone” option that doesn’t feel like failure.
  • Choice Separation: No bundled consent. No pre-checked boxes. Each decision is its own moment, with equal visual weight given to Yes and No.
  • Rights Surfacing: GDPR Article 7 and Article 22 rights appear at the exact point in the interaction where they apply — not buried in a policy page.
  • Senior-First Validation: Every interaction tested with users over 65. If the experience requires digital confidence the user doesn’t have, the design fails.
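
A minimal sketch of that layer in TypeScript. Every name here (RawOutput, interactionLayer, the stakes flag) is illustrative rather than production code; the point is the shape, not the implementation.

```typescript
// What the model produces.
interface RawOutput {
  decision: string;
  reasoning: string;   // source reasoning, shown only on request
  confidence: number;
}

// What the person receives after the interaction layer has done its work.
interface LegibleOutput {
  plainLanguage: string;          // Legibility: a translation, not a summary
  showReasoning: () => string;    // reasoning visible on request
  applicableRights: string[];     // Rights Surfacing, at the point of use
  requiresConfirmation: boolean;  // Friction Architecture at high stakes
  humanOverride: () => void;      // an exit that never looks like failure
}

function interactionLayer(
  raw: RawOutput,
  translate: (text: string) => string, // Grade 7 plain-language translator
  stakes: "low" | "high",
  escalateToHuman: () => void,
): LegibleOutput {
  return {
    plainLanguage: translate(raw.decision),
    showReasoning: () => raw.reasoning,
    applicableRights: stakes === "high" ? ["GDPR Art. 7", "GDPR Art. 22"] : [],
    requiresConfirmation: stakes === "high",
    humanOverride: escalateToHuman,
  };
}
```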

With and Without the Trust Layer

Without interaction design:
  • AI output presented without explanation
  • User does not understand what it means
  • No plain language available
  • No way to refuse or ask questions
  • User taps agree or abandons
  • Result: trust broken, adoption fails, compliance risk

With TrustBridge interaction layer:
  • AI output translated into plain language
  • Each decision separated and explained
  • Friction slows the moment that matters
  • Human override visible and accessible
  • User makes an informed, genuine choice
  • Result: trust built, adoption works, EU AI Act compliant

The Interaction Flow

Every TrustBridge interaction follows the same five-step pattern regardless of domain:

1. Output: AI makes a decision or recommendation
2. Translate: plain-language explanation generated
3. Slow down: deliberate friction at the decision point
4. Separate: each choice is its own moment
5. Override: human control always visible
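
The ordering itself can be enforced rather than hoped for. A small TypeScript sketch with illustrative names: interactions may only move forward through the five steps, and the human override is reachable from every one of them.

```typescript
type Step = "output" | "translate" | "slowdown" | "separate" | "override";

const FLOW: Step[] = ["output", "translate", "slowdown", "separate", "override"];

// Which steps are legal next? Only the next one in sequence, plus
// "override", because human control stays visible at every step.
function nextSteps(current: Step): Step[] {
  if (current === "override") return []; // terminal: a human has taken over
  const i = FLOW.indexOf(current);
  const forward = i < FLOW.length - 1 ? [FLOW[i + 1]] : [];
  return Array.from(new Set<Step>([...forward, "override"]));
}
```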

Where This Applies

The four case studies are evidence for a pattern that applies across any regulated domain where AI makes decisions that affect real people.

Financial Services

Credit decisions, fraud flags, investment recommendations — all require explainability and override.

Healthcare

Diagnostic AI, risk scoring, treatment pathways — bidirectional explainability between patient and clinician.

Education

AI tutoring, assessment tools, learning analytics — designed to develop thinking, not replace it.

Public Sector

Benefit eligibility, identity verification, service access — where exclusion by design is a legal and ethical failure.

Data & Privacy

Consent flows, data sharing, retention policies — where informed choice is a legal requirement, not a design preference.

Any AI Product

If your AI makes a decision that affects a real person, the interaction layer is not optional from August 2026.

Outcome

  • Four interaction design systems — each documented with prototypes, principles, and testable frameworks
  • Senior-First methodology validated — interaction patterns tested with adults 65+, children 10–16, and clinical stakeholders
  • EU AI Act Article 13 alignment — transparency and explainability requirements designed in from the start, not retrofitted
  • WCAG AA compliance — all interface proposals meet accessibility standards throughout
  • Open research published — trustbridge.design available as a resource for organisations and peer designers
  • Cross-domain pattern established — the same five-step interaction model applies across finance, health, education, and public sector

Reflection

TrustBridge started from a single observation: the people most affected by AI systems are almost never in the room when those systems are designed. Elderly adults, children, patients — they appear in user research documents, if at all, as edge cases or accessibility footnotes.

What the four case studies demonstrated is that these groups are not edge cases. They are the most accurate diagnostic of whether an AI interaction was designed for real people or for an imaginary average user. When an interaction fails a 68-year-old retired teacher, the problem is not the teacher. The problem is the design.

"Designing for the most demanding user doesn't create a simpler product. It creates an honest one."

The research also reinforced a practical argument: organisations that invest in interaction design at the human-AI boundary see better adoption, fewer helpdesk escalations, and, from August 2026, stronger EU AI Act compliance evidence. Trust is not a soft value. It is measurable, designable infrastructure.

Platform & Research

TrustBridge is published as an open design research portfolio at trustbridge.design. Each case study includes:

  • Interface design proposals with interactive prototypes
  • Research interview frameworks for testing with real users
  • Design principles grounded in EU AI Act and GDPR requirements
  • Plain-language templates organisations can adapt for their own products

The research was developed alongside the Trust by Design governance framework — TrustBridge provides the interaction-level evidence for the systemic arguments made at trust-by-design.org.

How This Was Made

I practice what I preach. This project was built with AI at every stage, and I want you to see how.

Vibe Coding

The code for this portfolio and the TrustBridge site was written through AI-assisted development. I direct the architecture, logic, and design system; the AI accelerates the build.

AI-Generated Visuals

Images, videos, Figma screens, and interactive prototypes were generated with AI tools. Every visual was art-directed and refined by me to serve a design purpose.

Interactive AI Interfaces

I can build AI chat interfaces that are fully interactive and connected to live LLMs. Not mockups — real conversational AI products.

AI Systems Knowledge

I understand how RAG pipelines, embeddings, and AI models are built under the hood. This lets me design for what AI actually does, not what marketing says it does.

Why disclose this? Because a trust designer who hides their own process isn't one. AI is my medium — I use it to build, to prototype, and to think. The design decisions, the ethical framing, and the human insight are mine. The tools are whatever gets the job done honestly.

Want to be a trust designer too?

We're looking for partners who believe AI should work for everyone. Get in touch to explore how we can collaborate on building trustworthy AI experiences together.