Building Trust in AI Interactions
Design proposals for helping vulnerable users navigate AI with confidence. From banking for older adults to children's education.
Explore the full project at trustbridge.design
Today's AI interfaces are designed for tech-savvy users. This leaves millions of elderly people, children, and other vulnerable groups exposed to harm.
Through four case studies I designed and documented the interaction layer that determines whether AI builds trust or destroys it. The cases share a common finding: the technology was rarely the problem. The problem was the moment of contact between the system and the person it was supposed to serve.
The Current Risks
Cognitive Overload
Complex AI interfaces overwhelm elderly users, leading to anxiety, errors, and eventual abandonment of essential digital services.
Opacity of Decisions
When AI makes decisions about finances or education without explanation, users lose trust and may make poor decisions in response.
Inappropriate Reliance
Children may over-trust AI homework assistants, which hinders the development of critical thinking and creates dependency.
Privacy Vulnerabilities
Vulnerable users often unknowingly share sensitive information with AI systems lacking proper safeguards.
"The people most affected by AI decisions are almost never the people who designed, trained, or tested the system. The gap between those two groups is where trust breaks."
Four Interaction Principles
Each case study was developed through a combination of interface prototyping, research framework design, and testing with representative users. The Senior-First Design Principle ran through all four: if a 72-year-old who didn't grow up with smartphones can understand the interaction, the design passes. If they can't, the system is not accessible — it is biased.
Legibility
Every AI output must be understandable by the person it affects — not just the person who built it. Plain language, separated decisions, and visible reasoning are not nice-to-haves. They are the interaction.
Deliberate Friction
Speed is the enemy of informed consent. Interactions designed to be skipped — cookie banners, pre-checked boxes, one-tap agreements — are not interactions. They are the removal of choice. Good design slows people down at exactly the right moment.
Separated Decisions
Bundled choices are no choice at all. Each data use, each AI recommendation, each consent clause must stand alone — so that agreement means something and refusal is genuinely possible.
Human Override
Every AI interaction must have a visible, accessible exit. A way to say no. A way to ask a human. A way to understand before deciding. GDPR Article 22 requires this for decisions based solely on automated processing; good design makes it feel natural, not defensive.
The Case Studies
Deep dives into solving trust challenges for vulnerable user groups. Each case includes interface designs, interview frameworks, and evidence of impact.
AI Banking for Older Adults
Building confidence in digital finance
The interaction failure isn’t the AI — it’s that recommendations arrive without explanation, in interfaces that assume confidence the user doesn’t have.

AI Education for Children
Homework help that teaches, not just answers
The interaction problem is that AI homework tools are designed to satisfy the question, not develop the thinking behind it.

AI Data Consent
Informed choices, not dark patterns
Consent forms average 4,200 words at a postgraduate reading level, placed at the moment of highest impatience. The interaction is designed to fail.

AI Health Dialogue
Explainability for patients & clinicians
Clinical AI generates scores and recommendations neither patient nor clinician can fully explain to each other. The consultation becomes a performance of understanding rather than shared decision-making.

The Interaction Layer
Across all four cases, the same structural solution emerged: an interaction layer that sits between the AI system and the human user, translating outputs into something the person can actually act on.
- •Legibility Layer: AI outputs rewritten at Grade 7 reading level, with source reasoning visible on request. Not a summary — a translation.
- •Friction Architecture: Deliberate slowdowns at high-stakes moments. Progress indicators. Confirmation steps. A “pause and ask someone” option that doesn’t feel like failure.
- •Choice Separation: No bundled consent. No pre-checked boxes. Each decision is its own moment, with equal visual weight given to Yes and No.
- •Rights Surfacing: GDPR Article 7 (consent) and Article 22 (automated decision-making) rights appear at the exact point in the interaction where they apply, not buried in a policy page.
- •Senior-First Validation: Every interaction tested with users over 65. If the experience requires digital confidence the user doesn’t have, the design fails.
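To make the structure concrete, here is a minimal sketch of how such a layer could be modelled in a TypeScript front end. The type names (`InteractionLayer`, `LegibilityLayer`, and so on) are illustrative assumptions made for this write-up, not part of the published TrustBridge materials.

```typescript
// Illustrative only: these types are not a published TrustBridge API.
// They sketch how the five elements of the interaction layer could be
// expressed as a typed configuration in a TypeScript front end.

type ReadingLevel = "grade7" | "grade9" | "expert";

interface LegibilityLayer {
  plainLanguage: string;      // the Grade 7 translation shown by default
  sourceReasoning?: string;   // full reasoning, revealed only on request
  readingLevel: ReadingLevel;
}

interface FrictionStep {
  kind: "confirmation" | "progress" | "pauseAndAsk";
  label: string;              // e.g. "Pause and ask someone you trust"
  blocking: boolean;          // high-stakes steps cannot be skipped
}

interface SeparatedChoice {
  id: string;
  question: string;           // one data use or recommendation per choice
  accepted: boolean | null;   // null until the person decides; never pre-checked
}

interface SurfacedRight {
  article: string;            // e.g. "GDPR Art. 7" or "GDPR Art. 22"
  summary: string;            // plain-language statement shown in context
}

interface InteractionLayer {
  legibility: LegibilityLayer;
  friction: FrictionStep[];
  choices: SeparatedChoice[]; // equal visual weight for Yes and No
  rights: SurfacedRight[];    // surfaced where they apply, not in a policy page
  humanOverride: { visible: true; contactLabel: string };
}
```

Modelling an undecided choice as `null` rather than `false` is what keeps pre-checked defaults out of the design: "not yet answered" is a distinct state from "refused".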
With and Without the Trust Layer
The Interaction Flow
Every TrustBridge interaction follows the same five-step pattern regardless of domain:
1. Output: the AI makes a decision or recommendation.
2. Translate: a plain-language explanation is generated.
3. Slow down: deliberate friction is applied at the decision point.
4. Separate: each choice is presented as its own moment.
5. Override: human control is always visible.
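As a rough illustration, the same pattern can be written as a small pipeline. The function and type names below are hypothetical and assume the shapes sketched earlier; they encode the ordering of the five steps, not any published TrustBridge code.

```typescript
// Hypothetical sketch of the five-step pattern as a typed pipeline.
// None of these names come from a published TrustBridge API; they only
// encode the ordering described above: Output -> Translate -> Slow down
// -> Separate -> Override.

interface AiOutput {
  decision: string;     // 1. Output: what the AI recommends or decides
  highStakes: boolean;  // drives how much friction the flow adds
}

interface TrustSteps {
  translate(output: AiOutput): string;        // 2. plain-language explanation
  slowDown(output: AiOutput): Promise<void>;  // 3. deliberate friction
  separate(output: AiOutput): string[];       // 4. one question per choice, unbundled
  showOverride(): void;                       // 5. keep the route to a human visible
}

async function present(output: AiOutput, steps: TrustSteps): Promise<string[]> {
  const explanation = steps.translate(output);

  // Friction is deliberate, not universal: only high-stakes moments slow down.
  if (output.highStakes) {
    await steps.slowDown(output);
  }

  // Each choice becomes its own question, never a bundled checkbox.
  const questions = steps.separate(output);

  // The override is rendered alongside everything else, not offered once and hidden.
  steps.showOverride();

  return [explanation, ...questions];
}
```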
Where This Applies
The four case studies are evidence for a pattern that applies across any regulated domain where AI makes decisions that affect real people.
Financial Services
Credit decisions, fraud flags, investment recommendations — all require explainability and override.
Healthcare
Diagnostic AI, risk scoring, treatment pathways — bidirectional explainability between patient and clinician.
Education
AI tutoring, assessment tools, learning analytics — designed to develop thinking, not replace it.
Public Sector
Benefit eligibility, identity verification, service access — where exclusion by design is a legal and ethical failure.
Data & Privacy
Consent flows, data sharing, retention policies — where informed choice is a legal requirement, not a design preference.
Any AI Product
If your AI makes a decision that affects a real person, the interaction layer is not optional from August 2026.
Outcome
Reflection
TrustBridge started from a single observation: the people most affected by AI systems are almost never in the room when those systems are designed. Elderly adults, children, patients — they appear in user research documents, if at all, as edge cases or accessibility footnotes.
What the four case studies demonstrated is that these groups are not edge cases. They are the most accurate diagnostic of whether an AI interaction was designed for real people or for an imaginary average user. When an interaction fails a 68-year-old retired teacher, the problem is not the teacher. The problem is the design.
"Designing for the most demanding user doesn't create a simpler product. It creates an honest one."
The research also reinforced a practical argument: organisations that invest in interaction design at the human-AI boundary see better adoption, fewer helpdesk escalations, and — from August 2026 — stronger EU AI Act compliance evidence. Trust is not a soft value. It is measurable, designable infrastructure.
Platform & Research
TrustBridge is published as an open design research portfolio at trustbridge.design. Each case study includes:
- •Interface design proposals with interactive prototypes
- •Research interview frameworks for testing with real users
- •Design principles grounded in EU AI Act and GDPR requirements
- •Plain-language templates organisations can adapt for their own products
The research was developed alongside the Trust by Design governance framework — TrustBridge provides the interaction-level evidence for the systemic arguments made at trust-by-design.org.
How This Was Made
I practice what I preach. This project was built with AI at every stage — and I want you to see how.
The code for this portfolio and the TrustBridge site was written through AI-assisted development. I direct the architecture, logic, and design system; the AI accelerates the build.
Images, videos, Figma screens, and interactive prototypes were generated with AI tools. Every visual was art-directed and refined by me to serve a design purpose.
I can build AI chat interfaces that are fully interactive and connected to live LLMs. Not mockups: real conversational AI products.
I understand how RAG pipelines, embeddings, and AI models are built under the hood. This lets me design for what AI actually does, not what marketing says it does.
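For readers who want to see what that means in practice, here is a minimal, illustrative sketch of the retrieval step at the core of a RAG pipeline: embed the question, rank stored text chunks by cosine similarity, and pass the best matches to the model as context. The names are mine for this example and are not taken from any specific library.

```typescript
// Minimal illustration of retrieval in a RAG pipeline. Names are
// illustrative only; a real system would use a vector database and an
// embedding model, but the ranking logic is the same.

interface Chunk {
  text: string;
  embedding: number[]; // pre-computed with whatever embedding model is in use
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the k chunks most similar to the embedded question.
function retrieve(queryEmbedding: number[], chunks: Chunk[], k = 3): Chunk[] {
  return [...chunks]
    .sort((x, y) =>
      cosine(queryEmbedding, y.embedding) - cosine(queryEmbedding, x.embedding))
    .slice(0, k);
}
```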
Why disclose this? Because a trust designer who hides their own process isn't one. AI is my medium — I use it to build, to prototype, and to think. The design decisions, the ethical framing, and the human insight are mine. The tools are whatever gets the job done honestly.
Want to be a trust designer too?
We're looking for partners who believe AI should work for everyone. Get in touch to explore how we can collaborate on building trustworthy AI experiences together.