Independent Research · AI Governance · UX Strategy · EU Compliance

Trust by Design: A Framework for Responsible AI

I created the missing infrastructure between technical AI excellence and the human experience of trustworthiness.

Explore the full framework at trust-by-design.org
67% abandonment rate
42% helpdesk cost increase
4 trust principles
6+ government communities
[Image: Trust by Design research and advisory presentation]

The Problem I Set Out to Solve

During my Master's research at ELISAVA, I studied how technically mature government AI systems, built with genuine rigour around accuracy, security, and compliance, were still failing in practice. Users didn't trust them. Helpdesk costs were climbing. Adoption was stalling. The technical layer was solid; the human layer was missing.

I identified a 67% abandonment rate and a 42% helpdesk cost increase as the measurable cost of this gap. My research question became: how do you design the socio-technical layer that makes technical excellence humanly trustworthy?

What I Investigated

I examined three publicly documented government AI systems to understand where and why the trust gap appeared.

Assistent.iQ

KI-Kompetenz-Center, Bundesdruckerei

Strength

Generates legal guidelines with high accuracy

Trust Gap

Without explainability, users couldn't understand or contest decisions — creating liability risk and citizen confusion

PLAIN

Federal Data Analytics Platform

Strength

Enables secure, sovereign data analysis across federal teams

Trust Gap

Without usability context or onboarding design, cross-team adoption struggled despite strong technical capability

MOVE

AI Quality Benchmarking

Strength

Benchmarks AI model quality systematically against EU AI Act criteria

Trust Gap

Measures technical performance only — user experience and trust perception remain unmeasured

Technical perfection does not automatically create experienced trustworthiness.

What I Built

I developed the Trust by Design framework: four architectural principles that sit between Responsible AI systems and the humans who depend on them.

Explainability

I designed for transparency: users need to understand what the AI decided and why, in plain language.

Recoverability

I introduced a Recovery-First principle: every wrong decision needs a clear, human path forward.

Accessibility

I applied Senior-First Design: if a 72-year-old understands the system, everyone does.

Overridability

I designed for human control: automated decisions must always have a human override path.

My research methods included Critical Journey Mapping across three journeys (service helpdesk, identity and authorisation, and cross-system credentials) and systematic gap analysis measuring literacy, explainability, recovery, and accountability gaps.
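The gap analysis described above can be sketched as a simple scoring model: each journey is rated against the four gap dimensions, and any dimension above a severity threshold is flagged. This is a minimal illustrative sketch; the class, field names, 0–5 scale, and threshold are my assumptions, not the actual research instrument.

```python
# Illustrative sketch of the systematic gap analysis.
# All names, scales, and thresholds are hypothetical.
from dataclasses import dataclass

GAP_DIMENSIONS = ("literacy", "explainability", "recovery", "accountability")

@dataclass
class JourneyAssessment:
    journey: str            # e.g. "service helpdesk"
    scores: dict[str, int]  # per dimension: 0 (gap closed) .. 5 (severe gap)

    def gaps(self, threshold: int = 3) -> list[str]:
        """Dimensions whose score meets or exceeds the severity threshold."""
        return [d for d in GAP_DIMENSIONS if self.scores.get(d, 0) >= threshold]

helpdesk = JourneyAssessment(
    journey="service helpdesk",
    scores={"literacy": 4, "explainability": 5, "recovery": 2, "accountability": 3},
)
print(helpdesk.gaps())  # ['literacy', 'explainability', 'accountability']
```

Scoring per journey rather than per system is deliberate: the same AI system can close the explainability gap at one touchpoint and leave it wide open at another.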

The Trust Layer

The framework introduced a Trust Layer: a distinct architectural layer, sitting between AI systems and users, that existing Responsible AI approaches don't address.

Users: Citizens / Patients / Frontline Staff
Trust Layer: Explainability · Recovery · Human Control · Accessibility
Infrastructure: AI Models · Data Infrastructure · Risk & Compliance Tooling

Responsible AI ensures the system works. The Trust Layer ensures people can use, understand, and trust it.

Without vs With

The consequences of missing the Trust Layer are measurable and costly.

Without Trust Layer

1. AI decision made
2. User doesn't understand
3. No explanation available
4. No recovery path
5. Escalation to helpdesk
6. Loss of trust

+24M annual cost · +42% helpdesk burden

With Trust Layer

1. AI decision made
2. Explanation provided
3. Recovery path clear
4. Human control available
5. Resolution achieved
6. Trust maintained

Cost reduction · Improved outcomes

From Research to Practice

After completing the research, I turned the framework into a consulting practice. Trust by Design now helps regulated organisations build the human layer that makes their AI systems compliant, adopted, and trusted ahead of EU AI Act enforcement in August 2026.

The Practice

I offer three services: Trust Audits to identify gaps, Framework Implementation to design the Trust Layer into existing systems, and EU AI Act Readiness to build human-centred compliance before the August 2026 deadline.

Trust Audits · Framework Implementation · EU AI Act Readiness
trust-by-design.org

The Tools I Built

TrustAudit Tools

An interactive self-assessment I created so organisations can identify their own explainability, recovery, accessibility, and control gaps.

trustaudit.tools

TrustBridge

Case study examples showing Trust Layer implementation across regulated sectors.

trustbridge.design

Taking It Into Practice

I also brought the framework into practice through internal advisory sessions, presenting to process management leaders and cross-functional teams within regulated government institutions. These sessions focused on AI literacy, Trust Layer integration, and translating abstract trust concepts into concrete process improvements. Seeing the framework land with operational teams, and watching them connect the principles to their own systems, was the moment I knew the research had real-world traction.

Where It Applies

The Trust Layer isn't specific to government. Any sector deploying high-risk AI faces the same gap.

Government

  • Citizen service automation
  • Welfare eligibility decisions
  • Administrative AI assistants
  • Public safety systems

Healthcare

  • Clinical decision support
  • Patient triage AI
  • Diagnostic assistance
  • Treatment recommendations

Finance

  • Loan approval automation
  • Fraud detection systems
  • Robo-advisory platforms
  • Credit scoring AI

Education

  • AI tutoring assistants
  • Grading automation
  • Learning path optimisation
  • Student risk assessment

Cybersecurity

  • Threat detection AI
  • Incident response automation
  • Vulnerability assessment
  • Access control systems

Outcomes

The research produced tools, a practice, and validated frameworks, not just a thesis.

Framework Published

Four-principle Trust Layer architecture, openly available at trust-by-design.org

Consulting Practice Launched

EU AI Act readiness for regulated industries

Audit Tools Built

Interactive self-assessment at trustaudit.tools

Government Communities Reached

Framework presented to 6+ communities within German government

Internal Delivery

Advisory sessions with process management leaders in regulated government institutions

Target Impact

Reduce abandonment from 67% to 25% and cut helpdesk costs by 30%

Reflection

This project started from a frustration: Germany's most sophisticated AI systems were technically excellent and operationally fragile at the same time. The research gave me the language for why, and the framework gave me something to do about it.

What surprised me most was how quickly operational teams recognised the gap once I named it. The Trust Layer wasn't a hard sell. It was something people had felt but couldn't articulate.

That's what I'm building now: the tools, the practice, and the conversations that help organisations design trust before it becomes a crisis.