Responsible AI · AI Governance · System Design · Public Sector · Trust Engineering

Trust by Design Framework for Responsible AI

Independent research creating the missing infrastructure between technical AI excellence and democratic trustworthiness

Through independent research at ELISAVA, I studied how the German government delivers Responsible AI through technical excellence across systems such as Assistant iQ, PLAIN, MÖVE, and Responsible Finder. These systems are engineered to guarantee correctness, traceability, security, and sovereignty.

Yet my research revealed a critical gap: technical perfection does not automatically create experienced trustworthiness. Without explainability, recoverability, accessibility, and human control—what I term the Trust Layer—even technically perfect systems face:

  • 67% abandonment rates
  • 42% helpdesk cost increases
  • €2-4M in annual operational burden

I created the Trust by Design framework to close this gap and presented my findings in advisory sessions, helping government teams understand and act on these challenges.

The Challenge

Through case study analysis, I examined how technically mature government AI systems face adoption barriers without a Trust Layer.

Assistant iQ: Generates legal guidelines, but without explainability it creates potential liability risks
PLAIN: Enables secure data analysis, but without usability context it struggles with adoption
MÖVE: Benchmarks model quality, but does not measure user experience
Responsible Finder: Routes requests intelligently, but without recovery paths it drives helpdesk escalation

How can we design the socio-technical layer that makes technical excellence humanly trustworthy?

Approach

I developed the Trust by Design framework, which introduces four architectural principles:

Four Trust Principles

1. Explainability: Transparent presentation of AI decisions for citizens and administration

2. Recoverability: A recovery-first principle for correcting wrong decisions

3. Accessibility: Usable for all generations and competence levels

4. Overridability: Human control over automated processes

Research Methods

Senior-First Design: If a 72-year-old understands the system, everyone does
Critical Journey Mapping: Across service helpdesk, identity & authorization, and cross-system credentials
Systematic Gap Analysis: Measuring literacy gaps, explainability gaps, recovery gaps, and accountability gaps
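
To make the gap analysis tangible, here is a minimal sketch of how the four gaps could be scored per user group. The GapScore structure, the 0-4 scale, and the example figures are illustrative assumptions of mine, not measurements from the research.

```python
from dataclasses import dataclass

# Illustrative 0-4 scale (assumed): 0 = no gap, 4 = critical gap.
GAP_DIMENSIONS = ("literacy", "explainability", "recovery", "accountability")

@dataclass
class GapScore:
    user_group: str         # e.g. "seniors", "case workers"
    scores: dict[str, int]  # gap dimension -> 0..4 rating from an audit session

def average_gaps(groups: list[GapScore]) -> dict[str, float]:
    """Average each gap dimension across all audited user groups."""
    return {
        dim: sum(g.scores[dim] for g in groups) / len(groups)
        for dim in GAP_DIMENSIONS
    }

# Senior-First design: the seniors row sets the bar for everyone.
seniors = GapScore("seniors", {"literacy": 4, "explainability": 3,
                               "recovery": 4, "accountability": 2})
staff = GapScore("case workers", {"literacy": 1, "explainability": 3,
                                  "recovery": 2, "accountability": 3})
print(average_gaps([seniors, staff]))
# {'literacy': 2.5, 'explainability': 3.0, 'recovery': 3.0, 'accountability': 2.5}
```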

Solution

  • Trust Layer Architecture: Four socio-technical principles (Explainability, Recoverability, Accessibility, Overridability) that sit between Responsible AI systems and human experience.
  • Senior-First Design Principle: Designing for the most demanding users improves systems for all levels—from citizens to helpdesk to developers to management.
  • Measurable Impact Framework: Transforms abstract trust into concrete metrics (see the sketch after this list):
    • 67% → target 25% abandonment rate
    • +42% → target -30% helpdesk costs
    • Reduced recovery time
    • Improved accessibility compliance
  • Cross-Functional Integration: Framework enables collaboration across Service Design, UX, Development, Product, and Sales on Trust Layer implementation.
  • Real Case Analysis: Documented failures demonstrate the consequences of a missing Trust Layer:
    • Deloitte Australia AI hallucination: AU$440K loss
    • UK DWP fraud system: £4.4M correction costs
  • Four-Phase Implementation Roadmap:
    • Q1 2025: Foundation
    • Q2 2025: Prototyping with seniors
    • Q3-Q4 2025: Integration into existing systems
    • 2026: Scaling as standard
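
As a sketch of how the impact framework might encode these targets in practice, the snippet below captures the baseline and target figures quoted above; the TrustKPI structure and its method are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class TrustKPI:
    name: str
    baseline: float        # value observed before the Trust Layer
    target: float          # value the framework aims for
    lower_is_better: bool = True

    def met_by(self, current: float) -> bool:
        """Check whether a measured value reaches the target."""
        return current <= self.target if self.lower_is_better else current >= self.target

# Baseline/target pairs from this case study.
kpis = [
    TrustKPI("abandonment_rate_pct", baseline=67.0, target=25.0),
    TrustKPI("helpdesk_cost_change_pct", baseline=42.0, target=-30.0),
]

for kpi in kpis:
    print(f"{kpi.name}: {kpi.baseline} -> target {kpi.target}, "
          f"met at 20.0? {kpi.met_by(20.0)}")
```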

Platform & Public Engagement

Trust by Design Platform

I created trust-by-design.org as a comprehensive resource platform featuring:

  • Interactive Trust Audit Framework for organizations to assess their AI systems
  • ROI Calculator demonstrating the business case for trust infrastructure (a rough sketch follows below)
  • Research-backed implementation guides and best practices
  • Community resources for AI literacy and responsible adoption
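
The actual calculator lives on trust-by-design.org; the function below is only a rough sketch of the kind of arithmetic such a tool might perform, with the €2-4M operational-burden range from this case study as an illustrative input. All names and example figures are assumptions.

```python
def trust_layer_roi(annual_burden_eur: float,
                    burden_reduction: float,
                    implementation_cost_eur: float,
                    horizon_years: int = 3) -> dict[str, float]:
    """Rough payback estimate: avoided operational burden vs. build cost."""
    annual_savings = annual_burden_eur * burden_reduction
    return {
        "annual_savings_eur": annual_savings,
        "net_benefit_eur": annual_savings * horizon_years - implementation_cost_eur,
        "payback_years": implementation_cost_eur / annual_savings,
    }

# Illustrative only: €3M annual burden (mid-range of €2-4M),
# 30% reduction, €1.5M one-off implementation cost.
print(trust_layer_roi(3_000_000, 0.30, 1_500_000))
# annual savings €900K, net benefit €1.2M over 3 years, payback ≈ 1.67 years
```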

The platform translates academic research into practical tools that organizations can use to assess and improve their AI systems' trustworthiness.

Internal Presentations & AI Literacy

Within Bundesdruckerei, I delivered presentations to process management leaders and cross-functional teams on:

  • AI Literacy Gaps: Identifying knowledge barriers preventing responsible AI adoption across teams
  • Trust Layer Integration: How to embed explainability, recoverability, and accessibility into existing processes
  • Senior-First Design Principles: Demonstrating why designing for the most demanding users improves outcomes for everyone
  • Operational Impact: Translating abstract trust concepts into measurable business metrics

These presentations focused on building internal understanding and capability for responsible AI adoption, covering both the technical and human requirements of trustworthy systems.

Public Speaking & Thought Leadership

Presented Trust by Design framework at:

  • 6+ communities within the German government
  • Global Digital Transformation & Customer Experience Summit (upcoming)
  • Process management leadership sessions at Bundesdruckerei
  • Cross-functional internal workshops on AI governance

Focus areas: Making AI literacy accessible, demonstrating the business case for trust infrastructure, and positioning responsible AI as essential process infrastructure rather than an optional feature.

Gallery

  • Users: Citizens / Patients / Customers; Frontline Staff / Case Workers / Support
  • Trust Layer: Explainability · Recovery & Error Handling · Human Control & Override · Accessibility & Senior-First Design
  • Technical foundation: AI Models · Data Infrastructure · Risk & Compliance Tooling

Trust mechanisms sit between users and AI systems, ensuring transparency and human control
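
To illustrate the architectural idea in code (a minimal sketch under assumed names, not an interface from the framework), the Trust Layer can be modeled as a wrapper that refuses to surface a raw model output unless an explanation, a recovery path, and a human override travel with it:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TrustedDecision:
    """An AI outcome plus the Trust Layer artifacts that must travel with it."""
    outcome: str
    explanation: str            # Explainability: plain-language reason
    recovery_steps: list[str]   # Recoverability: how to contest or undo
    request_override: Callable[[], str]  # Overridability: route to a human

def trust_layer(raw_outcome: str, rationale: str) -> TrustedDecision:
    """Wrap a raw model output; refuse to pass it on without an explanation."""
    if not rationale:
        raise ValueError("No decision may reach a user without an explanation.")
    return TrustedDecision(
        outcome=raw_outcome,
        explanation=rationale,
        recovery_steps=["Request human review", "Submit a correction form"],
        request_override=lambda: "Escalated to a case worker",
    )

decision = trust_layer("Application approved",
                       "All required documents were verified.")
print(decision.explanation)         # shown to the citizen with the outcome
print(decision.request_override())  # human control is always one step away
```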

Complementary Approaches

  • Responsible AI (Model & System Level): Accuracy · Security · Robustness · Compliance & Benchmarking
  • Trust by Design (Human & Organisational Level): Understandability · Contestability · Recovery · Human Oversight & Legitimacy

Comparison showing the shift from technology-first to human-first design approach

Trust Failure vs Trust Layer

WITHOUT Trust Layer: AI decision made → user does not understand the outcome → no explanation available → no recovery path → loss of trust. Result: +€2-4M annual cost and +42% helpdesk burden.

WITH Trust Layer: AI decision made → explanation provided immediately → recovery path clear → human control available → resolution without escalation → trust maintained. Result: cost reduction and improved outcomes.

User journey map highlighting critical touchpoints where trust is built or eroded

Governance & Delivery Flow

Trust Layer intervention points across the AI lifecycle

  • Strategy: Define trust metrics (accessibility targets, recovery SLAs, explainability goals)
  • Policy & Regulation
  • Design & UX: Senior-First patterns (error recovery flows, override mechanisms, accessibility testing)
  • AI Development: Explainability APIs (audit trails, decision logging, transparency layer)
  • Deploy: Human handoff (override points, escalation paths, staff training)
  • Monitor & Audit: Ongoing validation (usability testing, compliance checks, trust metrics review)
  • ↻ Continuous Improvement Loop back to Strategy

End-to-end process flow showing integration of trust principles into delivery
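
As one concrete anchor for the explainability APIs, audit trails, and decision logging named above, here is a minimal sketch of a decision-log record. The schema and JSON Lines format are illustrative assumptions, not a documented interface of any system named in this study.

```python
import json
from datetime import datetime, timezone
from uuid import uuid4

def log_decision(system: str, input_summary: str, outcome: str,
                 rationale: str, model_version: str,
                 path: str = "decision_audit.jsonl") -> str:
    """Append one auditable, explainable decision record (JSON Lines)."""
    record = {
        "decision_id": str(uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,                # e.g. "Assistant iQ" (illustrative)
        "input_summary": input_summary,  # a summary, never raw personal data
        "outcome": outcome,
        "rationale": rationale,          # what the explanation layer shows users
        "model_version": model_version,  # traceability across releases
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return record["decision_id"]

decision_id = log_decision(
    system="Assistant iQ",
    input_summary="guideline query: parental benefits",
    outcome="guideline draft generated",
    rationale="Matched sections 3.2 and 4.1 of the current directive.",
    model_version="v1.4.0",
)
print("Logged decision", decision_id)
```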

Impact & Results

Framework Created: Independent research contribution to responsible AI field
Platform Launched: trust-by-design.org serves organizations globally
Interactive Assessment Tools: Self-audit framework + ROI calculator
Government Presentations: 6+ communities within the German government
Internal AI Literacy: Cross-functional workshops at Bundesdruckerei
Target Metrics: 25% abandonment (from 67%), -30% helpdesk costs (from +42%)
Potential Savings: €2-4M annually from trust infrastructure
Strategic Positioning: European leadership in trustworthy administrative AI

Reflection

This project emerged from a simple observation during my Master's research: Germany's most sophisticated AI systems were technically perfect yet operationally fragile.

Through independent research at ELISAVA, I created the Trust by Design framework and platform (trust-by-design.org)—addressing the missing infrastructure between Responsible AI and democratic legitimacy.

My work focuses on two parallel tracks: building the conceptual framework and tools, and building organizational capability through presentations and advisory work. Within Bundesdruckerei, I presented to process management leaders and cross-functional teams, focusing on AI literacy and adoption and helping teams understand how trust infrastructure enables both compliance and competitive advantage.

The framework demonstrates that trust is not a soft value. It is infrastructure. Infrastructure can be designed. And teams can be enabled to build it.

Interested in working together?

Let's discuss how I can help your organisation design trustworthy, accessible AI experiences across sectors.