Trust by Design Framework for Responsible AI
Independent research creating the missing infrastructure between technical AI excellence and democratic trustworthiness

Through independent research at ELISAVA, I studied how the German government delivers Responsible AI through technical excellence across systems such as Assistant iQ, PLAIN, MÖVE, and Responsible Finder. These systems are engineered for correctness, traceability, security, and sovereignty.
Yet my research revealed a critical gap: technical perfection does not automatically create experienced trustworthiness. Without explainability, recoverability, accessibility, and human control—what I term the Trust Layer—even technically perfect systems face:
- 67% abandonment rates
- 42% increases in helpdesk costs
- €2-4M in annual operational burden
I created the Trust by Design framework to address this gap and presented my findings through advisory sessions to help government teams understand and address these challenges.
The Challenge
Through case study analysis, I examined how technically mature government AI systems face adoption barriers without a Trust Layer.
How can we design the socio-technical layer that makes technical excellence humanly trustworthy?
Approach
I developed the Trust by Design framework, which introduces four architectural principles:
Four Trust Principles
Explainability
Transparent presentation of AI decisions for citizens and administration
Recoverability
A recovery-first approach to correcting wrong decisions
Accessibility
Usable by all generations and competence levels
Overridability
Human control over automated processes
Solution
- Trust Layer Architecture: Four socio-technical principles (Explainability, Recoverability, Accessibility, Overridability) that sit between Responsible AI systems and human experience (see the sketch after this list).
- Senior-First Design Principle: Designing for the most demanding users improves systems for everyone, from citizens to helpdesk staff to developers to management.
- Measurable Impact Framework: Transforms abstract trust into concrete metrics:
  - Abandonment rate: 67% → target 25%
  - Helpdesk costs: +42% → target -30%
  - Reduced recovery time
  - Improved accessibility compliance
- Cross-Functional Integration: The framework enables collaboration across Service Design, UX, Development, Product, and Sales on Trust Layer implementation.
- Real Case Analysis: Documented failures demonstrate the consequences of a missing Trust Layer:
  - Deloitte Australia AI hallucination: £440K loss
  - UK DWP fraud system: £4.4M correction costs
- Four-Phase Implementation Roadmap:
  - Q1 2025: Foundation
  - Q2 2025: Prototyping with seniors
  - Q3-Q4 2025: Integration into existing systems
  - 2026: Scaling as the standard
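To make the Trust Layer architecture concrete, here is a minimal sketch of what such a contract could look like in code. It is an illustration under assumptions, not an implementation from the framework: every type and field name (RawDecision, TrustedDecision, recoveryPath, and so on) is hypothetical.

```typescript
// Hypothetical sketch of a Trust Layer contract wrapping a raw AI decision.
// Each field maps to one of the four principles.

interface RawDecision {
  outcome: string;    // what the model decided
  confidence: number; // model confidence, 0..1
}

interface TrustedDecision extends RawDecision {
  // Explainability: a plain-language reason a citizen can read
  explanation: string;
  // Recoverability: concrete steps to contest or undo the decision
  recoveryPath: { step: string; contact?: string }[];
  // Overridability: hook for a human case worker to replace the outcome
  override?: { decidedBy: string; outcome: string; reason: string };
  // Accessibility: flags checked before the decision is shown
  accessibility: { plainLanguage: boolean; screenReaderTested: boolean };
}

// The Trust Layer sits between the model and the user:
function applyTrustLayer(raw: RawDecision, explanation: string): TrustedDecision {
  return {
    ...raw,
    explanation,
    recoveryPath: [{ step: "Request human review", contact: "support@example.org" }],
    accessibility: { plainLanguage: true, screenReaderTested: true },
  };
}
```

The design choice the sketch expresses is that a decision without an explanation and a recovery path simply cannot be constructed: the type system enforces the Trust Layer.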
Platform & Public Engagement
Trust by Design Platform
I created trust-by-design.org as a comprehensive resource platform featuring:
- Interactive Trust Audit Framework for organizations to assess their AI systems
- ROI Calculator demonstrating the business case for trust infrastructure (a simplified sketch of the arithmetic follows below)
- Research-backed implementation guides and best practices
- Community resources for AI literacy and responsible adoption
The platform translates academic research into practical tools that organizations can use to assess and improve their AI systems' trustworthiness.
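As an illustration of how such a calculator could work, the sketch below turns the figures cited in this case study (67% → 25% abandonment, +42% → -30% helpdesk costs, €2-4M annual burden) into a simple savings estimate. The formula and every parameter name are hypothetical assumptions, not the platform's actual implementation.

```typescript
// Hypothetical ROI sketch built from the headline figures in this case study.
// This is not the actual trust-by-design.org calculator.

interface TrustRoiInputs {
  annualBurdenEur: number;      // cited range: €2-4M operational burden
  abandonmentBefore: number;    // 0.67 in the cited research
  abandonmentTarget: number;    // 0.25 target
  helpdeskSpendEur: number;     // current annual helpdesk spend
  helpdeskGrowthBefore: number; // +0.42 observed without a Trust Layer
  helpdeskTarget: number;       // -0.30 target with a Trust Layer
}

function estimateAnnualSavings(i: TrustRoiInputs): number {
  // Assumption: operational burden scales linearly with the abandonment rate.
  const abandonmentSavings =
    i.annualBurdenEur *
    ((i.abandonmentBefore - i.abandonmentTarget) / i.abandonmentBefore);
  // Avoided growth plus the targeted reduction in helpdesk cost.
  const helpdeskSavings =
    i.helpdeskSpendEur * (i.helpdeskGrowthBefore - i.helpdeskTarget);
  return abandonmentSavings + helpdeskSavings;
}

// Example: €3M burden and €1M helpdesk spend → roughly €2.6M estimated savings.
console.log(
  estimateAnnualSavings({
    annualBurdenEur: 3_000_000,
    abandonmentBefore: 0.67,
    abandonmentTarget: 0.25,
    helpdeskSpendEur: 1_000_000,
    helpdeskGrowthBefore: 0.42,
    helpdeskTarget: -0.30,
  }),
);
```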
Internal Presentations & AI Literacy
Within Bundesdruckerei, I delivered presentations to process management leaders and cross-functional teams on:
- AI Literacy Gaps: Identifying knowledge barriers preventing responsible AI adoption across teams
- Trust Layer Integration: How to embed explainability, recoverability, and accessibility into existing processes
- Senior-First Design Principles: Demonstrating why designing for the most demanding users improves outcomes for everyone
- Operational Impact: Translating abstract trust concepts into measurable business metrics
These presentations focused on building internal understanding and capability for responsible AI adoption, helping teams understand both the technical and human requirements for trustworthy systems.
Public Speaking & Thought Leadership
Presented Trust by Design framework at:
- 6+ communities within the German government
- Global Digital Transformation & Customer Experience Summit (upcoming)
- Process management leadership sessions at Bundesdruckerei
- Cross-functional internal workshops on AI governance
Focus areas: Making AI literacy accessible, demonstrating the business case for trust infrastructure, and positioning responsible AI as essential process infrastructure rather than an optional feature.
Gallery
[Diagram: layered Trust Layer architecture. Top layer: users (citizens, patients, customers) and frontline staff (case workers, support). Middle layer: the Trust Layer, comprising Explainability, Recovery & Error Handling, Human Control & Override, and Accessibility & Senior-First Design. Bottom layer: AI models, data infrastructure, and risk & compliance tooling.]
Trust mechanisms sit between users and AI systems, ensuring transparency and human control
Complementary Approaches
| Responsible AI (Model & System Level) | Trust by Design (Human & Organisational Level) |
| --- | --- |
| Accuracy | Understandability |
| Security | Contestability |
| Robustness | Recovery |
| Compliance & Benchmarking | Human Oversight & Legitimacy |

Comparison showing the shift from a technology-first to a human-first design approach
Cross-Domain Applicability
The same Trust Layer principles apply across domains. Whether in Finance, Health, Education, Cybersecurity, or the Public Sector, organizations face the same five concerns:
- Automated decisions
- Risk scoring
- Eligibility & access
- Audit & compliance
- Human override needs

Framework demonstrating adaptability across various government service contexts
Trust Failure vs Trust Layer
WITHOUT Trust Layer: AI decision made → user does not understand the outcome → no explanation available → no recovery path → loss of trust. Result: +€2-4M annual cost and +42% helpdesk burden.

WITH Trust Layer: AI decision made → explanation provided immediately → recovery path clear → human control available → resolution without escalation → trust maintained. Result: cost reduction and improved outcomes.
User journey map highlighting critical touchpoints where trust is built or eroded
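To ground the "WITH Trust Layer" journey, here is a minimal sketch of a decision handler that always returns an explanation and a recovery path, and lets a human override the outcome before anything reaches the helpdesk queue. All names are hypothetical; this illustrates the flow, not a production system.

```typescript
// Hypothetical sketch of the "WITH Trust Layer" journey as a decision handler.

type Outcome = "approved" | "rejected";

interface HandledDecision {
  outcome: Outcome;
  explanation: string;     // shown immediately, in plain language
  recoverySteps: string[]; // always present, never a dead end
  escalated: boolean;
}

function handleDecision(
  outcome: Outcome,
  reason: string,
  humanOverride?: Outcome, // Overridability: a case worker can replace the outcome
): HandledDecision {
  const finalOutcome = humanOverride ?? outcome;
  return {
    outcome: finalOutcome,
    explanation: humanOverride
      ? `A case worker reviewed this decision: ${reason}`
      : `This decision was automated because: ${reason}`,
    recoverySteps: ["Request a human review", "Submit additional documents"],
    // Resolution happens at the point of decision, not in an escalation queue.
    escalated: false,
  };
}
```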
Governance & Delivery Flow
Trust Layer intervention points across the AI lifecycle
Strategy
- Define trust metrics
- Accessibility targets
- Recovery SLAs
- Explainability goals

Policy & Regulation

Design & UX
- Senior-First patterns
- Error recovery flows
- Override mechanisms
- Accessibility testing

AI Development
- Explainability APIs
- Audit trails
- Decision logging
- Transparency layer

Deploy
- Human handoff
- Override points
- Escalation paths
- Staff training

Monitor & Audit
- Ongoing validation
- Usability testing
- Compliance checks
- Trust metrics review

↻ Continuous Improvement Loop
End-to-end process flow showing integration of trust principles into delivery
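As one example of an intervention point from the AI Development stage, the sketch below shows what an append-only decision log entry might capture so that audits and overrides stay traceable. The shape is a hypothetical assumption, not an API from any of the systems named above.

```typescript
// Hypothetical append-only decision log supporting audit trails and overrides.

interface DecisionLogEntry {
  decisionId: string;
  timestamp: string;        // ISO 8601
  modelVersion: string;     // which model produced the decision
  inputsHash: string;       // hash of inputs, not raw personal data
  outcome: string;
  explanationShown: string; // exactly what the citizen saw
  override?: {
    by: string;             // case worker identifier
    newOutcome: string;
    reason: string;
  };
}

const auditLog: DecisionLogEntry[] = [];

function logDecision(entry: DecisionLogEntry): void {
  // Append-only: entries are never mutated; corrections arrive as overrides.
  auditLog.push(Object.freeze(entry));
}
```

Keeping the log append-only means a wrong decision is never silently rewritten; the original entry and its human correction remain visible side by side for compliance checks and trust metrics review.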
Reflection
This project emerged from a simple observation during my Master's research: Germany's most sophisticated AI systems were technically perfect yet operationally fragile.
Through independent research at ELISAVA, I created the Trust by Design framework and platform (trust-by-design.org)—addressing the missing infrastructure between Responsible AI and democratic legitimacy.
My work follows two parallel tracks: building the conceptual framework and tools, and building organizational capability through presentations and advisory work. Within Bundesdruckerei, I presented to process management leaders and cross-functional teams, focusing on AI literacy and on helping teams understand how trust infrastructure enables both compliance and competitive advantage.
The framework demonstrates that trust is not a soft value. It is infrastructure. Infrastructure can be designed. And teams can be enabled to build it.
Interested in working together?
Let's discuss how I can help your organisation design trustworthy, accessible AI experiences across sectors.