Introduction
Author’s note: I’ve had some interesting and enlightening discussions recently with an old friend and former colleague, Aaron Sempf, on trust, control, and governance in the age of AI and automation. We’ve noted a convergence on a fault line in governance, and while we approach the problem from different angles, that overlap helped crystallise the questions this sketch explores from an organisational and socio-technical perspective.
This document is a conceptual sketch for how trust and organisational resilience can be deliberately designed in modern organisations. It starts from a simple premise: trust is an organisation’s confidence that its strategy, goals, and intent are being enacted through decision-making structures all the way down to operations.
As automation, AI, and complex digital systems reduce the ability to rely on informal human correction, that confidence increasingly depends on how authority, responsibility, ambiguity, and assurance are designed into our systems.
What follows sets out a decision-centric approach to thinking about information security, governance, and organisational design, reframing trust as an organisational property that must be deliberately sustained as systems scale and automate. It will form the basis for more detailed exploration, and builds on previous work on the socio-technical nature of cybersecurity and how to ensure it across the Information Environment.
Definition of Trust
In this context, trust is an organisation’s confidence that its strategy, goals, and intent are being faithfully and consistently enacted through its decision-making structures down to the operational level.
Principles
- Trust is architected – Trust is an outcome of decision structure, not an assumption of widespread adherence to rules
- Constraints anticipate action – Resilience is by design, not reactive
- Ambiguity is inevitable and must be deliberately bounded – Design controls where uncertainty appears and how far it can spread
- Human judgement is reserved for bounded, intentional ambiguity – Humans are risk owners by design, not accidental circuit breakers
Conceptual Model
- Trust = confidence that intent is enacted through decisions
- Socio-technical information security = extends cybersecurity principles across the Information Environment which contains the cognitive, informational, and physical dimensions
- Decision structure = the primary trust mechanism
- Governance = continuous assurance of enactment
- Case studies = stress tests for resilience under time and automation
Notes Toward Architecting Socio-Technical Organisational Trust
1. Socio-technical information security as the basis of trust
As society has become more interconnected and reliant on technology, we need to reassess the scope of cybersecurity and broaden it to information security, which is inherently socio-technical. It protects the CIA/AAA (Confidentiality, Integrity, Availability; Authentication, Authorisation, Accounting) of assets across the Information Environment (IE), which spans the cognitive, informational, and physical dimensions; security of this kind therefore cannot be reduced to purely technical controls. Trust is formed, maintained, reinforced, or degraded through the interaction of people, systems, and organisational structures.
This can be understood across three layers:
| Layer | Description |
| --- | --- |
| Strategic | Information security extends beyond technical systems to include people, organisational structures, and decision rights – these all have the potential to impact CIA/AAA within the IE. Organisational trust depends on the alignment of strategy and intent with how decisions are structured and exercised across the enterprise and the IE. |
| Tactical | Trust is enacted through decision-making structures that align authority, responsibility, and risk ownership. This alignment determines whether organisational intent is translated into consistent action (see Section 2: Decision-making structure). |
| Operational | Trust is realised in day-to-day activity. As explored in the capstone (in a cybersecurity context), this involves embedding decision-making processes into operational tasks to help deliver consistent outcomes aligned with strategic intent. |
2. Decision-making structure
Organisational trust depends on where and when decisions are made, not merely on who makes them. Decision-making structures should be deliberately designed so that responsibility and authority are aligned and made where they maintain or reinforce trust.
Key design considerations include:
| Consideration | Rationale |
| --- | --- |
| Place decision points where risk ownership belongs, and where ambiguity can be deliberately intercepted and bounded | Complex systems inevitably generate ambiguity. The objective is not to eliminate it, but to ensure it appears in bounded contexts where it can be managed by the appropriate decision owner. This reduces uncontrolled ambiguity and improves the consistency and reliability of outcomes. |
| Inform where human judgement is intentionally required in systems (automated, AI, or otherwise) | Human judgement should be reserved for contexts where ambiguity is irreducible and meaningful, rather than used as a compensating control for poor system design. This provides a principled basis for determining when and how humans are integrated into systems. |
| Avoid responsibility without authority | Assigning responsibility without corresponding authority is a structural design anomaly that undermines trust and accountability. Caveat: this does not preclude deliberate separation of powers or dual-control arrangements, where authority is intentionally shared or constrained to manage specific risks. In such cases, the limits of authority must still be explicit and operationally meaningful. |
Collectively, these design considerations aim to ensure that human judgement and accountability are positioned intentionally within organisational systems, enabling operational decisions to be made as a consequence of structure rather than as ad-hoc responses to unmanaged ambiguity.
When decision-making structures are poorly designed, trust degrades. Common indicators include:
- responsibility assigned without corresponding authority
- ambiguity propagating beyond its point of origin, being passed from one decision-maker to the next
- human judgement being used as a compensating control rather than by design
- governance activity increasing without a corresponding increase in confidence
These signals do not indicate individual failure, but structural misalignment between organisational intent and operational decision-making.
This decision-making structure is sustained and reinforced by governance, described below.
3. A new approach to governance
Traditional governance models rely heavily on point-in-time audits, static artefacts, and retrospective assurance backed by the ever-present threat of disciplinary action. As organisations adopt automation and AI, these limitations are increasingly exposed.
To support organisational trust, governance must move toward continuous assurance. The objective is to create a closed-loop governance system that:
- operates as close to real time as practicable
- is driven by assurance rather than documentation
- and continuously informs, and is informed by, architecture, engineering, and operations.
This closed loop enables learning and recalibration. Where assurance indicates that decisions, behaviours, or outcomes are diverging from organisational intent, the organisation can respond by adjusting decision rights, constraints, or system design. Trust is not static; it is continuously tested, corrected, and re-established over time.
In this model, governance shifts from organisational overhead to structural infrastructure. It becomes the connective spine linking:
- strategic intent
- decision-making structures
- architecture and engineering
- and operational execution.
This can be understood as vertically integrated organisational trust assurance: a governance approach that sustains confidence that decisions, authority, and operations remain aligned as conditions change.
Such governance directly supports decision authority by maintaining visibility of where decisions sit, how they are exercised, and whether they continue to reflect organisational intent.
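A minimal sketch of the closed loop described above, using a toy numeric "intent" for clarity. The function names, the divergence test, and the recalibration rule are all illustrative assumptions, not a prescription for how such a loop would be implemented.

```python
# Hypothetical closed assurance loop: observe enactment, compare against
# intent, and recalibrate decision rights or constraints when they diverge.

def assurance_loop(intent, observe, diverges, recalibrate, cycles=3):
    """Run a few governance cycles; all callables are illustrative hooks."""
    state = intent
    for _ in range(cycles):
        observed = observe(state)        # near-real-time assurance signal
        if diverges(state, observed):    # is enactment drifting from intent?
            state = recalibrate(state, observed)  # adjust rights/constraints
    return state

# Toy usage: intent is a permitted-risk threshold; observed risk keeps
# drifting upward, so each cycle recalibrates toward observed reality.
final = assurance_loop(
    intent=1.0,
    observe=lambda s: s + 0.5,                # observed risk exceeds intent
    diverges=lambda s, o: abs(o - s) > 0.25,  # tolerance before acting
    recalibrate=lambda s, o: (s + o) / 2,     # split the difference
)
print(final)  # → 1.75 after three cycles
```

The structural point is the shape of the loop, not the arithmetic: assurance drives adjustment, and adjustment changes what the next cycle observes.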
4. Stress tests
The following domains act as stress tests for this model of trust and organisational resilience.
Cryptographic trust
Cryptography represents a distinct trust domain with strong technical properties, but clear organisational implications.
- Strategic – Cryptographic agility signals long-term organisational commitment to security and privacy, supporting confidence that intent will remain enforceable as the threat environment evolves.
- Tactical – Decision-makers must be able to demonstrate CIA/AAA over time. Cryptographic Bills of Materials (CBOMs) support this by providing decision-relevant visibility into cryptographic dependencies.
- Operational – Trust is sustained through the maintenance of secure cryptographic primitives and the ongoing currency of CBOMs as systems evolve.
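To make the CBOM idea concrete, here is a hedged sketch of a minimal inventory check. The entry fields, component names, and deprecated-algorithm list are illustrative assumptions for this sketch, not a reference to any particular CBOM format.

```python
# Illustrative CBOM fragment: an inventory of cryptographic dependencies
# with enough detail to support decision-relevant assurance over time.
cbom = [
    {"component": "session tokens",  "algorithm": "HMAC-SHA256", "key_bits": 256},
    {"component": "legacy archive",  "algorithm": "SHA-1",       "key_bits": None},
    {"component": "TLS termination", "algorithm": "RSA",         "key_bits": 2048},
]

# Assumed policy: primitives the organisation no longer trusts.
DEPRECATED = {"SHA-1", "MD5", "DES"}

def stale_entries(inventory):
    """Return components whose primitives fall outside current policy."""
    return [e["component"] for e in inventory if e["algorithm"] in DEPRECATED]

print(stale_entries(cbom))  # → ['legacy archive']
```

Keeping such an inventory current as systems evolve is what turns cryptographic agility from a stated intent into something a decision-maker can actually demonstrate.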
AI and automation
Automation and AI force explicit decisions about authority, judgement, and accountability.
Key questions include:
- What can be automated end-to-end?
- What requires human judgement?
- Where should that human judgement be placed?
- What is the role of the human in the decision?
- Who is the appropriate decision owner?
These questions will provide a starting point for further exploration.
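The questions above can be sketched as a routing rule: automate where there is no meaningful ambiguity, redesign where ambiguity is reducible, and place a named human risk owner only where it is irreducible. The field names and the example owner below are hypothetical.

```python
# Hedged sketch: route a decision to automation, redesign, or a named
# human owner, based on whether its ambiguity is reducible.

def route_decision(decision):
    """Decide between end-to-end automation and bounded human judgement."""
    if not decision["ambiguous"]:
        return "automate end-to-end"
    if decision["ambiguity_reducible"]:
        # Reducible ambiguity signals a design gap, not a case for using a
        # human as a compensating control: tighten the system instead.
        return "redesign to remove ambiguity"
    # Irreducible, meaningful ambiguity: place human judgement deliberately,
    # with an explicit risk owner.
    return f"human judgement: {decision['risk_owner']}"

print(route_decision({"ambiguous": False}))
print(route_decision({"ambiguous": True, "ambiguity_reducible": True}))
print(route_decision({"ambiguous": True, "ambiguity_reducible": False,
                      "risk_owner": "credit risk committee"}))
```

The interesting design work sits in classifying ambiguity honestly; once that is done, the routing itself is mechanical.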
Closing
I’ll be using this sketch as a reference point for future writing, where I explore specific domains and case studies in more detail.


