Luke Hally

What Autonomous AI Exposes About Cybersecurity Maturity

Introduction

OpenClaw (and its various recent names) provides a useful case study for examining a persistent weakness in contemporary cybersecurity practice. It is tempting to treat it as simply another risky tool, or as an example of users moving faster than security teams would like. That framing, however, is insufficient. OpenClaw is better understood as a socio-technical stress test: a highly visible confluence of technical capability, human behaviour, and governance maturity.

Autonomous agents are changing how work is performed and how authority is exercised. This changes how risk propagates through connected systems, exposing weaknesses in existing governance structures. These weaknesses are well known: out-of-date or unread documentation, point-in-time audits, reactive responses to breaches, and a focus on compliance rather than security. These shortcomings shift responsibility, and value, to our ops teams. Previously, those teams could manage incidents and breaches; autonomous agents, however, have rapidly expanded the speed, scope and scale of potential impacts.

For security professionals, when such tools spread rapidly, the question is not whether they are technically impressive, but whether our systems are mature and resilient enough to absorb them safely. OpenClaw is an excellent case study, highlighting the shortcomings of our approach to cybersecurity governance and revealing the need for a more holistic, socio-technical approach.

Autonomy and access: an adversary’s dream

Unlike most AI tools to date, OpenClaw can act with authority across a user’s environment: reading and writing files, executing commands, interacting with email and messaging platforms, and maintaining a contextual memory, all without constant supervision. This level of access is not user error or complacency; it is by design. OpenClaw is built for ease and simplicity rather than being secure by design. This capability and ease of use has contributed to both its appeal and its rapid uptake.

From a security perspective, however, this capability is concerning. An agent authorised to perform work on a user’s behalf is a highly privileged concentration of authority and provides an adversary with an ideal tool for an attack by stealth. In short, the adversary gains admin-level, privileged access and can use the user’s own tools and systems to reach their objective. This is known as a living-off-the-land attack.
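The least-privilege alternative to this open-ended authority can be sketched in a few lines. The example below is a generic illustration, not OpenClaw’s actual API: a hypothetical guardrail that only lets an agent execute commands from an explicit allowlist, so a hijacked agent cannot live off the land with arbitrary tools.

```python
import shlex
import subprocess

# Hypothetical allowlist: the only commands this agent may ever run.
ALLOWED_COMMANDS = {"ls", "cat", "grep", "echo"}

def run_agent_command(command_line: str) -> str:
    """Execute an agent-requested command only if its binary is allowlisted."""
    argv = shlex.split(command_line)
    if not argv or argv[0] not in ALLOWED_COMMANDS:
        # Deny by default: anything outside the allowlist is refused.
        raise PermissionError(f"command not permitted: {command_line!r}")
    result = subprocess.run(argv, capture_output=True, text=True, timeout=10)
    return result.stdout

# A benign request succeeds; "rm -rf /" raises PermissionError instead
# of handing the adversary the user's own tooling.
output = run_agent_command("echo hello")
```

The design choice is deny-by-default: rather than trying to enumerate dangerous commands, the wrapper enumerates safe ones, which is the inverse of the broad, open-by-design grant described above.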

Combining broad, deep authorisation with automation and an open-by-design posture creates significant attack potential. Add the traditional GRC shortcomings described above and the prevalence of shadow AI, and both individuals and organisations are at risk. These risks are not new: viewed through a threat-modelling lens, tools such as OpenClaw enable and streamline long-established adversary techniques documented in frameworks such as MITRE ATT&CK and the Cyber Kill Chain.

An uncomfortable question

This raises an uncomfortable question for the cybersecurity profession. If these attack lifecycle tactics have been documented, taught and operationalised in defensive frameworks for years, why are tools such as OpenClaw, which enable them, spreading with so little friction? The answer doesn’t rest solely with end-users. It reflects a failure of cybersecurity as a profession to increase society’s cyber robustness. This has broad implications: in highly connected societies, cyber robustness increasingly behaves like a public good. Each connected device is a potential attack vector, meaning that an individual’s security decisions can infringe the rights of, and cause harm to, others. This adds an ethical dimension to the decision of whether or not to secure our devices.

You might respond that this isn’t your lane. Perhaps not, but we all have a role to play, whether it is establishing secure-by-design coding practices, building privacy- or secure-by-default products, contributing to discussions on shifting security left and embedding it into organisational processes, or leading discussions on government policy or regulation where security and privacy are at risk.

We hear a lot about resilience today: the ability to maintain continuity and functionality during an attack. We hear far less about robustness. Cyber robustness is the ability of a system to withstand an attack. At a societal level, cyber robustness emerges from individuals’ collective security decisions: updating operating systems, setting strong passwords, adopting stronger privacy settings, and deciding whether or not to grant authorised access to an unknown AI agent such as OpenClaw. Without improving societal cyber robustness, resilience investments will continue to be consumed by preventable failures.

So we have to ask ourselves: with increased awareness of cybersecurity, the prevalence of attacks, media attention and increased spending, why has there been no corresponding increase in societal robustness?

The hero narrative

The persistent narrative that “if you aren’t responding to incidents at 2am, you aren’t doing cybersecurity” reflects a distortion in our profession. This framing de-emphasises strategy, governance, architecture, engineering and education, all of which are critical to the defence in depth that prevents incidents rather than merely responding to them. As noted in the introduction, structural shortcomings in governance shift responsibility onto response, so it is little wonder that this narrative persists and biases systems toward recovery instead of robustness and resilience.

Incident response is a critical part of security. It is the sharp end that fills the gaps left by our other layers of defence-in-depth. We depend on ops when things go wrong, but when security depends on brilliance under pressure as the norm, rather than the exception, the system has already failed. Autonomous systems make this distortion visible.

More than a technical challenge

Alongside the hero narrative, the focus of cybersecurity remains on technical domains. We have seen heavy investment in technical depth, delivery speed and capability expansion, often in tools that overwhelm rather than inform ops teams. Far less attention has been given to systems thinking, holistic strategy, user-centred documentation, and governance mechanisms that evolve alongside technology; instead, governance lags and assurance relies on downstream monitoring and incident response. This model has scaled poorly in the face of increasingly prevalent and sophisticated attacks, and it scales even more poorly as autonomy increases.

Cybersecurity is socio-technical in nature: it depends on the interaction between technology, human behaviour and organisational processes. A property of socio-technical systems is that improvement requires optimisation of the entire system, not just one of its subsystems; in fact, optimising a single subsystem can actually reduce overall system performance.

Applying a socio-technical lens to cybersecurity, the result is predictable. The technical focus described above has seen the technical subsystem advance rapidly while the human and organisational subsystems lag. Governance is reactive rather than proactive, and risk accumulates, reinforcing the hero narrative. The same pattern manifests at the societal level: the belief that cybersecurity is a technical problem makes it the domain of experts and “someone else’s” problem. All of this reduces societal cyber robustness and lands us in a place where people grant full and open access to autonomous agents.

The real discussion organisations need to have now is how to mature their governance systems from one that lags (shelfware and point-in-time audits) to one that approaches real-time assurance. One practical response is to uplift from a traditional GRC model to a continuous governance loop, driven by assurance, transforming governance from shelfware into an active, living process that informs architecture, engineering and operations. I’m not alone here: there is excellent discussion occurring in this space. Aaron Sempf, Field CTO at AWS, has written a strong piece on autonomy, control and the need for governance to mature rapidly: Autonomy Is a Control Problem, Not a Trust Problem.

Conclusion

OpenClaw is not the problem, and neither is autonomy, but it makes an excellent case study. The problem is imbalance: technology is advancing faster than security awareness; automation is exposing flawed governance systems; and technical excellence, wrapped in a hero narrative, has become decoupled from systems thinking and from understanding the human factors of cybersecurity.

If the cybersecurity profession is to support autonomy without fragility, it must mature beyond a purely technical discipline. We must recognise the socio-technical nature of cybersecurity: that human factors and organisational considerations are as important as technical controls and solutions. When we do, we will be on the road to creating governance systems capable of scaling to meet the challenge of automation, and to creating products and awareness that increase society’s cyber robustness, helping people make more informed security decisions.
