DevSecOps vs SecDevOps: Choosing the Right Security Model for Your Organisation in 2026

Security in software development is no longer optional — but the model you choose to implement it makes a profound difference to both security outcomes and delivery velocity. Two approaches dominate current practice: DevSecOps, which weaves security continuously throughout the development pipeline, and SecDevOps, which places security architecture as the absolute prerequisite before any code is written. Understanding when each is appropriate is one of the key judgment calls a security architect or leader must make in 2026.

What Is DevSecOps? The Shift-Left Imperative

DevSecOps is a cultural and technical transformation that embeds security practices into every phase of the Software Development Lifecycle (SDLC) — from initial design through deployment and post-release monitoring. Its governing principle is straightforward: the earlier a vulnerability is detected, the cheaper and less disruptive it is to remediate.

NIST’s Secure Software Development Framework (SSDF), published as SP 800-218, provides the authoritative reference for DevSecOps implementation. It organises practices around four groups: Prepare the Organisation (PO), Protect the Software (PS), Produce Well-Secured Software (PW), and Respond to Vulnerabilities (RV). Organisations that align their pipelines to SSDF are implementing DevSecOps in a structured, auditable way.

The five core principles of DevSecOps are:

  1. Shift-Left Security: Developers scan their own code as they build it, reducing the gap between writing vulnerable code and fixing it from weeks to minutes.
  2. Automated Security in the CI/CD Pipeline: Every commit triggers automated SAST, DAST, SCA, and secrets scanning — no manual review required at every step.
  3. Shared Security Ownership: Developers, DevOps engineers, and security specialists collaborate, with security functioning as an enabler rather than a gatekeeper.
  4. Security as Code (SaC): Policies are codified in machine-readable form, ensuring every deployment automatically inherits correct security configurations.
  5. Continuous Monitoring: Real-time monitoring of live applications detects emerging threats and feeds insights directly into the next sprint.
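
Principle 2 can be made concrete with a small sketch. This is a hypothetical pipeline gate, not a real tool's API: it takes scanner findings (illustrative dictionaries standing in for SAST/SCA output) and fails the build when any finding meets a severity threshold.

```python
# Minimal sketch of an automated pipeline security gate: fail the
# build when a scanner reports findings at or above a severity
# threshold. The findings below are illustrative; a real pipeline
# would parse SARIF or tool-specific JSON from the SAST/SCA stage.

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(findings, threshold="high"):
    """Return (passed, blocking): blocking lists findings whose
    severity meets or exceeds the threshold."""
    floor = SEVERITY_RANK[threshold]
    blocking = [f for f in findings
                if SEVERITY_RANK[f["severity"]] >= floor]
    return (not blocking, blocking)

findings = [
    {"rule": "hardcoded-secret", "severity": "critical", "file": "app.py"},
    {"rule": "weak-hash", "severity": "medium", "file": "auth.py"},
]
passed, blocking = gate(findings)
```

In a real CI/CD system this function would sit behind the scan stage and return a non-zero exit code, so the commit cannot progress until the blocking findings are remediated.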

What Is SecDevOps? Security-by-Design at Scale

SecDevOps takes a more rigorous architectural stance: no development begins until the security architecture is fully defined and validated. While DevSecOps brings security earlier into development, SecDevOps positions it entirely before development starts — in the planning and threat modelling phase.

This methodology is standard in regulated, high-stakes environments: defence, financial market infrastructure, healthcare, and critical national infrastructure — sectors where a single exploitable vulnerability carries severe legal, financial, or operational consequences.

The five defining characteristics of SecDevOps are:

  1. Secure-by-Design Architecture: Applications are engineered around strict security standards from inception. Security is not retrofitted; it is foundational.
  2. Security as Code — Automating Compliance: Firewall rules, access controls, and compliance checks are codified and applied consistently across all environments.
  3. Infrastructure as Code (IaC) — Hardened Environments: Pre-hardened infrastructure is provisioned through code, eliminating insecure manual configurations.
  4. Automated Governance and Regulatory Compliance: The system continuously validates itself against HIPAA, GDPR, ISO 27001, or APRA CPS 234 and halts workflows on violation detection.
  5. Embedded Security Expertise: Security professionals are core team members from day one — not consultants parachuted in at review gates.
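
Characteristics 2 and 4 can be sketched as policy-as-code. The rule names and configuration shapes below are assumptions for illustration, not a real policy engine's schema:

```python
# Illustrative policy-as-code check: codified rules are applied to
# every environment's configuration, and any violation halts the
# workflow, mirroring SecDevOps characteristics 2 and 4. Rule names
# and config fields are hypothetical.

POLICY = {
    "encryption_at_rest": lambda cfg: cfg.get("encryption_at_rest") is True,
    "no_public_ingress":  lambda cfg: "0.0.0.0/0" not in cfg.get("ingress", []),
    "mfa_required":       lambda cfg: cfg.get("mfa") == "enforced",
}

def validate(cfg):
    """Return the names of violated policy rules (empty = compliant)."""
    return [name for name, check in POLICY.items() if not check(cfg)]

prod = {"encryption_at_rest": True, "ingress": ["10.0.0.0/8"], "mfa": "enforced"}
dev  = {"encryption_at_rest": False, "ingress": ["0.0.0.0/0"], "mfa": "optional"}

violations = validate(dev)  # non-empty: this deployment would be halted
```

In practice the same idea is implemented with engines such as Open Policy Agent, where rules are evaluated automatically against every environment rather than hand-checked.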

Head-to-Head: When Each Model Wins

Dimension | DevSecOps | SecDevOps
Delivery velocity | High — security accelerates delivery | Moderate — thorough design phase upfront
Regulatory environment | Consumer, SaaS, general enterprise | Finance, defence, healthcare, critical infra
Risk tolerance | Moderate — iterative risk reduction | Low — zero-defect policy preferred
Security ownership | Shared across dev, ops, security | Security-led, top-down governance
Cost model | Lower — leverages existing pipelines | Higher — dedicated security architecture team
Cloud-native fit | Strong — microservices, auto-scaling | Strong — IaC and hardened provisioning

Choosing the Right Approach: A Practitioner’s Framework

Choose DevSecOps if your organisation operates in a competitive consumer market, values rapid feature delivery, runs on cloud-native infrastructure, or cannot justify a standalone security department for every development team.

Choose SecDevOps if you operate under APRA CPS 234, ISO 27001, PCI-DSS, or HIPAA obligations; process highly sensitive data; manage financial market infrastructure; or cannot accept the reputational and regulatory consequences of releasing software with known vulnerabilities.

The most mature organisations adopt a hybrid posture: applying SecDevOps rigour during the design and threat-modelling phase, then leveraging DevSecOps automation during execution. In financial market infrastructure — where I work — this hybrid model is increasingly the norm: architectural security gates upfront, automated pipeline controls throughout, and continuous monitoring post-deployment.

The Australian Context: APRA and ASIC Expectations

For Australian organisations, APRA CPS 234 (Information Security) mandates that entities maintain information security capabilities commensurate with the size and extent of threats to their information assets. This obligation extends explicitly to software development and third-party dependencies. The ASIC Cyber Resilience Good Practices guide (2023) similarly emphasises secure-by-design as a baseline expectation for regulated entities, particularly those operating financial market infrastructure.

Organisations that cannot demonstrate secure development practices in their third-party assessments and vendor questionnaires are increasingly being flagged as elevated risk. Whether you choose DevSecOps, SecDevOps, or a hybrid, the expectation is the same: security must be systematic, evidenced, and auditable.

References and Further Reading

  • NIST SP 800-218 — Secure Software Development Framework (SSDF), Version 1.1 (2022)
  • OWASP DevSecOps Guideline — owasp.org
  • APRA CPS 234 — Information Security (2019)
  • ASIC — Cyber Resilience Good Practices (2023)
  • Gartner, How to Integrate Security Into DevOps (2024)
  • CISA, Shifting the Balance of Cybersecurity Risk: Principles for Security-by-Design (2023)

The Golden Circle of Cybersecurity: Aligning Security Strategy with Business Value

In many organisations, cybersecurity is still perceived as a technical cost centre — a function that consumes budget, generates audit findings, and slows down projects. This perception is both inaccurate and damaging. When security is positioned correctly, it becomes a strategic enabler of business success: protecting revenue, sustaining customer trust, enabling digital transformation, and differentiating the organisation in competitive markets.

One of the most effective frameworks for communicating this alignment is Simon Sinek’s Golden Circle, applied to security strategy: Why, What, and How. It reframes security from a reactive control function into a proactive business value protector.

WHY: Protecting Business Value and Competitive Advantage

Every organisation’s security programme must begin with a clear articulation of purpose. Not “to comply with ISO 27001” — that is a mechanism, not a purpose. The genuine Why of cybersecurity is the protection of what the organisation values most: its revenue-generating processes, its customer data and the trust built around it, and its competitive differentiation.

Organisations that cannot articulate their security purpose at a business level consistently fail to secure adequate investment. Security becomes a cost centre precisely because it has not been positioned as a value protector. IBM's Cost of a Data Breach Report 2024 (research conducted by Ponemon Institute) found that the global average cost of a breach reached USD 4.88 million — a 10% increase from 2023. For organisations in financial services and healthcare, the costs are substantially higher when regulatory penalties and reputational damage are included.

The Why must also drive prioritisation. Not all assets carry equal business value. A mature security programme focuses its resources on protecting the assets whose compromise would most directly damage the organisation’s ability to operate, compete, and maintain stakeholder trust.

WHAT: Defining the Right Controls — Risk-Driven, Not Checklist-Driven

Once the purpose is clear, the next step is determining which controls are needed to protect it. This is where many organisations go wrong: they implement controls based on what auditors expect rather than what business risk requires. The result is a programme that passes assessments but fails to address the actual threat landscape.

A risk-driven control strategy organises controls into four categories:

  • Preventive Controls: Identity and Access Management (IAM), network segmentation, secure configurations, and endpoint hardening that reduce the probability of a breach.
  • Detective Controls: SIEM, threat intelligence platforms, user behaviour analytics (UEBA), and EDR that identify threats before they escalate.
  • Corrective Controls: Incident response plans, backup and recovery mechanisms, and crisis management frameworks that restore operations after an event.
  • Governance Controls: Policies, standards, risk registers, and reporting mechanisms that ensure decisions are made with accurate information and clear accountability.

NIST CSF 2.0 organises these into six functions: Govern, Identify, Protect, Detect, Respond, and Recover. The addition of the Govern function in the 2024 update explicitly recognises that control effectiveness depends on clear accountability and strategic intent — not just technical implementation.
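
As a rough illustration of keeping the taxonomy risk-driven rather than checklist-driven, the four categories can be cross-referenced against the CSF 2.0 functions they chiefly support. The control-to-function pairings below are indicative only; real mappings are many-to-many:

```python
# Illustrative mapping of example controls to the four risk-driven
# categories above and the NIST CSF 2.0 function each chiefly
# supports. Indicative pairings only; real mappings are many-to-many.

CONTROLS = {
    "IAM":                    ("preventive", "Protect"),
    "Network segmentation":   ("preventive", "Protect"),
    "SIEM":                   ("detective",  "Detect"),
    "UEBA":                   ("detective",  "Detect"),
    "Incident response plan": ("corrective", "Respond"),
    "Backup and recovery":    ("corrective", "Recover"),
    "Risk register":          ("governance", "Govern"),
}

def by_category(category):
    """List controls in one of the four risk-driven categories."""
    return sorted(c for c, (cat, _) in CONTROLS.items() if cat == category)
```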

HOW: Enabling Through Technology, Process, and Culture

The How layer is where strategy is executed. It encompasses the technology stack, the processes that govern its use, and the culture that sustains it over time.

Technology enablement includes EDR, SIEM, cloud security platforms (CSPM, CWPP), DLP, and Zero Trust architecture components. Technology alone, however, does not produce security outcomes. It produces data — which must be acted upon by capable people operating within clear processes.

Process integration includes risk-based vulnerability management, continuous monitoring and threat hunting, incident response lifecycle management, and secure software development lifecycle (SSDLC) integration. Mature programmes automate as much of this as possible, reducing dependence on individual effort and enabling consistent outcomes at scale.

Culture and people represent the most under-invested layer in most security programmes. Security awareness training that changes behaviour — not just achieves compliance — requires understanding of cognitive biases, social engineering techniques, and the psychology of decision-making under uncertainty. Verizon's Data Breach Investigations Report (DBIR) consistently identifies the human element as a contributor to the majority of breaches, underscoring that technical controls alone are insufficient.

Bringing It Together: Security as a Strategic Differentiator

When the Golden Circle is applied consistently, the result is a security programme that earns and sustains executive confidence, secures appropriate investment, and produces measurable risk reduction. More importantly, it positions the security function as a strategic partner rather than a compliance overhead.

In the Australian context, this alignment is increasingly examined by APRA, ASIC, and the Australian Signals Directorate (ASD). The Essential Eight Maturity Model, ASD’s baseline control framework, rewards organisations that approach security strategically — with documented intent, measured outcomes, and continuous improvement cycles.

Organisations that invest in aligning their security strategy to business value are not just better protected. They are better positioned to grow.

References and Further Reading

  • Sinek, S. (2009). Start With Why. Portfolio/Penguin.
  • NIST Cybersecurity Framework 2.0 (February 2024)
  • IBM Security — Cost of a Data Breach Report 2024 (research by Ponemon Institute)
  • Verizon — Data Breach Investigations Report 2024
  • ASD — Essential Eight Maturity Model (2023)
  • APRA CPS 234 — Information Security

ISO/IEC 42001:2023 Explained: The AI Management Standard Every Security Professional Needs to Understand

Artificial intelligence adoption is accelerating across business — and regulators, boards, and customers are increasingly demanding that organisations demonstrate they manage it responsibly. A 2024 McKinsey survey found that fewer than one in four organisations have a mature AI governance function, despite the majority expecting AI-specific regulation within two years. ISO/IEC 42001:2023 — the world’s first international standard for AI Management Systems (AIMS) — provides the governance architecture organisations need to close that gap.

For security professionals holding CISSP, CCSP, or AAISM credentials, understanding ISO 42001 is no longer optional. AI systems introduce risk categories that traditional information security frameworks were not designed to address: model bias, adversarial attacks, data poisoning, membership inference, and system prompt injection. ISO 42001 is the standard that organises governance around these AI-specific risk vectors.

What Is ISO/IEC 42001?

ISO 42001 is a management system standard — structurally similar to ISO 27001 (information security) and ISO 9001 (quality management). It provides a framework for organisations that develop, provide, or deploy AI systems to establish, implement, maintain, and continuously improve an Artificial Intelligence Management System (AIMS). It follows the standard High-Level Structure (HLS) used across all ISO management system standards, making integration with existing ISO 27001 implementations more straightforward.

The standard covers the full AI lifecycle: design, development, deployment, operation, monitoring, updates, and retirement. It applies regardless of organisation size, sector, or AI maturity level.

Clause-by-Clause: The Governance Architecture

Clauses 1–3 (Scope, References, Definitions) establish the standard’s boundaries, reference ISO/IEC 22989 for AI terminology, and define key terms including AI management system, risk, data quality, and AI impact assessment. Alignment on terminology is foundational — it ensures that technical, legal, and business teams operate with a shared vocabulary during audits and governance reviews.

Clause 4 (Context of the Organisation) requires organisations to identify internal and external stakeholders, understand their expectations, and define the scope of the AIMS with precision. Is the organisation using AI for customer service automation or for clinical decision support? The risk profile differs dramatically, and the scope definition must reflect that.

Clause 5 (Leadership) assigns explicit accountability to leadership for AI governance. Senior management must establish an AI policy, assign clear roles and responsibilities, and demonstrate active oversight — not passive sign-off. This is consistent with the governance expectations of APRA CPS 234 and ASIC’s technology risk guidance, both of which require board-level accountability for material technology risks.

Clause 6 (Planning) is where risk management becomes operational. Organisations must identify AI-specific risks — including model bias, adversarial manipulation, unintended outputs, and data quality failures — assess their likelihood and impact, and plan mitigating controls. Clause 6 also requires setting AI objectives with measurable targets and an explicit mapping against Annex A controls.

Clauses 7–8 (Support and Operation) ensure that the AIMS is resourced and executed: skilled personnel, training, documentation, and operational controls across the AI lifecycle. Incident response plans for AI system failures and regular AI impact assessments are required at the operational level.

Clause 9 (Performance Evaluation) drives continuous measurement — tracking model performance, compliance status, and incident metrics — through internal audits and management reviews. This is the evidence layer that satisfies auditors and regulators.

Clause 10 (Improvement) closes the loop: root cause analysis of nonconformities, corrective action, and systematic improvements to keep the AIMS current as AI technology and threat landscapes evolve.

Annex A: The 38 AI Controls

Annex A lists 38 recommended controls across risk areas including data quality, bias management, transparency, human oversight, adversarial robustness, and incident management. Organisations are not required to implement all 38, but they must review each one and document their applicability decisions in a Statement of Applicability — the same approach used in ISO 27001 implementations. Auditors will examine both the controls implemented and the rationale for any exclusions.
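
A Statement of Applicability is, at its core, a structured record. A minimal sketch, using invented control names rather than the standard's actual Annex A identifiers:

```python
# Sketch of a Statement of Applicability (SoA): each Annex A control
# gets an explicit applicability decision plus a documented rationale,
# including for exclusions. Control names here are illustrative, not
# the standard's actual identifiers.
from dataclasses import dataclass

@dataclass
class SoAEntry:
    control: str
    applicable: bool
    rationale: str

soa = [
    SoAEntry("Data quality management", True,
             "Training data sourced externally; quality checks required."),
    SoAEntry("Human oversight of AI decisions", True,
             "Model outputs feed customer-facing decisions."),
    SoAEntry("Third-party AI supplier controls", False,
             "All models developed and hosted in-house."),
]

def exclusions(entries):
    """The exclusions an auditor will probe: each needs a rationale."""
    return [(e.control, e.rationale) for e in entries if not e.applicable]
```

The point of the structure is the third field: an exclusion without a recorded rationale is exactly what an ISO 42001 auditor will flag.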

ISO 42001 and the AAISM Certification

ISACA’s Advanced in AI Security Management (AAISM) certification — which I hold — aligns closely with ISO 42001’s risk governance architecture. AAISM prepares professionals to assess AI risk from an auditor’s perspective: evaluating membership inference controls, differential privacy implementations, model integrity verification, and AI system resilience. The two credentials complement each other naturally: ISO 42001 provides the management framework; AAISM provides the security-specific assurance competency.

Why This Matters for Australian Organisations

Australia’s AI regulatory environment is evolving rapidly. The Australian Government’s Safe and Responsible AI framework (2023) and CSIRO’s AI Ethics Framework both emphasise transparency, accountability, and human oversight — principles that map directly to ISO 42001’s governance architecture. Organisations that establish ISO 42001-aligned AI governance now will be significantly better positioned when mandatory AI governance requirements are formalised.

References and Further Reading

  • ISO/IEC 42001:2023 — Information technology — Artificial intelligence — Management system
  • ISO/IEC 22989:2022 — AI Concepts and Terminology
  • ISACA — AAISM Certification Body of Knowledge (2024)
  • NIST AI Risk Management Framework (AI RMF 1.0) — nist.gov
  • Australian Government — Safe and Responsible AI in Australia (2023)
  • McKinsey Global Survey on AI (2024)
  • OWASP LLM Top 10 (2024) — owasp.org

CVE-2026-39808: FortiSandbox PoC Exploit Released — What Security Teams Must Do Now

A proof-of-concept (PoC) exploit for a critical unauthenticated remote code execution (RCE) vulnerability in Fortinet’s FortiSandbox was publicly released in April 2026, dramatically raising the exploitation risk for organisations that have not yet patched. Tracked as CVE-2026-39808, the vulnerability allows an unauthenticated attacker to execute arbitrary OS commands with root privileges — the highest possible access level on the affected appliance.

The speed of PoC release following official disclosure is a recurring pattern in the Fortinet vulnerability timeline. Security teams should treat any unpatched FortiSandbox deployment in the 4.4.0–4.4.8 range as actively compromised until confirmed otherwise.

Vulnerability Summary

Attribute | Detail
CVE ID | CVE-2026-39808
Advisory | FG-IR-26-100
Severity | Critical (CVSS 9.8)
Authentication required | No — unauthenticated exploitation
Affected versions | FortiSandbox 4.4.0 – 4.4.8
Patch available | Yes — versions outside the affected range
PoC publicly available | Yes — published on GitHub
Attack vector | Network — exploitable remotely
Privileges obtained | Root / OS-level command execution

Technical Mechanics

The vulnerability stems from improper input validation within a specific FortiSandbox web endpoint. Attackers can inject OS commands through a GET parameter using a pipe character, breaking out of the intended application logic and forcing the underlying server to execute unauthorised commands. Command output is redirected to a text file stored in the web root, allowing the attacker to retrieve results via a standard browser request.

The exploit requires no credentials, no prior access, and no complex tooling. A single crafted curl command achieves root-level RCE. This places CVE-2026-39808 in the highest tier of exploitability — comparable to CVE-2023-27997 (FortiOS SSL-VPN) and CVE-2022-42475, both of which were weaponised within days of PoC disclosure.

Threat Actor Context

Fortinet appliances are systematically targeted by advanced persistent threat (APT) groups and ransomware operators. CISA’s Known Exploited Vulnerabilities (KEV) catalogue includes numerous Fortinet CVEs that were actively weaponised within hours of PoC publication. The simplicity of this exploit — combined with the widespread enterprise deployment of FortiSandbox — makes it an attractive target for automated botnet scanning and ransomware operators seeking initial access to corporate networks.

FortiSandbox’s role as a network security appliance compounds the risk. A compromised sandbox can be used to inspect and manipulate traffic, exfiltrate intelligence about the protected network, or serve as a pivot point for lateral movement — all while appearing to function normally from an operational perspective.

Immediate Mitigation Steps

  1. Upgrade immediately to a FortiSandbox version outside the 4.4.0–4.4.8 affected range. Consult Fortinet’s official PSIRT advisory (FG-IR-26-100) for the confirmed safe versions.
  2. Review web access logs for suspicious GET requests targeting the vulnerable endpoint. Focus on requests from external or unexpected source IPs.
  3. Inspect web root directories for unexpected text files that may indicate the PoC has already been executed against the appliance.
  4. Restrict network access to FortiSandbox management interfaces — limit to authorised management networks and require jump host or VPN access for administrative sessions.
  5. Enable IDS/IPS signatures for the CVE-2026-39808 exploit pattern on upstream security controls.
  6. Threat hunt for indicators of post-exploitation activity: new cron jobs, unexpected network connections from the FortiSandbox appliance, or unfamiliar processes.
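
Steps 2 and 3 lend themselves to simple automation. The sketch below scans access-log lines for GET requests whose query string contains a pipe character (the injection primitive described earlier); the log format and endpoint path are assumptions, not details from the advisory:

```python
# Hedged sketch of mitigation step 2: flag access-log lines whose GET
# query string carries a literal or URL-encoded pipe character. The
# log format and endpoint path are assumed for illustration and are
# not taken from the Fortinet advisory.
import re
from urllib.parse import unquote

GET_URL = re.compile(r"GET\s+(\S+)")

def suspicious_requests(log_lines):
    """Return log lines whose decoded query string contains a pipe."""
    hits = []
    for line in log_lines:
        m = GET_URL.search(line)
        if not m:
            continue
        url = unquote(m.group(1))  # decode %7C -> "|", %20 -> " ", etc.
        if "?" in url and "|" in url.split("?", 1)[1]:
            hits.append(line)
    return hits

logs = [
    '203.0.113.7 - - "GET /module/endpoint?cmd=id%7Ccat%20/etc/passwd HTTP/1.1" 200',
    '10.0.0.5 - - "GET /login?user=alice HTTP/1.1" 200',
]
flagged = suspicious_requests(logs)
```

A real hunt would also cover POST bodies and other shell metacharacters, but even this crude filter surfaces the obvious opportunistic scanning that follows a PoC release.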

Broader Patch Management Observations

This disclosure reinforces a persistent challenge in enterprise security: the gap between patch availability and patch deployment. Fortinet patched this vulnerability quietly in November 2025 before officially disclosing it in April 2026, meaning organisations that keep their appliances current were protected months before exploit details became public. Yet many organisations lag significantly on patching network security appliances, often citing change management overhead or operational continuity concerns.

For organisations operating under APRA CPS 234 or ASIC RG 255, timely patching of critical network security appliances is not merely best practice — it is an explicit expectation. The ASD Essential Eight’s Patch Applications control requires vulnerabilities in internet-facing services to be patched within 48 hours when a working exploit exists, a requirement that applies from Maturity Level One upward.

References and Further Reading

  • Fortinet PSIRT Advisory — FG-IR-26-100
  • GBHackers — PoC Released for FortiSandbox Flaw (April 2026)
  • CISA Known Exploited Vulnerabilities Catalogue — cisa.gov
  • ASD Essential Eight Maturity Model — Patch Applications (2023)
  • NIST NVD — CVE-2026-39808

Threat Modelling with STRIDE: A Practitioner’s Guide to Systematic Security Design

Threat modelling is one of the most underutilised techniques in enterprise security. Despite being a core competency in frameworks ranging from NIST SP 800-154 to ISO/IEC 27001:2022 Annex A (A.8.25 — Secure Development Lifecycle), the discipline is frequently displaced by reactive vulnerability management and compliance-driven control assessments. STRIDE — Microsoft’s threat categorisation model — provides a structured, accessible framework for conducting threat modelling at the application and system design level, and for communicating findings to non-security stakeholders in terms they understand.

What Is STRIDE?

STRIDE is an acronym representing six categories of security threats, developed by Microsoft researchers Loren Kohnfelder and Praerit Garg in 1999 and widely adopted as a foundational threat modelling methodology. Each category maps to a specific security property being violated:

Threat Category | Security Property Violated | Example
Spoofing | Authentication | Impersonating a legitimate user or system component
Tampering | Integrity | Modifying data in transit or storage without authorisation
Repudiation | Non-repudiation | Denying having performed an action due to insufficient logging
Information Disclosure | Confidentiality | Exposing sensitive data to unauthorised parties
Denial of Service | Availability | Exhausting resources to prevent legitimate use
Elevation of Privilege | Authorisation | Gaining capabilities beyond those intended

The STRIDE Threat Modelling Process

STRIDE is applied through a four-step process that can be conducted at design time (most effective) or retrospectively against existing systems:

Step 1: Define the Scope and Create a System Model

Produce a Data Flow Diagram (DFD) that captures all components of the system: processes, data stores, external entities, and the data flows between them. The DFD is the primary artefact against which threats are enumerated. Each element type has a default set of applicable STRIDE threats: processes are susceptible to all six; data stores are primarily susceptible to tampering, information disclosure, and denial of service; external entities are primarily susceptible to spoofing.
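
The default applicability described above can be seeded programmatically. A sketch, following the mapping as stated in this article (published treatments vary slightly; some also apply Repudiation to log-bearing data stores), with hypothetical element names:

```python
# Sketch of the default STRIDE applicability per DFD element type, as
# described in Step 1. The per-type defaults follow this article's
# mapping; published variants differ slightly. Element names are
# hypothetical.

STRIDE = ("Spoofing", "Tampering", "Repudiation",
          "Information Disclosure", "Denial of Service",
          "Elevation of Privilege")

DEFAULTS = {
    "process": set(STRIDE),                       # all six apply
    "data_store": {"Tampering", "Information Disclosure",
                   "Denial of Service"},
    "external_entity": {"Spoofing"},
}

def enumerate_threats(dfd):
    """Yield (element, category) pairs to seed Step 2's enumeration."""
    for name, etype in dfd:
        for category in sorted(DEFAULTS[etype]):
            yield name, category

dfd = [("web_app", "process"),
       ("orders_db", "data_store"),
       ("customer", "external_entity")]
threats = list(enumerate_threats(dfd))
```

Each generated pair becomes a question for the workshop: "How could an attacker tamper with orders_db?" — which is exactly the Step 2 exercise that tools like OWASP Threat Dragon automate.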

Step 2: Enumerate Threats

Systematically apply each STRIDE category to each element in the DFD. The question for each combination is: “How could an attacker exercise this threat category against this component?” Tools such as Microsoft Threat Modeling Tool and OWASP Threat Dragon automate parts of this enumeration and maintain DFD-to-threat mappings.

Step 3: Assess and Prioritise Threats

Use DREAD (Damage, Reproducibility, Exploitability, Affected Users, Discoverability) or CVSS-based scoring to prioritise identified threats by risk. This produces an actionable ranked threat list that can inform architectural decisions, security requirements, and remediation planning.
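
A minimal DREAD ranking, with illustrative threats and scores (each factor scored 1–10, ranked by mean):

```python
# Sketch of Step 3: score each threat 1-10 on the five DREAD factors
# and rank by mean score. Threat names and scores are illustrative.

DREAD_FACTORS = ("damage", "reproducibility", "exploitability",
                 "affected_users", "discoverability")

def dread_score(scores):
    """Mean of the five DREAD factor scores."""
    return sum(scores[f] for f in DREAD_FACTORS) / len(DREAD_FACTORS)

threats = {
    "SQL injection on /orders": dict(damage=9, reproducibility=8,
                                     exploitability=8, affected_users=9,
                                     discoverability=7),
    "Verbose error pages": dict(damage=3, reproducibility=9,
                                exploitability=6, affected_users=4,
                                discoverability=8),
}

# Highest-risk first: this becomes the ranked list fed into
# remediation planning and security requirements.
ranked = sorted(threats, key=lambda t: dread_score(threats[t]),
                reverse=True)
```

Note that DREAD scoring is subjective; many teams now prefer CVSS-based scoring for consistency, but the prioritisation mechanic is the same.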

Step 4: Mitigate and Validate

For each prioritised threat, identify mitigating controls — whether preventive (authentication strengthening, encryption), detective (logging, monitoring), or architectural (trust boundary redesign, attack surface reduction). The threat model is then maintained as a living document and revisited when the system changes materially.

STRIDE in the Context of CISSP and Secure Architecture

STRIDE maps directly to CISSP CBK Domain 3 (Security Architecture and Engineering) and Domain 8 (Software Development Security). Understanding threat categories at a design level is a prerequisite for producing security architectures that address actual risk — as opposed to architecture that simply implements a control checklist.

In my work at Cboe, threat modelling forms part of the security review process for new application deployments. The DFD approach is particularly valuable because it creates a shared vocabulary between security architects and application development teams — reducing the friction that often arises when security reviews are perceived as compliance gates rather than design contributions.

STRIDE for Cloud and API-Heavy Architectures

STRIDE remains relevant in cloud-native environments, but its application requires adaptation. Key considerations for modern architectures include:

  • Spoofing in OAuth/OIDC flows: Token theft, confused deputy attacks, and client impersonation are spoofing threats specific to modern authentication patterns.
  • Tampering in CI/CD pipelines: Supply chain attacks that modify build artefacts or container images are tampering threats at the infrastructure level.
  • Information Disclosure in serverless: Environment variable leakage, excessive IAM permissions, and shared execution environment risks are information disclosure threats native to serverless architectures.
  • Elevation of Privilege in Kubernetes: Container escape, pod security misconfigurations, and RBAC weaknesses are privilege escalation threats that STRIDE helps enumerate systematically.

References and Further Reading

  • Shostack, A. — Threat Modeling: Designing for Security (Wiley, 2014)
  • Microsoft — STRIDE Threat Model Documentation
  • OWASP Threat Dragon — owasp.org
  • NIST SP 800-154 — Guide to Data-Centric System Threat Modeling
  • NIST SP 800-218 — Secure Software Development Framework (SSDF)
  • ISO/IEC 27001:2022 — Annex A.8.25: Secure Development Life Cycle
  • (ISC)² CISSP CBK — Domain 3: Security Architecture and Engineering

Leadership Transition Is the Real Test of Security Programme Maturity

Most security programmes do not fail because a new leader is ineffective. They fail because the previous leader was carrying far more of the programme than anyone had recognised. Leadership transitions are the most reliable diagnostic of whether a security programme is genuinely mature — or whether it was a high-performing individual operating within a structurally immature system.

This distinction matters enormously for practitioners building programmes, executives evaluating them, and incoming leaders inheriting them. Understanding the difference between a mature programme and a well-led one is one of the more important — and underexamined — questions in security governance.

What Leadership Transitions Actually Expose

When a security leader departs, the structural elements of a programme typically survive intact. Dashboards remain populated. Policies continue to exist. Roadmaps are still documented. But something begins to shift almost immediately:

  • Budget conversations become harder — investment that was approved without challenge now requires justification from scratch.
  • Governance decisions that were settled get reopened.
  • Cross-functional alignment weakens as informal relationships are no longer maintained.
  • Escalation paths that previously worked smoothly begin to stall.
  • Momentum slows, and priorities drift.

None of this reflects a change in strategy or tooling. It reflects the departure of the leader who was sustaining the programme through personal credibility, executive relationships, and undocumented institutional judgment — none of which transferred with the role.

The Hidden Layer: Leadership Capital

Every security programme runs on a visible layer — governance frameworks, roadmaps, metrics, tooling — and an invisible layer: the accumulated leadership capital of the person running it. That invisible layer includes:

  • Executive trust built through years of credible risk communication.
  • Political relationships that unblock funding and remove friction.
  • Institutional context — which decisions were compromises, which initiatives failed and why, which stakeholders require careful management.
  • Judgment about which battles are technical and which are organisational.

None of this appears in a governance charter. None of it is preserved in documentation. And when the leader leaves, it goes with them. The incoming leader inherits the artefacts — the outputs of prior decisions — but not the reasoning, the relationships, or the political context that produced them.

Documentation Preserves Structure — Not Judgment

Organisations frequently overestimate what documentation preserves. A well-documented risk register captures assessed risks and assigned treatments. It does not explain why certain risks were accepted while others were escalated. A roadmap documents sequencing. It does not preserve the political reasoning that produced that sequencing.

This is the documentation paradox in security governance: the artefacts that survive a transition are precisely those that required the least leadership judgment to produce. The elements that required the most — stakeholder navigation, risk prioritisation under uncertainty, credibility maintenance with executives — leave no written trace.

ISACA’s COBIT 2019 governance framework recognises this challenge explicitly. One of its governance system principles, Governance Distinct From Management, acknowledges that governance effectiveness depends not just on structures but on the accountability relationships and information flows that sustain them. When those relationships are personalised rather than institutionalised, leadership transitions break them.

Strong Leadership Is Not the Same as Programme Maturity

A strong security leader can produce excellent outcomes: high visibility, strong executive trust, rapid decision-making, and measurable risk reduction. But if those outcomes depend disproportionately on one individual’s presence, the programme is still immature — regardless of how impressive its outputs appear.

True maturity means the programme remains effective after leadership changes. Governance mechanisms work without executive intervention. Prioritisation logic survives scrutiny by a successor. Institutional relationships are codified — embedded in vendor contracts, governance charters, and stakeholder engagement models — rather than residing in personal networks.

The practical implication: a programme that looks mature during a period of stable, trusted leadership may be fragility dressed in governance clothing. The only reliable test is whether it performs well after that leader departs.

What Incoming Leaders Should Do First

For professionals stepping into a new security leadership role, this reality demands a specific diagnostic approach. Before evaluating tools, controls, or roadmaps, the most important questions are:

  1. Which decisions in this programme depend on informal relationships rather than formal governance?
  2. Where has personal credibility substituted for documented process?
  3. Which governance mechanisms work only because of the previous leader’s personality?
  4. Which stakeholders require careful management that no governance document acknowledges?
  5. Would the programme’s roadmap survive challenge by an informed, independent reviewer?

Answering these questions before making changes is the difference between inheriting a mature programme and discovering — after proposing what appears to be a reasonable change — that the programme’s functioning depended on something invisible and now gone.

Building Programmes That Survive You

The most important long-term contribution a security leader can make is building a programme that continues performing after they leave. That means consciously and consistently doing things that most leaders find uncomfortable: documenting reasoning, not just outcomes; institutionalising relationships through governance structures; and creating conditions under which governance functions without informal intervention.

A security programme should be evaluated not on how well it performs under a respected, trusted leader — but on whether it would survive their departure. By that test, many programmes that appear mature are not.

References and Further Reading

  • ISACA — COBIT 2019 Framework: Governance and Management Objectives
  • Rathbun, D. — The Critical Path Newsletter, LinkedIn (April 2026)
  • Harvard Business Review — What New Leaders Need to Know About Cybersecurity
  • Gartner — CISO Succession Planning and Security Program Resilience (2024)
  • (ISC)² — CISSP CBK Domain 1: Security and Risk Management

Microsoft’s Forced Windows 11 24H2 Rollout: Security Implications for Enterprise IT Teams

Microsoft has initiated an automated, machine-learning-driven rollout to upgrade unmanaged Windows 11 devices to version 24H2. While the security improvements in 24H2 are substantive, the forced nature of this rollout creates operational, compliance, and security governance challenges that enterprise teams must address proactively.

What Is Changing and Why It Matters

Windows 11 24H2 introduces several significant security enhancements: improved Smart App Control capabilities, expanded Windows Protected Print Mode, enhanced Credential Guard defaults, and Rust-based kernel security improvements that reduce memory safety vulnerabilities. For organisations still on earlier Windows 11 builds, these improvements are meaningful — particularly the kernel hardening changes that address a class of vulnerabilities that have been actively exploited by APT actors in recent years.

However, Microsoft’s use of ML-based automatic upgrade targeting for unmanaged devices introduces risks of its own. Devices that receive the upgrade outside of a managed change management process may:

  • Experience compatibility issues with legacy enterprise applications not yet validated against 24H2.
  • Bypass Windows Update for Business deferral policies where those policies are misconfigured or incompletely applied.
  • Receive the upgrade during production hours, causing unexpected reboots and operational disruption.
  • Introduce configuration drift if 24H2-specific security defaults differ from organisational baselines.

Security Governance Considerations

For organisations operating under APRA CPS 234, ISO/IEC 27001:2022, or ASD Essential Eight requirements, uncontrolled OS upgrades on endpoints represent a configuration management risk. The ASD Essential Eight’s Application Control and Patch Operating Systems controls both depend on known, validated endpoint states. An automated OS upgrade that has not been through the organisation’s change management process violates the foundational assumption of those controls: that the environment is known and intentionally configured.

Organisations using Microsoft Endpoint Manager (Intune) or WSUS for update management should verify that their deferral policies are applied correctly and that the 24H2 automatic rollout is not bypassing them. The ML-based targeting reportedly applies to devices Microsoft determines are “ready for upgrade” — but the criteria used by Microsoft’s algorithm may not align with organisational readiness criteria.
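
As one way to operationalise that verification, the sketch below evaluates policy values that an inventory tool has already exported from HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate into a dictionary. The value names (TargetReleaseVersion, TargetReleaseVersionInfo, DeferFeatureUpdates, DeferFeatureUpdatesPeriodInDays) are the documented Windows Update for Business policy values, but confirm them against Microsoft Learn before relying on a check like this; the function itself is an illustrative assumption, not part of any Microsoft tooling.

```python
def check_wufb_pinning(policy: dict) -> list[str]:
    """Flag gaps in Windows Update for Business feature-update controls.

    `policy` mirrors values collected from
    HKLM\\SOFTWARE\\Policies\\Microsoft\\Windows\\WindowsUpdate
    by whatever endpoint inventory tooling is already in place.
    """
    findings = []
    # Without a pinned target release, the device is eligible for the
    # automatic 24H2 offer once Microsoft's targeting deems it "ready".
    if policy.get("TargetReleaseVersion") != 1 or not policy.get("TargetReleaseVersionInfo"):
        findings.append("no target release pinned")
    # Feature-update deferral acts as a second, independent brake.
    if policy.get("DeferFeatureUpdates") != 1:
        findings.append("feature updates not deferred")
    elif policy.get("DeferFeatureUpdatesPeriodInDays", 0) == 0:
        findings.append("deferral enabled but period is 0 days")
    return findings
```

A device pinned to 23H2 with a non-zero deferral period returns an empty list; an unmanaged device with no policy values set surfaces both findings.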

Practical Steps for Security and IT Teams

  1. Audit endpoint compliance status: Identify all devices currently on Windows 11 builds prior to 24H2. Determine which are managed versus unmanaged.
  2. Verify Update policy enforcement: Confirm that Windows Update for Business deferral settings are correctly applied and are not being overridden.
  3. Validate CIS Benchmark alignment: CIS has published a Benchmark for Windows 11 24H2. Review the delta from the prior benchmark and identify any new security defaults that require explicit configuration.
  4. Test application compatibility: Run compatibility validation against business-critical applications before allowing the 24H2 upgrade to proceed in production.
  5. Update baseline documentation: Revise your CIS L1/L2 baseline documentation to reflect 24H2 configurations, particularly for Credential Guard, Smart App Control, and kernel protection settings.
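
The first step above can be sketched as a small script over an inventory export. The device tuple shape is illustrative (field names will differ between Intune, a CMDB, or a CSV export), and the build-to-version mapping uses the commonly cited Windows 11 build numbers; verify both against your tooling and Microsoft's release history before acting on the output.

```python
from collections import Counter

# Commonly cited Windows 11 feature-update build numbers (confirm against
# Microsoft's Windows 11 release history before relying on them).
BUILD_TO_VERSION = {22000: "21H2", 22621: "22H2", 22631: "23H2", 26100: "24H2"}


def summarise_estate(devices):
    """Bucket an endpoint inventory by Windows 11 feature version.

    Each device is a (hostname, build_number, managed) tuple, as might be
    derived from an Intune or CMDB export (field names are illustrative).
    Returns a per-version count and the hostnames of unmanaged devices
    still below 24H2 -- the population exposed to the automatic upgrade.
    """
    summary = Counter()
    exposed = []
    for hostname, build, managed in devices:
        version = BUILD_TO_VERSION.get(build, f"unknown ({build})")
        summary[version] += 1
        if not managed and version != "24H2":
            exposed.append(hostname)
    return summary, exposed
```

Running this over a full export gives both the version distribution for compliance reporting and a concrete worklist of devices that should be brought under management (or upgraded deliberately) before Microsoft's targeting reaches them.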

The Broader Observation: Vendor-Driven Change Management Risk

This event illustrates a recurring challenge in enterprise security: the tension between vendor-driven update cadences and organisational change management processes. Cloud-era software increasingly assumes continuous, automatic updates — a model that conflicts with the controlled, evidence-based change management that security governance frameworks require.

The resolution is not to resist updates; prompt patching is a core security control. It is to extend management coverage so that unmanaged devices are a genuinely small population, keep the managed update process fast enough to track the vendor's cadence, and ensure governance frameworks acknowledge and manage the residual risk of vendor-initiated change.

References and Further Reading

  • Microsoft — Windows 11 24H2 Release Notes and Security Changelog
  • CIS Benchmark for Windows 11, Release 24H2 — cisecurity.org
  • ASD Essential Eight Maturity Model — Patch Operating Systems (2023)
  • Microsoft Learn — Windows Update for Business Configuration
  • NIST SP 800-128 — Guide for Security-Focused Configuration Management