What It Really Takes to Lead Enterprise Security in 2026: A Practitioner’s Guide to CISO-Level Skills

Cybersecurity in 2026 is no longer a back-office IT function. It is a board-level strategic imperative. CISOs are expected not just to defend infrastructure but to enable business growth, sustain operational resilience, and communicate risk fluently in the language of executives and regulators. This shift demands a new type of professional: one who combines deep technical grounding with governance maturity, executive communication, and strategic vision.

Having spent over two decades across telecommunications, financial services, and exchange infrastructure — most recently as Information Security Specialist at Cboe APAC — I have witnessed this evolution firsthand. The skills that earn credibility in a boardroom are fundamentally different from those that earn credibility in a SOC. This post examines what CISO-level competence truly looks like, and why building it is one of the most important investments a security professional can make.

Why Security Leadership Has Become Non-Negotiable at the Executive Level

Three forces are driving this shift simultaneously.

1. Risk is now a board conversation. The ability to translate a complex vulnerability landscape into a clear, actionable risk narrative is one of the highest-value skills in the profession. Directors and C-suite executives need to make investment decisions based on risk data — and that requires a security leader who can speak their language. According to the NIST Cybersecurity Framework 2.0 (released February 2024), governance is now an explicit core function, not an afterthought. The Govern function sits at the centre of the new CSF 2.0 wheel — a signal that risk governance has matured into the primary leadership responsibility of the CISO.

2. Compliance frameworks are operationally demanding. Organisations operating under ISO/IEC 27001:2022, APRA CPS 234, NIST SP 800-53 Rev 5, or ASIC’s RG 255 guidance are expected to demonstrate sustained, evidence-based compliance readiness — not just pass periodic audits. The 2022 update to ISO 27001 introduced 11 new controls around threat intelligence, cloud security, and ICT readiness for business continuity. Managing this complexity requires security leaders who understand not just the letter of these frameworks but their practical application.

3. Security outcomes must be measurable. Boards make decisions based on data. The days of presenting a colour-coded risk heatmap and expecting unchallenged sign-off are over. Today’s security leaders are expected to build KPI frameworks that quantify programme effectiveness: mean time to detect (MTTD), mean time to respond (MTTR), patch compliance rates, third-party risk scores, and phishing simulation metrics — all contextualised against business risk tolerance.
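Metrics like these are straightforward to compute once incident timestamps are captured consistently. A minimal sketch in Python, using hypothetical incident records (the field names and data are illustrative, not from any particular tooling):

```python
from datetime import datetime

# Hypothetical incident records with ISO-8601 timestamps.
incidents = [
    {"occurred": "2026-01-05T02:00", "detected": "2026-01-05T08:00", "resolved": "2026-01-05T20:00"},
    {"occurred": "2026-02-10T09:00", "detected": "2026-02-10T11:00", "resolved": "2026-02-11T01:00"},
]

def _hours(start, end):
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).total_seconds() / 3600

def mttd(records):
    """Mean time to detect, in hours: occurrence -> detection."""
    return sum(_hours(r["occurred"], r["detected"]) for r in records) / len(records)

def mttr(records):
    """Mean time to respond, in hours: detection -> resolution."""
    return sum(_hours(r["detected"], r["resolved"]) for r in records) / len(records)

print(mttd(incidents))  # 4.0
print(mttr(incidents))  # 13.0
```

The point of the exercise is the denominator: a board cares less about the raw hours than about the trend quarter over quarter, contextualised against the organisation's stated risk tolerance.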

Core Competencies That Define a Modern Security Leader

Based on both practice and the frameworks that govern security leadership globally, the following competencies define a mature CISO capability profile:

  • Enterprise risk governance: Conducting structured annual risk assessments aligned to NIST 800-30 or ISO 27005, producing executive-ready outputs that drive investment decisions.
  • Policy and framework development: Drafting enforceable security policies, standards, and governance models that scale across the organisation without creating operational friction.
  • Regulatory alignment: Staying current with ASIC, APRA, GDPR, and sector-specific regulations, and translating compliance requirements into operational controls.
  • Executive communication: Reporting at board and audit committee level with clarity — translating technical findings into business risk statements.
  • Security architecture judgment: Making design trade-offs between security, usability, and cost at an enterprise level — not just at the platform level.
  • Third-party and supply chain risk: Assessing and managing vendor risk through structured due diligence frameworks, security scorecards, and contractual controls.
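To make the first competency concrete: a structured risk assessment ultimately reduces to scoring likelihood against impact on an agreed scale and comparing the result to risk appetite. A minimal sketch, assuming hypothetical five-point scales and an illustrative appetite threshold (neither is prescribed by NIST 800-30 or ISO 27005):

```python
# Qualitative 1-5 scales in the spirit of NIST SP 800-30; the labels and
# appetite threshold are assumptions of this sketch.
LIKELIHOOD = {"very low": 1, "low": 2, "moderate": 3, "high": 4, "very high": 5}
IMPACT = {"very low": 1, "low": 2, "moderate": 3, "high": 4, "very high": 5}

def risk_rating(likelihood, impact, appetite=9):
    """Score = likelihood x impact; flag anything above the stated appetite."""
    score = LIKELIHOOD[likelihood] * IMPACT[impact]
    return {"score": score, "escalate": score > appetite}

# Hypothetical register entries.
register = [
    ("Unpatched internet-facing service", "high", "very high"),
    ("Lost corporate laptop (encrypted)", "moderate", "low"),
]
for name, likelihood, impact in register:
    print(name, risk_rating(likelihood, impact))
```

The executive-ready output is not the arithmetic but the escalation flag: it turns a register into a short list of decisions the board must actually make.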

What Separates a CISO Training Programme Worth Investing In

Not every programme described as CISO-level actually develops CISO-level capability. The distinction lies in whether participants produce real governance artefacts during training or simply recall theory in a multiple-choice exam. Six criteria worth evaluating:

  1. Is the instruction delivered by a practising security leader with board-level exposure, not just a technical trainer?
  2. Does the programme produce portfolio-ready outputs — risk assessment methodologies, security policies, KPI frameworks — rather than knowledge tests?
  3. Is the curriculum mapped to current standards: ISO/IEC 27001:2022, NIST CSF 2.0, and NIST SP 800-53 Rev 5?
  4. Does it count toward CPE maintenance for CISSP, CISM, or CISA holders?
  5. Is there structured post-training support — mentoring, peer community, session review access?
  6. Does it include a scenario-based assessment rather than a recall-only exam?

Career Pathways That Benefit Most from CISO-Level Development

The professionals who gain most from structured security leadership development tend to fall into identifiable career stages:

Information Security Managers and Heads of Security who have strong operational foundations but need governance and strategic communication skills to move into CISO roles. The gap is rarely technical — it is the ability to manage corporate security budgets, present credibly to audit committees, and design full-scale programmes rather than responding to incidents.

GRC Specialists and Risk Managers who understand frameworks deeply but have limited experience leading programme implementation. The ability to bridge framework knowledge with execution leadership is increasingly what separates mid-level GRC professionals from those who move into security leadership.

Security Architects who want to extend their influence beyond design. Translating complex architecture decisions into business risk terms — and defending them to executives who control budgets — is a distinct skill that most architecture training does not address.

A Practitioner’s Perspective

In my current role at Cboe, the decisions I make daily are not just technical. They are governance decisions: which risks to accept, which to escalate, which controls to prioritise given regulatory obligations under ASIC and the realities of a lean regional security function. The judgment required to make those decisions well does not come from passing a certification exam. It comes from structured exposure to real governance scenarios, combined with honest mentorship from someone who has made those calls under pressure.

The professionals who build that capability most efficiently are those who seek it deliberately — through structured programmes, peer communities, and deliberate practice — rather than waiting for experience to accumulate slowly over time.

References and Further Reading

  • NIST Cybersecurity Framework 2.0 (February 2024) — nist.gov/cyberframework
  • ISO/IEC 27001:2022 — Information Security Management Systems
  • NIST SP 800-53 Rev 5 — Security and Privacy Controls for Information Systems
  • APRA CPS 234 — Information Security (2019)
  • ASIC RG 255 — Cyber Resilience of Market Infrastructure Entities
  • (ISC)² CISO Leadership Certificate Programme — isc2.org
  • IBM / Ponemon Institute — Cost of a Data Breach Report 2024

The Golden Circle of Cybersecurity: Aligning Security Strategy with Business Value

In many organisations, cybersecurity is still perceived as a technical cost centre — a function that consumes budget, generates audit findings, and slows down projects. This perception is both inaccurate and damaging. When security is positioned correctly, it becomes a strategic enabler of business success: protecting revenue, sustaining customer trust, enabling digital transformation, and differentiating the organisation in competitive markets.

One of the most effective frameworks for communicating this alignment is Simon Sinek’s Golden Circle, adapted here to security strategy as Why, What, and How (Sinek’s original sequence is Why, How, What). It reframes security from a reactive control function into a proactive business value protector.

WHY: Protecting Business Value and Competitive Advantage

Every organisation’s security programme must begin with a clear articulation of purpose. Not “to comply with ISO 27001” — that is a mechanism, not a purpose. The genuine Why of cybersecurity is the protection of what the organisation values most: its revenue-generating processes, its customer data and the trust built around it, and its competitive differentiation.

Organisations that cannot articulate their security purpose at a business level consistently fail to secure adequate investment. Security becomes a cost centre precisely because it has not been positioned as a value protector. The 2024 Cost of a Data Breach Report (published by IBM, with research by the Ponemon Institute) found that the global average cost of a breach reached USD 4.88 million — a 10% increase from 2023. For organisations in financial services and healthcare, the costs are substantially higher when regulatory penalties and reputational damage are included.

The Why must also drive prioritisation. Not all assets carry equal business value. A mature security programme focuses its resources on protecting the assets whose compromise would most directly damage the organisation’s ability to operate, compete, and maintain stakeholder trust.

WHAT: Defining the Right Controls — Risk-Driven, Not Checklist-Driven

Once the purpose is clear, the next step is determining which controls are needed to protect it. This is where many organisations go wrong: they implement controls based on what auditors expect rather than what business risk requires. The result is a programme that passes assessments but fails to address the actual threat landscape.

A risk-driven control strategy organises controls into four categories:

  • Preventive Controls: Identity and Access Management (IAM), network segmentation, secure configurations, and endpoint hardening that reduce the probability of a breach.
  • Detective Controls: SIEM, threat intelligence platforms, user and entity behaviour analytics (UEBA), and EDR that identify threats before they escalate.
  • Corrective Controls: Incident response plans, backup and recovery mechanisms, and crisis management frameworks that restore operations after an event.
  • Governance Controls: Policies, standards, risk registers, and reporting mechanisms that ensure decisions are made with accurate information and clear accountability.

NIST CSF 2.0 organises these into six functions: Govern, Identify, Protect, Detect, Respond, and Recover. The addition of the Govern function in the 2024 update explicitly recognises that control effectiveness depends on clear accountability and strategic intent — not just technical implementation.
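One practical use of the two taxonomies together is a quick coverage check: which CSF 2.0 functions does the current control portfolio actually touch? The mapping below is an illustrative assumption of this sketch, not text from the CSF itself:

```python
# Illustrative mapping from the four control categories above to NIST CSF 2.0
# functions. The mapping choices are assumptions of this sketch.
CATEGORY_TO_CSF = {
    "preventive": ["Protect"],
    "detective": ["Detect"],
    "corrective": ["Respond", "Recover"],
    "governance": ["Govern", "Identify"],
}

ALL_FUNCTIONS = {"Govern", "Identify", "Protect", "Detect", "Respond", "Recover"}

def csf_coverage(deployed_categories):
    """Return (covered, gaps) CSF 2.0 functions for the deployed categories."""
    covered = {f for c in deployed_categories for f in CATEGORY_TO_CSF[c]}
    return sorted(covered), sorted(ALL_FUNCTIONS - covered)

covered, gaps = csf_coverage(["preventive", "detective"])
print(covered)  # ['Detect', 'Protect']
print(gaps)     # ['Govern', 'Identify', 'Recover', 'Respond']
```

A tooling-heavy programme often produces exactly the output above: strong Protect and Detect coverage, with Govern, Respond, and Recover left to informal practice.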

HOW: Enabling Through Technology, Process, and Culture

The How layer is where strategy is executed. It encompasses the technology stack, the processes that govern its use, and the culture that sustains it over time.

Technology enablement includes EDR, SIEM, cloud security platforms (CSPM, CWPP), DLP, and Zero Trust architecture components. Technology alone, however, does not produce security outcomes. It produces data — which must be acted upon by capable people operating within clear processes.

Process integration includes risk-based vulnerability management, continuous monitoring and threat hunting, incident response lifecycle management, and secure software development lifecycle (SSDLC) integration. Mature programmes automate as much of this as possible, reducing dependence on individual effort and enabling consistent outcomes at scale.
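Risk-based vulnerability management, for example, means ranking findings by business risk rather than raw severity. A sketch, where the findings, CVE identifiers, and the 1-3 criticality weight are all hypothetical:

```python
# Hypothetical findings: CVSS base score plus an assumed 1-3 asset criticality.
findings = [
    {"cve": "CVE-2026-0001", "cvss": 9.8, "asset": "payments API", "criticality": 3},
    {"cve": "CVE-2026-0002", "cvss": 9.8, "asset": "staff wiki",   "criticality": 1},
    {"cve": "CVE-2026-0003", "cvss": 6.5, "asset": "payments API", "criticality": 3},
]

def risk_rank(items):
    """Order findings by business risk (CVSS x asset criticality), not raw CVSS."""
    return sorted(items, key=lambda f: f["cvss"] * f["criticality"], reverse=True)

for f in risk_rank(findings):
    print(f["cve"], f["asset"], round(f["cvss"] * f["criticality"], 1))
```

Note the outcome: a medium-severity flaw on a critical asset outranks a critical flaw on a low-value one, which is precisely the judgment a checklist-driven programme never encodes.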

Culture and people represent the most under-invested layer in most security programmes. Security awareness training that changes behaviour — not just achieves compliance — requires understanding of cognitive biases, social engineering techniques, and the psychology of decision-making under uncertainty. Verizon’s annual Data Breach Investigations Report consistently identifies the human element as a factor in the majority of breaches, underscoring that technical controls alone are insufficient.

Bringing It Together: Security as a Strategic Differentiator

When the Golden Circle is applied consistently, the result is a security programme that earns and sustains executive confidence, secures appropriate investment, and produces measurable risk reduction. More importantly, it positions the security function as a strategic partner rather than a compliance overhead.

In the Australian context, this alignment is increasingly examined by APRA, ASIC, and the Australian Signals Directorate (ASD). The Essential Eight Maturity Model, ASD’s baseline control framework, rewards organisations that approach security strategically — with documented intent, measured outcomes, and continuous improvement cycles.

Organisations that invest in aligning their security strategy to business value are not just better protected. They are better positioned to grow.

References and Further Reading

  • Sinek, S. (2009). Start With Why. Portfolio/Penguin.
  • NIST Cybersecurity Framework 2.0 (February 2024)
  • IBM / Ponemon Institute — Cost of a Data Breach Report 2024
  • Verizon — Data Breach Investigations Report 2024
  • ASD — Essential Eight Maturity Model (2023)
  • APRA CPS 234 — Information Security

Threat Modelling with STRIDE: A Practitioner’s Guide to Systematic Security Design

Threat modelling is one of the most underutilised techniques in enterprise security. Despite being a core competency in frameworks ranging from NIST SP 800-154 to ISO/IEC 27001:2022 Annex A (A.8.25 — Secure Development Lifecycle), the discipline is frequently displaced by reactive vulnerability management and compliance-driven control assessments. STRIDE — Microsoft’s threat categorisation model — provides a structured, accessible framework for conducting threat modelling at the application and system design level, and for communicating findings to non-security stakeholders in terms they understand.

What Is STRIDE?

STRIDE is an acronym representing six categories of security threats, developed by Microsoft researchers Loren Kohnfelder and Praerit Garg in 1999 and widely adopted as a foundational threat modelling methodology. Each category maps to a specific security property being violated:

  • Spoofing (violates Authentication): impersonating a legitimate user or system component.
  • Tampering (violates Integrity): modifying data in transit or storage without authorisation.
  • Repudiation (violates Non-repudiation): denying having performed an action due to insufficient logging.
  • Information Disclosure (violates Confidentiality): exposing sensitive data to unauthorised parties.
  • Denial of Service (violates Availability): exhausting resources to prevent legitimate use.
  • Elevation of Privilege (violates Authorisation): gaining capabilities beyond those intended.

The STRIDE Threat Modelling Process

STRIDE is applied through a four-step process that can be conducted at design time (most effective) or retrospectively against existing systems:

Step 1: Define the Scope and Create a System Model

Produce a Data Flow Diagram (DFD) that captures all components of the system: processes, data stores, external entities, and the data flows between them. The DFD is the primary artefact against which threats are enumerated. Each element type has a default set of applicable STRIDE threats: processes are susceptible to all six; data stores are primarily susceptible to tampering, information disclosure, and denial of service; external entities are primarily susceptible to spoofing.

Step 2: Enumerate Threats

Systematically apply each STRIDE category to each element in the DFD. The question for each combination is: “How could an attacker exercise this threat category against this component?” Tools such as Microsoft Threat Modeling Tool and OWASP Threat Dragon automate parts of this enumeration and maintain DFD-to-threat mappings.
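The default applicability rules from Step 1 make this enumeration mechanical, which is exactly why tools can automate it. A toy sketch in Python, with a hypothetical three-element DFD:

```python
# Default STRIDE applicability per DFD element type, as described in Step 1.
STRIDE = ["Spoofing", "Tampering", "Repudiation",
          "Information Disclosure", "Denial of Service", "Elevation of Privilege"]

APPLICABLE = {
    "process": STRIDE,  # processes are susceptible to all six
    "data_store": ["Tampering", "Information Disclosure", "Denial of Service"],
    "external_entity": ["Spoofing"],
}

def enumerate_threats(dfd):
    """Return (element, threat) pairs for every element in a toy DFD."""
    return [(name, threat)
            for name, kind in dfd
            for threat in APPLICABLE[kind]]

# Hypothetical three-element DFD.
dfd = [("web app", "process"),
       ("orders DB", "data_store"),
       ("customer", "external_entity")]

threats = enumerate_threats(dfd)
print(len(threats))  # 6 + 3 + 1 = 10 candidate threats to assess
```

Each generated pair is then interrogated with the question above: how, concretely, could an attacker exercise that threat category against that component?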

Step 3: Assess and Prioritise Threats

Use DREAD (Damage, Reproducibility, Exploitability, Affected Users, Discoverability) or CVSS-based scoring to prioritise identified threats by risk. This produces an actionable ranked threat list that can inform architectural decisions, security requirements, and remediation planning.
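A common DREAD formulation simply averages the five factors on a 0-10 scale. Microsoft itself has largely moved away from DREAD in favour of other scoring schemes, so treat the sketch below as one illustrative option (the threats and ratings are hypothetical):

```python
def dread_score(damage, reproducibility, exploitability, affected_users, discoverability):
    """Average of the five DREAD factors, each rated 0-10 (higher = riskier)."""
    factors = [damage, reproducibility, exploitability, affected_users, discoverability]
    if not all(0 <= f <= 10 for f in factors):
        raise ValueError("each DREAD factor must be in 0-10")
    return sum(factors) / 5

# Hypothetical threats carried forward from the enumeration step.
scored = {
    "SQL injection on orders DB": dread_score(8, 9, 7, 8, 8),
    "Verbose error pages": dread_score(3, 10, 8, 2, 9),
}
ranked = sorted(scored.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)  # highest-risk threat first
```

The ranked list, not the individual scores, is the artefact that feeds architectural decisions and remediation planning.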

Step 4: Mitigate and Validate

For each prioritised threat, identify mitigating controls — whether preventive (authentication strengthening, encryption), detective (logging, monitoring), or architectural (trust boundary redesign, attack surface reduction). The threat model is then maintained as a living document and revisited when the system changes materially.

STRIDE in the Context of CISSP and Secure Architecture

STRIDE maps directly to CISSP CBK Domain 3 (Security Architecture and Engineering) and Domain 8 (Software Development Security). Understanding threat categories at a design level is a prerequisite for producing security architectures that address actual risk — as opposed to architecture that simply implements a control checklist.

In my work at Cboe, threat modelling forms part of the security review process for new application deployments. The DFD approach is particularly valuable because it creates a shared vocabulary between security architects and application development teams — reducing the friction that often arises when security reviews are perceived as compliance gates rather than design contributions.

STRIDE for Cloud and API-Heavy Architectures

STRIDE remains relevant in cloud-native environments, but its application requires adaptation. Key considerations for modern architectures include:

  • Spoofing in OAuth/OIDC flows: Token theft, confused deputy attacks, and client impersonation are spoofing threats specific to modern authentication patterns.
  • Tampering in CI/CD pipelines: Supply chain attacks that modify build artefacts or container images are tampering threats at the infrastructure level.
  • Information Disclosure in serverless: Environment variable leakage, excessive IAM permissions, and shared execution environment risks are information disclosure threats native to serverless architectures.
  • Elevation of Privilege in Kubernetes: Container escape, pod security misconfigurations, and RBAC weaknesses are privilege escalation threats that STRIDE helps enumerate systematically.

References and Further Reading

  • Shostack, A. — Threat Modeling: Designing for Security (Wiley, 2014)
  • Microsoft — STRIDE Threat Model Documentation
  • OWASP Threat Dragon — owasp.org
  • NIST SP 800-154 — Guide to Data-Centric System Threat Modeling
  • NIST SP 800-218 — Secure Software Development Framework (SSDF)
  • ISO/IEC 27001:2022 — Annex A.8.25: Secure Development Life Cycle
  • (ISC)² CISSP CBK — Domain 3: Security Architecture and Engineering

Leadership Transition Is the Real Test of Security Programme Maturity

Most security programmes do not fail because a new leader is ineffective. They fail because the previous leader was carrying far more of the programme than anyone had recognised. Leadership transitions are the most reliable diagnostic of whether a security programme is genuinely mature — or whether it was a high-performing individual operating within a structurally immature system.

This distinction matters enormously for practitioners building programmes, executives evaluating them, and incoming leaders inheriting them. Understanding the difference between a mature programme and a well-led one is one of the more important — and underexamined — questions in security governance.

What Leadership Transitions Actually Expose

When a security leader departs, the structural elements of a programme typically survive intact. Dashboards remain populated. Policies continue to exist. Roadmaps are still documented. But something begins to shift almost immediately:

  • Budget conversations become harder — investment that was approved without challenge now requires justification from scratch.
  • Governance decisions that were settled get reopened.
  • Cross-functional alignment weakens as informal relationships are no longer maintained.
  • Escalation paths that previously worked smoothly begin to stall.
  • Momentum slows, and priorities drift.

None of this reflects a change in strategy or tooling. It reflects the departure of the leader who was sustaining the programme through personal credibility, executive relationships, and undocumented institutional judgment — none of which transferred with the role.

The Hidden Layer: Leadership Capital

Every security programme runs on a visible layer — governance frameworks, roadmaps, metrics, tooling — and an invisible layer: the accumulated leadership capital of the person running it. That invisible layer includes:

  • Executive trust built through years of credible risk communication.
  • Political relationships that unblock funding and remove friction.
  • Institutional context — which decisions were compromises, which initiatives failed and why, which stakeholders require careful management.
  • Judgment about which battles are technical and which are organisational.

None of this appears in a governance charter. None of it is preserved in documentation. And when the leader leaves, it goes with them. The incoming leader inherits the artefacts — the outputs of prior decisions — but not the reasoning, the relationships, or the political context that produced them.

Documentation Preserves Structure — Not Judgment

Organisations frequently overestimate what documentation preserves. A well-documented risk register captures assessed risks and assigned treatments. It does not explain why certain risks were accepted while others were escalated. A roadmap documents sequencing. It does not preserve the reasoning behind why certain initiatives were politically sequenced that way.

This is the documentation paradox in security governance: the artefacts that survive a transition are precisely those that required the least leadership judgment to produce. The elements that required the most — stakeholder navigation, risk prioritisation under uncertainty, credibility maintenance with executives — leave no written trace.

ISACA’s COBIT 2019 governance framework recognises this challenge explicitly. Its governance system principles keep governance distinct from management, and acknowledge that governance effectiveness depends not just on structures but on the accountability relationships and information flows that sustain them. When those relationships are personalised rather than institutionalised, leadership transitions break them.

Strong Leadership Is Not the Same as Programme Maturity

A strong security leader can produce excellent outcomes: high visibility, strong executive trust, rapid decision-making, and measurable risk reduction. But if those outcomes depend disproportionately on one individual’s presence, the programme is still immature — regardless of how impressive its outputs appear.

True maturity means the programme remains effective after leadership changes. Governance mechanisms work without executive intervention. Prioritisation logic survives scrutiny by a successor. Institutional relationships are codified — embedded in vendor contracts, governance charters, and stakeholder engagement models — rather than residing in personal networks.

The practical implication: a programme that looks mature during a period of stable, trusted leadership may be fragility dressed in governance clothing. The only reliable test is whether it performs well after that leader departs.

What Incoming Leaders Should Do First

For professionals stepping into a new security leadership role, this reality demands a specific diagnostic approach. Before evaluating tools, controls, or roadmaps, the most important questions are:

  1. Which decisions in this programme depend on informal relationships rather than formal governance?
  2. Where has personal credibility substituted for documented process?
  3. Which governance mechanisms work only because of the previous leader’s personality?
  4. Which stakeholders require careful management that no governance document acknowledges?
  5. Would the programme’s roadmap survive challenge by an informed, independent reviewer?

Answering these questions before making changes is the difference between inheriting a mature programme and discovering — after proposing what appears to be a reasonable change — that the programme’s functioning depended on something invisible and now gone.

Building Programmes That Survive You

The most important long-term contribution a security leader can make is building a programme that continues performing after they leave. That means consciously and consistently doing things that most leaders find uncomfortable: documenting reasoning, not just outcomes; institutionalising relationships through governance structures; and creating conditions under which governance functions without informal intervention.

A security programme should be evaluated not on how well it performs under a respected, trusted leader — but on whether it would survive their departure. By that test, many programmes that appear mature are not.

References and Further Reading

  • ISACA — COBIT 2019 Framework: Governance and Management Objectives
  • Rathbun, D. — The Critical Path Newsletter, LinkedIn (April 2026)
  • Harvard Business Review — What New Leaders Need to Know About Cybersecurity
  • Gartner — CISO Succession Planning and Security Program Resilience (2024)
  • (ISC)² — CISSP CBK Domain 1: Security and Risk Management

LeakBase Taken Down: What the Dismantling of a Major Credential Marketplace Means for Security Teams

Russian law enforcement authorities arrested the alleged administrator of LeakBase in March 2026, dismantling one of the world’s largest stolen credential marketplaces. The platform had hosted hundreds of millions of compromised account credentials, financial data, and corporate documents — serving as a primary supply chain for account takeover (ATO) attacks, fraud, and business email compromise (BEC) campaigns globally.

For security professionals, the LeakBase takedown offers both an enforcement success story and a timely reminder of the scale of the credential theft economy that underpins modern cybercrime.

What LeakBase Was — and the Scale of the Problem

LeakBase had operated since 2021 as a clearnet and dark web marketplace where threat actors could buy, sell, and exploit stolen personal data. According to the US Department of Justice, the platform’s inventory included:

  • Hundreds of millions of account credentials — usernames and passwords from breached services.
  • Financial data — credit card numbers, banking account and routing information.
  • Corporate documents — obtained through hacking campaigns and insider threats.

The platform had over 142,000 registered members and more than 215,000 member messages as of December 2025, operating as both a marketplace and a community infrastructure for cybercriminals. Its seizure banner confirmed that all user accounts, posts, private messages, and IP logs were preserved for evidentiary purposes — a significant intelligence windfall for law enforcement.

The Resilience Problem: LeakBase Came Back

Within days of the seizure, LeakBase re-emerged on a new domain (leakbase[.]bz) with DDoS protection provided by a Russian bulletproof hosting provider. This pattern — rapid reconstitution after law enforcement action — is characteristic of the cybercriminal ecosystem’s resilience. Infrastructure can be seized; the operational know-how, customer relationships, and data inventory are far harder to eliminate permanently.

This resilience dynamic is important context for how security teams should evaluate law enforcement actions. Takedowns disrupt the economics of criminal platforms — they increase costs, reduce trust, and create attribution risk for participants. But they rarely permanently eliminate capability. The value lies in disruption, intelligence gathering, and deterrence — not permanent eradication.

Implications for Enterprise Security Teams

The credential inventory that circulated through LeakBase did not disappear with the platform’s takedown. It has been in circulation for years, and copies exist across numerous other dark web marketplaces and Telegram channels. Security teams should treat the credential threat as a persistent baseline condition, not an incident to respond to reactively.

Practical implications include:

  1. Enable dark web credential monitoring. Services such as Have I Been Pwned, Recorded Future, or SpyCloud provide visibility into whether your organisation’s credentials have appeared in known breach databases. This should be an ongoing capability, not a periodic exercise.
  2. Enforce MFA universally. Stolen credentials are only exploitable if password authentication is the sole access factor. Multi-factor authentication — particularly phishing-resistant methods such as FIDO2/WebAuthn passkeys — eliminates the value of stolen passwords for the majority of account takeover scenarios.
  3. Monitor for credential stuffing activity. Unusual authentication patterns — high volume of failed logins, logins from unexpected geographic locations, or velocity anomalies — are indicators of credential stuffing campaigns using purchased data.
  4. Implement password breach detection at authentication. Integrate Have I Been Pwned’s API or equivalent into your identity platform to prevent users from setting passwords that appear in known breach databases.
  5. Review privileged account exposure. Corporate credentials — particularly for VPN, email, and remote access systems — are highest-value targets in credential marketplaces. Enforce credential rotation policies for accounts that may have been exposed.
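The breach check in point 4 can be done without ever sending the password, or even its full hash, to a third party: Have I Been Pwned’s Pwned Passwords range API accepts only the first five characters of the SHA-1 hash and returns all matching suffixes with their breach counts. A sketch, where the `fetch` parameter is a testing hook of this sketch rather than part of the real API:

```python
import hashlib
from urllib.request import urlopen

def pwned_count(password, fetch=None):
    """Breach-appearance count via Pwned Passwords' k-anonymity range API.
    Only the first five SHA-1 hex characters ever leave this host."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    if fetch is None:
        # Default: query the real API with the 5-character prefix only.
        fetch = lambda p: urlopen(
            f"https://api.pwnedpasswords.com/range/{p}").read().decode()
    # Response lines look like "SUFFIX:COUNT".
    for line in fetch(prefix).splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

# At registration or password change: reject any candidate with a count > 0.
```

Wired into the identity platform’s password-change flow, any non-zero count blocks the candidate password, implementing the NIST SP 800-63B recommendation to screen against known-compromised credentials.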

The Broader Credential Economy

LeakBase was one platform in a vast, interconnected credential economy. The Verizon 2024 DBIR found that stolen credentials remain the single most common initial access vector in confirmed breaches, accounting for a plurality of intrusion cases across industries. The supply side of this economy — credential theft through phishing, infostealer malware, and data breaches — continues to operate at industrial scale.

Addressing this requires a combination of identity security investment, continuous monitoring, and the recognition that “your credentials are probably already out there” is the correct security assumption — not an exception.

References and Further Reading

  • US Department of Justice — LeakBase Takedown Statement (March 2026)
  • The Hacker News — LeakBase Resurgence Coverage
  • KELA Threat Intelligence — LeakBase Attribution Report
  • Verizon — Data Breach Investigations Report 2024
  • Have I Been Pwned — haveibeenpwned.com
  • CISA — Implementing Phishing-Resistant MFA (2022)
  • NIST SP 800-63B — Digital Identity Guidelines: Authentication and Lifecycle Management

AI Security in 2026: Key Themes from the AI Secure Intelligence Summit and What They Mean for Practitioners

The AI Secure Intelligence Summit 2026, hosted by InfosecTrain, brought together practitioners, researchers, and governance specialists to examine the rapidly evolving intersection of artificial intelligence and cybersecurity. For professionals holding CISSP, CCSP, or AAISM credentials, the summit’s themes are directly relevant to practice — AI is no longer a future consideration but an active component of both the threat landscape and the defensive toolkit.

This post distils the key themes from the summit and connects them to the frameworks, standards, and practical considerations that security professionals in Australia and globally must navigate.

Theme 1: AI as Both Tool and Target

The most important conceptual shift in AI security is recognising that AI operates in two distinct roles simultaneously: as a security tool (threat detection, anomaly analysis, automated response) and as a target (adversarial attacks, model theft, data poisoning). Most organisations have begun investing in AI-powered security tooling without implementing parallel governance for AI system security.

OWASP’s LLM Top 10 (2024 edition) catalogues the primary attack vectors against large language model applications: prompt injection, insecure output handling, training data poisoning, model denial of service, and supply chain vulnerabilities. Security teams that have deployed AI-driven tools — whether UEBA platforms, threat intelligence systems, or AI-assisted SOC capabilities — need to evaluate their exposure to each of these vectors.
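
To make two of those vectors concrete, the sketch below pairs a naive prompt-injection screen with output escaping, so the model's response is treated as untrusted before rendering. The function names and patterns are illustrative only — keyword matching is trivially bypassed, and real deployments need layered defences well beyond this:

```python
import html
import re

# Illustrative only: crude patterns for obvious injection phrasing.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all\s+)?(previous|prior)\s+instructions", re.I),
    re.compile(r"reveal .*system prompt", re.I),
]

def screen_prompt(user_input: str) -> bool:
    """Return False if the input matches known injection phrasing."""
    return not any(p.search(user_input) for p in INJECTION_PATTERNS)

def render_output(model_output: str) -> str:
    """Treat model output as untrusted: escape before embedding in HTML."""
    return html.escape(model_output)

blocked = not screen_prompt("Please ignore all previous instructions and ...")
escaped = render_output("<script>alert(1)</script>")
print(blocked, escaped)
```

The second function addresses insecure output handling: even a benign-looking prompt can yield output containing active content, so the rendering layer, not the model, must enforce the trust boundary.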

Theme 2: Adversarial Machine Learning Is Moving Into Production Environments

Adversarial ML attacks — techniques that manipulate AI model inputs or training data to produce incorrect outputs — are transitioning from academic research to production threat vectors. Key attack categories include:

  • Evasion attacks: Crafting inputs that cause a security model (e.g., malware classifier, fraud detection) to misclassify malicious activity as benign.
  • Data poisoning: Corrupting training data to introduce systematic model biases that favour the attacker.
  • Membership inference: Extracting information about training data by querying model confidence scores — a significant privacy and data protection risk for models trained on sensitive datasets.
  • Model extraction: Reconstructing a proprietary model through API queries, enabling attackers to develop evasion techniques offline.
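
To make the evasion category concrete, here is a minimal FGSM-style sketch against a toy linear classifier standing in for a malware detector (pure NumPy; all names are ours, and a real detector would of course be far harder to evade than this):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "malware classifier": flag as malicious when w·x + b > 0.
w = rng.normal(size=8)
b = 0.0

def is_flagged_malicious(x: np.ndarray) -> bool:
    return float(w @ x + b) > 0.0

# A sample the model correctly flags (a unit vector aligned with w).
x = w / np.linalg.norm(w)

# FGSM-style evasion: for a linear model the gradient of the score with
# respect to the input is simply w, so step against sign(w) to drive the
# score negative while keeping each feature change bounded by eps.
eps = 1.0
x_adv = x - eps * np.sign(w)

print(is_flagged_malicious(x), is_flagged_malicious(x_adv))
```

The same gradient-guided logic applies to deep models; the attacker just estimates the gradient (or a surrogate of it) rather than reading the weights directly.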

NIST's AI RMF 1.0 and its companion Adversarial Machine Learning taxonomy (NIST AI 100-2, published 2024) provide the foundational framework for assessing and mitigating these risks. ISACA's AAISM certification body of knowledge covers adversarial ML risk assessment as a core competency — a sign that these skills are now expected at the governance and audit level.

Theme 3: AI Governance Is a Security Control

Summit speakers consistently emphasised that AI governance — the policies, accountability structures, and oversight mechanisms that govern AI system behaviour — is itself a security control. Organisations without formal AI governance are not merely non-compliant; they are operationally exposed.

An AI system without documented accountability creates a situation where security incidents cannot be properly attributed, investigated, or remediated. ISO/IEC 42001:2023 addresses this directly: Clause 5 (Leadership) and Clause 8 (Operation) together require that AI systems operate within defined accountability boundaries and that incident response plans exist for AI system failures.

For Australian organisations, the government’s Safe and Responsible AI consultation process is moving toward a risk-based regulatory framework that will require documented AI governance for high-risk AI applications. Security professionals who understand both the technical and governance dimensions of AI risk will be significantly more valuable in this environment.

Theme 4: The Human Layer Remains the Primary Attack Surface

Despite the growing focus on AI-specific attack vectors, summit discussions repeatedly returned to the human layer as the primary attack surface. AI-generated phishing, deepfake voice and video social engineering, and AI-assisted reconnaissance are dramatically increasing the quality and scale of human-targeted attacks. The 2024 DBIR found that the median time for a user to fall for a phishing email was under 60 seconds — a window that AI-generated, personalised content is likely to compress further.

Security awareness programmes built around generic phishing simulations are increasingly insufficient against this threat. Effective awareness training in 2026 must incorporate AI-generated content examples, deepfake detection guidance, and decision-making frameworks for verifying identity in digital channels.

Practical Implications for CISSP and CCSP Practitioners

For practitioners holding CISSP or CCSP credentials, AI security themes connect directly to existing CBK domains:

  • CISSP Domain 1 (Security and Risk Management): AI risk assessment methodology, AI-specific threat modelling.
  • CISSP Domain 3 (Security Architecture and Engineering): AI system security architecture, adversarial robustness controls.
  • CCSP Domain 4 (Cloud Application Security): Securing AI APIs, prompt injection defences, LLM application security.
  • CCSP Domain 6 (Legal, Risk, and Compliance): AI regulatory landscape, ISO 42001 alignment, data protection implications of AI training data.

References and Further Reading

  • OWASP LLM Top 10 (2024) — owasp.org
  • NIST AI RMF 1.0 (2023) — nist.gov
  • NIST — Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (2024)
  • ISO/IEC 42001:2023 — AI Management Systems
  • ISACA — AAISM Certification Programme
  • Verizon DBIR 2024 — Human Factor Analysis

Applied Reverse Engineering for Security Professionals: Why This Skill Is More Relevant Than Ever

Reverse engineering — the process of analysing compiled software or binaries to understand their structure, behaviour, and intent — is one of the most powerful skills in an offensive and defensive security toolkit. In an era of rapidly evolving malware, supply chain attacks, and AI-generated code, the ability to analyse unknown binaries at a low level has become an increasingly differentiating capability for incident responders, malware analysts, threat intelligence practitioners, and penetration testers.

This post explores the core concepts, toolchain, and use cases of reverse engineering, with a focus on how it applies to the daily work of security professionals in enterprise environments.

What Is Reverse Engineering in a Security Context?

Reverse engineering in security involves analysing binaries, firmware, protocols, or code without access to the original source, with the goal of understanding what a piece of software does — particularly when it may be malicious. It is applied in several key security scenarios:

  • Malware analysis: Determining the capabilities, persistence mechanisms, C2 communication patterns, and evasion techniques of suspected malware samples.
  • Vulnerability research: Identifying exploitable conditions in software by analysing compiled binaries when source code is unavailable.
  • Threat intelligence: Attributing malware campaigns to known threat actor groups by identifying code reuse, tooling signatures, and behavioural patterns.
  • Incident response: Understanding exactly what a threat actor’s tooling did on a compromised system when log data is insufficient.
  • Supply chain verification: Validating that compiled software matches expected behaviour and has not been tampered with.

The Core Toolchain

Proficiency in reverse engineering requires familiarity with a specific set of tools that operate at different levels of abstraction:

Static analysis tools examine binaries without executing them. IDA Pro and Ghidra (NSA’s open-source disassembler and decompiler) are the primary platforms for static analysis. Ghidra, in particular, has become the reference tool for malware analysis training because it is freely available and produces high-quality decompiled C-like pseudocode from x86, ARM, and other architectures.

Dynamic analysis tools examine binaries during execution, in a controlled environment. x64dbg (Windows debugger), OllyDbg, and GDB (Linux/macOS) allow analysts to step through code, set breakpoints, and observe the runtime behaviour of binaries — including memory manipulation, registry writes, and network connections.

Sandboxes such as Cuckoo Sandbox, Any.Run, and VirusTotal provide automated dynamic analysis environments that capture behavioural artefacts — file writes, network indicators, process trees — without requiring manual debugging. They are the first-line analysis tools for most SOC workflows.

String and entropy analysis tools identify suspicious patterns in binaries: encoded payloads, hardcoded IP addresses and domains, API call sequences, and encryption routines. Tools like FLOSS (FLARE Obfuscated String Solver) automate the extraction of obfuscated strings from malware samples.
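
The entropy check described above is straightforward to implement; a minimal sketch in Python (the function name is ours):

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte sequence, in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

# Plain or repetitive data sits well below 8 bits/byte; compressed or
# encrypted regions approach the 8.0 ceiling — a classic packing tell.
low = shannon_entropy(b"AAAA" * 256)
high = shannon_entropy(os.urandom(4096))
print(round(low, 2), round(high, 2))
```

Triage pipelines commonly compute this per PE section: a `.text` section reading near 8 bits per byte suggests a packed or encrypted payload that warrants deeper analysis.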

Reverse Engineering and the CISSP CBK

For CISSP practitioners, reverse engineering intersects with several CBK domains:

  • Domain 6 (Security Assessment and Testing): Malware analysis and binary review are assessment techniques within the software security testing domain.
  • Domain 7 (Security Operations): Incident response increasingly requires reverse engineering capability to fully characterise threat actor tooling and determine the scope of compromise.
  • Domain 8 (Software Development Security): Understanding how compiled code differs from source code — and how obfuscation, packing, and anti-analysis techniques work — is relevant to secure software development governance.

A Note on Legal and Ethical Boundaries

Reverse engineering in Australia is governed by the Copyright Act 1968, which contains specific provisions regarding decompilation for interoperability purposes under Section 47D. Defensive reverse engineering of malware samples captured in the course of incident response is generally outside the scope of these concerns, but any reverse engineering of commercial software requires careful legal review. Certifications such as GREM (GIAC Reverse Engineering Malware) provide structured, legally scoped training frameworks for practitioners.

Getting Started: Recommended Learning Path

  1. Develop foundational x86/x64 assembly language knowledge — “Computer Organisation and Architecture” by Stallings is a solid starting reference.
  2. Install and learn Ghidra — the NSA's official training materials, bundled with the distribution, are freely available and practical.
  3. Work through malware samples in controlled VM environments — Malware Traffic Analysis (malware-traffic-analysis.net) and VirusTotal provide real samples that can be analysed legally in isolated labs.
  4. Study the FLARE-On challenge archive — FireEye/Mandiant’s annual reverse engineering competition with detailed writeups is one of the best learning resources available.
  5. Consider pursuing GREM (GIAC Reverse Engineering Malware) certification for a structured, recognised credential in this domain.

References and Further Reading

  • NSA — Ghidra Reverse Engineering Framework — ghidra-sre.org
  • SANS GREM — GIAC Reverse Engineering Malware Certification
  • FireEye/Mandiant — FLARE-On Challenge Archive
  • Sikorski, M. & Honig, A. — Practical Malware Analysis (No Starch Press)
  • OWASP Mobile Security Testing Guide — Static and Dynamic Analysis Sections
  • Malware Traffic Analysis — malware-traffic-analysis.net