LeakBase Taken Down: What the Dismantling of a Major Credential Marketplace Means for Security Teams

Russian law enforcement authorities arrested the alleged administrator of LeakBase in March 2026, dismantling one of the world’s largest stolen credential marketplaces. The platform had hosted hundreds of millions of compromised account credentials, financial data, and corporate documents — serving as a primary supply chain for account takeover (ATO) attacks, fraud, and business email compromise (BEC) campaigns globally.

For security professionals, the LeakBase takedown offers both an enforcement success story and a timely reminder of the scale of the credential theft economy that underpins modern cybercrime.

What LeakBase Was — and the Scale of the Problem

LeakBase had operated since 2021 as a clearnet and dark web marketplace where threat actors could buy, sell, and trade stolen personal data. According to the US Department of Justice, the platform’s inventory included:

  • Hundreds of millions of account credentials — usernames and passwords from breached services.
  • Financial data — credit card numbers, banking account and routing information.
  • Corporate documents — obtained through hacking campaigns and insider threats.

The platform had over 142,000 registered members and more than 215,000 member messages as of December 2025, operating as both a marketplace and a community hub for cybercriminals. Its seizure banner confirmed that all user accounts, posts, private messages, and IP logs were preserved for evidentiary purposes — a significant intelligence windfall for law enforcement.

The Resilience Problem: LeakBase Came Back

Within days of the seizure, LeakBase re-emerged on a new domain (leakbase[.]bz) with DDoS protection provided by a Russian bulletproof hosting provider. This pattern — rapid reconstitution after law enforcement action — is characteristic of the cybercriminal ecosystem’s resilience. Infrastructure can be seized; the operational know-how, customer relationships, and data inventory are far harder to eliminate permanently.

This resilience dynamic is important context for how security teams should evaluate law enforcement actions. Takedowns disrupt the economics of criminal platforms — they increase costs, reduce trust, and create attribution risk for participants. But they rarely permanently eliminate capability. The value lies in disruption, intelligence gathering, and deterrence — not permanent eradication.

Implications for Enterprise Security Teams

The credential inventory that circulated through LeakBase did not disappear with the platform’s takedown. It has been in circulation for years, and copies exist across numerous other dark web marketplaces and Telegram channels. Security teams should treat the credential threat as a persistent baseline condition, not an incident to respond to reactively.

Practical implications include:

  1. Enable dark web credential monitoring. Services such as Have I Been Pwned, Recorded Future, or SpyCloud provide visibility into whether your organisation’s credentials have appeared in known breach databases. This should be an ongoing capability, not a periodic exercise.
  2. Enforce MFA universally. Stolen credentials are only exploitable if password authentication is the sole access factor. Multi-factor authentication — particularly phishing-resistant methods such as FIDO2/WebAuthn passkeys — eliminates the value of stolen passwords for the majority of account takeover scenarios.
  3. Monitor for credential stuffing activity. Unusual authentication patterns — high volume of failed logins, logins from unexpected geographic locations, or velocity anomalies — are indicators of credential stuffing campaigns using purchased data.
  4. Implement password breach detection at authentication. Integrate Have I Been Pwned’s API or equivalent into your identity platform to prevent users from setting passwords that appear in known breach databases.
  5. Review privileged account exposure. Corporate credentials — particularly for VPN, email, and remote access systems — are highest-value targets in credential marketplaces. Enforce credential rotation policies for accounts that may have been exposed.

The Broader Credential Economy

LeakBase was one platform in a vast, interconnected credential economy. The Verizon 2024 DBIR found that stolen credentials remain the single most common initial access vector in confirmed breaches, accounting for a plurality of intrusion cases across industries. The supply side of this economy — credential theft through phishing, infostealer malware, and data breaches — continues to operate at industrial scale.

Addressing this requires a combination of identity security investment, continuous monitoring, and the recognition that “your credentials are probably already out there” is the correct security assumption — not an exception.

References and Further Reading

  • US Department of Justice — LeakBase Takedown Statement (March 2026)
  • The Hacker News — LeakBase Resurgence Coverage
  • KELA Threat Intelligence — LeakBase Attribution Report
  • Verizon — Data Breach Investigations Report 2024
  • Have I Been Pwned — haveibeenpwned.com
  • CISA — Implementing Phishing-Resistant MFA (2022)
  • NIST SP 800-63B — Digital Identity Guidelines: Authentication and Lifecycle Management

Pay2Key Linux Ransomware: Why Your ESXi Hosts and Cloud Workloads Are Now Prime Targets

Ransomware has definitively moved beyond Windows endpoints. Pay2Key — a threat actor group with links to Iranian state-sponsored operations — has re-emerged as a Ransomware-as-a-Service (RaaS) operation with explicit Linux targeting capabilities, specifically engineered to attack enterprise servers, VMware ESXi hypervisors, and cloud-native workloads. For organisations that have built their ransomware resilience strategy around Windows endpoint protection, this evolution is a forcing function for strategic reassessment.

The Evolution of Pay2Key: From Windows to Linux Infrastructure

Pay2Key was originally known for fast, human-operated intrusions against Israeli and Brazilian organisations — predominantly Windows-based attacks characterised by speed and precision. The group’s re-emergence as a RaaS operation with Linux payload support represents a significant operational evolution. By offering Linux-capable ransomware through an affiliate model, Pay2Key widens the pool of threat actors capable of targeting high-value Linux infrastructure — removing the need for affiliates to have Linux operational expertise themselves.

This shift follows a broader trend documented by threat intelligence providers including CrowdStrike, Mandiant, and ESET: ransomware operators increasingly prioritise Linux environments because that is where the highest-value compute and data assets reside. ESXi hypervisors, NAS devices, SAP environments, financial databases, and Kubernetes clusters all run on Linux. A single compromised ESXi host can cascade into an outage across dozens or hundreds of virtual machines.

Technical Analysis: How the Pay2Key Linux Variant Operates

The Pay2Key Linux binary is configuration-driven and engineered for scale. Key technical characteristics include:

  • Root privilege requirement: The ransomware requires root-level access to run, so attackers must complete privilege escalation before deploying the encryptor.
  • JSON configuration file: Target paths, file types, and mount classifications are controlled through a JSON configuration, giving operators fine-grained control over what gets encrypted in each deployment.
  • Pre-encryption preparation: The binary stops services, kills processes, and disables SELinux and AppArmor before beginning encryption — systematically removing defensive controls.
  • Cron persistence: A cron job is installed to resume encryption if the system restarts mid-incident — ensuring encryption completes even if defenders detect and reboot the host.
  • Mount enumeration: The ransomware enumerates /proc/mounts and classifies mounts into categories, targeting standard and removable filesystems while skipping read-only and pseudo-filesystems to avoid system crashes.
  • Selective file targeting: ELF and MZ binaries are skipped to avoid crashing critical system components, maximising the proportion of business-critical data encrypted.
  • ChaCha20 encryption: Per-file keys are generated and stored in obfuscated metadata blocks, complicating recovery without the attacker’s key material.

Why Linux Is Now a First-Class Ransomware Target

Three structural factors explain the ransomware sector’s pivot toward Linux:

  1. Asset concentration: The most valuable enterprise data — databases, ERP systems, financial applications, development environments — increasingly runs on Linux servers and hypervisors. Encrypting one ESXi host takes down an entire virtual estate.
  2. Detection gap: Traditional EDR agents optimised for Windows have limited or no visibility into Linux processes, particularly in containerised and cloud-native environments. Infostealer malware and pre-ransomware activity on Linux hosts often go undetected.
  3. Misconfiguration prevalence: Cloud and DevOps environments frequently suffer from over-privileged service accounts, exposed management APIs, and inadequate network segmentation — creating accessible attack surfaces that Pay2Key affiliates exploit for initial access.

Hardening Recommendations for Linux, ESXi, and Cloud Workloads

Organisations cannot treat Linux as a lower-risk platform. Security hardening must be systematic and audited. Key measures include:

  1. Patch exposed services immediately. VPN gateways, remote management portals, and admin panels are primary initial access vectors for Pay2Key operators. These must be patched within 48 hours of critical vulnerability disclosure.
  2. Enforce least privilege on service accounts. Audit sudo configurations, SSH access controls, and service account permissions. Root-equivalent access should be minimised and logged.
  3. Monitor for pre-encryption indicators. Abnormal process-kill activity, service-stop sequences, and SELinux/AppArmor disable events are early indicators of ransomware staging. These should trigger automated alerts.
  4. Baseline and monitor /proc/mounts activity. Changes to mount classification behaviour on production hosts should be investigated immediately.
  5. Segment ESXi management networks. ESXi management interfaces should be accessible only from dedicated management VLANs and jump hosts, not from general corporate or internet-facing networks.
  6. Validate backup integrity and restore RTOs. The only reliable ransomware recovery capability is tested, isolated backups with validated restore times. Backups accessible from the same network that gets encrypted do not provide recovery capability.
  7. Deploy Linux-capable EDR. Purpose-built Linux EDR agents that can intercept ransomware execution paths — not just detect file changes post-encryption — are essential for cloud and server workloads.
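The pre-encryption monitoring in point 3 can be reduced to a simple sliding-window burst detector over process-execution telemetry (for example, auditd execve records). The command prefixes, window, and threshold below are illustrative tuning assumptions, not values taken from Pay2Key reporting.

```python
from collections import deque

# Commands associated with defence tampering and service shutdown. This
# list is an illustrative assumption; tune it to your own environment.
SUSPECT_PREFIXES = ("systemctl stop", "service ", "kill", "setenforce 0",
                    "aa-teardown")

def staging_alert(events, window=30.0, threshold=5):
    """Return True if `threshold` suspect commands land inside `window`
    seconds. `events` is an iterable of (epoch_seconds, command) pairs."""
    recent = deque()
    for ts, cmd in sorted(events):
        if cmd.startswith(SUSPECT_PREFIXES):
            recent.append(ts)
            # Drop hits that have aged out of the sliding window.
            while recent and ts - recent[0] > window:
                recent.popleft()
            if len(recent) >= threshold:
                return True
    return False
```

A real deployment would stream events rather than sort a batch, and would alert with host and user context instead of returning a boolean, but the windowing logic is the same.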

The APRA and ASD Context for Australian Organisations

APRA’s Prudential Practice Guide CPG 234 advises entities to assess ransomware risk explicitly as part of their information security risk management programme. ASD’s Essential Eight includes application control (blocking unauthorised binaries on servers) and regular backups — two controls that directly reduce Pay2Key’s effectiveness. For organisations at Essential Eight Maturity Level 2 and above, patching of server operating systems is expected within 48 hours of critical vulnerability release — a timeline that eliminates many of the initial access vectors Pay2Key exploits.

References and Further Reading

  • GBHackers — Linux Ransomware Pay2Key Targets Servers, Virtualization Hosts, and Cloud Workloads
  • CrowdStrike — 2024 Global Threat Report
  • ESET — Linux Ransomware Threat Landscape (2024)
  • ASD Essential Eight Maturity Model — Application Control and Regular Backups (2023)
  • APRA CPG 234 — Prudential Practice Guide: Information Security
  • CISA Alert — Iranian Cyber Actors Targeting Critical Infrastructure (2024)
  • VMware — ESXi Hardening Guide

AI Security in 2026: Key Themes from the AI Secure Intelligence Summit and What They Mean for Practitioners

The AI Secure Intelligence Summit 2026, hosted by InfosecTrain, brought together practitioners, researchers, and governance specialists to examine the rapidly evolving intersection of artificial intelligence and cybersecurity. For professionals holding CISSP, CCSP, or AAISM credentials, the summit’s themes are directly relevant to practice — AI is no longer a future consideration but an active component of both the threat landscape and the defensive toolkit.

This post distils the key themes from the summit and connects them to the frameworks, standards, and practical considerations that security professionals in Australia and globally must navigate.

Theme 1: AI as Both Tool and Target

The most important conceptual shift in AI security is recognising that AI operates in two distinct roles simultaneously: as a security tool (threat detection, anomaly analysis, automated response) and as a target (adversarial attacks, model theft, data poisoning). Most organisations have begun investing in AI-powered security tooling without implementing parallel governance for AI system security.

OWASP’s LLM Top 10 (2024 edition) catalogues the primary attack vectors against large language model applications: prompt injection, insecure output handling, training data poisoning, model denial of service, and supply chain vulnerabilities. Security teams that have deployed AI-driven tools — whether UEBA platforms, threat intelligence systems, or AI-assisted SOC capabilities — need to evaluate their exposure to each of these vectors.

Theme 2: Adversarial Machine Learning Is Moving Into Production Environments

Adversarial ML attacks — techniques that manipulate AI model inputs or training data to produce incorrect outputs — are transitioning from academic research to production threat vectors. Key attack categories include:

  • Evasion attacks: Crafting inputs that cause a security model (e.g., malware classifier, fraud detection) to misclassify malicious activity as benign.
  • Data poisoning: Corrupting training data to introduce systematic model biases that favour the attacker.
  • Membership inference: Extracting information about training data by querying model confidence scores — a significant privacy and data protection risk for models trained on sensitive datasets.
  • Model extraction: Reconstructing a proprietary model through API queries, enabling attackers to develop evasion techniques offline.

NIST’s AI RMF 1.0 and its companion guidance on Adversarial Machine Learning (NIST AI 100-2, published in 2024) provide the foundational framework for assessing and mitigating these risks. ISACA’s AAISM certification body of knowledge covers adversarial ML risk assessment as a core competency — a sign that these skills are now expected at the governance and audit level.

Theme 3: AI Governance Is a Security Control

Summit speakers consistently emphasised that AI governance — the policies, accountability structures, and oversight mechanisms that govern AI system behaviour — is itself a security control. Organisations without formal AI governance are not merely non-compliant; they are operationally exposed.

An AI system without documented accountability creates a situation where security incidents cannot be properly attributed, investigated, or remediated. ISO/IEC 42001:2023 addresses this directly: Clause 5 (Leadership) and Clause 8 (Operation) together require that AI systems operate within defined accountability boundaries and that incident response plans exist for AI system failures.

For Australian organisations, the government’s Safe and Responsible AI consultation process is moving toward a risk-based regulatory framework that will require documented AI governance for high-risk AI applications. Security professionals who understand both the technical and governance dimensions of AI risk will be significantly more valuable in this environment.

Theme 4: The Human Layer Remains the Primary Attack Surface

Despite the growing focus on AI-specific attack vectors, summit discussions repeatedly returned to the human layer as the primary attack surface. AI-generated phishing, deepfake voice and video social engineering, and AI-assisted reconnaissance are dramatically increasing the quality and scale of human-targeted attacks. The 2024 DBIR found that the median time to click a phishing link was under 60 seconds, and AI-generated, personalised content is expected to compress that window further.

Security awareness programmes built around generic phishing simulations are increasingly insufficient against this threat. Effective awareness training in 2026 must incorporate AI-generated content examples, deepfake detection guidance, and decision-making frameworks for verifying identity in digital channels.

Practical Implications for CISSP and CCSP Practitioners

For practitioners holding CISSP or CCSP credentials, AI security themes connect directly to existing CBK domains:

  • CISSP Domain 1 (Security and Risk Management): AI risk assessment methodology, AI-specific threat modelling.
  • CISSP Domain 3 (Security Architecture and Engineering): AI system security architecture, adversarial robustness controls.
  • CCSP Domain 4 (Cloud Application Security): Securing AI APIs, prompt injection defences, LLM application security.
  • CCSP Domain 6 (Legal, Risk, and Compliance): AI regulatory landscape, ISO 42001 alignment, data protection implications of AI training data.

References and Further Reading

  • OWASP LLM Top 10 (2024) — owasp.org
  • NIST AI RMF 1.0 (2023) — nist.gov
  • NIST — Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (2024)
  • ISO/IEC 42001:2023 — AI Management Systems
  • ISACA — AAISM Certification Programme
  • Verizon DBIR 2024 — Human Factor Analysis

OpenAI’s GPT-5.4 Mini and Nano: Security Implications of Lightweight AI at Scale

OpenAI’s release of GPT-5.4 Mini and GPT-5.4 Nano marks a significant moment in the democratisation of capable AI. These lightweight, lower-latency models are explicitly designed for high-volume, cost-sensitive deployments — real-time customer service, embedded applications, mobile AI assistants, and edge computing scenarios. For security professionals, this development has two dimensions: the security of these systems themselves, and the security implications of capable AI becoming widely accessible to both defenders and attackers.

What GPT-5.4 Mini and Nano Represent

The Mini and Nano designations indicate optimised models that trade some capability ceiling for dramatically reduced inference cost and latency. This class of model — sometimes called “small language models” (SLMs) — is increasingly significant because it enables AI capabilities in contexts where full-scale model deployment is impractical: mobile devices, IoT endpoints, embedded systems, and high-throughput API environments.

The security implications stem from the deployment contexts, not the models themselves. When AI capabilities are embedded in high-volume consumer applications, the attack surface — prompt injection, data leakage, output manipulation — scales proportionally with adoption. A single prompt injection vulnerability in a widely deployed AI assistant can affect millions of users.

Security Risks of Lightweight AI at Scale

Prompt Injection at Consumer Scale: As compact AI models are embedded in applications that process user input and take actions on users’ behalf, indirect prompt injection becomes a first-class attack vector. Malicious content in documents, emails, or web pages can manipulate AI agents into taking unintended actions — a risk that scales with adoption. OWASP’s LLM01 (Prompt Injection) remains the highest-priority risk in the LLM Top 10 for this reason.

AI-Assisted Threat Actor Capability: The availability of low-cost, capable AI models dramatically reduces the skill threshold for social engineering, phishing content generation, and vulnerability research by threat actors. This is not hypothetical — security researchers have documented AI-generated phishing campaigns that significantly outperform manually crafted content in terms of click rates and credential capture.

Data Leakage in Embedded AI: Compact models deployed in enterprise applications may process sensitive data — customer PII, financial records, confidential communications — in ways that are not covered by existing data governance frameworks. The information security implications of AI data processing need to be assessed as part of any application security review.

Supply Chain Risk: AI models accessed via API introduce a third-party dependency on the model provider’s availability, security posture, and policy decisions. OpenAI’s API is a critical dependency for an increasing number of production applications — making it a high-value target for disruption attacks and a significant concentration risk for organisations that depend on it heavily.

Governance Considerations for Security Practitioners

Organisations deploying AI capabilities — whether GPT-5.4 Mini, other commercial models, or open-source alternatives — should ensure their security governance addresses:

  1. AI system inventory: Maintain a register of AI systems in use, including their data inputs, outputs, and decision scope. ISO 42001 Clause 4 requires scope definition for each AI system.
  2. Prompt injection controls: Implement input validation, output filtering, and privilege separation for AI agents operating within production environments.
  3. Data classification alignment: Ensure AI systems are not processing data beyond their approved classification level. Systems processing personal data require privacy impact assessments under the Australian Privacy Act and GDPR where applicable.
  4. Third-party AI risk management: Apply standard third-party risk assessment processes to AI model providers, including review of their security documentation, incident response capability, and data handling terms.
  5. Security awareness for AI tools: Ensure staff understand the risks of sharing sensitive information with AI assistants — even those provided by trusted vendors — particularly in contexts where the data may be used for model training.

References and Further Reading

  • OpenAI — GPT-5.4 Mini and Nano Release Notes
  • OWASP LLM Top 10 (2024) — LLM01: Prompt Injection
  • NIST AI RMF 1.0 — Manage Function: Third-Party AI Risk
  • ISO/IEC 42001:2023 — AI Management Systems
  • ENISA — Multilayer Framework for Good Cybersecurity Practices for AI (2023)
  • Australian Privacy Act 1988 — APP 11: Security of Personal Information

Applied Reverse Engineering for Security Professionals: Why This Skill Is More Relevant Than Ever

Reverse engineering — the process of analysing compiled software or binaries to understand their structure, behaviour, and intent — is one of the most powerful skills in an offensive and defensive security toolkit. In an era of rapidly evolving malware, supply chain attacks, and AI-generated code, the ability to analyse unknown binaries at a low level has become an increasingly differentiating capability for incident responders, malware analysts, threat intelligence practitioners, and penetration testers.

This post explores the core concepts, toolchain, and use cases of reverse engineering, with a focus on how it applies to the daily work of security professionals in enterprise environments.

What Is Reverse Engineering in a Security Context?

Reverse engineering in security involves analysing binaries, firmware, protocols, or code without access to the original source, with the goal of understanding what a piece of software does — particularly when it may be malicious. It is applied in several key security scenarios:

  • Malware analysis: Determining the capabilities, persistence mechanisms, C2 communication patterns, and evasion techniques of suspected malware samples.
  • Vulnerability research: Identifying exploitable conditions in software by analysing compiled binaries when source code is unavailable.
  • Threat intelligence: Attributing malware campaigns to known threat actor groups by identifying code reuse, tooling signatures, and behavioural patterns.
  • Incident response: Understanding exactly what a threat actor’s tooling did on a compromised system when log data is insufficient.
  • Supply chain verification: Validating that compiled software matches expected behaviour and has not been tampered with.

The Core Toolchain

Proficiency in reverse engineering requires familiarity with a specific set of tools that operate at different levels of abstraction:

Static analysis tools examine binaries without executing them. IDA Pro and Ghidra (NSA’s open-source disassembler and decompiler) are the primary platforms for static analysis. Ghidra, in particular, has become the reference tool for malware analysis training because it is freely available and produces high-quality decompiled C-like pseudocode from x86, ARM, and other architectures.

Dynamic analysis tools examine binaries during execution, in a controlled environment. x64dbg (Windows debugger), OllyDbg, and GDB (Linux/macOS) allow analysts to step through code, set breakpoints, and observe the runtime behaviour of binaries — including memory manipulation, registry writes, and network connections.

Sandboxes such as Cuckoo Sandbox and Any.Run, together with VirusTotal’s automated behavioural reports, provide dynamic analysis environments that capture behavioural artefacts — file writes, network indicators, process trees — without requiring manual debugging. They are the first-line analysis tool for most SOC workflows.

String and entropy analysis tools identify suspicious patterns in binaries: encoded payloads, hardcoded IP addresses and domains, API call sequences, and encryption routines. Tools like FLOSS (FLARE Obfuscated String Solver) automate the extraction of obfuscated strings from malware samples.
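Entropy analysis in particular is easy to reproduce without specialist tooling: a sliding Shannon-entropy scan flags regions of a binary likely to be packed or encrypted. The window size and threshold below are conventional starting points rather than fixed standards.

```python
import math

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0.0 to 8.0). Values approaching 8
    typically indicate compression, packing, or encryption."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

def high_entropy_regions(data: bytes, window=256, threshold=7.2):
    """Yield offsets of fixed-size windows whose entropy exceeds the
    threshold. Both parameters are illustrative tuning knobs."""
    for offset in range(0, max(len(data) - window, 0) + 1, window):
        if shannon_entropy(data[offset:offset + window]) >= threshold:
            yield offset
```

Flagged offsets then become the starting points for manual inspection in Ghidra, or for string extraction with a tool such as FLOSS.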

Reverse Engineering and the CISSP CBK

For CISSP practitioners, reverse engineering intersects with several CBK domains:

  • Domain 6 (Security Assessment and Testing): Malware analysis and binary review are assessment techniques within the software security testing domain.
  • Domain 7 (Security Operations): Incident response increasingly requires reverse engineering capability to fully characterise threat actor tooling and determine the scope of compromise.
  • Domain 8 (Software Development Security): Understanding how compiled code differs from source code — and how obfuscation, packing, and anti-analysis techniques work — is relevant to secure software development governance.

A Note on Legal and Ethical Boundaries

Reverse engineering in Australia is governed by the Copyright Act 1968, which contains specific provisions regarding decompilation for interoperability purposes under Section 47D. Defensive reverse engineering of malware samples captured in the course of incident response is generally outside the scope of these concerns, but any reverse engineering of commercial software requires careful legal review. Certifications such as GREM (GIAC Reverse Engineering Malware) provide structured, legally scoped training frameworks for practitioners.

Getting Started: Recommended Learning Path

  1. Develop foundational x86/x64 assembly language knowledge — “Computer Organisation and Architecture” by Stallings is a solid starting reference.
  2. Install and learn Ghidra — the NSA’s official training materials are freely available and practical.
  3. Work through malware samples in controlled VM environments — Malware Traffic Analysis (malware-traffic-analysis.net) and VirusTotal provide real samples in a legal, sandboxed context.
  4. Study the FLARE-On challenge archive — FireEye/Mandiant’s annual reverse engineering competition with detailed writeups is one of the best learning resources available.
  5. Consider pursuing GREM (GIAC Reverse Engineering Malware) certification for a structured, recognised credential in this domain.

References and Further Reading

  • NSA/CISA — Ghidra Reverse Engineering Framework — ghidra-sre.org
  • SANS GREM — GIAC Reverse Engineering Malware Certification
  • FireEye/Mandiant — FLARE-On Challenge Archive
  • Sikorski, M. & Honig, A. — Practical Malware Analysis (No Starch Press)
  • OWASP Mobile Security Testing Guide — Static and Dynamic Analysis Sections
  • Malware Traffic Analysis — malware-traffic-analysis.net