April 20, 2026

NIS2 and DORA don't mention Kafka by name. Neither directive was written with event streaming in mind, and neither contains guidance specific to the technology. That's led some Kafka teams to assume they're outside the scope of these regulations. The assumption is wrong, and the gap between what the directives actually require and what most Kafka environments provide is worth examining in detail.

This post walks through the specific requirements, how they map to Kafka, and what a compliant configuration looks like in practice.

The NIS2 requirement

NIS2 Article 21 sets out cybersecurity risk management measures for essential and important entities. Article 21 is written in outcome-driven language, meaning it defines what organizations need to achieve rather than prescribing specific technologies. The directive lists ten mandatory measure areas, and business continuity sits in Article 21(2)(c).

The specific requirement is that organizations implement business continuity management, including backup management, disaster recovery, and crisis management. The language is deliberately broad because the directive applies across sectors, from financial services to healthcare to manufacturing. What "backup management" looks like for a Kafka cluster is not specified. What's specified is that you need to have it, and you need to be able to show it works.

ISO 27001 is the framework most commonly used to operationalize these requirements, and it includes controls for information backup and redundancy of processing facilities. If your compliance program maps ISO controls to NIS2 measures, which most do, the backup controls cover any system that holds data needed for business continuity. Kafka clusters carrying operational data fall into this category.

The DORA requirement

DORA applies to financial entities and critical ICT service providers in the EU. Where NIS2 is broad, DORA is specific. The Digital Operational Resilience Act explicitly addresses ICT risk management, including backup policies, restoration procedures, and testing obligations. Article 12 of DORA requires financial entities to maintain backup policies and procedures, along with restoration and recovery procedures and methods, for critical systems.

DORA also includes requirements for what it calls offline capability and air-gapping for certain categories of data. The exact scope depends on your institution's risk profile and the criticality of the systems involved. But for Kafka clusters that carry transaction data, market data, or operational records used in financial services, DORA treats these as critical ICT systems subject to the full backup and recovery regime.

Both regulations share a common expectation: if you have a system that retains data your business relies on, you need a backup and recovery capability for that system. For most organizations running Kafka in production, the data on Kafka meets that threshold.

Where most Kafka environments fall short

The default pattern for Kafka data protection is replication: a replication factor of three or more across multiple availability zones. This is a high availability strategy, not a backup strategy, and the distinction matters for compliance.

A replication setup doesn't produce the artifacts auditors look for. There's no restore log because restore isn't a separate operation. There's no recovery time metric because the metric that matters for replication is availability, not recovery. There's no audit trail of backup tests because the replicas are always active, so "testing the backup" isn't a defined concept. If your NIS2 auditor asks for evidence of business continuity testing on your Kafka environment, a replication-only setup has no answer.

Under the provability requirements of NIS2 and DORA, this is a problem. ENISA has been explicit about the difference between having a policy and being evidence-ready. A policy explains intent. Evidence proves the control operates as intended. For backup and recovery controls specifically, evidence means documented restore tests with outcomes, recovery time measurements, and audit trails showing the capability was exercised on a regular basis.

What a compliant configuration looks like

A NIS2- or DORA-compliant Kafka backup needs a few specific properties, and these shape how you configure Kannika for a regulated environment.

First, the backup needs to be operationally decoupled from the live cluster. Regulators don't want to see a backup that shares credentials, network paths, or control plane access with the system it's backing up. Kannika stores backups on separate storage that isn't reachable from the Kafka control plane, which satisfies this requirement by design.

Second, the backup needs to run automatically and continuously. Point-in-time snapshots are acceptable for some scenarios, but for systems with active write workloads, continuous backup gives you a cleaner recovery point objective. Kannika's backup runs in real time, writing records to the backup storage as they're produced. Your recovery point objective is measured in seconds, not hours.

Third, the restore needs to be a documented, repeatable operation. During a compliance review, the auditor will ask you to demonstrate a restore. You should be able to do this without writing code or calling an expert. With Kannika, the restore is a declarative YAML definition that you apply against the running system. You can run the operation in a test environment, capture the logs, and produce them as evidence during the next audit cycle.
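As a sketch, a declarative restore of this kind might look like the following. The resource kind and field names here are illustrative assumptions, not Kannika's actual schema; consult the product documentation for the real resource definitions.

```yaml
# Hypothetical restore definition -- field names are illustrative,
# not the actual Kannika schema.
apiVersion: kannika.io/v1alpha
kind: Restore
metadata:
  name: payments-restore-test
spec:
  # The backup to restore from (assumed to live on decoupled storage)
  source: payments-backup
  # The target cluster -- a test cluster when running an audit drill
  target: kafka-test-cluster
  topics:
    - payments.transactions
```

Because the definition is plain YAML, it can be version-controlled alongside the rest of your infrastructure configuration, which makes the restore procedure itself part of the documented evidence.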

Fourth, the audit trail needs to be complete and accessible. Every backup, every restore, every configuration change should be logged with timestamps and outcomes. Kannika generates this trail as part of normal operations. You can pull the restore history, the recovery metrics, and the configuration changes directly from the system when you need them.

The deployment model matters for compliance

One detail that often gets overlooked in compliance planning is where the backup software runs. Adding a SaaS tool to a regulated environment means updating your risk management inventory, running a new vendor assessment, going through procurement, and handling the data flows to a third-party system. For teams trying to get compliant quickly, a SaaS backup tool can add months to the project.

Kannika is designed to run in your own environment. You deploy it on your own infrastructure, on Kubernetes, on a VM, or wherever your operations team prefers. Your Kafka data never leaves your infrastructure. The backup storage is in your control. The audit trail is in your systems. From a compliance perspective, this is a local deployment, not a third-party integration.

For organizations that prefer to offload the operational side, Kannika also has a managed cloud offering. In this model, we run Kannika on a cloud account that you provide, which means the deployment is still in your infrastructure but the installation and maintenance are handled by us. The data path is the same, your Kafka data stays in your environment, but you don't have to build the operational expertise in-house.

A pragmatic path to compliance

If you're preparing for a NIS2 or DORA audit and your Kafka environment isn't covered today, here's a pragmatic sequence to work through:

Identify which Kafka topics carry data relevant to business continuity. Not every topic needs a backup. The topics that matter are the ones where data loss would affect operations, trigger regulatory reporting, or create downstream integrity problems.

Configure Kannika to back up those topics. This is a few clicks in the UI or a YAML definition for more structured deployments. Your Kafka cluster keeps running, and the backup starts populating the operationally decoupled storage.
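A backup definition for the selected topics might be sketched as follows. Again, the resource kind and field names are assumptions for illustration, not the actual Kannika schema.

```yaml
# Hypothetical backup definition -- field names are illustrative,
# not the actual Kannika schema.
apiVersion: kannika.io/v1alpha
kind: Backup
metadata:
  name: payments-backup
spec:
  # Assumed reference to the live cluster
  source: kafka-prod-cluster
  # Storage that is operationally decoupled from the Kafka control plane
  sink: s3-backup-bucket
  # Only the topics relevant to business continuity
  topics:
    - payments.transactions
    - payments.settlements
```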

Run a restore test. This is the step most teams skip, and it's the step auditors care about most. Pick a topic, apply a restore definition against a test target, and verify the data comes back intact. Capture the logs. This becomes your first piece of audit evidence.

Document the configuration and the test. Write down what you backed up, where the backup lives, how the restore works, and what the test outcome was. This is your evidence pack for the next audit cycle.
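One lightweight way to capture this documentation is a structured evidence record per test. The structure below is a suggestion, not a format required by either regulation or by Kannika; the values are placeholders.

```yaml
# Illustrative evidence record for a restore test -- structure and
# values are placeholders, not a mandated format.
test: kafka-restore-drill
date: 2026-04-02
scope:
  backup: payments-backup
  topics: [payments.transactions]
procedure: restore applied to test cluster via declarative definition
outcome: success
recovery_time: 4m12s
evidence:
  - restore-logs-2026-04-02.txt
```

A record like this, produced quarterly and stored with the captured logs, gives an auditor exactly the kind of ongoing evidence of operation both directives expect.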

Establish a regular cadence for restore testing. NIS2 and DORA both expect ongoing evidence of operation, not a one-time demonstration. A quarterly restore test with documented outcomes is a reasonable baseline for most environments.

Try it yourself

The Kannika sandbox at kannika.io covers the basics of backup configuration and restore operations, so you can see the building blocks of a compliant setup. To map this against your actual NIS2 or DORA scope and walk through the configuration with your real infrastructure in mind, get in touch and we can set up a deeper session.

Bryan De Smaele