Secure Product Design covers the architectural decisions, technical foundations, and design choices that
define what the product is from a security perspective.
These are properties and mechanisms that are intrinsic to the product; if you remove them, the product is
fundamentally weaker or loses a core function.
Subclusters
-
A1 Reference
architecture patterns
Reference architecture patterns are reusable structural blueprints that determine how
a product resists, contains, and recovers from attacks.
-
A2 Cryptographic
design
Cryptographic design covers the design-time decisions that determine how cryptography
protects a product throughout its lifecycle.
-
A3 Protocol
security
Protocol security covers the selection, configuration, and hardening of communication
protocols and inter-component interfaces through which a product exchanges data.
-
A4 Product
hardening
Product hardening covers the security configuration of a product as it reaches the
customer, including both runtime protections built into the firmware and the defaults active at first
power-on.
-
A5 Physical
security
Physical security addresses threats that cannot be countered in software alone:
tamper, fault injection, side-channel, and chip-level attacks against the hardware.
-
A6 Human
interaction
Human interaction covers the security of how operators, administrators, and end users
authenticate, configure, and control a product, and how the product's interface design prevents
security-relevant errors.
-
A7 Privacy
architecture
Privacy architecture covers design-time decisions about how product-generated data is
classified, owned, and protected across multiple stakeholders.
-
A8 AI integration
security
AI integration security covers the guardrails, validation approaches, and
architectural patterns needed to embed AI features in products that are robust and predictable under
adversarial conditions.
-
A9 Confidential
computing
Confidential computing protects data while it is being processed by leveraging
hardware-based trusted execution environments, not just at rest or in transit.
-
A10 Device
identity
Device identity covers how a product establishes, maintains, and proves its identity
throughout its lifetime.
-
A11 Autonomous
security functions
Autonomous security functions are on-device capabilities for detecting anomalies,
classifying threats, isolating compromised components, and initiating recovery without human
intervention.
Secure development covers the processes, methods, and tooling by which a development team ensures that a
product meets its intended security goals, from security requirements engineering and threat modelling
through secure coding, testing, formal verification, and feedback loops from the field.
It is important for product security because even well-designed security architectures fail in practice
without disciplined development and validation, and this cluster is where vulnerabilities are systematically
prevented, detected, and corrected before they scale into costly incidents or regulatory non-compliance.
Subclusters
-
B1 Security
requirements engineering
Security requirements engineering concerns the systematic identification,
specification, and traceability of security requirements throughout the product lifecycle.
-
B2 Threat
modelling
Threat modelling systematically identifies and analyses how and by whom a product can
be attacked, misused, or fail from a security perspective.
-
B3 Testability
design
Testability design addresses how products are designed to make their security
properties observable, testable, and verifiable.
-
B4 Secure development
practices
Secure development practices define how engineers write, review, and manage code and
configurations to minimise the introduction of vulnerabilities.
-
B5 Security
testing
Security testing evaluates whether a product and its components actually resist
attacks and misuse as intended.
-
B6 Formal
verification
Formal verification applies mathematically rigorous methods to prove that certain
security properties hold.
-
B7 Field-to-engineering feedback loops
Field-to-engineering feedback loops connect real-world security observations back
into development activities.
This cluster covers the chain of technologies and techniques that are required to transform a properly
engineered product and a collection of sourced components – physical or software – into a final product in a
secure way.
Supply chain processes are increasingly targeted by malicious parties that aim to abuse the significant
efforts that go into combining all the facets of a modern supply chain into a final product; this is
especially relevant in the product landscape, which unifies a traditional software chain, hardware
components, and AI models.
Subclusters
-
C1 Build
integrity
Ensuring that the components fed into the build process are exactly what ends up in
the final product.
-
C2 SBOM
generation
Producing an accurate and usable Software Bill of Materials automatically as a build
artifact, reflecting the actual composition of each release.
-
C3 Component
provenance
Using supply-chain artifacts such as build provenance and SBOMs to verify end
products and feed downstream uses such as vulnerability information exchange.
-
C4 AI model supply
chain
Securing the build pipeline of AI models by protecting both the resulting model and
the process and supply chain to create the model.
-
C5 Supplier
governance
Managing the suppliers behind the supply chain — both commercial businesses and
open-source projects — through organisational measures and by establishing visibility.
This cluster covers the organisational structures, regulatory compliance strategies, standards engagement,
maturity measurement, evidence automation, and vulnerability disclosure processes that ensure product
security is not just implemented but demonstrably governed across the product lifecycle.
With the CRA imposing the first horizontal product security obligations in the EU, sector-specific
regulations adding layered requirements, and the gap between "doing security" and "proving security"
becoming a regulatory liability, governance is where technical capability meets market access, and where
manufacturers who automate evidence generation and integrate compliance into engineering workflows will
outperform those treating it as a separate paperwork exercise.
Subclusters
-
D1 Regulatory
compliance
Regulatory compliance is the navigation of overlapping product security regulations
to build a compliance strategy that satisfies all applicable frameworks without duplicating effort.
-
D2 Standards
engagement
Standards engagement is active participation in standards development bodies, not
just passive adoption of published standards.
-
D3 Security
governance
Security governance covers the organisational structures, decision frameworks, and
role definitions that ensure product security is systematically applied across the product lifecycle and
across business units.
-
D4 Maturity
measurement
Maturity measurement is the structured assessment of how capable an organisation is
at product security, using frameworks, metrics, and benchmarks to guide improvement investment.
-
D5 Evidence
automation
Evidence automation links compliance documentation directly to engineering artifacts,
auto-generating regulatory evidence from CI/CD pipelines, test results, and configuration management
systems.
-
D6 Vulnerability
disclosure
Vulnerability disclosure covers the processes, tooling, and relationships through
which manufacturers receive, validate, and communicate about vulnerabilities discovered in shipped
products.
Post-deployment in product cybersecurity covers all security activities carried out after a product reaches
the market, including secure updates, vulnerability monitoring, incident response, end-of-life management,
field telemetry, digital twin simulation, secure remote access, and real-time security monitoring.
This cluster is critical because threats evolve continuously after release, which is why frameworks like
the EU CRA and FDA postmarket guidance now make post-market security a legal requirement rather than an
optional practice.
Subclusters
-
E1 Secure
updates
The trusted delivery and installation of patches, firmware, and software updates to
deployed products in a way that preserves their integrity, authenticity, and availability.
-
E2 Vulnerability
management
The continuous process of identifying, assessing, prioritising, and remediating
security weaknesses in deployed products throughout their operational lifetime.
-
E3 Incident
response
The structured process of detecting, containing, investigating, and recovering from
security incidents affecting deployed products, and communicating transparently with affected customers
and regulators.
-
E4 End-of-life
management
The planned, secure retirement of products, components, and associated data once they
reach the end of their supported lifetime, ensuring they do not become a lingering security liability.
-
E5 Field
intelligence
The continuous collection and analysis of operational, behavioural, and
security-relevant telemetry from deployed products in the field to inform security decisions across the
fleet.
-
E6 Digital
twins
Virtual, continuously synchronised replicas of deployed products or systems that
enable safe security testing, threat simulation, and validation without touching production
environments.
-
E7 Remote
access
The secure, authenticated ability to connect to deployed products from outside their
local environment for diagnostics, maintenance, monitoring, and updates.
-
E8 Security
monitoring
The continuous observation of deployed products and their operating environments to
detect, analyse, and respond to active threats, intrusions, and suspicious behaviour in real time.
A product's security architecture is its most consequential design decision and its hardest to change.
Defence-in-depth layering, zone and conduit models, privilege separation, fail-secure modes, and
compartmentalisation to contain compromise are all structural properties that must be committed to early.
Attack surface reduction, minimising exposed interfaces, protocols, and services, is an integral part of
good architecture rather than a separate activity. Recovery and rollback mechanisms (factory reset,
safe-mode boot, firmware rollback to a verified version) are equally architectural: they determine whether a
compromised product can be restored without physical intervention.
The challenge is adapting general patterns to the constraints of embedded, resource-limited, and
safety-critical products, where memory, processing power, and certification requirements restrict design
freedom. Products deployed across fleets of thousands of units need architecture that scales: patterns that
work on a prototype must also work at production volumes with diverse network environments and integration
contexts.
Relevant Technologies
- Zone and conduit models (IEC 62443) — structured network segmentation separating
product functions by trust level, with controlled communication paths between zones
- Hardware-enforced isolation — ARM TrustZone, RISC-V PMP, hypervisor-based separation
for partitioning safety-critical and security functions on shared hardware
- Secure boot and verified boot chains — establishing a hardware root of trust that
validates each firmware layer before execution
- Fail-secure and graceful degradation patterns — design approaches ensuring
safety-critical functions continue operating when security subsystems are compromised or under attack
- Rollback and recovery architectures — dual-bank firmware layouts, A/B update schemes,
and factory-reset mechanisms enabling field recovery without physical access
Recent Developments / Incidents
- Volt Typhoon pre-positioning in critical infrastructure (2024-2025) — US authorities
disclosed that Chinese state-sponsored group Volt Typhoon had maintained persistent access to critical
infrastructure networks, including routers and edge devices, for years without detection. The campaign
exploited flat network architectures lacking segmentation, demonstrating that products deployed without
zone-based architecture and lateral movement containment provide attackers with unrestricted internal
access once a single component is compromised.
- CRA mandates secure-by-design architecture (2024) — The CRA's essential requirements in
Annex I explicitly require products to be designed to limit attack surfaces, ensure that exploitation of
one vulnerability does not compromise the wider product, and include mechanisms for restoring the product
to a secure state. This transforms reference architecture patterns from a best practice into a regulatory
expectation: products without compartmentalisation, recovery mechanisms, and attack surface minimisation
may not achieve CRA conformity.
Every connected product depends on cryptography for confidentiality, integrity, authentication, and
non-repudiation. The choices that govern whether that protection holds (e.g. algorithm selection,
protocol-level integration, key hierarchy design) are made during product design and are difficult to change
after manufacturing. A particularly consequential aspect is key ownership: as a product moves from
manufacturer to system integrator to end customer, it must be unambiguous who holds which keys and under
what conditions ownership transfers. Equally, the full key lifecycle (generation, secure storage, rotation,
revocation, and recovery) must be planned for product lifetimes that can span decades and fleets that number
in the thousands. Weaknesses here, such as hardcoded keys, unclear ownership at handover points, or missing
integrity checks, are among the most persistent sources of product security failures across industries.
The regulatory and technical landscape adds urgency. The Cyber Resilience Act requires manufacturers to
apply cryptography appropriately and maintain it over the support period. Post-quantum cryptography
migration means products shipping today need crypto-agility: the architectural capacity to replace
cryptographic algorithms without hardware changes. Hybrid classical/post-quantum schemes must be selected
and validated. Cryptographic mechanisms also play a direct role in protecting manufacturers and supplier
intellectual property (e.g. encrypted firmware, obfuscation, and secure IP embedding) making this topic
relevant well beyond the traditional confidentiality and integrity concerns.
Relevant Technologies
- Post-quantum and classical cryptographic algorithms — ML-KEM, ML-DSA, SLH-DSA, HQC
alongside established AES, RSA, ECC; hybrid classical/PQC schemes for transitional deployment
- Key management infrastructure — HSMs, TPMs, secure elements; key hierarchy design,
ownership transfer models across manufacturer–integrator–customer chain, and full key lifecycle
(generation, rotation, revocation, recovery)
- Protocol-level cryptographic integration — TLS 1.3, DTLS, mTLS, OPC-UA security;
certificate lifecycle management (X.509, OCSP/CRL) and code/firmware signing
- Crypto-agility architectures — algorithm-negotiation layers, pluggable crypto backends,
and design patterns that allow cryptographic algorithm replacement without hardware changes
- Cryptographic Bill of Materials (CBOM) — structured inventory of cryptographic assets
(algorithms, keys, certificates, protocols) as CycloneDX extension; prerequisite for PQC migration
planning and CRA compliance evidence
Recent Developments / Incidents
- Salt Typhoon (2024–ongoing) — Chinese state-sponsored hackers compromised at least nine
US telecommunications providers by exploiting infrastructure that lacked end-to-end encryption by design,
accessing call metadata from over a million users and audio recordings of senior political figures. The
breach demonstrated that cryptographic architecture decisions — particularly the absence of end-to-end
encryption and the presence of mandated access backdoors — create vulnerabilities that persist for decades
and cannot be remediated operationally.
- CRA "State-of-the-Art Cryptography" debate (2024–2025) — As CRA harmonised standards
were being developed, researchers found that existing EU radio equipment standards could allow weak
cryptography to remain acceptable until actively exploited — prompting CRA working groups to move toward a
prescribed approved-mechanism listing maintained by the European Cybersecurity Certification Group. If
adopted, this shifts product cryptographic design from "vendor selects what works" to "regulator defines
what is acceptable," making algorithm selection, cipher suite defaults, and crypto-agility direct
compliance obligations for every product placed on the EU market.
Connected products depend on protocols for every external interaction: standard protocols (TLS, MQTT,
DTLS), industrial protocols (OPC-UA, Modbus, Profinet, EtherCAT), and authentication protocols (mTLS, OAuth
2.0, OIDC, FIDO, X.509). The protocol chosen, and how it is configured, directly determines the strength of
authentication, the confidentiality of data in transit, and the integrity of commands received. Protocol
selection affects not just security but also interoperability and operational complexity: more secure
configurations may break integration with legacy systems.
Industrial protocols present a particular challenge. Many were designed for isolated networks and lack
built-in security: Modbus has no authentication, BACnet has minimal encryption support, and even OPC-UA
implementations vary widely in which security profiles they actually enforce. Securing these protocols
without breaking interoperability or real-time performance constraints is an active area of innovation. The
sub-elements include protocol selection and configuration hardening, TLS/DTLS profile management, industrial
protocol security (wrapping or upgrading insecure legacy protocols), authentication protocol integration for
constrained devices, and API security for product management interfaces.
Relevant Technologies
- TLS 1.3 and DTLS 1.3 — current transport security standards with reduced handshake
latency and removal of legacy cipher suites; DTLS for UDP-based protocols in constrained environments
- OPC-UA security profiles — configurable security modes (Sign, SignAndEncrypt) with
certificate-based authentication for industrial automation
- MQTT 5.0 with TLS — lightweight publish-subscribe messaging widely used in IoT
products, requiring careful TLS configuration for resource-constrained devices
- mTLS (mutual TLS) — bidirectional certificate authentication ensuring both endpoints
verify each other's identity, critical for device-to-cloud communication
- Protocol wrapping and gateway approaches — securing legacy industrial protocols
(Modbus, BACnet, Profinet) by encapsulating them in encrypted tunnels rather than replacing them
Recent Developments / Incidents
- Unitronics PLC exploitation via unprotected PCOM/TCP (2023-2024) — Iranian
IRGC-affiliated actors compromised Unitronics PLCs used in US water treatment facilities by exploiting
devices publicly exposed to the internet with default passwords on TCP port 20256. The PCOM/TCP protocol
used by these PLCs provided no authentication mechanism, allowing attackers to query, validate, and take
administrative control of devices directly. The incident demonstrated that deploying products with
unauthenticated industrial protocols on internet-accessible networks creates trivially exploitable attack
vectors.
- ETSI EN 303 645 becomes basis for IoT protocol security requirements (2024-2025) — The
ETSI consumer IoT security standard, which mandates encrypted communication and authenticated connections
for consumer devices, was adopted as the basis for multiple national certification schemes and is
referenced in the CRA standardisation work. Its requirement that product communication should use
best-practice cryptography and that products should not expose unnecessary network services is driving
protocol security expectations for an increasingly broad range of connected products.
Runtime hardening encompasses engineering decisions baked into the build: secure kernel configuration,
filesystem permissions, memory protections (ASLR, stack canaries, NX), service minimisation, and removal of
unnecessary debug tooling. Secure defaults determine the attack surface at first power-on: unique per-device
credentials instead of default passwords, closed ports until explicitly configured, disabled unnecessary
services, and least-privilege access policies. The practical reality is that the security posture at first
power-on is often the security posture for the product's entire lifetime, because most customers never
change defaults.
The CRA mandates secure-by-default configurations, making hardening a compliance concern alongside a design
discipline. The sub-elements include OS and kernel hardening for embedded Linux and RTOS platforms, memory
protection deployment on resource-constrained hardware, service minimisation and attack surface reduction at
build time, default credential elimination (unique per-device credentials provisioned during manufacturing),
and production-disabling of debug interfaces (JTAG, UART, diagnostic modes).
Relevant Technologies
- Embedded Linux hardening — SELinux/AppArmor policies, read-only root filesystems,
kernel configuration lockdown (sysctl hardening, module signing)
- Memory protection mechanisms — ASLR, stack canaries, NX/XN bit enforcement, Control
Flow Integrity (CFI) on ARM and x86 embedded platforms
- Unique credential provisioning — per-device credential generation during manufacturing,
eliminating shared default passwords across product fleets
- Debug interface management — hardware fuses, JTAG lock mechanisms, and production
firmware builds that permanently disable diagnostic access
- Container and firmware hardening scanners — automated tools that verify hardening
compliance against CIS benchmarks or custom baselines as part of CI/CD
Recent Developments / Incidents
- Unitronics default password "1111" enables water utility attacks (2023-2024) — CISA
responded to active exploitation of Unitronics PLCs in water and wastewater systems, where devices were
accessed using the default administrative password "1111", with a CVSS score of 9.8. The manufacturer
subsequently released a software update requiring users to change default passwords. The incident became a
reference case for CISA's "Secure by Design" initiative and directly informed the CRA's requirement that
products must not ship with default passwords shared across units.
- CISA "Secure by Design" pledge and follow-through (2024-2025) — CISA launched a
voluntary "Secure by Design" pledge in 2024, with over 200 technology companies committing to eliminate
default passwords, increase multi-factor authentication adoption, and reduce entire classes of
vulnerabilities. A year later, CISA published progress reports showing measurable improvement among
signatories, establishing secure-by-default product hardening as an industry norm rather than an
aspiration, and reinforcing the CRA's regulatory direction.
Products deployed in physically accessible environments face adversaries who can touch, open, and
instrument the hardware. Anti-tamper design principles, tamper detection and response mechanisms (such as
zeroisation of secrets on tamper detection), physical access controls, and countermeasures against
side-channel attacks are all part of this topic. Side-channel resistance, specifically, includes
constant-time cryptographic implementations, power analysis masking, electromagnetic shielding, and fault
injection countermeasures. For products in unattended locations such as energy infrastructure, field
equipment, and building systems, physical security is the first line of defence rather than an optional
hardening layer.
Physical access control mechanisms (secure boot tied to hardware identity, anti-cloning measures) round out
the design space.
Relevant Technologies
- Tamper-detection mesh and environmental sensors — active monitoring for enclosure
breach, voltage anomalies, or temperature excursions with automated secret zeroisation
- Side-channel resistant cryptographic implementations — constant-time algorithms,
masking and shuffling countermeasures, blinding for RSA/ECC operations
- Fault injection countermeasures — voltage and clock glitch detectors, redundant
computation with comparison, laser fault injection shields
- Physically Unclonable Functions (PUFs) — silicon-level unique identifiers derived from
manufacturing variation, used for device authentication and key generation without stored secrets
- Common Criteria evaluation (AVA_VAN) — vulnerability assessment levels (VAN.1 through
VAN.5) defining the depth of physical attack resistance required for certified products
Recent Developments / Incidents
- CRA Implementing Regulation defines physical security levels for critical products (December
2025) — The Commission's Implementing Regulation 2025/2392 specified that tamper-resistant
microprocessors and microcontrollers must meet security assurance levels AVA_VAN.2/3, while secure
elements require AVA_VAN.4+. This creates a binding link between physical security design decisions and
CRA product category classification, meaning that products incorporating these components must demonstrate
physical attack resistance through formal evaluation.
- EMFI (electromagnetic fault injection) attacks on automotive ECUs gain practical tooling
(2024-2025) — Researchers demonstrated increasingly affordable electromagnetic fault injection
attacks against automotive electronic control units, bypassing secure boot and extracting firmware from
production vehicles using equipment costing under €500. The research lowered the practical barrier for
physical attacks on deployed products and strengthened the case for integrating fault injection
countermeasures at design time rather than relying on the assumption that physical access requires
sophisticated capabilities.
Products in industrial, medical, and automotive contexts often have constrained interfaces: small displays,
physical controls, limited input methods. Traditional security UX patterns like complex passwords,
multi-step configuration wizards, and certificate management workflows are impractical or even dangerous in
contexts where every second matters (a radiographer authenticating per patient scan, an operator responding
to a process alarm). The product's interface must make the secure option obvious and the insecure option
difficult, because the configuration set at installation is likely the configuration that persists for
years.
The sub-elements include authentication UX for constrained product interfaces, secure-by-default
interaction patterns that minimise misconfiguration risk, role-based access through both physical and
digital means, multi-user access management for products shared across roles (installer, operator, service
engineer, administrator), and human factors of security in safety-critical operational contexts.
Relevant Technologies
- Passwordless authentication for constrained devices — FIDO2/WebAuthn adaptations,
NFC-based authentication tokens, QR-code-based provisioning workflows
- Role-based access control (RBAC) with physical enforcement — key switches, hardware
tokens, and physical presence requirements for elevated access levels
- Secure commissioning workflows — structured setup processes that guide installers
through security configuration with validation checks and secure defaults
- Configuration validation and drift detection — automated checking that
security-relevant settings match the intended configuration, alerting on deviations
- Accessible security dashboards — simplified status indicators for non-expert users
showing device security state, update status, and certificate validity
Recent Developments / Incidents
- Misconfigured medical device networks traced to installation UX failures (2024) —
Multiple healthcare cybersecurity assessments reported that network segmentation failures in
hospital-deployed medical devices were traceable not to missing capabilities but to installation
interfaces that made insecure configurations easier than secure ones. Devices that defaulted to open
network access during commissioning and required manual steps to enable segmentation were consistently
found in insecure states, reinforcing that security UX at installation time determines real-world security
posture.
- CRA mandates user-comprehensible security information (2024) — The CRA requires
manufacturers to provide clear instructions enabling users to securely install, operate, and maintain
products, including information about security-relevant configuration options. This transforms security UX
from a design preference into a compliance requirement: products whose interfaces make secure
configuration unclear or difficult may fail to meet the CRA's essential requirements for user information
and instructions.
Products generate data that may belong to different parties simultaneously: operational telemetry to the
manufacturer, process data to the customer, usage data to the end user. Privacy architecture establishes who
owns what, how data is categorised by sensitivity, and what technical mechanisms enforce protection by
construction rather than by policy. This includes privacy-enhancing technologies (differential privacy,
homomorphic encryption, federated learning, secure multi-party computation) and active transparency
functions that give customers visibility and control over their data, including dashboards, export
capabilities, processing opt-outs, and deletion capabilities.
GDPR applies wherever products process personal data, and sector-specific data rules add further
obligations. But beyond regulatory compliance, data transparency is becoming a competitive differentiator:
customers increasingly evaluate products partly on how clearly they can see and control what data is
collected, where it is processed, and with whom it is shared. The sub-elements include data classification
and ownership models, access control and consent management for product-generated data, data integrity and
provenance tracking, privacy-enhancing technologies for edge processing, and customer-facing data
transparency and control functions.
Relevant Technologies
- Federated learning — training ML models across distributed product fleets without
centralising raw data, preserving data locality and customer control
- Differential privacy — mathematical guarantees limiting what can be inferred about
individual data subjects from aggregated product telemetry
- Homomorphic encryption and secure multi-party computation — processing sensitive data
without decrypting it, enabling manufacturer analytics without customer data exposure
- Data minimisation architectures — edge processing designs that extract needed insights
locally and transmit only aggregated or anonymised results
- Customer data control interfaces — dashboards, APIs, and export/deletion mechanisms
enabling customers to exercise GDPR rights directly through the product
Recent Developments / Incidents
- GDPR enforcement on connected product data processing intensifies (2024-2025) —
European data protection authorities issued multiple fines against manufacturers of connected consumer
products (smart home devices, fitness trackers, connected vehicles) for collecting more data than
disclosed, sharing telemetry with third parties without consent, and making data deletion requests
unnecessarily difficult. The enforcement actions established that GDPR's privacy-by-design requirements
apply to the product's architecture, not just its privacy policy.
- EU Data Act enters application alongside CRA (September 2025) — The EU Data Act became
applicable in September 2025, introducing new obligations for manufacturers of connected products to
provide users and third parties access to data generated by their products. This creates a second
regulatory driver alongside GDPR for privacy architecture: products must now be designed not only to
protect data but to share it on request, requiring technical architectures that support fine-grained data
access control, export APIs, and user-controlled data portability.
AI features in products, such as machine vision, predictive maintenance, anomaly detection, and natural
language interfaces, introduce security concerns distinct from traditional software: adversarial inputs
causing misclassification, model drift degrading performance over time, lack of explainability undermining
trust, and failure modes that were not anticipated in safety analysis. Securing AI integration means
ensuring that the AI components behave predictably even when deliberately probed or manipulated, and that
their outputs can be understood and overridden when needed.
The EU AI Act classifies many product-embedded AI systems as high-risk, adding conformity assessment
obligations that intersect with CRA requirements. The rules for high-risk AI systems embedded in regulated
products have an extended transition period until August 2027, aligning with the CRA's full application
date. The sub-elements include adversarial robustness testing for product-embedded models, input validation
and output bounding for AI features, explainability mechanisms for regulatory and operational purposes,
AI-specific monitoring (model drift, performance degradation, distribution shift), and safety-AI interaction
design (ensuring AI features cannot override safety-critical constraints).
Relevant Technologies
- Adversarial robustness testing frameworks — tools for evaluating model behaviour under
adversarial inputs (ART, CleverHans, Foolbox) adapted for embedded deployment
- Input validation and output bounding — architectural patterns constraining AI model
inputs to expected ranges and outputs to safe operational envelopes
- Explainability and interpretability methods — SHAP, LIME, attention visualisation, and
model-specific transparency approaches for regulatory documentation
- Model monitoring for deployed products — detecting distribution shift, performance
degradation, and anomalous predictions in field-deployed models
- AI-safety integration patterns — architectural separation ensuring AI recommendations
are advisory to safety-critical control systems, with deterministic override capability
Recent Developments / Incidents
- EU AI Act enters force with phased high-risk deadlines (2024-2027) — The AI Act entered
into force on 1 August 2024, with prohibited practices effective from February 2025 and high-risk AI
system obligations for products under existing EU product legislation applying from August 2027. For
manufacturers embedding AI in products already subject to CRA or MDR, this creates overlapping conformity
assessment requirements: the AI system must satisfy both the AI Act and the applicable product regulation,
with the product regulation's conformity assessment taking precedence where both apply.
- Adversarial attacks on automotive perception systems demonstrated at scale (2024-2025)
— Researchers published multiple studies demonstrating practical adversarial attacks on production vehicle
perception systems, including physically realisable perturbations (modified road signs, projected
patterns) that caused misclassification under real-world conditions. These results moved adversarial
robustness from an academic concern to an engineering requirement for any product embedding
safety-relevant computer vision, particularly under UN R155 and the EU AI Act's high-risk classification
for vehicle safety components.
TEEs, secure elements, HSMs, and TPMs provide tamper-resistant key storage, hardware-backed attestation,
secure boot anchoring, and isolated execution environments. Confidential computing enables products to
process sensitive data without exposing it to the host system, the manufacturer, or other tenants. For
products that process customer data on manufacturer-controlled platforms, or that operate in environments
where the manufacturer cannot trust the host, confidential computing provides guarantees that contractual
promises alone cannot deliver.
The selection and integration of trusted hardware is a design-time decision with long-term consequences: it
determines the ceiling of security assurance a product can achieve and is impractical to retrofit. The
sub-elements include TEE architecture selection and integration (ARM TrustZone, Intel TDX, RISC-V Keystone),
secure element and HSM integration for key storage and cryptographic operations, hardware-backed remote
attestation (proving platform state to a verifier), confidential computing as an IP protection mechanism
(running supplier algorithms inside TEEs so the manufacturer cannot extract proprietary logic), and
TPM-based measured boot for deployment integrity verification.
Relevant Technologies
- ARM TrustZone and ARM CCA (Confidential Compute Architecture) — hardware isolation for
embedded and mobile processors, with CCA adding realm-based isolation for cloud and edge workloads
- TPM 2.0 and measured boot — platform integrity measurement and attestation using
standardised trusted platform modules
- Secure elements (e.g., Infineon OPTIGA, NXP EdgeLock) — dedicated tamper-resistant ICs
for key storage, cryptographic operations, and device identity
- Hardware Security Modules (HSMs) — certified cryptographic processors for key
management in manufacturing, fleet management, and high-assurance applications
- Remote attestation protocols — mechanisms allowing a verifier to confirm the software
state of a remote device before trusting its output (e.g., IETF RATS architecture)
Recent Developments / Incidents
- Intel SGX deprecation forces TEE migration decisions (2024-2025) — Intel's decision to
deprecate Software Guard Extensions (SGX) in consumer and mainstream server processors, directing users
toward Trust Domain Extensions (TDX) for VM-level isolation, forced product manufacturers using SGX-based
confidential computing to re-evaluate their TEE architecture. The deprecation illustrated that hardware
trust anchors are themselves subject to vendor product lifecycle decisions, reinforcing the importance of
abstraction layers and crypto-agility extending to trusted execution environments.
- IETF RATS (Remote Attestation Procedures) architecture matures (2024-2025) — The IETF's
Remote ATtestation procedureS (RATS) working group advanced its reference architecture for verifying the
trustworthiness of remote devices, providing standardised roles (attester, verifier, relying party) and
evidence formats. For product manufacturers, RATS offers an interoperable framework for remote attestation
that works across TEE implementations, reducing the risk of vendor lock-in and enabling fleet-wide
integrity verification from a common management platform.
A product that cannot reliably prove who it is cannot participate in any trust relationship. Secure
updates, fleet management, remote access, and customer operations all depend on identity. This topic spans
initial identity models (X.509 certificates, decentralised identifiers, manufacturer-assigned serials),
factory-floor provisioning (key injection, zero-touch provisioning, first-boot trust establishment), runtime
attestation (secure boot chains, measured boot, remote attestation proving platform state), and lifecycle
management (rotation, revocation, re-provisioning, decommissioning).
The choice of identity model has deep implications for interoperability, scalability, and cost.
Certificate-based identity requires PKI infrastructure for issuance, renewal, and revocation across
potentially millions of devices over decades. Zero-touch provisioning must work across manufacturing
partners and contract manufacturers without exposing provisioning secrets. The sub-elements include identity
model selection and provisioning infrastructure design, factory-floor key injection and secure manufacturing
integration, certificate lifecycle management at fleet scale (automated renewal, revocation, CRL/OCSP),
device identity for multi-stakeholder environments (manufacturer, integrator, customer each with their own
identity requirements), and decommissioning identity cleanup (certificate revocation, key zeroisation,
identity de-registration).
Relevant Technologies
- X.509 certificate infrastructure — PKI for device identity issuance, chain-of-trust
validation, and automated lifecycle management (EST, CMP protocols)
- Zero-touch provisioning (ZTP) — factory-floor and first-boot provisioning protocols
that establish device identity without manual intervention (BRSKI, SZTP)
- Matter device attestation — standardised device identity and attestation model for
smart home products using Device Attestation Certificates (DAC)
- FIDO Device Onboard (FDO) — industry standard for automated, secure device onboarding
with late-binding ownership transfer
- Decentralised identifiers (DIDs) — W3C standard for self-sovereign device identity,
enabling identity portability across ecosystems without centralised CA dependency
Recent Developments / Incidents
- Matter protocol adoption drives standardised device identity (2024-2025) — The Matter
smart home standard, backed by Apple, Google, Amazon, and Samsung among others, mandates Device
Attestation Certificates for every product, creating the first large-scale consumer product ecosystem with
cryptographic device identity as a baseline requirement. For manufacturers, Matter's identity model
demonstrates that device identity provisioning at scale is feasible and commercially expected, setting a
precedent that will likely extend beyond smart home into industrial and healthcare product domains.
- Certificate expiry outages highlight lifecycle management gaps (2024-2025) — Several
high-profile connected product outages were traced to expired device certificates that had not been
renewed, causing products to lose connectivity to cloud services and in some cases become non-functional.
The incidents demonstrated that device identity is not a one-time provisioning task but an ongoing
lifecycle management challenge: products deployed for years or decades need automated certificate renewal
infrastructure designed from the start, not bolted on after deployment.
Products deployed at scale across dispersed locations cannot rely on centralised security teams for
real-time response. The autonomous frontier of product security is products that protect themselves: using
behavioural analysis, anomaly detection via ML models, and automated response actions faster than any human
operator. The critical constraint is that autonomous responses must respect safety boundaries and
operational requirements. In a safety-critical product, an autonomous security action that disrupts the
primary function (shutting down a medical device, disabling a vehicle control system) can cause more harm
than the attack it is responding to.
The sub-elements include on-device anomaly detection using ML models (behavioural baselines, network
traffic analysis, process monitoring), automated response actions (session termination, feature isolation,
fallback to safe mode), safety-aware response design (ensuring autonomous security actions cannot violate
safety invariants), validation and testing frameworks for autonomous security behaviours, and the feedback
loop between autonomous detection and fleet-level threat intelligence. This is an emerging area with no
widely accepted framework for testing and certifying autonomous security behaviour.
Relevant Technologies
- On-device ML for anomaly detection — lightweight models (TinyML, TensorFlow Lite)
running on MCUs for real-time behavioural analysis without cloud dependency
- Intrusion detection for embedded systems — host-based and network-based detection
adapted for resource-constrained product environments
- Automated containment and isolation — software-defined network microsegmentation,
process sandboxing, and feature-level disable mechanisms triggered by detection events
- Safety-security arbitration frameworks — decision logic ensuring that autonomous
security responses are subordinate to safety constraints in safety-critical products
- Fleet-level threat correlation — aggregating detection events across deployed product
fleets to distinguish localised anomalies from coordinated campaigns
Recent Developments / Incidents
- Autonomous threat response in industrial edge devices enters production (2024-2025) —
Several industrial automation vendors began shipping edge devices with built-in anomaly detection and
automated containment capabilities, capable of isolating suspicious network traffic or terminating unusual
process communication without operator intervention. These represent the first generation of commercially
deployed autonomous security functions in OT products, though their effectiveness and safety implications
under adversarial conditions remain largely untested in production environments.
- IEC 62443 community begins discussing autonomous security classification (2025) — As
products with autonomous security functions reach market, the IEC 62443 community began discussing how to
classify and evaluate these capabilities within existing security levels. The debate centres on whether
autonomous security functions should be treated as a capability that raises a product's achievable
security level, or whether their unpredictability under adversarial conditions introduces new failure
modes that require additional analysis. No consensus framework exists yet, making this a research gap the
coalition can address.
Security requirements engineering is the discipline of turning security intent into engineering-ready
obligations: what the product must protect, what it must resist, and what it must prove. It focuses on
translating high-level security goals, regulatory obligations, and risk assumptions into concrete,
verifiable requirements that can guide design and development. It includes eliciting requirements from
sources such as legislation (e.g. CRA, MDR, UN R155 and R156), standards (e.g. ISO/SAE 21434), threat
analyses, and stakeholder expectations, then structuring and documenting them in a way that is actionable
for engineers. In practice this means expressing security not as a vague goal (“be secure”), but as specific
constraints and behaviours (e.g., authenticated update installation) that can be designed and implemented. A
key characteristic is bidirectional traceability: requirements must link forward to design elements, tests,
and evidence, and backward to their originating risks or obligations.
Security requirements engineering is critical because gaps or ambiguities at this stage propagate further
in the product development process and are expensive or impossible to fix later. Without clear requirements,
security becomes implicit, inconsistently interpreted, or entirely overlooked. Misuse and abuse cases make
attacker and operator failure modes explicit, so teams design for how the product will be broken rather than
only for how it should be used. Requirement taxonomies (confidentiality, integrity, availability, safety,
privacy) prevent systematic blind spots by forcing coverage across different harm types. Requirement‑to‑test
traceability (often via trace matrices and requirements tooling) ensures each requirement is validated and
evidenced, closing the loop between intent, implementation, and compliance proof instead of leaving security
as an assumption.
Relevant Technologies
- Requirements and traceability tooling — Tools like IBM Doors Next, Jama Connect,
Polarion allow for bidirectional traceability between requirements, design and tests.
- Regulatory requirement mapping guides — Interpretations that translate CRA obligations
into checklists to use as a structured source of requirements, like the BSI TR-03183.
- Machine readable requirements — Standardised format such as OSCAL can help check the
implementation of security requirements in products.
Recent Developments / Incidents
- Equifax data breach (2017) — Attackers exploited an unpatched vulnerability, resulting
in leakage of personal data of millions of people. Although a patching policy was in place, there were no
set requirement on testable, time-bound enforcement.
- CVE‑2024‑3094 (2024) — The XZ Utils backdoor, deliberately introduced by one of the
maintainers, affected many Linux distributions enabling SSH compromise. The trust of maintainers was
implicitly assumed, instead of stating under what conditions trust can be (re-)validated.
This subcluster encompasses systematic methods for reasoning about adversaries, assets, and attack paths in
the context of the product’s architecture and operating environment. Techniques such as STRIDE, LINDDUN,
attack trees, MAL or domain-specific methods are used to explore how the product could be compromised,
considering architecture, data flows, and operational context. Threat modelling is best performed
iteratively, evolving as the product design matures and as deployment contexts change.
Threat modelling is essential because it provides the rationale for security requirements, architecture,
controls and test plans, ensuring that defences are risk-driven rather than checklist-based. It helps
development teams focus effort on realistic and high-impact threats, reducing both overengineering and blind
spots. Asset inventories clarify what is valuable (keys, safety functions, firmware integrity, sensitive
data) so protection effort is correctly concentrated. Attacker models set realistic assumptions about
capabilities (remote vs. physical access, insider vs. outsider), which prevents both overengineering and
dangerous underestimation. Trust boundary identification reveals privilege transitions and implicit
dependencies—common places where vulnerabilities hide—while threat prioritisation and documented
assumptions/accepted risks create an auditable rationale for trade-offs and a clear backlog for mitigation.
Relevant Technologies
- Threat identification framework — Used for identification and categorisation of
security threats; examples are STRIDE and LINDDUN (for privacy threats).
- Attack trees — Used to visualise how an asset may be attacked and where security
controls can be placed to mitigate the threats.
- Data flow diagrams and trust-boundary mapping — Help identify where threats could
impact a system and where security controls can be placed to mitigate them.
- Adversary tactics and techniques framework — Used to model adversary tactics and
techniques; examples are MITRE ATT&CK and the Lockheed Martin Cyber Kill Chain.
Recent Developments / Incidents
- AI as threat — Threat modelling must adapt to evolving technologies like generative AI
as a driver of new threats.
- Employees having widespread access to customer accounts and information (Twitter 2020, Bunq
2024) — In 2020, hackers got access to Twitter administration tools for employees, giving them
the opportunity to take over high-profile social media accounts. At bank Bunq, most employees were allowed
(only prevented by guidelines) to see information on all customers, resulting in unauthorised accesses.
This subcluster focuses on embedding test hooks, diagnostics, and controlled fault-injection capabilities
into the product so that security claims can be validated during development and manufacturing. It includes
designing for observability of security-relevant behaviour (e.g. boot mode, key status, privilege level)
without adding new attack vectors for the product in production.
Security controls that cannot be tested are effectively assumptions, and untestable security often hides
latent vulnerabilities. Good testability design enables earlier detection of security defects and stronger
evidence for compliance and assurance. Fault injection points and stress hooks help uncover edge cases
(glitches, malformed inputs, timing races) that frequently produce exploitable weaknesses, especially in
embedded systems. Observable security states and security event logging provide evidence for assurance and
accelerate debugging of security defects. Secure production-disable mechanisms and locked-down debug/test
interfaces are equally critical: they preserve the benefits of test access in development while preventing
those same interfaces from becoming post-deployment backdoors.
Relevant Technologies
- Secure debug/test interface control — JTAG/SWD locking, debug authentication,
fuse/option-byte based disablement, and secure manufacturing modes.
- Fault injection tooling — Voltage/clock glitching setups, software fault injection
harnesses.
- Security state observability — Secure boot state reporting, measured boot attestation
signals, and security event logging hooks.
- Production-safe test hooks — Test features available in development, securely disabled
in production, like when using signed test firmware, removal of active debug code in release builds.
Recent Developments / Incidents
- CVE‑2025‑26408 (2025) — In Wattsense Bridge devices, typically used for building
management, physical access to the PCB allowed JTAG access and full compromise (firmware
extraction/modification/debug), illustrating the consequence of insufficient test-interface controls.
- CVE‑2025‑15017 (2025) — Active debug code was enabled on UART interface of NPort serial
device servers, allowing unauthenticated privileged operations when someone would have physical access.
This subcluster covers the engineering discipline required to build secure products, including coding
standards, peer review processes, and tool-supported workflow controls. It spans both human practices
(training, review culture, responsibility assignment) and technical enablers such as static analysis,
linters, and secrets management. Increasingly, it also includes AI-assisted development and review, which
introduces new opportunities and risks.
Most product vulnerabilities arise from implementation errors rather than fundamental design flaws, making
secure development practices a primary line of defence. Consistent practices reduce variability in code
quality and help scale security across teams and suppliers. Coding standards reduce recurring classes of
bugs (memory safety errors, injection flaws, insecure randomness) by making safe patterns the default. Peer
review and “golden paths” in CI/CD catch risky changes early and enforce consistent quality across teams,
including suppliers. Automated linting and policy checks prevent drift (e.g., reintroducing banned
functions, weakening crypto settings), while robust credential management avoids some common product
failures: secrets committed to repositories or shared across devices.
Relevant Technologies
- Secure coding standards & guidance — OWASP Secure Coding Practices checklist as a
baseline reference for common vulnerability prevention.
- Repository protection & policy controls — Branch protection, mandatory reviews, and
guardrails that prevent unreviewed or unsafe changes from reaching main branches.
- Secrets detection & prevention — Secret scanning/pre-commit hooks and enterprise
secrets governance to reduce credential leakage risk.
- (AI-assisted) development controls — Rules/prompt governance, linting.
- Secure CI/CD “golden paths” — Standardised pipelines that enforce consistent checks
(build, test, scan) and reduce deviation over different developers.
Recent Developments / Incidents
- “Rules File Backdoor” for AI coding assistants (2025) — Pillar Security described an
attack vector where hidden Unicode/prompt payloads in AI rule files could manipulate Cursor/GitHub Copilot
into inserting malicious code that can evade typical review, indicating that new methods of review are
needed.
- Secret leakage on GitHub (continuous) — Reporting by IBM, GitGuardian and others
describes millions of leaked secrets in public GitHub commits, and breaches involving leaked credentials
costing millions each.
This subcluster encompasses a range of testing techniques applied at different stages of development. These
include static and dynamic analysis, software composition analysis, fuzzing, penetration testing, and
firmware or binary inspection. Testing may target source code, compiled artifacts, runtime behaviour, or
deployed systems, depending on the lifecycle phase and threat model.
Security testing is crucial because it exposes concrete, exploitable weaknesses rather than theoretical
risks. It provides feedback on the effectiveness of requirements, design decisions, and coding practices,
and generates evidence needed for certification and regulatory compliance. SAST and white-box analysis help
catch implementation flaws before they ship, while SCA exposes supply-chain risk by flagging vulnerable
third-party components that quietly dominate the attack surface. Dynamic analysis and firmware scanning
reveal misconfigurations and insecure services that are invisible in code review. Fuzzing and symbolic
execution excel at uncovering rare edge cases that lead to memory corruption or logic bypass, and
penetration testing validates exploitability end-to-end, ensuring that individual findings are interpreted
in system context and prioritised by real impact.
Relevant Technologies
- SAST (Static Application Security Testing) tools — Rules-based and semantic static
analysis integrated into CI to catch vulnerability patterns early.
- SCA / dependency & license scanners — Component inventory and vulnerability
matching for third‑party libraries (often paired with SBOM).
- Fuzzing — Fuzzing engines with sanitisers for memory-safety bug discovery.
- DAST (Dynamic Application Security Testing) tools and pentesting — Dynamic testing and
adversarial validation against integrated systems and real interfaces.
Recent Developments / Incidents
- CVE‑2024‑9143 (2024) — OSS Fuzz reported doing AI-assisted fuzzing to get higher
coverage than with human-only fuzzing. This resulted in finding a vulnerability in OpenSSL which has
probably been there for over two decades without being found.
- Testing of AI-enabled systems — AI components are treated as new attack surfaces, which
also require specialised security testing. Examples are prompt-injection testing and model abuse testing.
As a lot of AI behaviour is non-deterministic, this gives new challenges compared to traditional test
assertions.
This subcluster includes techniques such as model checking, theorem proving, abstract interpretation, and
protocol verification to reason about system behaviour exhaustively. Rather than sampling possible
executions, formal methods aim to demonstrate that entire classes of errors or attacks are impossible under
stated assumptions. They are most often applied to high-assurance components such as cryptographic
protocols, boot chains, or safety-critical security functions.
Formal verification is important for product security because some classes of vulnerabilities are extremely
subtle, safety-critical, or costly to discover post-deployment. While resource-intensive, formal methods can
provide a level of assurance unattainable by testing alone. Protocol verification can prevent systemic
design errors that would compromise every device in a fleet or interfaces with external parties. Abstract
interpretation and model checking can prove absence of particular bug classes (or bound behaviours) across
all paths. Proof artifacts also strengthen assurance cases and compliance evidence, but only if assumptions
are clearly documented and the verified component is integrated carefully, making “verified-to-system”
integration a critical sub-element for maintaining the proven guarantees.
Relevant Technologies
- Protocol verification tools — Prove security properties like secrecy and authentication
in protocols with tools like Tamarin, ProVerif.
- Model checkers and theorem provers — Check model of product and prove (security)
properties with tools like Isabelle/HOL, Coq.
- Abstract interpretation and symbolic execution — Techniques to prove absence of bug
classes or explore paths systematically beyond conventional testing.
Recent Developments / Incidents
- EUCC certificates (2025 onwards) — EUCC certificates can be issued since February 2025
under a Common Criteria–based EU scheme, increasing demand for structured assurance arguments and
evidence. Here formal methods can contribute to a higher evaluation assurance level on the certificate.
- Formal verification of Signal protocol (2025) — In 2025, Signal published their updated
protocol to enhance resilience against quantum computing. During the development process, formal
verification tooling was used from the beginning to create assurance of desired security properties.
Field-to-engineering feedback loops are the mechanisms that translate what happens to products in the real
world into concrete engineering improvements. It focuses on capturing security-relevant information from
deployed products—such as vulnerability disclosures, incident data, telemetry, and misuse patterns, and then
convert them into updates to requirements, threat models, tests, and roadmap priorities. This requires both
technical infrastructure and organisational processes.
Product security is dynamic: attackers adapt, usage changes, and new dependencies are introduced over time.
Feedback loops are therefore essential to prevent products from stagnating at their initial security level.
PSIRT integration and structured triage ensure externally reported vulnerabilities are handled consistently
and that fixes are prioritised. Telemetry and field intelligence analysis reveal weak signals like recurring
authentication failures, abnormal update behaviour, unexpected network services—that often indicate emerging
attacks or misconfiguration at scale. Backlog linkage and explicit updates to threat models and test cases
close the loop, preventing the same vulnerability class from reappearing and turning incidents into durable
improvements rather than one-off patches.
Relevant Technologies
- Product incident response team (PSIRT) tooling and workflows — Structured intake,
triage, coordinated disclosure, and processes that connect external reports to internal remediation work.
- Common Security Advisory Framework (CSAF) and Vulnerability Exploitability eXchange (VEX)
pipelines — Machine-readable advisories and vulnerability-status statements that, at scale, let
teams automatically determine whether a product is affected and prioritise the engineering response.
- Threat intelligence and advisory feeds — Subscription to public advisories that drive
updates to threat models, tests, and hardening priorities.
Recent Developments / Incidents
- Red Hat CSAF and VEX publications (2024 onwards) — For every CVE, VEX files are
published and for every Red Hat Security Advisory a CSAF is published. These are publicly available.
- Cisco Vulnerability Repository — Cisco publishes CSAF/VEX documents, allowing for
automated, machine-readable feedback loops rather than manual advisory triage.
This subcluster covers the technologies that let product builders verify that the input to the build
process is exactly what ends up in the final product. The build must always be able to answer — and ideally
prove — what was built, who built it, and in what environment. Build processes typically have multiple
stages, and addressing these questions at each stage builds confidence that the overall process was
completed without tampering.
Product build integrity has additional facets on top of generic software build integrity. Because products
typically include hardware, extra care must be taken that the built software is also what runs on the
supplied hardware, and that no additional components are injected at any stage.
Relevant Technologies
- Artifact signing and attestation — Signing build artifacts and attesting properties
gives a systematic way of answering the questions underlying build integrity. However, the infrastructure
to do this, including the identification of the parties involved and which standards are applied, remains
challenging.
- Provenance generation — Probably the most useful attestation is that of provenance,
generally conforming to SLSA. Gathering the information that goes into the SLSA attestation, however, is
not straightforward. Doing it easily, flexibly, and ideally with verifiability, is even more challenging.
- Reproducible builds — Reproducing builds are a great way to show that there was no
tampering within a build process. This can be done on the same machine but should also be distributed to
multiple trusted parties. This method is already employed by, for example, package registries within the
Linux ecosystem.
- Secure hardware-software linking — Modern tools such as secure boot allow product
owners to lock down all kinds of process, such as what software is allowed to run on given hardware. This
should be automated in the build process. This provides a sure way of making sure that only the intended
code runs on the product.
Recent Developments / Incidents
- XZ Utils backdoor (CVE-2024-3094, 2024) — A malicious maintainer pushed a backdoor into
the xz package, injecting the payload during the build's test phase so it remained invisible to
source-code auditors. Locking down the supply chain and verifying what happens at each phase of the build
could have detected and prevented the tampering.
- SolarWinds SUNBURST (2020) — Russian state-sponsored attackers compromised SolarWinds'
Orion Platform build environment and injected malicious code into legitimately signed updates distributed
to roughly 18,000 customers, including multiple US federal agencies. The breach went undetected for months
because the malicious payload carried SolarWinds' own signature; only end-to-end verification of
build-environment integrity, beyond the signed-by-vendor check, would have detected the tampering.
This subcluster covers the generation of SBOMs across different environments, and pushes the question of
what should be in an SBOM beyond the established baseline. It also interacts with other kinds of BOMs
relevant to products, such as hardware and AI-model BOMs. Generating an SBOM that accurately represents the
software at hand remains challenging: SBOM tooling is much newer than the rest of the software development
pipeline, and SBOM generation is not yet routinely baked into build processes.
SBOMs are a mandatory part of modern supply-chain security under the CRA. This base of truth about the
supply chain is pivotal for establishing product trust, and for interfacing with BOMs covering other aspects
of the supply chain such as hardware and AI models.
Relevant Technologies
- Docker buildx (and other container build systems) — Container build systems are a prime
target for setting up reliable SBOM generation; they have all the information about what goes into the
container, and the boundaries of the container introduce a clear limit on what should go into the SBOM.
This is currently not standardised.
- cdxgen and syft — CycloneDX Generator (cdxgen) and Anchore's syft are widely-used CLI
tools that scan source repositories, build artifacts, or container images and emit SBOMs in CycloneDX or
SPDX format. cdxgen leans towards source-level analysis across many language ecosystems, while syft
focuses on container and filesystem inspection. Both are typically wired into CI to produce an SBOM as a
build artifact alongside the binary.
- CycloneDX and SPDX standards — The two dominant SBOM formats. CycloneDX (OWASP) is
broader in scope and extends naturally to hardware, AI and cryptographic BOMs; SPDX (Linux Foundation,
ISO/IEC 5962) leans towards licence-compliance use cases. Choice of format affects downstream tool
compatibility.
- Build-system SBOM plugins — Plugins for major build systems (Maven, Gradle, npm/Yarn,
Cargo) that emit an SBOM as a first-class build output, capturing dependency information from the build
manifest rather than reconstructing it post-hoc from binaries.
- SBOM signing and attestation — Combining the SBOM with in-toto attestations and
Sigstore/cosign signatures yields a tamper-evident, verifiable record of the bill of materials.
Increasingly expected for CRA compliance evidence and SLSA build-provenance claims.
Recent Developments / Incidents
- Log4Shell (CVE-2021-44228, 2021) — The Log4j incident was an eye-opener for many
organisations; not only that a widely-used technology such as Log4j could contain the vulnerabilities that
it did, but also that the use of it was much harder to map than necessary. Modern software supply chains
run very deep, and it is hard to find out whether software includes a problematic dependency if direct
dependencies do not report their transitive dependencies. This meant that Log4j ended up in many places
without being used directly.
- Axios maintainer compromise (2026) — One of the maintainers of the Axios package was
compromised. The attacker added a single dependency, named plain-crypt-js, to the Axios package, which had
the sole purpose of installing malware. Axios is a popular HTTP library for JavaScript, and is thus often
directly or indirectly included in JS applications that do networking. An accurate SBOM allows one to
immediately spot whether such a compromised version of a library is in use in a product.
Setting up a solid supply chain and build process — with signatures, provenance attestations, and SBOMs —
is only one step in product supply-chain security. Verifying the eventual result, potentially also on
products already in the field, requires integration of supply-chain technologies and sensible checks against
expectations. This subcluster covers the tools to digest and verify the information produced by build
integrity and SBOM-generation processes, and to use that information in novel ways such as structured
vulnerability assessment and disclosure.
Once a product is deployed, it leaves the direct view of the supplier, and the hardware and software within
it may be modified, tampered with, or degraded over time. Verifying the provenance and contents of a
deployed product, both at acceptance and periodically thereafter, is a powerful tool for maintaining
confidence and detecting compromise.
Relevant Technologies
- Kyverno — Kyverno is a cloud-native tool for gatekeeping (and monitoring) properties of
containers and other objects inside a Kubernetes cluster. While this is not a universally applicable
technology, the framework does allow for an established base, and a standardised way of describing
relevant security properties.
- Trivy, vulnscout etc. — Automating the linking of SBOMs to vulnerability databases is a
connection that has been made. Making this waterproof, and also not over-reporting on unreachable
vulnerabilities, is a much more subtle problem.
- VEX (Vulnerability Exploitability eXchange) — Machine-readable statements that declare
whether a known vulnerability in a listed component actually affects a product (e.g. unreachable code,
mitigating control in place). Pairs with an SBOM to drastically reduce false-positive load on downstream
consumers.
- Sigstore and cosign — Keyless signing and verification of containers, binaries, and
SBOMs against public transparency logs (Rekor). Enables a product owner to verify, at acceptance time,
that the component received is the one the supplier published.
- in-toto attestations and SLSA verification — Standardised format and verification logic
for build-time evidence (who built it, how, what inputs). The SLSA framework defines progressively
stronger provenance levels that a product owner can require of a supplier and verify mechanically.
Recent Developments / Incidents
- Axios maintainer compromise (2026) — Also covered in C2, the Axios incident could have
benefited from the technologies in this subcluster. Signature matching could have stopped the malicious
package if set up properly; standardised vulnerability information exchange could have automatically
flagged organisations that had the compromised package in their supply chain.
- Polyfill.io CDN substitution (2024) — After the polyfill.io domain was acquired by a
new owner in early 2024, the CDN began serving malicious JavaScript to an estimated 100,000+ websites that
included the polyfill script in production. The incident showed that products consuming third-party
components from CDNs effectively trust an external party in perpetuity. Component provenance verification,
including matching delivered content against a known-good fingerprint, would have surfaced the
substitution as soon as it started.
This subcluster covers the specifics of the AI development pipeline, including protection of model weights
at rest and in use, integrity of the training and fine-tuning pipeline, and the security of base models and
datasets sourced from third parties.
AI models are increasingly central to products, but these new capabilities come with novel risks. Malicious
inclusions reaching the product via the model — through data poisoning, or inherited from a base model or
third-party dataset — are much harder to detect than equivalent threats in classical software. Protection of
the model embedded in the product also matters commercially: a source-code leak is typically less damaging
than leaking model weights that constitute the product's unique selling point.
Relevant Technologies
- Input monitoring — The most obvious way to protect AI models inside a product is to
limit the input that can be given for inference. This entails all kinds of methods, such as scanning for
malicious prompts or limiting what kind of input can be given.
- Output monitoring — In models that can perform actions it might be better to monitor
output rather than input. This can come in the form of identification of actions through tracing,
containerisation, etc.
- Confidential computing for weight protection — A more elaborate method to protect
weights is to keep the model encrypted in memory. The hardware (especially a Trusted Execution
Environment) is then leveraged to perform inference without the model weights being reachable within the
product.
Recent Developments / Incidents
- MCP STDIO command-injection design in reference servers (2024-2025) — A design pattern
in Anthropic's Model Context Protocol reference servers allows commands to be passed unsanitised through
STDIO transport. Anthropic indicated that command sanitisation is the responsibility of the implementer;
products that build on the reference implementation without addressing the gap inherit a
remote-code-execution vulnerability.
- Prompt-injection arms race (continuous) — Since the emergence of large language models,
prompt-injection techniques to bypass safety filters have evolved into a continuous arms race. Adversaries
publish new bypasses faster than vendors can patch guardrails, exposing products that embed an LLM to a
steady stream of new attack patterns. Notable categories include indirect injection (malicious content in
retrieved documents) and jailbreaks targeting policy classifiers. Products embedding LLMs cannot rely on
the model vendor's filters alone.
Beyond the technical aspects of the supply chain, which can be addressed through technical measures,
supplier governance covers the organisations behind the components included in the product: commercial
vendors, contract manufacturers, open-source projects, and the people who maintain them.
Even a technically perfect supply chain depends on the trustworthiness of the parties that supply each
component. Suppliers may go out of business, change ownership, get compromised, or shift their security
posture; open-source projects may be taken over by a single new maintainer who later acts maliciously.
Supplier governance is the discipline of maintaining ongoing visibility into and influence over these
parties: vetting and onboarding processes for commercial suppliers, contractual security obligations and
right-to-audit clauses, monitoring of maintainer changes and ownership transfers for critical open-source
dependencies, and structured handling of supplier security incidents that may cascade into product impact.
Relevant Technologies
- Supplier risk-assessment frameworks — Standardised questionnaires such as the Shared
Assessments SIG and the Cloud Security Alliance CAIQ provide a baseline for evaluating commercial
suppliers' security posture at onboarding and at periodic review.
- Open-source dependency-health monitoring — Tools such as the OpenSSF Scorecard,
Tidelift, and Snyk Advisor surface signals about maintainer activity, project funding, security
responsiveness, and bus factor for the open-source components in a product, flagging risk before it turns
into an incident.
- Contractual security clauses and SBOM-in-procurement — Standard contract clauses for
security obligations (incident notification, SBOM delivery, vulnerability handling) and
SBOM-in-procurement practices give product manufacturers leverage and visibility into the suppliers they
depend on, complementing technical controls.
Recent Developments / Incidents
- Maintainer hijack — xz-utils (2024) and Axios (2026) — In the xz-utils backdoor, the
attacker gained access by cultivating the trust of the existing maintainer over years. In the Axios
incident, the maintainer's account email was changed to take control of the package. In both cases product
developers could have been warned earlier if changes in maintainership and credentials of upstream
open-source projects were tracked as a routine governance signal.
- SolarWinds SUNBURST (2020) — Attackers initially gained access to SolarWinds via a
single machine with a weak password, then escalated to the build environment where they obtained
certificate material and injected code into the SolarWinds Orion product that passed all verifications.
The compromised product then provided attacker access to the networks of SolarWinds' customers. The
incident highlights that securing all components in the supply chain and build process matters at the
supplier level too: a weakness in supplier-side discipline can cascade into the products that depend on
them.
Connected products entering the EU market increasingly face multiple concurrent regulatory obligations. The
CRA imposes horizontal cybersecurity requirements on all products with digital elements, while
sector-specific regulations layer additional demands: MDR and IVDR for medical devices, UN R155 and R156 for
automotive, RED for radio equipment, SEMI for semiconductor manufacturing equipment. The EU AI Act adds
conformity assessment requirements for products with embedded AI classified as high-risk. Data protection
obligations under GDPR apply wherever products process personal data. Each regulation imposes its own
technical and procedural requirements, and they overlap, interact, and occasionally conflict.
The practical challenge is not understanding any single regulation in isolation but building a compliance
strategy that scales across a product portfolio. A manufacturer shipping a connected medical imaging system
must simultaneously satisfy CRA essential requirements, MDR cybersecurity expectations, EU AI Act
obligations for any embedded AI models, and GDPR requirements for patient data processing. Doing this
through four separate compliance tracks is prohibitively expensive. The sub-elements of this topic include
multi-regulation gap analysis, compliance architecture across horizontal and vertical regulations, technical
file structuring that satisfies multiple frameworks simultaneously, and conformity assessment pathway
selection (self-assessment vs. third-party vs. EU cybersecurity certification).
Relevant Technologies
- Multi-regulation compliance mapping tools — systematic gap analysis and requirement
cross-referencing across CRA, MDR, RED, UN R155, GDPR, and EU AI Act
- Conformity assessment frameworks — self-assessment, third-party audit (notified
bodies), and EU cybersecurity certification (EUCC) under the Cybersecurity Act
- Technical file management systems — structured documentation linking product evidence
to specific regulatory requirements across multiple frameworks
- Regulatory intelligence platforms — tracking evolving implementing acts, delegated
acts, guidance documents, and FAQ updates across EU institutions
- Product classification tooling — determining whether products fall under CRA default,
important (Class I/II), or critical categories, with implications for assessment pathways
Recent Developments / Incidents
- CRA implementation timeline crystallises (2024-2026) — The CRA entered into force on 10
December 2024, with reporting obligations applying from September 2026 and main obligations from December
2027. In November 2025, the Commission adopted Implementing Regulation 2025/2392 providing technical
descriptions of important and critical product categories, and in April 2025 the three European
standardisation organisations accepted Mandate M/606 to develop 41 harmonised standards by late 2026 —
creating the first concrete compliance framework manufacturers can design against.
- RED EN 18031 controversy exposes multi-regulation friction (2024-2025) — When the Radio
Equipment Directive harmonised standards (EN 18031 series) were published, researchers found that the
definitions and language may allow vendors to take an approach where weak cryptography is considered
acceptable until exploitation is feasible. This triggered a broader debate about alignment between RED and
CRA requirements, illustrating how manufacturers face contradictory signals when multiple regulations
address the same product properties through different standards with different interpretations.
The standards underpinning product security compliance are being written now. IEC 62443 remains the de
facto horizontal product security standard, but CEN-CENELEC JTC13 is developing a new generation of
harmonised standards (the EN 40000 series) specifically for CRA compliance. ETSI contributes vertical
standards for specific product categories. ISO 27001, ISO/SAE 21434, ETSI EN 303 645, and sector-specific
standards from SEMI and TISAX add further layers. Standards provide the technical translation of regulatory
intent into implementable requirements, and they standardise vocabulary, expectations, and evidence formats
across the supply chain.
Engagement gives manufacturers influence over the rules they will be held to, and early visibility into
requirements that may take years to finalise. For any single company, the bandwidth required to participate
in all relevant working groups is rarely available. The sub-elements include tracking and participating in
horizontal standardisation (JTC13 WG9 for CRA), vertical standardisation (product-specific working groups),
international standardisation (IEC, ISO), and industry consortium standards (SEMI, TISAX). Understanding how
standards interact, where they conflict, and where gaps remain is itself a knowledge-intensive activity that
benefits from collective effort.
Relevant Technologies
- IEC 62443 series — industrial automation security standard covering process
requirements (62443-4-1), technical requirements (62443-4-2), and system-level security (62443-3-3)
- CEN-CENELEC EN 40000 series (in development) — horizontal CRA harmonised standards
covering cyber resilience principles, security controls, and vulnerability handling
- ETSI EN 303 645 — baseline consumer IoT security standard, basis for multiple national
certification schemes.
- ISO/SAE 21434 — automotive cybersecurity engineering standard, mandatory under UN R155.
- CSAF and VEX formats — machine-readable advisory and exploitability exchange standards
increasingly referenced in harmonised standards for vulnerability handling
Recent Developments / Incidents
- 41 harmonised standards commissioned for CRA (2025) — In April 2025, CEN, CENELEC, and
ETSI officially accepted Standardisation Request M/606 from the European Commission to develop 41 EU-wide
harmonised standards for CRA compliance, covering horizontal framework standards (Type A, deadline August
2026), product-agnostic technical measures and vulnerability handling (Type B, deadline October 2026), and
vertical product-specific standards (Type C, deadline October 2026). This is the largest concurrent
standardisation effort in EU product security history, and manufacturers who are not tracking these drafts
risk designing against outdated assumptions.
- MITRE CVE funding crisis triggers standards sovereignty debate (April 2025) — When the
US government's funding for the MITRE CVE Programme was briefly at risk, it exposed Europe's dependency on
US-maintained vulnerability identification infrastructure. ENISA's EUVD launched in April 2025 partly in
response, and the incident accelerated discussions within European standardisation bodies about requiring
European-maintained vulnerability identifiers and advisory formats (EUVD, CSAF) alongside CVE in CRA
harmonised standards.
Having the right technical capabilities is necessary but insufficient if the organisation cannot
consistently apply them. Security governance addresses how security decisions are made, who has authority at
each lifecycle phase, and how security is embedded in engineering teams rather than bolted on as a separate
function. This includes lifecycle gates and reviews (design reviews, release approvals, change control),
PSIRT design and operation, security champion models that distribute expertise into product teams, and
cross-functional alignment when multiple business units, product lines, or engineering cultures must operate
under a shared security policy.
Equally important is how exceptions are handled. No product ships with zero known issues; the question is
whether deviations from security requirements are formally documented, risk-accepted with compensating
controls, and tracked to remediation deadlines, or whether they are informally ignored. The sub-elements
include organisational reporting structures for product security, security training and skills development
programmes, structured risk acceptance and exception management, and governance mechanisms for aligning
product security across mergers, acquisitions, and joint ventures.
Relevant Technologies
- PSIRT frameworks and tooling — intake triage, case management, researcher
communication, and advisory publication platforms (FIRST PSIRT Services Framework as reference model)
- Security champion programmes — structured models for embedding security expertise in
product engineering teams at scale
- Lifecycle governance frameworks — phase gates, security review checklists, and
decision-authority matrices integrated into product development processes
- GRC platforms — governance, risk, and compliance tooling adapted for product security
(as opposed to enterprise IT security)
- Training and simulation platforms — product security skills development including
tabletop exercises, CTF environments, and role-specific curricula
Recent Developments / Incidents
- CRA mandates named responsible person (2024) — The CRA requires manufacturers to
designate a contact point for regulatory authorities and to ensure that a person or team is responsible
for compliance. For companies where product security was previously distributed across engineering,
quality, and legal without clear ownership, this is forcing explicit governance decisions about where
product security sits organisationally, who has budget authority, and who signs the EU declaration of
conformity.
- Nexperia breach exposes supply chain governance gaps (March 2024) —
Netherlands-headquartered chipmaker Nexperia confirmed that an unauthorised third party accessed its IT
servers in March 2024, with attackers claiming to have stolen 1 TB of data including chip designs, trade
secrets, and customer information from companies like Apple and SpaceX. The incident highlighted that
governance structures for protecting proprietary product designs and supplier IP require cross-functional
coordination between IT security, product security, and supply chain management, and that a breach at a
component supplier can expose the design data of dozens of downstream manufacturers.
Knowing where you stand is a prerequisite for knowing where to invest. Maturity models like BSIMM, OWASP
SAMM, and IEC 62443 capability/maturity levels provide structured ways to assess organisational product
security practices, benchmark against industry peers, and track progress over time. They translate the
abstract question "how good is our product security?" into assessable activities and observable outcomes.
Product-specific KPIs complement organisational maturity: mean time to patch, vulnerability backlog age,
percentage of fleet on current firmware, security test coverage, and incident response times measure
operational performance rather than process maturity.
The risk is that measurement becomes a bureaucratic exercise that optimises for scores rather than
outcomes. Metrics that reward closing low-severity findings quickly while ignoring systemic architectural
weaknesses drive the wrong behaviour. The sub-elements include framework selection and adaptation (choosing
which maturity model fits the organisation's context), metric design (selecting KPIs that correlate with
actual risk reduction), benchmarking (comparing against peers while accounting for different product domains
and risk profiles), and maturity roadmap development (turning assessment results into prioritised
improvement plans with realistic timelines).
Relevant Technologies
- BSIMM (Building Security In Maturity Model) — empirical model based on observed
practices across hundreds of organisations, enabling peer benchmarking
- OWASP SAMM (Software Assurance Maturity Model) — prescriptive model with
self-assessment tooling and improvement roadmap guidance
- IEC 62443 maturity/capability levels — process maturity (ML 1-4) and security
capability (SL 1-4) levels integrated into the dominant industrial security standard
- Product security KPI dashboards — aggregated metrics covering vulnerability management,
patching cadence, fleet update adoption, and incident response performance
- Risk-based metric frameworks — approaches that link security metrics to business risk
reduction rather than activity counts (e.g., factor analysis of information risk, FAIR)
Recent Developments / Incidents
- BSIMM14 confirms product security as fastest-growing domain (2024) — The 14th iteration
of the BSIMM study documented a significant increase in organisations building product security programmes
as distinct from application security, driven by regulatory pressure (CRA, MDR) and customer procurement
requirements. The data showed that organisations with explicit product security governance structures
scored measurably higher on deployment and operations practices than those treating product security as a
subset of AppSec.
- CRA "support period" declaration forces lifecycle metric commitments (2024-2025) — The
CRA requires manufacturers to define a minimum support period at product launch, which must reflect the
expected product lifetime. This effectively mandates a forward-looking metric commitment: manufacturers
must publicly state how long they will deliver security updates, making support duration a measurable,
comparable product attribute. Several industry associations began developing guidance on reasonable
support periods for different product categories, creating de facto benchmarks.
Regulators and customers demand structured proof of security practices: technical files, risk assessments,
test reports, SBOM snapshots, vulnerability handling records, and code review logs. Generating this
documentation manually is slow, error-prone, and scales poorly across product portfolios with multiple
regulatory frameworks. Evidence automation treats compliance documentation as a build artifact, generated
alongside the product itself rather than assembled retrospectively by a separate compliance team.
The CRA's expectation of machine-readable technical files makes automation structurally necessary, not
merely efficient. The sub-elements include policy-as-code (encoding security policies as executable rules
checked in CI/CD), compliance-as-code (mapping engineering artifacts to specific regulatory requirements
automatically), automated technical file generation (assembling CRA/MDR/RED documentation from pipeline
outputs), and audit trail automation (creating tamper-evident records of security decisions, reviews, and
approvals). This is where the most significant technical innovation opportunity exists within the governance
cluster: the gap between what regulators expect and what current tooling delivers is large.
Relevant Technologies
- Policy-as-code frameworks — Open Policy Agent (OPA), Rego, and similar tools encoding
security policies as executable rules checked against build artifacts
- CI/CD security integration — pipeline plugins that generate structured evidence from
SAST, SCA, DAST, and fuzzing results linked to requirement IDs
- Machine-readable technical file formats — structured documentation formats compatible
with CRA conformity assessment, including SARIF for analysis results and CycloneDX for SBOMs
- Compliance traceability tooling — platforms linking engineering artifacts (code
reviews, test results, threat model updates) to specific regulatory requirements bi-directionally
- Tamper-evident audit logging — append-only, cryptographically chained records of
security decisions, exception approvals, and review outcomes
Recent Developments / Incidents
- CRA machine-readable documentation expectations signal automation mandate (2025-2026) —
As CEN-CENELEC work on the EN 40000 series progresses, draft standards for the CRA technical file
increasingly reference machine-readable formats for SBOMs, vulnerability status (VEX), and security test
results (SARIF). This confirms that the Commission envisions automated evidence generation as part of
normal compliance, not a luxury for large manufacturers. Companies still assembling evidence manually from
spreadsheets and email chains face a structural disadvantage.
- SLSA and in-toto adoption gains traction for provenance evidence (2024-2025) — The
Supply-chain Levels for Software Artifacts (SLSA) framework and the in-toto specification for supply chain
attestation saw increased adoption as manufacturers recognised that build provenance evidence required by
the CRA's SBOM and integrity requirements could be auto-generated by properly instrumented build
pipelines. Several open-source CI/CD platforms added native SLSA Level 3 attestation support, lowering the
barrier to producing tamper-evident build provenance as a standard build artifact.
When a security researcher, customer, or internal team discovers a vulnerability in a deployed product, the
manufacturer needs a structured, trusted process for handling it. This includes PSIRT intake processes
(secure reporting channels, acknowledgement timelines), triage and prioritisation (assessing severity and
exploitability in the product's specific deployment context), coordinated disclosure (agreeing timelines
with researchers, managing pre-notification to affected customers), and advisory publication in
machine-readable formats. The CRA mandates that manufacturers operate a vulnerability handling process and
report actively exploited vulnerabilities to ENISA within 24 hours of awareness, with full notification
within 72 hours.
The sub-elements include PSIRT intake and case management, researcher relations programmes (including bug
bounty and safe harbour policies), VEX publication (confirming which product versions are affected or
unaffected by a given CVE), CSAF advisory distribution, and integration with the EU's vulnerability
reporting infrastructure. The tooling ecosystem for coordinated vulnerability disclosure is an active area
of innovation with significant room for improvement: many manufacturers still manage disclosure through
email and spreadsheets, which cannot meet CRA reporting timelines at scale.
Relevant Technologies
- CSAF (Common Security Advisory Framework) — machine-readable security advisory format
enabling automated distribution and consumption of vulnerability information
- VEX (Vulnerability Exploitability eXchange) — standardised statements clarifying
whether a product is affected by a specific vulnerability, reducing noise for downstream consumers
- PSIRT case management platforms — intake, triage, tracking, and advisory publication
tooling (commercial and open-source options)
- CRA Single Reporting Platform (in development) — ENISA-operated platform for mandatory
vulnerability and incident reporting under the CRA, operational from September 2026
- EUVD (European Vulnerability Database) — ENISA's centralised vulnerability intelligence
platform, launched April 2025, complementing CVE/NVD with EU-focused enrichment and CSAF support
Recent Developments / Incidents
- ENISA launches European Vulnerability Database (May 2025) — ENISA launched the EUVD as
mandated by the NIS2 Directive, providing aggregated, reliable, and actionable vulnerability information
with three specialised views covering critical vulnerabilities, actively exploited flaws, and
EU-coordinated disclosures. The EUVD assigns its own identifiers (e.g., EUVD-2025-xxxxx) alongside CVE
references and supports CSAF for machine-readable advisories, establishing European vulnerability
infrastructure that product manufacturers will need to integrate with alongside existing CVE/NVD
workflows.
- MITRE CVE Programme funding crisis (April 2025) — The US government's MITRE contract
for operating the CVE Programme faced a temporary funding gap, briefly raising the prospect that the
world's primary vulnerability identification system could lapse. Although the EUVD is not designed to
replace the CVE Programme, ENISA worked with MITRE on its development and continues to assess the impact
of the funding crisis. The incident demonstrated that product manufacturers relying exclusively on CVE for
vulnerability identification carry a single-point-of-failure risk, and accelerated European interest in
maintaining independent vulnerability identification and advisory distribution capability.
This subcluster refers to the end-to-end process by which manufacturers deliver firmware, patches, or
software upgrades to deployed devices while ensuring that only authentic, untampered code is installed. A
secure Over-The-Air (OTA) update mechanism protects software integrity and authenticity and is the only
practical way to fix vulnerabilities after a product ships. The process includes secure build and signing,
trusted distribution, protected download, on-device signature verification, safe installation with rollback
support, and post-update validation.
Without a reliable update path, vulnerabilities discovered after launch become permanent, and without a
secure one, the update channel itself becomes a major attack vector because updates are inherently trusted.
Secure OTA enables faster patching, regulatory compliance with EU CRA, ISO 21434, and IEC 62443, and allows
post-launch feature expansion, but only if designed for resilience from the start.
Typical sub-elements include code signing with PKI, encrypted transport (TLS), on-device signature and
integrity verification, A/B partition schemes with automatic rollback, anti-rollback protection against
downgrade attacks, secure boot chaining, and fleet version monitoring.
Relevant Technologies
- Uptane / TUF (The Update Framework) — Open-source frameworks for securing software
update systems; Uptane is the automotive-specific variant used by major OEMs to protect vehicle ECU
updates against key compromise.
- HSM-backed code signing (e.g. SignServer, EJBCA) — Hardware Security Modules store
signing keys in tamper-resistant hardware so firmware can be cryptographically signed at build time
without exposing keys to the software environment.
- A/B dual-bank partitioning with secure bootloaders — On-device architecture that
installs updates to an inactive partition and only switches over after signature verification, enabling
automatic rollback if the new image fails.
Recent Developments / Incidents
- SolarWinds SUNBURST supply-chain attack (2020) — Attackers compromised SolarWinds'
Orion build pipeline and injected malicious code into a legitimately signed update, backdooring roughly
18,000 customers including US federal agencies. The incident redefined industry thinking on securing the
build environment itself, not just signature verification.
- CrowdStrike Channel File 291 outage (July 2024) — A faulty Falcon Sensor update crashed
roughly 8.5 million Windows systems worldwide, grounding flights and disrupting hospitals and banks. A
flaw in CrowdStrike's content validator let the broken file pass checks, exposing the risks of unstaged,
globally simultaneous update rollouts.
This subcluster can be defined as the structured, ongoing discipline of tracking known and newly discovered
weaknesses in a product's own code, its third-party components, and its operating environment, then deciding
what to fix, when, and how. It spans the full loop from discovery through intake, triage, prioritisation,
remediation, and verification, and feeds directly into the secure update (E1) pipeline once a fix is ready.
New vulnerabilities in third-party libraries, operating systems, and a manufacturer's own code surface
constantly after release, and attackers routinely weaponise them within hours of public disclosure, meaning
a product that is not actively monitored will drift from secure to exploitable without any change on the
manufacturer's side.
Typical sub-elements include CVE and advisory monitoring, SBOM-based component tracking, coordinated
vulnerability disclosure programs (VDP), bug bounties, severity scoring with CVSS and EPSS, risk-based
prioritisation, and SLA-driven remediation workflows.
Relevant Technologies
- Software Composition Analysis (SCA) tools (e.g. Black Duck, Snyk, Dependency-Track) —
Continuously scan a product's SBOM against vulnerability databases like the NVD to flag known CVEs in
third-party and open-source components.
- EPSS (Exploit Prediction Scoring System) — A data-driven scoring model that estimates
the probability a vulnerability will be exploited in the wild, helping teams prioritise beyond raw CVSS
severity.
Recent Developments / Incidents
- Log4Shell (CVE-2021-44228, 2021) — A critical remote code execution flaw in the Log4j
library affected hundreds of millions of devices and enterprise products, exposing how few manufacturers
knew which of their products even contained the library and accelerating the industry-wide push for
mandatory SBOMs.
- MOVEit Transfer zero-day (CVE-2023-34362, 2023) — The CL0P ransomware group exploited
an unpatched SQL injection flaw in Progress Software's MOVEit tool before a patch was available, breaching
over 2,700 organisations. It highlighted how slow customer patching and weak product-side disclosure
coordination can turn one CVE into a global incident.
Incident response in a product security context covers how a manufacturer reacts when one of its products
is actively exploited, misused, or implicated in a breach, whether the root cause lies in the product itself
or in how it was deployed. It spans detection and triage, forensic investigation, containment through
patches or mitigations, customer notification, regulatory reporting, and post-incident review to prevent
recurrence.
Even the most securely designed product will eventually face an incident, and the speed and transparency of
the manufacturer's response often determines whether the outcome is a contained event or a large-scale
breach affecting thousands of customers. Regulations now impose strict timelines, such as the EU CRA's
24-hour early-warning rule for actively exploited vulnerabilities.
Typical sub-elements include a Product Security Incident Response Team (PSIRT), incident playbooks,
forensic and telemetry collection, customer advisories and CVE publication, regulatory reporting to bodies
like ENISA or CISA, and lessons-learned reviews.
Relevant Technologies
- SIEM and XDR platforms (e.g. Splunk, Microsoft Sentinel, CrowdStrike Falcon) —
Aggregate logs and telemetry from deployed products and customer environments to detect attack patterns
and support forensic investigation.
- SOAR platforms (e.g. Palo Alto XSOAR, Tines) — Automate incident response playbooks
such as triage, enrichment, containment actions, and notification workflows to reduce response time.
Recent Developments / Incidents
- Ivanti Connect Secure zero-days (2024) — Two actively exploited vulnerabilities in
Ivanti's VPN appliances were leveraged by state-linked actors before patches were ready, forcing CISA to
order US federal agencies to disconnect the devices entirely and exposing the limits of traditional
patch-based response timelines.
- Fortinet FortiGate zero-day CVE-2024-21762 (2024) — A critical out-of-bounds write flaw
in FortiOS SSL VPN was disclosed as already exploited in the wild, prompting Fortinet to issue emergency
advisories and CISA to add it to its Known Exploited Vulnerabilities catalog within days, illustrating
coordinated vendor-regulator response under active attack.
End-of-life (EOL) management covers the structured process by which manufacturers and operators retire
products that have reached the end of their supported life, whether by withdrawing security updates,
decommissioning deployed units, or securely disposing of hardware and residual data. It spans the
manufacturer's side (declaring and communicating EOL dates, issuing final updates, publishing transition
guidance) and the customer side (asset inventory, secure data sanitisation, chain-of-custody tracking,
certified destruction or resale).
A product at end-of-life without a plan becomes a permanent, unpatched attack surface: any vulnerability
discovered after EOL will never be fixed, and attackers actively monitor EOL milestones to target legacy
systems. Equally, retired hardware often still contains credentials, configuration data, and sensitive
customer information that can be recovered if disposal is mishandled, turning a decommissioned device into a
back door.
Typical sub-elements include clearly declared support periods, EOL announcements and migration guidance,
final security patches, cryptographic erasure or physical destruction of storage, chain-of-custody
documentation, and certificates of data destruction aligned with standards like NIST SP 800-88.
Relevant Technologies
- IT Asset Management (ITAM) and CMDB platforms (e.g. Lansweeper, ServiceNow) — Track
every deployed asset's lifecycle stage, flag devices approaching EOL, and enforce structured
decommissioning workflows with full chain-of-custody.
- Certified ITAD (IT Asset Disposition) services with R2v3 / e-Stewards certification —
Third-party specialists that handle secure physical destruction, recycling, and resale of retired hardware
under audited environmental and data-security standards.
Recent Developments / Incidents
- Morgan Stanley improper decommissioning fines (2022) — The US SEC fined Morgan Stanley
$35 million, and the Treasury added $60 million, after retired servers and hard drives containing
unencrypted client data were resold without proper sanitisation. It became a landmark case for how
decommissioning failures translate directly into regulatory liability.
- Windows 10 End of Support (2025) — Microsoft ended free security updates for Windows
10, leaving hundreds of millions of devices exposed unless enrolled in the paid Extended Security Updates
(ESU) program. Analysts flagged it as one of the largest EOL-driven cyber risk events ever, with attackers
expected to weaponise post-EOL CVEs against unmigrated systems.
This subcluster refers to the process by which manufacturers gather real-world data from products already
in customers' hands, turning the deployed fleet into a continuous source of insight about how products
actually behave, fail, and are attacked outside controlled environments. It covers product telemetry (logs,
crash data, performance metrics), usage patterns, exploitation attempts observed in the wild, abnormal
behaviours, and contextual threat signals, all fed back into the manufacturer's security operations and
engineering teams.
Without field intelligence, a manufacturer is effectively blind to what happens after a product ships,
meaning emerging attack techniques, zero-day exploitation, and misconfigurations at customer sites can
persist for months before being noticed. It closes the loop between design-time assumptions and real-world
conditions, enabling faster detection of novel threats, data-driven prioritisation of patches, early warning
of active exploitation, and evidence-based improvements to future product versions.
Typical sub-elements include secure telemetry pipelines, crash and exception reporting, fleet-wide anomaly
detection, threat intelligence enrichment, privacy-preserving data collection, and feedback loops into
vulnerability management and product engineering.
Relevant Technologies
- OpenTelemetry — An open-source, vendor-neutral standard for collecting logs, metrics,
and traces from deployed products, enabling consistent telemetry pipelines across heterogeneous device
fleets.
- Fleet observability platforms (e.g. Memfault, Axonius, Splunk) — Specialised tools that
aggregate device-level telemetry, detect anomalies across large product fleets, and surface security and
reliability signals to engineering and security teams.
Recent Developments / Incidents
- Volt Typhoon discovery through fleet telemetry (2023-2024) — Microsoft and CISA
uncovered a Chinese state-sponsored campaign targeting US critical infrastructure by correlating subtle
"living-off-the-land" behaviours observed across customer telemetry. The case highlighted how aggregated
field intelligence can surface nation-state activity invisible to any single endpoint.
- Tesla fleet learning and OTA-linked telemetry (ongoing) — Tesla uses continuous vehicle
telemetry to detect anomalies, refine autopilot behaviour, and identify potential security issues across
millions of vehicles, demonstrating how field intelligence can feed directly into rapid OTA patching and
became a reference model for automotive cybersecurity under UNECE R155.
A digital twin is a high-fidelity virtual model of a physical product, system, or infrastructure, connected
to its real-world counterpart through a two-way flow of real-time data so that it mirrors the actual
deployed environment. In a product cybersecurity context, digital twins allow manufacturers and operators to
simulate attacks, test patches, validate configuration changes, and rehearse incident response scenarios
against an accurate replica of the live system, without risking downtime, safety, or data integrity on the
real product.
Once a product is deployed, especially in critical environments like industrial plants, medical devices, or
connected vehicles, the cost of experimenting directly on the live system is prohibitive, yet the need to
validate security changes continuously has never been higher. Digital twins bridge this gap by providing a
safe, production-grade sandbox for vulnerability testing, patch validation before rollout, attack path
analysis, and training, shifting security operations from reactive cleanup toward proactive, evidence-based
defence.
Typical sub-elements include synchronised real-time data feeds, physics-based or behavioural system models,
attack simulation environments, patch pre-validation workflows, SBOM-linked component twins, and incident
replay capabilities.
Relevant Technologies
- NVIDIA Omniverse / AWS IoT TwinMaker / Azure Digital Twins — Cloud platforms that
build, host, and synchronise digital twins of products and industrial systems, providing the foundational
infrastructure for security simulation at scale.
- Cyber ranges and simulation environments (e.g. Keysight IxNetwork, Cyberbit) —
Specialised twin-like environments used for adversary emulation, red-team exercises, and incident response
rehearsal against replicas of real deployments.
Recent Developments / Incidents
- NIST digital twin for 3D printer cyberattack detection (2023) — NIST researchers
demonstrated a framework using a digital twin of a 3D printer to detect cyberattacks by comparing
real-time physical sensor data against the twin's simulated output, successfully distinguishing genuine
attacks from benign anomalies and signaling a new model for securing manufacturing fleets.
- Siemens and industrial OT digital twins (ongoing, 2024-2025) — Siemens has rolled out
production-grade digital twins of factory floors enabling operators to test ICS patches and simulate
ransomware scenarios before touching live systems, a direct response to OT incidents like Colonial
Pipeline where operators had no safe way to validate defensive changes.
Remote access covers the mechanisms through which manufacturers, service providers, and authorised users
reach deployed products, whether industrial controllers, medical devices, vehicles, or IoT endpoints,
without being physically present. In a product cybersecurity context, it spans the infrastructure (VPNs,
jump hosts, brokered gateways, cloud remote management platforms), the identity and access controls that
govern who can connect, and the session-level protections that ensure every interaction with the deployed
product is authorised, encrypted, auditable, and reversible.
Remote access is simultaneously one of the most valuable post-deployment capabilities, enabling faster
diagnostics, predictive maintenance, and rapid patching, and one of the most dangerous attack vectors, since
a compromised remote channel gives attackers the same privileged reach as a legitimate engineer. Dragos
reported that over 60% of OT-related cyber incidents in 2024 involved remote access vectors, and similar
patterns appear across healthcare and consumer IoT.
Typical sub-elements include multi-factor authentication, zero-trust network access (ZTNA), session
recording and auditing, just-in-time access provisioning, vendor and third-party access management (VPAM),
protocol-aware gateways for legacy OT systems, and network segmentation.
Relevant Technologies
- Zero Trust Network Access (ZTNA) platforms (e.g. Zscaler Private Access, Cloudflare
Access) — Replace traditional VPNs with per-session, identity-based access decisions that
verify user, device, and context before granting connection to specific resources rather than the whole
network.
- OT-specific secure remote access solutions (e.g. Claroty xDome Secure Access, Cisco Cyber Vision
SEA, Secomea) — Purpose-built for industrial environments, offering protocol-aware gateways,
granular per-asset access policies, and compliance with standards like IEC 62443.
Recent Developments / Incidents
- Colonial Pipeline attack (2021) — Attackers used a single compromised VPN password, on
an account without MFA, to reach internal systems and deploy ransomware, shutting down the largest US fuel
pipeline. The incident reshaped federal policy on remote access to critical infrastructure and accelerated
CISA guidance on phishing-resistant MFA.
- Change Healthcare ransomware attack (2024) — The ALPHV/BlackCat ransomware group
breached Change Healthcare through a Citrix remote access portal that lacked multi-factor authentication,
disrupting US healthcare payments for weeks and exposing data of over 190 million people. It became the
defining case for why MFA on remote access is non-negotiable.
Security monitoring is the ongoing, real-time surveillance of deployed products for signs of active attack,
unauthorised access, malicious behaviour, or policy violations, going beyond the question of "what
vulnerabilities exist" to "what is actually happening right now." It spans log collection and correlation,
network traffic inspection, endpoint behaviour analysis, identity and access monitoring, and threat
intelligence enrichment, with the goal of shortening the window between compromise and detection, which
industry data consistently places at months rather than hours.
A deployed product sits in environments the manufacturer does not fully control, facing adversaries who
continuously evolve their tactics, and even a perfectly designed product can be abused, misconfigured, or
targeted in ways that only become visible through behaviour rather than code. IBM's 2024 Cost of a Data
Breach report found an average dwell time of 199 days before detection and 73 more days to contain, meaning
monitoring is often the difference between a contained incident and a catastrophic breach.
Typical sub-elements include log aggregation and SIEM correlation, intrusion detection and prevention
(IDS/IPS), endpoint and extended detection and response (EDR/XDR), user and entity behaviour analytics
(UEBA), threat intelligence feeds, automated alerting and triage, and 24/7 security operations centre (SOC)
coverage.
Relevant Technologies
- SIEM platforms (e.g. Splunk, Microsoft Sentinel, Elastic Security) — Aggregate,
normalise, and correlate logs and events from across deployed products and their environments to surface
attack patterns that isolated tools would miss.
- EDR / XDR platforms (e.g. CrowdStrike Falcon, SentinelOne, Microsoft Defender XDR) —
Continuously monitor endpoint and workload behaviour, applying machine learning and behavioural analytics
to detect intrusions, lateral movement, and zero-day exploitation in real time.
- OT / ICS-specific monitoring platforms (e.g. Claroty CTD, Dragos Platform, Nozomi
Networks) — Provide passive, protocol-aware monitoring for industrial environments where
traditional IT tools cannot safely operate, detecting anomalies in industrial traffic and known OT
threats.
Recent Developments / Incidents
- Volt Typhoon detection (2023-2024) (Also covered in E5) — US and allied agencies
uncovered a Chinese state-sponsored campaign that had been quietly embedded in US critical infrastructure
for years by using legitimate system tools. It was surfaced only through correlated behavioural monitoring
across multiple victims, underscoring how static detection misses modern "living-off-the-land" attacks.
- Snowflake customer breaches (2024) — Attackers used stolen credentials to access
Snowflake customer environments including Ticketmaster and AT&T, exfiltrating data from over 160
organisations. Many victims lacked adequate monitoring and MFA on their tenants, illustrating how
insufficient security monitoring on deployed cloud products turns single-credential compromises into mass
data breaches.