Products with digital elements shall be designed, developed and produced in such a way that they ensure an appropriate level of cybersecurity based on the risks.
On the basis of the cybersecurity risk assessment, products with digital elements shall:
be made available on the market without known exploitable vulnerabilities;
be made available on the market with a secure by default configuration, unless otherwise agreed between manufacturer and business user in relation to a tailor-made product with digital elements, including the possibility to reset the product to its original state;
ensure that vulnerabilities can be addressed through security updates, including, where applicable, through automatic security updates that are installed within an appropriate timeframe enabled as a default setting, with a clear and easy-to-use opt-out mechanism, through the notification of available updates to users, and the option to temporarily postpone them;
ensure protection from unauthorised access by appropriate control mechanisms, including but not limited to authentication, identity or access management systems, and report on possible unauthorised access;
protect the confidentiality of stored, transmitted or otherwise processed data, personal or other, such as by encrypting relevant data at rest or in transit by state of the art mechanisms, and by using other technical means;
protect the integrity of stored, transmitted or otherwise processed data, personal or other, commands, programs and configuration against any manipulation or modification not authorised by the user, and report on corruptions;
process only data, personal or other, that are adequate, relevant and limited to what is necessary in relation to the intended purpose of the product with digital elements (data minimisation);
protect the availability of essential and basic functions, also after an incident, including through resilience and mitigation measures against denial-of-service attacks;
minimise the negative impact by the products themselves or connected devices on the availability of services provided by other devices or networks;
be designed, developed and produced to limit attack surfaces, including external interfaces;
be designed, developed and produced to reduce the impact of an incident using appropriate exploitation mitigation mechanisms and techniques;
provide security related information by recording and monitoring relevant internal activity, including the access to or modification of data, services or functions, with an opt-out mechanism for the user;
provide the possibility for users to securely and easily remove on a permanent basis all data and settings and, where such data can be transferred to other products or systems, ensure that this is done in a secure manner.
This paragraph establishes the core security-by-design principle of the Cyber Resilience Act.
It is the fundamental cybersecurity requirement upon which all other CRA technical measures are built.
It requires manufacturers to ensure that cybersecurity is not an afterthought but a foundational design criterion — integrated from the earliest stages of product conception through development, production, and post-market operation.
The phrase “appropriate level of cybersecurity based on the risks” means security measures must be proportionate to the threats the product faces and the potential harm that exploitation could cause — to users, organizations, or society.
In essence, this paragraph demands that manufacturers:
Integrate cybersecurity in every lifecycle stage.
Apply risk-based security controls — not uniform, but contextually appropriate.
Demonstrate evidence that design and production decisions are informed by risk assessments and security requirements.
Organizational Actions
Establish Secure-by-Design Governance: Define a governance model that embeds security within product design, engineering, and release processes.
Appoint Product Security Leads: Each product line should have a designated Product Security Owner accountable for implementing secure design principles.
Integrate Security in Development Lifecycle: Security should be integrated into the organization’s product lifecycle stages — ideation, design, development, testing, and production.
Create Cross-Functional Security Forums: Include representatives from security, engineering, QA, compliance, and operations to review design decisions and risk assessments.
Perform Periodic Risk and Design Reviews: Reassess product architecture and risk models at defined checkpoints or after major product updates.
Policy / Process Updates
Define a Secure Development Lifecycle (SDL) Policy: Incorporate CRA’s essential requirements as mandatory security checkpoints.
Update Product Design Policies: Include requirements for secure architecture, data protection, and resilience design.
Add Risk-Based Security Control Procedures: Map security controls to risk severity (e.g., encryption required for high-risk data, sandboxing for high-exposure modules).
Establish Change Management with Security Impact Review: Ensure any design change triggers a cybersecurity impact review.
Integrate Security Requirements into Supplier and Production Policies: Third-party components and OEM suppliers must follow equivalent security design and manufacturing practices.
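The risk-based mapping described above can be made concrete in tooling. The sketch below is illustrative only: the severity levels and control names are assumptions, not CRA-defined terms, and a real program would derive them from its own risk assessment taxonomy.

```python
# Illustrative sketch: map assessed risk severity to mandatory controls.
# Severity levels and control names are examples, not CRA-mandated terms.

RISK_CONTROL_MAP = {
    "high": {"encryption_at_rest", "sandboxing", "mfa", "signed_updates"},
    "medium": {"encryption_at_rest", "signed_updates"},
    "low": {"signed_updates"},
}

def required_controls(severity: str) -> set[str]:
    """Return the control set a component must implement for its risk level."""
    return RISK_CONTROL_MAP[severity.lower()]

def gaps(severity: str, implemented: set[str]) -> set[str]:
    """Controls still missing for the given risk level."""
    return required_controls(severity) - implemented

print(gaps("high", {"signed_updates", "mfa"}))
```

A table like this gives auditors a direct trace from risk severity to applied controls, which is exactly the justification Annex I evidence requires.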
Technical Implementations
Adopt Secure Architecture Principles: Implement least privilege, defense-in-depth, and secure defaults in system design.
Use Secure Coding Frameworks and Standards: Apply OWASP, SEI CERT, or MISRA (for embedded systems) coding standards.
Embed Threat Modeling in Design Phase: Use STRIDE or LINDDUN methodologies to identify potential abuse cases before development.
Automate Security Testing: Integrate SAST, DAST, and SCA tools into CI/CD pipelines.
Harden Build and Deployment Pipelines: Secure build servers, enforce signed builds, and implement reproducible builds.
Establish Security Baselines for Production: Ensure manufacturing processes prevent unauthorized firmware or software injection.
Apply Continuous Security Monitoring: Implement telemetry and logging mechanisms for runtime monitoring and integrity checks.
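The automated-testing step above typically culminates in a release gate. This is a minimal sketch of such a gate; the finding format and threshold policy are assumptions, and real pipelines would parse SARIF or tool-specific scanner output rather than hand-built dictionaries.

```python
# Hypothetical CI security gate: block the build when any scanner finding
# reaches a configured severity threshold. Format and thresholds are
# illustrative assumptions.

SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def gate(findings: list[dict], fail_at: str = "high") -> bool:
    """Return True if the build may proceed (nothing at/above fail_at)."""
    threshold = SEVERITY_ORDER[fail_at]
    blocking = [f for f in findings if SEVERITY_ORDER[f["severity"]] >= threshold]
    for f in blocking:
        print(f"BLOCKING: {f['id']} ({f['severity']}) in {f['file']}")
    return not blocking

findings = [
    {"id": "CWE-89", "severity": "critical", "file": "api/query.py"},
    {"id": "CWE-615", "severity": "low", "file": "web/index.html"},
]
print("build allowed:", gate(findings))
```

Wiring this into CI/CD turns "security by design" from a policy statement into an enforceable release condition.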
Documentation Requirements
Maintain Design and Architecture Documentation: Include data flow diagrams, trust boundaries, and security controls mapping.
Document Risk-Based Security Control Justification: Explain how applied controls correspond to identified risks.
Keep Records of Security Reviews and Approvals: Store review minutes, sign-offs, and change impact assessments.
Link Design Artifacts to Risk Assessment Records (Article 13): Cross-reference architecture documentation with risk assessments and Annex I compliance evidence.
Version-Control All Security Artifacts: Maintain traceable histories for code, design, and security documents.
Common Pitfalls and Readiness Gaps
Treating “security by design” as theoretical rather than enforceable practice.
Over-engineering controls without aligning them to actual risk levels.
Missing traceability between design decisions and risk assessments.
Inconsistent documentation across design, development, and production teams.
Lack of verification mechanisms for third-party components and production environments.
Tools and Frameworks to Help
Secure Architecture & Design: ThreatModeler, IriusRisk, Microsoft TMT
Secure Coding: OWASP ASVS, SEI CERT, MISRA C
Risk-Based Security Integration: ISO/IEC 27005, NIST SP 800-30
Secure SDLC Governance: NIST SSDF, ENISA Secure Software Development Guidelines
Automated Security Testing: Semgrep, SonarQube, Snyk, Dependency-Track
Compliance Tracking: ServiceNow GRC, Vanta, Drata, Confluence with version control
This paragraph sets the tone for risk-proportionate, secure-by-design engineering under the CRA.
Manufacturers must demonstrate — not just claim — that cybersecurity considerations are integral to how products are conceived, designed, and built.
Doing so not only fulfills regulatory obligations but significantly enhances the product’s market trust and long-term resilience posture.
This is the first specific technical requirement under Annex I, Part I, point (2). It transforms the general “secure by design” principle from Paragraph 1 into a measurable product condition — that products must not be released with known exploitable vulnerabilities.
This requirement enforces security assurance at release time.
Manufacturers must ensure that no known vulnerabilities — particularly exploitable ones — are present in the product when it is made available on the EU market.
The key terms here are:
“Known” — meaning vulnerabilities already identified in public or internal sources (e.g., CVE databases, supplier disclosures, bug bounty findings, or internal testing).
“Exploitable” — meaning the vulnerability can be used by an attacker to compromise confidentiality, integrity, availability, or safety.
This does not demand the theoretical absence of all vulnerabilities, but rather that manufacturers demonstrate:
A defined process to detect known vulnerabilities before release.
A risk-based approach to patching or mitigation.
Evidence that the product’s final release is free from known exploitable weaknesses at that time.
Organizational Actions
Establish a Vulnerability Management Program (VMP):
Define ownership (e.g., Product Security or SecOps) and governance for vulnerability discovery, tracking, and mitigation.
Integrate Security Assurance into Release Criteria:
Require security clearance before any product release — verifying “no known exploitable vulnerabilities.”
Maintain Threat and Vulnerability Intelligence (TVI):
Continuously monitor NVD, CVE, CISA KEV catalog, supplier advisories, and open-source feeds for issues relevant to your SBOM.
Appoint a Security Champion in Engineering Teams:
Empower designated engineers to enforce vulnerability review and closure before release.
Policy / Process Updates
Develop a Vulnerability Disclosure and Handling Policy:
Define intake, triage, validation, remediation, and disclosure timelines aligned with ISO/IEC 30111.
Update Release Management Policy:
Make vulnerability verification a mandatory pre-release gate.
Implement Supplier Vulnerability Compliance Clauses:
Require third-party component vendors to disclose vulnerabilities promptly and provide patches.
Define Severity and Exploitability Criteria:
Use CVSS scoring + exploitability analysis (e.g., EPSS or vendor advisories) to determine what qualifies as “known exploitable.”
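A triage rule combining these signals might look like the sketch below. The thresholds (CVSS ≥ 7.0, EPSS ≥ 0.1) are illustrative policy choices an organization would set itself; the CRA does not prescribe numeric values.

```python
# Sketch of a pre-release "known exploitable" triage rule. Thresholds are
# illustrative policy assumptions, not CRA-defined values.

def is_known_exploitable(cvss: float, epss: float, in_kev: bool,
                         exploit_public: bool) -> bool:
    """Treat a CVE as release-blocking if it appears in CISA KEV, has a
    public exploit, or combines high CVSS severity with a non-trivial
    EPSS (predicted exploitation) score."""
    if in_kev or exploit_public:
        return True
    return cvss >= 7.0 and epss >= 0.1
```

Codifying the rule keeps release decisions consistent and makes the "what qualifies as known exploitable" criterion directly auditable.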
Technical Implementations
Automated Vulnerability Scanning:
Integrate SAST, DAST, SCA, and container scanning into CI/CD pipelines.
Maintain a Software Bill of Materials (SBOM):
Use tools like CycloneDX or SPDX to track components and check for known CVEs automatically.
Run Continuous Dependency Checks:
Use tools like Dependency-Track, Snyk, or GitHub Dependabot for live vulnerability feeds.
Perform Penetration Testing Pre-Release:
Validate that exploitable conditions (e.g., RCE, privilege escalation) are mitigated or patched.
Establish Patch Verification Process:
Ensure fixes are tested for regression and effectiveness before product release.
Implement Secure Configuration and Hardening Scripts:
Disable debug ports, remove default credentials, and restrict unnecessary services.
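The SBOM and dependency checks above reduce, at their core, to matching components against an advisory feed. The sketch below does this against a minimal CycloneDX-style document; it is deliberately simplified — the advisory format is assumed, and real tooling (e.g. Dependency-Track) matches on package URLs and version ranges rather than exact pairs.

```python
import json

# Simplified sketch: match CycloneDX SBOM components against an advisory
# feed keyed by (name, version). The feed format is an assumption for
# illustration; production tools use purl and version-range matching.

SBOM = json.loads("""{
  "bomFormat": "CycloneDX",
  "components": [
    {"name": "openssl", "version": "1.1.1k"},
    {"name": "zlib", "version": "1.3.1"}
  ]
}""")

ADVISORIES = {("openssl", "1.1.1k"): ["CVE-2021-3712"]}

def affected(sbom: dict) -> dict:
    """Return {(name, version): [CVE ids]} for components with advisories."""
    hits = {}
    for comp in sbom.get("components", []):
        key = (comp["name"], comp["version"])
        if key in ADVISORIES:
            hits[key] = ADVISORIES[key]
    return hits

print(affected(SBOM))
```

Running this kind of check on every build provides the component-to-CVE traceability the documentation requirements below call for.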
Documentation Requirements
Maintain Vulnerability Assessment Reports:
Include scan results, patch evidence, and risk acceptance decisions for unresolved low-severity findings.
Keep SBOM and CVE Mapping Records:
Demonstrate traceability between product components and vulnerability checks.
Record Release Readiness Certificates:
Document that “no known exploitable vulnerabilities” were present at release time.
Log Vulnerability Exceptions (if any):
For deferred fixes (e.g., third-party dependency awaiting vendor patch), document compensating controls and rationale.
Maintain Vulnerability Intelligence Log:
Show active monitoring sources, dates, and triage outcomes.
Common Pitfalls and Readiness Gaps
Treating vulnerability scanning as a one-time activity instead of continuous.
Failing to distinguish between “known” and “unknown” vulnerabilities.
Ignoring third-party component vulnerabilities or assuming vendor coverage.
Lack of traceability between vulnerability findings and SBOM.
Releasing patches without formal retesting for exploitability closure.
Tools and Frameworks to Help
Vulnerability Scanning: Snyk, Dependabot, Trivy, Anchore, Tenable.io
SBOM Management: CycloneDX, SPDX, Dependency-Track
Exploitability Evaluation: CISA KEV, EPSS, Exploit DB
Policy/Process Standards: ISO/IEC 30111, NIST SP 800-40, ENISA Vulnerability Management Guidelines
Risk Tracking: Jira + Security plugin, ServiceNow VRM
This requirement embodies “assurance before availability.”
It demands proof that every product released to the market is free of known exploitable vulnerabilities — backed by a continuous and verifiable process.
By embedding automated scanning, structured triage, and release gating, manufacturers not only comply with CRA but also build a measurable trust signal for regulators and customers alike.
This requirement establishes the “secure by default” principle, ensuring that any product’s initial configuration — right out of the box or after installation — is hardened and safe to use without requiring the user to apply additional security settings.
In practice, this means:
Security settings must not be disabled by default (e.g., encryption off, firewalls disabled, open ports).
Default accounts, passwords, or debug interfaces must not be active.
The product should allow the user to reset to a secure factory state, useful for remediation, transfer, or recovery.
The “unless otherwise agreed” clause gives flexibility for tailor-made (customized B2B) solutions, but even there, deviations must be documented and contractually justified.
Organizational Actions
Define “Secure by Default” Baselines for each product family (e.g., server, embedded device, software agent).
Integrate Secure Configuration Verification into the product release checklist.
Create a Configuration Security Governance Committee (or Product Security Board) to approve baseline configurations.
Assign Responsibility for Configuration Security to Product Security or DevSecOps engineers during development.
Policy / Process Updates
Secure Configuration Policy:
Mandate that products must be delivered in a state that minimizes attack surface (e.g., all optional network services off unless required).
Factory Reset Policy:
Require all products to include a secure and verifiable “reset to factory defaults” function that removes user data and restores the secure baseline.
Tailor-Made Exception Procedure:
Define a formal process to document when a customer (business user) agrees to a non-default configuration — including rationale, residual risks, and signatures.
Configuration Hardening Guidelines:
Publish internal baselines (e.g., Linux hardening, cloud environment templates, device lockdown parameters).
Customer Documentation Requirement:
Provide users with configuration and hardening guidance in the product manual or installation wizard.
Technical Implementations
Disable Unused Services and Ports:
Apply the principle of least functionality; only necessary services should run by default.
Enforce Unique Credentials per Device/User:
Eliminate default shared passwords; use random password generation or onboarding tokens.
Enable Security Features by Default:
Examples: encryption, secure boot, automatic updates, and integrity checks.
Provide Factory Reset Capability:
Must securely wipe sensitive data (user credentials, logs, encryption keys) while restoring secure default configuration.
Integrate Configuration Validation in CI/CD:
Use automated compliance scans (e.g., CIS-CAT, OpenSCAP, or custom scripts) to test images for compliance with security baseline.
Implement Configuration Lockdown:
For sensitive environments, restrict users from weakening security settings without administrative approval.
Ensure Configuration is Cryptographically Signed:
Prevent tampering with configuration files or setup scripts.
Provide Secure Installation Wizards:
Guide users to maintain security (e.g., force password change on first use, display security status indicators).
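Eliminating shared default passwords, as described above, usually means provisioning a unique credential per device at manufacturing time. A minimal sketch, with an illustrative alphabet and length chosen for printable device labels (ambiguous characters removed):

```python
import secrets
import string

# Sketch: generate a cryptographically random, device-unique first-boot
# credential instead of a shared factory default. Alphabet and length are
# illustrative choices; ambiguous glyphs (O/0, I/1) are dropped for labels.

ALPHABET = "".join(c for c in string.ascii_uppercase + string.digits
                   if c not in "O0I1")

def provision_credential(length: int = 16) -> str:
    """Per-device onboarding password; the wizard should force a change
    on first use, per the secure-installation guidance above."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

cred = provision_credential()
print(len(cred), cred.isalnum())
```

Using the `secrets` module (rather than `random`) matters here: credentials must come from a cryptographically secure source.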
Documentation Requirements
Secure Configuration Baseline Document:
Clearly lists which settings, ports, services, and features are enabled/disabled by default.
Factory Reset Design Document:
Details technical implementation of reset functionality, ensuring user data sanitization.
Configuration Exception Register:
Record all approved deviations from secure baseline (especially for tailor-made systems).
Customer Configuration Guide:
Delivered with the product, showing how to maintain or restore secure configuration.
Release Test Reports:
Evidence from automated scans or QA testing confirming compliance with secure-by-default configuration before release.
Secure Default Review Record:
Minutes or approval notes from Product Security Board confirming the default configuration meets policy.
Common Pitfalls and Readiness Gaps
Leaving test/debug interfaces active in production builds.
Shipping products with default admin credentials or open management ports.
Assuming customers will harden the system themselves.
Not providing a reliable “reset to factory defaults” option.
Inconsistent configuration defaults across product versions or regional builds.
Lack of documentation for deviations in customized deployments.
Tools and Frameworks to Help
Configuration Compliance: CIS-CAT, OpenSCAP, Chef InSpec
Secure Build Verification: GitLab CI, Jenkins + compliance scripts
Baseline Enforcement: Ansible, Puppet, Terraform (with security modules)
Device Hardening: STIGs, CIS Benchmarks
Reset Implementation Validation: Factory reset QA tests, Secure erase tools
This provision operationalizes security in the default state: the product itself must protect the user even if the user takes no further action.
It ensures every unit shipped to market starts from a known secure baseline, minimizing exposure and preventing misconfiguration-driven vulnerabilities.
For organizations, compliance is achieved when secure configuration baselines are:
Defined,
Enforced during development,
Verified at release, and
Documented for traceability.
This provision mandates that manufacturers design and support mechanisms to fix vulnerabilities efficiently and transparently after the product is released.
In simple terms:
You must provide a reliable, secure, and timely way to deliver and apply security updates.
By default, updates should install automatically, unless the user explicitly opts out.
Users must receive notifications when updates are available.
Users must be able to temporarily defer updates (e.g., due to operational reasons) but not indefinitely.
The update process itself must be secure, preventing tampering, rollback, or unauthorized updates.
This requirement directly supports the CRA’s principle of maintaining security throughout the product lifecycle.
Organizational Actions
Establish a Secure Update Management Program integrated with vulnerability handling (Article 11) and coordinated disclosure.
Define Update Ownership: Assign clear accountability to Product Engineering, Security Operations, and Customer Success for update creation, validation, and delivery.
Develop SLA for Security Patch Timeliness: e.g., critical vulnerabilities patched within X days from identification.
Implement Product Support Lifecycle Policies: Define minimum guaranteed update support period (e.g., 5 years).
Create an Update Transparency Statement: Publicly communicate how updates are delivered, their frequency, and user controls.
Set up Security Update Readiness Reviews prior to product releases to verify that secure update functionality exists and works reliably.
Policy / Process Updates
Security Update Policy:
Define how security fixes are developed, tested, signed, and distributed. Include:
Authentication of update sources.
Cryptographic signing.
Rollback prevention.
Version tracking and audit.
Vulnerability Handling Procedure Integration:
Link update release workflow with vulnerability disclosure management (Article 11 compliance).
Automatic Update Defaults Policy:
Require that automatic updates are enabled by default but can be opted out via a simple user interface.
User Notification Procedure:
Specify how and when users are informed about available updates (in-app notification, email, or device alert).
Temporary Postponement Process:
Define maximum allowable postponement timeframes (e.g., 7 days for critical updates).
Update Recordkeeping Policy:
Mandate logging of update delivery, installation status, and user deferrals for traceability and audit.
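The bounded-postponement rule above can be expressed in a few lines. The 7-day window mirrors the example given in the policy and is an assumption, not a CRA-mandated value.

```python
from datetime import datetime, timedelta

# Sketch of a deferral policy: users may postpone a security update, but
# only within a bounded window. The 7-day limit is an illustrative value.

MAX_DEFERRAL = timedelta(days=7)

def may_defer(update_published: datetime, now: datetime) -> bool:
    """True while the user is still allowed to postpone installation."""
    return now - update_published < MAX_DEFERRAL

published = datetime(2025, 1, 1, 12, 0)
print(may_defer(published, datetime(2025, 1, 5)))  # still within the window
print(may_defer(published, datetime(2025, 1, 9)))  # window expired
```

Once the window expires, the client enforces installation — satisfying "temporarily postpone, but not indefinitely."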
Technical Implementations
Secure Update Mechanism:
Implement cryptographically signed updates (e.g., using RSA/ECDSA).
Validate signature and integrity before installation.
Use TLS-secured channels for update distribution.
Automatic Updates:
Enabled by default at installation or first boot.
Provide a simple toggle for opt-out (in settings or management UI).
Allow temporary postponement (e.g., “Remind me later” up to defined limit).
Update Timeliness:
Integrate update pipelines with vulnerability triage systems (e.g., JIRA + CI/CD).
Automate build, sign, and publish processes via secure CI/CD workflows.
Rollback Protection:
Implement anti-rollback measures using signed version numbers or firmware counters.
Prevent installation of older, vulnerable versions.
User Notification:
Provide clear versioning and changelog for transparency.
Notify users when updates are applied, failed, or deferred.
Testing & Validation:
Test updates in staging environments with representative hardware/software.
Automate regression and security testing pre-deployment.
Telemetry (Optional but Recommended):
Track update adoption rates and failures (with anonymization).
Feed data into product improvement and compliance reporting.
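The signature-plus-anti-rollback logic above can be sketched as follows. Note the deliberate simplification: production systems use asymmetric signatures (e.g. Ed25519 or ECDSA, as in TUF/Uptane) so devices hold only a public key; HMAC appears here purely to keep the example standard-library only.

```python
import hmac
import hashlib

# Sketch of update verification with anti-rollback. HMAC stands in for an
# asymmetric signature (Ed25519/ECDSA in real deployments) so the example
# needs only the standard library. The key is a placeholder.

SIGNING_KEY = b"demo-key-not-for-production"

def sign(payload: bytes, version: int) -> bytes:
    # The version number is bound into the signature so an attacker cannot
    # re-label an old image as a newer release.
    return hmac.new(SIGNING_KEY, version.to_bytes(4, "big") + payload,
                    hashlib.sha256).digest()

def verify_and_accept(payload: bytes, version: int, sig: bytes,
                      installed_version: int) -> bool:
    """Accept only authentic updates strictly newer than what is installed."""
    expected = sign(payload, version)
    if not hmac.compare_digest(expected, sig):
        return False                      # tampered or unsigned
    return version > installed_version    # anti-rollback check

fw = b"firmware-image"
print(verify_and_accept(fw, 5, sign(fw, 5), installed_version=4))  # accept
print(verify_and_accept(fw, 3, sign(fw, 3), installed_version=4))  # rollback
```

Binding the version into the signed material is the key design choice: it prevents downgrade attacks even when the old image itself was once legitimately signed.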
Documentation Requirements
Security Update Management Policy Document:
Describes lifecycle, roles, timelines, and procedures.
Patch Release Records:
Evidence of signed updates, version control, and release notes.
Update Delivery Logs:
Proof that updates were distributed and installed correctly.
Opt-Out and Postponement Design Documents:
Show how user control is implemented securely.
Testing Evidence:
QA and security validation reports confirming update mechanism reliability.
Support Lifecycle Statement:
Official documentation of how long security updates will be provided (e.g., “supported until YYYY”).
Common Pitfalls and Readiness Gaps
Delivering updates manually (e.g., via download links) instead of automated mechanisms.
Not cryptographically signing updates.
Long delays between vulnerability discovery and patch release.
No process to notify users about updates or failed installations.
Automatic updates turned off by default.
Poor rollback protection allowing downgrade to vulnerable versions.
Lack of auditable logs for patch distribution.
No defined product end-of-support timeline.
Tools and Frameworks to Help
Secure Update Distribution: Uptane (for automotive), The Update Framework (TUF), Mender.io, Balena
Patch Automation: Jenkins, GitHub Actions, GitLab CI with signing plug-ins
Cryptographic Signing: OpenSSL, Cosign, Sigstore
Update Verification: SBOM-based validation (CycloneDX, SPDX)
Compliance Reference: ISO/IEC 30111 (vulnerability handling), NIST SP 800-40 rev.4 (Guide to Enterprise Patch Management), ENISA Security-by-Design Guidelines
This clause ensures products remain secure after release. It connects lifecycle management with ongoing vulnerability mitigation.
Compliance means your update system must be:
Secure (signed, validated)
Automatic (default-on)
Timely (aligned with risk)
Transparent (user notified and traceable)
It’s not enough to build a secure product — you must also maintain that security reliably and demonstrably throughout its life.
This requirement focuses on access control and identity protection, the foundation of any secure system. It also adds a reporting expectation for detected unauthorized access — introducing monitoring, detection, and alerting obligations alongside preventive measures.
This clause requires manufacturers to design and implement robust access controls to prevent unauthorized access — whether to the device, its interfaces, data, or backend systems.
In essence, the product must:
Prevent unauthorized access using strong authentication and access control mechanisms.
Manage identities and privileges through appropriate Identity and Access Management (IAM).
Monitor, detect, and report any suspected or confirmed unauthorized access.
This provision bridges preventive, detective, and responsive controls, ensuring not only that access is restricted but also that breaches are observable and reportable.
It reflects three design principles:
Least privilege — only allow necessary access.
Defense in depth — layered authentication and authorization.
Accountability — detect and report unauthorized actions.
Organizational Actions
Establish an Access Control Framework covering both product-level and backend-level access.
Define Access Control Ownership: Typically split among Engineering (product enforcement), IT Security (IAM policies), and Product Security (monitoring/reporting).
Adopt Identity Management Standards: e.g., OAuth 2.0, OpenID Connect, SAML 2.0, FIDO2.
Define Access Classification: Identify what constitutes authorized vs. unauthorized access for each interface (API, admin console, firmware, etc.).
Develop a Reporting & Escalation Path: Define how unauthorized access attempts are logged, analyzed, and escalated.
Train Support & Response Teams to recognize and handle unauthorized access events.
Policy / Process Updates
Access Control Policy:
Mandate role-based or attribute-based access control (RBAC/ABAC).
Define authentication strength requirements (e.g., MFA for admin accounts).
Require session timeout and credential lifecycle management.
Identity & Credential Management Procedure:
Define identity issuance, revocation, and audit processes.
Enforce password-less or strong password policies.
Use hardware-rooted trust for device identity (TPM, secure elements).
Monitoring and Reporting Procedure:
Define what events are logged (failed login, privilege escalation, API misuse).
Establish alert thresholds and reporting timelines.
Document the process for notifying authorities or affected users if required under incident obligations (CRA Article 11 / NIS2).
Vendor and Integration Policy:
Ensure third-party integrations follow same authentication standards.
Require secure API keys and secret rotation.
Technical Implementations
Authentication Mechanisms:
Enforce strong authentication (password policy, MFA, FIDO2, certificates).
Implement secure password storage (salted hashing — Argon2, bcrypt, PBKDF2).
Use cryptographically validated tokens (JWT with short lifetimes).
Apply rate-limiting and lockout for brute-force prevention.
Authorization Controls:
Implement RBAC/ABAC with fine-grained permissions.
Ensure least-privilege access to resources and APIs.
Apply context-aware access (e.g., device ID, IP, geolocation).
Identity Management:
Integrate IAM systems (Keycloak, Okta, Azure AD, etc.).
Support federation via SSO and standard protocols (SAML 2.0, OIDC).
Manage lifecycle — provisioning, updates, de-provisioning.
Access Monitoring and Reporting:
Log all authentication and authorization events.
Detect anomalies (failed logins, unauthorized privilege escalation).
Generate alerts and forward to SIEM/SOC (e.g., Splunk, Elastic, QRadar).
Report attempted or successful unauthorized access as per CRA incident-reporting rules.
API and Interface Security:
Enforce authentication on all endpoints.
Validate tokens and scopes at every call.
Disable default or hardcoded credentials.
Protect management ports and debug interfaces (UART, JTAG) by design.
Data Protection:
Encrypt sensitive user data at rest and in transit.
Use secure session management and CSRF protection.
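Salted password hashing, as called for above, can be sketched with the standard library. The section names Argon2, bcrypt, and PBKDF2; scrypt is used here only because it ships with Python, and the cost parameters are illustrative (tune them against current OWASP guidance).

```python
import hashlib
import hmac
import os

# Sketch of salted password hashing with the stdlib's scrypt. Argon2 or
# bcrypt (via third-party packages) are equally valid choices; parameters
# below are illustrative, not a recommendation.

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)                       # unique salt per credential
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt,
                               n=2**14, r=8, p=1)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))
print(verify_password("wrong guess", salt, digest))
```

Pairing this with the rate-limiting and lockout controls listed above covers both the storage and the brute-force dimensions of credential protection.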
Documentation Requirements
Access Control Design Document:
Describe authentication, authorization, and identity flows.
IAM Policy and Procedures:
Record credential policies, lifecycle management, and federation setup.
Audit and Log Records:
Retain access logs and incident reports for audit (per Article 31).
Unauthorized Access Reporting Protocol:
Define incident classification, escalation, and external reporting steps.
Testing Evidence:
Security test reports showing credential enforcement, brute-force resistance, and authorization correctness.
Common Pitfalls and Readiness Gaps
Using default or hardcoded credentials.
Missing MFA for privileged accounts.
No centralized IAM; fragmented identity silos.
Logging without monitoring — no one reviews the logs.
Lack of defined thresholds for reporting unauthorized access.
Storing credentials insecurely in plaintext or config files.
APIs lacking proper authentication or scope validation.
Failing to revoke credentials of decommissioned devices or users.
Tools and Frameworks to Help
IAM & Authentication: Keycloak, Okta, Auth0, Azure AD B2C
MFA / FIDO2: YubiKey, WebAuthn, Duo
Access Control: OPA (Open Policy Agent), AWS IAM Policies
Logging & Monitoring: Splunk, ELK Stack, Wazuh, Falco
Protocols & Standards: OAuth 2.0, OIDC, SAML 2.0, ISO/IEC 27002 (§9), NIST SP 800-63
Secure Credential Storage: HashiCorp Vault, AWS Secrets Manager
Compliance & Audit: ENISA IAM Good Practices, OWASP ASVS 4.0 (Sections 2 & 4)
This requirement demands both protection and detection. Your product must not only prevent unauthorized access but also observe, record, and report it when it occurs.
True compliance means:
Secure authentication and identity management
Role-based access enforcement
Logging and monitoring of access attempts
Reporting path for unauthorized access events
Together, these form the digital gatekeeping layer that safeguards trust, data integrity, and regulatory assurance under the CRA.
This requirement addresses data confidentiality, one of the three core pillars of cybersecurity (Confidentiality, Integrity, Availability). It sets a “state-of-the-art” expectation — meaning encryption and data protection mechanisms must align with current cryptographic and security best practices, not just any implementation that “works.”
This requirement obliges manufacturers to safeguard all forms of data confidentiality — whether data is stored, transmitted, or processed.
In practical terms, it requires:
Encryption at rest and in transit using up-to-date, industry-accepted cryptography.
Minimization of data exposure — ensuring that only necessary data is collected, stored, or processed.
Protection throughout data lifecycle — including creation, storage, transmission, use, and deletion.
Secure key management — encryption is only as strong as how keys are protected.
This paragraph doesn’t only cover personal data (like GDPR); it also applies to operational, telemetry, and configuration data that could aid an attacker if exposed.
The term “state of the art” mandates continuous monitoring of cryptographic standards — outdated algorithms (like SHA-1, MD5, RC4) must be phased out as new guidance emerges.
Organizational Actions
Establish a Data Protection Strategy aligned with both CRA and GDPR (if personal data is involved).
Define data classification and sensitivity levels (e.g., public, internal, confidential, restricted).
Assign ownership for data security controls — typically under the Product Security & Privacy Engineering teams.
Create an Encryption Governance Board or integrate into an existing Security Architecture Board to review cryptographic choices periodically.
Ensure alignment between product encryption policies and enterprise key management systems.
Implement change management for crypto updates (e.g., deprecation of older protocols).
Policy / Process Updates
Data Protection Policy:
Define protection objectives (confidentiality, integrity, availability).
Mandate encryption for sensitive data both in storage and transit.
Specify approved algorithms and key lengths (e.g., AES-256, RSA-2048+, ECC-P256+).
Prohibit use of deprecated algorithms (e.g., MD5, DES, RC4).
Cryptographic Key Management Procedure:
Define secure key generation, rotation, and destruction processes.
Require use of HSMs (Hardware Security Modules) or cloud KMS (Key Management Services).
Enforce separation of duties — key owners ≠ application developers.
Secure Data Handling Process:
Document how data is collected, stored, transmitted, and deleted.
Mandate encryption of backups and logs.
Define secure file-sharing, API transmission, and inter-service communication.
Incident Handling Process Update:
Add classification and escalation criteria for data exposure incidents.
Technical Implementations
Encryption at Rest:
Encrypt all sensitive data in databases, file systems, and storage volumes.
Use transparent disk encryption (e.g., LUKS, BitLocker) and database-level encryption (e.g., TDE in SQL Server or Oracle).
Ensure encryption keys are not stored alongside encrypted data.
Apply strong key management practices (AWS KMS, HashiCorp Vault, Azure Key Vault).
Encryption in Transit:
Enforce TLS 1.3 (or at least 1.2) for all network communications.
Use HTTPS, SSH, or secure MQTT for device communications.
Disable insecure protocols (FTP, Telnet, HTTP).
Validate certificates and pin CA roots where applicable.
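In Python, for example, the standard-library ssl module can enforce this TLS floor on the client side. A minimal sketch (the function name is illustrative, not a required API):

```python
import ssl

def make_client_context() -> ssl.SSLContext:
    # Start from secure defaults: certificate validation and hostname checking on.
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    # Refuse anything older than TLS 1.2 (TLS 1.3 is negotiated where both ends support it).
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

ctx = make_client_context()
# Secure defaults from create_default_context remain in force:
assert ctx.check_hostname and ctx.verify_mode == ssl.CERT_REQUIRED
```

Certificate pinning would be layered on top of this, e.g., by loading only your own CA bundle via ctx.load_verify_locations instead of the system store.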
Data Processing Security:
Ensure in-memory data is protected using OS-level controls (e.g., protected heap, encrypted swap).
Use secure enclaves or Trusted Execution Environments (TEE) for sensitive operations.
Key Management:
Store keys in secure hardware (TPM, HSM, Secure Element).
Rotate keys periodically and upon suspected compromise.
Implement key versioning and revocation procedures.
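Key versioning can be illustrated with a simple keyed derivation: bumping a version number rotates every derived data key without touching stored master material. This is a standard-library sketch only (an HMAC-based construction in the spirit of HKDF-Expand); in production the master key would live in an HSM or KMS and a vetted KDF would be used:

```python
import hashlib
import hmac
import secrets

def derive_data_key(master_key: bytes, key_version: int, context: str) -> bytes:
    # Bind the derived key to a version and a purpose string, so rotation
    # (incrementing key_version) yields an unrelated key per context.
    info = f"v{key_version}:{context}".encode()
    return hmac.new(master_key, info, hashlib.sha256).digest()

master = secrets.token_bytes(32)  # illustrative; in production this lives in an HSM/KMS
k1 = derive_data_key(master, 1, "telemetry-db")
k2 = derive_data_key(master, 2, "telemetry-db")
assert k1 != k2 and len(k1) == 32  # rotation produces a distinct 256-bit key
```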
Additional Technical Means:
Use tokenization or pseudonymization for sensitive identifiers.
Apply data minimization — only collect and store what’s needed.
Secure logs and telemetry data (often overlooked but critical).
Use secure deletion techniques (NIST SP 800-88).
Documentation Requirements
Data Flow Diagram and Classification Map:
Document data types, flows, and where encryption is applied.
Cryptographic Architecture Document:
Describe algorithms, key lengths, key management hierarchy, and rotation policies.
Configuration and Testing Evidence:
Provide TLS configuration reports (e.g., SSL Labs), encryption audit results, or penetration test findings.
Exception and Waiver Records:
If certain data isn’t encrypted (for technical reasons), record the justification and compensating controls.
Policy References:
Cross-reference with Data Protection, Key Management, and Secure Communication policies.
Common Pitfalls and Readiness Gaps
Using outdated or non-compliant encryption (e.g., TLS 1.0, SHA-1).
Hardcoding encryption keys or secrets in code.
Storing sensitive data unencrypted in logs or caches.
Failing to rotate encryption keys periodically.
Not encrypting inter-service (microservice/API) communications.
Ignoring “data in use” (runtime memory, temporary files).
Missing encryption coverage for backups or telemetry.
Inconsistent implementation across product versions or regions.
Tools and Frameworks to Help
Encryption at Rest / Transit: OpenSSL, AWS KMS, Azure Key Vault, HashiCorp Vault, GPG
Data Classification: BigID, OneTrust, Collibra
Secure File Transfer: SFTP, HTTPS, MFT platforms
Key Management: HSM (Thales, Entrust), Cloud KMS
Compliance Standards: ISO/IEC 27001 & 27018, NIST SP 800-57, ENISA Cryptography Guidelines
Testing / Validation: SSL Labs, Nessus, OpenVAS, CIS Benchmarks
Secure Deletion: shred, SDelete, NIST SP 800-88-compliant tools
This clause demands that manufacturers treat data confidentiality as a design-level security attribute, not a later add-on.
Compliance requires:
Encryption at rest and in transit
Strong key management
Data minimization and protection by design
Continuous monitoring of cryptographic best practices
Manufacturers who implement encryption intelligently — and document it transparently — not only meet CRA requirements but also build lasting user trust and resilience against data breaches.
This paragraph complements the previous one on confidentiality, focusing on the integrity of both data and software elements (commands, programs, configurations).
It’s one of the most crucial — and often under-implemented — cybersecurity requirements, since loss of integrity directly affects trustworthiness and safe functioning of digital products.
This requirement ensures that data and software integrity are preserved end-to-end.
It’s not enough that data is confidential — it must also be accurate, consistent, and unaltered from unauthorized modification.
It covers:
Integrity of user and system data (personal data, logs, telemetry).
Integrity of commands, firmware, and code — ensuring software has not been tampered with.
Configuration integrity — preventing unauthorized changes to security settings or operational parameters.
Detection and reporting — the system must report corruptions or integrity violations (e.g., checksum mismatch, file tampering).
Organizational Actions
Define an Integrity Protection Strategy as part of the secure design lifecycle.
Integrate integrity verification controls into DevSecOps, covering code, configuration, and runtime environments.
Assign clear accountability — Product Security owns the policy, Engineering owns technical enforcement, and QA validates through integrity tests.
Establish a baseline of authorized states for code, configuration, and binaries.
Ensure alignment with Secure Development Lifecycle (SDL) and Vulnerability Management processes.
Perform integrity testing as part of release validation and runtime monitoring.
Policy / Process Updates
Data Integrity Policy:
Define protection requirements for all critical data types and software components.
Require digital signatures or hashes for code and data integrity verification.
Mandate integrity checks on software updates, configuration files, and sensitive data.
Secure Configuration Management Procedure:
Define approved configuration baselines and how they’re protected.
Require signed configuration files or checksums for verification.
Mandate change tracking and version control for configuration changes.
Secure Software Supply Chain Policy:
Require integrity validation of open-source and third-party components (via SBOM).
Verify authenticity of libraries, dependencies, and build artifacts before integration.
Incident Response Procedure Update:
Include handling steps for detected integrity violations (e.g., file corruption, unauthorized change).
Define escalation paths for potential tampering or data corruption events.
Technical Implementations
Code and Binary Integrity:
Use digital signatures (e.g., RSA, ECC) to sign firmware, binaries, and installers.
Verify signatures at startup or before execution.
Implement secure boot to ensure firmware authenticity and integrity.
Utilize code signing certificates and enforce signature verification during updates.
Data Integrity:
Apply hashing mechanisms (e.g., SHA-256 or better) for data integrity validation.
Use HMAC for combined integrity and authenticity checks.
Apply checksums or CRC validation for file and data transfers.
Use database integrity constraints and auditing features.
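The HMAC approach above takes only a few lines of standard-library Python. A sketch (the key and payload are illustrative; in practice the key would come from your key management system):

```python
import hashlib
import hmac

def tag(key: bytes, data: bytes) -> str:
    # HMAC-SHA256 provides integrity and authenticity in one primitive.
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify(key: bytes, data: bytes, expected: str) -> bool:
    # compare_digest avoids timing side channels when checking the tag.
    return hmac.compare_digest(tag(key, data), expected)

key = b"shared-secret-from-kms"      # illustrative only
config = b'{"log_level": "info"}'
t = tag(key, config)
assert verify(key, config, t)
assert not verify(key, config + b" ", t)  # any modification is detected
```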
Configuration Integrity:
Store configurations in read-only or access-controlled directories.
Implement change detection systems (e.g., Tripwire, OSSEC) to identify unauthorized changes.
Enforce role-based access control (RBAC) to limit configuration modification rights.
Transmission Integrity:
Use TLS 1.2+ (with mutual authentication where appropriate) to ensure data integrity during transmission.
Enable message authentication codes (MAC) or digital signatures for message validation.
Apply sequence numbering or nonce-based replay protection for critical commands.
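Sequence-based replay protection reduces to one invariant: a receiver only accepts commands whose sequence number is strictly increasing. A minimal sketch (class and field names are invented for the example; real protocols combine this with a MAC so the sequence number itself cannot be forged):

```python
class ReplayGuard:
    """Reject commands whose sequence number is not strictly increasing."""

    def __init__(self) -> None:
        self._last_seq = -1

    def accept(self, seq: int) -> bool:
        if seq <= self._last_seq:
            return False  # replayed or out-of-order command is dropped
        self._last_seq = seq
        return True

guard = ReplayGuard()
assert guard.accept(1)
assert guard.accept(2)
assert not guard.accept(2)  # replay of an already-seen command is rejected
```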
Reporting & Monitoring:
Log all integrity failures or corruption events.
Generate alerts for unauthorized configuration or data changes.
Integrate integrity alerts into SIEM or centralized monitoring systems.
Documentation Requirements
Integrity Control Matrix:
Map all components (data, software, configuration) to integrity protection methods used.
Code Signing and Verification Records:
Include evidence of key management, signature creation, and verification success.
Change Logs & Baseline Records:
Maintain traceability between approved configurations and deployed ones.
Incident Reports:
Record instances of detected corruption, investigation outcomes, and mitigation steps.
SBOM (Software Bill of Materials):
Maintain signed SBOMs to verify supply chain integrity and dependency authenticity.
Common Pitfalls and Readiness Gaps
Unsigned firmware or executables allowing tampering.
Using weak or no hashing (e.g., MD5).
No secure boot or signature verification during runtime.
Configuration files left writable by unauthorized users.
Integrity checking implemented but no alerting/reporting on failures.
Missing chain of trust between build, deployment, and runtime.
No validation of third-party libraries (supply chain risk).
Tools and Frameworks to Help
Code Signing / Verification: Microsoft SignTool, OpenSSL, Cosign, Sigstore, GPG
Integrity Monitoring: Tripwire, OSSEC, Wazuh, Falco
Secure Boot / Firmware Validation: UEFI Secure Boot, TPM, Verified Boot (Android/Linux)
Hashing / HMAC: OpenSSL, libsodium, Python hashlib
Configuration Management: Ansible, Puppet, Chef (with integrity enforcement)
Supply Chain Security: SLSA Framework, in-toto, SPDX, CycloneDX
Incident Monitoring: SIEMs (Splunk, QRadar, ELK), Integrity check dashboards
This clause establishes integrity assurance as a mandatory design and operational feature.
Manufacturers must ensure:
No unauthorized changes to data, code, or configurations.
Integrity validation using cryptographic methods.
Detection and reporting of any corruption or manipulation.
Continuous protection across build, deployment, and runtime stages.
By embedding integrity controls into both the development process and runtime protection, manufacturers ensure product trustworthiness and compliance with the CRA.
This paragraph introduces data minimization, a foundational principle in both privacy (GDPR) and security-by-design under the CRA.
It ensures that a product’s cybersecurity architecture aligns with the “least data, least exposure” philosophy — reducing unnecessary data collection, storage, or processing that could become an attack vector.
This requirement mandates that products only collect and process the minimum amount of data necessary to perform their functions.
It applies to all data — not just personal data. This means:
No excessive data collection “just in case.”
No unnecessary logging of user actions or identifiers.
No retention of data beyond its useful purpose.
Secure handling and deletion when data is no longer needed.
The goal is risk reduction through limitation — if you don’t store or process it, it can’t be breached or misused.
This aligns closely with GDPR Article 5(1)(c) and supports the “privacy and security by design and by default” principles that the CRA now extends to all digital products.
Organizational Actions
Establish a Data Minimization Framework that defines data necessity, relevance, and retention limits for each product.
Form a Data Review Committee including representatives from product, security, legal/privacy, and engineering to review data collection plans.
Integrate data minimization into the product development lifecycle, ensuring it’s addressed during design and risk assessment phases.
Maintain a data inventory that identifies what data each component collects, processes, or transmits — including non-personal telemetry or operational data.
Ensure alignment with privacy teams if GDPR or other data protection laws also apply.
Policy / Process Updates
Update Secure Development Lifecycle (SDL) and Data Handling Policies to explicitly include data minimization as a control requirement.
Adopt a “Purpose Justification” Process:
Require engineers or product owners to document the purpose and necessity for every data type processed.
Define retention schedules for all collected data and enforce automatic deletion or anonymization after expiry.
Update the Risk Assessment Template to include questions such as:
Is this data essential for product function?
Can the same purpose be achieved with less data?
How long is the data kept, and why?
Establish periodic reviews to remove obsolete or unnecessary data from logs, databases, or backups.
Ensure third-party integrations also comply with data minimization principles (e.g., SDKs, analytics tools).
Technical Implementations
Data Flow Mapping:
Identify where and how data enters, is processed, and leaves the product.
Remove or mask unnecessary fields.
Data Filtering and Masking:
Use field-level controls to prevent over-collection or exposure (e.g., truncate IPs, anonymize IDs).
Implement privacy-preserving techniques like pseudonymization or tokenization.
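Both techniques can be sketched with the standard library. The pepper value and the /24 truncation policy below are illustrative assumptions, and the IP helper handles IPv4 only:

```python
import hashlib
import hmac
import ipaddress

PEPPER = b"rotate-me-keep-out-of-source"  # hypothetical secret; store outside source control

def pseudonymize(user_id: str) -> str:
    # Keyed hash: stable enough to join records, not reversible without the pepper.
    return hmac.new(PEPPER, user_id.encode(), hashlib.sha256).hexdigest()[:16]

def truncate_ip(ip: str) -> str:
    # Drop the host portion so logs retain only coarse network locality.
    net = ipaddress.ip_network(f"{ip}/24", strict=False)
    return str(net.network_address)

assert truncate_ip("203.0.113.77") == "203.0.113.0"
assert pseudonymize("alice") != "alice"
```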
Telemetry and Logging Controls:
Collect only necessary diagnostic data — avoid sensitive or personally identifiable information in logs.
Allow configurable logging levels.
Storage Minimization:
Automatically delete or archive data after its useful lifecycle.
Use retention tags or TTL (time-to-live) settings in databases and object stores.
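Where the datastore offers no native TTL, the same effect can be approximated in application code. A minimal retention-purge sketch (the 30-day window and record layout are assumptions for illustration; real systems would prefer database-level lifecycle features):

```python
import time

RETENTION_SECONDS = 30 * 24 * 3600  # hypothetical 30-day retention policy

def purge_expired(records, now=None):
    # Keep only records still inside the retention window; the caller
    # persists the filtered result (and securely deletes the rest).
    now = time.time() if now is None else now
    return [r for r in records if now - r["created_at"] < RETENTION_SECONDS]

now = time.time()
records = [
    {"id": 1, "created_at": now - 10},                     # fresh
    {"id": 2, "created_at": now - RETENTION_SECONDS - 1},  # past retention
]
kept = purge_expired(records, now)
assert [r["id"] for r in kept] == [1]
```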
Access Controls:
Restrict access to sensitive data based on need-to-know.
Use data classification labels to enforce minimization policies.
Data Anonymization / Aggregation:
Aggregate metrics where individual data is not required.
Use privacy-preserving analytics (e.g., differential privacy) for usage insights.
Edge Processing:
Where possible, process data locally (e.g., on device) instead of transmitting it to servers.
Documentation Requirements
Data Inventory / Register:
List all data elements the product handles, purpose, storage duration, and access rights.
Purpose Justification Log:
Record why each data type is necessary and how minimization has been achieved.
Retention Policy Records:
Document deletion schedules and evidence of data purging or anonymization.
Design and Risk Assessment Records:
Include data minimization controls as part of the risk treatment documentation (referenced in Article 13(3)).
User Documentation:
Include transparency notes about what data the product collects and for what purpose.
Change Logs:
Track updates to data collection features or telemetry scope.
Common Pitfalls and Readiness Gaps
Collecting excessive diagnostic or telemetry data “for future analytics.”
Storing raw logs indefinitely without retention limits.
Using third-party SDKs that collect more data than necessary.
Failing to anonymize or aggregate operational metrics.
No regular review process to retire outdated data or reduce scope.
Treating minimization as a privacy-only issue rather than part of product security.
Tools and Frameworks to Help
Data Mapping / Discovery: OneTrust, BigID, Collibra, OpenMetadata
Data Minimization Frameworks: ISO/IEC 29100 (Privacy Framework), NIST Privacy Framework
Data Masking / Tokenization: HashiCorp Vault, Protegrity, Tonic.ai
Telemetry Control: OpenTelemetry with sampling or filtering rules
Data Retention Automation: AWS S3 Lifecycle, MongoDB TTL Indexes, SQL Scheduled Purge Jobs
Anonymization / Aggregation: ARX, Aircloak, Differential Privacy libraries
This clause ensures security by reduction — the less data your product processes or stores, the smaller your attack surface and compliance risk.
Manufacturers must:
Collect only what’s essential for product function.
Define clear data purposes and retention periods.
Implement technical safeguards to enforce minimization.
Maintain transparent documentation proving compliance.
By embedding data minimization into design, products inherently become safer, more privacy-respecting, and CRA-compliant.
This requirement extends cybersecurity from confidentiality and integrity into availability and resilience — a key part of ensuring operational continuity even under attack or failure conditions.
This paragraph mandates that manufacturers design products to remain functional and recoverable, even when faced with cyberattacks, particularly Denial-of-Service (DoS) or resource exhaustion attacks.
It introduces two major expectations:
Availability Protection:
Essential functions (e.g., authentication, safety controls, data transmission, emergency response) must remain operational even under adverse conditions.
Resilience and Recovery:
The product must be capable of detecting, resisting, and recovering from disruptions caused by both malicious and non-malicious incidents.
This shifts compliance from prevention-only to sustainability and continuity — aligning CRA obligations with operational resilience principles found in standards like ISO 22301 and IEC 62443.
Organizational Actions
Define “essential and basic functions” for each product — e.g., system control, safety functions, communication modules.
Integrate availability and resilience objectives into product design criteria and risk management.
Establish an incident recovery policy that defines how systems must behave during and after disruptions.
Ensure engineering and operations teams conduct resilience testing and post-incident recovery simulations.
Appoint an availability/resilience owner (within product reliability, operations, or site reliability engineering teams).
Policy / Process Updates
Update Secure Development Lifecycle (SDL) policies to include availability and continuity requirements as design criteria.
Incorporate resilience testing (e.g., stress, failover, chaos testing) into QA and pre-market validation phases.
Define DoS mitigation strategies in product and infrastructure design standards.
Develop a Post-Incident Recovery Process:
Define RTO (Recovery Time Objective) and RPO (Recovery Point Objective) targets.
Establish escalation and response protocols for service degradation.
Ensure patch management and redundancy processes support continuous operation after partial system failures.
Include availability objectives in supplier and component requirements, especially for cloud or network-dependent elements.
Technical Implementations
Availability and Redundancy Controls:
Use clustering, load balancing, or failover mechanisms for critical services.
Implement redundancy in key system components (e.g., dual firmware banks, backup communication channels).
Denial-of-Service Protection:
Apply rate limiting, connection throttling, and session quotas.
Use CAPTCHA or challenge-response mechanisms for authentication endpoints.
Deploy network-level protections such as WAF, CDN, or DDoS mitigation services.
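Rate limiting is commonly implemented as a token bucket: requests spend tokens, tokens refill at a fixed rate, and bursts are capped by the bucket's capacity. A minimal single-client sketch (parameter values are illustrative):

```python
import time

class TokenBucket:
    """Allow at most `rate` requests/second on average, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float) -> None:
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, never beyond capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # request should be throttled or rejected

bucket = TokenBucket(rate=10, capacity=5)
results = [bucket.allow() for _ in range(6)]
assert results == [True] * 5 + [False]  # burst of 5 passes, the 6th is throttled
```

In a real service one bucket would be kept per client identity (IP, API key, device ID) so that one noisy client cannot starve the rest.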
Fault Tolerance and Recovery:
Implement watchdog timers and auto-restart logic.
Store configuration and operational data in resilient formats (e.g., replicated databases).
Design systems for graceful degradation — partial functionality maintained during stress.
Monitoring and Detection:
Enable metrics and logging for resource utilization (CPU, memory, network I/O).
Use anomaly detection systems to identify early signs of DoS or performance degradation.
Integrate alerting into incident response platforms (e.g., PagerDuty, Opsgenie).
Resilience Testing:
Conduct load and stress tests under simulated attack conditions.
Employ chaos engineering tools (e.g., Gremlin, LitmusChaos) to test failure response.
Backup and Restore Mechanisms:
Regularly back up critical configuration and user data.
Verify restoration procedures during validation testing.
Documentation Requirements
Availability and Resilience Design Specification:
Define essential functions and describe protection measures in place.
DoS Mitigation Plan:
Document implemented controls (technical and architectural) and expected response behavior.
Incident Response and Recovery Records:
Maintain logs of resilience tests, system recoveries, and related metrics.
Risk Assessment Linkage:
Cross-reference availability risks in the cybersecurity risk assessment (Article 13(3)).
Post-Incident Analysis Reports:
Capture root cause, lessons learned, and remediation actions from real or simulated incidents.
Supplier Documentation:
Evidence that third-party components (e.g., cloud providers, libraries) meet agreed resilience SLAs.
Common Pitfalls and Readiness Gaps
Focusing entirely on confidentiality/integrity while neglecting availability in security design.
Lack of redundancy or single points of failure (e.g., single API endpoint or DB node).
No structured DoS mitigation testing before release.
Assuming resilience is an operational concern, not a product design responsibility.
Inadequate recovery validation — backups exist but restoration untested.
Failure to define or test RTO/RPO thresholds.
Tools and Frameworks to Help
Resilience / Chaos Testing: Gremlin, Chaos Monkey, LitmusChaos
DoS Protection: Cloudflare, AWS Shield, Akamai Kona, Fastly
Monitoring / Telemetry: Prometheus, Grafana, ELK Stack, Datadog
Load Testing: k6, JMeter, Locust
Backup / Recovery: Velero, AWS Backup, Azure Site Recovery
Standards Reference: ISO/IEC 27001 A.17 (Business Continuity), IEC 62443-3-3 (Availability), NIST SP 800-160 Vol 2 (Resiliency Engineering)
This requirement reinforces that resilience is part of security. Manufacturers must ensure products not only resist attacks but survive and recover from them.
To comply:
Identify essential functions and protect them against service degradation.
Build redundancy and failover into the design.
Implement and document DoS mitigation measures.
Regularly test recovery and resilience under simulated failures.
By embedding resilience principles, manufacturers ensure sustained trust and reliability, even in the face of real-world disruptions.
Where paragraph 8 was about protecting your own product’s availability, paragraph 9 extends the responsibility outward — ensuring your product does not harm the availability of others.
This requirement focuses on ecosystem safety — ensuring that your product does not degrade or disrupt the availability of other connected systems or networks.
It recognizes that many modern devices (IoT, software platforms, smart infrastructure) share environments and resources. A compromised or poorly designed product can:
Cause denial-of-service (DoS) to other devices (e.g., network flooding, port scanning, misconfigured broadcast traffic).
Consume excessive resources, disrupting bandwidth or CPU cycles in shared networks.
Propagate instability, like unhandled broadcast storms or malformed data packets.
Introduce cascading failures, particularly in industrial, healthcare, or smart-home ecosystems.
Therefore, manufacturers must design products that coexist safely, even under stress or failure.
Organizational Actions
Define “ecosystem impact” within product risk assessments — what other systems or networks could be affected by your product’s failure or misbehavior.
Assign cross-system dependency owners (e.g., network engineers, integration architects) to review interoperability impacts.
Establish design review gates focused on performance, network behavior, and interoperability.
Include third-party environment testing in your validation plan — especially for products deployed in multi-vendor environments.
Policy / Process Updates
Update Product Security and Quality Policies to include “no negative network impact” as a design and release requirement.
Require interoperability and coexistence testing in the validation lifecycle.
Define acceptable thresholds for bandwidth use, connection rates, and retry behavior.
Mandate network behavior documentation (expected ports, protocols, traffic patterns) to be reviewed during risk assessment and certification.
Include network coexistence criteria in supplier and OEM component requirements.
Technical Implementations
Network Resource Control:
Implement rate limiting for outbound/inbound traffic.
Avoid unnecessary network polling or broadcast traffic.
Use exponential backoff or jitter strategies for retries to prevent traffic spikes.
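The "full jitter" variant of exponential backoff picks a random delay between zero and an exponentially growing (but capped) ceiling, which spreads retries out instead of synchronizing them. A sketch with illustrative base and cap values:

```python
import random

def backoff_delay(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
    """Full-jitter backoff: random delay in [0, min(cap, base * 2**attempt)] seconds."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))

# Successive retries may wait longer, but never beyond the cap and never in lockstep.
delays = [backoff_delay(a) for a in range(5)]
assert all(0 <= d <= 30.0 for d in delays)
```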
Isolation and Sandboxing:
Isolate potentially disruptive functions or processes.
Enforce resource quotas (CPU, memory, bandwidth) per process or device component.
Network Behavior Hardening:
Comply with relevant RFCs and protocol standards to prevent malformed packet generation.
Validate and sanitize network data inputs/outputs.
Avoid insecure discovery or peer-to-peer features unless essential.
Resilience to External Failures:
Implement timeouts, error handling, and reconnection logic to prevent feedback loops that flood networks.
Ensure failure of one product instance does not propagate instability across connected devices.
Testing and Validation:
Conduct network stress testing in shared environments (simulate 100+ connected devices).
Run coexistence tests with known compatible and incompatible devices.
Perform penetration testing to ensure no unintended open services or network flooding vectors exist.
Telemetry and Monitoring:
Enable monitoring of network and resource utilization.
Log unusual network patterns that might indicate unintentional interference.
Documentation Requirements
Network Behavior Documentation:
List expected network protocols, ports, and frequency of communication.
Document known dependencies or potential interaction risks.
Ecosystem Risk Assessment:
Include analysis of potential impact on other devices or systems in your cybersecurity risk documentation (Article 13(3)).
Interoperability Test Reports:
Record coexistence test results and network impact metrics.
Incident Response Procedures:
Define how to handle issues caused by unintended external impacts (e.g., throttling, firmware updates).
Supplier & Component Documentation:
Maintain assurance evidence that third-party modules or libraries conform to network standards and don’t introduce excessive load or instability.
Common Pitfalls and Readiness Gaps
Overlooking outbound or broadcast traffic behavior in embedded or IoT devices.
Neglecting to validate multi-device coexistence — products perform fine in isolation but fail in scale.
Lack of documented communication limits (e.g., no rate limiting in telemetry reporting).
Releasing updates that inadvertently increase network load or instability.
Using outdated or insecure network stacks that cause interoperability problems.
Tools and Frameworks to Help
Network Simulation / Testing: iPerf, Wireshark, Scapy, Ostinato, tc/netem (Linux)
IoT Coexistence Testing: Keysight IoT Device Test Suite, Spirent TestCenter
Protocol Compliance Testing: RFC Validator, fuzzing tools like Boofuzz
Resource Monitoring: Prometheus + Grafana, Sysdig, Netdata
Network Standards Reference: ISO/IEC 27033 (Network Security), ENISA IoT Security Guidelines, IEC 62443 (Industrial Automation Security)
Key Takeaway
This requirement ensures collective security — your product must be a good network citizen.
It must not overload, degrade, or destabilize other systems in its environment.
To comply:
Define and test expected network behavior.
Enforce resource and traffic controls.
Validate interoperability under realistic multi-device conditions.
Maintain transparent documentation of dependencies and behaviors.
By designing with ecosystem responsibility, manufacturers enhance trust, reduce liability, and contribute to a more stable and secure connected landscape.
This requirement embodies a core principle of secure design and is tightly tied to the concept of minimizing exposure to threats.
This requirement demands that manufacturers intentionally reduce the number and exposure of potential entry points (attack surfaces) in their products.
The “attack surface” refers to all ways an adversary could interact with or exploit the system — including physical ports, APIs, network endpoints, user interfaces, or software services.
A broad attack surface increases risk because:
More interfaces = more opportunities for exploitation.
Unnecessary services, open ports, or debug interfaces can become high-value targets.
Poorly designed APIs or unprotected configuration endpoints often lead to remote code execution or privilege escalation.
The goal is to apply “attack surface minimization” principles throughout the product lifecycle — from design through deployment.
In short:
→ Reduce exposure.
→ Control what’s left.
→ Monitor it continuously.
Organizational Actions
Embed secure-by-design principles into product development policy — explicitly requiring attack surface minimization.
Define and document “attack surface review” as a mandatory design review gate.
Assign a Product Security Lead or “Security Champion” for every product line to oversee exposure assessment.
Integrate threat modeling into early design to identify and minimize unnecessary entry points.
Maintain an inventory of all interfaces and components exposed to users, systems, or networks.
Policy / Process Updates
Update Secure Development Lifecycle (SDL) to include:
Threat modeling during design.
Security architecture review before release.
Static/dynamic code analysis for exposed interfaces.
Require justification and documentation for all exposed ports, APIs, or interfaces.
Implement change control — any new or modified interface must go through security review.
Mandate penetration testing for externally exposed components.
Establish configuration hardening baselines for default deployments.
Technical Implementations
Network & Interface Hardening
Disable all unnecessary services, ports, and debug interfaces before release.
Enforce least privilege for communications (only required ports/protocols allowed).
Use API gateways or authentication wrappers for exposed APIs.
Implement firewall or access control lists to restrict inbound/outbound traffic.
Software Design
Apply modular architecture to isolate critical functions.
Remove or disable default credentials and test endpoints.
Use input validation and sanitization across all user and system interfaces.
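Input validation is most robust as an allow-list: accept only the expected shape and reject everything else, rather than trying to enumerate bad inputs. A sketch (the device-ID format is a hypothetical example):

```python
import re

# Hypothetical allow-list format: 1-32 chars of letters, digits, underscore, hyphen.
DEVICE_ID = re.compile(r"[A-Za-z0-9_-]{1,32}")

def validate_device_id(raw: str) -> str:
    # fullmatch rejects anything outside the allowed alphabet or length.
    if not DEVICE_ID.fullmatch(raw):
        raise ValueError("invalid device id")
    return raw

assert validate_device_id("sensor-042") == "sensor-042"

rejected = False
try:
    validate_device_id("../../etc/passwd")  # path-traversal-style input is refused
except ValueError:
    rejected = True
assert rejected
```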
System Hardening
Apply secure boot and firmware signing to prevent unauthorized code execution.
Limit local and remote management interfaces.
Enforce encryption and mutual authentication for remote access.
Continuous Monitoring
Maintain ongoing scanning for exposed services and open ports.
Integrate with vulnerability management systems for continuous evaluation.
Use attack surface management tools to track exposure across product versions.
Physical & Embedded Systems
Disable JTAG, UART, or debug pins in production.
Enforce tamper protections and restrict firmware extraction.
Documentation Requirements
Attack Surface Inventory:
Document all interfaces (network, physical, logical) and their intended use.
Include justification for exposure and protection controls.
Threat Model Documentation:
Capture attack paths, mitigations, and residual risks.
Penetration Test / Security Review Reports:
Include findings and remediation actions for exposed interfaces.
Configuration Guides:
Provide customers with secure configuration instructions that minimize exposure.
Release Notes:
Summarize any changes to the attack surface (added/removed interfaces).
Common Pitfalls and Readiness Gaps
Leaving unused ports, APIs, or debug services active in production builds.
No centralized documentation of exposed interfaces.
Adding new features that increase exposure without security review.
Insufficient isolation between modules or services.
Neglecting third-party components that open new attack surfaces (e.g., web frameworks, SDKs).
Tools and Frameworks to Help
Attack Surface Discovery: Shodan, Nmap, Nessus, Qualys ASM, Microsoft Defender External Attack Surface Management
Threat Modeling: OWASP Threat Dragon, Microsoft Threat Modeling Tool, IriusRisk
Code and API Security: Semgrep, SonarQube, OWASP ZAP, Burp Suite
Continuous Monitoring: Censys, Intrigue.io, Pentera, Detectify
Framework References: OWASP ASVS (V1 & V2), NIST SP 800-160, ISO/IEC 27034, ENISA Secure Product Development Guidelines
This requirement enforces secure design discipline.
It ensures the manufacturer actively identifies, minimizes, and manages all external and internal interfaces that could be exploited.
To comply:
Document and justify every exposed interface.
Remove what’s unnecessary.
Protect what remains with layered controls.
Continuously monitor exposure throughout the lifecycle.
By controlling the attack surface, you dramatically reduce exploitable vulnerabilities and improve resilience against both opportunistic and targeted attacks.
This requirement focuses on limiting the damage when a security breach occurs, not just on preventing one: resilience through exploitation mitigation.
This clause requires manufacturers to design their products in a way that limits the damage and scope of compromise if an attacker manages to exploit a vulnerability.
While prevention remains the first line of defense, resilience and containment are equally critical. The regulation expects that:
Exploitation of a vulnerability should not result in total system compromise.
Built-in safeguards (like sandboxing, privilege separation, and memory protection) should contain the attack.
Security design should anticipate failures and provide graceful degradation rather than catastrophic collapse.
This is the core of “defense through mitigation” — assuming breaches can occur and ensuring the product minimizes their impact.
Organizational Actions
Establish a resilience-by-design strategy in the product security framework.
Define a Security Architecture Review process focusing on containment and mitigation design.
Assign security champions in engineering to verify mitigation features (e.g., sandboxing, ASLR, memory safety).
Include incident simulation exercises for engineering teams to assess system response to exploitation.
Partner with internal or external red teams to evaluate exploit resistance in realistic attack scenarios.
Policy / Process Updates
Update Secure Development Lifecycle (SDL) and Secure Coding Guidelines to:
Mandate exploitation mitigations (e.g., stack protections, privilege isolation, DEP, ASLR).
Require documentation of containment mechanisms for each critical function.
Integrate security architecture reviews into the design stage with explicit checks for:
Privilege separation between services.
Memory safety and runtime protections.
Fail-safe defaults and recovery logic.
Include incident impact assessment as part of the threat modeling process.
Define post-exploitation mitigation validation as a part of penetration testing.
Establish a vulnerability impact scoring process that considers containment controls.
Technical Implementations
At the Operating System / Platform Level
Enable system-level hardening features such as:
ASLR (Address Space Layout Randomization)
DEP/NX (Data Execution Prevention / No eXecute)
Stack Canaries / Control-Flow Integrity (CFI)
Memory-safe languages for new modules (e.g., Rust, Go).
Use sandboxing or containerization to isolate processes.
Enforce least privilege execution — services and processes should run only with necessary permissions.
Implement secure boot and firmware validation to prevent unauthorized code from persisting.
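Least-privilege execution is usually enforced at service start-up. A minimal sketch of an irreversible privilege drop, assuming a POSIX system and the conventional unprivileged `nobody` account:

```python
import os
import pwd

def drop_privileges(username: str = "nobody") -> None:
    """If running as root, permanently drop to an unprivileged account.

    Order matters: clear supplementary groups first, then set the gid,
    then the uid; once the uid is dropped the process can no longer
    change its groups.
    """
    if os.getuid() != 0:
        return  # already unprivileged; nothing to do
    pw = pwd.getpwnam(username)
    os.setgroups([])        # drop supplementary group memberships
    os.setgid(pw.pw_gid)
    os.setuid(pw.pw_uid)
    # Sanity check: the drop must be irreversible.
    if os.getuid() == 0:
        raise RuntimeError("failed to drop root privileges")

def running_as_root() -> bool:
    return os.getuid() == 0
```

A service would call `drop_privileges()` immediately after acquiring any root-only resources (e.g., binding a low port), so that the bulk of its code never runs with elevated rights.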
At the Application Level
Apply input validation and error handling to prevent code injection and buffer overflows.
Use application firewalls or RASP (Runtime Application Self-Protection) to detect and block exploit attempts.
Implement privilege separation between user roles and components.
Design fault-tolerant behavior — services continue safely even after one module is compromised.
Incorporate automatic recovery and graceful failover mechanisms to restore essential functions.
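Allowlist input validation is the simplest of these controls to illustrate. A sketch, using a hypothetical identifier format for a device-management API:

```python
import re

# Hypothetical allowlist pattern for a device-management API identifier.
DEVICE_ID = re.compile(r"^[A-Za-z0-9_-]{1,32}$")

def validate_device_id(raw: str) -> str:
    """Reject anything outside the documented identifier format.

    Allowlist validation fails closed: unknown input is an error and is
    never passed through to shell commands, queries, or file paths.
    """
    if not DEVICE_ID.fullmatch(raw):
        raise ValueError(f"invalid device id: {raw!r}")
    return raw
```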
At the Network Level
Enforce segmentation to prevent lateral movement between components.
Deploy micro-segmentation for containerized or cloud-native architectures.
Apply rate limiting and request throttling to mitigate DoS from partial compromise.
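Rate limiting is commonly implemented as a token bucket. A minimal in-process sketch (per-client state, locking, and distributed enforcement are deliberately left out):

```python
import time

class TokenBucket:
    """Allow `rate` requests per second, in bursts of up to `capacity`,
    then throttle until tokens refill."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

In practice a gateway would keep one bucket per client or per API key and return HTTP 429 when `allow()` is false.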
Post-Exploitation Monitoring
Integrate with endpoint detection and response (EDR) systems or telemetry collection.
Ensure logging captures privilege escalation attempts or memory corruption events.
Automate alerting for abnormal process behavior (e.g., process injection, suspicious syscalls).
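Alerting on abnormal process behavior can be as simple as thresholding event counts in collected telemetry. A sketch, with a hypothetical event shape and an illustrative threshold:

```python
from collections import Counter

# Hypothetical rule: more than this many privilege-escalation attempts
# from a single process in one telemetry batch triggers an alert.
ESCALATION_THRESHOLD = 3

def find_suspicious_processes(events: list[dict]) -> set[str]:
    """Each event is assumed to look like
    {"pid": "1234", "type": "priv_escalation_attempt"}."""
    counts = Counter(
        e["pid"] for e in events if e["type"] == "priv_escalation_attempt"
    )
    return {pid for pid, n in counts.items() if n > ESCALATION_THRESHOLD}
```

Real EDR systems apply far richer detection logic, but the principle of turning raw telemetry into actionable alerts is the same.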
Documentation Requirements
Architecture Diagrams:
Show layers of defense, isolation zones, and containment mechanisms.
Threat Model:
Document identified attack paths and mitigation strategies for each.
Security Test Reports:
Include results from fuzzing, penetration testing, and exploit simulation.
Design Justification Documents:
Explain how resilience mechanisms (e.g., sandboxing, ASLR) reduce incident impact.
Incident Simulation Logs:
Provide evidence that the system limits propagation during controlled exploit scenarios.
Common Pitfalls and Readiness Gaps
Over-reliance on detection, with no containment measures.
Failure to test exploit mitigation effectiveness in real-world attack simulations.
Running all services under administrative privileges.
Poor memory management in legacy code leading to unmitigated buffer overflows.
Lack of documentation on built-in resilience measures.
Tools and Frameworks to Help
Exploit Resistance Testing: Metasploit, Immunity CANVAS, Core Impact, Syzkaller
Memory Safety / Code Analysis: AddressSanitizer (ASan), Coverity, CodeQL, Semgrep
Runtime Protection: AppArmor, SELinux, RASP tools, OS-level sandboxing
Container / Isolation Testing: Docker Bench for Security, OpenSCAP, Kube-bench
Simulation / Validation: Chaos Monkey, AttackIQ, MITRE ATT&CK Adversary Emulation
Framework References: NIST SP 800-160 (System Security Engineering), OWASP SAMM, ISO/IEC 27034 (App Security)
This clause shifts security thinking from pure prevention to resilience engineering. Even if exploitation occurs, the damage must be contained, observable, and recoverable.
To comply:
Integrate exploit mitigation mechanisms across all system layers.
Design for isolation, recovery, and fail-safe operations.
Validate containment with testing and simulation.
Document the system’s resilience measures for regulators and auditors.
By doing this, manufacturers demonstrate that their products can withstand real-world attacks — not just avoid them.
The focus is on security observability, auditability, and accountability. It ensures that products with digital elements have built-in mechanisms to record and monitor security-relevant activities — but still respect user privacy and allow opt-out.
This clause establishes that products must have logging, monitoring, and auditing capabilities that capture security-relevant activities — such as:
Access attempts (successful and failed)
Data modifications
Service configurations and function changes
These logs are essential for incident detection, forensic analysis, and regulatory reporting.
However, since monitoring may involve personal data, the manufacturer must ensure:
Transparency and user control, offering an opt-out mechanism (unless legally required for critical functionality or compliance).
Secure log handling, ensuring integrity, confidentiality, and availability of audit data.
In essence, this paragraph ensures traceability of security events — one of the cornerstones of cyber resilience.
Organizational Actions
Define Security Logging and Monitoring Policy under the broader Product Security Governance framework.
Assign ownership for log configuration, retention, and analysis (typically Security Operations or Product Security Engineering).
Incorporate auditability requirements into design and security architecture reviews.
Conduct periodic log review drills to ensure events are actionable and not just collected.
Engage the Data Protection Officer (DPO) to validate compliance of opt-out mechanisms with privacy laws (e.g., GDPR).
Policy / Process Updates
Update Secure Development Lifecycle (SDL) and Operational Security Policy to require:
Logging of all authentication events, privilege escalations, and configuration changes.
Timestamp synchronization (e.g., NTP) across components.
Secure retention and restricted access to logs.
Define Log Retention and Review Schedule — e.g., keep logs for 90 days active, 1 year archived (adjust per product type).
Include Log Tampering Prevention measures:
Cryptographic signing or hash-chaining of logs.
Use of write-once storage for audit trails.
Document Opt-out Behavior:
What gets disabled when users opt out.
Residual mandatory logs (e.g., for core security or legal purposes).
User consent management process.
Integrate log validation as part of the vulnerability management and incident response process.
Technical Implementations
Logging Design
Define event taxonomy — what events to capture:
Authentication and session activity.
Access to data or APIs.
Configuration or firmware changes.
Security control status (firewall, encryption, updates).
System errors or unusual process executions.
Implement centralized logging using:
Syslog, Fluentd, or cloud-native logging agents.
Product-integrated log forwarding to SIEM (Splunk, ELK, or Microsoft Sentinel).
Use structured log formats (e.g., JSON, or CEF, the Common Event Format) for easier analysis.
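A structured format can be layered onto Python's standard `logging` module with a custom formatter. The field names below are illustrative, not a standard schema:

```python
import json
import logging
from datetime import datetime, timezone

class JsonFormatter(logging.Formatter):
    """Emit each record as one JSON object per line, easy to ship to a SIEM."""

    def format(self, record: logging.LogRecord) -> str:
        event = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "level": record.levelname,
            "event": record.getMessage(),
            # Security context attached via logging's `extra=` argument.
            "user": getattr(record, "user", None),
            "source_ip": getattr(record, "source_ip", None),
        }
        return json.dumps(event)

logger = logging.getLogger("security")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Usage:
# logger.warning("auth_failure",
#                extra={"user": "alice", "source_ip": "198.51.100.7"})
```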
Monitoring and Detection
Integrate with a SIEM or equivalent system to detect anomalies.
Implement alert rules for suspicious access patterns or modification attempts.
Use endpoint telemetry (e.g., OSQuery, OpenTelemetry) for detailed event capture.
Deploy integrity monitoring tools to track unauthorized modifications to configurations or binaries.
Data Protection and Opt-out
Encrypt logs both at rest and in transit.
Use role-based access control (RBAC) for viewing or exporting logs.
Allow user opt-out through:
UI controls or API flags.
Clear documentation on what is disabled (while maintaining minimal security logging if essential).
Ensure opt-out preference persistence across updates and resets.
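A sketch of opt-out handling, with a hypothetical preference file path and an illustrative set of mandatory event types that are retained even after opt-out:

```python
import json
from pathlib import Path

# Hypothetical persistent location, chosen to survive updates and resets.
OPTOUT_FILE = Path("/var/lib/product/logging_optout.json")

# Events retained even after opt-out (core security / legal basis).
MANDATORY_EVENTS = {"auth_failure", "privilege_escalation", "firmware_change"}

def load_opted_out(path: Path = OPTOUT_FILE) -> bool:
    """Read the stored opt-out preference; default to logging enabled."""
    try:
        return json.loads(path.read_text()).get("opted_out", False)
    except (FileNotFoundError, ValueError):
        return False

def should_log(event_type: str, opted_out: bool) -> bool:
    """Drop optional telemetry on opt-out, but keep mandatory events."""
    return not opted_out or event_type in MANDATORY_EVENTS
```

The split between optional and mandatory events mirrors the documentation requirement above: what gets disabled on opt-out must match what the opt-out policy tells the user.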
Tamper Resistance
Apply hashing or digital signing to ensure log integrity.
Implement append-only storage or secure audit databases.
Regularly back up logs to immutable repositories (e.g., AWS S3 Object Lock, WORM drives).
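Hash-chaining is straightforward to sketch: each entry's hash commits to the previous entry, so editing any earlier record invalidates everything after it. An in-memory illustration (a real audit trail would persist entries to append-only storage):

```python
import hashlib
import json

class HashChainedLog:
    """Append-only log where each entry commits to the previous hash."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self.prev_hash = self.GENESIS

    def append(self, event: dict) -> None:
        payload = json.dumps(event, sort_keys=True)
        digest = hashlib.sha256((self.prev_hash + payload).encode()).hexdigest()
        self.entries.append({"event": event, "hash": digest})
        self.prev_hash = digest

    def verify(self) -> bool:
        """Recompute the chain; any retroactive edit breaks it."""
        prev = self.GENESIS
        for entry in self.entries:
            payload = json.dumps(entry["event"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if expected != entry["hash"]:
                return False
            prev = entry["hash"]
        return True
```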
Documentation Requirements
Logging and Monitoring Design Document:
Specifies events captured, sources, destinations, and formats.
Opt-out Policy:
Details the scope, implications, and user controls for disabling monitoring.
System Security Plan (SSP):
References how logging contributes to compliance with CRA and GDPR.
Incident Response Procedures:
Include guidance on using logs for forensic analysis and impact assessment.
Retention and Review Logs:
Evidence of log review frequency and response to alerts.
Common Pitfalls and Readiness Gaps
Collecting too few or too many logs — leading to blind spots or noise.
Storing logs without verifying integrity or protection.
Lack of synchronization — inconsistent timestamps across distributed systems.
Ignoring user opt-out compliance (GDPR violations).
No linkage between product logs and organizational SOC or SIEM systems.
Tools and Frameworks to Help
Logging Agents: Fluentd, Filebeat, Syslog-ng, AWS CloudWatch Agent
Centralized Storage / SIEM: Splunk, ELK Stack, Graylog, Microsoft Sentinel, Logpoint
Integrity & Tamper Control: Wazuh, Tripwire, AIDE, immudb
Monitoring & Telemetry: OpenTelemetry, Prometheus, Grafana Loki
Data Protection & Consent: OneTrust, TrustArc (for privacy consent logging)
Frameworks / Standards: ISO/IEC 27002 (Logging & Monitoring), NIST 800-92, ENISA Security Logging Guidelines
This requirement makes security logging and observability a built-in product capability, not an afterthought.
To comply:
Implement structured, secure, and tamper-resistant logging for key security events.
Integrate with monitoring systems for real-time detection.
Provide user transparency and an opt-out mechanism that respects privacy.
Document log collection, retention, and opt-out controls clearly.
When done correctly, this not only meets CRA obligations but also enhances product trustworthiness and forensic readiness.
This final requirement focuses on data lifecycle management, specifically secure data deletion and safe transfer.
It ensures that products with digital elements give users full control over their data — both in securely wiping it and transferring it safely when needed.
This requirement enforces user data sovereignty — giving users the ability to:
Permanently erase all personal or configuration data, ensuring it cannot be recovered or misused.
Transfer their data securely (e.g., to another device or platform), while preventing data leakage or corruption.
The goal is to ensure privacy, confidentiality, and compliance with data protection laws (e.g., GDPR’s “right to erasure” and data portability).
The product should provide an intuitive, verifiable, and irreversible process for data deletion and secure transfer.
In essence, this clause mandates “secure offboarding” — making sure no residual data, credentials, or sensitive configurations remain after user action or product decommissioning.
Organizational Actions
Define a Data Deletion and Transfer Policy governing how user data is stored, erased, and ported.
Assign clear ownership:
Engineering – implements the deletion and transfer mechanisms.
Privacy Office (DPO) – validates alignment with GDPR and CRA.
Product Support – documents and communicates the procedure to end users.
Include data deletion verification in QA and penetration testing cycles.
Conduct periodic validation audits (e.g., using forensic tools to verify unrecoverability).
Policy / Process Updates
Integrate secure data deletion requirements into Secure Development Lifecycle (SDL) and End-of-Life (EoL) processes.
Establish a Data Lifecycle Policy that defines:
Data categories (e.g., personal, system, telemetry).
Retention and deletion triggers (e.g., user request, account closure, product reset).
Verification and logging of deletion actions.
Develop procedures for secure data transfer, including:
Encryption in transit.
Authentication between source and destination systems.
Validation of successful and complete transfer.
Define user communication standards — how and when users are informed that deletion or transfer is complete.
Require data erasure verification reports for compliance evidence.
Technical Implementations
Secure Data Deletion
Implement factory reset functions that:
Permanently delete user data, credentials, and configuration files.
Wipe residual storage using cryptographic erasure or secure overwrite (per NIST SP 800-88 Rev. 1).
Use data-at-rest encryption with ephemeral keys, so key deletion renders data inaccessible.
Ensure no hidden partitions, caches, or logs retain recoverable data.
Integrate post-deletion verification using checksum or validation mechanisms.
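Cryptographic erasure can be illustrated in a few lines: if data only ever exists encrypted under an ephemeral key, destroying the key destroys the data. The toy cipher below (SHA-256 in counter mode) is for illustration only; a real product would use a vetted AEAD cipher and hardware-backed key storage:

```python
import hashlib
import os

def _keystream(key: bytes, length: int) -> bytes:
    """SHA-256 in counter mode. Illustrative only, not a vetted cipher."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

class CryptoErasableStore:
    """Data is held only in encrypted form; destroying the ephemeral key
    is equivalent to erasing the data (cryptographic erasure)."""

    def __init__(self):
        self._key = os.urandom(32)  # ephemeral, never persisted
        self._blob = b""

    def write(self, plaintext: bytes) -> None:
        ks = _keystream(self._key, len(plaintext))
        self._blob = bytes(a ^ b for a, b in zip(plaintext, ks))

    def read(self) -> bytes:
        if self._key is None:
            raise PermissionError("data cryptographically erased")
        ks = _keystream(self._key, len(self._blob))
        return bytes(a ^ b for a, b in zip(self._blob, ks))

    def erase(self) -> None:
        self._key = None  # ciphertext remains, but is now unrecoverable
```

The advantage over overwriting is speed and coverage: key destruction is instant and also erases copies in wear-levelled flash regions that an overwrite might never reach.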
Secure Data Transfer
Use end-to-end encryption (TLS 1.3 or stronger) for data export or migration.
Implement strong authentication (e.g., OAuth 2.0, API keys) before initiating transfer.
Enforce access controls so only authorized users can request transfer or deletion.
Apply data integrity checks (e.g., digital signatures) to verify successful and untampered transfer.
Provide user-friendly interfaces (e.g., UI, CLI, or API endpoints) for initiating and tracking transfer/deletion securely.
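A transfer integrity check can be sketched with an HMAC tag over the exported payload. This assumes a pre-shared key between source and destination; a digital signature scheme works analogously when no shared secret exists:

```python
import hashlib
import hmac

def package_export(data: bytes, shared_key: bytes) -> dict:
    """Attach an HMAC tag so the receiver can verify the export arrived
    complete and untampered."""
    tag = hmac.new(shared_key, data, hashlib.sha256).hexdigest()
    return {"data": data, "tag": tag}

def verify_import(package: dict, shared_key: bytes) -> bytes:
    """Recompute the tag and compare in constant time before accepting."""
    expected = hmac.new(shared_key, package["data"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, package["tag"]):
        raise ValueError("transfer integrity check failed")
    return package["data"]
```

The check runs on the receiving side before any imported data is used, so a truncated or modified transfer is rejected rather than silently accepted.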
User Experience and Transparency
Include a clear “Delete All Data” or “Factory Reset” option accessible from the UI.
Provide confirmation prompts and warnings before irreversible actions.
Offer progress indicators and completion confirmation for transparency.
Ensure data portability formats are standard and interoperable (e.g., JSON, XML, CSV).
Documentation Requirements
Data Deletion Procedure Document:
Details methods, cryptographic algorithms, and verification steps used to ensure permanent deletion.
Data Transfer Security Design:
Outlines how data export/import mechanisms maintain confidentiality and integrity.
Product User Manual:
Includes clear user guidance on how to delete or transfer data securely.
Verification Reports:
Evidence logs showing successful deletion or secure transfer events.
Test Reports:
Results of QA, vulnerability, or penetration tests confirming unrecoverable data post-deletion.
Compliance Mapping:
Documentation linking this functionality to GDPR Articles 17 (right to erasure) and 20 (data portability).
Common Pitfalls and Readiness Gaps
Deletion functions that only “hide” data (e.g., flagging for deletion but not overwriting).
Lack of verification or audit logging to prove data was erased.
Retained data in backups, logs, or temporary files.
Insecure export mechanisms (unencrypted data transfer).
No user transparency — users unsure whether data was truly deleted or migrated.
No documented linkage to privacy compliance requirements (GDPR, CRA).
Tools and Frameworks to Help
Data Deletion Verification: Blancco, Certus, DBAN, NIST 800-88 guidelines
Encryption & Key Management: HashiCorp Vault, AWS KMS, Azure Key Vault
Data Transfer Security: OpenSSL, SCP/SFTP, HTTPS (TLS 1.3), OAuth 2.0
Privacy Compliance Tools: OneTrust, TrustArc, DataGrail
Testing / Validation: FTK Imager, Autopsy (for recovery verification)
Frameworks / Standards: NIST 800-88 (Media Sanitization), ISO/IEC 27040 (Storage Security), ENISA Data Protection Guidelines
This requirement ensures responsible end-of-life data handling and secure portability — two areas often neglected in product security.
To comply:
Implement verifiable, cryptographically secure data deletion mechanisms.
Ensure data portability through secure, encrypted transfer methods.
Provide transparency and user control through accessible UI or API options.
Maintain deletion verification evidence for CRA and GDPR compliance.
When implemented correctly, this builds trust, compliance assurance, and customer confidence that their data is handled securely — not just during use, but also when they part ways with the product.