In April 2026, MITRE released two white papers prepared under contract with FDA that together represent the most substantive update to the medical device cybersecurity literature in roughly a year. The first, Cybersecurity Risk Analysis for Medical Devices in the Era of Evolving Technologies [read the paper], tackles three threat surfaces that FDA reviewers are increasingly attentive to: cloud-dependent device architectures, AI/ML model integrity, and the looming transition to post-quantum cryptography. The second, Considerations for Managing Challenges in Software Bill of Materials (SBOM) Data Normalization [read the paper], extends MITRE's October 2024 SBOM work and gets practical about the data hygiene problem that hits manufacturers once they begin managing SBOMs at scale.
Both papers explicitly anchor themselves in FDA's June 2025 update to Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions and reflect the lessons MITRE has gathered through its FDA-funded engagements with manufacturers, healthcare delivery organizations, regulatory consultants, and cybersecurity vendors. For AI/ML SaMD teams scoping their next premarket submission, the two papers should be read as a pair.
Below is a practitioner-facing breakdown of what's new, what's reinforced, and which tools and resources are most worth bookmarking.
Paper One: Cybersecurity Risk Analysis for Evolving Technologies
The first paper is organized around three technology categories: cloud, AI/ML, and post-quantum cryptography. MITRE distinguishes between "evolving" technologies (cloud, AI/ML) where adoption is reshaping device architectures and "emerging" technologies (PQC) that are entering devices to address a future threat. The framing matters because it sets the tone for the rest of the document: cybersecurity risk management for these technologies does not require reinventing the discipline, but it does require extending existing practices, including SBOM management and threat modeling, to cover new components, new trust boundaries, and new shared responsibility models.
General considerations and the shifting responsibility model
A theme that runs through the entire paper is that the traditional model, in which a manufacturer ships a device and the healthcare delivery organization assumes operational responsibility, no longer holds. Cloud-hosted devices, third-party AI components, and data flowing across geographic regions all expand the set of parties who share in cybersecurity responsibility. MITRE leans heavily on a CISA cloud responsibility chart that maps which layers (data, networking, applications, runtime, middleware, OS, virtualization, servers, storage, physical security) are managed by the customer (the "agency," in CISA's federal framing) versus the cloud service provider across on-premises, IaaS, PaaS, and SaaS deployment models. For manufacturers, this chart is useful both internally (which layers are we securing?) and for customer conversations (what are we expecting our HDO customer to manage?).
The paper points to ISO 13485:2016 clauses 7.4.1 through 7.4.3, the purchasing controls clauses, as a framework for setting cybersecurity expectations on suppliers, including cloud service providers. It also references the Healthcare and Public Health Sector Coordinating Council's model contract language for medtech cybersecurity as a starting point that can be adapted for cloud services. Both are practical tools that fit naturally into an existing QMS and that auditors will recognize.
Cloud
The cloud section opens with the NIST cloud computing framework (SP 800-145) and its three service models and four deployment models, then walks through how a manufacturer might use cloud infrastructure either internally during development or as part of the deployed device. The risks discussed are familiar but worth reciting: ransomware against cloud-hosted device services, cloud unavailability that translates directly to clinical unavailability, supply chain attacks against the CI/CD pipeline that propagate to every fielded device, and downstream impacts that can affect dozens or hundreds of HDOs simultaneously. MITRE cites the Elekta cloud ransomware incident, which disrupted cancer treatment at over 170 facilities, as the canonical example.
The mitigations fall into three buckets: policies and processes, resilient architecture, and preparedness and response. SBOMs for cloud-based devices, the paper emphasizes, must include all cloud components, including virtual machines, container images and their layers, machine images, and cloud-native services. Threat modeling, following MITRE's Playbook for Threat Modeling Medical Devices, should identify high-value data flows and trust boundaries that cross the cloud responsibility chart. For threat enumeration, MITRE points to its Enterprise ATT&CK Cloud Matrix, the Mappings Explorer (which maps AWS, Azure, and GCP security controls to ATT&CK techniques), the CAVEaT cloud threat matrix developed jointly with the Cloud Security Alliance, and the OWASP cheat sheets covering secure cloud architecture, secrets management, Docker, and Kubernetes.
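To make the "all cloud components" expectation concrete, the sketch below (in Python, with entirely hypothetical component names, digests, and field choices) shows one way a team might enumerate the cloud-resident pieces of a device service in an internal inventory before exporting them into whichever SBOM format (CycloneDX or SPDX) their toolchain expects. It is illustrative of what belongs in the inventory, not a validated SBOM document.

```python
import json
from dataclasses import dataclass, field, asdict

# Hypothetical inventory entries for a cloud-hosted device service; names,
# digests, and versions are illustrative placeholders, not real artifacts.
@dataclass
class CloudComponent:
    name: str
    component_type: str          # e.g. "machine-image", "container", "cloud-service"
    version: str = ""
    layers: list = field(default_factory=list)   # container layer digests, if any
    responsible_party: str = ""  # who secures it under the shared responsibility model

inventory = [
    CloudComponent("base-vm-image", "machine-image", "2025.09", responsible_party="manufacturer"),
    CloudComponent(
        "inference-service", "container", "3.2.1",
        layers=["sha256:aaaa...", "sha256:bbbb..."],   # truncated for illustration
        responsible_party="manufacturer",
    ),
    CloudComponent("managed-database", "cloud-service", responsible_party="cloud provider"),
]

# Serialize for export into whatever SBOM tooling the organization uses.
print(json.dumps([asdict(c) for c in inventory], indent=2))
```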
For preparedness, the paper recommends provisioning across multiple geographic regions, designing for offline operation through local caching, and maintaining backups in separate locations. Each of these is a familiar resilience pattern, but the paper is useful in framing them as cybersecurity controls rather than just availability controls.
Artificial Intelligence and Machine Learning
The AI/ML section is where the paper does its most current work. It catalogs the AI/ML lifecycle phases, the categories of data involved (raw, training, testing, adversarial testing, models with their weights and hyperparameters), and the architectural components. It distinguishes discriminative AI/ML, which is relatively mature in radiology and cardiology, from generative AI/ML, where adoption inside medical devices remains limited. As of the FDA AI-Enabled Medical Device list snapshot referenced in the paper (September 30, 2025), there are 1,357 entries, with roughly 76.6 percent in radiology and 9.6 percent in cardiovascular.
The threat discussion identifies several AI/ML-specific risks worth highlighting. Data poisoning at any stage of the lifecycle, including training, testing, and behavior-influencing inputs like LLM prompts, can compromise model integrity in ways that are difficult to detect through traditional code review. Adversarial inputs and prompt injections can produce incorrect outputs at inference time. Membership inference attacks can leak information about whether specific patients were included in training data, with HIPAA implications. AI-generated code may introduce unusual bugs that bypass conventional static analysis tools tuned to human coding patterns. The paper observes that even devices that do not embed AI/ML may have been built using AI-generated code from third-party developers, raising questions that are not yet well addressed in supplier audits.
Several challenges receive extended treatment. The stochastic, non-deterministic nature of many AI/ML systems complicates traditional verification and validation methods, which generally assume that identical inputs produce identical outputs. Hallucinations in generative models can alter device outputs in ways that bear directly on patient safety. Guardrails, retrieval-augmented generation (RAG), prompt engineering, and hyperparameter tuning can reduce risk but do not provide the deterministic guarantees that traditional software defenses do. The locked versus adaptive mode trade-off is laid out cleanly: locked models offer software-like versioning and predictable validation, while adaptive models can respond to new data more quickly but may bypass independent V&V steps and are more exposed to data poisoning during operation. Cloud-dependent AI/ML inherits all of the cloud risks discussed in the previous section.
The mitigation recommendations include securing the entire learning environment, implementing guardrails with robustness testing including manual red teaming by subject matter experts, integrating AI/ML threat modeling into the overall software security program, applying the principle of least privilege to AI-enabled agents and subsystems, and conducting risk and liability analysis during acquisition of any AI/ML components from third parties. MITRE points to its own ATLAS framework, the Adversarial Threat Landscape for Artificial-Intelligence Systems, as a knowledge base that complements ATT&CK for AI-specific tactics and techniques. NIST's AI Risk Management Framework, the OECD AI classification framework, the Berryville Institute of Machine Learning architectural risk analysis, FDA's Good Machine Learning Practice guiding principles (joint with Health Canada and the UK MHRA), and the OWASP Secure AI Model Ops cheat sheet are all called out as supporting resources.
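To illustrate just one of these recommendations, the least-privilege principle applied to AI-enabled subsystems, here is a minimal Python sketch of a role-based action allowlist. The agent roles, action names, and handler are hypothetical, and a real implementation would also log denials for post-market monitoring and incident response.

```python
# Minimal sketch of a least-privilege gate for an AI-enabled subsystem.
# Roles and actions are hypothetical examples, not a recommended taxonomy.
ALLOWED_ACTIONS = {
    "report-viewer-agent": {"read_study_metadata", "render_report"},
    "triage-agent": {"read_study_metadata", "flag_for_review"},
    # Note: no agent role is granted write access to archived studies.
}

def invoke(agent_role: str, action: str, handler, *args, **kwargs):
    """Execute an action on behalf of an AI agent only if its role permits it."""
    if action not in ALLOWED_ACTIONS.get(agent_role, set()):
        raise PermissionError(f"{agent_role!r} is not permitted to perform {action!r}")
    return handler(*args, **kwargs)

# Usage: the triage agent may flag a study but cannot render a clinical report.
invoke("triage-agent", "flag_for_review", lambda study_id: f"flagged {study_id}", "ST-001")
```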
Post-Quantum Cryptography
The PQC section is the most policy-oriented part of the paper. It explains the mathematical premise: Peter Shor's 1994 algorithm means that any sufficiently powerful quantum computer (a "cryptanalytically relevant quantum computer," or CRQC) could break the asymmetric cryptography (RSA, Diffie-Hellman, ECC) that underpins most current security controls. Because medical devices, especially implantables and large capital equipment, often have lifetimes that extend well past their original support windows, the "harvest-now, decrypt-later" threat is particularly relevant: encrypted data exfiltrated today could become readable when CRQCs arrive.
NIST published the first three finalized post-quantum cryptographic algorithm standards (FIPS 203, 204, and 205) in August 2024 and has signaled that all CRQC-vulnerable asymmetric algorithms will be categorized as "Disallowed" by 2035, in line with the NSM-10 deadline. NSA's CNSA 2.0 suite, all of whose asymmetric algorithms are post-quantum, has a more aggressive deadline of December 31, 2031, for national security applications. Executive Order 14144 (January 2025, amended in June 2025), Public Law 117-260 (the Quantum Computing Cybersecurity Preparedness Act, December 2022), and NSMs 8 and 10 form the policy backdrop.
For manufacturers, the paper recommends a four-part strategic plan: goal setting, gathering information (cryptographic inventory), resources and implementation, and the use of automated cryptographic discovery and inventory (ACDI) tools. The paper notes a gap that medical device teams should pay attention to: ACDI tools today are largely focused on general enterprise IT and do not yet claim to assess vulnerabilities in specialized medical equipment. Practical considerations include the larger memory and code footprint of post-quantum algorithms, longer message sizes, and the interoperability problem of new PQC-equipped devices interfacing with legacy devices that cannot be updated. For implantables that cannot be reprogrammed without physical access, cryptographic transitions may not be feasible at all without device replacement, which has direct patient safety implications.
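As a rough illustration of what the "gathering information" step can look like before ACDI tooling matures for medical devices, the following Python sketch does a naive first pass over a source tree for quantum-vulnerable algorithm identifiers. The patterns, file extensions, and output format are assumptions, and a real inventory would also need to cover binaries, certificates, protocol configurations, and supplier documentation.

```python
import re
from pathlib import Path

# Naive first-pass cryptographic inventory: flag quantum-vulnerable asymmetric
# algorithm identifiers in source and configuration files. Patterns are
# illustrative, not exhaustive.
QUANTUM_VULNERABLE = re.compile(
    r"\b(RSA|DSA|ECDSA|ECDH|Diffie[- ]?Hellman|secp256r1|P-256)\b", re.IGNORECASE
)
SCANNED_SUFFIXES = {".c", ".h", ".py", ".java", ".cfg", ".yaml", ".yml", ".json"}

def scan(repo_root: str):
    findings = []
    for path in Path(repo_root).rglob("*"):
        if path.suffix not in SCANNED_SUFFIXES or not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        for line_no, line in enumerate(text.splitlines(), start=1):
            for match in QUANTUM_VULNERABLE.finditer(line):
                findings.append((str(path), line_no, match.group(0)))
    return findings

# Print each hit as file:line: algorithm, as a starting point for the inventory.
for file_path, line_no, algorithm in scan("."):
    print(f"{file_path}:{line_no}: {algorithm}")
```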
Paper Two: SBOM Data Normalization
The second paper picks up where MITRE's October 2024 SBOM data normalization paper left off. The earlier paper described the normalization challenges that arise when manufacturers process and manage SBOMs from multiple sources at scale; this one is more practical, focusing on the technologies and processes that can scale with the SBOM tool ecosystem as it evolves.
The "source of truth" approach
The central recommendation is for manufacturers to maintain a centralized "source of truth" (SoT) that provides a consistent nomenclature for the baseline SBOM attributes across the organization. This SoT should support three core functions: retrieving the canonical name of a component or supplier, retrieving alternate names given a canonical entity, and resolving a raw name to its canonical form. Whether implemented as a simple alias database, a fuzzy-matching service, a parser-driven pipeline, or some combination, the SoT enables consistent ingestion of SBOMs from multiple internal and external sources, including those acquired through mergers and acquisitions.
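A minimal in-memory sketch of those lookups, with the "retrieve the canonical name" and "resolve a raw name" functions collapsed into a single method for brevity, might look like the Python below. The entries are invented, and a production SoT would sit behind a database and an API, with a review workflow for adding new aliases.

```python
# Minimal in-memory sketch of the source-of-truth lookups described above.
class SourceOfTruth:
    def __init__(self):
        self._aliases = {}   # canonical name -> set of known alternate names

    def register(self, canonical: str, *alternates: str) -> None:
        self._aliases.setdefault(canonical, set()).update(alternates)

    def canonical_name(self, raw: str) -> str | None:
        """Resolve a raw supplier/component string to its canonical form."""
        raw_lower = raw.strip().lower()
        for canonical, alternates in self._aliases.items():
            if raw_lower == canonical.lower() or raw_lower in {a.lower() for a in alternates}:
                return canonical
        return None   # unresolved: route to a human reviewer

    def alternate_names(self, canonical: str) -> set[str]:
        """Return the known aliases for a canonical entity."""
        return set(self._aliases.get(canonical, set()))

# Illustrative usage with an invented alias set.
sot = SourceOfTruth()
sot.register("OpenSSL Software Foundation", "openssl.org", "OpenSSL Project")
assert sot.canonical_name("openssl project") == "OpenSSL Software Foundation"
```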
For manufacturers with multiple product lines or recent acquisitions, centralizing this capability at the enterprise level (rather than per-business-unit) acts as a force multiplier: when one product team resolves a normalization edge case, the entire organization benefits. The paper recommends defining the SoT around concrete use cases (aggregating SBOMs across business units, validating supplier-provided SBOMs, ingesting open-source SBOMs, supporting vulnerability management queries), then deriving the schema, API, and update processes from those use cases.
Baseline attributes and where to find authoritative data
For each of the four key baseline attributes (Supplier Name, Component Name, Version String, Unique Identifier), the paper identifies authoritative internal and external sources. Supplier Name can be drawn from internal contracting databases, embedded copyright and license information, the National Vulnerability Database, SEC filings, and state corporation registry databases. Component Name comes from the same internal sources plus external software identifier registries. Version String requires either component-level version tracking or, alternatively, metadata about the versioning scheme itself, including when version schemas have changed. Unique Identifier should preferentially use Common Platform Enumeration (CPE) identifiers, since CPEs are what NVD uses for vulnerability matching, with Package URL (PURL) identifiers as the secondary option when CPEs are not available.
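The CPE-first preference can be expressed as a simple resolution rule. The Python sketch below assumes a local cache standing in for a query against the NVD CPE Dictionary and falls back to constructing a Package URL when no CPE match exists; the example entries and the "generic" PURL type are illustrative only.

```python
# Sketch of the "CPE first, PURL as fallback" rule for the Unique Identifier
# attribute. KNOWN_CPES is a stand-in for an NVD CPE Dictionary lookup.
KNOWN_CPES = {
    ("openssl", "openssl", "3.0.8"): "cpe:2.3:a:openssl:openssl:3.0.8:*:*:*:*:*:*:*",
}

def unique_identifier(supplier: str, component: str, version: str) -> str:
    key = (supplier.lower(), component.lower(), version)
    cpe = KNOWN_CPES.get(key)
    if cpe:
        return cpe   # preferred: matches how NVD indexes vulnerabilities
    # Fallback: construct a Package URL (ecosystem type chosen per component; "generic" here)
    return f"pkg:generic/{supplier.lower()}/{component.lower()}@{version}"

print(unique_identifier("OpenSSL", "OpenSSL", "3.0.8"))   # CPE hit
print(unique_identifier("Acme", "widget-lib", "1.4.2"))   # falls back to a PURL
```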
Tooling guidance
The paper offers a set of practical questions for evaluating SBOM tools, with explicit attention to normalization. How does the tool handle unusual versioning schemes? Does it label automated decisions that may be subject to false positives? Can data be migrated to new tools as the market evolves? Are data stores auditable? How does the tool involve a human reviewer when automated matching fails? How closely will the vendor work with the organization on edge cases? These questions reflect MITRE's interview findings that few manufacturers have reached the scale where normalization issues are unavoidable, but most will eventually encounter them, and that lock-in to a single tool is a real risk in a fast-moving market.
For organizations choosing between approaches, the paper notes that the right answer depends on size, number of products, internal skill set, and other factors. Manual approaches with spreadsheets and ad hoc scripts are still common. Open-source tooling, including the OpenSSF SBOM Everywhere catalog, the CycloneDX tool center, and the SPDX tools registry, supports a build-it-yourself approach. Commercial SBOM management tools provide more turnkey functionality but require careful evaluation against the questions above.
For specific identifier-related work, two open-source resources stand out. AboutCode's PURL Database includes a generator that maps PURLs to CPEs and has published a CVE repository covering 1999 through 2022. ScanOSS has released its purl2cpe tool as open source along with its current database. Both are valuable for manufacturers who want to translate PURLs to CPEs for vulnerability management automation against NVD.
Where the Two Papers Connect
The two papers are best read together. Paper One repeatedly invokes SBOMs as a foundational mitigation for cloud, AI/ML, and PQC risks. Cloud-based device SBOMs must enumerate virtual machines, container layers, and cloud-native services. AI/ML model integrity requires inventory not just of code but of training data, model weights, and prompt templates that effectively act as code. PQC migration depends on cryptographic inventory, which in turn depends on the SBOM-style discipline of knowing what is deployed where. Paper Two provides the practical machinery for getting that inventory right at scale.
Both papers also reinforce three points that align with FDA's June 2025 premarket cybersecurity guidance and AAMI CR515. First, threat modeling is non-negotiable, and it must be extended to cover cloud, AI/ML, and cryptographic components. Second, governance, including roles, responsibilities, contracting language, and SLAs, is central to managing risk in technologies where third parties operate parts of the device. Third, devices must be designed to evolve, with hardware and software upgrade paths planned from the outset, so that they do not become "devices that cannot be reasonably protected against current cybersecurity threats" (the IMDRF legacy device definition).
What This Means for AI/ML SaMD Teams Right Now
For teams scoping a 510(k), De Novo, or PMA in the next twelve to eighteen months, the practical implications are concrete.
Start with the threat model. If your current threat model does not enumerate cloud trust boundaries, model artifacts, or cryptographic dependencies, those gaps will likely surface during FDA review or in pre-submission feedback. The MITRE Playbook for Threat Modeling Medical Devices combined with ATT&CK Cloud, ATLAS, and CAVEaT gives you a structured way to extend coverage.
Treat your SBOM strategy as a multi-year capability, not a submission deliverable. The Paper Two recommendations on SoT design pay off most when you are managing multiple product lines, integrating an acquisition, or migrating between SBOM tools. The earlier you define your canonical nomenclature and the API around it, the less rework you face later.
Inventory your cryptography. Even if PQC migration is not on your near-term roadmap, knowing which asymmetric algorithms are deployed where, in which devices, with what update mechanisms, is a prerequisite for any future transition plan. Implantables and capital equipment with long lifetimes deserve particular attention.
Bring AI/ML governance into the cybersecurity program. Manual red teaming, supplier risk assessment for third-party models, integrity controls on training and inference data, and least-privilege design for AI-enabled subsystems are all called out explicitly. Several of these will read as new requirements to engineering teams accustomed to traditional software practices.
Update contracting language. The CISA responsibility chart, the ISO 13485 purchasing control clauses, and the HSCC model contract language together give you a defensible framework for setting cybersecurity expectations on cloud providers, AI/ML suppliers, and component vendors.
A Compact Tool and Resource Reference
A few resources from the two papers are worth bookmarking and referencing in your next threat model, SBOM strategy, or supplier audit. The MITRE Playbook for Threat Modeling Medical Devices and ATT&CK Cloud Matrix anchor the threat modeling work. MITRE ATLAS handles AI/ML-specific adversary tactics. CAVEaT, developed with the Cloud Security Alliance, gives you a cloud-specific threat matrix. The MITRE Mappings Explorer translates AWS, Azure, and GCP controls to ATT&CK. The OWASP cheat sheet series, especially the Docker Security, Kubernetes Security, Secrets Management, Secure Cloud Architecture, and Secure AI Model Ops sheets, gives you concrete control guidance. NIST's AI Risk Management Framework and FDA's Good Machine Learning Practice guiding principles set the AI/ML governance baseline. NIST FIPS 203, 204, and 205, plus NIST IR 8547, are the PQC anchors. For SBOM, the third edition of the CISA-hosted SBOM Framing Document, the OpenSSF SBOM Everywhere catalog, the NVD CPE Dictionary, AboutCode's PURL Database, and ScanOSS purl2cpe cover the main technical bases.
Closing
Neither paper introduces a fundamentally new framework. What they do, and do well, is consolidate the working consensus that has formed across MITRE's interview base of manufacturers, healthcare delivery organizations, consultants, and tool vendors, then map that consensus onto FDA's existing premarket cybersecurity expectations. For practitioners, that consolidation is the value: instead of stitching together cloud, AI/ML, PQC, and SBOM guidance from a dozen sources, you have two FDA-funded MITRE documents that collectively reference the resources you actually need to use.
If you are planning a premarket submission this year that touches any of cloud, AI/ML, or cryptographic controls, or if you are scaling SBOM management across more than one product line, both papers are worth a careful read and, more importantly, a follow-on conversation about how the recommendations map onto your specific architecture and submission timeline.
Need help applying this guidance?
Cosm specializes in FDA regulatory and quality strategy for AI/ML-enabled medical devices and Software as a Medical Device. If you are scoping a threat model, evaluating cloud architectures, or building out SBOM tooling, contact us or visit www.cosmhq.com to discuss how we can support your premarket cybersecurity submission.
Disclaimer - https://www.cosmhq.com/disclaimer
