A Framework for Cyber-Secure AI Data Centers
Given the threats and the high-value nature of the target, the cybersecurity of AI data centers needs to be top tier. As Tim Fist commented, "AI data centers used to train and run the most powerful models will likely need to be secured with nation-state-level adversaries in mind—this will likely require taking some of the measures typically only used on government data centers used to store and process highly classified information, as well as many additional measures specific to AI performance and security requirements."1 In line with this comment, this report suggests that each of the six layers of security requires three approaches: technical, corporate policy, and national governance (see Figure 5). Together, these approaches form a framework that maps the threats to AI data centers across the six layers of security and bridges the gaps between the technical, corporate policy, and national governance levels.
Technical Measures
Recommendation 1: Implement existing research and standards for technical requirements in AI data centers.
From a technical perspective, the security approach is informed by a May 2024 RAND report that defined security levels from one to five for securing model weights.2 Security Level 1 (SL1) indicates an AI system that can defend against amateur attacks, while Security Level 5 (SL5) can protect against the most sophisticated attacks, even those by nation-state actors. The report includes a benchmark for each level, detailing the technical security measures necessary to reach it.3 The Institute for Progress built on RAND's work and outlined an overview of the technical measures required for AI data centers to reach SL4 across supply chain, network & storage, hardware, and physical access security.4 The technical needs for a cyber-secure AI data center are thus generally well known. However, corporate policy and national governance measures must also be in place for a consolidated, top-tier cybersecurity approach that incentivizes AI data center companies and operators to implement the required technical measures.
Corporate Policy Measures
Recommendation 2: Corporate policies need to require technical measures across the six layers of security.
For AI data center companies and operators to have the appropriate technical measures in place, they need corporate governance measures that require those mitigations through a structured process. For example, corporate policies requiring all AI data centers to deploy a Faraday cage or shielded chamber with the hardware are necessary to mitigate side-channel attacks on data center hardware and to defend against tracking and monitoring of electromagnetic emanations.5 At the model & data layer, corporate policy needs to require continuous AI audits and monitoring in order to secure AI models, identify backdoors and vulnerabilities in them, and defend against model and data exfiltration.
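One building block of such continuous monitoring is automated integrity checking of model artifacts before deployment. The sketch below is a minimal, hypothetical illustration (the registry, file name, and placeholder digest are invented for the example) of how a corporate policy requiring an allowlist of approved model weights might be enforced in code:

```python
import hashlib
from pathlib import Path

# Hypothetical registry of approved model artifacts and their SHA-256 digests,
# as a corporate policy might maintain in a signed, access-controlled store.
# The digest below is of the placeholder bytes b"Hello World", standing in
# for real model weights.
APPROVED_DIGESTS = {
    "model-v1.bin": "a591a6d40bf420404a011733cfb7b190d62c65bf0bcda32b57b277d9ad9f146e",
}

def verify_model(path: Path) -> bool:
    """Return True only if the file's digest matches the approved registry."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return APPROVED_DIGESTS.get(path.name) == digest
```

In practice this check would be one gate in a larger audit pipeline (alongside behavioral testing for backdoors), and the registry itself would need to be protected from tampering.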
National Governance Measures
Recommendation 3: National governance measures should focus on incentivizing operators to meet the technical and corporate policies needed for a cyber-secure AI data center.
National governance approaches need to provide a framework that encourages responsible AI data center construction and operations. Even though companies have natural incentives to make AI data centers cyber secure, such as avoiding reputational damage from data leakage or financial losses from AI data center outages and disruptions, defending these assets from the most sophisticated threat actors is not easy and requires significant resources and investments. National policy approaches are currently immature: While there are over 200 national or supranational AI laws and regulations, very few actually govern AI usage, deployment, and infrastructure with binding legislation.6 Some existing standards that can help support AI data center security include:7
- The National Institute of Standards and Technology (NIST)'s Secure Software Development Framework (SSDF) includes best practices for decreasing software-level vulnerabilities and has an AI addendum that includes best practices for AI models.8
- NIST's SP 800-171 provides standards for protecting unclassified information.9
- NIST's SP 800-53 includes standards for security and privacy controls.10
- NIST's FIPS 140-3 outlines the design and operation of computer hardware that processes and protects sensitive data.11
- The Federal Risk and Authorization Management Program (FedRAMP), based on NIST SP 800-53, uses security assessments, authorization, and continuous monitoring to determine whether a cloud service provider complies with the program's standards.12
- The U.S. Department of Defense has a Cybersecurity Maturity Model Certification (CMMC) program that requires defense contractors to have sufficient security measures to protect unclassified and sensitive information.13
- The Cybersecurity and Infrastructure Security Agency (CISA) has a Zero Trust Maturity Model that helps define best practices for controlling access to sensitive data.14
- CISA's Software Bill of Materials (SBOM) offers an ingredients list for software to identify software and supply chain vulnerabilities.15
Policies and objectives focused on AI data centers have also emerged but have not consistently required or incentivized AI data center security. President Biden's 2025 Executive Order (EO) 14141: Advancing United States Leadership in Artificial Intelligence Infrastructure required AI data center operators to submit security proposals when requesting to build on federal land.16 Unfortunately, President Trump revoked EO 14141 in July 2025. Furthermore, the Trump administration's EO 14179: Removing Barriers to American Leadership in Artificial Intelligence and EO 14154: Unleashing American Energy led the Department of Energy to designate 16 potential federal sites for rapid AI data center construction that may not take security requirements into account.17 AI data center construction on federal land has yet to materialize.
More recently, in July 2025, President Trump revealed his AI Action Plan, which further mandates that federal land be made available for AI data centers and supporting infrastructure without security requisites.18 Additionally, the action plan's central theme of "Build, Baby, Build!" encourages businesses to build AI tech stacks and data centers abroad and mentions high-security technical standards only for AI data centers used by the military and intelligence community.19
National regulations with significant fines and risk-based frameworks that require stronger security measures in AI data centers are currently lacking but are crucial for incentivizing AI data center operators and companies to meet high-security technical requirements.20 Benefits such as tax breaks or access to federal land tied to strong security requirements can also encourage operators.
Once national governance measures exist, they can further incentivize AI data center businesses and operators by highlighting the return on investment when complying with regulations: When there is a distinction between noncompliant and compliant AI data centers, investors, customers, and potential employees will flock to the more cyber-secure centers.21
For example, at the software & application layer, supply chain attacks and vulnerabilities pose risks to AI data centers. A technical approach to mitigate the risk would be to implement secure coding and to test software with penetration testing and red teaming.22 The corresponding corporate policy would be to allow only tested software and applications into the company's AI data centers and to require audits of source code before deploying the software.23 Finally, national governance measures that allow only AI data center operators who follow CISA's Secure by Design approach24 or implement software with SBOMs to become government contractors can incentivize AI data center companies to adopt corporate policies that meet the technical requirements.25
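To illustrate how an SBOM can support such source-code and supply-chain audits, the sketch below checks a minimal CycloneDX-style component list against an advisory feed and flags known-vulnerable versions. The component names, versions, and advisory data are hypothetical examples, not drawn from a real SBOM:

```python
# Minimal CycloneDX-style SBOM fragment (hypothetical components and versions).
sbom = {
    "components": [
        {"name": "openssl", "version": "3.0.7"},
        {"name": "log4j-core", "version": "2.14.1"},
    ]
}

# Hypothetical advisory feed: component name -> set of vulnerable versions.
KNOWN_VULNERABLE = {"log4j-core": {"2.14.1", "2.15.0"}}

def audit_sbom(sbom: dict) -> list[str]:
    """Return names of SBOM components whose version appears in an advisory."""
    return [
        c["name"]
        for c in sbom["components"]
        if c["version"] in KNOWN_VULNERABLE.get(c["name"], set())
    ]
```

A policy gate of this kind, run before software enters the data center, is one concrete way a corporate audit requirement and a national SBOM mandate can reinforce each other.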
Citations
- Tim Fist, interview by Seungmin Lee, April 24, 2025; Fist and Datta, How to Build the Future of AI in the United States.
- Sella Nevo et al., Securing AI Model Weights: Preventing Theft and Misuse of Frontier Models (RAND, 2024).
- Nevo et al., Securing AI Model Weights.
- Fist and Datta, How to Build the Future of AI in the United States.
- Vladimir Antić et al., "Protecting Data at Risk of Unintentional Electromagnetic Emanation: TEMPEST Profiling," Applied Sciences 14, no. 11 (June 3, 2024): 4830.
- Swati Srivastava, "Regulate or Innovate? Governing AI amid the Race for AI Sovereignty," May 1, 2025, source.
- Arnab Datta and Tim Fist, Compute in America: A Policy Playbook (Institute for Progress, February 3, 2025).
- Computer Security Resource Center, "Secure Software Development Framework (SSDF)," National Institute of Standards and Technology, updated February 27, 2025.
- Ron Ross and Victoria Pillitteri, Protecting Controlled Unclassified Information in Nonfederal Systems and Organizations, Special Publication 800-171, rev. 3 (National Institute of Standards and Technology, May 2024).
- Joint Task Force Working Group, Security and Privacy Controls for Information Systems and Organizations, Special Publication 800-53, rev. 5, update 1 (National Institute of Standards and Technology, October 2024).
- National Institute of Standards and Technology (NIST), Security Requirements for Cryptographic Modules, FIPS PUB 140-3 (NIST, March 22, 2019).
- "FedRAMP," General Services Administration, updated March 31, 2025.
- U.S. Department of Defense Chief Information Officer, "Cybersecurity Maturity Model Certification," accessed July 24, 2025.
- "Zero Trust Maturity Model," Cybersecurity and Infrastructure Security Agency, accessed April 11, 2023.
- "Software Bill of Materials (SBOM)," Cybersecurity and Infrastructure Security Agency, accessed June 21, 2025.
- Biden, Executive Order on Advancing United States Leadership in Artificial Intelligence.
- Donald J. Trump, Executive Order 14179: Removing Barriers to American Leadership in Artificial Intelligence, 90 FR 8741 (The White House, January 31, 2025); Secretary of the Interior, Secretary's Order No. 3418: Unleashing American Energy (U.S. Department of the Interior, February 3, 2025); "DOE Identifies 16 Federal Sites Across the Country for Data Center and AI Infrastructure Development," Department of Energy, April 3, 2025.
- Winning the Race: America's AI Action Plan (The White House, July 2025), accessed August 7, 2025.
- Winning the Race.
- Srivastava, "Regulate or Innovate?," source.
- Mariami Tkeshelashvili and Tiffany Saade, Navigating AI Compliance, Part 2: Risk Mitigation Strategies for Safeguarding Against Future Failures (Institute for Security and Technology, March 2025).
- "Data Center Threats and Vulnerabilities," Check Point Software Technologies; Nevo et al., Securing AI Model Weights.
- Fist and Datta, How to Build the Future of AI in the United States.
- Cybersecurity and Infrastructure Security Agency (CISA), Shifting the Balance of Cybersecurity Risk: Principles and Approaches for Secure by Design Software (CISA, October 25, 2023).
- CISA, Shifting the Balance of Cybersecurity Risk; "Software Bill of Materials (SBOM)," CISA.