Securing the Backbone of Artificial Intelligence: Protecting Data Centers
Abstract
With the explosion in demand for artificial intelligence (AI) data centers to support the exponential integration of AI into businesses, operations, governments, and daily lives, the cybersecurity of AI data centers is becoming increasingly important. Given that AI data centers host AI models, weights, and training data, these centers face an expanded set of threats, particularly related to hardware, model, and geopolitical security. This report recommends a new framework to develop more cyber-secure AI data centers. The framework proposes an alignment and implementation of technical, corporate policy, and national governance approaches across six layers of security: (1) hardware & compute, (2) network & storage, (3) model & data, (4) software & application, (5) physical access, and (6) geopolitical.
Acknowledgments
Special thanks to subject matter expert interviewees Austin Carson, Tim Fist, Allan Freedman, James Slaughter, and Ian Wallace; peer reviewers Michael Garcia and Adefoluke Shemsu; and advisor Peter Singer, program manager Bridget Chan, and others at New America's #ShareTheMicInCyber program for their support and contributions.
Editorial disclosure: The views expressed in this report are solely those of the author and do not reflect the views of New America, its staff, fellows, funders, or board of directors.
Executive Summary
Global data center demand is predicted to triple by 2030, with nearly 70 percent of the demand being driven by artificial intelligence (AI) workloads. The surge in AI data center demand is fueled by the exponentially increasing use of AI at work and integration into daily lives as well as the recognition that the technology can influence geopolitics, reorder the global economy, drive scientific discovery, and transform human lives and society.
Two barriers to supporting this increased demand for AI data centers are the required energy and security. While the energy concern has been highlighted and publicized, the security gaps have been understated: AI data centers have mostly been treated as part of traditional data centers and critical infrastructure, with no separate or focused effort on AI data center cybersecurity requirements.
However, AI data centers face an expanded set of threats. A successful cyberattack on an AI data center could enable threat actors to extract information about the AI model and weights, risking loss of sensitive training data as well as the integrity and confidentiality of the AI model. When AI models are exfiltrated, hackers can create vulnerabilities in AI models and bias outputs, as well as more easily and cheaply replicate AI models.
Thus, this report recommends a comprehensive framework for AI data center security that spans six layers of security and three types of approaches. The six layers of security are (1) hardware & compute, (2) network & storage, (3) model & data, (4) software & application, (5) physical access, and (6) geopolitical. Given the complexity, these six layers also require three approaches: technical, corporate policy, and national governance. Under this framework, the report's recommendations are as follows:
- Bridge the gaps between the technical, corporate policy, and national governance approaches with a framework that maps the threats to AI data centers across the six layers of security.
- Implement existing research and standards for technical requirements in an AI data center.
- Require technical measures across the six layers of security in corporate policies.
- Focus national governance measures on incentivizing operators to meet the technical and corporate policies needed for a cyber-secure AI data center.
Introduction
Despite the over 10,000 data centers already existing around the world, global data center demand is predicted to triple by 2030, with nearly 70 percent of the demand being driven by artificial intelligence (AI) workloads.1 This surge in AI data center demand is fueled by the exponentially increasing use of AI at work and integration into daily lives as well as the AI race prompted by the recognition that AI technology can influence geopolitics, reorder the global economy, drive scientific discovery, and transform human lives and society.2
Beyond the predictions, 2025 started with announcements of investments into AI data centers: Project Stargate, supported by the Trump administration, pledged a $500 billion investment for AI infrastructure; and Meta announced a $60–65 billion pledge to AI and an AI data center.3 Globally, France announced that MGX, Bpifrance, Mistral AI, and Nvidia are collaborating on building Europe's largest data center campus; EDGNEX Data Centres by DAMAC announced a $2.3 billion investment into a 144-megawatt AI data center in Indonesia; and Chinese tech firms plan to build more than 30 AI data centers with 115,000 Nvidia chips in the Xinjiang region.4
Two barriers to supporting this increased demand for AI data centers are the required energy and security.5 Yet while the energy concern has been highlighted and publicized, the security gaps have been relatively neglected.
Data centers already make up 1 to 2 percent of global energy demand, and that share is projected to rise to 21 percent by 2030 due to AI.6 The International Energy Agency reported that in 2024 AI servers drove 15 percent of electricity demand from data centers, and it projected that data center electricity consumption will more than double by 2030, with AI as the most influential factor.7 Recognizing the energy demand, President Biden issued the Executive Order on Advancing United States Leadership in Artificial Intelligence Infrastructure in January 2025, which heavily focused on federal support for AI data center energy needs and clean energy.8 Unfortunately, President Trump revoked this executive order six months later.
On the security front, though, AI data centers have mostly been treated as part of traditional data centers and critical infrastructure, with no separate or focused effort on AI data center cybersecurity requirements. For example, the Biden administration issued its National Security Memorandum on Critical Infrastructure Security and Resilience (NSM-22), which designated data centers as a critical part of U.S. infrastructure, indicating the importance of defending the critical infrastructure sectors from cyber activity led by nation-states.9 Beyond the United States, China's 2016 Cybersecurity Law required data centers to have cybersecurity measures against domestic and foreign threats.10
However, AI data centers face an expanded set of threats. A successful cyberattack on an AI data center could enable threat actors to extract information about the AI model and weights, risking loss of sensitive training data as well as the integrity and confidentiality of the AI model.11 When AI models are exfiltrated (stolen), hackers can introduce vulnerabilities into those models, bias their outputs, and replicate similar AI models with fewer resources.12
Thus, this report recommends a comprehensive framework that identifies requirements for implementing sufficient cybersecurity measures in AI data centers. This report will first provide the methodology, followed by a brief history of traditional data centers and AI data centers. This background will help build a necessary foundation for the discussions of cyber threats to AI data centers. Finally, the report concludes with a framework and recommendations for a comprehensive cybersecurity approach.
Citations
- "Data Centers," Data Center Map, accessed July 10, 2025; Bhargs Srivathsan et al., "AI Power: Expanding Data Center Capacity to Meet Growing Demand," McKinsey & Company, October 29, 2024.
- Ryan Pendell, "AI Use at Work Has Nearly Doubled in Two Years," Gallup, June 16, 2025; Adam Satariano and Paul Mozur, "The Global AI Divide," New York Times, June 21, 2025.
- Deepa Seetharaman and Tom Dotan, "Tech Leaders Pledge Up to $500 Billion in AI Investment in U.S.," Wall Street Journal, January 21, 2025; Meghan Bobrowsky, "Meta Spending to Soar on AI, Massive Data Center," Wall Street Journal, January 25, 2025.
- École Polytechnique, "MGX, Bpifrance, Mistral AI, and NVIDIA Launch Joint Venture to Build Europe's Largest AI Campus in France," May 19, 2025; Amber Jackson, "New U.S. $2.3 bn AI Data Centre by EDGNEX Hailed a 'Milestone,'" Data Centre Magazine, June 17, 2025; K. Oanh Ha, Yang Yang, and Naomi Garyan Ng, "China's Got Big Plans for AI—In the Desert," Bloomberg, July 8, 2025.
- Tim Fist and Arnab Datta, How to Build the Future of AI in the United States (Institute for Progress, October 23, 2024).
- Beth Stackpole, "AI Has High Data Center Energy Costs—But There Are Solutions," MIT Sloan, January 7, 2025.
- "AI Is Set to Drive Surging Electricity Demand from Data Centres While Offering the Potential to Transform How the Energy Sector Works," International Energy Agency, April 10, 2025.
- Joseph R. Biden, Executive Order on Advancing United States Leadership in Artificial Intelligence Infrastructure, 88 Fed. Reg. 10,001 (White House Archives, January 14, 2025).
- At the time of drafting this report, President Donald Trump had signed an executive order that included a review of NSM-22: Donald J. Trump, Achieving Efficiency Through State and Local Preparedness, March 19, 2025; Joseph R. Biden, National Security Memorandum on Critical Infrastructure Security and Resilience, National Security Memorandum/NSM-22, April 30, 2024.
- Lester Ross, "China Rolls Out Critical Information Infrastructure Security Protection Regulations," WilmerHale, August 19, 2021.
- "Risks of Data Exfiltration," SentinelOne, accessed July 15, 2025.
- Riscure Security Solutions Team, "Case Study: How TPUXtract Leveraged Keysight Tools for AI Model Extraction," Keysight, March 19, 2025.
Methodology
The findings in this report are based on a combination of open-source research, expert interviews, and analysis. The open-source research and expert interviews were conducted in parallel. Technical information was selected from technical reports, cyber and technology companies' work and articles, as well as interviews with experts in AI hardware and software, threat intelligence, engineering, and cybersecurity. Information on policies, recent news, and data on AI or data center trends was derived from news articles, think tank reports and briefs, and interviews with governance, cybersecurity policy, and foreign and domestic policy experts. In total, over 100 sources and five expert interviews contributed to this report's findings.
The Evolution of Data Centers
A Brief History
The prototypes of data centers, known as mainframes, have been around since the 1950s and 1960s. Mainframes were single, isolated supercomputers that processed data and executed complex calculations. Mainframes were not connected to a network until the 1990s, when multiple microprocessor computers, later called servers, started replacing mainframes.1 The modern data center is a physical facility with servers, networking equipment, storage devices, redundant power, and cooling infrastructure to store or process a large amount of data and to compute calculations. Data centers are critical information technology (IT) infrastructure because they store, distribute, and interpret data that is foundational to organizations' day-to-day operations.2 As data centers get increasingly complex due to evolving technologies, operators deploy smart control systems and management software to optimize performance and energy efficiency.3 Figure 1 illustrates the increasing size of data centers to help envision the evolving complexity.
Data Centers
The Telecommunications Industry Association (TIA) has a system that classifies data centers into four tiers based on data center design, including the center's architecture and topology, environmental design, power and cooling systems, cabling systems, redundancy, and safety and physical security, which affects resiliency. Tier 4 data centers are the most resilient, and Tier 1 centers are the least resilient, as represented in Figure 2 below.4
In addition to the tier system, latency also factors into data center performance. Latency refers to the time it takes for the data center to receive and process a user request. Low latency signifies high performance with less delay and time spent before the data center responds to the request.5 While Tier 4 data centers with the lowest latency possible may sound like the goal for all data centers, not all centers require this level of capability because they vary in their specific purposes and critical requirements. To highlight this point, Table 1 provides a high-level summary of different types of data centers and a sample set of critical requirements.6
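Latency is straightforward to quantify: it is the elapsed time between issuing a request and receiving the response. A minimal sketch is below; the `echo_handler` service and its 2-millisecond delay are hypothetical stand-ins for a real data center workload, not part of any standard.

```python
import time

def measure_latency_ms(handler, request):
    """Return the handler's response and the elapsed wall-clock time in milliseconds."""
    start = time.perf_counter()
    response = handler(request)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return response, elapsed_ms

# Hypothetical stand-in for a data center service endpoint.
def echo_handler(request):
    time.sleep(0.002)  # simulate 2 ms of processing delay
    return {"status": "ok", "echo": request}

response, latency_ms = measure_latency_ms(echo_handler, {"query": "inference"})
```

In practice, operators track such measurements as percentiles (e.g., p99) across many requests, since worst-case latency matters more than the average for user-facing AI inference.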
Given limited resources and the tradeoff between productivity and security, data centers prioritize different requirements when planning, constructing, and operating depending on their main purpose. Supporting AI is an emerging purpose for data centers. While there are still many unknowns about the related requirements, experts believe that AI data centers require high efficiency, significant power and electricity, high compute power, and low latency.7
AI Data Centers
An AI data center is commonly defined as an emerging type of data center built to support the high computational requirements of AI workloads. They can be divided into two subcategories: training AI data centers and inference AI data centers.8 Training AI data centers process large amounts of data to train AI models and conduct machine learning training, while inference AI data centers deliver AI-driven insights and support the deployment of AI models into applications for end users.9
Like traditional data centers, AI data centers have five major components: (1) compute resource, (2) data storage system, (3) network infrastructure, (4) power or energy capability, and (5) cooling system, but in the AI context, all five components need to demonstrate high performance and productivity.10 (Table A1 in the appendix summarizes the differences between AI and traditional data centers across these five major components.) Of these five components, this report focuses on the differences in compute resources due to the different impacts they have on AI data center cybersecurity.
Traditional data centers typically utilize central processing units (CPUs) that use logic circuitry to process data and execute commands. CPUs can have multiple cores that can support multiple software types, but they cannot process operations simultaneously. CPUs also do not have enough memory to support AI data centers鈥 workloads.11 Therefore, AI data centers use more powerful compute resources:
- Graphics processing units (GPUs) allow for parallel operations. They are often used for AI training data centers and sometimes in other types of large-scale data centers.
- Field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs) can be customized to efficiently process AI workloads.12
- Google's tensor processing units (TPUs) are a type of ASIC that optimizes performance for specific frameworks and effectively supports deep learning.
A key tradeoff when opting for more powerful compute resources, such as GPUs, is that they may require much more power and generate more heat than CPUs.13 Also, while CPUs and the more powerful compute hardware options face similar cyber threats, such as side-channel attacks, GPUs are vulnerable to additional cyber threats, which will be further detailed in the AI Data Center Threats section.
Beyond the five major components, supplementary considerations exist, such as facility location. As of April 2024, about 50 percent of the over 2,500 data centers in the United States are in Northern Virginia, Northern California, and Dallas, Texas, to prioritize low latency.14 Yet new AI data centers, especially training ones, are now being constructed in more remote areas, such as Indiana, Iowa, and Wyoming, to be closer to power plants and farther from cities due to concerns about draining power grids.15 AI inference data centers can be closer to cities and users: Cerebras AI is planning to build them in Santa Clara, California; Atlanta, Georgia; and Montreal, Canada.16 The location of AI data centers, which often depends on latency and power, can increase risks, especially if AI data centers are built in countries with cheap energy and land, as in Southeast Asia.17 This risk will be further detailed in the Changing World Threats section.
Given that AI data centers require the same five components as traditional ones, the two types of centers face similar cybersecurity threats. However, AI data centers face unique or increased threats due to different hardware requirements and location. Additional threats from evolving technologies and changing geopolitics also heighten security requirements.
Citations
- "A Brief History of Data Centers," Digital Realty, accessed June 14, 2025.
- "What Is a Data Center?," Zscaler, accessed July 14, 2025.
- Emil Sayegh, "The Billion-Dollar AI Gamble: Data Centers as the New High-Stakes Game," Forbes, September 30, 2024.
- "What Are Data Center Tiers?," Digital Realty, accessed June 14, 2025; "What Is a Data Center?," Amazon Web Services, accessed June 14, 2025.
- "What Is Low Latency?," Cisco, accessed June 15, 2025.
- Sayegh, "The Billion-Dollar AI Gamble"; Melissa Palmer, "Hyperscalers: The Complete Guide to What, Why and How," SolarWinds (blog), January 24, 2023.
- Srivathsan et al., "AI Power"; Sayegh, "The Billion-Dollar AI Gamble."
- "AI Data Center," Sunbird, accessed June 15, 2025.
- Sayegh, "The Billion-Dollar AI Gamble."
- "What Is an AI Data Centre, and How Does It Work?," Macquarie Data Centres, July 15, 2024.
- Jacob Roundy, "How Do CPU, GPU, and DPU Differ from One Another?," TechTarget, February 5, 2025.
- "How Are AI Data Centers Changing Infrastructure?," TRG Datacenters, accessed June 15, 2025.
- Brian Venturo, "The Redesign of the Data Center Has Already Started. Here's What It Looks Like," CoreWeave, March 19, 2024.
- Mary Zhang, "United States Data Centers: Top 10 Locations in the USA," Dgtl Infra, April 11, 2024.
- Srivathsan et al., "AI Power."
- David Chernicoff and Matt Vincent, "Cerebras Unveils Six Data Centers to Meet Accelerating Demand for AI Inference at Scale," Data Center Frontier, March 18, 2025.
- Dylan Butts, "Malaysia Is Emerging as a Data Center Powerhouse amid Booming Demand from AI," CNBC, June 16, 2024.
Cyber Threats
The different pieces of a data center, the purposes or types, the five key components, and additional considerations, influence a data center's vulnerabilities and risks, and all need to be secured for a cyber-secure AI data center. This includes the requirements of AI workloads: the models, weights, and training data.
Additionally, while not a foundational requirement, increasingly complex and large data centers utilize data center infrastructure management (DCIM) software to efficiently maintain and manage the facilities. DCIMs will be useful for AI data centers in the following capacities:1
- Intelligent capacity search enables quickly finding which device or rack has the capacity for additional AI deployment.
- Predictive analysis forecasts the impact of AI workloads on rack capacity and power usage.
- Automatic server power budgeting calculates the required power for servers.
- Dynamic single-line power diagrams depict power capacity and load to support redundancy planning and protect against breaker trips.
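To make the power budgeting capability concrete, the sketch below shows the kind of arithmetic such a DCIM feature performs. All names, wattage figures, and the headroom margin are illustrative assumptions, not vendor data or a real DCIM API.

```python
# Hypothetical sketch of automatic server power budgeting for a GPU rack.
# Figures are illustrative assumptions, not vendor specifications.

RACK_CAPACITY_W = 12_000  # assumed usable power per rack, in watts

def required_power(servers):
    """Sum each server's nameplate draw plus a headroom margin for AI workload bursts."""
    return sum(s["nameplate_w"] * (1 + s["headroom"]) for s in servers)

def fits_in_rack(servers, capacity_w=RACK_CAPACITY_W):
    """Check whether the budgeted draw stays within the rack's capacity."""
    return required_power(servers) <= capacity_w

gpu_servers = [
    {"name": "gpu-node-1", "nameplate_w": 3_200, "headroom": 0.20},
    {"name": "gpu-node-2", "nameplate_w": 3_200, "headroom": 0.20},
    {"name": "gpu-node-3", "nameplate_w": 3_200, "headroom": 0.20},
]

budget_w = required_power(gpu_servers)  # 3 x 3,200 W x 1.2 headroom = 11,520 W
```

Real DCIM tools layer telemetry and redundancy rules on top of this kind of calculation; the point here is only that budgeting is a deterministic check of projected draw against capacity.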
There is also newer software designed specifically for certain hardware mostly found in AI data centers, such as Trend Vision One Sovereign and Private Cloud (SPC) for GPUs and TensorFlow for TPUs, further emphasizing the importance of cyber-secure software and applications deployed in AI data centers.2
A Security Framework for AI Data Centers
In considering all the pieces inside an AI data center, this report proposes the following framework with six layers of security: (1) hardware & compute, (2) network & storage, (3) model & data, (4) software & application, (5) physical access, and (6) geopolitical.
Some elements may require cybersecurity considerations at multiple layers depending on the threats they face. For example, training data in AI data centers requires security at the model & data layer from cyberattacks, at the physical access layer from infiltrators who physically enter the facilities to steal the data, and at the geopolitical layer from state-sponsored actors who are targeting AI training data for their own AI sovereignty. Figure 3A below maps the data center pieces at risk at each of the security layers.
This section of the report will dive into the different threats faced by both traditional and AI data centers and map them to the six-layer framework.
Traditional Data Center Threats
Data centers are high-value targets because they can store important data, provide critical services, and run networks, applications, security, and virtual machines. As previously mentioned in Table 1, data centers support mission-critical applications at banks and health care systems, allow cloud-based services to be scalable, process data for autonomous vehicles and the Internet of Things, and enable communication services. It is no overstatement to say that data centers are the backbone of all online services. Thus, a disruption or outage at data centers, whether caused intentionally by threat actors or unintentionally by human error or process failures, can be costly for data center operators, risking financial loss, reputational damage, regulatory penalties, and loss of customers.3 For example, in 2021, when data center outages cost companies an average of $100,000 per incident, Amazon Web Services experienced a five-hour data center outage due to a failure in network devices in the US-EAST-1 Region, costing the company $34 million in revenue.4
Both sophisticated nation-state actors and financially motivated cyber criminals target data centers with disruption attacks, ransomware incidents, and data breaches.5 Four common cyberattacks on data centers are listed in Table 2 along with examples of incidents.6
Distributed denial of service (DDoS) attacks rely on network connectivity to target and overload a system or service, making defense at the network & storage layer critical.7 A ransomware attack targets and encrypts data, highlighting the importance of cybersecurity at the model & data layer.8 A supply chain attack can compromise both hardware and software inside a data center, making the hardware & compute and software & application layers crucial.9 Social engineering attacks depend on threat actors gaining some form of access to the data center, whether by calling and deceiving the target's customer support, by manipulating employees into installing remote access Trojans (RATs), malware that gives hackers remote access to and control of a targeted system, or through backdoors into data center components.10 Social engineering attacks therefore necessitate security at the model & data and physical access layers. Furthermore, the 2017 DDoS campaign against Google and the 2018 Supermicro supply chain attack referenced in Table 2 involved actors in China, requiring consideration of the geopolitical layer that will be further detailed in the Changing World Threats section.
Threat actors can also combine these types of attacks in one operation. For example, in 2024, the German data center power supply company Bender experienced a ransomware attack in which hackers infiltrated its operating systems and gained unauthorized access to account, financial, and banking data.11 This incident, which affected the power and operations of a data center, was a combination of ransomware and supply chain attacks.
Beyond the four common types of cyberattack, data centers are particularly vulnerable to another type of attack: side-channel attacks, which collect information from a system's processes and execution or attempt to influence a system's program.12 In data centers, side-channel attacks include measuring or surveilling fan power or the sounds a CPU makes, or using sensors that measure the electromagnetic field. These attacks can reveal to threat actors CPU-level activity, the data architecture, and data usage, requiring security at the hardware & compute layer. In July 2025, global semiconductor company Advanced Micro Devices reported that four new processor vulnerabilities could allow hackers to conduct timing-based side-channel attacks, and cybersecurity company CrowdStrike labeled these weaknesses as critical threats.13
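The logic of a timing-based side channel can be illustrated with a deliberately simplified sketch: a comparison routine that exits on the first mismatched byte leaks, through elapsed time alone, how many leading characters of a guess are correct. The secret, the per-byte delay, and the function names here are all artificial demo assumptions, not a real data center exploit.

```python
# Illustrative timing side channel: early-exit comparison leaks match length.
import time

SECRET = "hunter2"  # stand-in for any secret a side channel might target

def naive_compare(secret, guess):
    """Compares byte by byte and returns early, creating a timing leak."""
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False      # early exit: elapsed time reveals match length
        time.sleep(0.0005)    # exaggerate per-byte cost for the demo
    return True

def timed(guess):
    """Measure how long the comparison takes for a given guess."""
    start = time.perf_counter()
    naive_compare(SECRET, guess)
    return time.perf_counter() - start

# A guess sharing more leading characters with the secret takes measurably
# longer, so an attacker can recover the secret one character at a time.
slow = timed("hunterX")  # 6 correct leading characters
fast = timed("Xunter2")  # 0 correct leading characters
```

The standard defense is constant-time comparison (e.g., Python's `hmac.compare_digest`), which is the software analogue of the hardware mitigations the report discusses: remove the correlation between secret data and observable behavior.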
Finally, there are two specific types of risks also critical to data center security. The first is insider risk, defined by the Cybersecurity and Infrastructure Security Agency as "the potential for an insider to use their authorized access or understanding of an organization to harm that organization," whether intentional or unintentional.14 Insider risk can be mitigated by limiting access to trusted people, and it requires cyber defense at the physical access layer. The second is supply chain risk. Beyond supply chain attacks described earlier in this section, there is an added risk because a majority of the components in a data center are developed by third-party suppliers. At the hardware layer, data centers rely on a global network of hardware suppliers, which creates risks of counterfeit and flawed hardware even without a threat actor's interception or attack.15 At the software layer, third-party tools and applications hosted on data centers can have vulnerable code, making the selection of third-party suppliers as well as secure design of applications and code critical.16 Figure 3B below maps traditional data center cyberattacks and risks to the proposed six layers of security.
AI Data Center Threats
In addition to all the threats that traditional data centers face, AI data centers face unique threats. Specifically: (1) The hardware & compute layer needs to consider ASIC and AI-specific hardware-level attacks, such as memory-level and TPU-specific attacks, and (2) the data layer needs to expand to include model security since AI model weights and training data are in AI data centers.
At the hardware & compute layer, defending against supply chain attacks and vulnerabilities, as well as side-channel attacks, remains critical. AI data centers need trusted and secured GPUs and ASICs, which, like CPUs, rely on a global supply chain.17
Once the hardware is inside the AI data center, the sounds from GPUs can reveal information to threat actors about those GPUs, the model architecture, and model weights.18 A common GPU side-channel attack is keystroke inference, through which hackers monitor GPU-based rendering workloads to gain access to keystrokes and user inputs.19 Tim Fist, director of emerging technology at the Institute for Progress, highlighted that "GPUs are more vulnerable than CPUs to memory-level attacks."20 GPUs do not always have sufficient memory isolation, which means residual data can carry over from one process to another, allowing threat actors to access model weights and training data.21 There is also GPU-specific malware that can execute malicious code in a GPU's memory and could bypass traditional CPU security tools. Notably, in January 2025, Nvidia announced that seven new vulnerabilities were found in its GPUs, three of them of high severity.22
GPU vulnerabilities can be found in traditional data centers because large-scale centers deploy GPUs. TPUs, on the other hand, which are also subject to the side-channel and memory-level attacks mentioned above, are uniquely designed for AI and were first deployed internally by Google in 2015.23 In early 2025, a TPU-specific side-channel attack, TPUXtract, was discovered. TPUXtract exploits unintentional TPU data leaks to enable a threat actor to infer an AI model's parameters, essentially allowing AI model exfiltration and intellectual property (IP) theft.24
Given that the hardware & compute layer threats enable hackers to extract information about the AI model and weights deployed in the AI data center, the threats at the data layer can extend to the model layer. When data is exfiltrated from a traditional data center, the impacts can include the loss of sensitive data, regulatory issues, and reputational damage.25 When AI model information and IP are accessed and stolen, the impacts expand to include risking the integrity and confidentiality of the AI model. Threat actors can manipulate exfiltrated models to create vulnerabilities and bias decision-making.26 Additionally, as it becomes increasingly evident that AI will influence geopolitics, the global economy, national security, and human lives, the importance of protecting models becomes critical.27 This point will be further described in the Changing World Threats section.
Model threats include exfiltration of AI training data and model weights through traditional ransomware and social engineering attacks. Furthermore, AI models face specific model-level threats:28
- Data poisoning attacks: accidentally or purposefully including incorrect data in the AI training dataset, leading to erroneously trained AI models
- Model inversion attacks: recovering training data from AI models by querying models, examining the outputs, and extracting information
- Model stealing attacks: querying AI models and using the outputs to train a replacement model that mimics the original model
- Model poisoning attacks: modifying model parameters or architecture to create a backdoor or change the model鈥檚 behavior29
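The first of these threats, data poisoning, can be demonstrated with a deliberately tiny sketch. The nearest-centroid "model," the two-cluster dataset, and the query point are all illustrative assumptions for demonstration, not a production AI pipeline: the attacker injects mislabeled points into the training set, and the same query then flips classes even though nothing at inference time changed.

```python
# Minimal data poisoning demonstration on a toy nearest-centroid classifier.

def centroid(points):
    """Mean of a list of equal-length tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(samples):
    """Compute one centroid per label from (point, label) pairs."""
    by_label = {}
    for point, label in samples:
        by_label.setdefault(label, []).append(point)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, point):
    """Assign the label of the nearest centroid (squared Euclidean distance)."""
    return min(model, key=lambda lbl: sum((a - b) ** 2 for a, b in zip(model[lbl], point)))

# Clean training set: class "low" clusters near (0, 0), "high" near (10, 10).
clean = [((0, 0), "low"), ((1, 0), "low"), ((0, 1), "low"),
         ((10, 10), "high"), ((9, 10), "high"), ((10, 9), "high")]

# Poisoned copy: the attacker injects points near (0, 0) mislabeled "high",
# dragging the "high" centroid toward the "low" cluster.
poisoned = clean + [((0, 0), "high"), ((1, 1), "high"), ((0, 1), "high"), ((1, 0), "high")]

clean_model = train(clean)
poisoned_model = train(poisoned)

query = (3, 3)
clean_label = predict(clean_model, query)        # "low" on clean training data
poisoned_label = predict(poisoned_model, query)  # flips to "high" after poisoning
```

Real attacks operate at far larger scale and with subtler perturbations, but the mechanism is the same: corrupting training data shifts the learned decision boundary, which is why the model & data layer must protect training sets as well as finished weights.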
At a high level, AI hardware-specific cyber threats and model threats expand cyber threats to AI data centers. Figure 3C below maps cyberattacks specific to AI data centers to the proposed six layers of security.
Changing World Threats
As data centers and their components change to meet the requirements of AI data centers, the world continues to transform as well, introducing new or enhanced threats for emerging AI data centers.
The first type of change is in volatile geopolitics. While traditional data centers also face geopolitical threats and are targeted by nation-state actors, these threats are heightened for AI data centers due to AI's significance for national security and economic competitiveness. With the advent of the AI race, the concept of sovereign AI, a nation's ability to develop, use, and govern its own AI models and related infrastructure, is on the rise, and many believe that sovereign AI is crucial for national security and economic competitiveness.30
In April 2025, the U.S. Department of Defense highlighted that the Joint Staff is implementing AI to improve military operations, to improve a commander's decision-making and responsiveness, and to streamline processes.31 The potential for AI technologies to help counter threats across all sectors, including critical energy infrastructure, will only embed AI deeper into critical infrastructure.32 In terms of economic competitiveness, technological innovation drives broad economic gains, and with AI specifically, automation of routine tasks and AI-supported creative and technical work are predicted to further innovation, free people for strategic tasks, and improve a nation's economy.33
These uses for AI indicate that if actors stole AI models, weights, or training data, sensitive data related to the military, economy, critical infrastructure, and more would be leaked. Additionally, stolen AI models can help hackers create deepfakes or convincing phishing emails for enhanced psychological warfare and disinformation campaigns, election interference, cybercrime and financial fraud, and critical infrastructure attacks.34
As AI becomes a key component of national security and economic competitiveness, AI data centers become key assets to protect from foreign adversaries, especially from cyber threat actors sponsored by Russia, China, Iran, and North Korea. Nation-state threat actors are well resourced and sophisticated, posing a serious threat to even the biggest U.S. companies and critical industries, as CrowdStrike’s chief security officer Shawn Henry noted in 2023.35 For example, the Chinese state-sponsored hacking group Salt Typhoon gained access to an unprecedented amount of information from the largest U.S. telecommunications companies, including information on high-value targets like Donald Trump and JD Vance.36
Unfortunately, like most businesses, AI data centers’ commercial operators are typically not equipped to defend against such sophisticated operations. In 2024, OpenAI emphasized the importance of increasing AI data center and infrastructure security.37
Even before an AI data center is operational, there is potential for sabotage, given that Chinese companies exclusively manufacture many AI data center components, such as most of the transformer substations critical for power systems; this means Chinese companies could install backdoors into the hardware. In addition, most AI-specific GPUs are made in Taiwan, which China claims as its territory in a long-running sovereignty dispute. Historically, Taiwan Semiconductor Manufacturing Company (TSMC) technologies and products have been unlawfully transferred to mainland China, and there is strong suspicion that the Chinese Communist Party (CCP) has infiltrated TSMC and U.S. labs with spies.38
Once an AI data center is operational, geopolitical threats continue: AI data centers are vulnerable to state-sponsored disruption and exfiltration attacks due to insufficient cybersecurity.39 For example, Russia has the sophisticated cyber capabilities needed to infiltrate AI data centers, steal models, and potentially run a stolen model on its own infrastructure.40
Despite the potential state-sponsored targeting of AI data centers, companies and nations are risking increased threats by building AI data centers abroad and joint centers shared with other countries. Figure 4 below shows the number of AI data centers in different countries, and also reveals that Asia has the most.41 In January 2025, the United States and the United Arab Emirates (UAE) announced a partnership to build a data center in Abu Dhabi, which will be the largest AI data center outside the United States.42 Concerningly, the Persian Gulf is a part of China’s Digital Silk Road 2.0, and the UAE has adopted Chinese 5G technology and city-wide surveillance programs, potentially giving the CCP access to the AI data center. Companies also choose to build AI data centers abroad in locations that have cheap energy and land, such as Malaysia.43
The second type of change is evolving technologies. The same AI capabilities that these data centers enable can, when wielded by U.S. competitors, be turned against critical data center infrastructure. Threat actors are already using AI tools to enhance phishing campaigns, research target networks, conduct post-compromise activities, and support coding tasks.44 They are also using AI-generated deepfakes to bypass multifactor authentication.45
AI models themselves can also pose a threat to data centers through AI self-exfiltration. Traditional data centers face threats from backdoors installed in data center components, allowing a threat actor to access or exfiltrate sensitive information. AI data centers face an additional threat of backdoors installed in AI models through model poisoning, which could enable threat actors to steal model weights and information. In addition, model self-exfiltration is a novel threat in which an AI model deceives the user and the cybersecurity measures in place in order to leak its own model weights and sensitive data.46 Typical methods to protect model weights, such as air gaps, may not stop model self-exfiltration because the model may take over the system.47 This new threat, a novel form of data exfiltration and leakage, further necessitates enhanced model security.
Besides AI, there are other technological evolutions in the works that will enhance threats to AI data centers, such as quantum computing, which will soon require next-generation cryptography.48 Figure 3D below maps changing world threats to the proposed six layers of security, and Figure 3E compiles Figures 3B–3D to map the different types of threats across the layers.
Citations
- “Data Center Infrastructure Management,” Sunbird, accessed June 16, 2025.
- Agam Shah, “Trend Micro, Nvidia Partner to Secure AI Data Centers,” DarkReading, June 6, 2024.
- “Data Center Threats and Vulnerabilities,” Check Point Software Technologies, accessed June 18, 2025.
- Rich Miller, “Problems With AWS Network Devices Caused Widespread Cloud Outage,” Data Center Frontier, December 8, 2021; Bill Kleyman, “The Data Center Ransomware Attack That Costs You Everything,” Data Center Knowledge, September 1, 2023.
- Beth Maundrill, “Cybersecurity Implications of Data Centres as Critical National Infrastructure,” Infosecurity Europe, October 28, 2024.
- “Datacenter Vulnerabilities: 7 Life-Changing Attacks You Must Know,” Enterprise Engineering Solutions Corporation, accessed July 15, 2025; Catalin Cimpanu, “Google Says It Mitigated a 2.54 Tbps DDoS Attack in 2017, Largest Known to Date,” ZDNet, October 16, 2020; Daryna Antoniuk, “Indonesia’s National Data Centre Encrypted With LockBit Ransomware Variant,” The Record, June 24, 2024; Curtis Franklin, “Report: In Huge Hack, Chinese Manufacturer Sneaks Backdoors Onto Motherboards,” DarkReading, October 5, 2018; “Application Attacks,” Contrast Security, accessed July 15, 2025; Abdelrahman Esmail, “Cryptojacking via CVE-2023-22527: Dissecting a Full-Scale Cryptomining Ecosystem,” Trend Micro, August 28, 2024; “Cyber Attacks on Data Center Organizations,” Resecurity, February 20, 2023.
- Josh Fruhlinger and Lucian Constantin, “DDoS Attacks: Definition, Examples and Techniques,” CSO, May 17, 2024.
- Kurt Baker, “Introduction to Ransomware,” CrowdStrike, March 4, 2025.
- “Third-Party Data Breaches: What You Need to Know,” Mitratech, January 7, 2025.
- Computer Security Resource Center, “Social Engineering,” National Institute of Standards and Technology, accessed July 15, 2025; “What Is a Remote Access Trojan?,” Fortinet, accessed July 15, 2025; “Data Center Threats and Vulnerabilities,” Check Point Software Technologies, accessed June 18, 2025.
- Sebastian Moss, “Data Center Power Supply Business Bender Hit by Ransomware Attack,” Data Center Dynamics, December 3, 2024.
- Scott Robinson, Gavin Wright, and Alexander S. Gillis, “What Is a Side-Channel Attack?,” TechTarget, April 8, 2025.
- Gyana Swain, “AMD Discloses New CPU Flaws That Can Enable Data Leaks via Timing Attacks,” CSO, July 10, 2025.
- Maundrill, “Cybersecurity Implications of Data Centres”; “Defining Insider Threats,” Cybersecurity and Infrastructure Security Agency, accessed June 18, 2025.
- Matt Vincent, “How Tariffs Could Impact Data Centers, AI, and Energy amid Supply Chain Shifts,” Data Center Frontier, April 3, 2025; “Securing the Hardware Supply Chain,” OPSWAT, accessed June 18, 2025.
- “Data Center Threats and Vulnerabilities,” Check Point Software Technologies.
- “Reimagining Secure Infrastructure for Advanced AI,” OpenAI, May 3, 2024.
- Tim Fist, interview by Seungmin Lee, April 24, 2025.
- “GPU Vulnerability: Side-Channel Attacks,” Liquid Web, accessed July 15, 2025.
- Tim Fist, interview by Seungmin Lee, April 24, 2025.
- “GPU Vulnerability: Side-Channel Attacks,” Liquid Web.
- Davey Winder, “Nvidia Security Warning—Act Now as 7 New GPU Vulnerabilities Confirmed,” Forbes, January 28, 2025.
- Chaim Gartenberg, “TPU Transformation: A Look Back at 10 Years of Our AI-Specialized Chips,” Google Cloud, July 31, 2024.
- Nate Nelson, “With ‘TPUXtract,’ Attackers Can Steal Orgs’ AI Models,” DarkReading, December 13, 2024.
- “Risks of Data Exfiltration,” SentinelOne.
- “Case Study: How TPUXtract Leveraged Keysight Tools for AI Model Extraction.”
- Satariano and Mozur, “The Global AI Divide.”
- “Top 14 AI Security Risks in 2024,” SentinelOne, accessed June 18, 2025.
- “Top 14 AI Security Risks in 2024.”
- Muath Alduhishy, “Sovereign AI: What It Is, and 6 Strategic Pillars for Achieving It,” World Economic Forum, April 25, 2024.
- Wes Shinego, “Defense Officials Outline AI’s Strategic Role in National Security,” U.S. Department of Defense, April 23, 2025.
- Shinego, “Defense Officials Outline AI’s Strategic Role in National Security.”
- Alduhishy, “Sovereign AI.”
- Shlomit Wagman, “Weaponized AI: A New Era of Threats and How We Can Counter It,” Harvard Kennedy School Ash Center, April 8, 2025.
- Carrie Pallardy, “What CISOs Need to Know 国产视频 Nation-State Actors,” InformationWeek, December 12, 2023.
- Erica D. Lonergan and Michael Poznansky, “A Tale of Two Typhoons: Properly Diagnosing Chinese Cyber Threats,” War on the Rocks, February 25, 2025.
- “Reimagining Secure Infrastructure for Advanced AI.”
- Jeremie Harris and Edouard Harris, America’s Superintelligence Project (Gladstone AI, April 2025).
- Billy Perrigo, “Exclusive: Every AI Datacenter Is Vulnerable to Chinese Espionage, Report Says,” Time, April 22, 2025.
- Harris and Harris, America’s Superintelligence Project.
- Satariano and Mozur, “The Global AI Divide”; Zoe Hawkins, Vili Lehdonvirta, and Boxi Wu, “AI Compute Sovereignty: Infrastructure Control Across Territories, Cloud Providers, and Accelerators,” SSRN, June 24, 2025.
- Amy Gunia, “Will ‘Massive’ Gulf Deals Cement the U.S. Lead in the Race for Global AI Dominance?,” CNN, May 22, 2025.
- Tye Graham and Peter W. Singer, “How China’s Tech Giants Wired the Gulf,” Defense One, May 13, 2025; Butts, “Malaysia Is Emerging as a Data Center Powerhouse.”
- Google Threat Intelligence Group, “Adversarial Misuse of Generative AI,” Google Cloud, January 29, 2025.
- Andersen Cheng, “The Race to Build Data Centers Is On–Here’s How We Keep Them Secure,” TechRadar Pro, December 4, 2024.
- Marius Hobbhahn, “Scheming Reasoning Evaluations,” Apollo Research, January 23, 2025.
- Austin Carson, interview by Seungmin Lee, April 17, 2025.
- Cheng, “The Race to Build Data Centers Is On.”
A Framework for Cyber-Secure AI Data Centers
Given the threats and high-value nature of the target, the cybersecurity of AI data centers needs to be top tier. As Tim Fist commented, “AI data centers used to train and run the most powerful models will likely need to be secured with nation-state-level adversaries in mind—this will likely require taking some of the measures typically only used on government data centers used to store and process highly classified information, as well as many additional measures specific to AI performance and security requirements.”1 In line with this comment, this report suggests that each of the six layers of security requires three approaches: technical, corporate policy, and national governance (see Figure 5). The framework bridges the gaps among these three approaches by mapping the threats to AI data centers across the six layers of security.
Technical Measures
Recommendation 1: Implement existing research and standards for technical requirements in AI data centers.
Informing the security approach from a technical perspective, RAND published a report in May 2024 that created security levels from one to five for securing model weights.2 Security Level 1 (SL1) indicates an AI system that can defend against amateur attacks, and Security Level 5 (SL5) can protect against the most sophisticated attacks, even ones by nation-state actors. The report includes a benchmark for each level, detailing technical security measures necessary for that level.3 The Institute for Progress built on RAND’s work and outlined an overview of technical measures required for AI data centers to reach SL4 across supply chain, network & storage, hardware, and physical access security.4 The technical needs for a cyber-secure AI data center are thus generally well known. However, corporate policy and national governance measures need to be in place for a top-tier consolidated cybersecurity approach that incentivizes AI data center companies and operators to implement the required technical measures.
Corporate Policy Measures
Recommendation 2: Corporate policies need to require technical measures across the six layers of security.
For AI data center companies and operators to have the appropriate technical measures in place, they would need corporate governance measures that require the technical mitigations in a structured process. For example, corporate policies that require all AI data centers to have a Faraday cage or shield chamber deployed with the hardware are necessary in order to mitigate side-channel attacks on data center hardware and defend against tracking and monitoring of electromagnetic emanations.5 At the model & data layer, a corporate policy needs to require continuous AI audits and monitoring in order to secure AI models, identify backdoors and vulnerabilities in them, and defend against model and data exfiltration.
National Governance Measures
Recommendation 3: National governance measures should focus on incentivizing operators to meet the technical and corporate policies needed for a cyber-secure AI data center.
National governance approaches need to provide a framework that encourages responsible AI data center construction and operations. Companies have natural incentives to make AI data centers cyber secure, such as the reputational risks of data leakage and the financial risks of outages and disruptions; even so, defending these assets from the most sophisticated threat actors is not easy and requires significant resources and investments. National policy approaches are currently immature: While there are over 200 national or supranational AI laws and regulations, very few actually govern AI usage, deployment, and infrastructure with binding legislation.6 Some existing standards that can help support AI data center security include:7
- The National Institute of Standards and Technology’s (NIST’s) Secure Software Development Framework (SSDF) includes best practices for decreasing software-level vulnerabilities and has an AI addendum with best practices for AI models.8
- NIST’s SP 800-171 provides standards for protecting controlled unclassified information.9
- NIST’s SP 800-53 includes standards for security and privacy controls.10
- NIST’s FIPS 140-3 outlines security requirements for cryptographic modules, the hardware and software that process and protect sensitive data.11
- The Federal Risk and Authorization Management Program (FedRAMP), based on NIST SP 800-53, uses security assessments, authorization, and continuous monitoring to determine whether a cloud service provider complies with the program’s standards.12
- The U.S. Department of Defense has a Cybersecurity Maturity Model Certification (CMMC) program that requires defense contractors to have sufficient security measures to protect unclassified and sensitive information.13
- The Cybersecurity and Infrastructure Security Agency (CISA) has a Zero Trust Maturity Model that helps define best practices for controlling access to sensitive data.14
- CISA’s Software Bill of Materials (SBOM) guidance offers an “ingredients list” for software that helps identify software and supply chain vulnerabilities.15
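To illustrate the “ingredients list” idea, a minimal SBOM in the CycloneDX JSON format might look like the sketch below; the component name, version, and package URL are invented for this example, not drawn from any cited standard. Each listed component can be matched against published vulnerability advisories to spot at-risk software in a data center’s stack:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "example-inference-runtime",
      "version": "2.4.1",
      "purl": "pkg:pypi/example-inference-runtime@2.4.1"
    }
  ]
}
```

A real SBOM would enumerate every component in the deployed software, but even this skeletal form shows why the inventory is useful: when a CVE is published against a listed component and version, the affected systems can be identified immediately.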
Policies and objectives focused on AI data centers have also emerged but have not consistently required nor incentivized AI data center security. President Biden’s 2025 Executive Order (EO) 14141: Advancing United States Leadership in Artificial Intelligence Infrastructure required AI data center operators to submit security proposals if requesting to build on federal land.16 Unfortunately, President Trump revoked EO 14141 in July 2025. Furthermore, the Trump administration’s EO 14179: Removing Barriers to American Leadership in Artificial Intelligence and EO 14154: Unleashing American Energy led the Department of Energy to designate 16 potential federal sites for rapid AI data center construction that may not take into account security requirements.17 AI data center construction on federal land has yet to materialize.
More recently, in July 2025, President Trump revealed his AI Action Plan, which further mandates that federal land be made available for AI data centers and supporting infrastructure construction without security requisites.18 Additionally, the action plan’s central theme of “Build, Baby, Build!” encourages businesses to build AI tech stacks and data centers abroad and only mentions high-security technical standards for AI data centers utilized by the military and intelligence community.19
National regulations with significant fines and risk-based frameworks that require stronger security measures in AI data centers are currently lacking but are crucial for incentivizing AI data center operators and companies to meet high-security technical requirements.20 Benefits such as tax breaks or access to federal land tied to strong security requirements can also encourage operators.
Once national governance measures exist, they can further incentivize AI data center businesses and operators by highlighting the return on investment when complying with regulations: When there is a distinction between noncompliant and compliant AI data centers, investors, customers, and potential employees will flock to the more cyber-secure centers.21
For example, at the software & application layer, supply chain attacks and vulnerabilities pose risks to AI data centers. A technical approach to mitigate the risk would be to implement secure coding or to test software with penetration testing and red teaming.22 The necessary corporate policy would be to allow only tested software and applications into the company’s AI data centers and to require audits of source code before deploying the software.23 Finally, national governance measures that allow only AI data center operators who follow CISA’s Secure by Design approach24 or implement software with SBOMs to become government contractors can incentivize AI data center companies to implement corporate policies that meet the technical requirements.25
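As a minimal sketch of the "allow only tested software" corporate policy described above (the function names and manifest format here are invented for illustration, not taken from any cited framework), a deployment gate could compare each artifact's cryptographic digest against an allowlist produced by the audit and testing pipeline:

```python
import hashlib

def sha256_digest(artifact: bytes) -> str:
    """Return the hex SHA-256 digest of a software artifact's bytes."""
    return hashlib.sha256(artifact).hexdigest()

def is_approved(artifact: bytes, allowlist: set[str]) -> bool:
    """Admit an artifact only if its digest was pre-approved after audit.

    Any modification to the artifact, however small, changes its digest,
    so tampered or untested builds fail the check.
    """
    return sha256_digest(artifact) in allowlist

# Hypothetical manifest emitted by the audit / penetration-testing pipeline:
# a set of digests of builds that passed review.
approved_manifest = {sha256_digest(b"audited-build-1.0")}

print(is_approved(b"audited-build-1.0", approved_manifest))  # True
print(is_approved(b"tampered-build", approved_manifest))     # False
```

In practice such a gate would sit inside the CI/CD pipeline and the manifest itself would be signed, but the core control is the same: no artifact reaches the AI data center unless its digest matches a build that was actually tested.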
Citations
- Tim Fist, interview by Seungmin Lee, April 24, 2025; Fist and Datta, How to Build the Future of AI in the United States.
- Sella Nevo et al., Securing AI Model Weights: Preventing Theft and Misuse of Frontier Models (RAND, 2024).
- Nevo et al., Securing AI Model Weights.
- Fist and Datta, How to Build the Future of AI in the United States.
- Vladimir Antić et al., “Protecting Data at Risk of Unintentional Electromagnetic Emanation: TEMPEST Profiling,” Applied Sciences 14, no. 11 (June 3, 2024): 4830.
- Swati Srivastava, “Regulate or Innovate? Governing AI amid the Race for AI Sovereignty,” 国产视频, May 1, 2025.
- Arnab Datta and Tim Fist, Compute in America: A Policy Playbook (Institute for Progress, February 3, 2025).
- Computer Security Resource Center, “Secure Software Development Framework (SSDF),” National Institute of Standards and Technology, updated February 27, 2025.
- Ron Ross and Victoria Pillitteri, Protecting Controlled Unclassified Information in Nonfederal Systems and Organizations, Special Publication 800-171, rev. 3 (National Institute of Standards and Technology, May 2024).
- Joint Task Force Working Group, Security and Privacy Controls for Information Systems and Organizations, Special Publication 800-53, rev. 5, update 1 (National Institute of Standards and Technology, October 2024).
- National Institute of Standards and Technology (NIST), Security Requirements for Cryptographic Modules, FIPS PUB 140-3 (NIST, March 22, 2019).
- “FedRAMP,” General Services Administration, updated March 31, 2025.
- U.S. Department of Defense Chief Information Officer, “Cybersecurity Maturity Model Certification,” accessed July 24, 2025.
- “Zero Trust Maturity Model,” Cybersecurity and Infrastructure Security Agency, accessed April 11, 2023.
- “Software Bill of Materials (SBOM),” Cybersecurity and Infrastructure Security Agency, accessed June 21, 2025.
- Biden, Executive Order on Advancing United States Leadership in Artificial Intelligence Infrastructure.
- Donald J. Trump, Executive Order 14179: Removing Barriers to American Leadership in Artificial Intelligence, 90 FR 874 (The White House, January 31, 2025); Secretary of the Interior, Secretary’s Order No. 3418: Unleashing American Energy (U.S. Department of the Interior, February 3, 2025); “DOE Identifies 16 Federal Sites Across the Country for Data Center and AI Infrastructure Development,” Department of Energy, April 3, 2025.
- Winning the Race: America’s AI Action Plan (The White House, July 2025), accessed August 7, 2025.
- Winning the Race.
- Srivastava, “Regulate or Innovate?”
- Mariami Tkeshelashvili and Tiffany Saade, Navigating AI Compliance, Part 2: Risk Mitigation Strategies for Safeguarding Against Future Failures (Institute for Security and Technology, March 2025).
- “Data Center Threats and Vulnerabilities,” Check Point Software Technologies; Nevo et al., Securing AI Model Weights.
- Fist and Datta, How to Build the Future of AI in the United States.
- Cybersecurity and Infrastructure Security Agency (CISA), Shifting the Balance of Cybersecurity Risk: Principles and Approaches for Secure by Design Software (CISA, October 25, 2023).
- CISA, Shifting the Balance of Cybersecurity Risk; “Software Bill of Materials (SBOM),” CISA.
Conclusion
AI data centers may be similar to traditional ones in various ways, but they require a heightened threshold for security because they face expanded risks. With the global race to develop AI power and sovereign AI, securing AI and the infrastructure behind it (AI data centers) becomes critical for national security, the economy, defense, energy, and more. This report recommends that in order to develop cyber-secure AI data centers, there must be a framework that aligns the technical, corporate policy, and national governance approaches to cover the six layers of security: hardware & compute, network & storage, model & data, software & application, physical access, and geopolitical.
First, since the technical needs for a cyber-secure AI data center are known, operators must implement existing research and standards in an AI data center. Second, corporate policies need to require technical measures across the six layers of security. Finally, national governance measures should focus on incentivizing AI data center operators to meet the technical needs and implement corporate policies for a cyber-secure AI data center.