CISA and International Partners Publish Guide for Secure AI Integration in OT Systems
Summary
The Cybersecurity and Infrastructure Security Agency (CISA) and the Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC), together with international partners, have released a joint guide outlining principles for securely integrating Artificial Intelligence (AI) into Operational Technology (OT) environments. The guide sets out four key principles to help critical infrastructure owners and operators mitigate risk and adopt AI safely in OT systems: understanding AI and its risks, assessing AI use in OT, establishing governance frameworks, and embedding safety and security measures. It focuses on machine learning (ML)- and large language model (LLM)-based AI but can also be applied to systems using traditional statistical modeling and logic-based automation. The guide gives examples of AI use cases across OT environments, including field devices, PLCs, RTUs, SCADA, DCS, and HMI systems, and highlights risks associated with AI integration, such as system compromise, disruption, financial loss, and impacts to functional safety. It also emphasizes protecting sensitive OT data, including engineering configuration information and ephemeral data, and addresses the challenges of integrating AI into legacy OT systems.
Timeline
- 03.12.2025 14:00 · 3 articles
CISA and International Partners Publish Guide for Secure AI Integration in OT Systems
On December 3, 2025, CISA and the Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC), along with international partners, released a joint guide outlining principles for securely integrating AI into OT environments. The guide sets out four key principles to help critical infrastructure owners and operators mitigate risks and adopt AI safely in OT systems, gives examples of AI use cases in OT environments (including field devices, PLCs, RTUs, SCADA, DCS, and HMI systems), and highlights risks associated with AI integration, such as system compromise, disruption, financial loss, and impacts to functional safety. The guidance also emphasizes protecting sensitive OT data, including engineering configuration information and ephemeral data, and addresses the challenges of integrating AI into legacy OT systems.
Sources:
- New Joint Guide Advances Secure Integration of Artificial Intelligence in Operational Technology — www.cisa.gov — 03.12.2025 14:00
- Global Cyber Agencies Issue AI Security Guidance for Critical Infrastructure OT — www.securityweek.com — 04.12.2025 15:18
- CISA and International Partners Issue Guidance for Secure AI in Infrastructure — www.infosecurity-magazine.com — 04.12.2025 18:30
Information Snippets
- The joint guide was developed in collaboration with multiple international cybersecurity agencies, including the NSA’s Artificial Intelligence Security Center (NSA AISC), FBI, Canadian Centre for Cyber Security (Cyber Centre), German Federal Office for Information Security (BSI), Netherlands National Cyber Security Centre (NCSC-NL), New Zealand National Cyber Security Centre (NCSC-NZ), and the UK National Cyber Security Centre (NCSC-UK).
First reported: 03.12.2025 14:00 · 3 sources, 3 articles
- New Joint Guide Advances Secure Integration of Artificial Intelligence in Operational Technology — www.cisa.gov — 03.12.2025 14:00
- Global Cyber Agencies Issue AI Security Guidance for Critical Infrastructure OT — www.securityweek.com — 04.12.2025 15:18
- CISA and International Partners Issue Guidance for Secure AI in Infrastructure — www.infosecurity-magazine.com — 04.12.2025 18:30
- The guide outlines four key principles: Understand AI, Assess AI Use in OT, Establish AI Governance, and Embed Safety and Security.
First reported: 03.12.2025 14:00 · 3 sources, 3 articles
- New Joint Guide Advances Secure Integration of Artificial Intelligence in Operational Technology — www.cisa.gov — 03.12.2025 14:00
- Global Cyber Agencies Issue AI Security Guidance for Critical Infrastructure OT — www.securityweek.com — 04.12.2025 15:18
- CISA and International Partners Issue Guidance for Secure AI in Infrastructure — www.infosecurity-magazine.com — 04.12.2025 18:30
- The guide focuses on machine learning (ML)- and large language model (LLM)-based AI, but can also be applied to systems using traditional statistical modeling and logic-based automation.
First reported: 03.12.2025 14:00 · 3 sources, 3 articles
- New Joint Guide Advances Secure Integration of Artificial Intelligence in Operational Technology — www.cisa.gov — 03.12.2025 14:00
- Global Cyber Agencies Issue AI Security Guidance for Critical Infrastructure OT — www.securityweek.com — 04.12.2025 15:18
- CISA and International Partners Issue Guidance for Secure AI in Infrastructure — www.infosecurity-magazine.com — 04.12.2025 18:30
- The guide provides examples of AI use cases in OT environments, including field devices, PLCs, RTUs, SCADA, DCS, and HMI systems (see the illustrative sketch after this item's sources).
First reported: 04.12.2025 15:18 · 2 sources, 2 articles
- Global Cyber Agencies Issue AI Security Guidance for Critical Infrastructure OT — www.securityweek.com — 04.12.2025 15:18
- CISA and International Partners Issue Guidance for Secure AI in Infrastructure — www.infosecurity-magazine.com — 04.12.2025 18:30
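Neither the guide nor the coverage includes implementation detail; purely as a hedged illustration of the kind of ML-style use case described here (anomaly detection over SCADA/RTU telemetry), the Python sketch below flags sensor readings that drift sharply from a rolling baseline. All names, values, and thresholds are hypothetical.

```python
# Illustrative only: a rolling z-score anomaly detector over simulated
# SCADA-style sensor telemetry. Not taken from the guide; all names,
# values, and thresholds are hypothetical.
from collections import deque
from statistics import mean, stdev


class RollingAnomalyDetector:
    """Flags readings that deviate sharply from a recent rolling baseline."""

    def __init__(self, window: int = 60, z_threshold: float = 4.0):
        self.window = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, value: float) -> bool:
        """Return True if the reading looks anomalous relative to the window."""
        is_anomaly = False
        if len(self.window) >= 10:  # wait for a minimal baseline
            mu = mean(self.window)
            sigma = stdev(self.window)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                is_anomaly = True
        self.window.append(value)
        return is_anomaly


if __name__ == "__main__":
    detector = RollingAnomalyDetector()
    # Simulated pump-pressure readings (bar), with one injected spike.
    readings = [5.0 + 0.05 * (i % 7) for i in range(120)]
    readings[100] = 9.5  # hypothetical faulty or spoofed reading
    for i, r in enumerate(readings):
        if detector.observe(r):
            print(f"sample {i}: reading {r} flagged for operator review")
```

Consistent with the guide's broader principles, any such output would feed operator review rather than act on the process directly.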
- The guide highlights the risks associated with AI integration, such as system compromise, disruptions, financial loss, and functional safety impact.
First reported: 04.12.2025 15:18 · 2 sources, 2 articles
- Global Cyber Agencies Issue AI Security Guidance for Critical Infrastructure OT — www.securityweek.com — 04.12.2025 15:18
- CISA and International Partners Issue Guidance for Secure AI in Infrastructure — www.infosecurity-magazine.com — 04.12.2025 18:30
- The guide emphasizes the importance of defining the roles and responsibilities of AI makers, OT suppliers, and managed service providers throughout the system’s lifecycle.
First reported: 04.12.2025 15:18 · 2 sources, 2 articles
- Global Cyber Agencies Issue AI Security Guidance for Critical Infrastructure OT — www.securityweek.com — 04.12.2025 15:18
- CISA and International Partners Issue Guidance for Secure AI in Infrastructure — www.infosecurity-magazine.com — 04.12.2025 18:30
- The guide discusses the need for personnel education on AI to prevent skill erosion and skill gaps.
First reported: 04.12.2025 15:18 · 1 source, 1 article
- Global Cyber Agencies Issue AI Security Guidance for Critical Infrastructure OT — www.securityweek.com — 04.12.2025 15:18
- The guide outlines the importance of assessing whether AI is the right solution for business needs compared to other available solutions.
First reported: 04.12.2025 15:18 · 2 sources, 2 articles
- Global Cyber Agencies Issue AI Security Guidance for Critical Infrastructure OT — www.securityweek.com — 04.12.2025 15:18
- CISA and International Partners Issue Guidance for Secure AI in Infrastructure — www.infosecurity-magazine.com — 04.12.2025 18:30
- The guide stresses the need for regulatory and compliance considerations in AI governance and assurance.
First reported: 04.12.2025 15:18 · 2 sources, 2 articles
- Global Cyber Agencies Issue AI Security Guidance for Critical Infrastructure OT — www.securityweek.com — 04.12.2025 15:18
- CISA and International Partners Issue Guidance for Secure AI in Infrastructure — www.infosecurity-magazine.com — 04.12.2025 18:30
- The guidance emphasizes protecting sensitive OT data, including engineering configuration information and ephemeral data (see the illustrative sketch after this item's source).
First reported: 04.12.2025 18:30 · 1 source, 1 article
- CISA and International Partners Issue Guidance for Secure AI in Infrastructure — www.infosecurity-magazine.com — 04.12.2025 18:30
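The guidance frames data protection as policy rather than code; as a minimal sketch of one way an operator might enforce it, the example below redacts hypothetical sensitive fields from an engineering configuration record before it would be shared with any external AI or analytics service. The field names are invented for this example.

```python
# Illustrative only: redact hypothetical sensitive fields from an OT
# engineering configuration record before sharing it with an external
# AI/analytics service. Field names are invented for this example.
import copy

# Fields assumed sensitive for the sake of the sketch.
SENSITIVE_FIELDS = {"setpoints", "ladder_logic", "network_map", "credentials"}


def redact_config(record: dict) -> dict:
    """Return a copy of the record with sensitive fields replaced."""
    redacted = copy.deepcopy(record)
    for field in SENSITIVE_FIELDS & redacted.keys():
        redacted[field] = "[REDACTED]"
    return redacted


if __name__ == "__main__":
    plc_config = {
        "asset_id": "PLC-014",
        "firmware": "2.8.1",
        "setpoints": {"pressure_bar": 5.2},
        "ladder_logic": "<proprietary program>",
        "credentials": {"user": "eng", "password": "..."},
    }
    print(redact_config(plc_config))
```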
- OT vendors are increasingly embedding AI directly into devices, prompting operators to demand transparency regarding AI functionality, software supply chains, and data usage policies (see the illustrative sketch after this item's source).
First reported: 04.12.2025 18:30 · 1 source, 1 article
- CISA and International Partners Issue Guidance for Secure AI in Infrastructure — www.infosecurity-magazine.com — 04.12.2025 18:30
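The reporting does not specify how operators should evaluate vendor transparency; as a rough sketch under that assumption, the example below checks a hypothetical vendor-supplied disclosure for a minimal set of transparency fields echoing the themes above. The field names are illustrative, not a published standard.

```python
# Illustrative only: check a hypothetical vendor AI-component disclosure
# for a minimal set of transparency fields. The fields are not a standard;
# they merely echo the transparency themes reported in the coverage.
REQUIRED_DISCLOSURES = [
    "ai_functionality",       # what the embedded AI actually does
    "model_provenance",       # where the model/weights come from
    "software_dependencies",  # SBOM-style dependency listing
    "data_usage_policy",      # what data leaves the device and why
    "update_mechanism",       # how models are updated and verified
]


def missing_disclosures(disclosure: dict) -> list[str]:
    """Return required transparency fields absent or empty in the disclosure."""
    return [f for f in REQUIRED_DISCLOSURES if not disclosure.get(f)]


if __name__ == "__main__":
    vendor_doc = {
        "ai_functionality": "vibration anomaly scoring on-device",
        "model_provenance": "vendor-trained, version 1.3",
        "data_usage_policy": "",
    }
    print("Open questions for the vendor:", missing_disclosures(vendor_doc))
```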
- Integration challenges include system complexity, cloud security risks, latency constraints, and ensuring compatibility with legacy OT systems.
First reported: 04.12.2025 18:30 · 1 source, 1 article
- CISA and International Partners Issue Guidance for Secure AI in Infrastructure — www.infosecurity-magazine.com — 04.12.2025 18:30
- Operators should test AI in controlled environments, maintain human-in-the-loop oversight, and update AI models regularly to prevent errors and maintain safety (see the illustrative sketch after this item's source).
First reported: 04.12.2025 18:30 · 1 source, 1 article
- CISA and International Partners Issue Guidance for Secure AI in Infrastructure — www.infosecurity-magazine.com — 04.12.2025 18:30
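Human-in-the-loop oversight is an organizational control, not an API; purely as a sketch of what such a gate could look like, the example below queues an AI-recommended setpoint change for explicit operator approval instead of applying it automatically. The class names and approval flow are hypothetical.

```python
# Illustrative only: a human-in-the-loop gate around an AI-recommended
# setpoint change. The model never writes to the process directly; an
# operator must approve each recommendation. All names are hypothetical.
from dataclasses import dataclass


@dataclass
class Recommendation:
    tag: str          # process tag, e.g. a valve or pump setpoint
    current: float
    proposed: float
    rationale: str    # model-provided explanation kept for audit purposes


class OperatorGate:
    """Queues AI recommendations for explicit human approval."""

    def __init__(self):
        self.pending: list[Recommendation] = []

    def submit(self, rec: Recommendation) -> None:
        # Nothing is actuated here; the recommendation only enters a queue.
        self.pending.append(rec)

    def review(self, approve: bool, apply_fn) -> None:
        """Operator decision: apply via apply_fn only on explicit approval."""
        rec = self.pending.pop(0)
        if approve:
            apply_fn(rec.tag, rec.proposed)
        # Rejected recommendations are simply dropped (and could be logged).


if __name__ == "__main__":
    gate = OperatorGate()
    gate.submit(Recommendation("FIC-101.SP", 42.0, 44.5,
                               "model predicts drift in feed flow"))
    # Simulated operator approval; apply_fn stands in for the real write path.
    gate.review(approve=True,
                apply_fn=lambda tag, value: print(f"operator applied {tag} -> {value}"))
```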
Similar Happenings
AI Integration Challenges for CISOs in 2025
CISOs face significant challenges integrating AI into cybersecurity frameworks. While AI offers enhanced efficiency and automation, it also provides new tools for malicious actors. Only 14% of CISOs feel fully prepared to integrate AI, with data privacy and identity security being major concerns. The rapid adoption of AI in cybersecurity has led to a gap between deployment and readiness, complicating efforts to secure complex ICT supply chains. AI integration is further hindered by ethical governance gaps, a shortage of skilled personnel, and budget constraints. Organizations struggle with knowing where and how to start AI integration, highlighting the need for clear frameworks and actionable roadmaps. AI in cybersecurity encompasses various technologies, each with different strengths and limitations across threat detection, response automation, and data analysis. Omdia's research identifies five critical dimensions that CISOs must consider to effectively harness AI in cybersecurity, including treating ICT, AI, and cybersecurity as an integrated whole rather than separate domains.
Securing AI in Cyber Defense Operations
AI's potential in cyber defense is substantial, but securing AI systems is crucial to avoid expanding the attack surface. Organizations must establish trust in AI systems through strong identity controls, data governance, and continuous monitoring. Best practices include applying least privilege, strong authentication, and continuous auditing to AI agents and models. The SANS Secure AI Blueprint outlines key controls for securing AI systems, aligning with NIST and OWASP guidelines. Balancing automation and human oversight is essential for effective AI integration in cyber defense.
AI Governance Strategies for CISOs in Enterprise Environments
Chief Information Security Officers (CISOs) are increasingly tasked with driving effective AI governance in enterprise environments. The integration of AI presents both opportunities and risks, necessitating a balanced approach that ensures security without stifling innovation. Effective AI governance requires a living system that adapts to real-world usage and aligns with organizational risk tolerance and business priorities. CISOs must understand the ground-level AI usage within their organizations, align policies with the speed of organizational adoption, and make AI governance sustainable. This involves creating AI inventories, model registries, and cross-functional committees to ensure comprehensive oversight and shared responsibility. Policies should be flexible and evolve with the organization, supported by standards and procedures that guide daily work. Sustainable governance also includes equipping employees with secure AI tools and reinforcing positive behaviors. The SANS Institute's Secure AI Blueprint outlines two pillars: Utilizing AI and Protecting AI, which are crucial for effective AI governance.
AI Adoption Guidelines for Secure Enterprise Environments
AI adoption in enterprises is accelerating, posing security risks due to a lack of control and safeguards. Security leaders must implement practical principles and technological capabilities to ensure safe AI usage. Five key rules are proposed to balance innovation and protection: AI visibility and discovery, contextual risk assessment, data protection, access controls and guardrails, and continuous oversight. Together, these guidelines aim to mitigate the risks of AI adoption and create a secure environment for AI experimentation and usage within organizations.
Organizations Lack Comprehensive AI Policies Amid Rapid Adoption
Many organizations lack comprehensive AI policies despite widespread AI adoption. This oversight exposes them to security risks such as prompt injection attacks, hallucination, third-party model vulnerabilities, and shadow AI tools. Only 28% of organizations have a formal AI policy, leaving 72% vulnerable to unregulated AI use. Security experts emphasize the need for principle-based AI policies that include clear, enforceable controls and adapt to evolving regulations. Effective AI policies should guide innovation, set safety guardrails, and define acceptable use boundaries. They must be flexible, regularly updated, and integrated into broader enterprise risk management. Organizations should engage early with leaders to define AI roadmaps, focus on safe business value, and provide ongoing training and real-time monitoring to enforce policies.