CyberHappenings logo

Track cybersecurity events as they unfold. Sourced timelines. Filter, sort, and browse. Fast, privacy‑respecting. No invasive ads, no tracking.

CISA and International Partners Publish Guide for Secure AI Integration in OT Systems

First reported
Last updated
3 unique sources, 3 articles

Summary

Hide ▲

The Cybersecurity and Infrastructure Security Agency (CISA) and the Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC), along with international partners, have released a joint guide outlining principles for securely integrating Artificial Intelligence (AI) into Operational Technology (OT) environments. The guide provides four key principles to help critical infrastructure owners and operators mitigate risks and ensure the safe adoption of AI in OT systems. The guide emphasizes the need for understanding AI risks, assessing its use in OT, establishing governance frameworks, and embedding safety and security measures. It focuses on machine learning (ML)- and large language model (LLM)-based AI, but can also be applied to systems using traditional statistical modeling and logic-based automation. The guide provides examples of AI use cases in OT environments, including field devices, PLCs, RTUs, SCADA, DCS, and HMI systems, and highlights the risks associated with AI integration, such as system compromise, disruptions, financial loss, and functional safety impact. Additionally, the guidance emphasizes protecting sensitive OT data, including engineering configuration information and ephemeral data, and addresses the challenges of integrating AI into legacy OT systems.

Timeline

  1. 03.12.2025 14:00 3 articles · 2d ago

    CISA and International Partners Publish Guide for Secure AI Integration in OT Systems

    On December 3, 2025, CISA and the Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC), along with international partners, released a joint guide outlining principles for securely integrating AI into OT environments. The guide provides four key principles to help critical infrastructure owners and operators mitigate risks and ensure the safe adoption of AI in OT systems. The guide provides examples of AI use cases in OT environments, including field devices, PLCs, RTUs, SCADA, DCS, and HMI systems, and highlights the risks associated with AI integration, such as system compromise, disruptions, financial loss, and functional safety impact. Additionally, the guidance emphasizes protecting sensitive OT data, including engineering configuration information and ephemeral data, and addresses the challenges of integrating AI into legacy OT systems.

    Show sources

Information Snippets

Similar Happenings

AI Integration Challenges for CISOs in 2025

CISOs face significant challenges integrating AI into cybersecurity frameworks. While AI offers enhanced efficiency and automation, it also provides new tools for malicious actors. Only 14% of CISOs feel fully prepared to integrate AI, with data privacy and identity security being major concerns. The rapid adoption of AI in cybersecurity has led to a gap between deployment and readiness, complicating efforts to secure complex ICT supply chains. The integration of AI in cybersecurity is hindered by ethical governance gaps, a shortage of skilled personnel, and budget constraints. Organizations struggle with knowing where and how to start AI integration, highlighting the need for clear frameworks and actionable roadmaps. AI in cybersecurity encompasses various technologies, each with different strengths and limitations across threat detection, response automation, and data analysis. Omdia's research identifies five critical dimensions that CISOs must consider to effectively harness AI in cybersecurity. These dimensions include understanding AI's role across ICT, AI, and cybersecurity as an integrated approach.

Securing AI in Cyber Defense Operations

AI's potential in cyber defense is substantial, but securing AI systems is crucial to avoid expanding the attack surface. Organizations must establish trust in AI systems through strong identity controls, data governance, and continuous monitoring. Best practices include applying least privilege, strong authentication, and continuous auditing to AI agents and models. The SANS Secure AI Blueprint outlines key controls for securing AI systems, aligning with NIST and OWASP guidelines. Balancing automation and human oversight is essential for effective AI integration in cyber defense.

AI Governance Strategies for CISOs in Enterprise Environments

Chief Information Security Officers (CISOs) are increasingly tasked with driving effective AI governance in enterprise environments. The integration of AI presents both opportunities and risks, necessitating a balanced approach that ensures security without stifling innovation. Effective AI governance requires a living system that adapts to real-world usage and aligns with organizational risk tolerance and business priorities. CISOs must understand the ground-level AI usage within their organizations, align policies with the speed of organizational adoption, and make AI governance sustainable. This involves creating AI inventories, model registries, and cross-functional committees to ensure comprehensive oversight and shared responsibility. Policies should be flexible and evolve with the organization, supported by standards and procedures that guide daily work. Sustainable governance also includes equipping employees with secure AI tools and reinforcing positive behaviors. The SANS Institute's Secure AI Blueprint outlines two pillars: Utilizing AI and Protecting AI, which are crucial for effective AI governance.

AI Adoption Guidelines for Secure Enterprise Environments

AI adoption in enterprises is accelerating, posing security risks due to lack of control and safeguards. Security leaders must implement practical principles and technological capabilities to ensure safe AI usage. Five key rules are proposed to balance innovation and protection. AI visibility and discovery, contextual risk assessment, data protection, access controls and guardrails, and continuous oversight are essential for mitigating risks associated with AI adoption. These guidelines aim to create a secure environment for AI experimentation and usage within organizations.

Organizations Lack Comprehensive AI Policies Amid Rapid Adoption

Many organizations lack comprehensive AI policies despite widespread AI adoption. This oversight exposes them to security risks such as prompt injection attacks, hallucination, third-party model vulnerabilities, and shadow AI tools. Only 28% of organizations have a formal AI policy, leaving 72% vulnerable to unregulated AI use. Security experts emphasize the need for principle-based AI policies that include clear, enforceable controls and adapt to evolving regulations. Effective AI policies should guide innovation, set safety guardrails, and define acceptable use boundaries. They must be flexible, regularly updated, and integrated into broader enterprise risk management. Organizations should engage early with leaders to define AI roadmaps, focus on safe business value, and provide ongoing training and real-time monitoring to enforce policies.