Investigating the Consequences of NIST's Recent Recommendations for Cybersecurity, Privacy, and Artificial Intelligence
In an effort to address the unique cybersecurity risks introduced by the integration of AI into organizational infrastructures, the U.S. National Institute of Standards and Technology (NIST) has launched a new Cybersecurity, Privacy, and AI program. This initiative is significant as it aims to provide practical guidance and tailored solutions for organizations navigating the intersection of AI innovation and security imperatives.
The program focuses on three main areas of AI data security: data drift, potentially poisoned data, and risks in the data supply chain. It centers on developing a Cyber AI Profile that applies the NIST Cybersecurity Framework (CSF) to these focus areas, ensuring alignment with ongoing risk management practices while addressing novel AI-related challenges.
The Cyber AI Profile supplements the CSF with detailed technical input and recommendations tailored to AI systems, covering governance, risk management, supply chain considerations, access control, employee training, and network baseline updates specific to AI deployment. The program's guiding structure is the CSF's core Functions (Govern, Identify, Protect, Detect, Respond, and Recover), ensuring AI-specific cybersecurity issues are systematically mapped to the well-established CSF framework.
One of the key challenges in securing AI systems is the threat of maliciously modified or "poisoned" data. Threat actors may intentionally inject adversarial or false information into training sets to manipulate model behavior. To combat this, organizations should ensure data used in AI training comes from trusted, reliable sources and use provenance tracking to reliably trace data throughout its lifecycle.
Controlling privileged access to training data and enforcing least privilege for both human and nonhuman identities are important steps in securing AI systems. Secure infrastructure and access controls become paramount when protecting AI model repositories and APIs. Maintaining data integrity during storage and transport requires robust cryptographic measures, such as the use of cryptographic hashes and checksums.
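As a concrete illustration of the hash-and-checksum approach, the sketch below computes a SHA-256 digest of a dataset file and verifies it against a previously recorded value. The function names (`sha256_digest`, `verify_integrity`) are illustrative, not part of any NIST guidance; this is a minimal sketch of integrity verification, not a complete data-security control.

```python
import hashlib
import hmac
from pathlib import Path

def sha256_digest(path: Path, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 digest of a file, streaming to bound memory use."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_integrity(path: Path, expected_digest: str) -> bool:
    """Return True only if the file's current digest matches the recorded one.

    compare_digest performs a constant-time comparison, avoiding timing
    side channels when digests are checked in an automated pipeline.
    """
    return hmac.compare_digest(sha256_digest(path), expected_digest)
```

In practice the expected digest would be recorded at ingestion time and stored separately from the data itself, so that an attacker who can modify the dataset cannot also rewrite the reference digest.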
Data drift, a shift over time in the statistical properties of incoming data relative to the original training datasets, also presents a significant challenge. To detect unexpected behaviors or performance degradation in AI systems, data teams and security personnel must establish mechanisms for continuous monitoring and statistical analysis.
The complexity of AI supply chains compounds these vulnerabilities significantly. Organizations must establish comprehensive systems to track data transformations throughout the data lifecycle, using cryptographically signed records. Cybersecurity practitioners and data teams must work together to update data asset inventories to account for new threats and risks introduced by AI capabilities.
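One minimal way to realize signed transformation records is to attach an HMAC over a canonical encoding of each provenance entry, so later consumers can detect tampering. The sketch below assumes a shared secret key held by the pipeline; the record fields and function names are hypothetical, and a production system would more likely use asymmetric signatures and a managed key store.

```python
import hashlib
import hmac
import json

def sign_record(key: bytes, record: dict) -> dict:
    """Return the record with an HMAC-SHA256 signature over its canonical JSON form."""
    payload = json.dumps(record, sort_keys=True).encode()
    signature = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {**record, "signature": signature}

def verify_record(key: bytes, signed: dict) -> bool:
    """Recompute the HMAC over everything except the signature and compare safely."""
    body = {k: v for k, v in signed.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["signature"])
```

Each transformation step (ingestion, cleaning, labeling, augmentation) would emit one such record, typically including hashes of its input and output datasets, forming an auditable chain from raw source to training set.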
The program also addresses AI-specific incident response procedures, a critical gap in many organizations' security postures. AI systems introduce new attack surfaces that traditional cybersecurity approaches struggle to cover effectively, so organizations must establish robust incident response procedures tailored to AI threats.
The National Security Agency's Artificial Intelligence Security Center (AISC) has also released a Cybersecurity Information Sheet (CSI) focusing on key risks that may arise from data security and integrity issues across all phases of the AI lifecycle. This initiative further underscores the importance of the NIST program in harmonizing AI risk management with established cybersecurity and privacy standards.
The Cyber AI Profile is planned for release within the next six months, providing industry-tailored frameworks for organizations seeking to enhance their AI security posture. By applying a trusted cybersecurity framework (NIST CSF 2.0) to the emerging challenges of AI integration, the program aims to enhance security, privacy, and resilience in AI-enabled environments, thereby supporting robust AI adoption across critical infrastructures and businesses.
- To secure AI systems, machine learning models should be protected from data poisoning by utilizing trustworthy data sources and employing provenance tracking to verify data throughout its lifecycle.
- To address AI-specific cybersecurity challenges effectively, a robust incident response plan tailored to AI threats is essential, complemented by the ongoing harmonization of AI risk management with established cybersecurity and privacy standards.