NIST just released draft guidelines for AI cybersecurity. Over 6,500 people joined the development process. The 45-day comment period closes January 30, 2026. Businesses are supposed to read the guidelines, implement the recommendations, and somehow protect themselves from AI-enabled cyberattacks while securing AI systems they barely understand. 

Here’s the problem: by the time NIST finalizes these guidelines in late 2026 or 2027, the AI threat landscape will have evolved beyond what the guidelines address. AI capabilities are advancing faster than regulatory frameworks can adapt. The guidelines tell you to “secure AI systems” and “thwart AI-enabled cyberattacks” without acknowledging that most businesses can’t even identify which AI systems they’re using, let alone secure them. 

The Cyber AI Profile centers on three focus areas: securing AI systems, conducting AI-enabled cyber defense, and thwarting AI-enabled cyberattacks. These sound comprehensive until you realize your business faces AI cybersecurity threats right now while NIST collects public comments through January and plans to release an “initial public draft” sometime in 2026. 

I’ve been helping businesses navigate AI cybersecurity risks for the past two years. Most companies I work with are still trying to inventory which AI tools their employees are using. They’re nowhere near ready to implement comprehensive AI cybersecurity frameworks. NIST’s guidelines assume a level of AI maturity that most businesses simply don’t have. 

What the NIST Cyber AI Profile Actually Covers 

The Cybersecurity Framework Profile for Artificial Intelligence (NISTIR 8596) offers guidelines for using NIST’s Cybersecurity Framework (CSF 2.0) to address AI-specific security challenges. 

Securing AI systems: The profile addresses cybersecurity challenges when integrating AI into organizational ecosystems. This includes protecting AI models, training data, deployment environments, and interfaces between AI systems and existing IT infrastructure. 

Conducting AI-enabled cyber defense: The guidelines identify opportunities to use AI to enhance cybersecurity operations. This covers using AI for threat detection, incident response, vulnerability management, and security monitoring. 

Thwarting AI-enabled cyberattacks: The profile focuses on building resilience against AI-enabled threats, including adversarial attacks on AI models, AI-powered social engineering, automated vulnerability exploitation, and AI-generated malware. 

The Timeline Problem 

NIST released an initial concept paper in February 2025, conducted a workshop in April, hosted community meetings in summer 2025, and released this preliminary draft in December 2025. The 45-day public comment period closes January 30, 2026. NIST then plans to develop an initial public draft for release sometime in 2026. 

Translation: final guidelines won’t be available until late 2026 at the earliest, possibly 2027. Meanwhile, AI-enabled cyberattacks are happening now. Businesses need guidance today, not two years from now after multiple draft iterations and public comment periods. 

What Most Businesses Are Actually Facing 

The disconnect between NIST’s guidelines and business reality is massive. 

Most businesses can’t inventory their AI systems. Employees are using ChatGPT, Claude, Copilot, and dozens of other AI tools without IT department knowledge or approval. Marketing uses AI for content generation. Sales uses AI for email drafting. Customer service uses AI chatbots. HR uses AI for resume screening. Each creates cybersecurity risks that businesses aren’t tracking. 

Shadow AI is everywhere. Just like shadow IT became a security nightmare when employees started using unauthorized cloud services, shadow AI is creating cybersecurity exposures that businesses don’t even know exist. NIST’s guidelines assume businesses know which AI systems they’re using. Most don’t. 

AI vendors control the security. When your business uses ChatGPT or other AI services, you’re trusting the vendor’s security. You have no visibility into how models are trained, what data they retain, what security controls they implement, or how they respond to breaches. NIST’s guidelines tell you to secure AI systems you don’t control and can’t audit. 

Third-party AI risks are invisible. Your vendors are using AI systems you know nothing about. Your software includes AI components you didn’t know were there. Your supply chain is full of AI-related cybersecurity risks you can’t assess because you don’t know they exist. 

The Three Focus Areas Don’t Address Real Business Problems 

“Securing AI systems” assumes you can identify them. Most businesses have no comprehensive inventory of AI systems in use across their organizations. Employees are adopting AI tools faster than IT departments can track them. Each department is creating AI cybersecurity exposure without centralized oversight. 

“Conducting AI-enabled cyber defense” requires AI expertise most businesses lack. The guidelines suggest using AI to enhance threat detection and incident response. Most businesses are still struggling with basic cybersecurity hygiene. They don’t have the expertise to deploy AI-powered security tools effectively. 

“Thwarting AI-enabled cyberattacks” is reactive, not proactive. By the time NIST finalizes guidelines for defending against AI-powered attacks, attackers will have moved on to new techniques. AI enables rapid evolution of attack methods. Static guidelines can’t keep pace with adversaries using AI to automate vulnerability discovery and generate polymorphic malware. 

What Businesses Actually Need Right Now 

Businesses don’t need comprehensive frameworks that won’t be finalized until late 2026 or 2027. They need practical guidance they can implement immediately to address current AI cybersecurity risks. 

AI inventory and governance. Businesses need processes for identifying what AI systems are being used across their organizations, who’s using them, what data they’re processing, and what security risks they create. This isn’t sexy framework development—it’s basic asset management applied to AI tools. 
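
For teams that want to start immediately, here’s a minimal sketch of what a single inventory record might capture, written in Python for concreteness. The field names and example values are illustrative assumptions on my part, not requirements drawn from the NIST profile:

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative inventory record for one AI tool. Field names and
# categories are assumptions about what a basic register might track;
# they are not drawn from the NIST profile.
@dataclass
class AIToolRecord:
    tool_name: str                 # e.g., "ChatGPT", "Copilot"
    vendor: str
    department: str                # which business unit uses it
    business_owner: str            # who is accountable for the tool
    data_categories: list[str] = field(default_factory=list)
    approved: bool = False         # has IT/security signed off?
    last_reviewed: date | None = None

# Example entry recorded during a departmental survey
record = AIToolRecord(
    tool_name="ChatGPT",
    vendor="OpenAI",
    department="Marketing",
    business_owner="J. Doe",
    data_categories=["draft marketing copy", "campaign metrics"],
)
print(f"{record.tool_name} ({record.department}): approved={record.approved}")
```

Even a shared spreadsheet with these columns is a real improvement. What matters is one authoritative place that records each AI tool, who owns it, and what data it touches.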

Vendor AI risk assessment. Businesses need standardized questions to ask AI vendors about data retention, model security, breach notification, and compliance capabilities. Most businesses have no idea what security questions to ask AI vendors or what answers should concern them. 

Employee AI usage policies. Businesses need clear policies about which AI tools employees can use, what data can be shared with AI systems, and what approval processes are required before adopting new AI tools. Most businesses have no AI usage policies at all. 

Incident response for AI-related breaches. Businesses need playbooks for responding when AI systems are compromised, when sensitive data is accidentally shared with AI tools, or when AI-powered attacks occur. Standard incident response plans don’t address AI-specific scenarios. 

Third-party AI risk management. Businesses need contract provisions requiring vendors to disclose AI usage, security controls for AI systems, and notification when AI-related security incidents occur. Most vendor contracts don’t address AI risks at all. 

What You Should Do While Waiting for NIST 

Start your AI inventory today. Survey departments about what AI tools they’re using. Check expense reports for AI service subscriptions. Interview employees about AI tools they’ve adopted. You can’t secure AI systems you don’t know exist. 
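
Expense data is often the fastest signal. Here’s a minimal sketch of a keyword scan over an expense export, assuming a CSV with “description” and “amount” columns; the vendor keyword list, column names, and file name are assumptions you would adapt to your own accounting system:

```python
import csv

# Hypothetical keyword list of AI vendors and products to flag;
# extend this for your own environment.
AI_VENDORS = ["openai", "anthropic", "chatgpt", "claude", "copilot",
              "midjourney", "jasper", "perplexity"]

def flag_ai_subscriptions(csv_path: str) -> list[dict]:
    """Return expense rows whose description mentions a known AI vendor.

    Assumes a CSV export with at least a 'description' column;
    rename to match your accounting system's export format.
    """
    hits = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            description = row.get("description", "").lower()
            if any(vendor in description for vendor in AI_VENDORS):
                hits.append(row)
    return hits

# Example usage against a hypothetical export file
for row in flag_ai_subscriptions("expenses.csv"):
    print(row.get("description"), row.get("amount", ""))
```

A scan like this won’t catch free-tier tools, which is exactly why the departmental surveys and employee interviews still matter.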

Implement AI usage policies. Establish clear rules about which AI tools are approved, what data can be shared with AI systems, and what approval is required before adopting new AI tools. Policies don’t require NIST guidance—they require business judgment about acceptable risk. 
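
A policy is easier to enforce when it’s written down as data that tooling, or even a help-desk script, can check. A minimal sketch follows; the tool names, data classes, and approval decisions are placeholders, not recommendations:

```python
# Illustrative policy table: which tools are approved and what data
# classes may be shared with each. Tool names, data classes, and
# approval decisions are placeholders, not recommendations.
POLICY = {
    "copilot": {"allowed_data": {"public", "internal"}},
    "chatgpt": {"allowed_data": {"public"}},
}

def usage_allowed(tool: str, data_class: str) -> bool:
    """Check a proposed use against the policy table."""
    entry = POLICY.get(tool.lower())
    if entry is None:
        return False  # unknown or unapproved tools are denied by default
    return data_class in entry["allowed_data"]

print(usage_allowed("ChatGPT", "customer PII"))  # False: data class not allowed
print(usage_allowed("SomeNewTool", "public"))    # False: tool not yet approved
print(usage_allowed("Copilot", "internal"))      # True
```

The useful property here is the default: any tool not explicitly approved is denied until someone reviews it.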

Assess your AI vendors. For every AI service you’re using, understand what data they retain, how they secure it, whether they train models on your data, and how they handle security incidents. If vendors can’t answer these questions satisfactorily, find different vendors. 

Update incident response plans. Add scenarios covering AI-related breaches, accidental data exposure to AI systems, and AI-powered attacks. Your existing incident response playbooks probably don’t address these situations. 

Review vendor contracts. Add provisions requiring vendors to disclose AI usage, implement reasonable AI security controls, and notify you of AI-related security incidents. Don’t wait for NIST to tell you this is necessary. 

Train your security team. Your security personnel need to understand AI-specific threats, how to assess AI vendor security, and how to respond to AI-related incidents. Most security teams have minimal AI expertise. 

The Reality 

NIST’s Cyber AI Profile represents a thoughtful attempt to address AI cybersecurity challenges through its established framework methodology. Unfortunately, the timeline for final guidelines—late 2026 or 2027—doesn’t match the urgency of AI security risks businesses face today. 

By the time NIST finalizes these guidelines, AI capabilities will have advanced significantly. The threat landscape will have evolved. New AI-powered attack techniques will have emerged. The guidelines will still provide a valuable framework, but they’ll be addressing yesterday’s challenges, not tomorrow’s threats. 

Businesses waiting for NIST guidance before addressing AI cybersecurity risks are waiting too long. The time to inventory AI systems, implement usage policies, assess vendor risks, and update incident response plans is now, not after NIST completes multiple draft iterations and public comment periods. 

My team helps businesses assess AI cybersecurity risks, implement AI governance frameworks, evaluate AI vendor security, and update incident response plans for AI-related scenarios. We’re not waiting for NIST to finalize guidelines before helping clients address AI security challenges they’re facing today. 

Contact me directly at tshields@kelleykronenberg.com to discuss your AI cybersecurity strategy. 


About the Author:

Timothy Shields
Partner/Business Unit Leader, Data Privacy & Technology
Kelley Kronenberg, Fort Lauderdale, FL
(954) 370-9970
tshields@kelleykronenberg.com

Timothy Shields holds a Doctorate in Education and Juris Doctor, serves as Partner and Business Unit Leader for Data Privacy & Technology at Kelley Kronenberg, and is a certified NFL agent. He specializes in representing college athletes in Loss of Value insurance negotiations, NIL matters, and coverage disputes involving career-altering injuries.