The Fiscal Year 2025 National Defense Authorization Act (NDAA), signed into law on December 23, 2024, authorizes $895.2 billion for national defense and directs the Department of Defense (DoD) to expand its procurement of artificial intelligence (AI) systems. This legislation reinforces the Cybersecurity Maturity Model Certification (CMMC) 2.0’s critical role in securing AI-driven systems, requiring contractors to achieve compliance to support the DoD’s digital transformation. For DoD and Intelligence Community (IC) contractors handling Controlled Unclassified Information (CUI), integrating AI while meeting the 110 NIST SP 800-171 requirements of CMMC Level 2 is essential. This blog post explores the NDAA’s implications, highlights the intersection of AI and CMMC 2.0, and provides practical strategies to build secure IT infrastructure for AI operations and achieve certification.
The NDAA’s Push for AI and Cybersecurity
The FY 2025 NDAA emphasizes AI as a cornerstone of DoD’s modernization, directing investments in AI-driven analytics, autonomous systems, and decision-making tools. It also strengthens cybersecurity mandates, aligning with the CMMC 2.0 final rule (effective December 16, 2024), which requires:
- Level 1 Self-Assessments: For contractors handling Federal Contract Information (FCI), covering the 15 basic safeguarding requirements of FAR 52.204-21.
- Level 2 Third-Party Assessments: For CUI-handling contractors, requiring the 110 security requirements of NIST SP 800-171, verified by Certified Third-Party Assessment Organizations (C3PAOs) starting in Q1 2025.
- 72-Hour Incident Reporting: Per DFARS 252.204-7012, contractors must report cyber incidents affecting CUI within 72 hours of discovery.
The NDAA’s focus on AI procurement introduces new cybersecurity challenges, as AI systems processing CUI must be secured against advanced threats. CMMC 2.0 ensures contractors maintain robust IT infrastructure to protect these systems, safeguarding national security and enabling contract eligibility.
Why Secure AI Integration Matters
AI systems amplify the DoD’s capabilities but also introduce vulnerabilities, such as:
- Data breaches exposing CUI processed by AI tools, compromising mission-critical information.
- Manipulation of AI algorithms through adversarial attacks, undermining decision-making.
- Non-compliance with CMMC 2.0, risking contract awards and supply chain exclusion.
- Audit failures, as C3PAOs scrutinize the security of AI-driven systems handling CUI.
Securing AI integration while achieving CMMC Level 2 certification ensures contractors support the DoD’s digital transformation, protect sensitive data, and remain competitive in a rapidly evolving landscape.
Strategies for Secure AI Integration and CMMC 2.0 Compliance
Contractors can build secure IT infrastructure for AI operations and achieve CMMC Level 2 certification with the following strategies, aligning with NIST SP 800-171 and NDAA mandates:
1. Assess AI Security Needs
Begin by evaluating the cybersecurity requirements for AI systems handling CUI:
- Map AI Workflows: Identify where AI tools (e.g., analytics platforms, autonomous systems) process CUI, such as in data analysis or decision support.
- Analyze Risks: Assess threats like data poisoning, model tampering, or unauthorized access, focusing on AI-specific vulnerabilities.
- Align with NIST SP 800-171: Prioritize requirements such as 3.1.1–3.1.2 (access control), 3.13.1 (boundary protection), and 3.6.1 (incident handling) to guide AI security planning.
This assessment ensures AI systems are integrated within a CMMC-compliant framework.
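The assessment step above can be captured in a lightweight gap analysis. The sketch below is illustrative only: workflow names, the mapping of workflows to SP 800-171 requirement IDs, and the implementation statuses are hypothetical examples, not prescribed values.

```python
# Illustrative sketch: map AI workflows that touch CUI to the NIST SP 800-171
# requirements that govern them, then list the unimplemented gaps.
# Workflow names, control mappings, and statuses are hypothetical.
from dataclasses import dataclass, field


@dataclass
class AIWorkflow:
    name: str
    handles_cui: bool
    required_controls: list[str]                    # SP 800-171 requirement IDs
    implemented_controls: set[str] = field(default_factory=set)

    def gaps(self) -> list[str]:
        """Requirements not yet satisfied for this workflow."""
        if not self.handles_cui:
            return []                               # out of CUI scope
        return [c for c in self.required_controls
                if c not in self.implemented_controls]


workflows = [
    AIWorkflow("intel-analytics", True,
               ["3.1.1", "3.13.1", "3.6.1"], {"3.1.1"}),
    AIWorkflow("public-chatbot", False, ["3.1.1"]),
]

for wf in workflows:
    print(wf.name, "gaps:", wf.gaps())
```

A listing like this makes the later POA&M easier to assemble, since each open gap already names the requirement it traces to.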
2. Build Compliant IT Architectures for AI
Secure IT infrastructure is critical to protect AI systems and meet CMMC requirements:
- Implement Zero-Trust Principles: Require continuous verification for users, devices, and AI applications accessing CUI, using MFA and role-based access to satisfy 3.1.1–3.1.2 (access control) and 3.5.3 (multifactor authentication).
- Encrypt AI Data: Use FIPS-validated cryptography (3.13.11) for CUI processed by AI tools, both at rest (3.13.16) and in transit (3.13.8).
- Segment AI Systems: Isolate AI workloads handling CUI from non-sensitive systems, using network segmentation to meet 3.13.1 (boundary protection).
- Secure APIs: Protect AI system interfaces with authentication and encryption, reducing risks of unauthorized access and complying with 3.5.1–3.5.2 (identification and authentication).
These measures safeguard AI operations and support CMMC compliance.
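One concrete piece of the API-hardening bullet above is request authentication. The stdlib-only sketch below shows HMAC request signing for calls to an internal AI service; the secret handling, header layout, and endpoint path are hypothetical, and a production deployment would layer this on top of TLS and FIPS-validated cryptography (3.13.11), not replace them.

```python
# Illustrative sketch: authenticate calls to an internal AI service API with
# HMAC-SHA256 request signing. The shared secret, canonical message format,
# and "/v1/analyze" path are hypothetical examples.
import hashlib
import hmac
import time

SECRET_KEY = b"example-shared-secret"   # hypothetical; load from a managed secret store


def sign_request(method: str, path: str, body: bytes, timestamp: int) -> str:
    """Compute a signature over the request's canonical form."""
    message = f"{method}\n{path}\n{timestamp}\n".encode() + body
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()


def verify_request(method: str, path: str, body: bytes,
                   timestamp: int, signature: str, max_skew: int = 300) -> bool:
    """Reject stale requests, then compare signatures in constant time."""
    if abs(time.time() - timestamp) > max_skew:
        return False
    expected = sign_request(method, path, body, timestamp)
    return hmac.compare_digest(expected, signature)


ts = int(time.time())
sig = sign_request("POST", "/v1/analyze", b'{"doc": "..."}', ts)
print(verify_request("POST", "/v1/analyze", b'{"doc": "..."}', ts, sig))  # True
print(verify_request("POST", "/v1/analyze", b'{"doc": "X"}', ts, sig))    # False
```

The timestamp check bounds replay windows, and `hmac.compare_digest` avoids timing side channels when comparing signatures.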
3. Leverage Microsoft 365 GCC High for AI Security
Microsoft 365 GCC High, a DoD-compliant cloud platform, supports secure AI integration and CMMC 2.0:
- Secure Data Inputs: Use GCC High’s DLP and encryption to protect CUI ingested by AI tools, supporting 3.1.3 (controlling the flow of CUI) and 3.13.16 (protecting CUI at rest).
- Enable Monitoring: Activate audit logs and security dashboards to track AI-related activity, such as data access or model usage, aligning with 3.3.1 (audit logging).
- Restrict Collaboration: Limit Teams and OneDrive sharing to authorized users, ensuring CUI used in AI workflows remains secure, per 3.1.1–3.1.2 (access control).
- Automate Threat Detection: Configure alerts for anomalies, such as unusual AI data access, to support 3.6.1 (incident handling) and rapid 72-hour reporting.
GCC High simplifies compliance for AI-driven systems while enhancing security.
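The anomaly-alerting idea above can be prototyped against exported audit log entries. This sketch is a toy example: the event schema, the business-hours window, and the bulk-read threshold are all hypothetical policy choices, not GCC High defaults or a real audit log format.

```python
# Illustrative sketch: flag anomalous AI data access in exported audit events.
# The event dictionary format, business-hours window, and 10,000-record
# threshold are hypothetical examples of tenant policy.
from datetime import datetime

BUSINESS_HOURS = range(6, 20)   # 06:00-19:59 local; hypothetical policy


def flag_anomalies(events):
    """Return (user, reason) pairs for events that look suspicious."""
    alerts = []
    for e in events:
        hour = datetime.fromisoformat(e["time"]).hour
        if e["cui"] and hour not in BUSINESS_HOURS:
            alerts.append((e["user"], "off-hours CUI access"))
        if e["records_read"] > 10_000:
            alerts.append((e["user"], "bulk data read"))
    return alerts


events = [
    {"user": "analyst1", "time": "2025-01-10T03:12:00", "cui": True,  "records_read": 40},
    {"user": "svc-ai",   "time": "2025-01-10T10:05:00", "cui": True,  "records_read": 50_000},
    {"user": "analyst2", "time": "2025-01-10T14:30:00", "cui": False, "records_read": 12},
]
print(flag_anomalies(events))
```

Alerts like these feed directly into the incident-handling workflow, since each flagged event carries the user and reason an analyst needs for triage.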
4. Refine Compliance Documentation
Comprehensive documentation is essential for CMMC assessments and demonstrates AI security:
- System Security Plan (SSP): Detail how AI systems meet NIST SP 800-171 controls, including encryption, monitoring, and access controls for CUI processing.
- Plan of Action and Milestones (POA&M): Identify gaps, such as incomplete AI system logging or weak authentication, with remediation steps and deadlines.
- Collect AI-Specific Evidence: Include logs, configuration records, and AI model access policies to show compliance during C3PAO assessments.
- Update Regularly: Revise documentation to reflect changes in AI workflows or IT infrastructure, ensuring accuracy for auditors.
Clear documentation proves AI systems are secure and CMMC-compliant.
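A POA&M is ultimately a tracked list of gaps with owners and dates, which makes it easy to model. The sketch below is an illustrative structure, not an official POA&M template: the field names, sample weaknesses, and milestone dates are hypothetical.

```python
# Illustrative sketch of POA&M entries with overdue-milestone detection.
# Field names and the sample gaps/dates are hypothetical examples.
from dataclasses import dataclass
from datetime import date


@dataclass
class PoamItem:
    control: str          # SP 800-171 requirement ID the gap traces to
    weakness: str
    remediation: str
    milestone: date
    closed: bool = False

    def overdue(self, today: date) -> bool:
        """An item is overdue if it is still open past its milestone date."""
        return not self.closed and today > self.milestone


items = [
    PoamItem("3.3.1", "AI inference logs not retained", "Enable log export",
             date(2025, 3, 1)),
    PoamItem("3.5.3", "No MFA on model registry", "Roll out MFA",
             date(2025, 2, 1), closed=True),
]
print([i.control for i in items if i.overdue(date(2025, 4, 1))])  # ['3.3.1']
```

Keeping each entry tied to a specific requirement ID keeps the POA&M consistent with the SSP and gives assessors a clean traceability chain.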
5. Strengthen Incident Response for AI Systems
The NDAA’s emphasis on rapid response aligns with CMMC’s 72-hour incident reporting requirement:
- Monitor AI Systems: Use SIEM tools or GCC High alerts to detect incidents, such as data breaches or model tampering, in real time, per 3.14.6 (system and network monitoring).
- Define AI Incident Workflows: Establish procedures for analyzing, containing, and reporting AI-related incidents within 72 hours via the DoD’s DIBNet portal, aligning with 3.6.2 (incident tracking and reporting).
- Test Response Plans: Simulate AI-specific incidents, like adversarial attacks, to ensure rapid response and compliance with 3.6.3 (incident response testing).
- Document Incidents: Maintain records of AI-related incidents and responses as evidence for CMMC assessments, supporting 3.3.1–3.3.2 (audit records and accountability).
A robust incident response plan ensures compliance and resilience.
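The 72-hour clock described above is simple to compute but easy to lose track of during a live incident, so teams often automate it. The sketch below is illustrative; the function names and sample timestamps are hypothetical, and the 72-hour window comes from DFARS 252.204-7012.

```python
# Illustrative sketch: track the 72-hour reporting deadline for a
# CUI-affecting incident (per DFARS 252.204-7012). Function names and
# the sample timestamps are hypothetical.
from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=72)


def reporting_deadline(discovered_at: datetime) -> datetime:
    """Latest time the incident may be reported (e.g., via DIBNet)."""
    return discovered_at + REPORTING_WINDOW


def hours_remaining(discovered_at: datetime, now: datetime) -> float:
    """Hours left on the reporting clock (negative if already missed)."""
    return (reporting_deadline(discovered_at) - now).total_seconds() / 3600


discovered = datetime(2025, 1, 10, 9, 0, tzinfo=timezone.utc)
now = datetime(2025, 1, 11, 9, 0, tzinfo=timezone.utc)
print(reporting_deadline(discovered))    # 2025-01-13 09:00:00+00:00
print(hours_remaining(discovered, now))  # 48.0
```

Anchoring timestamps to UTC avoids ambiguity when incident responders and reporting staff sit in different time zones.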
6. Implement Managed IT for Continuous Compliance
Managed IT practices sustain security for AI systems and CMMC readiness:
- Monitor Continuously: Track AI system activity with SIEM or GCC High tools to detect threats, supporting 3.3.5 (audit review, analysis, and correlation).
- Patch Promptly: Apply updates to AI platforms and supporting infrastructure to close vulnerabilities, per 3.14.1 (flaw remediation).
- Secure Backups: Store encrypted CUI backups in compliant environments, meeting 3.8.9 (protection of backup CUI) for recovery from AI-related incidents.
- Train Staff: Educate employees on AI security practices, such as avoiding unauthorized model access, to meet 3.2.1 (security awareness training).
These practices maintain compliance and protect AI operations.
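The patch-promptly practice above implies a recurring check of component patch age. The sketch below is a minimal illustration; the inventory entries and the 30-day threshold are hypothetical policy choices, and a real program would pull inventory from its asset management tooling.

```python
# Illustrative sketch: flag AI platform components whose patches are overdue
# (flaw remediation, SP 800-171 3.14.1). The inventory records and the
# 30-day maximum patch age are hypothetical.
from datetime import date, timedelta

MAX_PATCH_AGE = timedelta(days=30)   # hypothetical policy threshold

inventory = [
    {"component": "model-server",  "last_patched": date(2025, 1, 2)},
    {"component": "feature-store", "last_patched": date(2024, 10, 15)},
]


def overdue_components(inventory, today: date) -> list[str]:
    """Components not patched within the allowed window."""
    return [i["component"] for i in inventory
            if today - i["last_patched"] > MAX_PATCH_AGE]


print(overdue_components(inventory, date(2025, 1, 20)))  # ['feature-store']
```

Running a check like this on a schedule turns flaw remediation from an ad hoc task into auditable evidence for assessors.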
7. Prepare for CMMC Level 2 Assessments
C3PAO assessments, starting in Q1 2025, will verify AI system security:
- Conduct Mock Audits: Test AI systems and documentation against NIST SP 800-171 controls, focusing on data protection, monitoring, and incident response.
- Compile Evidence: Organize SSPs, POA&Ms, logs, and AI-specific records to demonstrate compliance with requirements like 3.13.1 (boundary protection), 3.6.1 (incident handling), and 3.3.1 (audit logging).
- Train Teams: Prepare IT and compliance staff to explain AI security measures and incident response processes to C3PAOs.
- Remediate Gaps: Address issues, such as incomplete AI logging or weak encryption, to ensure certification success.
Proactive preparation ensures AI systems pass assessments and support DoD contracts.
Looking Ahead: AI and CMMC 2.0 in 2025
As the NDAA drives AI adoption, contractors should anticipate:
- Stricter Compliance Mandates: CMMC Level 2 certification will be required for AI-driven contracts, with audits verifying secure integration.
- Evolving AI Threats: Adversarial AI attacks and data poisoning will demand advanced cybersecurity beyond NIST controls.
- Supply Chain Expectations: Prime contractors will require subcontractor compliance to secure AI systems across the DIB.
Early preparation positions contractors to lead in the DoD’s digital transformation.
Conclusion
The FY 2025 NDAA’s $895.2 billion budget and AI procurement mandate highlight the critical intersection of CMMC 2.0 and secure AI integration. By assessing AI security needs, building compliant architectures, leveraging Microsoft 365 GCC High, refining incident response, and preparing for assessments, contractors can achieve Level 2 certification and protect CUI in AI-driven systems. These strategies not only ensure compliance but also advance the DoD’s digital transformation, strengthen national security, and position contractors for success in a technology-driven DIB.