The Four Control Families That Cause the Most Assessment Failures



CMMC Level 2 has 110 controls across 14 domains. Not all of them fail assessments at the same rate. Based on CMMC assessment findings and DIBCAC data from DoD-conducted evaluations, four domains consistently generate the most Not Met findings: Access Control (AC), Audit and Accountability (AU), Configuration Management (CM), and Incident Response (IR).

This isn't because these controls are the most technically demanding. It's because they require sustained operational discipline, not just a one-time configuration. You can deploy a firewall in an afternoon. Maintaining accurate access control lists, collecting complete audit logs, enforcing consistent configurations, and running a functional incident response program — those are ongoing commitments that most organizations underinvest in until they're facing an assessment.

Here's what each domain requires, why each one trips people up, and what you actually need to do to pass.

Access Control (AC): 22 Controls, the Largest Domain

The AC domain is the largest in CMMC Level 2. It governs who can access your systems, under what conditions, and with what limitations. Twenty-two controls means there are a lot of places to get it wrong.

Where AC assessments go wrong

Shared accounts. The most common AC finding: multiple people using the same login credentials. Shared accounts make individual accountability impossible — you can't tell who did what when everyone is "admin." NIST 800-171 requires unique identification for every user (AC.L2-3.1.1, IA.L2-3.5.1). Every person who accesses a CUI system gets their own account. No exceptions.

MFA not enforced for remote access. Multi-factor authentication is required for remote access to CUI systems and for privileged accounts (IA.L2-3.5.3). Many organizations have MFA available but haven't enforced it — users who configured it use it; users who didn't, don't. That's not "implemented." MFA must be enforced at the identity provider level, not just offered as an option. Conditional Access policies in Azure AD or equivalent enforcement in Okta or Duo make this automatic.

Least privilege violations. Users accumulate access over time. Someone changes roles and keeps their old access. An admin uses their admin account for regular tasks. AC.L2-3.1.5 requires limiting access to the minimum necessary for the user's role. This means defined roles with documented access policies and periodic access reviews. Assessors will pull an access control list and interview users about why they have the access they have. "I don't know, IT gave it to me" is a finding.
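At its core, a least-privilege check is a comparison between what a user has and what their documented role says they should have. A minimal sketch of that comparison (role names and permission strings are hypothetical, not from any specific directory product):

```python
# Hypothetical role definitions: the documented minimum access per role.
# In practice these would come from your access policy, not a hardcoded dict.
ROLE_ACCESS = {
    "engineer": {"cui-share-read", "vpn", "email"},
    "it-admin": {"cui-share-read", "cui-share-write", "vpn", "email", "server-admin"},
}

def excess_access(user_role: str, granted: set[str]) -> set[str]:
    """Return permissions granted beyond the documented minimum for the role."""
    allowed = ROLE_ACCESS.get(user_role, set())
    return granted - allowed

# Example: an engineer who kept admin rights from a previous role.
print(excess_access("engineer", {"cui-share-read", "vpn", "email", "server-admin"}))
# → {'server-admin'}
```

Running this against an exported access list each quarter, and recording what was flagged and revoked, produces exactly the review evidence assessors ask for.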

Session lock not configured. AC.L2-3.1.10 requires automatic lock after 15 minutes of inactivity. Group Policy, MDM, or identity provider policies should enforce this on all endpoints in the CUI environment. Workstations that lock in 15 minutes but servers that never lock are only partially compliant.

What you actually need for AC

  • Centralized identity management (Active Directory, Azure AD, Okta) with individual accounts for every user
  • MFA enforced via policy — not optional — for all remote access and all privileged accounts
  • Role definitions with documented minimum-privilege access for each role
  • Quarterly access reviews with a record of who reviewed, what was found, and what was changed
  • Group Policy or MDM enforcing 15-minute session lock on all CUI systems

Audit and Accountability (AU): Evidence That Controls Are Working

The AU domain has 9 controls covering what you log, where you store it, and how long you keep it. The failure rate for AU is high for one primary reason: most organizations log something, but assessors are looking for specific, comprehensive, tamper-protected logging — and the gap between "we have logging" and "we have compliant logging" is large.

Where AU assessments go wrong

Incomplete log coverage. AU.L2-3.3.1 requires logging user activity across CUI systems: login/logout events, failed authentication attempts, privilege use, access to CUI files, changes to security settings, account management events, and system startup/shutdown. Many organizations log authentication events but miss file access, privilege use, or account changes. Pull a sample of audit events from your logging system and verify all required event types are captured.
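The verification step is a gap check: which required event categories never appear in your sample? A sketch of that check (category labels are illustrative placeholders, not any SIEM's actual event schema):

```python
# Event categories AU.L2-3.3.1 expects, using illustrative labels.
REQUIRED_CATEGORIES = {
    "logon", "logoff", "auth_failure", "privilege_use",
    "cui_file_access", "security_setting_change",
    "account_management", "system_startup_shutdown",
}

def coverage_gaps(observed_categories: set[str]) -> set[str]:
    """Return required event categories absent from the observed log sample."""
    return REQUIRED_CATEGORIES - observed_categories

# A common real-world sample: authentication is logged, everything else isn't.
sample = {"logon", "logoff", "auth_failure", "account_management"}
print(sorted(coverage_gaps(sample)))
```

Any non-empty result is a finding waiting to happen; close the gaps before the assessor pulls the sample for you.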

No centralized logging. Logs scattered across individual servers are not a logging program — they're a collection of data that nobody is reviewing. Centralized SIEM or log management is effectively required for a viable AU implementation. Microsoft Sentinel, Splunk, or an MSSP-managed equivalent. Setup runs $15,000–$40,000 depending on scope; managed services $3,000–$6,000/month. This is not where to cut corners.

Log retention below 12 months. AU.L2-3.3.1 requires retaining logs long enough to support after-the-fact investigations. The DoD expects at least 12 months of log retention, with 3 months immediately available for analysis. Compressed archival storage counts for the 12-month requirement as long as logs are retrievable within a defined timeframe.
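The two-tier rule is simple enough to express directly. A sketch classifying a log's required storage tier by age (the 90/365-day thresholds are approximations of the 3-month/12-month expectation):

```python
from datetime import date

def retention_tier(log_date: date, today: date) -> str:
    """Classify a log's storage tier under the 3-month-hot / 12-month-total rule."""
    age_days = (today - log_date).days
    if age_days <= 90:
        return "hot"        # must be immediately available for analysis
    if age_days <= 365:
        return "archive"    # compressed storage is fine if retrievable
    return "eligible-for-deletion"

print(retention_tier(date(2025, 1, 10), date(2025, 6, 1)))  # → archive
```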

Logs that can be modified or deleted. AU.L2-3.3.8 and AU.L2-3.3.9 require protecting logs from unauthorized access and alteration. Your system administrators should not have delete rights on the audit logs they generate. This requires either a write-once log store or a separate privileged role structure for log management.

What you actually need for AU

  • SIEM with integrations covering all CUI systems — endpoints, servers, network devices, cloud services
  • Comprehensive event type coverage (don't just log authentication — log everything AU.L2-3.3.1 requires)
  • 12-month log retention, 3 months hot
  • Restricted access to logs — admins who generate events shouldn't be able to delete them
  • Evidence of regular log review — not just collection but analysis. Monthly at minimum.

Configuration Management (CM): The Gap Between Documented and Deployed

The CM domain has 9 controls focused on maintaining secure, documented configurations across your CUI systems. This domain fails assessments because of the difference between what's documented and what's actually deployed. You can have a beautiful baseline configuration document while your servers run default services that should have been disabled two years ago.

Where CM assessments go wrong

No documented baselines. CM.L2-3.4.1 requires establishing and maintaining documented baseline configurations for every system type in your CUI environment. If you don't have written baseline configurations — with specific approved settings for each OS version, application, and network device type — you have a Not Met finding before the assessor looks at anything else.

Baselines that don't match running configurations. Documented baselines that were written once and never updated. Systems that have drifted from baseline due to patches, changes, or quick fixes. The assessor will compare your documented baseline against your actual configurations. They frequently find systems running services that the baseline says should be disabled, or missing settings the baseline says should be configured.

Use CIS Benchmarks (available at cisecurity.org) or DISA Security Technical Implementation Guides (STIGs) as your baseline source — they're defensible and assessor-familiar. Implement a configuration scanning tool (CIS-CAT Pro, Tenable Nessus, or Microsoft Endpoint Configuration Manager) to detect drift.
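Whatever scanning tool you use, drift detection reduces to diffing the documented baseline against what's actually running. A minimal sketch of that diff (setting names and values are hypothetical):

```python
def find_drift(baseline: dict[str, str], running: dict[str, str]) -> dict:
    """Map each drifted setting to (expected, actual); actual is None if missing."""
    drift = {}
    for setting, expected in baseline.items():
        actual = running.get(setting)
        if actual != expected:
            drift[setting] = (expected, actual)
    return drift

baseline = {"smbv1": "disabled", "rdp_nla": "required", "print_spooler": "disabled"}
running = {"smbv1": "disabled", "rdp_nla": "optional"}  # spooler setting absent
print(find_drift(baseline, running))
# → {'rdp_nla': ('required', 'optional'), 'print_spooler': ('disabled', None)}
```

Running this kind of comparison monthly, and keeping the output, gives you both the drift report and the evidence of review.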

No change control process. CM.L2-3.4.3 requires changes to CUI systems to go through a documented approval process. Many small contractors apply changes informally — an admin makes a change and tells their manager afterward. The assessor will ask for change records. If you can't produce documented change requests with security impact analysis and approval signatures, that's a finding.

Unnecessary services and software. CM.L2-3.4.6 and CM.L2-3.4.7 require disabling unnecessary services, ports, and protocols and removing unauthorized software. Assessors routinely find: print spooler running on servers with no printers, Bluetooth enabled on workstations that don't use it, software installed by users without IT approval. Implement application whitelisting or at minimum a software inventory and approval process.

What you actually need for CM

  • Documented baseline configurations for every system type, based on CIS Benchmarks or STIGs
  • Configuration scanning to detect drift — run monthly minimum
  • Change control process with documented requests, security impact assessments, and approvals
  • Software inventory and approval process for CUI systems
  • Regular review of running services against baseline — disable what shouldn't be running

Incident Response (IR): Plans Aren't Enough

The IR domain has only 3 controls, but they represent a disproportionate share of assessment findings. The problem: having a plan and having a working program are very different things. Assessors don't just read your incident response policy — they interview your people, look for evidence that you've tested it, and check whether the 72-hour reporting path to DoD is actually documented and understood.

Where IR assessments go wrong

Incident response plan that nobody knows about. IR.L2-3.6.1 requires an incident response capability — not just a document. If your IR plan lives in SharePoint and nobody on your IT team has read it, that's not a capability. The assessor will interview your system administrators and security personnel about how they'd respond to an incident. They expect informed, consistent answers.

No 72-hour reporting path. DFARS 252.204-7012 requires reporting cyber incidents to DoD within 72 hours. Your IR plan must document this — specifically: who decides an incident is reportable, who submits the report, and how to reach DIBNet (dibnet.dod.mil). If your IR plan addresses only internal response and doesn't mention DoD reporting, that's a gap.

No evidence of testing. IR.L2-3.6.2 requires testing the incident response capability. A tabletop exercise with a written after-action report counts. Annual penetration tests that produce findings your IR team responds to count. Simulated phishing with a documented response procedure counts. "We haven't done a tabletop yet but we plan to" does not.

No tracking of incidents. Keep a log of security events and incidents — even minor ones (a phishing email that was reported and blocked, a failed login attempt that was investigated). This log demonstrates that your IR program is active and your people are using it, and it provides context for how your program operates.
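The tracking log doesn't require tooling; even a script-maintained record works. A minimal sketch of an intake function (field names are hypothetical) that also stamps the DFARS 72-hour report-by deadline the moment an incident is judged reportable:

```python
from datetime import datetime, timedelta

def new_incident(summary: str, discovered: datetime, reportable: bool) -> dict:
    """Build a tracking-log entry; reportable incidents get a DoD deadline."""
    entry = {
        "summary": summary,
        "discovered": discovered.isoformat(),
        "reportable": reportable,
        "report_due": None,
        "status": "open",
    }
    if reportable:
        # DFARS 252.204-7012: report to DoD (via DIBNet) within 72 hours.
        entry["report_due"] = (discovered + timedelta(hours=72)).isoformat()
    return entry

entry = new_incident("Phishing email reported by engineering",
                     datetime(2025, 3, 3, 9, 0), reportable=False)
print(entry["status"], entry["report_due"])  # → open None
```

Even the non-reportable entries matter: a year of logged phishing reports and investigated login failures is exactly the evidence of an active program assessors look for.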

What you actually need for IR

  • Written IR plan covering: detection and analysis, containment, eradication and recovery, and post-incident activity
  • The 72-hour DoD reporting procedure explicitly documented, with specific contact information and a named responsible individual
  • Annual testing — tabletop exercise with documented scenario, outcomes, and lessons learned
  • Training for all staff on incident recognition and reporting procedures
  • Incident tracking log maintained throughout the year

Where to Start: Prioritizing Your Remediation

If you're looking at gaps across all four domains, prioritize in this order:

  1. Access Control first — specifically shared accounts and MFA. These are both high-risk and high-finding-rate. Fix them before anything else.
  2. Audit logging second — get centralized logging in place. You can't monitor or prove anything without it.
  3. Configuration Management third — document baselines, then implement drift detection. This takes the most sustained effort.
  4. Incident Response fourth — write the plan, train the team, run a tabletop. This is the fastest to remediate once you commit to it.

What Your Assessor Expects

For each of these four domains, assessors use a combination of examination, interview, and testing. Your documentation has to describe what you actually do — not an idealized version. Your people have to be able to describe it consistently. And your systems have to behave the way your documentation says they do.

The most telling sign of a real program vs. a paper one: when the assessor interviews an employee who doesn't work in IT — someone in engineering or operations who handles CUI — and asks how they handle a security incident or report a problem. If the answer is consistent with your documented procedures, you're in good shape. If they stare blankly, you have more work to do.

---

CTA: Start with an access control audit and a log coverage gap analysis — these two reviews will surface most of your critical gaps in under a week. A CMMC readiness assessment by an RP or C3PAO typically covers all four domains and gives you a prioritized remediation roadmap within 2–4 weeks.