Measuring Cybersecurity Risk for CMMC
Learn effective strategies for measuring cybersecurity risk in defense contracting.
CMMC tells you that you must assess risk (RA.L2-3.11.1) and remediate vulnerabilities in accordance with risk assessments (RA.L2-3.11.3). It doesn't tell you exactly how to measure risk. That's intentional — the DoD wants you to have a working risk management process, not just fill out the right form.
But "use a risk assessment" is vague enough to be dangerous. If you measure risk poorly, your remediation priorities will be wrong. And if your POA&M isn't driven by actual risk rankings, a good assessor will catch it.
Here's how to measure cybersecurity risk in a way that satisfies CMMC requirements and actually helps you make decisions.
The SPRS Score: What It Measures (and Doesn't)
Before getting into methodology, let's dispense with one common misconception: your SPRS score is not a risk measurement. It's a compliance gap score.
The NIST SP 800-171 DoD Assessment Methodology assigns point values to each of the 110 Level 2 controls. The maximum score is 110 points (all controls implemented). Every unimplemented control has a negative point value — ranging from -1 to -5 depending on the control's weight. The minimum possible score is -203 (nothing implemented).
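The arithmetic is simple enough to sketch. The control IDs and weights below are hypothetical placeholders, not the official values from the methodology's annex — look the real weights up there before scoring yourself.

```python
# Sketch of the DoD Assessment Methodology arithmetic. Control IDs and
# weights here are illustrative placeholders, not the official annex values.
MAX_SCORE = 110  # all 110 Level 2 controls implemented

def sprs_score(unimplemented_weights: dict) -> int:
    """Score = 110 minus the deduction for every unimplemented control.

    `unimplemented_weights` maps a control ID to its 1/3/5-point weight.
    Summed across all 110 controls the deductions total 313 points,
    which is where the floor of 110 - 313 = -203 comes from.
    """
    return MAX_SCORE - sum(unimplemented_weights.values())

# Hypothetical gap list for illustration:
gaps = {"3.5.3": 5, "3.13.11": 5, "3.11.2": 3, "3.3.3": 1}
score = sprs_score(gaps)  # 110 - 14 = 96
```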
Your SPRS score tells the DoD how many controls you've implemented. It doesn't tell you:

- Which gaps represent the highest actual risk to your CUI
- Whether the controls you've implemented are functioning correctly
- How exposed you are to the specific threat actors targeting your sector
A contractor with an SPRS score of 70 but well-implemented access controls and network segmentation may be meaningfully safer than a contractor with a score of 90 who has blind spots in monitoring and incident detection. The score is a compliance proxy, not a risk measurement. Use both, but don't confuse one for the other.
Qualitative vs. Quantitative: Choosing Your Method
The first decision in building a risk measurement process is whether to use qualitative ratings, quantitative scoring, or a combination.
Qualitative risk measurement uses descriptive scales: High/Medium/Low for likelihood and impact, resulting in a risk rating of High/Medium/Low. The NIST SP 800-30 methodology uses a 5-level scale (Very High/High/Moderate/Low/Very Low) mapped to a 5×5 matrix, giving you 25 possible likelihood/impact combinations and a corresponding risk level.
Qualitative is appropriate for most CMMC-level risk assessments. It's fast to execute, requires no specialized software, and produces outputs that management can understand and act on. An assessor reviewing a well-documented qualitative risk assessment — with clear definitions for each scale level, consistent application across findings, and documented rationale for ratings — will find it satisfactory.
Quantitative risk measurement uses numerical values to express risk in monetary terms. The most developed methodology for this is FAIR (Factor Analysis of Information Risk), which estimates loss exposure in annualized dollar terms by analyzing loss event frequency and loss magnitude. FAIR is used by large defense primes and financial institutions that need to compare risk investments across business units.
For most small-to-mid-size defense contractors (under 500 employees), FAIR is overkill. The data inputs required — historical breach rates, asset valuations, control effectiveness percentages — either don't exist or aren't reliable at that scale. A FAIR analysis built on guesses isn't better than a well-executed qualitative assessment.
The pragmatic choice for CMMC: Qualitative assessment using NIST SP 800-30's 5×5 matrix, with clear definitions for each rating level. If you have a CISO with FAIR experience and want to add monetary estimates for specific high-risk findings, that's fine — but it's not required.
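The 5×5 combination step can be expressed as a small lookup. The midpoint rule below is an illustrative simplification, not the verbatim SP 800-30 table; set the actual cell values in your risk policy and apply them consistently.

```python
# Five-level scale from NIST SP 800-30. The combination rule is an
# illustrative midpoint heuristic, not the verbatim SP 800-30 table.
LEVELS = ["Very Low", "Low", "Moderate", "High", "Very High"]

def risk_level(likelihood: str, impact: str) -> str:
    """Map a likelihood/impact pair (one of 25 cells) to an overall level."""
    li, ii = LEVELS.index(likelihood), LEVELS.index(impact)
    return LEVELS[(li + ii) // 2]  # midpoint of the two ratings

risk_level("High", "Very High")  # "High"
risk_level("Low", "Moderate")    # "Low"
```

Whatever rule you pick, the point is that it is written down once and applied to every finding the same way.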
Building Your Risk Measurement Process
A defensible risk measurement process has five components:
1. Define Your Scale Before You Start
Before you rate any risk, write down what your ratings mean. If "High likelihood" means different things to different people on your team, your risk ratings will be inconsistent.
Example definitions for a 5-level scale:

- Very High likelihood: Threat source is highly motivated and capable; controls are largely ineffective
- High likelihood: Threat source is motivated and capable; controls provide limited effectiveness
- Moderate likelihood: Threat source is partially motivated or capable; controls provide moderate effectiveness
- Low likelihood: Threat source lacks significant motivation or capability; controls are largely effective
- Very Low likelihood: Controls are highly effective; threat unlikely to succeed
Do the same for impact levels: Very High would typically mean severe CUI compromise, contract loss, or legal consequences; Very Low means negligible operational disruption with no CUI exposure.
Document these definitions in your risk assessment policy. When your assessor asks how you arrived at a risk rating, you point to the policy.
2. Measure Likelihood Against Your Actual Controls
Likelihood isn't the theoretical probability of an attack — it's the probability of a successful attack given the controls you have in place. A phishing attack is "Very High likelihood" in the abstract. If you have email filtering, DMARC/DKIM/SPF configured, MFA on all accounts, and security awareness training, the likelihood of a successful phishing-driven compromise drops to "Low" or "Moderate."
This distinction matters because it ties your risk measurement directly to your control implementation. When you improve a control, the risk ratings for threats that control addresses should drop. This is how you demonstrate risk reduction over time — not just to your assessor, but to management justifying the security budget.
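One simple way to make "likelihood given controls" mechanical: step the inherent rating down for each mitigating control that is implemented and verified effective. The one-level-per-control rule is an assumption for illustration, not part of NIST SP 800-30 — calibrate it in your policy.

```python
LEVELS = ["Very Low", "Low", "Moderate", "High", "Very High"]

def residual_likelihood(inherent: str, effective_controls: int) -> str:
    """Step the inherent likelihood down one level per effective
    mitigating control, floored at Very Low. Illustrative heuristic."""
    return LEVELS[max(0, LEVELS.index(inherent) - effective_controls)]

# Phishing is Very High in the abstract; with email filtering, MFA,
# and awareness training counted as effective, it lands at Low.
residual_likelihood("Very High", 3)  # "Low"
```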
3. Measure Impact Based on CUI Sensitivity and Operational Consequences
For defense contractors, impact analysis should consider:
- CUI sensitivity — What category of CUI is involved? Controlled Technical Information (CTI) related to weapons systems is at the higher end of impact. General contract administration CUI is lower.
- Volume — How much CUI could be exposed in a worst-case incident? A contractor with 10 CUI files and a contractor with 50,000 CUI files have different impact profiles for the same attack.
- Contractual consequences — A breach that triggers DFARS 252.204-7012 reporting requirements has immediate contractual implications. Document this.
- Operational disruption — Would the incident knock out systems your contract performance depends on?
- Recovery time — How long would it take to restore operations? Do you have tested backups?
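To roll these factors into a single impact rating, a common conservative convention is to take the highest rating across them. That max rule is a policy choice, not a CMMC mandate; the factor names below mirror the list above.

```python
LEVELS = ["Very Low", "Low", "Moderate", "High", "Very High"]

def impact_rating(factor_ratings: dict) -> str:
    """Overall impact = the highest rating across the factors.
    Taking the max is a conservative convention, not a CMMC rule."""
    return max(factor_ratings.values(), key=LEVELS.index)

impact_rating({
    "cui_sensitivity": "High",  # e.g. CTI for a weapons subsystem
    "volume": "Moderate",
    "contractual": "High",      # DFARS 252.204-7012 reporting triggered
    "operational": "Low",
    "recovery": "Moderate",
})  # "High"
```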
4. Produce a Risk Register
The output of your measurement process should be a risk register: a document listing each identified risk, with its likelihood rating, impact rating, overall risk level, current controls that mitigate it, and the risk response decision (accept, mitigate, transfer, or avoid).
The risk register is a living document. It should be reviewed and updated at least annually and whenever your system boundary or threat environment changes significantly. Your assessor will ask to see it.
A minimal risk register entry looks like:
| Risk ID | Threat | Vulnerability | Likelihood | Impact | Risk Level | Current Controls | Response | Owner |
|---------|--------|---------------|------------|--------|------------|------------------|----------|-------|
| R-007 | APT spearphishing | User susceptibility; no phishing-resistant MFA | High | High | High | Email filtering, security awareness training | Mitigate: deploy FIDO2 MFA | IT Manager |
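A spreadsheet is perfectly acceptable evidence. If you want the register machine-readable (for filtering, reporting, or POA&M generation), the same row can be sketched as a record:

```python
# One register row as a typed record; field names mirror the table above.
from dataclasses import dataclass

@dataclass
class RiskRegisterEntry:
    risk_id: str
    threat: str
    vulnerability: str
    likelihood: str
    impact: str
    risk_level: str
    current_controls: str
    response: str  # accept / mitigate / transfer / avoid, plus the plan
    owner: str

r007 = RiskRegisterEntry(
    risk_id="R-007",
    threat="APT spearphishing",
    vulnerability="User susceptibility; no phishing-resistant MFA",
    likelihood="High",
    impact="High",
    risk_level="High",
    current_controls="Email filtering, security awareness training",
    response="Mitigate: deploy FIDO2 MFA",
    owner="IT Manager",
)
```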
5. Tie Risk Ratings to Remediation Timelines
Once you have risk ratings, you need remediation timelines that reflect those ratings. Common practice for CUI environments:
- Critical/Very High risk: Remediate within 30 days or document a POA&M with management sign-off
- High risk: Remediate within 60 days
- Moderate risk: Remediate within 90 days
- Low risk: Schedule for next regular maintenance cycle
These timelines need to be defined in your vulnerability management policy, not invented after the fact. Under RA.L2-3.11.3, remediation "in accordance with risk assessments" means the risk rating drove the timeline — not convenience.
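The rating-to-deadline mapping is simple enough to encode directly. The SLA values below are the common benchmarks listed above; they are policy choices, not CMMC-mandated numbers.

```python
from datetime import date, timedelta

# Policy timelines from the list above. "Low" has no entry on purpose:
# it rides the next regular maintenance cycle rather than a fixed SLA.
SLA_DAYS = {"Critical": 30, "Very High": 30, "High": 60, "Moderate": 90}

def remediation_due(risk_level: str, found_on: date):
    """Return the policy-driven due date, or None for Low-risk items."""
    days = SLA_DAYS.get(risk_level)
    return found_on + timedelta(days=days) if days is not None else None

remediation_due("High", date(2025, 1, 1))  # date(2025, 3, 2)
```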
Common Mistake: Treating SPRS Score as a Risk Dashboard
The most common measurement error: using SPRS score as the primary risk metric and driving remediation based on the point values in the DoD assessment methodology rather than actual risk.
This creates a specific failure mode. The DoD methodology weights controls by their importance to the assessment, not by their importance to your specific risk environment. A control worth -5 points in the DoD methodology might protect against a threat that's nearly impossible in your environment. A control worth -1 point might be the only thing between an attacker and your most sensitive CUI files.
If you remediate the high-point-value gaps first without doing a risk assessment, you're optimizing for score rather than for actual protection. You might achieve an SPRS score of 85 while leaving genuinely dangerous gaps open.
The fix: run your risk assessment first, build your risk register, and let the risk ratings drive your POA&M priorities. Then document the connection explicitly — "This control gap was rated High risk because [specific threat/vulnerability combination]; it is prioritized in our POA&M accordingly."
What Your Assessor Expects
For RA domain controls, the assessor will want to see:
- A risk assessment policy defining your methodology, scale definitions, and schedule
- A completed risk assessment with dated findings and ratings
- A risk register or POA&M showing risk levels assigned to each gap
- Evidence that remediation timelines reflect risk ratings — not just a flat list of things to fix
- Vulnerability scan results connected to the risk assessment inputs
The interview will test whether the people running the process understand it. "What's your highest-risk finding right now, and why is it rated that way?" should get a specific, coherent answer. "We have some gaps to work on" is not that answer.
Measurement is the foundation. If your risk measurements are vague, your remediation priorities will be wrong, your POA&M will be unpersuasive, and your assessor will have legitimate concerns about whether your risk management program is real.
---
For the full methodology behind risk measurement, see NIST SP 800-30: The Risk Assessment Methodology.
Got specific questions about CMMC? Our expert is available around the clock — no waiting, no sales pitch.
Got Questions? Ask our CMMC Expert →
Prefer email? Reach us at ix@isegrim-x.com