An incident response plan that exists only in a shared drive is not evidence of preparedness. It is evidence of intent, and the Canadian Program for Cyber Security Certification draws a clear line between the two.
The Incident Response (IR) and System and Information Integrity (SI) control families in ITSP.10.171 cover the operational core of any security program: how an organization detects threats, responds to incidents, remediates vulnerabilities, and maintains system integrity over time. These are not abstract governance requirements. They demand working processes with documented evidence that those processes have been tested and that personnel know what to do when something goes wrong.
For organizations already operating under SOC 2 or ISO 27001, portions of these controls will feel familiar. The specifics, particularly around flaw remediation timelines and the depth of incident response testing evidence, are where the gaps tend to appear.
This is the seventh post in our CPCSC deep dive series. For a full breakdown of all 17 ITSP.10.171 control families, see our ITSP.10.171 explainer.
Incident Response: From Plan to Demonstrated Capability
The IR family in ITSP.10.171 maps closely to the Incident Response family in NIST SP 800-171 and covers four areas: incident handling procedures, monitoring and reporting, incident response testing, and the incident response plan itself.
Incident Handling Procedures
The standard expects documented procedures for each phase of incident handling: preparation, detection, analysis, containment, eradication, and recovery. Each phase needs defined actions and responsible roles, not just a paragraph saying the security team will respond.
In practice, the strongest evidence comes from procedures that are specific enough to be followed by someone who did not write them. If the incident handling document requires tribal knowledge to interpret, it will not satisfy an assessment. The procedures should reference specific tools (which SIEM, which ticketing system, which communication channel), specific escalation paths (who gets notified at each severity level), and specific documentation requirements (what gets logged during triage, what constitutes a case file).
The Specificity Test
If your incident handling procedures cannot be followed by someone who did not write them, they will not satisfy an assessment. Reference specific tools, escalation paths, and documentation requirements for each phase.
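To make this concrete, the escalation-path portion of a procedure can live as structured data rather than buried in prose. Here is a minimal sketch in Python, assuming hypothetical severity levels, roles, and channel names (none of these identifiers come from ITSP.10.171):

```python
# Hypothetical escalation matrix: severity level -> who is notified, how, and when.
# Role, channel, and severity names are illustrative placeholders, not requirements
# from the standard.
ESCALATION_MATRIX = {
    "SEV1": {  # confirmed compromise involving controlled information
        "notify": ["incident_commander", "ciso", "legal"],
        "channel": "phone_bridge",        # e.g. a dedicated conference line
        "max_response_minutes": 15,
        "case_file_required": True,
    },
    "SEV2": {  # contained incident, no controlled information involved
        "notify": ["security_lead", "it_manager"],
        "channel": "ops_chat",            # e.g. a dedicated incident channel
        "max_response_minutes": 60,
        "case_file_required": True,
    },
    "SEV3": {  # anomaly under investigation
        "notify": ["on_call_analyst"],
        "channel": "ticketing_queue",
        "max_response_minutes": 240,
        "case_file_required": False,
    },
}

def escalation_for(severity: str) -> dict:
    """Return the escalation entry for a severity level, failing loudly if undefined."""
    if severity not in ESCALATION_MATRIX:
        raise ValueError(f"No escalation path defined for severity {severity!r}")
    return ESCALATION_MATRIX[severity]
```

The point is not the tooling; it is that a table like this answers "who gets notified at SEV1?" without interpretation, which is exactly the specificity test above.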
Monitoring and Reporting
Organizations must track and document security incidents from detection through resolution. This means maintaining case records that capture the timeline of events, triage decisions, containment actions, root cause analysis, and remediation steps.
A common mistake observed across engagements is treating the SIEM dashboard as the record of monitoring activity. Teams clear alerts, note that nothing actionable was found, and move on. But cleared is not the same as investigated. Assessors look for case records with triage rationale: when the alert fired, what triggered it, what events were reviewed as part of the investigation, and what conclusion was reached. A clean dashboard with no documented triage history is a gap, not a strength.
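One practical way to enforce that discipline is to give every triage record a fixed schema, so an alert cannot be closed without the fields an assessor will ask for. A minimal sketch, with field names that are our own invention:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TriageRecord:
    """Case record for a single alert: the fields an assessor will ask to see."""
    alert_id: str
    fired_at: datetime            # when the alert fired
    trigger: str                  # what triggered it (rule name, signature, etc.)
    events_reviewed: list[str]    # log sources / event IDs examined
    conclusion: str               # e.g. "benign - scheduled job", "escalated to SEV2"
    analyst: str
    closed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def is_complete(self) -> bool:
        """A record with no conclusion or no reviewed events is not triage."""
        return bool(self.conclusion.strip()) and len(self.events_reviewed) > 0
```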
The reporting component also extends to external obligations. If an incident involves controlled information under a defence contract, there may be notification requirements to the contracting authority. The IR plan should define those thresholds and the reporting process.
Common Pitfall
A clean SIEM dashboard with no documented triage history is a gap, not a strength. Assessors expect case records showing when alerts fired, what was reviewed, and what conclusion was reached.
Incident Response Testing
This is where the distinction between a plan and a capability becomes concrete. ITSP.10.171 expects evidence that the incident response plan has been tested and that the results were used to improve the plan.
Tabletop exercises are the standard approach for most organizations at Level 1. A well-run tabletop walks key personnel through a realistic scenario, identifies where the plan breaks down, and produces documented findings. The evidence package should include the scenario description, the participants, the decisions made during the exercise, the gaps identified, and the changes made to the plan as a result.
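Keeping that package consistent across exercises is easier with a fixed template. One possible structure, sketched in Python with invented values:

```python
# Hypothetical tabletop exercise record; every field maps to an evidence item
# from the list above. All values are illustrative.
tabletop_record = {
    "date": "2025-03-14",
    "scenario": "Ransomware detonation on a file server holding controlled information",
    "participants": ["Incident Commander", "Security Lead", "IT Manager", "Communications"],
    "decisions": [
        "Isolated the affected VLAN within the first 20 minutes of the scenario",
        "Invoked the contracting-authority notification threshold",
    ],
    "gaps_identified": [
        "No documented procedure for after-hours escalation",
        "Backup restoration owner not named in the plan",
    ],
    "plan_changes": [
        "Added after-hours escalation path to the IR plan",
        "Assigned backup restoration to the IT Manager role",
    ],
}
```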
Annual testing is the minimum. Organizations that test only when prompted by an assessment cycle tend to have plans that do not reflect current infrastructure, staffing, or tooling. Quarterly or semi-annual exercises, even abbreviated ones, produce better outcomes and stronger evidence.
The Incident Response Plan
The IR plan itself is the governing document that ties the other controls together. It defines roles, responsibilities, communication procedures, and how the organization coordinates with external parties during an incident.
A pattern that creates problems: writing the IR plan as a standalone document disconnected from operational reality. If the plan names an Incident Commander role but nobody on the current team has been assigned that role or trained on it, the plan is aspirational rather than operational. Assessment evidence should demonstrate that named personnel know their roles, which is exactly what tabletop exercises are designed to prove.
System and Information Integrity: Keeping Systems Trustworthy
The SI family covers the ongoing work of maintaining system integrity through flaw remediation, malicious code protection, security alerts and advisories, system monitoring, and information handling. Where IR deals with what happens when something goes wrong, SI deals with preventing degradation before incidents occur.
Flaw Remediation
Flaw remediation is among the more evidence-intensive controls in the SI family. Organizations must identify, report, and correct information system flaws in a timely manner. The standard expects defined remediation timelines based on severity.
The critical decision here is setting realistic service level agreements (SLAs) for remediation. Ambitious SLAs look good on paper, but they become a liability if the organization cannot consistently meet them. Setting a 24-hour critical vulnerability SLA and then missing it repeatedly hands an assessor documented evidence of policy violations. The organization's own policy becomes the standard it is measured against.
Practical Flaw Remediation SLAs
Set remediation timelines based on actual capacity, not aspirational targets. Your own policy becomes the standard you are measured against.
| Severity | Recommended SLA |
|----------|-----------------|
| Critical | 48 hours from identification to remediation or compensating control |
| High     | 48 hours, with documented justification if extended |
| Medium   | 7 calendar days |
| Low      | 30 calendar days |
These timelines should account for the organization's actual capacity. A five-person IT team cannot maintain the same remediation velocity as a dedicated security operations center, and the SLAs should reflect that reality. What matters is consistency: setting achievable targets and meeting them, with documented exceptions when delays occur.
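Translating the table into tooling is straightforward. A minimal sketch, assuming each finding carries an identification timestamp; the windows mirror the table above and should be replaced with your own documented policy:

```python
from datetime import datetime, timedelta

# SLA windows from the table above: identification -> remediation or
# compensating control. Adjust to match your own documented policy.
SLA_WINDOWS = {
    "critical": timedelta(hours=48),
    "high": timedelta(hours=48),      # extensions require documented justification
    "medium": timedelta(days=7),
    "low": timedelta(days=30),
}

def remediation_due(severity: str, identified_at: datetime) -> datetime:
    """Return the SLA deadline for a finding identified at the given time."""
    return identified_at + SLA_WINDOWS[severity.lower()]

def is_breached(severity: str, identified_at: datetime,
                remediated_at: datetime | None) -> bool:
    """True if the finding missed its SLA (still open past deadline, or closed late)."""
    deadline = remediation_due(severity, identified_at)
    closed = remediated_at or datetime.now(identified_at.tzinfo)
    return closed > deadline
```

A nightly job that flags breached findings, with the output landing in the ticketing system, doubles as both an operational control and assessment evidence.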
Flaw remediation evidence includes vulnerability scan results, the remediation timeline for each finding, and verification that fixes were applied. Patch management logs, automated scan reports, and ticketing system records all contribute to the evidence package.
Malicious Code Protection
The standard requires mechanisms to detect and eradicate malicious code at system entry and exit points. For most organizations, this translates to endpoint detection and response (EDR) on all workstations and servers, email security filtering, and web content filtering.
The evidence expectation goes beyond having antivirus installed. Assessors look for centralized management of protection mechanisms, evidence that definitions and signatures are updated regularly (ideally automatically), and documentation of how malicious code events are handled. If the EDR solution detects and quarantines a file, there should be a record of what was detected, what action was taken, and whether further investigation was required.
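That record can be as simple as a fixed schema attached to every detection event. A sketch with hypothetical field names:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class MalwareEvent:
    """Record for a single detection or quarantine event from the EDR console."""
    detected_at: datetime
    endpoint: str
    artifact: str                          # file hash or path that was detected
    verdict: str                           # e.g. "quarantined", "blocked", "cleaned"
    follow_up_required: bool
    follow_up_case_id: str | None = None   # link to an IR case if escalated
```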
Security Alerts, Advisories, and Directives
Organizations must receive, document, and respond to security alerts and advisories from authoritative sources. In the Canadian context, this includes alerts and advisories from the Canadian Centre for Cyber Security (CCCS), vendor security bulletins for systems in use, and CVE databases relevant to the technology stack.
The operational requirement is a process for reviewing incoming alerts, determining applicability to the organization's environment, and taking action within defined timeframes. Not every advisory requires remediation, but every advisory that affects systems in scope should have a documented triage decision.
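The triage decision should be recorded even when the answer is "not applicable." One way to capture that, sketched with invented field names and a placeholder advisory identifier:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AdvisoryTriage:
    """Documented triage decision for a single security alert or advisory."""
    source: str          # e.g. "CCCS", "vendor bulletin", "CVE feed"
    reference: str       # advisory or CVE identifier
    reviewed_on: date
    applicable: bool     # does it affect systems in scope?
    rationale: str       # why applicable / not applicable
    action: str          # "none", "patch scheduled", "compensating control", ...

# Even a non-applicable advisory produces a record:
triage = AdvisoryTriage(
    source="CCCS",
    reference="AV25-XXX",   # placeholder identifier
    reviewed_on=date(2025, 3, 14),
    applicable=False,
    rationale="Affected product not present in the environment",
    action="none",
)
```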
System Monitoring
ITSP.10.171 requires monitoring to detect unauthorized access, unauthorized use, and anomalous activity. This is where SIEM solutions, log aggregation, and alerting rules come into play.
The monitoring posture should be proportionate to the environment. At minimum, organizations need to monitor authentication events (successful and failed), privileged operations, changes to security configurations, and access to controlled information. Alert rules should be tuned to reduce noise while maintaining coverage of meaningful events.
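"Tuned" in practice means rules with explicit thresholds and windows rather than raw event forwarding. A tool-agnostic sketch of one such rule, with an illustrative threshold:

```python
from collections import defaultdict
from datetime import datetime, timedelta

FAILED_LOGIN_THRESHOLD = 10   # illustrative; tune to your environment
WINDOW = timedelta(minutes=5)

def failed_login_alerts(events: list[dict]) -> list[str]:
    """Flag accounts with more than FAILED_LOGIN_THRESHOLD failures inside WINDOW.

    Each event is assumed to look like:
    {"user": "alice", "outcome": "failure", "timestamp": datetime(...)}
    """
    failures: dict[str, list[datetime]] = defaultdict(list)
    alerts = []
    for event in sorted(events, key=lambda e: e["timestamp"]):
        if event["outcome"] != "failure":
            continue
        failures[event["user"]].append(event["timestamp"])
        # Keep only failures inside the sliding window.
        failures[event["user"]] = [
            t for t in failures[event["user"]]
            if event["timestamp"] - t <= WINDOW
        ]
        if len(failures[event["user"]]) > FAILED_LOGIN_THRESHOLD:
            alerts.append(f"Possible brute force against {event['user']}")
    return alerts
```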
Monitoring without review is compliance theater. If logs are collected but nobody reviews the alerts, the control is not effectively implemented. Evidence of active monitoring includes documented alert triage, periodic log review records, and escalation records for events that warranted investigation.
Information Handling and Retention
The SI family also addresses how organizations handle and retain information in accordance with policy. For defence contractors, this includes controlled information received under contract and the metadata associated with security monitoring. Retention policies should define how long incident records, scan results, and monitoring data are preserved, both for operational use and for assessment evidence.
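Retention requirements are easiest to audit when the schedule lives in one place. A minimal sketch with illustrative periods; actual periods come from contract and policy:

```python
from datetime import date, timedelta

# Illustrative retention schedule; actual periods are set by contract and policy.
RETENTION_DAYS = {
    "incident_case_records": 365 * 3,    # through at least one assessment cycle
    "vulnerability_scan_results": 365,
    "siem_alert_triage_records": 365,
    "raw_security_logs": 90,
}

def purge_eligible(record_type: str, created_on: date,
                   today: date | None = None) -> bool:
    """True once a record has aged past its retention period."""
    today = today or date.today()
    return today - created_on > timedelta(days=RETENTION_DAYS[record_type])
```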
Where SOC 2 and ISO 27001 Controls Overlap
Organizations with existing SOC 2 or ISO 27001 certifications will find that many of these controls are already partially addressed.
SOC 2 overlap: The CC7 series covers detection and monitoring (CC7.1), monitoring system components for anomalies (CC7.2), evaluating security events (CC7.3), executing incident response (CC7.4), and recovery (CC7.5). Organizations with mature CC7 implementations have a strong foundation for CPCSC's IR and SI controls. The primary gaps tend to be in the specificity of flaw remediation timelines and the depth of tabletop exercise documentation.
ISO 27001 overlap: Annex A controls A.5.24 through A.5.28 cover incident management planning, assessment, response, learning from incidents, and evidence collection. A.8.7 covers malware protection, and A.8.8 covers technical vulnerability management. The mapping is close, though ITSP.10.171 tends to be more prescriptive about evidence of tested plans and defined remediation timelines than ISO 27001's risk-based approach.
The practical takeaway: companies with either certification should approach CPCSC as an extension of their existing program, not a separate compliance effort. For a detailed mapping between SOC 2 controls and CPCSC, see our CPCSC compliance services page.
Common Gaps
Five gaps appear repeatedly when organizations assess their IR and SI controls against ITSP.10.171:
Common IR and SI Gaps
1. Untested incident response plans. The plan exists but has never been exercised. There is no evidence that personnel know their roles, no documented findings from exercises, and no revision history showing the plan has been updated based on testing.
2. Alert triage without documentation. Security teams review SIEM alerts daily but do not document the investigation. When an assessor asks for evidence of monitoring, the response is "we check alerts every morning," with no case records to substantiate the claim.
3. Unrealistic remediation SLAs. The vulnerability management policy defines aggressive timelines that the team cannot consistently meet. Every missed SLA becomes a documented policy violation rather than evidence of operational maturity.
4. Malware protection gaps at the perimeter. EDR is deployed on endpoints but email filtering or web content filtering is missing, or protection mechanisms are managed individually rather than through a centralized console with reporting.
5. Monitoring without actionable alerting. Logs are collected into a SIEM, but alert rules have not been tuned to the environment. The result is either too much noise (leading to alert fatigue and ignored events) or insufficient coverage of meaningful activity.
Implementation Approach
For organizations building or extending their IR and SI controls for CPCSC, the following sequence prioritizes the controls that produce strong assessment-ready evidence earliest.
Phase 1: Establish the IR plan and test it. Write or update the incident response plan to reflect current personnel, tools, and procedures. Run a tabletop exercise within 30 days of completing the plan. Document everything: the scenario, the participants, the decisions, the gaps, and the plan updates that resulted.
Phase 2: Define and implement flaw remediation SLAs. Set remediation timelines based on actual capacity, not aspirational targets. Configure vulnerability scanning on a regular cadence (weekly for critical systems, monthly at minimum). Build the tracking process in whatever system the team already uses for work management.
Phase 3: Formalize monitoring and triage. Ensure logging covers authentication events, privileged actions, and access to controlled information. Establish a documented triage process for alerts, even if the initial version is simple. The goal is a consistent process that produces evidence, not a sophisticated SOC operation.
Phase 4: Close protection gaps. Verify that malicious code protection covers all entry points, that security advisory review is a defined process, and that information handling and retention policies align with contract requirements.
Build IR and SI Controls That Produce Evidence
We build effective security programs that generate compliance evidence as a byproduct, not as a separate workstream.
Frequently Asked Questions
How often does ITSP.10.171 require incident response testing?
The standard does not prescribe a specific frequency, but annual testing is the widely accepted minimum for Level 1. Organizations that test more frequently, even with abbreviated scenarios, build stronger evidence and more capable response teams. The key requirement is that testing produces documented findings and that those findings drive plan improvements.
Do we need a dedicated SIEM to meet the monitoring requirements?
Not necessarily. The standard requires system monitoring that detects unauthorized access and anomalous activity, but it does not mandate a specific tool. Smaller organizations can meet the requirement with centralized log management and defined review processes. As the environment grows in complexity, a SIEM becomes practical. What matters is that monitoring is active, documented, and reviewed, not that a specific product is deployed.
Can we use the same incident response plan for CPCSC and SOC 2?
Yes, and that is the recommended approach. A single IR plan that addresses the requirements of both frameworks avoids duplication and reduces the risk of conflicting procedures. The plan may need supplemental sections to address CPCSC-specific requirements, such as notification procedures for incidents involving controlled defence information, but the core plan should be unified.
What evidence do assessors expect for flaw remediation?
The evidence package typically includes vulnerability scan reports showing identified flaws, ticketing or tracking records showing when each flaw was assigned and remediated, evidence of verification (re-scan or confirmation), and metrics showing adherence to defined SLAs. Exceptions and delays should be documented with justification, not hidden. An assessor reviewing remediation evidence is looking for a consistent, disciplined process, not a perfect record.
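For the adherence metric specifically, a simple aggregate over closed findings is usually enough. A sketch, assuming each finding record carries severity and timestamps (the field names are ours):

```python
from datetime import timedelta

def sla_adherence(findings: list[dict], sla_windows: dict[str, timedelta]) -> float:
    """Percentage of closed findings remediated within their SLA window.

    Each finding is assumed to look like:
    {"severity": "high", "identified_at": datetime(...), "remediated_at": datetime(...)}
    """
    closed = [f for f in findings if f.get("remediated_at") is not None]
    if not closed:
        return 100.0
    met = sum(
        1 for f in closed
        if f["remediated_at"] - f["identified_at"] <= sla_windows[f["severity"]]
    )
    return 100.0 * met / len(closed)
```

Pass in the same SLA mapping that the remediation policy defines (like the `SLA_WINDOWS` table sketched earlier) so the metric and the policy cannot drift apart.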
Building IR and SI controls for CPCSC is operational work, not a documentation exercise. The standard rewards organizations that can demonstrate working processes with real evidence over those that produce comprehensive policies nobody follows.
If your organization is preparing for CPCSC Level 1 self-assessment and needs to build or validate IR and SI controls, reach out for a conversation about where your current program stands and what it takes to close the gaps.
Ready to Start Your Compliance Journey?
Get a clear, actionable roadmap with our readiness assessment.
About the Author
Former security architect for Bank of Canada and Payments Canada. 20+ years building compliance programs for critical infrastructure.
Ready for CPCSC Level 1?
Score your readiness across the 6 expected control families. Free.
Take the Scorecard