SOC 2 Risk Management for Hybrid and On-Prem Environments

Reviewed by Ali Aleali, CISSP, CCSP · Last reviewed April 12, 2026

TL;DR

  • Risk management maps to CC3.2 (risk identification and analysis), CC3.3 (fraud risk), and CC3.4 (changes that affect internal control)
  • On-prem and hybrid risk registers cover a wider threat model than cloud-native: physical access, hardware failure, multi-year procurement cycles, firmware supply chains, and key person dependencies in small ops teams
  • A usable register is structured by category, scored consistently (qualitative or quantitative), reviewed on cadence, and updated when events warrant. A 200-row spreadsheet nobody opens is not a register
  • Risk acceptance is a deliberate, documented decision with an authorized approver, named compensating controls, and a review date. An undocumented gap is the failure mode auditors find
  • Provider risk evaluation belongs in the program: read the colocation provider's SOC 2 (or CSAE 3000) report, ask for trust material from Leaseweb Canada, OVH Cloud, and any other in-scope subservice provider

A firmware advisory drops on a Tuesday. The vendor patches its own BMC management software. The team patches several thousand baseboard management controllers across two colocation facilities. Where did that risk live in the register before Tuesday? Was BMC firmware supply chain a row, or was it folded into firmware patching alongside switches and storage? Was the residual risk rating updated after the last vendor security disclosure, or is it still set to whatever the original assessor wrote eighteen months ago? Was anybody assigned ownership of the response, or does the answer arrive when somebody happens to read the advisory?

A SOC 2 risk management program for hybrid and on-prem infrastructure has to answer those questions on a Wednesday morning, not on the Friday before the auditor arrives. Cloud-native risk registers are usually short and mostly cover SaaS vendors and identity. On-prem and hybrid risk registers are wider, deeper, and structured around a different threat model: physical access, hardware failure, multi-year procurement cycles, firmware supply chains, key person dependencies in small ops teams. CC3.2, CC3.3, and CC3.4 are the Trust Services Criteria that govern how the program identifies, evaluates, and responds to those risks. This post walks through how to run that program so it stays usable long after the audit closes. It is part of the broader SOC 2 readiness guide for bare metal SaaS, alongside the sibling posts on on-prem network security and on-prem vulnerability scanning.

The Threat Model Is Different

Several risk categories show up on a hybrid or on-prem register that rarely appear on a pure-cloud one. The table below shows where cloud-native registers thin out and where on-prem registers have to go deeper.

Risk category | Cloud-native register | Hybrid or on-prem register
Physical access | Carved out to cloud provider | Cage access, visitor escort, tampering, vendor hands-and-eyes visits, camera coverage
Hardware failure | Abstracted by the provider | Disk, RAID, PSU, memory, switches, cooling, summer thermal events
Firmware and supply chain | Rare, mostly indirect | BIOS, BMC, switch firmware, storage microcode, bootkits, malicious updates
Hardware EOL and lock-in | N/A | 5 to 10 year procurement cycles, named replacement plans, vendor support end dates
Provider concentration | Region-level redundancy | Single cage, single facility, single transit provider exposure
People and key person | Moderate, broader teams | Three-person ops teams, tribal knowledge, segregation-of-duties limits

Physical access risk. The colocation cage is a real place. Badge access, cage locks, visitor escort procedures, camera coverage, tailgating, hardware tampering, and unauthorized hands-and-eyes visits by vendor technicians are all on the table. The provider carries most of the physical controls, but the company still owns the part that touches its own cage and its own hardware.

Hardware failure risk. Disks die, RAID arrays lose a second drive during rebuild, power supplies trip, memory corrupts silently, switches reboot unexpectedly, cooling fails on a summer afternoon. Cloud abstractions hide most of this. On-prem exposes every bit of it. This ties directly to the backup and disaster recovery program, since hardware failure is the scenario DR testing validates.

Firmware and supply chain risk. Out-of-band management cards, BIOS, baseboard management controllers, switch firmware, and storage microcode are all code. They all update. The supply chain for those updates runs through hardware vendors whose own software supply chains have been compromised in well-publicized incidents. Firmware bootkits, BMC compromises, and malicious microcode updates are credible risks that belong on the register even when likelihood is low, because realized impact is total.

Hardware end-of-life and lock-in risk. Bare metal procurement runs on a five-to-ten-year cycle. A database server on hardware past vendor support is a known risk with a known timeline, and auditors expect it on the register with a replacement plan and a named end date.

Provider concentration risk. A company running its entire production footprint out of one cage in one data center has a concentration risk that belongs on the register whether or not the business has a plan to address it. Acceptance is a valid treatment, but acceptance with documented justification looks very different from silent absence.

People risk. On-prem ops teams are small. Three engineers, sometimes fewer. When one person holds the tribal knowledge of the network, the hypervisor cluster, and the restoration procedure, and that person leaves, the program loses a control. Key person dependency, insider threat, and small-team segregation-of-duties limits all belong in this category under CC3.2.

A register that identifies these categories looks credibly on-prem. A register that copies a generic SaaS template and swaps the company name looks like the team never read the infrastructure.

The Risk Register Nobody Opens

The most common failure mode is not missing risks. It is a register so sprawling and so disconnected from actual operations that nobody inside the company ever opens it voluntarily. Two hundred rows in a spreadsheet, last edited eleven months ago.

Small enough to actually read

For most small and mid-market shops, five to fifteen risk scenarios is the right range. Enough to cover the real threat surface, few enough that the team can review each one in a quarterly session without glazing over. Risks cluster naturally under headings such as physical, availability, confidentiality, integrity, vendor, and people.
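A register at that size fits comfortably in a structured record. A minimal sketch of one entry shape follows; the field names, category list, and the two scenarios are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class Category(Enum):
    PHYSICAL = "physical"
    AVAILABILITY = "availability"
    CONFIDENTIALITY = "confidentiality"
    INTEGRITY = "integrity"
    VENDOR = "vendor"
    PEOPLE = "people"

class Treatment(Enum):
    MITIGATE = "mitigate"
    ACCEPT = "accept"
    TRANSFER = "transfer"
    AVOID = "avoid"

@dataclass
class RiskEntry:
    title: str          # specific scenario, not a generic category
    category: Category
    likelihood: str     # "low" | "medium" | "high" under the written rubric
    impact: str
    treatment: Treatment
    owner: str          # named person, not a team alias
    next_review: date   # every entry carries a review date

# A register of five to fifteen of these is reviewable in one session.
register = [
    RiskEntry("BMC firmware supply chain compromise", Category.INTEGRITY,
              "low", "high", Treatment.MITIGATE, "ops-lead", date(2026, 7, 1)),
    RiskEntry("Single-facility concentration", Category.AVAILABILITY,
              "medium", "high", Treatment.ACCEPT, "cto", date(2026, 7, 1)),
]
```

Whatever the actual shape, every entry carrying an owner, a treatment, and a review date is what keeps the register from drifting into the 200-row spreadsheet.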

The register also needs one home. From real engagements: a fintech preparing for SOC 2 Type 1 had risks scattered across the GRC platform, a separate spreadsheet, and informal Slack threads. During a preparation call, both the security lead and the outside consultant discovered they had independently created overlapping MFA-related risk entries without realizing it, and several previously flagged risks had been fully remediated but were still marked open, making the posture look worse than reality. The fix was non-negotiable: one location for all risks, and that location is the GRC platform, because the GRC platform is what gives the auditor direct visibility.

Scattered tracking is one of the most common failures under CC3.2 and CC3.4. When risks live in multiple systems, no single view exists of what has been identified, accepted, mitigated, or remediated, and the auditor cannot get the coherent program CC3.2 expects.

Scoring: Quantitative vs Qualitative

How risks are scored matters less than whether scoring is consistent and defensible. Both models work under SOC 2.

Qualitative scoring (low, medium, high for likelihood and impact, producing a heat map) is the right default for most on-prem and hybrid shops. It is fast to apply, easy to explain, and defensible under CC3.2 when the rubric for what counts as low, medium, or high is written down. A quarterly calibration step keeps the scores from drifting.

Quantitative scoring (expected loss in dollars, FAIR-style modeling) is worth the effort in narrow cases: when a specific risk is large enough to drive a real spending decision, such as buying a second colocation facility, contracting a hot standby, or replacing a hardware class ahead of end of life. The practical middle ground is qualitative scoring for the whole register, quantitative modeling applied to the two or three largest risks where treatment has real budget implications. The auditor accepts either model, provided the rubric is documented and scoring is applied consistently.
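That middle ground reduces to a few lines of arithmetic. A sketch of the heat-map scoring and a FAIR-flavored expected annual loss calculation; the levels, frequencies, and dollar figures are illustrative assumptions:

```python
# Qualitative: ordinal levels multiply into a heat-map score of 1..9.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def heat_score(likelihood: str, impact: str) -> int:
    return LEVELS[likelihood] * LEVELS[impact]

# Quantitative (FAIR-flavored): expected annual loss, applied only to the
# two or three risks large enough to drive a real spending decision.
def expected_annual_loss(events_per_year: float, loss_per_event: float) -> float:
    return events_per_year * loss_per_event

# A facility-level outage roughly once every five years, costing ~$400k
# per event (illustrative numbers):
eal = expected_annual_loss(0.2, 400_000)   # $80,000 per year
```

An $80k-per-year exposure is the kind of number that makes a conversation about a second facility or a hot standby concrete, which is exactly where quantitative modeling earns its effort.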

Fraud Risk Belongs on the Register

CC3.3 is the criterion most on-prem programs overlook. It explicitly requires the risk assessment to consider the potential for fraud, including fraudulent reporting, misappropriation of assets, corruption, and management override of controls. Most technology companies handle fraud detection operationally but never write it down. The register has zero fraud scenarios. The auditor notices.

The gap is almost never in capability. From real engagements: a fintech had robust fraud detection built into its platform, continuously tuning detection patterns and monitoring for fraudulent activity. When the SOC 2 risk assessment was reviewed, the formal documentation of fraud risks was empty. The fix took an afternoon: document two to three fraud-specific scenarios, connect them to the existing detection mechanisms, and train the ops team on the examples. Informal knowledge became a documented control with audit-ready evidence.

For hybrid and on-prem shops, the fraud scenarios worth capturing usually fall into three buckets:

  • IT and access fraud. Management override of access reviews, a privileged admin making unauthorized changes that benefit themselves or a third party, fraudulent access granted to a colluding outside party.
  • Asset fraud. Theft or unauthorized disposal of hardware, data, or backup media. On-prem expands this surface because hardware is physical and portable.
  • Financial and reporting fraud. Revenue recognition, expense reporting, and ledger manipulation, usually outside the security team's direct ownership but worth including so the CFO or finance lead owns the scenario.

Two or three is enough, zero is a problem

Two or three fraud entries is enough for most shops. Zero is a problem. Twenty is theater. The goal is a documented acknowledgement that fraud is a considered risk category, linked to existing detection and response mechanisms.
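That linkage can be sketched as a simple mapping; the scenario wording and mechanism names below are illustrative, not a specific company's controls:

```python
# Fraud scenarios (CC3.3) documented with links to detection mechanisms
# that already run, so the register reflects real controls.
fraud_scenarios = {
    "Privileged admin makes an unauthorized self-benefiting change":
        ["admin action audit log review", "quarterly access review"],
    "Theft or unauthorized disposal of backup media":
        ["media inventory reconciliation", "cage camera footage review"],
    "Management override of the access review process":
        ["exception approvals reviewed outside the approving chain"],
}

def undocumented(scenarios: dict[str, list[str]]) -> list[str]:
    """Scenarios with no linked mechanism are paper risks, not controls."""
    return [s for s, mechs in scenarios.items() if not mechs]
```

A scenario with an empty mechanism list is the afternoon of work the fintech example above had to do: the capability existed, the link did not.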

Risk Acceptance: What the Auditor Actually Reads

Accepted risks are where most registers fall apart. A risk flagged as accepted without justification is worse than a risk that was never identified, because it signals the team knows the problem and chose to do nothing without recording why.

A defensible acceptance record answers five questions in writing: what the risk is (specifically, not as a category), the likelihood and impact under the documented rubric, why it is being accepted rather than mitigated or transferred, what compensating controls reduce residual exposure, and who approved the acceptance with a named review date.

The distinction between mitigate and accept is more than a label. Mitigate means something was done, and there must be evidence of the action. Accept means a conscious decision was made, and there must be a justification and a named approver. The recurring failure pattern is risks marked mitigate with no linked remediation task, or risks marked accept with no approval record. Both are evidence gaps. A handful of deliberate, well-documented acceptances reads as a mature program. A clean register with zero acceptances usually means the process is not being used, and the auditor will probe until they find the risks that were quietly ignored rather than formally accepted.
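The five questions map directly onto a record shape. A hedged sketch with hypothetical field names and an illustrative acceptance:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Acceptance:
    risk: str                          # the specific scenario, not a category
    likelihood: str                    # under the documented rubric
    impact: str
    justification: str                 # why accept, not mitigate or transfer
    compensating_controls: list[str]   # what reduces residual exposure
    approver: str                      # named, authorized person
    review_date: date

def is_defensible(a: Acceptance) -> bool:
    """An acceptance with any field left blank is the gap auditors find."""
    return all([a.risk, a.justification, a.compensating_controls,
                a.approver, a.review_date])

rec = Acceptance(
    risk="Single transit provider at the primary facility",
    likelihood="medium", impact="high",
    justification="Second provider not economically justifiable at current revenue",
    compensating_controls=["out-of-band management link", "documented failover runbook"],
    approver="CTO", review_date=date(2026, 10, 1),
)
```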

Evaluating Infrastructure Providers for Risk

Vendor risk is where on-prem and hybrid shops feel the difference most sharply. When the colocation facility, hardware vendors, transit provider, and remote hands service are all third parties, vendor risk assessment under CC3.2 becomes a substantial workstream. The vendor management post covers the subservice organization methodology in full; this section covers the risk-identification half.

For infrastructure providers, the primary evidence source is their SOC 2 report. In Canada, the equivalent attest standard is CSAE 3000, and Canadian providers often publish trust material through a CSAE 3000-audited report that mirrors the structure and opinion format an AICPA report produces. Leaseweb Canada and OVH Cloud are two examples of infrastructure providers serving Canadian workloads who publish trust material that user entities can pull into their vendor risk file. Leaseweb Canada operates a Quebec-based facility and makes trust documentation available to qualifying user entities. OVH Cloud publishes certification summaries and trust documentation for its Canadian regions. Both can be read as evidence of the provider's control environment and factored into the risk assessment rather than taken on faith.

Complementary user entity controls are register entries

Read the provider's report, note the domains covered and carved out, identify the complementary user entity controls the provider expects the customer to implement, and record each of those as an entry on the register. An unaddressed complementary user entity control is a risk the customer has implicitly accepted without realizing it.
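As an illustrative sketch (the provider name and CUEC wording are made up), turning a report's CUEC list into open register entries so none is silently accepted:

```python
# Each complementary user entity control the provider expects becomes an
# open register entry until the customer can evidence it.
def cuec_entries(provider: str, cuecs: list[str]) -> list[dict]:
    return [{"risk": f"Unaddressed CUEC ({provider}): {c}",
             "category": "vendor",
             "treatment": "mitigate",
             "status": "open"}
            for c in cuecs]

entries = cuec_entries("ExampleColo", [
    "Customer manages logical access to equipment in its own cage",
    "Customer maintains and reviews its list of authorized badge holders",
])
```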

Provider concentration is its own entry. "We operate out of one facility because a second facility is not economically justifiable at current revenue" is a valid acceptance when documented with a named approver and a review cadence. Bare absence is not.

Change-Driven Risk Review: CC3.4 in Practice

CC3.4 is the criterion most teams read and forget. It requires the risk assessment to account for changes that could significantly impact the system of internal control, and the Points of Focus name changes in the external environment, the business model, and leadership as scenarios that should trigger a review. In practice, CC3.4 is the reason risk assessment cannot be an annual exercise.

Changes that should trigger an out-of-cycle risk review on a hybrid or on-prem program include a new colocation facility or transit provider, a major hardware refresh, a new product line that materially changes the data handled, the loss or replacement of a key ops team member, a significant regulatory change such as Law 25 or PHIPA, an acquired business with its own footprint, a material incident that revealed a previously unidentified failure mode, and the result of a penetration test or red team exercise that surfaced new attack paths.

The practical mechanism is a short triage step attached to the existing change management workflow. When a change ticket carries certain tags (new vendor, new facility, new product line, material personnel change), the workflow generates a linked risk review item routed to whoever owns the register. Most reviews conclude that nothing material changed. Some surface a new entry or an update. Either outcome is evidence of CC3.4 operating as designed. The change management post walks through how the ticket workflow itself is structured so these triggers attach cleanly.
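The triage step can be sketched in a few lines; the tag names and the review-item shape are assumptions, not a specific ticketing system's API:

```python
# Change tags that should fire a scoped CC3.4 risk review (illustrative set).
TRIGGER_TAGS = {"new-vendor", "new-facility", "new-product-line", "personnel-change"}

def risk_review_needed(ticket_tags: set[str]) -> bool:
    return bool(ticket_tags & TRIGGER_TAGS)

def triage(ticket_id: str, tags: set[str], register_owner: str):
    """Return a linked review item when a trigger tag fires, else None."""
    if not risk_review_needed(tags):
        return None
    return {"ticket": ticket_id,
            "assignee": register_owner,
            "action": "scoped CC3.4 risk review"}
```

Most tickets fall through with `None`; the occasional review item, whatever its outcome, is the evidence that CC3.4 operates continuously.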


Annual Review and Event-Triggered Reviews

A program on a yearly cadence alone fails CC3.4. A program on event-driven reviews alone drifts. The answer is both.

Annual review. Once a year, the full register is walked end to end in a structured session. Every entry is revisited: is the risk still accurate, are the likelihood and impact still correct under the current rubric, is the treatment still appropriate, are the compensating controls still in place, is the named owner still the right person. Remediated risks are closed out with evidence. New risks are added. The session minutes become audit evidence.

Quarterly review. A lighter-touch check catches drift. Focused on the top-scored risks and any new entries since the last check, not the whole register. Thirty minutes, three or four people, one agenda item per risk.

Event-triggered review. On top of that cadence, any CC3.4 trigger generates an interim review scoped to the change, not the full register. The output is a dated note or a set of register updates attached to the triggering event.
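The three layers reduce to a small due-date check. A sketch with assumed 365- and 91-day thresholds:

```python
from datetime import date, timedelta

def reviews_due(today: date, last_annual: date, last_quarterly: date,
                open_triggers: list[str]) -> list[str]:
    """Which of the three review layers are due right now."""
    due = []
    if today - last_annual >= timedelta(days=365):
        due.append("annual full-register walk")
    if today - last_quarterly >= timedelta(days=91):
        due.append("quarterly top-risk check")
    # Event-triggered reviews are always due while the trigger is open.
    due += [f"event review: {t}" for t in open_triggers]
    return due
```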

This three-layer cadence produces continuous evidence rather than a once-a-year artifact. Continuous evidence is the difference between an audit that takes a week and one that takes six.

How CC3.2, CC3.3, and CC3.4 Points of Focus Show Up in Risk Management

The AICPA's 2017 Trust Services Criteria (with revised Points of Focus, 2022) lists specific characteristics auditors evaluate when assessing whether the risk management program is suitably designed and operating effectively. The paraphrased characteristics below map the Points of Focus to a hybrid and on-prem program.

CC3.2, Risk Identification and Analysis

CC3.2's Points of Focus call for risks to be identified across organization, business unit, and functional levels, with both internal and external factors considered, and with the right levels of management involved. Identified risks are analyzed for potential significance, and the assessment considers how each one should be managed (accept, avoid, reduce, or share). The criterion also calls out threats to objectives from intentional acts, unintentional acts, and environmental events, along with vulnerabilities in system components and threats arising from vendors and other third parties. In a hybrid or on-prem program, this translates to a cross-functional register where the security team owns the technical risks, the CFO or finance lead owns the financial and fraud risks, and the CEO owns the business-model risks, all under one scoring rubric. The colocation provider, hardware vendors, remote hands service, and transit provider all show up on the register as third-party entries. The bridge between this criterion and the technical side of the program is the vulnerability scanning workflow that feeds newly discovered system weaknesses into the register.

CC3.3, Fraud Risk

CC3.3's Points of Focus require the assessment to consider multiple types of fraud (fraudulent reporting, loss of assets, corruption), to consider the incentives and pressures that might lead to fraud, to consider the opportunities for unauthorized acquisition or use of assets, to consider how personnel might rationalize inappropriate actions, and to explicitly consider IT- and access-related fraud risks. The three-bucket fraud model above (IT and access fraud, asset fraud, financial and reporting fraud) covers the common categories for a hybrid or on-prem shop. The IT and access-related fraud Point of Focus is the one most directly tied back to privileged access, management override of controls, and access concentration on small ops teams.

CC3.4, Risk Assessment of Changes

CC3.4's Points of Focus require the assessment to consider changes in the external environment (regulatory, economic, physical), the business model (new product lines, reorganizations, acquisitions, rapid growth, new technologies), and leadership (management changes and their attitudes toward internal control). For a hybrid or on-prem program, this is the criterion that turns risk assessment from an annual artifact into a continuous process. Law 25 and PHIPA updates, a new customer contractual obligation, a major transit outage, a CTO departure, a hardware refresh, or an acquired business each trigger an interim review scoped to the change.

Explore further in Framework Explorer (CC3.2 · CC3.3 · CC3.4) for the full requirement text, implementation guidance, evidence types, and cross-framework mappings.

Source: AICPA TSP Section 100, 2017 Trust Services Criteria with Revised Points of Focus (2022). Point of Focus characteristics described in Truvo's words and mapped to a hybrid and on-prem risk management implementation pattern. Consult the source document for the official AICPA text.

Where This Lands in an Effective Security Program

Teams that pass CC3.2, CC3.3, and CC3.4 cleanly on hybrid or on-prem are not the ones with the longest risk registers. They are the ones whose register is honest about the infrastructure, small enough to actually review, fraud-aware rather than fraud-silent, and wired into change management and vendor assessment so that CC3.4 happens continuously rather than once a year.

Build the program once. Map SOC 2, ISO 27001, and the Canadian frameworks onto it. The same register that satisfies CC3.2 through CC3.4 satisfies ISO 27001 Clause 6.1, CPCSC risk management requirements, and the risk-assessment expectations in ITSP.10.171. The alternative, a generic SaaS register retrofitted onto infrastructure it was never designed for, is a fast way to produce evidence that the program does not match reality.

Frequently Asked Questions

What do SOC 2 CC3.2, CC3.3, and CC3.4 require for risk management?

CC3.2 requires the organization to identify and analyze risks to the achievement of its objectives, including risks from internal and external factors, vendors, and other third parties. CC3.3 requires the risk assessment to explicitly consider the potential for fraud, including fraudulent reporting, asset misappropriation, corruption, and management override of controls. CC3.4 requires the assessment to consider changes in the external environment, business model, and leadership that could significantly impact internal control. None of the three criteria prescribe a specific tool or cadence. All three expect a program that matches how the organization operates and produces continuous evidence.

How big should a SOC 2 risk register be for a hybrid or on-prem environment?

Five to fifteen risk scenarios is the right range for most small and mid-market shops. That is enough to cover physical, availability, confidentiality, integrity, vendor, and people categories without producing a document nobody reads. A 200-row register is almost always a sign that the team confused coverage with usefulness. The test is whether the ops team can review every entry in a quarterly session without glazing over.

Do auditors prefer qualitative or quantitative risk scoring?

Auditors accept both, provided the rubric is documented and applied consistently. Qualitative scoring (low, medium, high for likelihood and impact) is the practical default for most hybrid and on-prem shops because it is fast to apply and easy to explain. Quantitative scoring (FAIR-style expected loss in dollars) is worth the effort on two or three risks large enough to drive real spending decisions, such as a second colocation facility or a hardware class replacement.

How should fraud risk be documented under CC3.3 for a technology company?

Two or three fraud-specific scenarios covering IT and access fraud (management override, privileged admin abuse, collusive access grants), asset fraud (theft or unauthorized disposal of hardware, data, or backup media), and financial and reporting fraud (owned by the CFO or finance lead). Each scenario should link to existing detection or monitoring mechanisms so the documentation reflects controls that already run, not theoretical ones. Zero fraud scenarios is a recurring CC3.3 finding.

How do colocation and infrastructure providers fit into the risk assessment?

The primary evidence source is the provider's SOC 2 or CSAE 3000 report. Read it, note the control domains covered and carved out, identify the complementary user entity controls the provider expects the customer to implement, and record each of those on the register. Leaseweb Canada and OVH Cloud are two examples of Canadian-serving providers who publish trust material that can be pulled directly into the vendor risk file. Provider concentration (single cage, single facility) is its own register entry, with a documented acceptance and review cadence when a second facility is not yet economically justifiable.

How often should a SOC 2 risk register be reviewed?

A three-layer cadence works best. A full annual review walks every entry end to end and produces audit evidence through session minutes. A lighter quarterly review catches drift on the top-scored risks and recent additions. Event-triggered reviews run whenever a CC3.4 condition fires: new colocation facility, hardware refresh, key personnel change, regulatory update, material incident, or penetration test finding. Annual reviews alone fail CC3.4. Event-driven reviews alone drift.


About the Author

Former security architect for Bank of Canada and Payments Canada. 20+ years building compliance programs for critical infrastructure.
