SOC 2 Data Protection for On-Premise Datastores and Physical Media

Reviewed by Ali Aleali, CISSP, CCSP · Last reviewed April 12, 2026

TL;DR

  • Data protection maps to CC6.1 (logical access architecture), CC6.6 (data in transit), and CC6.7 (information disposal)
  • Encryption at rest on bare metal is three layers stacked: volume encryption (LUKS, BitLocker, FileVault), database TDE (SQL Server, Oracle, PostgreSQL pgcrypto), and storage controller hardware encryption. Each layer answers a different threat
  • Key management without KMS typically uses HashiCorp Vault, hardware HSMs (YubiHSM, Thales, Entrust), or encrypted key stores; the discipline is rotation, audit trail, and access policy applied deliberately
  • Encryption in transit is mutual TLS, site-to-site IPSec, and certificate material issued and rotated without public ACME
  • Canadian data residency is a deliberate decision about where the rack lives. Leaseweb Canada (Quebec) and OVH Cloud (Canadian regions) are two examples of compliant Canadian colocation and hosting options
  • Media disposal follows NIST SP 800-88 Rev. 1 (Clear, Purge, Destroy) with chain-of-custody documentation

On bare metal, encryption at rest is not a single control. It is three layers stacked on top of each other: volume encryption (LUKS, BitLocker, FileVault), database transparent data encryption (SQL Server TDE, Oracle TDE, PostgreSQL pgcrypto), and storage controller hardware encryption on enterprise SAN and NAS. Each layer answers a different threat. Volume encryption protects against physical drive theft. Database TDE protects against unauthorized read access at the application boundary. Hardware encryption protects against drive failure replacement workflows where the failed drive leaves the building. A SOC 2 program that conflates the three is a program that misses one of them under audit.

The same layered reality shows up across the rest of data protection on bare metal. Encryption in transit is mutual TLS between internal services, site-to-site IPSec tunnels, and certificate material that has to be issued and rotated without public ACME. Key management is software the team chose, deployed, and operates. Physical media leaves the building through documented procedures. And for Canadian SaaS preparing for SOC 2 on bare metal infrastructure under provincial privacy regimes, data residency is a decision about which rack, in which province, in which building, not a cloud dropdown. CC6.1, CC6.6, and CC6.7 are the Trust Services Criteria that govern this territory, and none of them prescribe a specific tool. They describe a program where data is restricted, encrypted across its lifecycle, and handled deliberately when it moves or leaves the organization.

How Data Protection Maps to the Trust Services Criteria

Three criteria converge on the data protection domain, each covering a different surface.

Three criteria, one data lifecycle

CC6.1 governs data at rest and the keys that protect it. CC6.6 governs data in transit across system boundaries. CC6.7 governs information as it moves, transmits, or leaves the organization in physical form. A data protection program that maps cleanly to all three is a program that holds up under audit.

CC6.1 governs the logical access architecture that protects information assets. Its Points of Focus include Uses Encryption to Protect Data (at rest, in process, and in transmission when the risk strategy calls for it) and Protects Cryptographic Keys across generation, storage, use, and destruction. For on-prem programs, CC6.1 is the home for volume encryption, database TDE, and key management decisions.

CC6.6 governs logical access security against threats from outside the system boundaries. Its Points of Focus cover the use of encryption technologies or secure communication channels to protect data in transmission and the protection of authentication credentials crossing boundaries. In practice this is where TLS between internal services, site-to-site IPSec, and VPN termination live, along with the certificate lifecycle that keeps those channels trusted.

CC6.7 governs the restriction of transmission, movement, and removal of information. Its Points of Focus include encrypting data beyond connectivity access points, protecting removable media such as USB drives and backup tapes, and protecting endpoint devices. CC6.7 is where the conversation about tape backups, removable drives, and end-of-life media lands. It sits next to CC6.5, which governs the sanitization step when a physical asset is being retired, and the two criteria tend to be evidenced together.

Read together, the three criteria describe a lifecycle. Data is protected where it rests (CC6.1), when it moves across a network (CC6.6), and when it leaves the system boundary in physical form (CC6.7, with CC6.5 backing up the disposal step). The full paraphrased Points of Focus for each criterion are in the reference section near the end. None of the three prescribe a tool. All of them expect a program that matches how the environment actually stores, moves, and retires data, and that produces continuous evidence of each step.

Scope: What Data Protection Has to Cover On-Prem

A typical on-prem data footprint is more varied than a single-cloud environment. Production databases on dedicated servers or virtualized clusters. Application tier caches holding short-lived sensitive data. File shares holding exported reports and operational artifacts. Enterprise SAN or NAS appliances holding the block and file storage underneath. Endpoint workstations holding source code, credentials, and cached customer data. Backup media rotating through tape libraries, removable drives, or a secondary facility. Removable media appearing when someone moves a database export between isolated networks or decommissions hardware.

Each of those locations needs an answer to three questions: how is it encrypted at rest, how is it encrypted when it moves, and how is it handled when it is no longer needed. A program that answers those three consistently across every data location is what CC6.1, CC6.6, and CC6.7 describe. A program that answers them for the database and forgets the backup tapes is what gets flagged.
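The three-question check above can be expressed as a small recurring script. The sketch below is illustrative, not a prescribed tool: the inventory shape and the example locations are assumptions, but the logic is the one the criteria describe, namely that every data location needs a documented answer for rest, transit, and disposal.

```python
# Illustrative coverage check (inventory shape and locations are
# hypothetical): flag any data location that lacks an answer to one of
# the three questions CC6.1, CC6.6, and CC6.7 ask.
REQUIRED_ANSWERS = ("at_rest", "in_transit", "disposal")

def coverage_gaps(inventory):
    """Return {location: [missing answers]} for incomplete entries."""
    gaps = {}
    for location, answers in inventory.items():
        missing = [q for q in REQUIRED_ANSWERS if not answers.get(q)]
        if missing:
            gaps[location] = missing
    return gaps

inventory = {
    "prod-postgres": {"at_rest": "LUKS + pgcrypto", "in_transit": "mTLS",
                      "disposal": "NIST Purge"},
    "backup-tapes":  {"at_rest": "Veeam AES-256", "in_transit": "courier",
                      "disposal": None},
    "dev-laptops":   {"at_rest": "FileVault", "in_transit": "VPN"},
}

# The backup tapes and the laptops both lack a disposal answer, which is
# exactly the kind of gap that gets flagged under audit.
print(coverage_gaps(inventory))
```

The point of the sketch is the shape of the review, not the code: any location with an empty cell in the three-question matrix is a finding waiting for a sample.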

The tiered asset classification used across the on-prem cluster applies here too. Tier 1 covers customer data and anything whose loss or exposure would be a breach event. Tier 2 covers operational data that supports the service. Tier 3 covers logs, backups of non-sensitive systems, and internal artifacts. Encryption and handling rigor scales with the tier, and the data classification policy is the document that makes the tiering defensible.

Encryption at Rest: The Three-Layer Pattern

Without a cloud KMS to lean on, encryption at rest becomes a layered set of choices: volume tools at the platform layer, database features at the engine layer, and storage hardware at the array layer. The layers are not mutually exclusive, and the strongest programs use more than one where the threat model calls for it. The summary below lines each layer up against the threat it answers and the evidence it produces.

  • Volume and filesystem. Tools: LUKS (Linux), BitLocker (Windows Server), FileVault (macOS). Threat it answers: physical theft of the disk or server. Evidence: config screen showing encryption enabled, key release method, sample verification across systems.
  • Database engine. Tools: SQL Server TDE, Oracle TDE, PostgreSQL pgcrypto or commercial fork. Threat it answers: compromised OS account reading database files directly. Evidence: TDE configuration, encryption key hierarchy, certificate or wallet backup procedure.
  • Storage array and backup. Tools: self-encrypting drives in SAN and NAS, Veeam, Commvault, Bacula, Restic. Threat it answers: drive replacement workflow, backup media leaving the facility. Evidence: array config, FIPS validation notes, backup encryption status on completed jobs.
  • Key protection. Tools: HashiCorp Vault, YubiHSM, Thales, Entrust, encrypted PKCS12 stores. Threat it answers: key exposure, rotation failure, undocumented access to cryptographic material. Evidence: Vault audit log, rotation schedule, documented destruction procedure.

Volume and filesystem encryption. On Linux, LUKS is the default for full-disk encryption of server volumes, with keys unsealed at boot through a key server, a TPM, or a manual passphrase depending on the threat model. On Windows Server, BitLocker handles the same role with TPM-backed key release and Active Directory recovery key escrow. On macOS endpoints, FileVault covers developer laptops holding source code, credentials, or cached customer data. Evidence is consistent across platforms: configuration showing encryption is enabled, the key management approach, and a sample of systems verified through whichever compliance scanner the team already runs.

Database-level encryption. Volume encryption protects against theft of the physical disk. It does not protect against a compromised operating system account that can read the file. Transparent Data Encryption (TDE) raises the bar inside the database engine itself. SQL Server TDE encrypts data and log files with a database encryption key protected by a certificate in the master database. Oracle TDE uses a similar wallet-backed pattern. PostgreSQL does not ship native TDE in community builds, so teams combine volume encryption with application-layer encryption of sensitive columns or use a commercial fork. For Tier 1 data in regulated industries, the layered approach is the default.

Storage array encryption. Enterprise SAN and NAS appliances almost always ship with self-encrypting drives and array-level key management. Turning it on is a configuration decision, and the evidence is the array configuration screen plus whatever key rotation the vendor supports. Auditors accept it readily when the documentation is in place and the vendor's FIPS validation status is noted in the architecture document.

Backup encryption. Backup encryption is a separate control from production encryption, and teams miss this more often than they should. A database encrypted with TDE still produces backup files that need their own encryption once they leave the database server. Veeam, Commvault, Bacula, and Restic all support at-rest encryption of backup files, with keys managed in their own subsystems or in an external key store. The on-prem backup and disaster recovery post walks through how backup encryption threads through the broader DR program.
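The daily evidence check for backup encryption status can be a few lines against whatever job report the backup tool exports. The field names below are assumptions for illustration, not any vendor's real export format; the idea is that a completed job that ran without encryption is surfaced the day it happens, not the day an auditor samples it.

```python
# Hedged sketch (field names are hypothetical, not a vendor schema):
# flag completed backup jobs that ran without encryption enabled.
import json

jobs_json = """
[
  {"job": "sql-prod-full",  "status": "Success", "encrypted": true},
  {"job": "fileshare-incr", "status": "Success", "encrypted": false},
  {"job": "vault-snapshot", "status": "Failed",  "encrypted": true}
]
"""

def unencrypted_completed(jobs):
    """Successful jobs whose output files were written unencrypted."""
    return [j["job"] for j in jobs
            if j["status"] == "Success" and not j["encrypted"]]

# fileshare-incr completed without encryption and needs a ticket.
print(unencrypted_completed(json.loads(jobs_json)))
```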

Key Management Without a Hosted KMS

Encryption at rest is only as strong as the keys that protect it. The cloud convenience of a hosted KMS with automated rotation, audit logging, and IAM-integrated access policy is not available on-prem, so the program stands up equivalent capability out of its own components.

HashiCorp Vault is the most common foundation in on-prem and hybrid environments. Vault handles secrets, encryption-as-a-service for applications, internal PKI, and dynamic database credentials. Deployed in HA mode with integrated storage and auto-unseal against a cloud KMS or hardware module, Vault becomes the audited center of the key management program. For SOC 2 evidence, Vault's audit log is a direct artifact for CC6.1 Protects Cryptographic Keys, showing who accessed which secret or key, when, and from where.
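Turning the raw audit log into the who-accessed-what summary an auditor asks for is a small parsing exercise. The sketch below uses entries modeled loosely on Vault's file audit device output (JSON lines, trimmed here to the fields that matter); the actual entries carry many more fields, so treat the shape as an assumption.

```python
# Simplified sketch of summarizing Vault audit log entries (JSON lines,
# fields trimmed and modeled on the file audit device, not verbatim):
# who read which secret or key path, and how often.
import json
from collections import Counter

audit_lines = """\
{"time": "2026-04-01T09:12:03Z", "type": "response", "auth": {"display_name": "ci-deployer"}, "request": {"operation": "read", "path": "secret/data/db-credentials"}}
{"time": "2026-04-01T09:45:11Z", "type": "response", "auth": {"display_name": "alice"}, "request": {"operation": "read", "path": "transit/keys/customer-data"}}
{"time": "2026-04-01T10:02:54Z", "type": "response", "auth": {"display_name": "ci-deployer"}, "request": {"operation": "read", "path": "secret/data/db-credentials"}}
"""

access = Counter()
for line in audit_lines.splitlines():
    entry = json.loads(line)
    if entry["type"] == "response":  # count completed requests only
        access[(entry["auth"]["display_name"], entry["request"]["path"])] += 1

for (who, path), count in access.most_common():
    print(f"{who} accessed {path}: {count} time(s)")
```

A scheduled version of this, with anomaly thresholds and alerting, is the continuous half of the CC6.1 key-protection evidence; the log retention policy is the other half.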

Hardware security modules belong in the architecture when the risk strategy calls for it. YubiHSM covers smaller-scale key custody and code signing at a price point most mid-market teams can accept. Thales and Entrust network-attached HSMs cover larger environments and FIPS 140-2 Level 3 requirements where regulation or customer contracts demand it. HSMs typically back Vault's master key, the certificate authority root, or the root-of-trust for an encrypted storage array, rather than holding every data key directly.

Encrypted key stores fill the gap for smaller teams that do not need a full Vault deployment. An encrypted PKCS12 file with a strong passphrase, stored with access logging and rotated on a defined schedule, is a defensible option for a small number of long-lived keys when the threat model supports it. What matters is that rotation, access, and destruction are documented and evidenced, not that the store itself is elaborate.

Key management is the least forgiving part of the domain

Auditors ask specifically about generation, storage, use, rotation, and destruction under CC6.1. Each step needs a documented process and an audit trail. A Vault deployment with no runbook fails this question the same way a shared spreadsheet of passphrases does.

Encryption in Transit: TLS, mTLS, and Site-to-Site

CC6.6 cares about how data is protected when it crosses a boundary. On-prem, that covers four distinct paths.

Client to service. TLS 1.3 with a modern ciphersuite on every externally reachable endpoint. Certificates are issued from a public CA for external-facing endpoints and from an internal CA (Vault's PKI secrets engine, or AD Certificate Services) for internal ones. This is where ACME-only assumptions break down. Public ACME does not issue certificates for internal hostnames that never resolve to a public IP, so internal TLS needs its own issuance and rotation story. Vault PKI, step-ca, or a managed internal CA handle this cleanly and produce the certificate inventory CC6.6 evidence requires.

Service to service. Mutual TLS between internal services authenticates both sides of the connection. For applications that cannot be retrofitted with mTLS directly, a service mesh or a reverse proxy pattern terminates TLS at a sidecar and keeps the application code simpler. The evidence is the service certificate inventory and the proxy or mesh configuration showing mTLS is enforced rather than optional.

Site to site. When production runs across two colocation sites, or production and DR sit in different facilities, IPSec tunnels protect the link between them. The tunnel configuration on both ends, the IKE and ESP parameters, and the renewal schedule become the evidence package. For environments where site-to-site tunnels are not an option, application-layer encryption on top of a less-trusted transport reaches the same outcome through a different architecture.

Remote access. The VPN that fronts the environment handles encryption in transit for administrative access. TLS-based VPNs, IPSec with strong group policies, and WireGuard deployments all meet the bar when configured with modern ciphers and documented in the architecture. The access control post covers the identity side of VPN access; CC6.6 cares about the cryptographic side.

Certificate management without public ACME. Teams that lean only on public ACME end up with a gap when the auditor asks how internal certificates rotate. The answer is an internal issuance path (Vault PKI, AD Certificate Services, or a commercial internal CA), a rotation schedule that matches the external one, and a single certificate inventory covering both external and internal material. PKCS12 bundles remain the lingua franca for moving certificate material between systems that do not support modern formats directly.
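The single certificate inventory described above lends itself to a simple renewal-window check. The records and the thirty-day threshold below are illustrative assumptions; the useful property is that internal and external certificates sit in one list and get the same treatment.

```python
# Illustrative certificate inventory check (records and the renewal
# window are assumptions): flag internal and external certificates that
# have entered their renewal window.
from datetime import date

RENEWAL_WINDOW_DAYS = 30

cert_inventory = [
    {"cn": "api.example.ca",     "issuer": "public-ca", "expires": date(2026, 5, 1)},
    {"cn": "vault.internal.lan", "issuer": "vault-pki", "expires": date(2026, 4, 20)},
    {"cn": "db01.internal.lan",  "issuer": "vault-pki", "expires": date(2026, 9, 3)},
]

def due_for_renewal(inventory, today):
    """Certificates expiring within the renewal window, internal or external."""
    return [c["cn"] for c in inventory
            if (c["expires"] - today).days <= RENEWAL_WINDOW_DAYS]

print(due_for_renewal(cert_inventory, today=date(2026, 4, 12)))
```

Because the internal CA material is in the same inventory as the public certificates, the internal rotation story stops depending on one engineer's browser tab.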

Physical Media: Tape, Removable Drives, and the Rules That Survive Real Use

CC6.7 explicitly calls out removable media. Tape backups, external drives used for offsite rotation, USB drives used to move data between isolated networks, and decommissioned internal disks all fall into scope. The controls auditors look for are straightforward when documented.

Every piece of removable media is encrypted before it leaves the primary facility. Tape backups use the backup software's native encryption. External drives use operating system encryption (BitLocker To Go, LUKS, FileVault) or backup tool encryption. USB drives for routine data transfer are hardware-encrypted drives with centralized management, or encrypted software volumes with documented key handling.

Every movement is logged. A media movement register, kept in a ticketing system or a dedicated log, records the media identifier, the date it left the facility, the destination, the individual responsible, and the expected return. Media that goes to an offsite storage vendor, a second colocation site, or a bank safe deposit box has its movement recorded both on departure and on return. Under CC6.7's Restricts the Ability to Perform Transmission Point of Focus, this register is the auditable record that movement is controlled and accountable.

Every piece of media has a lifecycle owner. When a tape rotation ends, a disk fails, or a drive reaches end of life, the retirement procedure is documented, not improvised. That procedure is where the conversation shifts from CC6.7 handling into CC6.5 disposal.
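The media movement register described above can be modeled in a few lines, and the weekly check falls out of the model. The schema here is an assumption, not a prescribed format; any record that carries the media identifier, dates, destination, and owner satisfies the criterion.

```python
# Illustrative media movement register (schema is an assumption): each
# outbound movement records what left, when, to where, who is
# responsible, and when it is expected back. Media past its expected
# return with no return logged is the weekly exception to chase.
from datetime import date

register = [
    {"media_id": "TAPE-0142", "out": date(2026, 3, 2), "destination": "offsite-vault",
     "owner": "j.smith", "expected_return": date(2026, 4, 2), "returned": date(2026, 4, 1)},
    {"media_id": "TAPE-0143", "out": date(2026, 3, 9), "destination": "offsite-vault",
     "owner": "j.smith", "expected_return": date(2026, 4, 9), "returned": None},
]

def overdue(register, today):
    """Media past its expected return date with no return recorded."""
    return [m["media_id"] for m in register
            if m["returned"] is None and today > m["expected_return"]]

print(overdue(register, today=date(2026, 4, 12)))
```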

Data Residency: Where the Data Physically Lives

For Canadian SaaS, provincial privacy regimes make data residency part of the architectural conversation. Quebec's Law 25 requires a Privacy Impact Assessment before personal information about Quebec residents is communicated outside Quebec, which means hosting posture has downstream procedural consequences. PIPEDA, PHIPA, and Law 25 all care about where personal information lives, and enterprise and government customers often impose Canadian residency requirements in their contracts independent of the law. For SaaS that sells into banks, hospitals, or public sector, the contractual residency clause tends to arrive before the legal one.

The architectural answer on-prem is a deliberate hosting decision. Production data, backups, and DR replicas sit in Canadian facilities. Administrative access is restricted to accounts whose jurisdiction is known. Vendor subservice relationships are evaluated for where they actually store the data they touch.

For SaaS that requires Canadian data residency, Leaseweb Canada's Quebec facility and OVH Cloud's Canadian regions are two examples of colocation and hosting options that keep data on Canadian soil. Both providers publish trust material that user entities can use for vendor risk evidence, and both are Canadian-jurisdictional data center options that satisfy the residency ask without forcing a full migration off the on-prem model. This is not a recommendation to choose one over the other; it is a recognition that the Canadian hosting question has real options, and the hosting choice belongs in the data protection architecture conversation rather than as a late-stage procurement question.

The evidence that data residency is enforced is a combination of the hosting contract, the architecture diagram showing where data lives, the backup destination configuration, and the vendor risk file for each subservice organization that touches the data. The vendor management post for when your data center is a subservice organization covers how the colocation relationship itself is documented.

Design On-Prem Data Protection That Holds Up

Truvo builds on-prem encryption, key management, and media handling workflows as part of an effective security program that evidences itself continuously.

Disposal and Sanitization: NIST SP 800-88 Rev. 1

The end of the data lifecycle is where CC6.7's removal language and CC6.5's disposal language overlap. A retired server has disks. Those disks either get sanitized and retained, sanitized and returned to the vendor, physically destroyed on site, or physically destroyed by a certified vendor under a chain-of-custody document. All four are defensible. None are defensible without documented procedure and evidence.

The authoritative reference is NIST SP 800-88 Rev. 1, Guidelines for Media Sanitization. It names three categories: Clear (logical overwrite), Purge (cryptographic erase or block erase commands that defeat laboratory-level recovery), and Destroy (physical shredding, pulverization, or incineration). The right category is driven by the sensitivity of the data and the destination of the media after sanitization. Tier 1 data heading off-site for reuse needs Purge at minimum. Tier 1 data that is unusable after sanitization is typically Destroyed. The certificate of destruction, whether produced internally or by a certified vendor, becomes the evidence artifact.

Good practice with no document gets no credit

A real engagement: a team ran a well-designed data scrubbing script for years that cleared billions of records before every release, and the auditor gave them almost no credit because the process lived only in one engineer's head. Formalizing it into a one-page procedure with a defined cadence and a before-and-after evidence template turned their strongest existing practice into one of their strongest pieces of audit evidence.

The fastest way to harden this domain is to write a one-page sanitization procedure that states the NIST category per data tier, the tool or service used for each category, the chain-of-custody documentation requirement, and the retention period for the resulting evidence. The same pattern applies to disposal. A team that destroys drives well but does not document destruction gets no credit for it.
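The category-per-tier decision in that one-page procedure can even be encoded directly, which keeps the policy and the practice from drifting apart. The mapping below is an example policy under assumptions consistent with the text above, not NIST's own decision table; every program tunes it to its own tiers and disposition paths.

```python
# Example sanitization policy encoded as a lookup (this mapping is an
# illustrative assumption, not the NIST SP 800-88 decision flow itself):
# the category depends on data sensitivity and whether the media is
# reusable after sanitization.
def sanitization_category(tier, media_reusable):
    """tier: 1 (most sensitive) through 3; media_reusable: the media
    will be reused or returned rather than retired as unusable."""
    if tier == 1:
        # Tier 1 heading off-site for reuse needs Purge at minimum;
        # Tier 1 media that is unusable afterward is Destroyed.
        return "Purge" if media_reusable else "Destroy"
    if tier == 2:
        return "Purge"
    return "Clear"

print(sanitization_category(1, media_reusable=True))   # reusable Tier 1 drive
print(sanitization_category(1, media_reusable=False))  # failed Tier 1 drive
print(sanitization_category(3, media_reusable=True))   # internal log volume
```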

Process: The Operating Cadence

Daily. Monitor Vault audit logs and alerting for anomalous key access. Verify certificate expiration dashboards for upcoming renewals. Review backup encryption status on completed jobs.

Weekly. Check media movement register for outbound and returning media. Verify offsite backup media arrived at its destination. Review TLS certificate inventory for any endpoints approaching their expiration window.

Monthly. Review key rotation events against the schedule. Confirm the certificate inventory matches what the scanners find on the network. Review any new data stores introduced into production and confirm they have encryption at rest configured before they hold production data.

Quarterly. Sample volume and database encryption settings across tiers. Review and update the data classification inventory. Run a tabletop on a disposal event to verify the chain-of-custody paperwork still matches reality.

Annually. Review and rotate root keys on the defined schedule. Review the sanitization procedure against the current hardware refresh cycle. Update the data residency architecture document and reconfirm subservice residency.
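The monthly step of reconciling the certificate inventory against what the scanners actually find reduces to a set comparison. Both sets below are illustrative; the two differences are the two findings the review produces.

```python
# Sketch of the monthly inventory-versus-scan reconciliation (hostnames
# are hypothetical): endpoints serving TLS that the inventory does not
# know about, and inventory entries no scan can still reach.
documented = {"api.example.ca", "vault.internal.lan", "db01.internal.lan"}
scanned    = {"api.example.ca", "vault.internal.lan", "legacy.internal.lan"}

undocumented = scanned - documented   # serving TLS, missing from inventory
stale        = documented - scanned   # documented, no longer reachable

print("undocumented:", sorted(undocumented))
print("stale:", sorted(stale))
```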

People: Ownership That Survives a Real Audit

Data protection ownership on a small on-prem team usually splits across three roles. An encryption owner (typically the infrastructure lead) runs the volume, database, and backup encryption stack and owns the Vault or HSM operations. A certificate owner (often the same person for smaller teams, or a platform engineer in larger ones) runs the CA, issuance, and rotation workflow. A records and media owner (often the operations or compliance coordinator) runs the media movement register, the disposal procedure, and the evidence staging. For smaller teams a fractional security team can hold the reviewer role across all three, which is usually where the documentation discipline comes from in the first place.

Programs run on cadence, not intention

The failure mode is the sanitization script with no document, the Vault deployment with no runbook, the certificate inventory that exists only in the tab the lead engineer keeps open, the tape library procedure that nobody has walked through in eighteen months. Data protection is the domain where the difference between documented cadence and undocumented intention shows up on the day the primary owner goes on leave.

Where This Lands in an Effective Security Program

Teams that pass on-prem SOC 2 cleanly on CC6.1, CC6.6, and CC6.7 are not the ones with the most expensive key management stack. They are the ones whose program is honest about how data actually moves through the environment, who documented the encryption choices once and evidenced them consistently, and who treated removable media and disposal as first-class controls rather than operational afterthoughts. Build the program once with a workflow that matches how the team actually stores, transmits, and retires data. Map frameworks onto it without restart. The same program satisfies the data protection outcomes in SOC 2, the cryptography and media handling controls in ISO 27001, and the data-at-rest and data-in-transit requirements in CPCSC and ITSP.10.171. The alternative, a generic policy pack retrofitted onto infrastructure it was never designed for, is the fastest way to produce evidence of the team's own policy violations.

Running On-Prem and Need SOC 2?

Truvo is a Canadian cybersecurity consultancy building effective security programs for on-prem, hybrid, and bare metal infrastructure. Our fractional security team designs data protection workflows that match how the infrastructure actually runs, from volume encryption through key management through physical media handling, with evidence captured as a byproduct of the work. See how we structure SOC 2 on-prem consulting engagements, or book a strategy call.

Further Reading

How CC6.1, CC6.6, and CC6.7 Points of Focus Show Up in Data Protection

Data protection is one of the SOC 2 program activities that spans multiple Trust Services Criteria, which is why teams sometimes miss a Point of Focus when they map a single encryption tool against a single criterion. Here's how the relevant Points of Focus from each criterion translate to the on-prem data protection program described above.

CC6.1: Logical Access Architecture Over Protected Information Assets

CC6.1 governs the access architecture that protects information assets, and several of its Points of Focus sit directly on data protection.

  • Restricts logical access. Logical access to infrastructure, software, and data at rest, in process, and in transmission is restricted through access control software, rule sets, and hardening. For on-prem data protection, this is the layered combination of identity-based access, network-level restriction, and encryption that ensures only authorized access paths can reach data, whether it is sitting on disk, flowing through memory, or moving across the wire.
  • Uses encryption to protect data. Encryption protects data at rest, in processing, and in transmission when the risk strategy calls for it. This is the most direct mapping for the volume encryption, database TDE, mTLS, and site-to-site tunnels described above. The risk strategy document is what makes the encryption choices defensible; it ties each tier and data type to the encryption control applied.
  • Protects cryptographic keys. Keys are protected across generation, storage, use, and destruction, with cryptographic modules, algorithms, key lengths, and architecture appropriate to the risk strategy. Vault, HSMs, and documented encrypted key stores implement this characteristic. The Vault audit log, the key rotation schedule, and the documented destruction procedure are the evidence auditors sample.
  • Identifies and manages the inventory of information assets. Information assets (infrastructure, software, data) are identified, inventoried, classified, and managed. The data classification inventory and the tiered asset model make this practical on-prem.
  • Restricts access to information assets. Data classification, separate data structures, port restrictions, access protocol restrictions, user identification, and digital certificates establish access control rules. Internal PKI issuance and mTLS policies are the certificate half of this characteristic.

CC6.6: Boundary Protection Against External Threats

CC6.6 governs logical access security against threats originating outside system boundaries.

  • Uses encryption technologies or secure communication channels to protect data. Data transmitted beyond connectivity access points is protected through encryption or secure channels. This is the TLS 1.3, mTLS, IPSec, and VPN story described above. The certificate inventory and the tunnel configuration are the evidence.
  • Protects identification and authentication credentials. Identification and authentication credentials are protected during transmission outside the system boundaries. LDAPS, Kerberos with strong encryption types, and modern VPN authentication all implement this. The configuration showing credentials never cross a boundary in plaintext is the evidence artifact.
  • Requires additional authentication or credentials. Additional authentication is required when accessing the system from outside. MFA at the VPN, mTLS between services crossing trust zones, and bastion-host patterns cover this characteristic.
  • Implements boundary protection systems. Firewall appliances, demilitarized zones, intrusion detection and prevention, and endpoint detection protect external access points. The network security post covers the boundary controls that enforce CC6.6 alongside encryption in transit.

CC6.7: Restriction of Transmission, Movement, and Removal of Information

CC6.7 governs the controls that keep information from leaving in ways the organization has not authorized.

  • Restricts the ability to perform transmission. Data loss prevention processes and technologies restrict who can authorize and execute transmission, movement, or removal of information. The media movement register, the DLP controls on endpoints, and the restriction on who can export data from production systems implement this characteristic.
  • Uses encryption technologies or secure communication channels to protect data. Transmission of data and other communications beyond connectivity access points is protected. The same TLS and tunnel controls covered under CC6.6 produce the evidence for this CC6.7 Point of Focus as well. The overlap is deliberate; the two criteria reinforce each other.
  • Protects removable media. Encryption and physical asset protections are used for removable media such as USB drives and backup tapes. This is the tape backup encryption, the hardware-encrypted USB drive policy, and the offsite media handling procedure described in the physical media section above.
  • Protects endpoint devices. Processes and controls are in place to protect endpoint devices such as mobile devices, laptops, desktops, and sensors. Volume encryption on endpoints (FileVault, BitLocker, LUKS), endpoint detection and response, and device management policies cover this characteristic.

Explore further in Framework Explorer (CC6.1 · CC6.6 · CC6.7) to see the full requirement, implementation guidance, evidence types, and cross-framework mappings.

Source: AICPA TSP Section 100, 2017 Trust Services Criteria with Revised Points of Focus (2022). Point of Focus characteristics described in Truvo's words and mapped to an on-prem data protection implementation pattern. Consult the source document for the official AICPA text.

Frequently Asked Questions

What do SOC 2 CC6.1, CC6.6, and CC6.7 require for data protection on-prem?

CC6.1 governs the logical access architecture that protects information assets, including the use of encryption at rest and the protection of cryptographic keys across their lifecycle. CC6.6 governs the encryption and secure communication channels that protect data crossing system boundaries. CC6.7 governs the restriction, encryption, and handling of information as it is transmitted, moved, or removed, including explicit Points of Focus for removable media and endpoint devices. None of the three criteria prescribe specific tools. All three expect a documented program that matches how the environment actually stores, transmits, and retires data, and that produces continuous evidence of each step.

How do you handle encryption at rest on bare metal without a cloud KMS?

Volume encryption covers the platform layer through LUKS on Linux, BitLocker on Windows Server, and FileVault on macOS endpoints. Database-level encryption through SQL Server TDE, Oracle TDE, or PostgreSQL column-level encryption raises the bar inside the engine. Enterprise SAN and NAS appliances typically offer array-level encryption through self-encrypting drives. Key management is handled by HashiCorp Vault for most mid-market teams, backed by a hardware security module where the risk strategy requires it, or by a documented encrypted key store for smaller environments with limited key counts.

What key management options exist for on-prem environments without a cloud KMS?

HashiCorp Vault is the most common foundation, deployed in HA mode with integrated storage and auto-unseal. Vault handles secrets, encryption as a service, internal PKI, and dynamic database credentials. For environments that need hardware-backed key custody, YubiHSM covers smaller-scale needs and Thales or Entrust network-attached HSMs cover FIPS 140-2 Level 3 requirements. For smaller teams, an encrypted PKCS12 key store with strong passphrase protection, documented rotation, and access logging is a defensible option when the threat model supports it.

How are backup tapes and removable media handled under SOC 2 CC6.7?

Every piece of removable media that leaves the primary facility is encrypted before it leaves, using backup software encryption for tapes and operating system or hardware encryption for USB drives and external disks. A media movement register records every movement with the media identifier, date, destination, and responsible individual. The register is typically kept in the ticketing system or a dedicated log and retained through the SOC 2 observation period. Disposal follows NIST SP 800-88 Rev. 1 categories (Clear, Purge, or Destroy) chosen based on the data sensitivity tier and the destination of the media after sanitization, with a certificate of destruction retained as evidence.

How does Canadian data residency affect SOC 2 data protection for on-prem SaaS?

Canadian data residency is a hosting and architecture decision that sits inside the data protection domain. Provincial privacy regimes such as Quebec's Law 25 and contractual requirements from enterprise and public sector customers often require personal information to be stored on Canadian soil. On-prem and colocation are natural fits because the hosting location is a deliberate choice rather than a vendor configuration. Leaseweb Canada's Quebec facility and OVH Cloud's Canadian regions are two examples of providers with Canadian-jurisdictional infrastructure. The evidence that residency is enforced includes the hosting contract, the architecture diagram showing where data lives, the backup destination configuration, and the vendor risk file for each subservice organization that touches the data.

What does a defensible media disposal procedure look like for SOC 2?

The reference is NIST SP 800-88 Rev. 1, which defines three sanitization categories: Clear (logical overwrite), Purge (cryptographic erase or block erase commands), and Destroy (physical destruction). A one-page procedure names the category per data tier, the tool or service used for each category, the chain-of-custody documentation required, and the retention period for the resulting evidence. For Tier 1 data, Destroy through a certified vendor with a certificate of destruction is the common pattern. The certificate of destruction, the chain-of-custody record, and the linkage back to the retired asset in the inventory form the evidence package auditors sample.



About the Author

Former security architect for Bank of Canada and Payments Canada. 20+ years building compliance programs for critical infrastructure.
