ISO 27001 Evidence: Why Your Policies Say One Thing and Your Evidence Shows Another

Reviewed by Ali Aleali, CISSP, CCSP · Last reviewed May 12, 2026

Building an ISO 27001 ISMS is largely an exercise in documentation. You write policies, implement controls, collect evidence, and upload everything to a GRC platform. By the time the internal audit arrives, the evidence library looks complete.

Then the auditor runs through it and flags a set of gaps that have nothing to do with whether the controls are actually working. The policies are documented. The controls are in place. The evidence is uploaded. But the evidence doesn't match the policies, the policies reference tools you stopped using, or the approval fields are populated with role titles instead of names.

This is the most common class of ISO 27001 audit finding: not a control failure, but a documentation failure. The control is operating, but the paper trail doesn't confirm it.

This post explains where these gaps come from and how to close them before the external audit.

Want to see where your evidence library stands before the audit?

The ISO 27001 readiness scorecard maps your current state against the Annex A controls and flags the documentation gaps that come up most often.

Why the Gap Exists

The policy-evidence gap is structural. It has two sources.

The first is timing. Policies are typically written during the initial certification sprint, when the organization is implementing controls and building the ISMS from scratch. Evidence is collected to match what the policies describe. At that point, everything is aligned.

Then the organization evolves. A team migrates from one ticketing tool to another. A new cloud service is added to the stack. A key person leaves and the policy approval responsibility shifts informally to someone else. The processes keep working, but the documentation doesn't get updated to match. By the time the annual internal audit runs, there are six or twelve months of operational drift between what the policies say and what the evidence shows.

The second source is the way GRC platforms are typically used. Most organizations upload evidence for the initial certification audit, confirm the tests are passing, and then treat the platform as a compliance dashboard rather than a living evidence library. Evidence screenshots that were accurate in 2024 are still sitting in the platform in 2026, showing a tool that was retired, a process that changed, or a person who is no longer in the role.

Neither of these is a sign that the ISMS is broken. They're signs that ISMS governance, specifically management review under Clause 9.3 and the documented information requirements under Clause 7.5, needs to run more frequently against the evidence library, not just against the controls.

What This Looks Like in Practice

The wrong tool in the evidence

An access control policy states that access requests are submitted through the organization's primary ticketing system. At the time the policy was written, that was true. Since then, the team migrated to a different platform. The policy was not updated, and the only screenshots in the evidence library are from the old system.

When the auditor reviews the access control evidence for A.5.15 and A.5.16, they find a mismatch. The policy names one tool; the evidence shows another. The auditor cannot confirm that access requests are currently being processed through a system that meets the policy requirements, because the current system is not in the policy and the old system's screenshots don't reflect current practice.

The fix is straightforward: update the policy to name the current tool and upload fresh screenshots from the system that is actually being used. But the fix requires knowing that the gap exists, which is exactly what the internal audit is designed to surface.

The key principle

ISO 27001 audits confirm that what is documented is what is actually happening. If your policy names a tool or a process, the evidence needs to show that specific tool or process in operation. A control that is working correctly but documented inaccurately creates the same audit risk as a control that isn't working at all.

Approval fields without names

The information security policy has an approval section. The section reads "Approved by: policy owners" and lists four names as owners. None of the four is recorded in an approval field with a date. The policy was approved by consensus and published, but the document itself does not show who formally signed off, or when.

This is a finding under Clause 5.2 (the information security policy must be established by top management) and under A.5.1 (policies must be approved by management, published, and communicated). Auditors expect a named, accountable approver, not a committee. When ownership is distributed across four people with no recorded sign-off, accountability disappears.

The same pattern appears in incident response. The incident report template has "Approved by" and "Prepared by" fields. Reports are generated after incidents, root causes are analyzed, and corrective actions are documented. But the approval fields are left blank, or populated with a role title and no name.

Under A.5.24, A.5.26, and A.5.27, the auditor needs to confirm that incident reports go through a formal review and approval step. An empty approval field means they can't.

Version references that don't resolve

A third pattern: policy cross-references that don't align. One part of the evidence library references the Access Control Policy as V1.1. Another references V1.2. The Information Security Policy appears as V2.0 in one control narrative, V2.1 in another, and V2.3 in a third.

The organization may have a clear internal version history that makes this sensible. But from the auditor's perspective, inconsistent version references across the evidence library suggest that the documented information management process (ISO 27001:2022 Clause 7.5) is not being applied consistently. Which version is current? Which one governs the control being assessed?

This is resolved by maintaining a master document register that lists current versions, reviewing cross-references when a document is updated, and confirming version alignment before the external audit.
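The cross-reference check is mechanical enough to script. A minimal sketch, assuming a hypothetical master register (document name mapped to current version) and evidence narratives stored as plain text; the document names, versions, and narrative strings below are illustrative, not from any real register:

```python
import re

# Hypothetical master document register: document name -> current version.
REGISTER = {
    "Access Control Policy": "1.2",
    "Information Security Policy": "2.3",
}

# Hypothetical evidence narratives that cite policy versions.
narratives = [
    "Access reviews follow the Access Control Policy V1.1.",
    "Scope is defined in the Information Security Policy V2.3.",
]

def stale_references(narratives, register):
    """Return (narrative, document, cited, current) for out-of-date citations."""
    findings = []
    for text in narratives:
        for doc, current in register.items():
            match = re.search(rf"{re.escape(doc)} V(\d+\.\d+)", text)
            if match and match.group(1) != current:
                findings.append((text, doc, match.group(1), current))
    return findings

for _, doc, cited, current in stale_references(narratives, REGISTER):
    print(f"{doc}: cites V{cited}, register says V{current}")
```

Run before the external audit, a check like this turns version reconciliation from a manual read-through into a short list of exceptions to resolve.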

Why version drift matters

External auditors cross-reference version numbers across evidence artifacts. When a risk assessment cites policy V1.1 but the evidence library holds V1.2, the auditor has to stop and verify which version was actually in effect during the period under review. In a sample-based audit, that friction adds up quickly.

What GRC Platforms Can and Can't Do

GRC platforms handle the technical evidence problem well. When Vanta, Drata, or Secureframe integrates with AWS, GitHub, or Okta, the automated test results are current, accurate, and timestamped. The platform knows whether MFA is enforced, whether encryption is enabled, whether vulnerability scans are running. For these controls, the test is the evidence: if the test is automated, the evidence is automated, and it stays current without any manual intervention.

The platform cannot fix the policy-evidence gap. It cannot update a policy to reflect a new tool. It cannot populate an approval field. It cannot reconcile version references across uploaded documents. These require human action, and they require the same discipline the team applies to the technical controls.

Automated vs. manual evidence: a practical line

Technical controls generate evidence automatically through GRC platform integrations. Policy controls require manual evidence and manual review. Before every internal audit, run a dedicated pass through all manually uploaded evidence to confirm it reflects current tools, current versions, and current named approvers. Automated evidence will take care of itself; manual evidence won't.

For organizations using Vanta, Drata, or Secureframe, a common approach is to assign a quarterly evidence hygiene review alongside the access reviews and other periodic tasks already built into the platform. This surfaces documentation drift before the annual internal audit rather than during it.

How to Close the Gap Before the External Audit

The internal audit appendix of open items is the artifact that makes this tractable. A complete internal audit report includes a prioritized list of documentation gaps, version inconsistencies, and confirmation items that need client action. Treating this list as a pre-certification task board, with owners and target dates, is the most efficient path to a clean external audit.

The categories to work through:

PRE-CERTIFICATION EVIDENCE CHECKLIST

Tool and process references

For every policy that names a specific tool, ticketing system, or platform, confirm the named tool is current. If the organization has migrated to a different system, update the policy before uploading fresh evidence.

Approval fields

For every policy document and operational record (incident reports, risk assessments, management review minutes), confirm that approval fields are populated with a name and a date, not a role title alone.
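The name-and-date rule can also be checked programmatically against exported document metadata. A minimal sketch, assuming hypothetical field values and a hypothetical list of role titles; real GRC exports will use different field names:

```python
import re

# Hypothetical role titles that should never stand alone in an approval field.
ROLE_TITLES = {"CISO", "Security Lead", "Policy Owner", "IT Manager"}

def approval_complete(approved_by, approved_on):
    """Pass only when a named person and an ISO-format date are both recorded."""
    if not approved_by or approved_by.strip() in ROLE_TITLES:
        return False  # blank field, or a role title with no name attached
    if not approved_on:
        return False  # approver named, but no sign-off date
    return bool(re.fullmatch(r"\d{4}-\d{2}-\d{2}", approved_on.strip()))

print(approval_complete("CISO", "2026-01-15"))      # role title alone -> False
print(approval_complete("Jane Doe", None))          # no date -> False
print(approval_complete("Jane Doe", "2026-01-15"))  # named approver + date -> True
```

The same predicate works for policy documents, incident reports, and management review minutes, since the audit expectation is identical across all three.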

Version alignment

Pull a list of all current policy versions from the master document register and check that cross-references across the evidence library are consistent. Where they're not, either update the cross-references or add a note to the document register explaining the version history.

Evidence currency

Review the upload dates of manually uploaded screenshots and confirm that outdated evidence has been replaced. A screenshot from the previous certification cycle showing a retired tool or configuration should be refreshed.
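Upload-date triage is also easy to script. A minimal sketch, assuming a hypothetical list of (filename, upload timestamp) records and a one-year staleness threshold; both the records and the threshold are illustrative:

```python
from datetime import datetime, timedelta

# Hypothetical evidence records: (filename, upload timestamp).
evidence = [
    ("access-review-screenshot.png", datetime(2024, 3, 1)),
    ("mfa-config-screenshot.png", datetime(2026, 4, 20)),
]

def stale_evidence(records, now, max_age_days=365):
    """Flag manually uploaded artifacts older than the review threshold."""
    cutoff = now - timedelta(days=max_age_days)
    return [name for name, uploaded in records if uploaded < cutoff]

print(stale_evidence(evidence, now=datetime(2026, 5, 12)))
# The 2024 screenshot predates the cutoff and is flagged for refresh.
```

Age alone doesn't prove the evidence is wrong, but anything the script flags is worth opening before the auditor does.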

Risk treatment status

For any risk treatment item documented as in progress or incomplete, confirm that either the treatment has been completed and the status updated, or that a revised target date has been formally agreed and documented.

The Bigger Picture

The policy-evidence gap is a symptom of an ISMS that was built for a point-in-time certification rather than for operational use. The standard is explicit about this: Clause 10.1 of ISO 27001:2022 requires continual improvement of the ISMS, not just an annual confirmation that the certificate is still valid.

Organizations that treat the ISMS as a living governance system, updating policies when processes change, refreshing evidence when tools migrate, and running management reviews against real operational data, close this gap naturally. The internal audit surfaces a short list of minor items rather than a long list of documentation that has fallen behind operational reality.

The controls most organizations have in place are not the problem. The discipline around keeping the documentation current with those controls is where the work is.

For a structured view of where your ISMS stands today, the ISO 27001 readiness scorecard covers the full Annex A control set and surfaces the documentation gaps that come up most often before external certification.


Frequently Asked Questions

What types of evidence does an ISO 27001 auditor review?

ISO 27001 auditors review two broad categories: policy documents (information security policies, procedures, and documented processes) and operational evidence (screenshots, configuration records, access review records, training completions, incident reports, and audit logs). For organizations on a GRC platform, technical evidence is generated automatically through integrations with cloud infrastructure and SaaS tools. Policy evidence requires manual review and upload, which is where most documentation gaps are found.

How often should ISO 27001 evidence be updated in a GRC platform?

Automated evidence from GRC platform integrations (MFA configuration, encryption status, vulnerability scan results) is refreshed continuously. Manually uploaded evidence should be reviewed at least quarterly and updated whenever a relevant tool, process, or policy changes. The access review cycle, typically quarterly, is a natural trigger for checking whether the manually uploaded evidence still reflects current practice.

Does a policy-evidence mismatch result in a nonconformity during an ISO 27001 audit?

It depends on the scope of the mismatch. An isolated discrepancy, such as one tool name referenced incorrectly in one document, is typically raised as an observation or open item rather than a formal nonconformity. A pattern of mismatches across multiple controls, or a mismatch in a high-risk area such as access control or incident response, may be raised as a minor nonconformity. The internal audit is the right place to find and close these gaps before the external auditor sees them.

What is Clause 7.5 in ISO 27001 and why does it matter for evidence?

Clause 7.5 covers documented information: what documentation the ISMS requires, how it is created and updated, and how it is controlled. It has three sub-clauses covering what to include (7.5.1), how to create and update documents (7.5.2), and how to control them (7.5.3). In practice, this means maintaining a document register with current versions, applying a consistent review and approval process, and ensuring that evidence in the GRC platform reflects current versions. Many documentation gaps found during internal audits trace back to Clause 7.5 not being applied consistently.

Can you have a passing ISO 27001 certification with policy-evidence gaps?

Yes, if the gaps are minor and limited in scope. External certification auditors work from samples. A well-run internal audit that surfaces documentation gaps and closes them beforehand significantly reduces the risk of the external auditor finding them. Organizations that treat internal audits as a compliance exercise rather than a genuine evidence review are more likely to encounter friction at certification or surveillance.

What is the difference between automated and manual evidence in a GRC platform?

Automated evidence is generated by the platform's direct integrations with cloud services and SaaS tools. If the platform is connected to AWS, GitHub, or Okta, the test results for those controls are live and current without any manual action. Manual evidence is screenshots, documents, or records that a team member uploads. Automated evidence stays current automatically; manual evidence drifts if it isn't actively maintained. Most policy-evidence gaps involve manually uploaded evidence, not automated test results.



About the Author

Former security architect for Bank of Canada and Payments Canada. 20+ years building compliance programs for critical infrastructure.
