Inside a SOC 2 audit: what actually gets asked for
We sat in on three audits this quarter. Here is the list of evidence that came up every time, and the one that almost never did.
This quarter we observed three SOC 2 Type II audits at UK firms: a forty-person fintech in Leeds, a ninety-person SaaS company in Bristol, and a sixty-person managed-services provider outside Edinburgh. All three were in their second or third audit cycle, which meant the initial panic of first-time readiness was behind them and the steady-state reality of evidence gathering was what we were watching. What they requested looked remarkably similar across all three engagements, and what they did not request was identical in all three cases.
These are observations from sitting in the room, not from audit methodology guidance. The pattern we describe is empirical. It may not hold for every auditor or every scope.
01 · The reliable seven: what they always ask for
Across three firms and three separate audit teams, the following seven categories of evidence came up in every engagement. Not occasionally, not merely in some: every time.
Access reviews. Auditors want to see that someone reviewed who has access to what, on a periodic basis. Quarterly is the common expectation. The review does not need to be complicated, but it does need to be documented, dated, and signed off by someone with authority. A spreadsheet with a timestamp and an approver’s name is sufficient. An email thread is not.
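What "documented, dated, and signed off" means in practice can be as little as a structured record. A minimal sketch in Python (the field names and the `is_audit_ready` check are our illustration, not a SOC 2 requirement):

```python
from dataclasses import dataclass
from datetime import date

# Illustrative structure for a quarterly access review record.
# Field names are assumptions; any consistent format works.
@dataclass
class AccessReviewEntry:
    user: str
    system: str
    access_level: str
    decision: str  # "retain" or "revoke"

@dataclass
class AccessReview:
    review_date: date
    approver: str        # a named individual with authority
    entries: list

    def is_audit_ready(self) -> bool:
        # The auditors we observed looked for three things:
        # a date, a named approver, and per-user decisions.
        return bool(self.review_date and self.approver and self.entries)

review = AccessReview(
    review_date=date(2024, 2, 15),
    approver="Head of Engineering",
    entries=[AccessReviewEntry("a.khan", "prod-db", "read-only", "retain")],
)
print(review.is_audit_ready())  # True
```

The point is not the tooling; a spreadsheet with the same three properties passes just as well.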
Change management records. Any change to production infrastructure or code should have a trail: who proposed it, who approved it, when it went live, and what the rollback plan was. This does not require a sophisticated ticketing system. It requires that the trail exists and that it is consistent. Gaps are noticed.
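Because gaps are what get noticed, the useful automation is a completeness check rather than a workflow tool. A hedged sketch, assuming each change is captured as a record with the four elements above (the field names are hypothetical):

```python
# The four elements of the trail described above; names are our
# assumption, not a prescribed schema.
REQUIRED_FIELDS = ("proposed_by", "approved_by", "deployed_at", "rollback_plan")

def trail_gaps(change_record: dict) -> list:
    # Return the missing or empty fields; an empty list means
    # the trail is complete for this change.
    return [f for f in REQUIRED_FIELDS if not change_record.get(f)]

change = {
    "proposed_by": "m.jones",
    "approved_by": "t.owusu",
    "deployed_at": "2024-03-04T10:12:00Z",
    "rollback_plan": "",  # gap: rollback plan never filled in
}
print(trail_gaps(change))  # ['rollback_plan']
```

Run against every change in the period, this surfaces the inconsistencies before the auditor does.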
Incident response drills. Evidence that the organisation has tested its incident response process, not merely written it. A dated record of a tabletop exercise, with participants listed and findings noted, satisfies this in most cases. The finding that the drill produced matters less than the evidence that it happened.
Vendor review. A record of reviewing third-party vendors against defined criteria, at least annually. Auditors look for a list of in-scope vendors, a date of last review, and a named owner. The criteria used matter less than the consistency of applying them.
Encryption-at-rest configuration. Evidence that data at rest is encrypted, typically from cloud console screenshots, configuration exports, or infrastructure-as-code. This is usually the easiest item to produce; the difficulty is in remembering to capture and store the evidence at the time, rather than reconstructing it weeks later under audit pressure.
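Capturing the evidence at the time, rather than reconstructing it later, is easy to script. A sketch under stated assumptions: suppose you can export storage configuration to JSON (the export structure below, with `id` and `encrypted` fields, is hypothetical); a small script can flag unencrypted resources and stamp the export with its capture date:

```python
import json
from datetime import date

def unencrypted_resources(config_export: list) -> list:
    # config_export is a hypothetical parsed export, e.g. from a
    # cloud CLI or infrastructure-as-code state; each item is
    # assumed to carry an "id" and an "encrypted" flag.
    return [r["id"] for r in config_export if not r.get("encrypted", False)]

def evidence_record(config_export: list) -> str:
    # Stamp the evidence at capture time, so it never has to be
    # reconstructed weeks later under audit pressure.
    return json.dumps({
        "captured_on": date.today().isoformat(),
        "resources_checked": len(config_export),
        "unencrypted": unencrypted_resources(config_export),
    })

export = [
    {"id": "vol-001", "encrypted": True},
    {"id": "vol-002", "encrypted": False},
]
print(unencrypted_resources(export))  # ['vol-002']
```

Scheduled quarterly and written to durable storage, the output of `evidence_record` is the artefact the auditor asks for.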
Onboarding and offboarding checklists. Completed checklists for a sample of joiners and leavers during the period. Auditors want to confirm that access provisioning and deprovisioning happen in a controlled, documented way. Undated checklists, or checklists signed off weeks after the event, attract questions.
Backup restoration tests. Evidence that backups are not merely taken but restored. A log showing a restoration test, with date, system, and outcome, is the standard expectation. This is one of the items most commonly missing in first-cycle audits. By the third cycle, it is usually in place.
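A restoration-test log of the kind described needs only a date, a system, and an outcome. A minimal sketch (the CSV layout and column names are our assumption, not a prescribed format):

```python
import csv
import io
from datetime import date

# Append-only restoration-test log; one row per test.
FIELDS = ["test_date", "system", "backup_taken", "restore_outcome", "tested_by"]

def log_restore_test(log, system: str, backup_taken: str,
                     outcome: str, tested_by: str) -> None:
    # Record the test the day it happens; the date is the evidence.
    csv.writer(log).writerow(
        [date.today().isoformat(), system, backup_taken, outcome, tested_by]
    )

log = io.StringIO()
csv.writer(log).writerow(FIELDS)  # header row
log_restore_test(log, "orders-db", "2024-02-01", "success", "j.smith")
print(log.getvalue())
```

In practice the log would live in a shared, append-only location; a StringIO stands in here so the sketch is self-contained.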
02 · The quiet gap: what they almost never ask about
In all three audits we observed, across three different audit firms, nobody asked for internal policy change logs. Nobody asked whether the information security policy had ever been revised since it was written, or who approved the revision, or what changed. Auditors trusted that the policy existed. They read it. They compared it against the controls in scope. What they did not do was ask for the version history, the approval trail from when it was last updated, or the evidence that someone had reviewed it in the past twelve months.
This is the quiet gap. It is not that policy change history is unimportant; it is that, in our observation, it is not a standard audit request. The consequence is that many firms hold policies they regard as current which were signed off years ago by people who have since left, and last meaningfully reviewed when a different version of the framework was in effect, and nobody has noticed because nobody has asked.
· · ·
The practical takeaway from three audits in a single quarter is that the evidence trail matters as much as the control itself. Auditors are not assessing whether your processes are good; they are assessing whether you can demonstrate, with documentary evidence, that your processes operated as described during the period under review. A control that exists but cannot be evidenced is, for audit purposes, the same as a control that does not exist.
The corollary is that the moment to build the evidence trail is when you build the control, not in the weeks before the audit. A quarterly access review completed in February is useful. A quarterly access review completed in January with a back-dated February timestamp is a problem. The infrastructure for capturing evidence is not a compliance overhead; it is the actual product of running a controlled operation. Build the evidence trail in parallel with the control, from the day it goes live, and by the time an auditor asks for it, it will simply be there.