Fraud Detection · 10 min read

Your Vendor's SOC 2 Report Might Be a Template: What the Delve Scandal Reveals About B2B Trust

The Delve scandal exposed 1,500 tech companies holding fabricated SOC 2 certifications — and the enterprise buyers who trusted them. Forensic document verification catches what procurement checklists miss.

Tags: SOC 2 fraud, fake compliance report, vendor due diligence, third-party risk management, Delve scandal, compliance document verification, SaaS vendor security, supply chain risk

In early 2026, investigators published findings from a data leak that exposed a Y Combinator-backed compliance startup called Delve. The company had raised $32 million, served over 1,500 fast-growing tech companies, and promised AI-native compliance in weeks rather than months. What it actually delivered was a generation script: 494 SOC 2 reports sharing 99.8% identical boilerplate text, auditor conclusions written before clients submitted a single piece of evidence, and test values like "sdf" and "dlkjf" passed directly into final certification documents.

The scandal is a window into something the tech industry has quietly known for years but rarely acts on: compliance documents are trusted, not verified. When a SaaS vendor sends your security team a 40-page SOC 2 Type II report, someone scans the cover page, checks the date, and files it in a SharePoint folder. Nobody runs forensics on the PDF. Nobody checks whether the auditor's conclusion was written before or after the audit. Nobody compares the system description against 492 other companies' reports to see if they're word-for-word identical. The Delve scandal is what happens when the entire B2B supply chain operates on that assumption.

1,500+: tech companies held Delve-generated compliance docs (source: DeepDelver investigation)
99.8%: of 494 SOC 2 reports shared identical boilerplate — only the logo and company name changed
$32M: raised by Delve before the fraud was publicly exposed
[Image: side-by-side SOC 2 reports from two unrelated companies showing identical system description text, flagged by template matching analysis]
Two Delve-generated SOC 2 reports from unrelated companies. The system description section is word-for-word identical — including the same grammatical error. Template matching catches this; manual review almost never does.

The Two Sides of the Problem

Most document fraud discussions focus on a single direction: someone fabricates a document and submits it. The Delve scandal is more interesting because it damaged two distinct groups of tech companies in fundamentally different ways.

Companies that used Delve to get certified won enterprise deals on the strength of compliance attestations that didn't reflect actual security controls. One client described passing a security review for a major enterprise customer, closing the deal, and later discovering that their SOC 2 report claimed penetration testing had been conducted when it hadn't. Their HIPAA audit passed despite unencrypted laptops. They now face potential contract breaches with every enterprise customer who relied on those certifications, regulatory exposure under GDPR (fines up to 4% of global revenue) and HIPAA (criminal liability under certain conditions), and the reputational cost of disclosing to customers that their security posture was misrepresented.

Companies that onboarded Delve clients as vendors granted system access, data sharing agreements, and integration permissions based on security attestations that certified controls that weren't actually in place. Their own security auditors may now flag those vendor approvals as invalid. Any breach traced to a Delve-certified vendor will face awkward questions about what due diligence was actually performed.

Neither group is the fraud perpetrator. Both are harmed by the same structural failure: compliance documents were accepted at face value without any forensic verification of the document itself.

What a Fake Compliance Report Actually Looks Like

The Delve investigation is unusually specific about the mechanics of how the fake reports were constructed. This is valuable — not because Delve is uniquely dangerous, but because the patterns it used are the same patterns any template-based compliance system produces.

Pre-written auditor conclusions. AICPA standards require that an independent auditor form their opinion after examining evidence. The Delve generation script produced the auditor's conclusion section before clients submitted company descriptions, network diagrams, or any supporting evidence. In the leaked spreadsheet, these conclusions appeared as pre-populated template text — the auditor's voice, written by a generation script.

Statistically impossible uniformity. All 259 Type II reports in the dataset claimed zero security incidents during the audit period. All 259 claimed zero personnel changes. All 259 claimed zero customer terminations. These claims covered unrelated companies across a six-month period — startups, mid-size SaaS businesses, and companies processing healthcare data. The probability of every single one genuinely experiencing no incidents, no staff changes, and no customer churn is effectively zero. This is what industrial-scale template generation looks like in the data.
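The arithmetic behind "effectively zero" can be sketched directly. The 70% per-company clean rate below is an illustrative assumption, not a figure from the investigation; the conclusion holds for any plausible value:

```python
# Back-of-envelope check: how likely is it that 259 unrelated companies
# ALL genuinely report zero security incidents over the same six-month
# window? Assume (generously, and purely for illustration) that any one
# company has a 70% chance of a truly incident-free half year.

def prob_all_clean(n_companies: int, p_clean: float) -> float:
    """Probability that every company independently reports zero incidents."""
    return p_clean ** n_companies

p = prob_all_clean(259, 0.70)
print(f"P(all 259 clean) = {p:.2e}")  # on the order of 1e-41
```

Even at 95% per company, the joint probability across 259 firms is well under one in a million. Uniform zeros across the dataset are a signature of template generation, not of 259 spotless audit periods.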

Identical system descriptions across unrelated entities. The sentence "An Endpoint Security Solution is installed with the feature of scanning the device automatically" appeared in 493 of 494 SOC 2 reports. A grammatically incorrect phrase — "has developed an organization-wide Information Security Policies" — appeared in 492. These reports described AWS, GCP, Azure, and Vercel customers using identical infrastructure overviews despite fundamentally different architectures. The system description section of a SOC 2 report is supposed to uniquely describe that company's environment.

Keyboard-mashed test values in final documents. The generation spreadsheet contained test entries — "sdf", "dlkjf", "g" — that were passed through the automation script and appeared in final reports delivered to clients. These aren't draft artifacts. They're production outputs.

PDF metadata revealing automated generation. The reports were generated using Google Apps Script. Documents produced this way carry tool signatures in their PDF metadata — the same way a Word document reveals which version of Office created it, or a phone photo reveals which camera app. The metadata for these reports didn't show a CPA firm's document authoring tool. It showed a generation script.
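A minimal sketch of the metadata check, using only the standard library. The naive regex below only handles uncompressed, literal-string Info dictionaries (real tooling such as pypdf or exiftool handles the general case), and the tool names in the watchlist are illustrative examples, but it shows the signal: a report supposedly authored by a CPA firm whose Producer field names a script generator is a red flag.

```python
import re

# Illustrative watchlist of generation tools inconsistent with
# professional audit authoring environments.
SUSPICIOUS_PRODUCERS = ("Google Apps Script", "wkhtmltopdf", "Apache FOP")

def extract_pdf_metadata(raw: bytes) -> dict:
    """Pull /Creator and /Producer literal strings from raw PDF bytes."""
    meta = {}
    for key in (b"Creator", b"Producer"):
        m = re.search(rb"/" + key + rb"\s*\(([^)]*)\)", raw)
        if m:
            meta[key.decode()] = m.group(1).decode("latin-1", "replace")
    return meta

def flag_generation_tool(meta: dict) -> bool:
    producer = meta.get("Producer", "")
    return any(tool.lower() in producer.lower() for tool in SUSPICIOUS_PRODUCERS)

# Synthetic Info-dictionary fragment standing in for a real PDF:
fake = b"<< /Creator (SOC2 Generator) /Producer (Google Apps Script) >>"
meta = extract_pdf_metadata(fake)
print(meta, flag_generation_tool(meta))  # flags True
```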

The problem isn't specific to Delve. Any GRC platform that generates SOC 2 report structure programmatically — and hands it to a rubber-stamp auditor to sign — produces the same forensic signals. The Delve investigation exposed one company. The underlying model is industry-wide.

The Forensic Signals That Reveal a Template Report

When a vendor submits a compliance document during procurement, there are seven forensic signals that separate a genuine audit from a generated template — none of which require reading the document's claims.

Template fingerprinting. Genuine SOC 2 reports from real CPA firms follow that firm's house format: specific heading hierarchy, characteristic phrasing in transition language, consistent footnote structure. A template shared across hundreds of clients deviates from any single firm's authentic format. Structural comparison against known-genuine reports from the claimed auditor surfaces this.

PDF metadata and generation tool signatures. Every PDF carries creator metadata. A report issued by a licensed CPA firm is typically authored in a professional document environment (Adobe Acrobat Pro, LaTeX, firm-specific document systems). A report generated programmatically carries the generator's signature. Delve's reports showed Google Apps Script. Others show online PDF generators, mail-merge systems, or automation tools inconsistent with professional audit practice.

Font metric analysis. Auto-generated text inserted into template placeholders frequently shows font metric deviations — subtle differences in character spacing, kerning, or glyph metrics between the template text and the filled-in values. This is the same signal used to detect tampered invoices and edited bank statements: the fill-in doesn't perfectly match the surrounding typographic environment.

Audit period date arithmetic. SOC 2 Type II reports cover a defined period — typically six or twelve months. The document's creation date, the auditor's sign-off date, and the audit period end date have a required logical relationship: the report can't be signed before the audit period ends. Reports with creation dates predating the audit period close — or auditor conclusions dated before the engagement's evidence collection phase — fail this check.
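The date-arithmetic check is simple enough to sketch in full. The field names here are illustrative placeholders, not a real report schema:

```python
from datetime import date

def check_date_arithmetic(period_start: date, period_end: date,
                          signoff: date, pdf_created: date) -> list[str]:
    """Return findings where the report's dates violate audit logic.

    A SOC 2 Type II report cannot be signed, or even created,
    before the audit period it claims to cover has ended.
    """
    findings = []
    if period_end <= period_start:
        findings.append("audit period ends before it starts")
    if signoff < period_end:
        findings.append("auditor sign-off predates audit period end")
    if pdf_created < period_end:
        findings.append("document created before audit period closed")
    return findings

# A report "covering" Jan-Jun but signed and generated in March
# fails two checks:
findings = check_date_arithmetic(
    period_start=date(2025, 1, 1),
    period_end=date(2025, 6, 30),
    signoff=date(2025, 3, 15),
    pdf_created=date(2025, 3, 15),
)
print(findings)
```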

Impossible uniformity across claimed entities. A single report can't be checked for uniformity. But vendors sometimes submit reports from the same audit firm, or a security team reviews multiple vendors who used the same compliance platform. Cross-document comparison of system descriptions, test results, and conclusion language surfaces template reuse that no individual review would catch.
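Cross-document comparison can be sketched with stdlib difflib; a production system would use shingling or MinHash at scale, but a pairwise similarity ratio is enough to illustrate the signal. Unrelated companies' system descriptions should not be near-identical:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Rough 0..1 similarity between two system-description texts."""
    return SequenceMatcher(None, a, b).ratio()

# Two "system descriptions" built from the same template, with only
# the company name swapped (company names here are invented):
desc_a = ("An Endpoint Security Solution is installed with the feature "
          "of scanning the device automatically. Acme Corp has developed "
          "an organization-wide Information Security Policies.")
desc_b = ("An Endpoint Security Solution is installed with the feature "
          "of scanning the device automatically. Blue Widgets Inc has "
          "developed an organization-wide Information Security Policies.")

score = similarity(desc_a, desc_b)
print(f"similarity: {score:.3f}")  # near 1.0 -> same template, new name
```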

Entity and registration consistency. The company name, registered business details, and described infrastructure in a genuine SOC 2 report should be verifiable against public registration records and consistent with what the vendor actually operates. A report describing AWS infrastructure for a company that demonstrably runs on Azure is an inconsistency. A report listing an auditor whose registration number doesn't match public CPA databases is a red flag.

Structural template matching. Known-genuine reports from major audit firms have characteristic structural properties. A report claiming to be from Firm X but deviating from Firm X's documented report structure — in heading hierarchy, section ordering, disclaimer language — is a template substitution, not an authentic report.

TamperCheck analyses all of these signals in a single API call and returns a structured verdict — clear, suspicious, or likely tampered — with specific findings listed. The analysis treats the document as a forensic object, not a set of claims to be read and believed.

Where This Fits in the Vendor Procurement Flow

Most vendor security reviews follow a version of this sequence: vendor completes a security questionnaire, vendor submits supporting documentation (SOC 2 report, pen test summary, certifications), security team reviews, legal reviews the DPA and BAA, procurement approves. The documentation review step typically involves a human reading the cover page and filing the document.

Inserting a forensic document check into the submission step takes three seconds per document and changes the nature of what the review actually catches:

Vendor submits SOC 2 / ISO cert / pen test report
          ↓
TamperCheck forensic check (~3s per document)
          ↓
Clear → Route to standard security review
Suspicious → Request supporting evidence: specific control documentation,
             contact auditor directly, request evidence samples
Likely Tampered → Reject submission, escalate to security / legal team

The output of the forensic check isn't a replacement for the human review — it's a triage layer that tells the reviewer where to focus. A clear verdict means the document is worth reading carefully. A suspicious verdict means specific findings need follow-up before the document is relied upon. A likely-tampered verdict means the vendor has submitted a fabricated document and the procurement process should stop.
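The triage routing above can be sketched as a small dispatch. The verdict strings come from this post; in a real integration the uploaded document would first be POSTed to the TamperCheck API, whose actual endpoint and response schema are not shown here, so treat the shape below as a hypothetical:

```python
from enum import Enum

class Route(Enum):
    STANDARD_REVIEW = "route to standard security review"
    REQUEST_EVIDENCE = "request supporting evidence / contact auditor"
    REJECT_ESCALATE = "reject submission, escalate to security/legal"

def route_verdict(verdict: str) -> Route:
    """Map a forensic verdict onto the next procurement step."""
    return {
        "clear": Route.STANDARD_REVIEW,
        "suspicious": Route.REQUEST_EVIDENCE,
        "likely_tampered": Route.REJECT_ESCALATE,
    }[verdict]

print(route_verdict("suspicious").value)
```

The point of encoding this as a hard gate rather than a guideline is that a likely-tampered verdict stops the process before a human reviewer has a chance to rationalise the document into the pipeline.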

The economics are straightforward: at $0.50 per document, checking every compliance document in a vendor review costs less than five minutes of a security engineer's time. The downside of not checking is what the Delve scandal demonstrates — access granted to vendors whose security controls don't exist.

The Full Stack of Documents That Need Checking

SOC 2 reports are the most visible compliance document in SaaS vendor reviews, but they're not the only one subject to fabrication. Tech companies typically collect a stack of compliance-related documents during procurement, each with its own fraud surface.

ISO 27001 certificates are issued by accreditation bodies and carry certificate numbers, issuing body details, and defined certification scopes. Fabricated certificates reuse real accreditation body logos with altered certificate numbers and non-existent auditor identities. The issuing body, certificate number, and scope should all be verifiable against the accreditation body's public register.

Penetration test reports are frequently submitted as evidence of security practice. A genuine pen test report from a real firm includes scoping documents, methodology descriptions, authenticated finding evidence, and firm letterhead consistent with the firm's known output. Template pen test reports — generated to satisfy a checkbox without an actual test being conducted — show the same structural uniformity and metadata signals as fake SOC 2 reports.

HIPAA Business Associate Agreements and other legal attestations are documents with defined structural requirements. Signature analysis, date consistency, and structural verification apply.

Vendor security questionnaire evidence attachments — screenshots, configuration exports, policy documents attached to justify questionnaire answers — are increasingly AI-generated or pulled from templates. ELA and metadata analysis apply to image attachments; structural analysis applies to policy PDFs.

The AI Document Problem Makes This Harder, Not Impossible

The Delve-style attack used a generation script and rubber-stamp auditors. The next generation of this attack uses large language models to produce narrative text that reads as authentic — specific enough to avoid the obvious uniformity that caught Delve, but still fabricated.

A human reviewer scanning a 40-page SOC 2 report has no reliable way to distinguish an LLM-synthesised system description from a genuine one. The text is fluent, specific, and doesn't repeat. The company name appears in the right places. The controls described sound reasonable.

But the document-level forensic signals survive AI generation. PDF metadata still shows the generation tool. Font metrics still show fill-in deviations where template and generated text meet. Date arithmetic still fails if the conclusion predates the audit period. Structural template matching still catches report formats inconsistent with the claimed auditor's known output. The text layer gets harder to detect; the document layer doesn't.

This is why forensic analysis operates on the document as an object — not on the claims it makes.

Verify vendor compliance documents in seconds

Upload any SOC 2 report, ISO certificate, pen test summary, or security attestation and get a forensic verdict before you onboard the vendor. Free to start.

Start free

FAQ

Can TamperCheck verify whether a SOC 2 report was issued by a real CPA firm?

TamperCheck verifies the document's forensic properties — metadata, structural characteristics, font metrics, date arithmetic, and entity consistency — against signals associated with authentic professional audit output. It doesn't directly query CPA firm registries, but the combination of structural template matching and metadata analysis surfaces reports that don't match the claimed firm's known document characteristics. For high-stakes vendor decisions, direct auditor verification (calling the firm listed on the report) remains the gold standard; forensic analysis is the triage layer that tells you when that call is necessary.

What if the vendor's report was legitimately generated using a compliance platform?

Compliance platforms that assist with evidence collection and report structuring don't inherently produce fake reports — the forensic signals differ. The Delve problem wasn't that they used software to assist with compliance; it was that the software wrote the auditor's conclusions before evidence existed and produced identical outputs across unrelated entities. A legitimate compliance-assisted audit still requires a licensed auditor to examine actual evidence, form an independent opinion, and produce a report reflecting that company's specific controls. The forensic signals for that process are different from automated template generation.

Does this work for ISO 27001 certificates, HIPAA attestations, and pen test reports?

Yes. TamperCheck analyses any document type — SOC 2 reports, ISO 27001 certificates, HIPAA attestations, penetration test reports, and policy documents. The forensic signals (metadata, structural analysis, font metrics, date arithmetic, entity consistency) apply across document types. The specific checks weight differently by document type — a pen test report has different structural expectations than an ISO certificate — but the underlying analysis approach is the same.

How does this integrate into an existing vendor risk management workflow?

TamperCheck provides a REST API and webhook support. The typical integration is at the document submission step of the vendor security review: when a vendor uploads their compliance documents (via email attachment, a vendor portal, or a security questionnaire platform), those documents are passed to the TamperCheck API before they enter the human review queue. The forensic verdict and findings come back in under 10 seconds and can be attached to the vendor record in your GRC platform or TPRM system. For teams that aren't ready to integrate via API, documents can be analysed manually through the dashboard at any point in the review process. See the Document Verification API developer guide for integration specifics.

See it in action

TamperCheck verifies documents in under 3 seconds — $5 in free credits, no contract.