
Cyber Insurance SIEM Requirements: What Carriers Actually Ask For, and Why Most MSPs Cannot Produce It

Zbigniew Gajuk 2026-04-23 11 min read

The audit that arrives without warning

A cyber insurance carrier sends your MSP a 20-page questionnaire. They want SIEM log coverage, MFA deployment evidence, backup test results, patching cadence, EDR deployment inventory, and detection rule documentation. They want the answers in 48 hours. The request is not a renewal. It is a mid-term audit triggered by a named loss event in the portfolio, a ransomware cluster in your region, or a routine revalidation the underwriter now performs after every claim-heavy quarter.

You have 50 clients. Each one runs a slightly different security stack. Across the portfolio, there are 76 tools. Some clients run Microsoft 365 with Entra MFA, some run on-prem AD with a separate MFA appliance, some run Okta, some have MFA enabled on the VPN but not on the jump host. SIEM logs exist for most clients. They are scattered across Splunk Cloud instances, Sentinel workspaces, and a QRadar deployment that nobody wants to touch. Detection rules were written by three different engineers across five years. Nobody knows where the evidence of a specific MFA event from nine months ago actually lives.

The 48-hour clock is running. The carrier wants proof, not promises.

This scenario is no longer hypothetical. 2025 was the most expensive cyber loss year on record. Premiums are up. Carriers have responded by tightening evidence requirements to a degree nobody imagined when most MSPs built their current stacks. The applications themselves now run 20+ pages (Axcient/ConnectWise), and carriers specifically ask about SIEM and centralized logging as factors in approval, premium, and coverage limits. 51 percent of businesses are required to have MFA just to qualify, and missing MFA is the number one reason claims are denied (industry data via Cinch I.T.).

What the 20-page questionnaire actually asks for

The questionnaire reads as a general compliance survey, but the data it demands is very specific. Almost every question maps to a technical control that must be evidenced with a log, a configuration export, or a detection rule. In practice, every carrier asks for some subset of the following:

Centralized logging coverage. Which log sources feed the SIEM. How many days of retention. Which categories are covered (identity, endpoint, network, cloud). Whether logs are immutable or mutable. What the gap policy is when a source stops forwarding.

Multi-factor authentication deployment. MFA enforcement rate across privileged accounts. MFA enforcement on email, VPN, remote desktop, and administrative console access. Whether MFA bypasses exist for service accounts. Evidence of the last time an MFA event was logged for each user.

Endpoint detection and response. EDR deployment rate across every endpoint class (server, laptop, mobile, virtual). The specific EDR vendor. Retention of EDR telemetry. Detection rules active.

Backup and disaster recovery. Backup frequency, offsite replication, immutability, and test cadence. Evidence that the last restore actually worked.

Patching and vulnerability management. Average time to patch critical vulnerabilities. Scanning frequency. Exception log. Remediation SLA.

Detection content. Number of active detection rules. Coverage against MITRE ATT&CK. False positive rate. Last tuning cycle.

Incident response. Documented playbooks. SOAR tooling. Tabletop exercise cadence. Notification policies.

Every item in that list is a log question wearing a policy costume. The carrier does not care about the policy document. The carrier cares whether the log can be produced on demand, in context, across every client in the portfolio.

Why most MSPs cannot produce the evidence

The structural problem is not the number of questions. It is that the evidence lives in 76 different tools, each emitting data in a different format, with different retention windows, different query languages, and different ownership. Cisco's 2025 Global State of Security Report found that 78 percent of security leaders say their tools are dispersed and disconnected, and 59 percent cite tool maintenance as their primary source of inefficiency. For an MSP scrambling through an audit, that dispersion translates directly into unbillable hours and missed deadlines.

Consider the MFA question alone. Proving that MFA is enforced on every privileged account across 50 clients requires pulling data from Entra ID sign-in logs, Okta system logs, on-prem Active Directory event logs, VPN authentication logs, jump host audit logs, and the MFA appliance's own logs. Each source has a different field name for "user identity." Each has a different timestamp format. Some retain 90 days, some 30, one retains 7. The actual evidence exists, but producing it in a single report within 48 hours requires writing ad-hoc extraction scripts and reconciling schemas by hand. For a single client it is painful. For 50 clients it is impossible.
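To make the reconciliation concrete, here is a rough sketch of how the same "who authenticated, and when" fact is spelled across four common sources. The field names are representative of typical exports rather than authoritative, and they shift with format and product version, which is exactly why the reconciliation ends up being manual.

```python
# Representative (not authoritative) field names for "user identity" and
# "timestamp" across common identity sources; exact names vary by export
# format and product version.
IDENTITY_FIELD_MAP = {
    "entra_signin":   {"user": "userPrincipalName", "time": "createdDateTime"},  # ISO 8601, UTC
    "okta_systemlog": {"user": "actor.alternateId", "time": "published"},        # ISO 8601, UTC
    "ad_security":    {"user": "TargetUserName",    "time": "TimeCreated"},      # event 4624, often local time
    "vpn_syslog":     {"user": "remote_user",       "time": "syslog header"},    # frequently no year, local time
}
```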

The same pattern repeats for every question on the 20-page form. SIEM coverage evidence requires inventorying source feeds across every client's SIEM. EDR deployment requires reconciling the EDR console against a client-supplied asset inventory, which is usually stale. Backup evidence requires pulling success/failure records from three different backup vendors and cross-referencing with the immutability settings. In each case, the data exists. The architecture to query it does not.

The MFA gap that triggers the claim denial

Missing MFA is the number one reason cyber insurance claims are denied. The reason is not that MSPs fail to deploy MFA. Most MSPs enforce MFA by default on new clients. The reason is that MFA deployment is heterogeneous by account type, and the gaps live in places where nobody is actively looking.

A typical MFA gap profile looks like this. Regular user accounts are covered by whatever identity provider the client runs. Administrative accounts are usually covered, because the MSP knows they are high risk. Service accounts are often excluded from MFA enforcement because they run automated jobs that cannot complete an MFA challenge. Break-glass accounts are exempt by design. Accounts created for third-party integrations (a backup vendor, a monitoring tool, a contractor) often inherit their access from a distribution group or onboarding rule and are not always covered.

When a carrier investigates a claim, they do not ask whether MFA was enforced for most users. They ask whether it was enforced on the account that was compromised. If the compromised account was a service account, a contractor, or an inherited integration account that was missed in the MFA rollout, the claim is denied. The MSP had MFA. The policy said so. The evidence says otherwise.

The fix is not more policy. The fix is continuous evidence production from log data. A pipeline layer that normalizes identity events from every source into a common schema can answer the question "which accounts authenticated without MFA in the last 90 days" in a single query, across every client, every identity provider, and every privileged scope. That query becomes a scheduled report. The gaps surface before the claim event, not during it.
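As a sketch of what that single query can look like once identity events are normalized and parked in an open format: the example below assumes the cold tier holds Parquet files with fields like tenant, user_name, account_type, mfa_completed, and event_time. The path and field names are illustrative, not a fixed schema.

```python
import duckdb

# Scheduled report: accounts that authenticated without MFA in the last 90 days,
# across every tenant. Assumes normalized identity events stored as Parquet in a
# cold tier; the path and field names are illustrative.
con = duckdb.connect()
gaps = con.sql("""
    SELECT tenant, user_name, account_type,
           count(*)        AS non_mfa_logins,
           max(event_time) AS last_seen
    FROM read_parquet('cold-tier/identity/*/*.parquet')
    WHERE event_time >= now() - INTERVAL 90 DAY
      AND outcome = 'success'
      AND mfa_completed = FALSE
    GROUP BY tenant, user_name, account_type
    ORDER BY tenant, non_mfa_logins DESC
""").df()

print(gaps.to_string(index=False))
```

The same query filtered to privileged or service accounts is the carrier-facing version of the report.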

What insurance-grade telemetry actually looks like

The baseline for insurance-grade telemetry is not a SIEM. It is an architecture that treats evidence production as a first-class capability, with three properties:

Unified schema across sources. Every identity event from every provider lands with the same field names for user, timestamp, auth method, MFA state, source IP, device, and outcome. Every endpoint event from every EDR lands with the same field names for process, user, host, and action. Cross-source correlation becomes a single query instead of a joining exercise.

Separation of hot and cold tiers. The expensive SIEM tier receives only events the SOC actively uses for detection. A parallel cold tier receives the full-fidelity original of every event in an open, queryable format on object storage. The cold tier has unlimited retention at a small fraction of the SIEM cost, and the carrier's audit query runs against the cold tier without touching SIEM ingest or budget.

Replay on demand. When the carrier asks for a specific event from 14 months ago, the pipeline re-streams the relevant events from cold storage into the SIEM for a quick cross-check, or queries them in place for the investigation. No rehydration tickets. No three-day delays. No "we have it somewhere" excuses.
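A minimal sketch of the replay path, assuming the SIEM is Splunk with an HTTP Event Collector token; the endpoint, token, tenant, and field names below are placeholders.

```python
import duckdb
import requests

# Re-stream a narrow slice of cold-tier events into the SIEM for a cross-check.
# Endpoint, token, tenant, and paths are placeholders for the sketch.
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "REPLACE-WITH-HEC-TOKEN"

rows = duckdb.sql("""
    SELECT * FROM read_parquet('cold-tier/identity/*/*.parquet')
    WHERE tenant = 'client-042'
      AND event_time BETWEEN TIMESTAMP '2025-02-01' AND TIMESTAMP '2025-02-28 23:59:59'
""").df()
rows["event_time"] = rows["event_time"].astype(str)  # keep timestamps JSON-serializable

for record in rows.to_dict(orient="records"):
    requests.post(
        HEC_URL,
        headers={"Authorization": f"Splunk {HEC_TOKEN}"},
        json={"event": record, "sourcetype": "replay:identity", "index": "replay"},
        timeout=10,
    )
```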

An MSP that has built this architecture can answer the 20-page questionnaire across 50 clients in a day. An MSP that has not spends two weeks scrambling.

The pipeline architecture that produces evidence in 48 hours

A Cribl pipeline between every source and every destination is the most direct way for an MSP to reach insurance-grade evidence production without rebuilding its SIEM stack. Three things happen at the pipeline layer:

Schema normalization at ingest. Cribl maps every identity source (Entra, Okta, AD, VPN, MFA appliance) to a common event schema, usually OCSF, ECS, or ASIM. A single field called user means the same thing whether it came from Entra sign-in logs or an on-prem domain controller. The carrier-facing query "show every account that authenticated without MFA in the last 90 days across all clients" becomes a straightforward filter instead of a six-way join.

Tenant-aware routing. Every event carries a client tenant tag applied at the pipeline, which means the MSP can scope queries by client, by portfolio, or by sub-portfolio on demand. When a specific client's carrier asks for evidence, the answer is one query against the tenant tag. No per-client ad-hoc integration.

Enrichment with compliance-relevant context. The pipeline joins events with lookup tables during ingest: account type (privileged, regular, service, break-glass), MFA enforcement state, client tier, compliance regime. By the time an event lands in cold storage, it already carries the context the audit will need. The retrospective query is a filter, not a reconstruction.
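Cribl expresses these steps in its own pipeline functions; the sketch below shows the same logic in plain Python purely as an illustration, not Cribl syntax. The lookup tables, tenant IDs, and the simplified mapping of Entra's authenticationRequirement field to an MFA flag are assumptions for the example, not a formal OCSF, ECS, or ASIM mapping.

```python
# Illustrative pipeline logic in Python, not Cribl syntax: normalize a raw
# Entra ID sign-in event onto a common schema, tag it with its tenant, and
# enrich it with the context a later audit query will need.

# Hypothetical lookup tables the MSP maintains; in Cribl these would be
# lookup files applied at the pipeline layer.
TENANT_BY_SOURCE = {"entra:clienta-tenant-id": "client-017"}
ACCOUNT_TYPE = {
    ("client-017", "svc_backup@clienta.example"): "service",
    ("client-017", "admin.jdoe@clienta.example"): "privileged",
}

def normalize_entra(raw: dict) -> dict:
    """Map raw Entra sign-in fields onto the common identity schema."""
    return {
        "user":          raw.get("userPrincipalName"),
        "event_time":    raw.get("createdDateTime"),
        "auth_method":   raw.get("authenticationRequirement"),
        "mfa_completed": raw.get("authenticationRequirement") == "multiFactorAuthentication",
        "src_ip":        raw.get("ipAddress"),
        "outcome":       "success" if (raw.get("status") or {}).get("errorCode") == 0 else "failure",
        "source":        "entra_signin",
    }

def tag_and_enrich(event: dict, source_key: str) -> dict:
    """Attach tenant and compliance context before the event reaches any destination."""
    tenant = TENANT_BY_SOURCE.get(source_key, "unassigned")
    event["tenant"] = tenant                                    # tenant-aware routing key
    event["account_type"] = ACCOUNT_TYPE.get((tenant, event["user"]), "regular")
    event["mfa_enforced"] = event["account_type"] != "service"  # simplified policy flag for the sketch
    return event
```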

The same pipeline also solves the cost problem that got most MSPs into trouble in the first place. SMB clients cannot pay for full-SIEM-tier retention on every log source. Routing the full-fidelity copy to object storage at a small fraction of SIEM pricing makes insurance-grade evidence affordable to keep for years. The SIEM tier stays lean. The archive stays deep. The questionnaire gets answered.

The MSP wedge: cyber insurance is creating new SIEM demand

Cyber insurance has inadvertently become one of the strongest demand drivers in the managed security services market. SMB clients who never needed a SIEM before now require one for insurance. The application literally asks whether the organization has centralized logging, and the answer "no" is disqualifying for most tiers of coverage.

This creates a strange market condition. SMBs need insurance-grade SIEM capability but cannot afford enterprise pricing. They cannot hire security engineers, they cannot deploy a multi-week SIEM rollout, and they certainly cannot absorb the 20 to 30 percent annual cost growth that enterprise SIEMs produce. They need the evidence capability without the cost and complexity.

MSPs that solve this problem capture the wave. The answer is not to put every SMB client on Splunk Enterprise. The answer is to build a multi-tenant pipeline architecture where the expensive SIEM tier is shared, scoped, and filtered per tenant, and the full-fidelity archive sits on object storage at commodity prices. One pipeline serves the entire client portfolio. Each client gets its own scoped view, its own retention policy, and its own carrier-facing report. The MSP's cost per client stays flat as the portfolio grows.
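In practice, "its own scoped view, its own retention policy, and its own carrier-facing report" can be as simple as a per-tenant configuration the shared pipeline consults at routing time. The structure below is a hypothetical sketch, not a product schema.

```python
# Hypothetical per-tenant routing and reporting configuration for the shared pipeline.
TENANT_CONFIG = {
    "client-017": {
        "siem_index": "client017_sec",   # scoped view in the shared SIEM tier
        "hot_retention_days": 30,        # keep the expensive tier lean
        "cold_retention_days": 1095,     # three years in object storage
        "carrier_reports": ["mfa_gap_90d", "edr_coverage", "backup_tests"],
    },
    "client-042": {
        "siem_index": "client042_sec",
        "hot_retention_days": 90,        # higher-tier client, longer hot window
        "cold_retention_days": 2555,     # seven years for a regulated portfolio
        "carrier_reports": ["mfa_gap_90d", "patching_sla"],
    },
}
```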

The revenue story is compounding. Each new SMB client requires a baseline SIEM capability for insurance, which the MSP can now deliver without bespoke engineering per client. Existing clients need revalidation each year as cyber insurance questionnaires expand. Every new carrier relationship introduces new reporting templates that reuse the same underlying logs. The pipeline pays for itself at roughly 15 to 25 clients and starts earning net new margin from there.

Where to start: the single-client audit simulation

Before investing in pipeline architecture across the portfolio, simulate the 48-hour audit on one client. Pick the client whose carrier has been most aggressive with mid-term audits, or the one whose stack is most typical of the portfolio average. Run the exercise for real, with a stopwatch.

Step one: take a representative 20-page cyber insurance questionnaire and mark every question that requires log evidence. There will be 40 to 60 of them. Assign each a data source (Entra, AD, EDR, backup, patching).

Step two: try to answer each one from the client's current stack. For each question, record how long it took, whether the evidence was findable in the SIEM or had to be pulled from a separate tool, and whether the time window requested was within retention. Most MSPs find that 30 to 50 percent of questions require pulling data from outside the SIEM, and 10 to 20 percent cannot be answered within the retention window at all.

Step three: compute the economics of pipeline architecture for this one client. Route the identity, EDR, backup, and patching sources through a Cribl pipeline with a dual write to the SIEM and to object storage. Re-run the questionnaire against the pipelined architecture. Measure the difference.
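One way to keep the stopwatch honest is to record each questionnaire item as a row and total the hours before and after the pipeline is in place. The fields and sample rows below are a suggested structure, not a formal template.

```python
from dataclasses import dataclass

# Suggested record for the single-client audit simulation: one row per
# questionnaire item, filled in during step two and again after step three.
@dataclass
class AuditQuestion:
    question: str             # e.g. "MFA enforced on all privileged accounts?"
    data_source: str          # Entra, AD, EDR, backup, patching, ...
    minutes_to_answer: int    # stopwatch time for this item
    answered_from_siem: bool  # or pulled from a separate tool by hand
    within_retention: bool    # was the requested time window still available?

def total_hours(rows: list[AuditQuestion]) -> float:
    return sum(r.minutes_to_answer for r in rows) / 60

# Hypothetical baseline rows from step two.
baseline = [
    AuditQuestion("MFA enforced on all privileged accounts?", "Entra", 95, False, True),
    AuditQuestion("EDR deployed on every server?", "EDR", 40, True, True),
]
print(f"Baseline: {total_hours(baseline):.1f} hours across {len(baseline)} questions")
```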

The number that usually comes out of that exercise is a 60 to 80 percent reduction in audit response time and a retention window that extends beyond every question the carrier could reasonably ask. The multi-tenant version of that result, scaled to 50 clients, is the business case for pipeline architecture across the portfolio.

Cyber insurance pressure is one of several structural forces pushing MSPs toward pipeline architecture. Related pieces cover the adjacent ground.

On the solution side, our Multi-Tenant Pipelines page covers the per-tenant routing, scoping, and attribution model that makes insurance-grade evidence production affordable across 10 to 100 clients. The Data Governance page covers PII masking and regulated data handling for HIPAA and PCI-DSS portfolios where the pipeline also needs to scrub sensitive fields before they reach any destination. MSP Services describes how we design and implement this architecture specifically for MSP portfolios.

The discovery call

If your next cyber insurance audit is within the quarter, or if a portfolio-wide revalidation is on the horizon, a discovery call will scope the shortest path between your current stack and insurance-grade evidence production. We will walk through the actual questionnaire you are expecting, map each question to a required log source, and identify the gaps that would take the longest to close under the current architecture. Thirty minutes is usually enough to rank the top three audit risks in your portfolio and pick a pilot client for the single-client simulation.

Schedule a discovery call and bring whatever carrier materials you have on hand, including any mid-term audit notices, renewal questionnaires, or claim denial correspondence. The more concrete the audit scenario, the faster the architecture conversation lands on something defensible.

Cyber insurance made SIEM and centralized logging mandatory. Your MSP's advantage is whether the evidence can be produced in 48 hours without scrambling. Everything else follows from that answer.

#cyber-insurance #siem #compliance #mfa #msp #audit #evidence #cribl #centralized-logging

Want to discuss how this applies to your environment?

Schedule a discovery call and we will walk through your specific data sources, platforms, and cost challenges.
