Extract loss runs to structured JSON

Loss runs are insurer-produced claim summaries used in quoting, renewals, and program reviews. The layout varies by carrier, often spanning multiple pages of claim tables. Sensible converts these into consistent JSON for underwriting, analytics, and broker workflows.

Why loss runs resist automated extraction

Carrier-specific layouts, multi-page claim tables, and inconsistent document quality make loss runs one of the hardest document types to extract at scale.

Carrier Format Variation

Every carrier formats loss runs differently: different column structures, different claim table layouts, different page conventions. One carrier puts totals on the first page; another buries them at the end. Each layout requires its own extraction configuration, and SenseML makes each one repeatable.

Multi-Page Claim Tables

A single loss run can span 50+ pages. Claim rows must be stitched across page breaks without duplication or omission. Sensible tracks table continuations and merges them into a single structured output.
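The stitching step can be sketched in a few lines of Python. This is an illustrative sketch, not Sensible's implementation: it assumes each page yields a list of claim-row dicts and that claim number plus date of loss identify a row.

```python
def stitch_claim_tables(pages):
    """Merge claim rows extracted from successive pages into one table.

    Rows repeated across a page break (e.g. a carry-over row re-printed
    at the top of the next page) are deduplicated on the assumed key
    (claim_number, date_of_loss). Illustrative only.
    """
    seen = set()
    merged = []
    for page_rows in pages:
        for row in page_rows:
            key = (row["claim_number"], row["date_of_loss"])
            if key not in seen:
                seen.add(key)
                merged.append(row)
    return merged


pages = [
    [{"claim_number": "C-1", "date_of_loss": "2023-01-05", "paid": 1200}],
    [
        # carry-over row repeated at the top of page 2
        {"claim_number": "C-1", "date_of_loss": "2023-01-05", "paid": 1200},
        {"claim_number": "C-2", "date_of_loss": "2023-03-18", "paid": 0},
    ],
]
claims = stitch_claim_tables(pages)
```

Deduplicating on a stable key, rather than on page position, is what keeps rows from being double-counted when a carrier repeats headers or rows at page boundaries.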

Inconsistent Document Quality

Some carriers deliver clean PDFs. Others send scanned images, Excel-to-PDF conversions, or faxed copies with misaligned columns. Our hybrid engine combines LLM parsing for degraded inputs with deterministic rules that enforce output consistency.

Fields we extract

Sensible extracts these fields by default. Customize the output schema to match your underwriting pipeline.

Policy headers

Policy number, insured name, effective/expiration dates, carrier, line of business

Claim detail

Claim number, date of loss (DOL), cause, status (open/closed), paid, reserve, total incurred, coverage, location

Totals

Claim count, open count, paid total, reserve total, incurred total, frequency, severity


{ /* Sensible uses JSON5 to support in-line comments */
  "fields": [
    {
      "method": {
        "id": "queryGroup",
        "queries": [
          {
            // Extract the claim number from each row in the loss run
            "id": "claim_number",
            "description": "claim number, claim #, claim ID"
          },
          {
            // Date of loss for each claim
            "id": "date_of_loss",
            "description": "date of loss, DOL, loss date",
            "type": {
              "id": "date"
            }
          },
          {
            // Policy number associated with the loss run
            "id": "policy_number",
            "description": "policy number, policy #"
          },
          {
            // Total incurred amount per claim
            "id": "total_incurred",
            "description": "total incurred, incurred amount",
            "type": {
              "id": "currency"
            }
          }
          // Additional fields for paid, reserve, status, cause, etc.
        ]
      }
    }
  ]
}
Travelers Loss Run

Travelers carrier format with per-occurrence claim detail and incurred totals.

Hartford Loss Run

Standard multi-page loss run format from The Hartford with claim tables and policy summaries.

Liberty Mutual Loss Run

Liberty Mutual format covering Workers' Comp, General Liability, and Auto lines.

Zurich Loss Run

Zurich carrier loss run with large deductible and self-insured retention detail.

Chubb Loss Run

Chubb loss history report with open and closed claims, reserves, and payments.

AIG Loss Run

AIG format spanning commercial and specialty insurance lines.

Supported Carriers

Sensible processes loss runs from any carrier. Pre-built configurations ship in the template library, and new carrier formats can be configured in hours using SenseML's hybrid extraction approach.

Major carriers

Hartford, Travelers, Liberty Mutual, Chubb, AIG, Zurich, CNA, and more

Any carrier

Regional carriers, MGAs, and specialty markets; any loss run format can be configured.


Common Questions

Answers about loss run parsing, carrier support, and multi-page claim table handling.

Can Sensible handle multi-page loss runs?

Yes. Sensible processes loss runs of any length. Claim tables that span multiple pages are stitched together into a single structured output.

Does Sensible calculate claim aggregates from loss runs?

Sensible extracts totals when they appear on the document. You can also compute claim count, open count, paid total, reserve total, incurred total, frequency, and severity from the structured claim data.
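Computing those aggregates from the structured claim rows is straightforward arithmetic. The sketch below assumes illustrative field names (`status`, `paid`, `reserve`, `total_incurred`) that you would adapt to your own output schema, and treats severity as average incurred per claim and frequency as claims per policy year:

```python
def claim_aggregates(claims, policy_years=1.0):
    # Aggregate totals computed from extracted claim rows.
    # Field names are illustrative; match them to your output schema.
    count = len(claims)
    open_count = sum(1 for c in claims if c["status"] == "open")
    paid_total = sum(c["paid"] for c in claims)
    reserve_total = sum(c["reserve"] for c in claims)
    incurred_total = sum(c["total_incurred"] for c in claims)
    return {
        "claim_count": count,
        "open_count": open_count,
        "paid_total": paid_total,
        "reserve_total": reserve_total,
        "incurred_total": incurred_total,
        "frequency": count / policy_years,  # claims per policy year
        "severity": incurred_total / count if count else 0.0,  # avg incurred per claim
    }


claims = [
    {"status": "closed", "paid": 5000, "reserve": 0, "total_incurred": 5000},
    {"status": "open", "paid": 1000, "reserve": 4000, "total_incurred": 5000},
]
totals = claim_aggregates(claims, policy_years=1.0)
```

Computing aggregates yourself also lets you cross-check them against the totals printed on the document, which is a useful extraction sanity check.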

Do you support webhooks?

Yes. Sensible sends extraction results to your webhook endpoint when processing completes. You can also poll the API for status.
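A webhook receiver reduces to parsing the notification body and acting on completed extractions. The payload shape below (`id`, `status`, `parsed_document`) is a hypothetical example for illustration, not Sensible's documented schema; consult the API reference for the actual fields.

```python
import json


def handle_webhook(body: str):
    """Parse a webhook notification body.

    The field names used here are assumptions for the sketch, not
    Sensible's documented payload schema.
    """
    event = json.loads(body)
    if event.get("status") != "COMPLETE":
        return None  # still processing, or an error status
    return event["id"], event.get("parsed_document", {})


result = handle_webhook(json.dumps({
    "id": "extr_123",
    "status": "COMPLETE",
    "parsed_document": {"policy_number": "WC-001"},
}))
```

The same parsing logic applies whether the event arrives via webhook push or you fetch it by polling the API for status.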

What fields does Sensible extract from loss runs?

Sensible extracts policy number, insured name, effective dates, carrier, line of business, claim number, date of loss, cause, status, paid amounts, reserves, total incurred, and aggregate totals.

Does Sensible support human review?

Yes. Sensible flags extractions with low confidence for human review. You can configure review thresholds and workflows.
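The routing logic behind a review threshold can be sketched as a simple split on a per-extraction confidence score. The `confidence` field and the 0.8 threshold here are illustrative assumptions, not Sensible's actual review API:

```python
def route_for_review(extractions, threshold=0.8):
    # Split extractions into auto-accepted and human-review queues
    # based on an assumed per-extraction confidence score.
    accepted, review = [], []
    for e in extractions:
        (accepted if e["confidence"] >= threshold else review).append(e)
    return accepted, review


accepted, review = route_for_review(
    [{"id": "a", "confidence": 0.95}, {"id": "b", "confidence": 0.55}],
    threshold=0.8,
)
```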

Which insurance carriers' loss runs does Sensible support?

Sensible processes loss runs from any carrier. Pre-built loss run templates are available in the configuration library, and custom carrier formats can be configured in hours.

What security certifications does Sensible have?

Sensible is SOC 2 Type II certified and HIPAA compliant. Data is encrypted in transit and at rest.

How long is document data retained?

Document data is stored indefinitely by default. Custom retention policies are available and can be configured for same-day deletion if needed.

Is there a free trial?

Yes. Sensible offers a 14-day free trial on the Growth plan. No credit card required to start.

How is pricing structured?

Sensible uses per-document pricing for predictable costs. No token-based billing or usage surprises. Volume discounts are available for higher throughput.

How do I integrate with Sensible?

Sensible provides REST APIs and SDKs for Python and Node.js. Most integrations take a few hours. Webhooks, Zapier, and direct API calls are all supported.

What file formats does Sensible support?

Sensible processes PDFs (native or scanned), Microsoft Word (DOC, DOCX), spreadsheets (XLSX, XLS, CSV), single-page images (JPEG, PNG), multi-page images (TIFF), and email bodies with attachments.

How accurate is the extraction?

Accuracy depends on document quality and configuration. Most production deployments achieve 95%+ accuracy with proper validation rules and confidence signals.
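One common validation rule for loss runs is an internal consistency check: paid plus reserve should equal total incurred on each row. A minimal sketch, with assumed field names:

```python
def validate_claim(row, tolerance=0.01):
    # Flag rows where paid + reserve disagrees with total incurred,
    # a typical sign of a misread column or a row-stitching error.
    # Field names are illustrative.
    expected = row["paid"] + row["reserve"]
    return abs(expected - row["total_incurred"]) <= tolerance


flags = [
    validate_claim({"paid": 5000, "reserve": 0, "total_incurred": 5000}),
    validate_claim({"paid": 1000, "reserve": 4000, "total_incurred": 9000}),
]
```

Rows that fail a check like this are good candidates for the human-review queue rather than silent acceptance.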

How fast is document processing?

Processing speed depends on document size, page count, OCR requirements, and which extraction methods are used. Simple single-page documents process in seconds. Larger or more complex documents that use LLM-based extraction take longer.