How two insurance companies expedited underwriting with automated loss run processing

Updated on December 16, 2025
5 min read

In insurance underwriting, loss runs are central documents that tell the story of risk. From warehouse slip-and-fall claims to trucking fleet accidents, these documents report every claim and payout for a business's insurance policies. When underwriters evaluate whether to insure a business or renew a policy, loss runs help answer one of their biggest questions: "What happened in the past five years?"

Yet processing loss run data at scale has remained stubbornly manual, because loss runs are genuinely complex: each insurance carrier uses a different format, the same carrier can use different layouts for different coverage types, and individual documents can be dense, long, and span multiple years.

Two companies recently tackled this extraction challenge with Sensible's platform, but their differing business contexts resulted in contrasting implementation strategies for the same basic document type. One focused on automating data entry for workers’ compensation analysts and leaned on Sensible’s managed services for normalizing complex output requirements. The other built an underwriting platform serving hundreds of users and chose to build 300+ extraction templates themselves for maximum control and speed.

Company A: Workers' comp data specialists

An insurtech company, Company A, tackled loss runs from a specialized angle: they focus on processing workers’ comp loss runs for insurance underwriting decisions. Workers’ comp claim history is the strongest predictor of future risk for manufacturing facilities, warehouses, construction companies, and other high-headcount businesses involving physical labor or vehicle fleets. A warehouse employer with multiple workplace injuries presents a very different risk profile than one with a clean record.

The challenge: Density, variability, and schema precision

Company A's challenges combined technical complexity with strict business requirements. A single workers’ comp loss run could exceed 50 pages, with details on claim dates, amounts, injury descriptions, settlement status, and policy information. They processed a relatively low volume of loss runs, but each one could take hours of an analyst's time. Moreover, the analyst needed to input data in a precise format and schema for the company’s backend system.

The solution: Managed services with sophisticated logic

Company A opted for Sensible's managed services model. The Sensible team builds and maintains all of their configurations for extracting loss run data, covering multiple carrier formats and adjusting for evolving schema requirements. For example, at the company's request, Sensible created an output schema that represents claims as a flat array, with policy totals listed elsewhere in the loss run appended to each claim object. If policy totals appear in the source document, Sensible extracts them directly; otherwise, they’re deterministically calculated. Edge cases added another layer of implementation complexity that Sensible was able to meet. For example, some loss runs list policies with no associated claims, which would normally result in null output. In such cases, Sensible provides metadata like policy periods and carrier information so the company can confirm the policy exists but is claim-free.
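The normalization logic described above can be sketched in a few lines. This is an illustrative Python sketch, not Sensible's actual schema or code: all field names (`policy_number`, `total_incurred`, and so on) are hypothetical stand-ins for the kind of flat claims array the company's backend expects.

```python
# Hypothetical sketch: flatten claims into one array, append each policy's
# totals to every claim object, derive totals when the source document omits
# them, and emit a metadata-only record for claim-free policies.
# Field names are illustrative, not Sensible's actual output schema.

def normalize_loss_run(policies):
    """Flatten a list of extracted policies into one claims array."""
    output = []
    for policy in policies:
        meta = {
            "policy_number": policy["policy_number"],
            "policy_period": policy["policy_period"],
            "carrier": policy["carrier"],
        }
        claims = policy.get("claims", [])
        if not claims:
            # Claim-free policy: keep metadata so the downstream system can
            # confirm the policy exists but has no claims.
            output.append({**meta, "claim": None, "policy_total_incurred": 0.0})
            continue
        # Use totals from the document if present; otherwise derive them.
        total = policy.get("total_incurred")
        if total is None:
            total = sum(c.get("incurred", 0.0) for c in claims)
        for claim in claims:
            output.append({**meta, "claim": claim, "policy_total_incurred": total})
    return output

# Sample input: one policy with two claims and no stated total,
# plus one claim-free policy.
policies = [
    {"policy_number": "WC-001", "policy_period": "2022-2023",
     "carrier": "Acme", "claims": [{"incurred": 1200.0}, {"incurred": 800.0}]},
    {"policy_number": "WC-002", "policy_period": "2023-2024",
     "carrier": "Acme", "claims": []},
]
rows = normalize_loss_run(policies)
```

The key design point is that every row in the output carries its policy context, so the backend never has to join claims back to a separate policy table.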

The results: Viable automation for complex documents

By concentrating solely on workers' comp, Company A can optimize for accuracy in their specific domain while managing format variability across carriers. Their implementation demonstrates that even at lower volume, loss run processing can achieve strong ROI when documents are sufficiently complex. Long loss runs that previously required hours now process automatically within seconds. Moving forward, they can scale their extraction capacity without adding headcount. Most importantly, the company’s back-end system receives accurate, deterministic data in the required schema, regardless of source carrier.


Company B: Broad platform automation

Company B, an insurance underwriting platform, provides underwriters with a centralized workbench for evaluating risks. Initially, their platform simply provided document storage. Underwriters would upload loss runs, log into the platform, and manually extract the data they needed themselves.

As the platform grew to serve hundreds of underwriters across different insurance verticals, this manual extraction became a bottleneck that limited what the platform could offer.

The challenge: Building new features, not just efficiency

Company B's challenge wasn't primarily about cost or speed; it was about product differentiation. Their data analyst explained that if they could transform their platform from a document repository with manual data entry into a service offering automated document processing, they’d gain a significant edge over competing platforms.

As a platform, building this feature meant tackling tens of thousands of documents each month across many carriers and industries. They estimated they’d have to support about three hundred different loss run formats across carriers, with plans to expand coverage in the future. That's no easy task when each insurance carrier designs its own loss run layout, and even within a single carrier, the format for a workers’ comp loss run is completely different from commercial auto or general liability formats. Moreover, unlike Company A, which focused primarily on complex, long documents, this company saw far more variety: a single loss run might be 2 pages with one claim or 80 pages with hundreds of claims.

The solution: Self-service template creation

Company B took a self-service approach. After formal training from Sensible’s team, they built virtually all of their extraction templates themselves. This approach aligned well with their product strategy because they needed the flexibility to add new carrier formats quickly as their platform onboarded new underwriter clients.

Starting with roughly 50 templates, their trained internal team rapidly scaled to hundreds.

The results: A new product feature

Not only did the implementation enable Company B to offer automated loss run extraction as a platform capability, it led to new offerings. Leveraging their new skills in rapid template creation, they expanded their features to include automated ACORD form processing. As an aside, they found that while deterministic methods worked well for loss runs, a hybrid deterministic and LLM-based approach was best for ACORDs: they use Sensible’s LLM-based features to solve particularly thorny OCR problems in ACORDs, then deterministic methods to extract the cleaned OCR data. In sum, they stated that the automated extraction capability fundamentally changed their value proposition to underwriters.
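The hybrid pattern described above, an LLM pass to repair noisy OCR followed by deterministic extraction, can be sketched roughly as follows. This is a minimal illustration, not Sensible's implementation: `clean_ocr_with_llm` is a hypothetical stand-in (here just a lookup table) for what would in practice be an LLM-based cleanup step.

```python
import re

# Hypothetical sketch of a hybrid pipeline: an LLM pass repairs
# character-level OCR errors, then deterministic rules extract fields
# from the cleaned text. Names and patterns are illustrative only.

def clean_ocr_with_llm(raw_text):
    # Stand-in for an LLM cleanup step; a real implementation would
    # prompt a model to fix OCR errors like "P0licy N0." -> "Policy No."
    fixes = {"P0licy": "Policy", "N0.": "No."}
    for bad, good in fixes.items():
        raw_text = raw_text.replace(bad, good)
    return raw_text

# Deterministic rule applied after cleanup: a strict pattern that would
# fail on the raw OCR text but matches reliably once the text is repaired.
POLICY_RE = re.compile(r"Policy No\.\s*([A-Z]{2}-\d+)")

def extract_policy_number(raw_text):
    """LLM-cleaned input, deterministic output."""
    cleaned = clean_ocr_with_llm(raw_text)
    match = POLICY_RE.search(cleaned)
    return match.group(1) if match else None

number = extract_policy_number("P0licy N0. WC-12345")
```

The division of labor is the point: the LLM handles the fuzzy, perception-like problem (reading mangled characters), while the deterministic rule guarantees the extracted value matches an exact, auditable pattern.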

Two strategies, one document type

These implementations reveal how the same document type demands different approaches:

Company A optimized for depth and precision. Focusing on workers’ comp with specific schema requirements, they needed expert configuration and sophisticated normalization logic. Sensible’s managed services offering freed them to focus on their core business.

Company B optimized for breadth and flexibility. Processing tens of thousands of documents across hundreds of formats, they needed rapid template creation capability so they could offer new products. Self-service gave them control and speed.



Key takeaways for loss run automation

If you're evaluating loss run extraction solutions:

  1. Plan for format proliferation: Even one coverage type means building dozens to hundreds of carrier-specific data extraction templates. Loss runs tend to be too complex for generalizable extraction templates. 
  2. Design for edge cases early: Requirements like handling no-claim policies will emerge. Build flexibility into your schema design.
  3. Deterministic methods deliver accuracy: Underwriting decisions depend on extraction precision. LLM-based approaches that work well for other document types struggle to provide that precision for dense, complex loss runs. Sometimes it’s advantageous to opt for deterministic extraction approaches instead.


Get started with loss run automation

Whether you're building an underwriting platform or specializing in policy transitions, Sensible's solution engineering team can help design a loss run extraction pipeline for your specific needs.



Book a demo to discuss your loss run processing requirements, or explore our managed services to see how we can handle template creation and maintenance for you.

Frances Elliott

Turn documents into structured data

Stop relying on manual data entry. With Sensible, you can claim back valuable time and deliver a superior user experience; your ops team will thank you. It’s a win-win.