We Built a Workbench for Underwriting
Here's What Worked (And What Didn't)
By Puneet Shrivas
"After testing we are really surprised how great the tool matches our manual results - even in our language. We would like to test the tool more intensively and to its full extent. We would be happy to do so using current submission and renewal cases handled by our underwriters. Would it be possible to make the tool available for our financial lines underwriters for the mentioned testing" - our POC for German D&O Underwriting Workbench Pilot
From Executive Dashboards to Underwriting Workflows
We came into this massive reinsurer as IT consultants about eight months ago, originally hired to build Gen AI tools for the C-suite: you know, exhibit-level overviews and platforms for analyzing how stock performance affected their exposure. Pretty standard executive stuff.
But here's what's funny about consulting projects: you always discover things you weren't looking for. While building these financial analysis tools, we kept running into the underwriting team. Their pain points were so obvious and so solvable that we couldn't ignore them.
The executives got their dashboards, but the underwriters were still drowning in manual processes. So we expanded the scope. It probably made the project manager nervous, but it was the right call.
Step 1: From Strategic Vision to Workflow Reality
The original brief was all high-level insights and financial modeling. But talking to underwriters, we realized the real opportunity was much more granular: these people had manual checklists for everything, built around workflows that hadn't been updated in years.
Instead of another executive dashboard that gets looked at once a month, we focused on daily workflow automation: document review checklists, risk assessment protocols, compliance verification. All of it was perfect for automation.
Most of the insights came from just sitting with underwriters and watching them work. You'd see someone spend two hours manually checking policy language and think, "okay, this is definitely something AI can help with."
Step 2: Complex Data That Stumped Everyone Else
Here's where things got interesting technically. We weren't dealing with simple documents or basic queries. These underwriters worked with incredibly complex Excel sheets, multi-layered XML data feeds, and regulatory filings structured in weird ways.
We tested ChatGPT and Microsoft Copilot early on. Complete disaster. Neither could retrieve accurate data from complex financial models, their calculations were wrong half the time, and the security issues were obvious: you can't feed proprietary reinsurance data to external APIs.
Our internal copilot was far more capable because we trained it specifically on their data structures. It understood their Excel models, parsed XML feeds accurately, and ran calculations on the fly without sending anything outside their network.
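To make that concrete, here is roughly what "parse it locally, calculate it locally" looks like as a minimal Python sketch. This is not the actual copilot: the feed schema, the field names, and the summary figures are invented for illustration. The point is simply that the data never has to leave the network to be useful.

```python
# Minimal sketch of "parse locally, calculate locally".
# The schema and field names (<treaty>, sum_insured, premium) are invented
# for illustration; the real feeds were proprietary and far messier.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """
<submission>
  <treaty id="T-1001">
    <sum_insured currency="EUR">25000000</sum_insured>
    <premium currency="EUR">180000</premium>
  </treaty>
  <treaty id="T-1002">
    <sum_insured currency="EUR">40000000</sum_insured>
    <premium currency="EUR">310000</premium>
  </treaty>
</submission>
"""

def summarize_feed(xml_text: str) -> dict:
    """Aggregate exposure figures from a submission feed, entirely in-process."""
    root = ET.fromstring(xml_text)
    treaties = list(root.iter("treaty"))
    total_si = sum(float(t.findtext("sum_insured")) for t in treaties)
    total_premium = sum(float(t.findtext("premium")) for t in treaties)
    return {
        "treaty_count": len(treaties),
        "total_sum_insured": total_si,
        "total_premium": total_premium,
        "premium_rate_pct": round(100 * total_premium / total_si, 3),
    }

if __name__ == "__main__":
    print(summarize_feed(SAMPLE_FEED))
```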
Step 3: Automating the Checklists That Actually Mattered
The breakthrough came when we digitized the manual checklists scattered throughout their underwriting process: risk assessment checklists, compliance verification, policy review protocols. Critical but tedious work, and prone to human error.
We went through a lot of iterations. The first versions would miss edge cases or flag false positives, but we kept refining for accuracy and thoroughness, testing against historical cases where we already knew the right answers.
The AI could be far more thorough than humans on routine checks while flagging unusual cases for manual review. It would catch policy language inconsistencies that a tired underwriter might miss, but it also knew when something was weird enough to escalate. Automating those manual checklists ended up being our biggest win.
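For a sense of what a digitized checklist item looks like, here is a minimal sketch. The rule names, regexes, and thresholds are made up; the real checks were derived from their underwriting manuals. The pattern is what matters: deterministic checks run automatically, and anything ambiguous gets escalated to a human instead of being silently passed or failed.

```python
# Stripped-down sketch of a digitized review checklist: routine checks run
# automatically, anything ambiguous is escalated. Rules and thresholds below
# are illustrative only.
import re
from dataclasses import dataclass

@dataclass
class CheckResult:
    name: str
    status: str   # "pass", "fail", or "escalate"
    detail: str

def check_retroactive_date(policy_text: str) -> CheckResult:
    if re.search(r"retroactive date", policy_text, re.IGNORECASE):
        return CheckResult("retroactive_date", "pass", "Clause present")
    return CheckResult("retroactive_date", "fail", "No retroactive date clause found")

def check_limits(policy_text: str) -> CheckResult:
    amounts = [int(a.replace(",", "")) for a in re.findall(r"EUR\s*([\d,]+)", policy_text)]
    if not amounts:
        return CheckResult("limits", "escalate", "No EUR amounts detected; needs human review")
    if max(amounts) > 50_000_000:
        return CheckResult("limits", "escalate", f"Unusually high limit: EUR {max(amounts):,}")
    return CheckResult("limits", "pass", f"{len(amounts)} limits within expected range")

def run_checklist(policy_text: str) -> list[CheckResult]:
    return [check_retroactive_date(policy_text), check_limits(policy_text)]

if __name__ == "__main__":
    sample = "Limit of liability EUR 25,000,000 per claim. Retroactive date: 01.01.2015."
    for result in run_checklist(sample):
        print(f"[{result.status.upper():9}] {result.name}: {result.detail}")
```

Splitting outcomes into pass / fail / escalate was the key design choice: the automation can afford to be aggressive on routine checks precisely because it has a safe way to hand anything unusual back to the underwriter.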
Step 4: Integration With Their Actual Workflow
Because we were building internally, we could integrate much deeper than any external solution. The AI lived directly in their Excel models, email workflows, and policy management systems. No context switching, no separate apps to remember.
Underwriters could run risk assessments directly in the spreadsheets they were already using, and the AI would read XML data feeds and automatically populate analysis templates. It felt less like new technology and more like their existing tools had just gotten smarter.
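Here is a rough sketch of that pattern: values from an XML feed land straight in the analysis spreadsheet. It assumes openpyxl is available, the cell layout and feed fields are invented, and in practice you would open the team's existing template rather than create a workbook from scratch, but this is the shape of the integration.

```python
# Sketch of the "no context switching" integration: values from an XML feed
# are written straight into the spreadsheet underwriters already use.
# Requires openpyxl. Feed fields and cell layout are assumptions.
import xml.etree.ElementTree as ET
from openpyxl import Workbook

FEED = """
<risk>
  <insured>Example Industrial AG</insured>
  <line>D&amp;O</line>
  <limit>10000000</limit>
  <attachment>2500000</attachment>
</risk>
"""

def populate_template(xml_text: str, out_path: str) -> None:
    root = ET.fromstring(xml_text)
    wb = Workbook()                      # stand-in for loading the real template
    ws = wb.active
    ws.title = "Risk Analysis"
    ws["A1"], ws["B1"] = "Insured", root.findtext("insured")
    ws["A2"], ws["B2"] = "Line", root.findtext("line")
    ws["A3"], ws["B3"] = "Limit", float(root.findtext("limit"))
    ws["A4"], ws["B4"] = "Attachment", float(root.findtext("attachment"))
    wb.save(out_path)

if __name__ == "__main__":
    populate_template(FEED, "risk_analysis.xlsx")
```

Because the output is the same workbook the underwriter was going to open anyway, there is nothing new for them to learn.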
"Integration over innovation" became our motto, because adoption was faster when people didn't have to change how they worked.
Big Wins We Actually Measured
The results were honestly better than anyone expected. Manual checklist completion went from hours to minutes, and document analysis that used to take a full day could be done in under an hour with higher accuracy.
An unexpected win was how much more thorough the analysis became. The AI could cross-reference policy terms against regulatory requirements, historical claims, and internal guidelines simultaneously. No human could keep all of that context in their head.
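As a toy illustration of the cross-referencing idea (the real system ran through the internal copilot over far richer sources), here is a sketch that surfaces each policy clause next to the internal guideline it touches, so a reviewer sees both side by side. The guideline texts and keywords are invented.

```python
# Toy sketch of cross-referencing: each policy clause is matched against
# internal guidelines by keyword and surfaced for side-by-side review.
# Guideline texts and keywords are invented for illustration.
GUIDELINES = {
    "defence costs": "Defence costs must be included within the limit of liability.",
    "territory": "Worldwide coverage excluding claims brought in the USA/Canada unless referred.",
    "retention": "Minimum retention of EUR 50,000 for listed companies.",
}

def flag_clauses(clauses: list[str]) -> list[tuple[str, str]]:
    """Return (clause, matching guideline) pairs that deserve a closer look."""
    hits = []
    for clause in clauses:
        lowered = clause.lower()
        for keyword, guideline in GUIDELINES.items():
            if keyword in lowered:
                hits.append((clause, guideline))
    return hits

if __name__ == "__main__":
    sample_clauses = [
        "Defence costs are payable in addition to the limit of liability.",
        "Territory: worldwide including USA and Canada.",
    ]
    for clause, guideline in flag_clauses(sample_clauses):
        print(f"REVIEW: '{clause}'\n    vs guideline: '{guideline}'\n")
```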
Risk assessment quality improved because AI never got tired or missed obvious red flags. Underwriters could focus on genuinely complex judgment calls instead of routine verification.
What Completely Failed
Our biggest failure was an AI playground we built. Teams could access different models, customize prompts, and connect to knowledge bases. The idea was to let them automate their own workflows without always coming back to the development team.
Total disaster. Teams created completely different output formats for similar tasks: one wanted bullet points, another paragraphs, a third structured data. Follow-up requests were all over the place and impossible to consolidate.
We spent more time maintaining quality across those custom implementations than we would have spent building everything centrally. The playground concept sounds great in theory but created chaos in practice.
Lessons for Other Reinsurance Teams
Start with one specific workflow automation, not a general-purpose AI tool. Pick manual checklists or document analysis, get it working perfectly, then expand.
Don't underestimate data security and accuracy for complex financial calculations. External AI tools aren't ready for sophisticated reinsurance analysis.
Resist building self-service AI platforms unless you have a huge support team. Centralized development with tight quality control works better than letting everyone build their own solutions.
Ready to see what this looks like in practice? We've built a copilot specifically for underwriters that handles document processing, data extraction, and all the tedious prep work so you can focus on risk assessment and decision making. Want to take it for a test drive? Try our new Underwriter Copilot today and see how much time you can get back in your week. No more PDF hunting, no more manual data entry - just better underwriting.