I feel it’s time to truly address the elephant in the room, because there is a genuine fear in the market – perhaps in your own office – that AI wants to take the wheel: that the goal of “insurtech” is to remove the underwriter, remove the broker, and turn risk transfer into a vending machine.
However, I want to make this very clear: that is not the Cowbell vision.
In fact, I believe the future of insurance isn’t “Man vs. Machine.” It is a symbiotic relationship where the machine handles the heavy lifting, and the human handles the judgment.
The “Venn Diagram” of Success
You can describe our philosophy as a simple Venn diagram.
- Circle A (The AI ‘Machine’): This is where we put the tasks that require massive scale and speed – analysing millions of data points, spotting patterns in vast datasets, and flagging routine anomalies.
- Circle B (The Human): This is where we put the tasks that require nuance, ethics, context, and commercial strategy.
The magic happens in the overlap.
When I spoke previously, I used the analogy of a software engineer. Today, the best engineers in the world use AI tools (like Claude or GitHub Copilot) to write code. The AI doesn’t replace the engineer; it acts as a “force multiplier.” It spots the bugs, suggests the structure, and identifies historical faults that a human might miss in a rush.
But the engineer still designs the architecture. The engineer still decides what to build.
We are bringing the same logic to underwriting. Our AI – Cowbell Co-Pilot – is designed to spot the “bugs” in a risk profile. It flags the vulnerabilities and translates the technical data, but it leaves the complex decision-making – the strategy – to the human expert.
The Educational Engine
One of the most overlooked benefits of AI within the insurance industry is its ability to teach the next generation of Underwriters.
We respect the London Market’s tradition and deep expertise, but cyber risk changes so fast that even the biggest and best ‘experts’ struggle to keep up.
We view our AI not just as a calculator, but as an educational tool. When our system flags a risk, it doesn’t just say “No.” It explains why. It breaks down the technical jargon into plain English.
This means that, over time, underwriters and brokers actually become smarter. The AI helps you understand the nuance of a specific vulnerability (like a new CVE or an exposed port), allowing you to explain it to your client with confidence.
Turning “Techno-Babble” into Strategy
For the UK broker, this is transformative. You often sit between a technical problem (the client’s risk) and a commercial solution (the policy).
Our AI acts as the translator. It takes the complex “techno-babble” of a security scan and turns it into a clear, strategic narrative. It allows you to say to a client: “The system has flagged this specific issue – not because the computer says so, but because it mirrors a pattern we’ve seen in similar claims.”
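To make the idea of “translation” concrete, here is a minimal, purely illustrative sketch of mapping raw scan findings to plain-English talking points. The finding identifiers, the wording, and the `FINDING_NARRATIVES` lookup are my own assumptions for illustration – they do not reflect Cowbell’s actual data model or system.

```python
# Hypothetical sketch: turn raw scan-finding identifiers into plain-English
# talking points a broker could use. Names and wording are illustrative only.

FINDING_NARRATIVES = {
    "open_rdp_port": (
        "Remote Desktop is reachable from the internet. This mirrors a "
        "pattern seen in many ransomware claims, where attackers brute-force "
        "exposed RDP to gain a foothold."
    ),
    "outdated_tls": (
        "The website still accepts outdated encryption (TLS 1.0/1.1), which "
        "weakens protection of customer data in transit."
    ),
}

def translate_findings(scan_findings):
    """Return plain-English narratives for the findings we recognise."""
    return [
        FINDING_NARRATIVES[finding]
        for finding in scan_findings
        if finding in FINDING_NARRATIVES
    ]

# Unrecognised findings are simply skipped rather than guessed at.
notes = translate_findings(["open_rdp_port", "unrecognised_signal"])
print(len(notes))  # → 1
```

The point of the sketch is the shape of the workflow, not the code: a technical signal goes in, and a commercial narrative the broker can repeat to a client comes out.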
The Human in the Loop
Let me be absolutely clear: Human judgment and accountability remain non-negotiable.
AI is incredible at processing data, but it lacks context. It doesn’t know the client’s history, their business culture, or the specific commercial pressures they face. That is why we must keep the “human in the loop.”
We use the machine to clear the desk of the routine, repetitive tasks – the “drudgery” – so that our underwriters have the time to pick up the phone and actually talk to you.