As generative tools move from novelty to everyday habit, the line between AI that supports a broker’s judgment and AI that replaces it is getting harder to see. Survey data showing that almost all brokers now use AI in some capacity highlights just how fast adoption has moved. For Jonathan Weekes, president, Canada at BOXX Insurance – and a former broker for 15 of his 18 years in the industry – the real question has shifted. It’s no longer whether brokers should touch AI at all, but how they use it and what that means for professional risk.
“I was a broker right up until March 2025,” he says. “I don’t think I was using it as much as I could have. I probably could have been more efficient if I’d found ways to leverage it.” What surprised him in the recent survey was how far his former peers have gone. “It was a shock to me to see that almost all the brokers we surveyed are using it.”
That enthusiasm, he suggests, comes with a catch.
For Weekes, the tipping point has nothing to do with the sophistication of the tool and everything to do with how it’s used.
“AI becomes a professional risk the moment it starts to substitute for judgment, rather than supporting judgment or a recommendation,” he says.
Using AI to summarise information, surface considerations or help organise thinking is one thing. Letting it generate advice, recommendations or explanations that go straight to a client without human validation is something else entirely.
“The tipping point is when brokers stop interrogating the outputs,” Weekes says.
In practical terms, that might look like AI-drafted advice, recommendations or coverage explanations going out the door without a human check. Each of those shortcuts saves time. In an E&O claim, each is also a gift to a plaintiff’s lawyer.
Asked what the worst scenario would be in the near future, Weekes doesn’t expect a single dramatic AI-driven disaster. The real danger, he says, is slower and more insidious.
“There isn’t a single point of failure we can really anticipate,” he explains. “It’s more likely to be compounding – a few different issues stacking up.”
Those include hallucinated advice that sounds confident but is wrong, embedded bias and a lack of genuine reasoning, and unintentional data leakage when client or corporate information is fed into public tools.
Layered on top is a skills problem. Brokers without a solid baseline understanding of insurance are poorly equipped to distinguish good AI output from bad.
“If they don’t have that baseline understanding, and they’re relying on AI to develop thoughts, opinions and advice that they pass on to clients, you start to see this compounding effect of misinformation or disinformation,” he says.
The nightmare scenario wouldn’t be one spectacular AI-fuelled loss. It would be hundreds or thousands of small, AI-assisted misstatements about coverage, duty or exclusions quietly embedded in advice and marketing materials – only exposed when claims arise.
That creeping change raises a tougher question: if AI is now part of everyday brokerage work, will it change what courts and regulators expect of a “reasonable” professional?
Weekes thinks it will.
“As AI becomes more embedded in risk analysis, regulators and courts will begin to ask not just whether AI was used, but whether it was used responsibly,” he says.
The mere presence of AI won’t be seen as inherently negligent or prudent. What will matter is governance and judgment: which tools were used, for what tasks, with what oversight and documentation.
“I think the definition of a responsible broker will evolve to reflect the tools available to them,” Weekes adds. “There’ll be a higher standard in terms of governance and controls around AI use. That feels inevitable.”
He draws parallels to earlier technology shifts. As the industry moved from typewriters to word processors and internal automation, expectations emerged around documentation quality, version control and file management. Over time, those expectations shaped what “reasonable practice” looked like.
“I don’t see why AI would be treated any differently,” he says.
That has two key implications for professional risk. Refusing to use AI altogether is unlikely to remain a long-term shield if most of the market is using it to improve analysis and documentation. At the same time, using AI badly – with no policies, oversight or validation – will look increasingly reckless as governance norms solidify.
So where does that leave brokers heading further into 2026?
For Weekes, the principle is simple, even if the execution isn’t. AI has real upside when it’s used on the right foundation.
“AI can be very effective in helping brokers bring additional insights forward,” he says. For brokers who already understand risk and controls, it can help gather and organise information or remove friction from repetitive tasks – building baseline spreadsheets, structuring presentations or drafting first-pass summaries, provided everything is checked.
But the dividing line remains firm.
“AI should accelerate thinking,” Weekes says. “It shouldn’t outsource accountability of thought.”
Before anything AI-assisted goes out the door, he applies a simple test: would you be comfortable standing behind this output in front of a client, a regulator or a judge? If the answer is no – or even “I’m not sure” – then the professional risk line has already been crossed.
Originally published on Insurance Business