As automation becomes more common in the Canadian P&C insurance industry, the question facing brokers is no longer whether AI will shape insurance distribution, but where accountability sits when technology begins influencing advice.
Jonathan Weekes, president of BOXX Canada, says brokers risk undermining consumer trust and confidence if technology replaces diligence.
“Trust is our greatest asset,” he tells Canadian Underwriter. “To immediately trust technology simply because we think it’s better than us as human beings is probably the biggest risk that we face.”
How many Canadian P&C insurance brokers are using AI in their brokerages?
There is “little publicly available research about brokers’ use of AI tools in Canada,” Ontario’s broker regulator, Registered Insurance Brokers of Ontario, found in its December 2024 report on AI use. “Interviews with brokers suggested that current AI use is largely centred around process automation…and marketing.”
Weekes confirms brokers are already using AI tools to improve efficiency and analyze information more quickly. But he cautions brokers can’t outsource their responsibility for AI outputs.
“Automation tools, AI… are really great at enhancing the efficiency of brokers,” he says. “But it certainly shouldn’t start to think for brokers or service on a broker’s behalf.”
Industry experts warn AI can create false confidence if automated outputs are treated as complete answers. Coverage decisions often hinge on nuance, exclusions, and context requiring professional interpretation.
As these tools enter the advice chain, brokers remain responsible for the guidance clients rely on — regardless of how information is generated.
Weekes is clear about where that responsibility remains. “It sits with the broker,” he says.
To maintain clients' confidence in an increasingly automated environment, brokers must be transparent about what their technology can and cannot do, Weekes says, and they must avoid treating AI outputs as definitive. Customers are more likely to trust brokers who slow down at key moments, ask the right questions and document decisions clearly.
Conversely, trust can be undermined when brokers prioritize speed over diligence, or when clients sense advice is being delegated to a tool rather than grounded in professional judgment.
A lack of accountability carries legal consequences, says Daniel Strigberger, an insurance coverage lawyer and principal at Strigberger. He notes technology can erode the personal connection underpinning the consumer’s confidence in insurance decisions.
“Computers and AI take away the human element and the relationship and the interaction,” Strigberger says.
Furthermore, he warns the real risk comes when AI-generated guidance is accepted without verification and later tested in a dispute.
“If you use AI blindly…without fact checking…there’s definitely potential [for E&O liability],” Strigberger says. “If I’m cross-examining [brokers] and [they say] ‘…I got it from ChatGPT,’ that’s going to be a problem.”
Rather than replacing brokers, Weekes expects automation will shift the profession toward deeper advisory work, with technology handling more of the routine administrative functions.
“The paper-pushing tools or functions within that process will be handed off to AI and then brokers will take on more of a consultancy or an advisory role,” Weekes predicts.
In an automated workplace, Weekes says the broker’s differentiator will be less about speed and more about “judgment, advocacy and accountability,” particularly when clients face complex risks or when claims outcomes depend on how coverage was structured.
Brokers should not reject automation, Weekes says. But they should ensure technology remains a tool that supports — rather than replaces — human judgment.
“Don’t let these tools think on your behalf,” he says. “Your judgment should still be the last call.”
Originally published on Canadian Underwriter