Global leader on why he thinks, “the insurance industry is approaching AI with the wrong mindset” | Insurance Business America
There is a great collision taking place – where are we going wrong?


Technology

By Mia Wallace

“We’ve got to stop convincing ourselves that the first to use technologies is going to be the most successful when, in fact, it’s the organization that uses it the right way that will ultimately be the winner.”

‘The great collision’ represented by AI is already underway, according to Rory Yates (pictured), global strategic lead at EIS, and he’s got some stark warnings for the insurance industry and how it’s currently utilizing AI tools.

Championing a positive business case for AI

“I’ve always taken the view that when I’m leading transformation projects, we’re freeing up human capital to be deployed where it’s really needed,” Yates said. “I think the insurance industry is approaching AI with the wrong mindset because it’s adopting from the position of a negative business case where you replace people in the name of efficiency. That’s based on a displacement theory, where people just bear the brunt and it won’t produce a better result for the end customers or humanity.”

Yates noted that the Industrial Revolution is a pertinent example because what’s happening today is itself a protracted revolution. The Industrial Revolution resulted in the deaths of many people and the decimation of many communities. It’s something we don’t want repeated. We’re supposed to be living in an “era of intelligence”, he said, so it’s to be hoped that society is now better equipped from a technological and a moral standpoint to utilize machine intelligence without sacrificing people.

This will involve re-training to redirect people to where they can add significant value. In many cases this will be the customer experience, presenting the opportunity for a net positive for employees and customers alike. But if you make the business case negative, he said, all you do is shed that human asset.

The regulatory implications of AI

An area of significant concern is just how little people understand about AI. Machine learning is already well established, and although people superficially know how to use generative AI, they don’t know what it is, where it comes from, how it’s funded, or what implications it has for their data.

“Companies are investing in massive multi-million-dollar projects, which is especially absurd when you realize these are the same insurers that won’t utilize a public cloud because they’re wary of putting their data out there,” he said. “These businesses go through so much regulation around their customer data, but when it comes to AI, they’re throwing it out the door without really knowing where it’s going.”

Setting the right foundations for a healthy approach to AI

“We’re also running the massive risk of rolling out genAI on top of a lot of weak data,” Yates said. “Insurance has got a lot of data but it’s largely unstructured so it’s very hard to understand how insurers will make sure it’s acting in the best interests of the customers. We’re not even very good at that on a policy level and suddenly we think we’re going to do it in this open model.

“So, we’ve got to get the foundations right if we’re going to leverage the many possibilities of AI and do that in the constraints of it being provably better for humans. There’s always going to be volatility and uncertainty, that’s a given, but it comes back to asking the right questions and making sure if you’re a CEO or CFO signing off on any projects, that you know what you’re signing off on.”

Yates believes that 2024 will be a tipping point for AI, and he noted that, of any industry, insurance likely has the most to gain from getting the foundations right. GenAI can bring simplicity to tasks where a high degree of accuracy is required, particularly around making policy information more accessible, more quickly. Keeping a human in the loop will be critical for data validation purposes, which highlights how this tool can be utilized to complement and enhance the working lives of people.

“There’s a lot of really powerful use cases and I have no intention of standing in the way of them,” he said. “But to get to these, there’s an awful lot of responsibility we need to bake into our decision-making first to get this done right. We’ve got to invest in the fundamentals before we build. Because there’s no point in knocking down the bungalow to build a skyrise AI only to later find we’ve not built it on strong foundations.”
