Without Action, AI Is Just Hype

Artificial intelligence (AI) can produce astonishingly accurate analyses and predictions, but these mean very little if businesses fail to act on them.

“Unless you actually take the trouble to implement it, AI is just a party trick, which is very impressive but doesn’t help anyone,” said Daniel Saksenberg, chief AI officer at machine learning company Emerge ML.

He was the keynote speaker at an ‘AI in the Financial Services Industry’ event held in Johannesburg on 24 July, hosted jointly by pre-eminent African law firm Bowmans and Microsoft.

According to Saksenberg, whose topic was ‘From data to value: unlocking AI in financial services’, there are two main reasons why AI does not necessarily deliver the value that financial institutions are hoping for, and can even be loss-making. “One is that they are solving interesting but not valuable problems. The second is that the output of AI is just information. You need to act on this information.”

An actuary, academic lecturer and entrepreneur who has worked in AI for 23 years, Saksenberg gave the example of an AI model built for a South African insurer that was losing customers. Looking three months ahead, the model predicted which customers would cancel their policies.

“It was scarily accurate; we’re talking upper 90 per cent accurate. We gave them a list of 1 000 people, and almost to a person those people left in the next three months,” he said, adding that this accuracy had left the insurer no better off, as it had not acted on the insights produced.

“It’s important to devise interventions to operationalise the data,” he said, offering other, more successful examples of the benefits that AI can bring to financial services companies that act on its insights effectively.
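Saksenberg did not detail how the insurer’s model was built, but a minimal sketch of a churn predictor of this kind, written in Python with scikit-learn, might look like the following. The file, feature and label names are invented for illustration.

```python
# Hypothetical sketch of a churn model like the one described above.
# Assumes a policyholder table with behavioural features and a label
# marking whether the customer cancelled within the following three months.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("policyholders.csv")  # placeholder dataset
features = ["premium", "claims_last_year", "tenure_months", "missed_payments"]
X, y = df[features], df["cancelled_within_3_months"]

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# The output is just information: a ranked list of at-risk customers.
# Without an intervention (a call, a retention offer), nothing changes.
at_risk = df.assign(risk=model.predict_proba(X)[:, 1]).nlargest(1000, "risk")
```

The last line is the point of the anecdote: producing the list is the easy part, and designing the retention intervention is what creates value.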

Success in credit applications and combating fraud

In one case, an AI-based credit application scorecard resulted in a 42 per cent reduction in defaults for a South African bank, while also increasing business volumes by admitting creditworthy customers who would previously have been screened out. “The previous credit scorecard was blunt,” he said, noting that pre-AI credit paradigms had not changed in 80 years.
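As a purely hypothetical illustration of what a ‘blunt’ scorecard means in practice, a traditional points-based scorecard reduces every applicant to a handful of additive cut-offs; the thresholds and fields below are made up.

```python
# Hypothetical points-based scorecard: a few additive rules and one
# coarse cut-off. Thresholds and field names are invented.
def legacy_scorecard(applicant: dict) -> bool:
    score = 0
    score += 20 if applicant["income"] > 15_000 else 0
    score += 15 if applicant["years_employed"] > 2 else 0
    score -= 30 if applicant["prior_default"] else 0
    return score >= 25  # approve only above the cut-off

# A solid applicant is declined on income alone, despite a clean record;
# a model trained on actual repayment outcomes could admit them.
applicant = {"income": 12_000, "years_employed": 5, "prior_default": False}
print(legacy_scorecard(applicant))  # False: screened out by the blunt rules
```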

Another enormously exciting area for AI in financial services is fraud prevention and detection. Traditional rules for combating credit card fraud often fail because they are designed by ‘honest people who do not think like criminals’ and are infrequently updated. Furthermore, the sheer volume of transactions flagged as potentially fraudulent outstrips the capacity of financial institutions’ call centres to follow up on them.

By contrast, a machine trained to recognise fraud for a particular South African bank was able to detect 94 per cent of actual fraud cases, compared with only 20 per cent before.
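The talk did not specify the bank’s technique, but the contrast between a hand-written rule and a learned detector can be sketched as follows; the transaction columns, thresholds and contamination rate are all assumptions.

```python
# Hypothetical contrast: a static fraud rule versus a learned detector.
import pandas as pd
from sklearn.ensemble import IsolationForest

tx = pd.read_csv("transactions.csv")  # placeholder transaction log

# A typical hand-written rule: blunt, rarely updated, easy to evade.
rule_flags = (tx["amount"] > 10_000) & (tx["country"] != tx["home_country"])

# An anomaly detector instead learns what "normal" looks like from the
# data itself, so a corrupt insider cannot read off a fixed threshold.
cols = ["amount", "hour_of_day", "merchant_risk_score"]
model = IsolationForest(contamination=0.01, random_state=0).fit(tx[cols])
ml_flags = model.predict(tx[cols]) == -1  # -1 marks suspected anomalies

print(f"rule flags: {rule_flags.sum()}, model flags: {ml_flags.sum()}")
```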

What is more, Saksenberg said, machine learning models are unleakable, so even if an employee is in the pay of a fraud syndicate, they would not be able to figure out what rules the machine is applying. “Machine learning models are fabulously complex; they turn into black boxes. Even though I build these models, I do not know what the machine has learnt at the end of the day. That freaks out a lot of people but in the case of fraud, this is an amazing advantage.”

Why hallucination happens

He also explained the phenomenon of AI hallucination, referring to the generation of AI output that is incorrect or fabricated – as reportedly happened in South Africa recently when lawyers submitted what turned out to be fictitious AI-generated case law to the Gauteng High Court.

AI hallucination happens when a machine is instructed to provide an answer for which it has not been given information, Saksenberg said. Machine learning models have to give an answer if instructed to do so, he noted, adding that the best way to ensure a machine does not hallucinate an answer is to supply it with data directly from a source.
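A minimal sketch of that grounding idea, assuming an OpenAI-style chat API (the talk named no specific toolchain): the source document is pasted into the prompt and the model is told to answer only from it. The file name, model name and question below are placeholders.

```python
# Sketch of grounding: supply the source text in the prompt rather than
# letting the model answer from memory. Names below are placeholders.
from openai import OpenAI

client = OpenAI()
source_text = open("policy_document.txt").read()  # hypothetical source

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Answer ONLY from the document below. If the answer "
                    "is not in it, say you do not know.\n\n" + source_text},
        {"role": "user", "content": "What is the cancellation notice period?"},
    ],
)
print(response.choices[0].message.content)
```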

He spoke briefly about the risks of AI. “I don’t want to give you the impression that AI is all rainbows and unicorns; there are genuine risks,” he said, mentioning challenges such as agentic misalignment, privacy issues, skills shortages, job losses, model bias, lack of explainability, ethical challenges and market risks.

On the question of jobs, Saksenberg said jobs would be lost to AI. While new jobs would be created, these would not typically appear at the same time as the old ones disappeared, nor would they necessarily be the same kinds of jobs.

Despite the challenges, it would be ‘inconceivable’ to have businesses that were not focused on AI going forward, he said. “Anyone not willing to embrace AI is going to be a relic of the past.”

Dealing with resistance

Even so, large financial institutions in particular face significant challenges in introducing AI within their organisations, including the phenomenon of ‘corporate antibodies’: when confronted with anything new, there are always people who ‘cluster around and do their best to destroy it’, Saksenberg explained.

In his experience, the most effective way to introduce AI in a financial institution is to carve out a dedicated team consisting of business decision-makers, data decision-makers and operational staff responsible for implementation. Starting with the low-hanging fruit, the team should incubate the innovation, release it into the organisation and champion its successes.

Saksenberg warned that it is vital to obtain the support of employees who will be expected to implement the work. “You can’t preach from on high that the COO wants to implement this. You must have the buy-in of the people on the ground. You need to take people along.”
