
The rising role and risks of generative AI

Generative AI, a rapidly advancing form of deep learning that mimics human output, raises concerns about the verifiability of its results, says Nicholas Davis, industry professor of emerging technology at UTS.

Appearing at the Stockbrokers and Investment Advisers Association (SIAA) conference, he explained that generative AI uses complex transformer models to process vast amounts of diverse data. While these models are refined with human feedback to create convincing outputs, they are not inherently designed to produce verifiable truths.

Davis noted that while the public's focus is often on AI in client-facing applications like chatbots, which he considers "fluffy," he believes there's "massive ongoing deepening" in back-office functions such as treasury and clearing.

"When it comes to generative AI, you may have noticed from today's discussions and previous conferences, including markets like the UK, that many institutions are still in the early stages of development or deployment. But if your institution hasn't yet built or rolled out these models, that's probably a good thing in terms of latency and understanding the risk profile of this," he said.

Despite standout examples of early adoption, such as Microsoft's investment in OpenAI, the public remains sceptical about AI's benefits. A survey by KPMG Australia and the University of Queensland, involving 17,000 people across 17 countries, found that Australians are among the most cautious regarding AI usage. For many, this mistrust outweighs the perceived benefits, leading to significant risk aversion among the public and clients.

So, when working with clients, it's crucial to understand the segments, counterparties, and stakeholders involved, he said. Firms should be clear in their terms and conditions when introducing new AI tools to avoid relationship issues and surprises, and when announcing new AI tools, they should be mindful of messaging and target audience, emphasising governance.

Currently, 71% of respondents believe specific AI regulation is necessary, though Davis thinks 80% of the market's challenges are already addressed by existing regulatory powers. Therefore, it's important to communicate clearly about the use and governance of AI to meet market expectations and build trust, he said.

However, "there's probably only about a 5% gap here for actual new legislation," he said.
