Nearly half of the licensees in ASIC’s review did not have AI policies in place that considered fairness, inclusivity, or accessibility.
The Australian Securities and Investments Commission (ASIC) has urged credit licensees and financial services businesses to do more to ensure their governance practices are keeping pace with the accelerating adoption of artificial intelligence (AI).
These calls follow the release of ASIC’s first state-of-the-market report on the topic, Beware the gap: Governance arrangements in the face of AI innovation, which saw the regulator review the use and adoption of AI by 23 licensees.
While noting that current use of AI “remained relatively cautious”, ASIC’s review found that governance practices had the potential to lag behind AI adoption.
According to ASIC, only 12 of the 23 licensees (just over half) had AI policies in place that referenced fairness or related concepts such as inclusivity and accessibility.
The regulator said it was concerned that not all licensees were well positioned to manage the challenges of expanding AI usage, with some licensees “updating their governance arrangements at the same time as increasing their use of AI”.
Joe Longo, ASIC chair, said updating governance frameworks and planning for the future use of AI was crucial to ensuring the challenges posed by the technology could be met.
“Our review shows AI use by the licensees has to date focused predominantly on supporting human decisions and improving efficiencies. However, the volume of AI use is accelerating rapidly, with around 60 per cent of licensees intending to ramp up AI usage, which could change the way AI impacts consumers,” Longo said.
“It is clear that work needs to be done – and quickly – to ensure governance is adequate for the potential surge in consumer-facing AI.
“When it comes to balancing innovation with the responsible, safe and ethical use of AI, there is the potential for a governance gap – one that risks widening if AI adoption outpaces governance in response to competitive pressures.
“Without appropriate governance, we risk seeing misinformation, unintended discrimination or bias, manipulation of consumer sentiment and data security and privacy failures, all of which has the potential to cause consumer harm and damage to market confidence.
“Existing consumer protection provisions, director duties and licensee obligations put the onus on institutions to ensure they have appropriate governance frameworks and compliance measures in place to deal with the use of new technologies. This includes proper and ongoing due diligence to mitigate third-party AI supplier risk.
“We want to see licensees harness the potential for AI in a safe and responsible manner – one that benefits consumers and financial markets. This can only happen if adequate governance arrangements are in place before AI is deployed.”
Considered approach needed
In a recent opinion piece for The Adviser, Theo Hourmouzis, vice-president for Australia and New Zealand at data cloud applications platform and AI services provider Snowflake, said care was needed to ensure an AI strategy was implemented successfully.
“An AI strategy should only be implemented alongside an exceptionally strong and unified data strategy,” Hourmouzis said.
“In the financial services industry, artificial intelligence has the potential to completely reinvent the sector.
“While financial organisations have the most to gain from a considered and strategic AI implementation, they also have the most to lose if projects are pursued without a strong data strategy in place.”
[Related: Government launches consultation on AI and consumer law]