Is regulation the answer to closing the risk gap? | Nexus


Balancing AI’s rapid innovation with ethical oversight
22 January 2026
Annette Mcilroy

00;00;01;11 - 00;00;04;08

Nexus, published by GHD. Where ideas connect.

00;00;04;08 - 00;00;05;27

Where ideas connect.

00;00;08;17 - 00;00;10;09

We have Annette Mcilroy,

00;00;10;12 - 00;00;13;26

executive advisor

with our risk advisory business at GHD.

00;00;14;01 - 00;00;15;04

Joining us today

00;00;15;11 - 00;00;19;07

Annette will be sharing her insights

on risks in AI.

00;00;19;23 - 00;00;23;19

Annette, with AI risks continuing

to dominate boardroom discussions,

00;00;24;01 - 00;00;29;05

What would you say is the current state

of the risk landscape in 2025?

00;00;29;12 - 00;00;32;10

For me, the current state of risk,

00;00;32;13 - 00;00;37;29

I think it would be termed transformation

and perhaps to turn it on its head

00;00;38;02 - 00;00;41;16

a little bit and say it's

the current state of possibility.

00;00;41;29 - 00;00;45;28

We're seeing that

AI is transforming various sectors.

00;00;46;07 - 00;00;50;14

And in the financial sector, for example,

we're seeing fraud detection

00;00;50;27 - 00;00;54;04

improving through the application of AI.

00;00;54;14 - 00;00;56;18

Another example is in healthcare.

00;00;56;18 - 00;01;00;27

The Royal Melbourne

Hospital is using AI to improve outcomes

00;01;01;00 - 00;01;04;13

for patients by tracking patient metrics,

00;01;04;26 - 00;01;07;14

and in agriculture, AI

00;01;07;14 - 00;01;11;07

is optimizing yields of crops

by analyzing various

00;01;11;10 - 00;01;15;03

data like soil condition, weather patterns

and crop health

00;01;15;15 - 00;01;19;24

to inform decisions

around their products in the farming sector.

00;01;20;04 - 00;01;22;27

So rapid transformation is absolutely

00;01;22;27 - 00;01;25;27

the characteristic of the current state.

00;01;26;00 - 00;01;30;06

We have another example of AI in decision

making.

00;01;30;19 - 00;01;35;29

Artificial intelligence can take

a lot of data and process it very quickly.

00;01;36;12 - 00;01;39;17

And I don't know whether you've

heard of this in the mining sector,

00;01;39;27 - 00;01;43;05

but Bill Gates is backing

a company called KoBold Metals,

00;01;43;15 - 00;01;46;12

and they're using artificial intelligence

to create

00;01;46;15 - 00;01;49;23

detailed maps

to locate valuable resources.

00;01;49;26 - 00;01;53;18

And they've actually found,

quite substantial copper deposits

00;01;53;26 - 00;01;55;17

using this approach.

00;01;55;20 - 00;01;59;12

So we can see that

it's transforming many areas.

00;02;00;07 - 00;02;03;14

And with all of those benefits

and with the transformation

00;02;03;17 - 00;02;07;09

that's occurring,

there have to be some challenges in

00;02;07;12 - 00;02;13;00

how organizations need to think about

how they manage risks associated with AI.

00;02;13;03 - 00;02;16;22

Do you have anything that you can share

with us around what you've seen

00;02;16;25 - 00;02;18;18

and what you've heard organizations

00;02;18;18 - 00;02;21;20

doing at the moment

in terms of managing AI risks?

00;02;22;08 - 00;02;23;10

I can, look.

00;02;23;10 - 00;02;28;17

There certainly are challenges

that come with AI technology

00;02;28;20 - 00;02;32;19

and one of them is automating decision

making.

00;02;32;25 - 00;02;36;25

There's a lot of danger

that can come from that process.

00;02;37;08 - 00;02;41;11

And an example of that in Australia

was Robodebt, where the income

00;02;41;14 - 00;02;45;07

and welfare payments

were incorrectly allocated,

00;02;45;10 - 00;02;50;00

and it traumatized a lot of people

with the follow up of getting the debts

00;02;50;03 - 00;02;53;03

back, which they actually didn't owe

in the first place.

00;02;53;10 - 00;02;55;12

The other area is ownership.

00;02;55;12 - 00;02;58;21

Who owns the outputs

of artificial intelligence,

00;02;59;06 - 00;03;03;14

and currently in Australia

we have various acts

00;03;03;19 - 00;03;07;02

and regulations

that manage various types of data.

00;03;07;13 - 00;03;10;28

So we have the Patent Act to manage

innovations,

00;03;11;01 - 00;03;14;06

the Copyright Act,

which manages general data.

00;03;14;18 - 00;03;18;08

We've got the Privacy Act

that manages personal data.

00;03;18;22 - 00;03;21;19

And we've also got laws

like the Australian Consumer

00;03;21;19 - 00;03;25;04

Law that manage the interests

of our consumers.

00;03;25;15 - 00;03;30;00

So at the moment we're relying on

those laws that weren't really designed

00;03;30;03 - 00;03;33;03

for artificial intelligence

to help us manage

00;03;33;06 - 00;03;37;16

the negative consequences

of the misuse of data.

00;03;38;06 - 00;03;41;07

And then we look to ethical

considerations.

00;03;41;17 - 00;03;45;22

We want to make sure

that when we're using the data in the

00;03;45;24 - 00;03;49;28

AI process, that we're not creating

bias in the outputs.

00;03;50;08 - 00;03;52;27

There's a term called human in the loop,

00;03;53;00 - 00;03;56;08

and it's all about

balancing human judgment

00;03;56;11 - 00;03;59;15

and the benefits

of artificial intelligence

00;03;59;18 - 00;04;02;22

so that we don't compromise

ethical oversight

00;04;02;25 - 00;04;07;02

or we don't compromise our strategic

thinking in the boardroom,

00;04;07;09 - 00;04;12;07

but we benefit from the speed

and the innovations of AI.

00;04;12;25 - 00;04;16;00

So it's very much this concept
of being AI-assisted.

00;04;16;03 - 00;04;18;28

Still having that human in the loop,

00;04;18;28 - 00;04;22;25

as such, to ensure

that there is some sort of oversight.

00;04;22;28 - 00;04;26;20

And it's not purely 100% AI driven.

00;04;27;01 - 00;04;29;06

Is that what you're saying there?

00;04;29;09 - 00;04;30;09

Absolutely.

00;04;30;09 - 00;04;34;05

And there's various types

of artificial intelligence,

00;04;34;16 - 00;04;37;25

some of which you can track

the transformation

00;04;37;28 - 00;04;41;06

of the data

to the output more easily than others.

00;04;41;09 - 00;04;47;11

And other types of AI, called generative

AI, generate completely new data.

00;04;47;14 - 00;04;51;16

So having a human in the loop

allows you to check

00;04;51;19 - 00;04;55;18

and make sure that the output

is what you intended,

00;04;55;21 - 00;05;00;25

and when you look at the evolution

of some of the laws around the world, now

00;05;01;06 - 00;05;05;10

you can see that

various countries are developing laws

00;05;05;22 - 00;05;08;21

to, for example, sustain their own culture

00;05;08;21 - 00;05;11;20

to prioritize what they value.

00;05;11;28 - 00;05;15;10

And some areas of the world value ethics.

00;05;15;13 - 00;05;19;11

Some areas of the world

value state values,

00;05;19;14 - 00;05;23;22

some areas of the world

value technology and control.

00;05;23;25 - 00;05;27;00

So there's bias in the laws

00;05;27;03 - 00;05;30;03

to manage those areas.

00;05;30;17 - 00;05;32;08

And if I can come back to a couple

00;05;32;08 - 00;05;35;28

of points that you raised earlier

around the laws that do exist,

00;05;36;01 - 00;05;39;23

it sounds like there are laws,

00;05;39;26 - 00;05;46;10

acts, legislation out there

that link to AI from a governance

00;05;46;13 - 00;05;51;21

and risk perspective,

but nothing that's very specific to AI.

00;05;52;04 - 00;05;56;24

Would you say that that's the case

globally, or are there particular regions

00;05;57;09 - 00;06;00;03

that you may be aware of that have progressed

00;06;00;06 - 00;06;02;25

to ensure that their acts,

their laws, their legislation,

00;06;02;25 - 00;06;07;27

their policies are aligned

to where AI is heading, at the speed

00;06;08;00 - 00;06;12;26

that it's developing in terms

of how fast AI is actually accelerating.

00;06;13;27 - 00;06;16;01

Yeah, it's a really great question.

00;06;16;01 - 00;06;19;13

Everybody is evolving in this space,

00;06;19;26 - 00;06;24;29

and there's usually a sequence of bodies

that write

00;06;25;02 - 00;06;28;24

best practice papers that inform and trial

00;06;28;27 - 00;06;33;26

what will actually go into the acts,

and then the enforcement of the acts.

00;06;34;09 - 00;06;37;05

The EU have gone the path

00;06;37;08 - 00;06;40;19

to developing

the EU Artificial Intelligence Act,

00;06;40;29 - 00;06;44;16

and that's the only comprehensive

AI legislation we have in the world.

00;06;45;05 - 00;06;48;20

The interesting thing about

that is the EU.

00;06;48;23 - 00;06;51;24

While it's a law for the European Union,

00;06;52;03 - 00;06;56;17

the world up until perhaps very recently

looked at the EU

00;06;56;20 - 00;07;00;24

and followed the EU

with the set of ethics.

00;07;01;08 - 00;07;06;12

So many countries will adopt

that law into their own law.

00;07;06;15 - 00;07;10;14

And certainly I think in Australia,

we're leveraging heavily

00;07;10;17 - 00;07;14;27

on the good work that the European Union

have done in that space.

00;07;15;14 - 00;07;20;03

Elsewhere in the world

there are really targeted regulations.

00;07;20;06 - 00;07;24;06

So we have a regulation,

for example, around protecting consumers.

00;07;24;09 - 00;07;28;24

So there's specific

AI regulations around the world

00;07;29;09 - 00;07;32;15

that are targeted

and not a broad policy statement.

00;07;32;18 - 00;07;37;19

So that's the journey that we're on to get

that broad policy statement that covers

00;07;38;00 - 00;07;42;10

many, many aspects of

AI, not just a narrow section.

00;07;43;09 - 00;07;44;07

In Australia,

00;07;44;07 - 00;07;48;07

in the work that we're progressing

at the moment in that space,

00;07;48;19 - 00;07;52;26

is there an opportunity for Australia

to collaborate

00;07;52;29 - 00;07;56;03

in addressing potential AI risks?

00;07;57;13 - 00;08;00;05

Yeah, look, there

certainly is a role for us

00;08;00;06 - 00;08;04;06

as Australians,

contributing to the global landscape.

00;08;04;09 - 00;08;06;07

And we are already doing this.

00;08;06;07 - 00;08;10;21

There are organizations like

the International Organization for Standardization (ISO),

00;08;10;29 - 00;08;14;01

and they've developed

a number of standards around

00;08;14;07 - 00;08;17;06

various aspects

of artificial intelligence.

00;08;17;09 - 00;08;21;00

And Australia has representation on that.

00;08;21;10 - 00;08;24;23

And two of the representatives

come from GHD, in fact.

00;08;24;26 - 00;08;29;03

So I've had a wonderful insight

into that process.

00;08;29;18 - 00;08;34;19

There's other bodies, like the Institute

of Electrical and Electronic Engineers,

00;08;34;22 - 00;08;38;12

and they've been putting together

standards on machine learning

00;08;38;15 - 00;08;41;12

algorithms and data usage related to AI.

00;08;41;15 - 00;08;45;01

So there's those bodies that facilitate

00;08;45;04 - 00;08;48;04

the collaboration in the global space.

00;08;48;13 - 00;08;52;05

So Annette, this is one of our favorite

questions.

00;08;52;10 - 00;08;55;16

If we had $10 billion today

00;08;56;00 - 00;09;01;04

to hand over to yourself,

what would you do with that $10 billion?

00;09;01;20 - 00;09;05;24

What would be the top three things within the

AI governance space?

00;09;06;05 - 00;09;08;10

How would you spend that money?

00;09;08;13 - 00;09;11;12

Look, $10 billion is a lot of money

00;09;11;15 - 00;09;14;20

and a small amount of money,

and everyone says that.

00;09;16;05 - 00;09;16;22

Just as a

00;09;16;22 - 00;09;22;11

benchmark, though, the United States

have committed 500 billion for AI.

00;09;22;22 - 00;09;28;04

But interestingly,

a key R&D group in the US

00;09;28;15 - 00;09;34;00

has a budget of 5 billion,

and that group is called DARPA.

00;09;34;03 - 00;09;39;02

And one of the areas that I would

allocate to is research and development.

00;09;39;15 - 00;09;43;29

And this group is

very interesting, because DARPA stands

00;09;44;02 - 00;09;50;07

for Defense Advanced Research Projects

Agency, and it was created in 1958.

00;09;50;10 - 00;09;54;03

And The Economist calls it

the agency that shaped the modern world.

00;09;54;15 - 00;09;59;11

And the reason is that that agency

funded the development of Moderna's

00;09;59;23 - 00;10;00;26

Covid vaccine.

00;10;00;26 - 00;10;04;16

But they also developed the internet

and personal computers.

00;10;05;01 - 00;10;08;01

And then the industry

sort of took advantage of that.

00;10;08;13 - 00;10;12;14

So leveraging

and working with organizations like that

00;10;12;24 - 00;10;16;02

to improve the ethics

and safety of artificial intelligence.

00;10;16;15 - 00;10;18;18

Another area is education.

00;10;18;18 - 00;10;23;22

And that could be improving education

outcomes through artificial intelligence,

00;10;24;06 - 00;10;27;11

enhanced tutoring, or adaptive learning

00;10;27;14 - 00;10;30;15

so that people can learn

at their own speed, faster or slower.

00;10;31;03 - 00;10;34;11

And I think the final area

is the public sector.

00;10;34;25 - 00;10;39;19

And in this sector,

an example is in the UK,

00;10;39;22 - 00;10;43;19

where the National Health Service

is using artificial intelligence

00;10;43;22 - 00;10;47;29

to analyze images

to improve patient outcomes.

00;10;48;10 - 00;10;52;23

So I think that a balanced

approach is what is needed.

00;10;53;07 - 00;10;57;14

And I think it's about possibility,

not only productivity.

00;10;58;12 - 00;11;02;09

I feel like we've only just skimmed

the surface of the topic.

00;11;02;14 - 00;11;05;21

Annette,

thank you so much for joining us today.

00;11;06;04 - 00;11;07;13

Thank you, Sharon.

00;11;07;16 - 00;11;11;23

Annette Mcilroy, our executive advisor

from the Risk Advisory

00;11;11;26 - 00;11;16;17

team here in Australia,

sharing her insights on risks in AI.

00;11;21;07 - 00;11;24;07

Brought to you by Nexus, published by GHD.

00;11;24;15 - 00;11;25;19

Where ideas connect.

From breakthroughs in healthcare and cybersecurity to the billion-dollar question of ethical AI governance, the landscape is constantly evolving. In this podcast episode, join Annette Mcilroy (GHD) as she breaks down key challenges like decision-making automation, data ownership and the scalability of AI benefits.

Catch up on:

  • How organisations are balancing rapid innovation with ethical oversight
  • The state of AI legislation globally
  • Australia's role in shaping international governance standards in AI

Whether you're a leader, innovator or curious about the future of AI, this discussion offers insights and practical strategies to help you seize possibilities while managing risks.
