On Wednesday 22 April 2026, Prof Tony Haymet delivered a speech at the Centre for AI and Digital Ethics in Melbourne.

Good evening! And thank you for the warm welcome.

I begin by acknowledging the traditional owners of the lands on which we meet, the Wurundjeri people of the Kulin Nation.

As Australia’s Chief Scientist, I am pleased to support the wider recognition of Indigenous knowledge systems – systems with some 65,000 years of understanding to offer.

This centre, by contrast, is still young. But in a short time, it has lived through a remarkable period of change.

Three chief scientists have been with you at key moments on that journey. Alan Finkel spoke at your launch, urging careful reflection on human rights in the digital age. But not long after, much of the world effectively shut its doors – as Covid arrived and Melbourne fell silent.

During that period, Cathy Foley addressed you by video – was there any other kind in 2021? – highlighting both AI’s role in responding to the pandemic and the dangers this technology can pose to individuals and communities.

And now in 2026, here I am – post-Covid and very much in the flesh!

I am delighted to be part of this relaunch and pleased that renewed funding will support the continuation of the centre’s work. Because this work is more vital than ever.

When this centre was set up in the early 2020s, few could have anticipated the extraordinary speed, reach and impact of AI in the second half of this decade. In truth, no one I knew did.

You began your work in a moment of global uncertainty. You are relaunching in another – one driven not by a virus, but by exponential and unprecedented digital change.

From the outset, CAIDE set itself apart by refusing a narrow view of AI. You treated it not just as a tool but as a social system – shaped by law, culture, institutions, human behaviours and market economics. A complex task!

Over time you’ve built a strong national and international presence – delivering serious scholarship and making practical contributions to public policy.

Importantly, your work is grounded in how AI shapes our day-to-day lives, and how it will shape the years ahead.

Like the technology itself, your influence has extended well beyond the university gates – across governance, health, education, equity and philosophy: domains where AI was already making itself felt, and where your expertise proved invaluable.

Today, complex AI systems are deployed at scale and embedded in institutions Australians rely on every day.

From hospitals to classrooms, from carparks to courthouses, from fintech to weather prediction, AI is like electricity: a largely unseen force, but profoundly influential.

And because of that reach and impact, the ethical questions surrounding AI are more consequential than ever.

You work in territory where multiple forces are at play. Where law meets digital engineering. Where design meets regulation.

Where ethical principles collide with institutional and market constraints. That can make progress slow, contested and difficult. But in the world of AI, we are all in new territory, all feeling our way together.

The Government has set out a framework for responding to these changes through the National AI Plan.

It provides a coordinated approach across government, industry, research and the community – with the aim that all Australians benefit from the AI opportunity. 

Within that framework, there is an essential role for institutions like this one. They help to inform policy, strengthen capability, and support the responsible development and adoption of AI.

***

Centres like this work best when their influence extends beyond a single discipline and well beyond the university itself. So, I’m pleased to acknowledge the role you’ve played in building capacity across this institution and throughout the wider community.

In ways big and small, all of us are now required to reflect on ethical issues regarding AI – whether in our workplaces or in our smartphone apps. This centre has helped Australians do just that.

Indeed, you’ve treated engagement not as an accessory but as a core responsibility. Through lectures, events, accessible writing, and media commentary, CAIDE has kept AI ethics in public view. Not as something to panic about, but as a shared concern in a civil society that is also a deeply digital society.

Your professional programs have reached lawyers, clinicians, public servants and industry leaders – people already making real decisions about AI, often under pressure.

Many organisations are understandably uncertain about how to use AI. They see opportunity but also risk – and those elements are moving together, intertwined, and at great speed.

The good news is that ethical capability provides a way forward.

Strong ethical frameworks do not slow innovation. Instead, they can reduce risk, enable responsible experimentation, and build confidence. Most importantly, they foster public trust in the technologies that now underpin daily life.

So, for me, the renewal of your funding is a signal. A signal that ethical capabilities are now part of public and private infrastructure – and part of the skillsets that we need in this century.

When it comes to new technologies, and AI in particular, the question is no longer whether to think about ethics. It is how to devise ethical frameworks, embed them properly, and sustain them over the long term – and at scale.

CAIDE’s work helps Australia take part in these global AI governance conversations, rather than importing rules written elsewhere.

Our influence in AI won’t come from scale alone, but from how systems are designed and governed in democratic, pluralistic societies.

Ultimately, AI ethics is not about machines – it’s about people. That’s why the stakes are so high and the ethical questions so important.

AI intersects with our lives in very personal ways. Across employment, health, education, parenting, dating – and the list goes on. You name it, AI’s probably going to influence it.

This means questions – about power, inclusion, harm, privacy and accountability – are not theoretical.

CAIDE’s work keeps these questions in view and helps to inform decision-making that strengthens our civil society.

That matters deeply because AI does not affect people equally. Its benefits can arrive with an early and impressive flourish. But its harms can spread more quietly.

Regional Australians. First Nations communities. People with disability. Workers in severely affected industries. These are just some of the groups that can be disproportionately impacted.

So, ethical AI is not a matter of efficiency. It is about the fair go.

And yet this is not a story of risk alone. Indeed, I am so excited to be living through this AI revolution and its transformative impact on society – and science.

Consider AlphaFold – an AI system that solved a long-standing biological challenge: predicting protein structures. This system is accelerating drug discovery and has profound implications for medicine.  

And so too does the work of Australian researchers who are using AI to improve radiology services. When it comes to chest X-rays, this technology can detect up to 124 findings in under 20 seconds, which enhances the capacity of medical staff.

But alongside opportunity come real risks: workplace disruption, surveillance, bias, loss of privacy. And a wave of misinformation turbo-charged by AI.

Australia’s top science advisory body – the National Science and Technology Council – has recently published reports on misinformation. As its Executive Officer, I’ll touch briefly on some of the findings.

Research shows humans are naturally vulnerable to misinformation. We are more likely to believe claims that feel familiar, emotionally charged, or aligned with our existing views. Repetition reinforces credibility, even when the information is false.

And misinformation is not just an information problem. It is also linked to individual wellbeing, mental health and social connection.

***

Australia, like many countries, is focused on AI governance – from national planning to the establishment of an AI Safety Institute that helps our society to navigate these changing times.

CAIDE belongs in this complex and challenging ecosystem. Not as a regulator, but as an institution that helps us all think more clearly, act responsibly, and stay accountable as the ground shifts beneath us – and beneath the generations that will follow.

***

I spoke earlier about the infancy of this centre during Covid. Well, the infants of today will live with the consequences of how AI is embedded in this decade.

This technology may be one of the most socially disruptive we have faced. But it’s also an extraordinary opportunity.

That is why ethics must not be an afterthought ... a bolt-on once systems are deployed.

Ethical considerations must be embedded at the point of decision-making, again and again, as this technology evolves.

And that is why this relaunch matters.

Not because the work is finished …

… but because, in many respects, it has only just begun.

Thank you.