My right to be me: Navigating the landscape of artificial intelligence and digital technologies
Thank you vice-chancellor Professor Maskell.
It’s a great honour to be giving this oration.
First, I would like to acknowledge the Cammeraygal people, on whose traditional lands I am speaking. I pay my respects to them and to the traditional custodians of the other lands where you all are based.
I acknowledge the elders who are caring for those lands. I pay my respects to the old ones who have come before and the young ones who will follow.
I understand I was supposed to be speaking to you back in April, but gosh, so much has transpired!
No matter where we are, it’s a very worrying time for so many people, especially those of you with vulnerable family and friends.
I have spent many, many weeks in lockdown at my home in Sydney, working remotely with my team in Canberra, and I know how difficult it is. It’s come to feel quite surreal for me … here at this desk … at home.
I’m sorry I’m not with you in person, but I’m pleased you have been able to go ahead online.
And let me start by saying that I wholeheartedly endorse your cross-disciplinary focus on what really is an urgent issue.
It’s no secret that I’m a great advocate for the development of digital technologies.
Just look at the role artificial intelligence has played in the COVID response.
AI helped sound the international warning bell at the start of the pandemic; and has helped track its spread and forecast the development of mutating strains.
AI played an important role in finding and testing vaccines and therapies, which made the process much faster.
AI has massive potential across the medical field.
It also has potential in many other aspects of our lives, from planning and managing our cities, to automation in transport, to wrangling huge datasets that are beyond the capacity of individual scientists.
It’s not all just: Hey Siri. Please Send A Text To. My Husband.
I’m here to talk about some of the challenges that AI presents. But before I do, I want to touch on a new set of digital technologies that are stampeding over the horizon. That is quantum technologies. And as Australia’s Chief Scientist, I’ve made this a top priority.
It’s a really exciting avenue of research and innovation.
Australia has enormous potential.
In quantum computing, we have some of the world’s top researchers and developers.
Quantum computing is every bit as peculiar as you will have heard. It’s also every bit as powerful.
Because it allows us to simulate the world at the tiniest scales, it will be nothing short of a game changer in medicine and in many other fields.
Quantum computers will be able to build completely new molecules. Then test and simulate their effects in the virtual realm.
This means new therapies. It means new materials. New catalysts, for example, to split water more efficiently to release hydrogen, which is a bit of a holy grail in the shift to a low emissions economy.
It means the ability to simulate systems far too complex for conventional computing, such as climate systems.
Quantum technologies are not limited to computing.
Quantum sensing and imaging also opens up exciting opportunities.
Quantum sensing will transform our ability to map the earth beneath our feet, and the oceans – which are largely invisible to conventional mapping techniques. It doesn’t take a huge leap of imagination to see how important that is for Australia.
In my role as Chief Scientist, I’m taking every opportunity to urge Australian policymakers, educators, and industry leaders to embrace the new digital revolution and stay with the leading pack.
As you are aware, you don’t need to look far for trip wires.
I am very conscious of the risks and the points of vulnerability.
It’s important that, as a nation, we’re clear about these.
I’m sure each of you has a phone within reach right now.
And I’m sure most of them are loaded with apps.
My family all use “Find My Friends”.
It’s a location app, and it’s a good safety tool.
I use it to check up on my adult kids.
When I look now, they’re mostly at home like all of us. But when we were allowed to roam more widely, I’d see my son was walking along the Corso in Manly, or my daughter was somewhere I hadn’t expected.
And I'd be thinking, why aren't you at work?
They know I do it. But to be honest, I feel a bit conflicted. It’s too easy just to click and check.
My husband uses a fitness app to record his runs.
Fitness apps are likely to become more useful for monitoring our health.
This is information I'd be happy for my phone to share with my doctor.
But I'd be appalled if it became available to my insurer. Or if an employer used the data to make judgements about my fitness or my eating habits.
Sometimes all of this begins to feel like the electronic version of going through the neighbour’s rubbish bin!
We’re playing catch-up with these technologies.
New tech has this dual personality. It arrives so fast – I always find it hard to believe the iPhone is not even 15 years old!
But at the same time, it also creeps up on us.
We get excited by what seems like a fun new app, device or platform – but we fail to take that leap to imagine how else it could be used or what it might become.
When it comes to social media, we’re seeing the result in online bullying, fake news and issues with election integrity, problems with ownership of data and privacy.
We didn’t get the checks and balances in place first.
All around us there is amplification of disinformation and conspiracy – and where once we might have been able to shake our heads and ignore a crazy idea, those crazy ideas now move more quickly and grow more quickly than our power to combat them.
This isn’t just a great mass of misinformation that exists in the electronic world.
A pile of unfiltered rubbish in someone’s virtual backyard.
This impacts the real world.
I read an article recently by a Sydney GP exhausted by trying to counter misinformation and conspiracy theories with evidence and argument. It doesn’t work when people are used to having their views reinforced and strengthened the more they express them online.
So social media and the current generation of mobile applications have given us a taste of the dangers.
But the new digital technologies, AI, machine learning, and quantum, will amplify the risks.
I don’t want to give the impression that I view it as all risk and no reward. As I said at the outset, I am an advocate of digital technologies, including AI. And it is clear that models are enormously useful tools. Modelling has been powerful in the pandemic.
But when AI is used to model and predict our behaviour, and then is used to make decisions about the way we are treated – whether that be employment decisions, or banking decisions, or other areas of our lives, we need to tread carefully indeed.
I’m frankly horrified when I see AI being used to interview job candidates.
Not sort applications … but actually conduct interviews.
I see this as yet another avenue for uneven treatment of different groups in our society.
In hiring, it's the bulk of lower-skilled jobs where decisions will be made by AI.
The well-off and the well-connected aren't employed by algorithms, because personal contacts are the currency of the rich.
It should not be the case that an artificial intelligence program conducts job interviews.
From where I stand, that is an easy judgement to make. But not all of our decision-making is that clear-cut.
Consider the use of AI to predict who might reoffend in domestic violence situations. I read a recent study suggesting that machine learning might be superior to human predictions in this field. For police, it allows them to be proactive and engage early in situations of potential domestic violence.
But of course the dangers are obvious and they don’t need spelling out.
Attention to bias in online profiling and deep learning is critical if we are to have the social licence to use this kind of technology.
It’s not immediately clear to me whether the problems can be satisfactorily resolved for this use of AI.
I mention this example not because I have firm advice on the particular question.
I mention it because it focuses the mind quite sharply on those questions of risk and reward.
As I said, models can be useful, but as we know, any model is only as good as the information fed into it.
The data used to train AI algorithms is often incomplete. It’s based on the way things have been, not necessarily the way they are or will be. It’s based on limited numbers. Approximations.
The Human Rights Commission has considered how bias in algorithms can arise in the commercial world – where AI systems use incomplete and historical datasets for modelling the creditworthiness of customers. Unsurprisingly, women, Indigenous people and young people are most likely to bear the brunt of the built-in biases.
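To make that concrete, here is a toy sketch of the mechanism. All the names and figures are invented for illustration – this is not the Commission’s analysis, just a minimal demonstration that a naive model trained on skewed historical decisions simply learns and reproduces the skew:

```python
# Toy illustration (invented data): a naive "model" trained on historical
# lending decisions learns nothing more than the past approval rate for
# each group, so any historical bias is carried straight into its
# predictions about new applicants.

historical_decisions = {
    # group: (applications, approvals) -- figures are made up
    "group_a": (1000, 700),
    "group_b": (1000, 350),
}

def train_naive_model(history):
    """Learn one approval probability per group from past decisions."""
    return {
        group: approvals / applications
        for group, (applications, approvals) in history.items()
    }

model = train_naive_model(historical_decisions)

# Two otherwise identical applicants, differing only in group membership,
# receive very different predicted approval probabilities:
print(model["group_a"])  # 0.7
print(model["group_b"])  # 0.35
```

Real credit-scoring systems are far more sophisticated than this, but the underlying point holds: if the historical record is biased, a model fitted to it will be too.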
It’s imperative that those approximations and historical patterns don’t come to define the way we live our lives.
We’ve heard a lot about the right to be forgotten. I also want our digital models to respect my right to be me.
Actual, unique me. Not an approximation of me filled with assumptions. Not the average or most likely version of me.
This is really tricky.
The issues and the solutions are by no means simple.
But they will be most effective if they are framed within a clear set of principles, such as those set out in the Government’s AI Ethics Framework.
There are three that I want to bring to your attention tonight.
I mentioned my right to be me.
That is my first principle. The first step to achieving it is ensuring that our digital workforce is diverse in culture, sex, gender, age and life experiences.
I’ll mention two more principles:
One, transparency in the algorithms and data that underpin AI, and the situations in which it is deployed.
And second, accountability.
This is especially difficult in the case of deep learning where we don’t even have visibility of the way algorithms are working, or the data they are bringing to bear on the question.
At this level, the AI system is creating its own models by harvesting information widely and then using a process of simplification and grouping to reduce the data points.
This work going on under the surface is not easily discoverable, which makes it all the more difficult to control.
So the mechanisms of accountability are not straightforward. But I’m keen to hear more about the possibilities for auditing of algorithms and public disclosure of their assumptions.
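As one hypothetical example of what such an audit might compute, here is a sketch of a simple fairness metric sometimes called the disparate impact ratio: the selection rate for each group divided by the rate for the most-favoured group, with a common rule of thumb flagging ratios below 0.8. The data and group names are invented, and a real audit would be far more thorough:

```python
# Hypothetical audit sketch (invented data): compare selection rates
# across groups and flag large disparities. A widely used rule of thumb
# treats a ratio below 0.8 as warranting review.

def selection_rate(outcomes):
    """Fraction of positive decisions (1 = selected, 0 = not)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratios(outcomes_by_group):
    """Each group's selection rate relative to the most-favoured group."""
    rates = {g: selection_rate(o) for g, o in outcomes_by_group.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Made-up decisions from a system under audit:
audit_data = {
    "group_a": [1, 1, 1, 1, 0],  # 80% selected
    "group_b": [1, 0, 0, 0, 0],  # 20% selected
}

ratios = disparate_impact_ratios(audit_data)
print(ratios["group_b"])  # 0.25 -> well below 0.8, flag for review
```

A metric like this only scratches the surface, of course – but it shows that algorithmic auditing can be made concrete and reportable, which is exactly what public disclosure of assumptions requires.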
If we get transparency and accountability right, and improve diversity in the sector, we will be making important steps towards removing unfairness and bias.
Ensuring that we don’t get HAL taking over the cockpit.
Of course, it is one thing to have a set of principles. It is quite another to develop guidelines and tools to put them into effect. This is the next frontier.
There are many other complex issues:
• Cultural differences in ethical decision-making.
• Ownership of AI-invented systems or AI-conducted research. Who does own the intellectual property?
• What about artificial intelligence that creates its own AI? This was a question that I think Star Trek tackled 30 years ago.
Futuristic, and perhaps also prescient!
Now, when I talk about the challenges, I am not telling you anything you don’t already know.
What I can tell you is that the conversation is well and truly underway at multiple levels, and I’m pleased to see the momentum, here and overseas.
The Australian Government recognises that the digital economy is key to securing our economic future and has released a Digital Economy Strategy. A National AI Centre is being established to coordinate expertise and address barriers.
The Government has released an AI Action Plan, supported by the AI Ethics Framework. It will help guide businesses and governments to responsibly design, develop and implement artificial intelligence.
The Australian Human Rights Commission’s recent report offers a number of recommendations for consideration, including establishing an AI Safety Commissioner to provide technical expertise and build public trust in the safe use of AI.
I would like to congratulate the Centre for Artificial Intelligence and Digital Ethics on your focus on the challenges of emerging technologies and the launch of this program. Your input will play an important role. I am excited by the systems thinking and collaboration between the technical, legal and social spheres. Which is exactly what we need as we tackle these really complex issues.
I hope this program will inspire and convene more important, inclusive conversations to help our law and policymakers and our community more widely understand AI to a sophisticated level.
I’m pleased to have taken the opportunity to consider some of the issues afresh as I prepared to speak with you today, and I hope that all of the initiatives, including yours, will come together for a robust approach.
And as many have noted, engagement, consultation and ongoing communication with the public about AI will be essential for building community awareness.
Public trust is critical to enable acceptance and uptake of the technology.
I told you about checking my children’s whereabouts on our family location app.
I don't want to leave you with the wrong impression.
It’s not actually about checking up on them. They’re not teenagers any more.
It’s really about feeling connected with them. When they’ve left the family home, you can’t rely on the dinner table any more to catch up with what everyone’s been doing.
But I only told you part of the story.
I probably should also mention that only four of my six children agreed to share their location information.
Two actually rejected my request!
Such is the lot of a parent!
But seriously, I’m pleased they’re taking control of their privacy online. This is something we all need to do, not only as individuals, but as a society.
It’s not about turning our back on digital technologies. It’s about embracing them and engaging with them in a really active and sophisticated way.
Understanding the issues.
Having the conversations.
And taking charge of the solutions.
Doing those things that your Centre has dedicated itself to.
So once again, congratulations.
I wish you all the best in your work.
Thanks for having me tonight.