Interview: ABC Perth, Automation and AI
Dr Alan Finkel was interviewed by Gillian O’Shaughnessy for ABC Perth Afternoons about automation and the changing nature of the workforce.
You can read the full transcript below.
Gillian O’Shaughnessy: Now Alan, the concept that machines will replace humans on the jobs front is nothing new, but the fear is certainly getting stronger.
Alan Finkel: You’re right Gillian, there’s a lot of concern. You could dismiss it and call it pessimism. But it’s founded on the belief that this time it’s different.
We always hear that you could go back to the Luddites of two hundred years ago and their concerns about the steam engine and what it was doing to industrialisation, and of course, as a result of that we created many more jobs than we lost. But the pain was significant at the time.
And we’ve had rounds of concern. In the 1930s one of the world’s best economists, John Maynard Keynes, wrote on this topic. In the 1960s, the Americans were so concerned about what automation would do to permanently undermine jobs that President Johnson at the time established a special commission. By the time the commission was ready to report three years later, the economy had rebounded, unemployment was approaching an all-time low, and they disbanded without submitting their report. And we see this again and again.
Now, if I was a chartist, and looked at the last two hundred years of job growth, and new jobs being generated through innovation to replace the jobs that were lost, I’d say it’s all going to be fine. There is no possibility for problems. The trends have been moving in the right direction.
But the non-chartists will say to me, “But Alan, this time it is different.” And the potential differences are fundamental. For example, when you’re talking about digital AI doing things, the cost of reproducing the AI and having multiple copies, or millions of copies, is very, very low. Whereas the cost of making millions of steam engines to do a job was very high.
And the rate of changing digital technologies to do existing jobs and evolve into new manifestations is very fast. So, it is different. But that doesn’t mean we should panic. Our priority has to be to ensure that we have jobs going into the future, but there are many ways that we can address that.
Gillian O’Shaughnessy: Now as a scientist you obviously respect innovation. But at times, humans have the edge over machines, in your opinion?
Alan Finkel: Oh absolutely. Look, it doesn’t matter how smart a machine is today, it doesn’t have the benefit of the memories, over forty or fifty years, that make us what we are. And those memories shape the way that we respond to current and future events.
Now, you could take a robot and give it false memories but you know, I know, the optimists know, it’s not going to be the same. So, there’s something very special about humans. We speak as humans. We react as humans. We have emotions as humans that no one has been able to begin to duplicate.
Although there are ways you can make it seem as if a robot or an AI program has a personality, once you probe it, what you find is, you would have to say, beauty that is only skin deep.
Another big difference is just our uniqueness. Gillian, you are you sitting in that studio. And I am here sitting in Canberra. If I was an artificial intelligence, I could be having a conversation simultaneously with a thousand or a million people. That actually undermines the quality of the experience that you would be having with me, if I was having simultaneous conversations with others. There is something special about you being bound in time and in place. We are humans. We are limited in what we can do.
We have a flexibility, an ability to have nuanced approaches to all challenges, whether in personal relationships, in job opportunities, or in political debates. With today’s technology, at best artificial intelligence could pretend to address [these] but they absolutely could not come near doing it properly.
We have our society. Our interaction with our society. We have religious beliefs. We have cultural beliefs. We have sporting beliefs. We’re unique! We’re human.
Gillian O’Shaughnessy: Dr Alan Finkel, our guest on ABC radio Perth. Australia’s Chief Scientist. And we’re looking at robots vs mankind, really. Now you used an example in the article you wrote about, say if we built a CEO robot of a company, could it fulfil a large percentage of the duties carried out by a human CEO? And the answer was pretty much yes.
Alan Finkel: It could do the day to day tasks. Most day to day tasks lend themselves to being done by conventional machines, or modern and future machines. But the role of a CEO, the role of anyone in management, in fact the role of virtually everybody, is to deal with the unexpected, to be imaginative, to be innovative in their career.
What we’ve seen is that when technologies come in, it doesn’t free up time for you. To the contrary, those productivity tools make you work harder. Gillian, do you work harder now in the presence of email and all the various social media tools that you have than you did in the past, or do you find they’re doing so many things easily for you that you’ve got lots and lots of free time?
Gillian O’Shaughnessy: Well you make a very good point there Alan, because I don’t think I’ve ever worked harder!
Alan Finkel: Exactly! And I find the same. So my personal productivity is enormously amplified by the tools that I’ve got. And it makes me work harder!
Gillian O’Shaughnessy: Now for a CEO that’s one thing. But the man on the street. You know the check-out operator, the truck driver, the bus driver, that sort of thing where we’re already seeing people’s positions taken over by robots. It’s a concern.
Alan Finkel: It is Gillian. It’s a concern, but we shouldn’t run from the challenge, which is what a lot of people do want to do. A few thoughts on that. You describe the number of jobs that will be under threat, and it’s true, they will be. First of all, I don’t think it will happen as fast as people anticipate. But it will certainly happen.

And this is where governments need to step in. Governments need to understand what’s coming at us over the horizon, and the fact that we’re having this kind of a discussion, this broad discussion on this topic, is important. It’s the first step in developing policy. And government needs to develop policy responses that embrace retraining and job creation, as well as regulatory responses that set the bounds on what automation, robots and machines can do. If we say, “Oh my gosh! It’s scary. Let’s make digital technology illegal,” then we fail ourselves. If we don’t do anything at all and let everything run rampant, then you’re in a situation equivalent to the early days of automobiles or guns.
But take automobiles. Automobiles are fundamentally dangerous. And as they became more common, the rate of road deaths and road trauma was increasing massively. But through decades of thoughtful policy development, standards and regulations we have tamed the car and made it our servant. If we’re sensible in our approach going towards the future, we should be able to tame artificial intelligence, robots and automation to make them the servants of humankind. People need to remember that when you’re looking at robots and humans, in my opinion, we humans are the masters. We can turn the robot off.
Gillian O’Shaughnessy: I’m immediately going to open the pod bay doors, HAL! I think when there’s a robot or new technology we also see renewed appreciation for humans in a robot-free zone. Or people become more connected to humanity, their own humanity and others’.
Alan Finkel: I think that’s true. It’s absolutely impossible to know how people will react going forward. But I would anticipate that if fast food restaurants for example, are totally automated, people would go there to get the cheapest possible food that won’t do them harm. But there will be a growing interest in people going to local cafes, gourmet cafes, which will probably even advertise themselves as being robot free. It will be a human intense zone.
Gillian O’Shaughnessy: Tell us the story Alan, of the robot that was sent to a shopping mall because people wanted to study how it interacted with humans.
Alan Finkel: That was a study that was done in Japan, but it could have been done anywhere. The researchers created a small mobile robot; it was not particularly intelligent. It had a simple rule. It was told to roll around in the courtyard and, when it encountered a human being, to ask – through a speaker – the human being to step aside. Which is the opposite of what you’d anticipate they would have got it to do. So it was asking the human being to step aside to make room for it.
And what they found is that when the robot encountered children, the children wouldn’t step aside. And they started teasing the robot. And then a handful of children would get together and they started beating it on the head with bottles, kicking it, holding hands and yelling things at it, and dancing around it. They teased that poor robot and smacked it on the head.
Whereas the adults were querulous but stepped aside. What that tells us is that there’s something we acquire as we grow, from our parents and from our schools, that civilises us to have a more embracing approach to things we don’t understand. It was fascinating. So the rule that they found helped that robot get around was to avoid human beings who were shorter than 1.4 metres.
Gillian O’Shaughnessy: So generally, in a robot vs human contest, the humans will come out on top. Alan Finkel can I ask you, there’s been growing concern about so called “killer robots” recently among the scientific community and more widely. Elon Musk spoke out about it just this week. What are your thoughts?
Alan Finkel: I share their concern. Any technology that is used as a weapon of war is a concern. Ultimately killer robots collectively are a weapon of mass destruction. So I think it’s extremely important that individual country governments, and collectively governments through the United Nations and other international bodies, treat the control and rules of engagement for these kinds of autonomous killer robots very, very seriously. They’re right up there with chemical weapons, biological weapons, nuclear weapons. We need international agreements to limit the deployment of such devices.
You know what it’s like, you can never prevent rogue countries from using any of those weapons of mass destruction that I mentioned. So there will always be counter-balancing obligations on more advanced countries or more stable countries to keep their own stockpiles. But gosh we need rules to limit, as much as we can, the deployment of yet another weapon of mass destruction.
Gillian O’Shaughnessy: So in the end our success in the future will be based on how we place ourselves alongside robot technology.
Alan Finkel: That’s exactly right Gillian. We are obliged not to run from the problem, but stand up and recognise that we, as humans, can be and must be in control of the situation.