Rankings might have ignored Finkel
This opinion piece by Dr Cathy Foley was originally published in The Sydney Morning Herald on 15 November 2023.
Australia's chief scientist says the way we assess researchers needs to be overhauled.
Throughout most of my research career, there was no such thing as an H-index. This is a relatively new method to assess and rank researchers which has become as common a way to compare careers as the ATAR in the lives of our young people - with equally counterproductive outcomes.
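For readers unfamiliar with the metric: a researcher's H-index is the largest number h such that they have h papers each cited at least h times. A minimal sketch of that calculation in Python (the example citation counts are illustrative only):

```python
def h_index(citations):
    """Return the largest h such that the author has h papers
    with at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # still at least `rank` papers with >= `rank` citations
        else:
            break
    return h

# A hypothetical researcher with papers cited [10, 8, 5, 4, 3] times
# has an H-index of 4: four papers each have at least 4 citations.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Note that the metric counts only publications and citations, which is exactly why, as the article goes on to argue, it can miss impact made outside the publishing system.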
Of course, the H-index and other rankings have their place. However, they now have an outsized influence in a self-perpetuating system that, like becoming a three-star Michelin restaurant, can drive behaviour in unhappy directions. Just as three-star restaurants find themselves devoting inordinate attention to the bathroom taps and placement of the silverware, researchers can find themselves spending valuable energy on producing iterative papers, applying for grants and chasing citations.
The H-index has been described to me as an instance of what is known as Goodhart's law: when a measure becomes a target, it stops being useful, because smart people, often in privileged positions, are incentivised to game the system to their advantage.
To my mind, Australia needs researchers devoting their time to discovery, to blue-sky research that drives the innovations of the future. Australia needs researchers focused on collaboration and problem-solving for the urgent challenges we face, such as creating clean energy. Australia needs scientists working in industry, in government and across the ecosystem to drive economic and social impact.
The current system for assessing research careers is too narrow to fully recognise these important functions of the research sector.
I don't think Alan Finkel would mind me sharing his H-index. Alan is a great scientist, innovator and engineer who developed super-precise scientific instruments used by pharmaceutical companies around the world for developing new drugs to treat brain conditions such as epilepsy, migraine and pain. The equipment and software he developed allowed neuroscientists to do work that changed lives.
Alan also had a huge impact on the recognition of the reality of climate change, the role of hydrogen in a clean energy future and many other important scientific issues of the past decade. He was chancellor of Monash University and, of course, my predecessor as Australia's chief scientist. His H-index is less than 10, not the kind of number that academics chase. For context, more than 100 Australian National University academics have H-indexes of more than 60. Alan's number results from the fact that most of his work was done through his own company, not a university.
This is just one of many startling illustrations of where narrow systems of measuring research success can go wrong. Put simply, the current systems for assessing research careers for hiring, promotion and funding are not fit for purpose. This is why my office commissioned the Australian Council of Learned Academies this year to report on research assessment practices at universities and other research institutions in Australia.
The evidence tells us things need to change.
Assessment has become dominated by metrics such as the H-index, citation numbers, publication numbers, the ability to secure publication in prestigious journals, and a researcher's track record in grant funding.
Metrics have evolved to be too narrow and carry an outsized influence. This phenomenon is relatively recent; the H-index was devised only in 2005.
Since then, metrics have become self-perpetuating, reinforcing the status quo. They have given rise to an unhelpful nexus between universities, publishers, funders and global ranking agencies, as universities chase higher international rankings through publication numbers and prestigious journals.
Narrow research metrics create perverse incentives and a "publish or perish" mentality. Researchers may be incentivised to publish iteratively, and to chase citations, rather than focusing on quality. The current practices do not incentivise risky, innovative or multidisciplinary research.
Assessment practices fail to recognise experience outside the research sector. As a result, they get in the way of mobility between the university sector, industry and government. They disadvantage women, who can find it difficult to compete after time out of the workforce, for example, to have children. They reduce opportunities for women to get funding, secure jobs and be recognised for the perspectives they bring.
The ACOLA report found serious concerns among researchers about the systems for assessing their careers, including the amount of time and effort spent maximising rankings and how assessment practices impact relationships, collaborations, and decision-making. Many researchers expressed concerns about transparency and accountability in hiring and promotion.
ACOLA also considered international principles that should guide any changes. This will inform my advice to the government.
To my mind, to drive Australia's knowledge economy and solve complex problems, we need a highly skilled, diverse research workforce, collaboration across sectors and disciplines, and career mobility.
This means finding new ways to assess research careers, to ensure the sector remains effective and the discoveries of science and research are translated into new technologies that can be scaled up and used to build knowledge for the benefit of society and humanity.
We need to measure what matters so we get the outcomes we want. We need to emphasise quality, not quantity. We need to encourage more people into science careers, including people with the transformative capabilities epitomised by Alan Finkel, and support our scientists – all of them.
Dr Cathy Foley is Australia's chief scientist.