With the rise of text-based artificial intelligence technologies such as ChatGPT, the challenges of writing, reviewing and publishing scientific papers have also increased. At the IUPAC | CHAINS 2023 World Congress on Chemistry in The Hague, a number of experts in the field of publishing came together to provide insight into these issues.
What should artificial intelligence technologies bring to the (publishing) table? This question, posed by Chemistry World publisher Adam Brownsell, who chaired the session, kicked off the panel discussion on the future of scientific publishing and AI. Joris van Rossum, a publishing professional at STM, the global association of scientific, technical and medical publishers, says a major challenge is the misuse of AI. AI can be useful, but it cannot create text out of thin air: it relies on recombining or copying existing text to produce something ‘new’.
Arms race
Angela Wilson, immediate past president of the ACS, who has experience as both an author and an editor for a number of journals, posed a question to the other panellists: ‘How do you know when someone is using an AI program like ChatGPT? How would you check that?’ One answer came from Steffen Pauly, editorial director at Springer Nature. He explains that Springer Nature’s policy does not allow AI programs to be credited as authors. The researchers themselves remain fully responsible for the results and should be transparent about their use of AI, just as they should be about other technologies already integrated into their workflow. Andrew Bissette, scientific editor of Cell Reports Physical Science and Chem, agrees with Pauly: ‘ChatGPT can’t be an author, but its use should definitely be disclosed.’
Bissette is convinced that we need tools to detect the use of AI, but ‘it’s also an arms race’, he says. ‘The moment you release your detection tools, people will adapt their AI programs to get around them. It is an important issue, but we should also look at the ethical use of AI to create content.’
Frightening
This prompted a question from the audience: AI will probably be used to detect whether AI was used to produce a paper. How should editors safeguard that process? And who decides what kind of AI is acceptable and what is not?
Van Rossum: ‘As scientists, we should have anticipated these technologies. At STM, we’ve produced a best-practice white paper on how and when to use AI in research papers. We allow it to be used to improve English, but there is a risk that people will “improve” other things as well. Checking is the hardest part.’ Wilson adds: ‘As soon as our publishers are able to detect the use of AI, we can take action. But the programs are getting better and better, and in a way that’s frightening. As long as publishers have loose standards, there will be problems.’
AI and reviewing
But what is the risk of using AI to review papers? Bissette says that papers under review should always be kept confidential. ‘If a tool like OpenAI’s ChatGPT is used to review a paper, it is unclear what will be done with the data.’ ‘If you use AI to review papers, it should be kept in-house behind an impenetrable firewall’, says Peter Schreiner, an editor at Wiley for more than twenty years. ‘But at the end of the day, the human reviewers should have the final say, and I don’t think anyone would disagree with that.’ According to Bissette, it comes down to community ethics and enforcement. ‘We trust the reviewers not to abuse their position or post the papers on the internet.’
The use of AI to write or ‘improve’ papers could also be a symptom of the ‘publish or perish’ culture in the current research world, the panellists agreed. ‘Because science is largely funded by taxpayers’ money, there is a lot of pressure to publish’, says Wilson. Schreiner describes how Wiley has changed its funding policy to combat this culture. ‘When you apply for money, you can’t link to more than ten papers. I think this is happening in a lot of other places too, moving away from impact factors and large numbers of published papers.’ Van Rossum agrees: ‘this is something we can only achieve together’. ‘We should also be careful and conscious that the company behind ChatGPT, or other AI companies, do not become the sole owners of the data and walk away unchallenged,’ he adds.
Future prospects
‘What we do should be done in a critical way’, says Pauly. ‘We have always used technology to better serve our customers, and we see that using AI tools can help move the publishing community forward.’ Bissette: ‘The tools are here to stay. As long as we find an ethical way to use them, they can be a great help.’
‘Pandora’s box has been opened’, says Schreiner, ‘but I’m optimistic. There will be many more tools, and they will improve reviewing and other processes, as long as we keep the decision-making with humans.’ Wilson is excited about the future. ‘But we need to make sure the science is right. There should be checks and balances, and we should come up with community standards of what is OK and what is not. There should be consistent guidelines across publishers and universities.’