Nature recently polled 5,000 researchers on when "it's acceptable to involve AI and what needs to be disclosed." From the article:
With generative AI tools such as ChatGPT improving so rapidly, attitudes about using them to write research papers are also evolving. The number of papers with signs of AI use is rising rapidly (D. Kobak et al. Preprint at arXiv https://doi.org/pkhp; 2024), raising questions around plagiarism and other ethical concerns.
To capture a sense of researchers’ thinking on this topic, Nature posed a variety of scenarios to some 5,000 academics around the world, to understand which uses of AI are considered ethically acceptable.
The survey results suggest that researchers are sharply divided on what they feel are appropriate practices. Whereas academics generally feel it’s acceptable to use AI chatbots to help to prepare manuscripts, relatively few report actually using AI for this purpose — and those who did often say they didn’t disclose it.
Despite the statement that “researchers are sharply divided,” I actually thought the results showed quite a bit of agreement on the acceptability, or not, of certain uses. But you should read it and decide for yourself.