In response to the growing use of artificial intelligence (AI) in education, research, and academic publishing, and to concerns that misuse of AI-Generated Content (AIGC) could compromise research integrity, the Institute of Scientific and Technical Information of China (ISTIC), in collaboration with leading international academic publishers including Elsevier and Springer Nature, has released the Guidelines on the Boundaries of AIGC Use in Academic Publishing 3.0.
The Guidelines systematically delineate the appropriate boundaries for AIGC use across the entire academic research and publishing lifecycle. Centered on the core principles of transparency, accountability, privacy protection, fairness, and sustainable development, they establish a clear code of conduct and provide practical guidance for key stages of academic activity, including research conduct, manuscript preparation, submission, peer review, and post-publication management.
Key Principles of the Guidelines on the Boundaries of AIGC Use in Academic Publishing 3.0
Upholding transparency and accountability
Any use of AIGC tools must be explicitly disclosed in appropriate sections of the manuscript—such as the Methods, Acknowledgements, or Appendices—including the name and version of the tool, the time of use, input prompts, and the scope of generated content. Authors bear full and ultimate responsibility for all published content.
Prohibiting the attribution of authorship to AIGC
AIGC systems do not possess legal personhood or the capacity to assume scientific responsibility and therefore must not be listed as authors or co-authors under any circumstances.
Strictly safeguarding the authenticity of data and images
The direct generation, manipulation, or alteration of experimental data or original research images—such as Western blots, histological staining images, or flow cytometry plots—using AIGC is strictly prohibited. All critical scientific evidence must be derived from genuine research processes.
Strengthening mandatory human verification
References, factual statements, and statistical results generated by AIGC may contain inaccuracies or fabricated information. Users are required to verify the authenticity, accuracy, and relevance of such content on an item-by-item basis.
Ensuring privacy protection and data security
Researchers should exercise caution when uploading unpublished research findings, personal data, or other sensitive information to public AI platforms.
Defining clear boundaries for language editing
AIGC tools may be used for grammatical correction, sentence refinement, and language polishing by non-native speakers, but must not be used to substantively alter scientific conclusions or to ghostwrite research content.
Clarifying peer review and submission ethics
Reviewers are prohibited from uploading manuscripts under review to AI tools without explicit authorization from the commissioning body. Authors should likewise avoid disclosing confidential or sensitive information when using AIGC to prepare responses to reviewers’ comments.
For further information, the full text of the Guidelines is provided in Appendix 1 for reference.
