Elsevier vs. Springer Nature: Comparing AI Policies for Academic Authors

In the rapidly evolving landscape of academic publishing, artificial intelligence (AI) tools have become increasingly prevalent in the research and writing process. As these technologies transform how scholars approach their work, major publishers have developed specific policies to address AI use in academic manuscripts. Two of the most influential publishers, Elsevier and Springer Nature, have established comprehensive guidelines that researchers must navigate when submitting their work.
This article provides a detailed comparison of AI policies between these publishing giants, highlighting key differences in disclosure requirements, authorship rules, and image policies. Understanding these nuances is essential for researchers seeking to ethically integrate AI tools while maintaining compliance with publisher standards.
Overview of Publisher Approaches to AI
Both Elsevier and Springer Nature recognize the transformative potential of AI in academic writing while acknowledging the ethical considerations it raises. Their policies reflect a balance between embracing technological innovation and preserving the integrity of scholarly communication.
Elsevier’s General Stance
Elsevier’s policy focuses on transparency and human accountability. The publisher distinguishes between using AI to enhance readability and language versus using it to replace key authoring tasks. Their approach emphasizes that authors remain ultimately responsible for their work’s content, regardless of AI assistance.
Springer Nature’s General Stance
Springer Nature similarly emphasizes human accountability while providing specific guidance on different AI applications. Their policy explicitly addresses Large Language Models (LLMs) such as ChatGPT, stating that these tools cannot satisfy their authorship criteria. The publisher is actively monitoring developments in this area and will update their policies accordingly.
Disclosure Requirements: What Authors Must Reveal
Elsevier’s Disclosure Policy
Elsevier requires authors to disclose the use of generative AI and AI-assisted technologies in their manuscripts. This disclosure will appear in the published work, supporting transparency between authors, readers, reviewers, and editors. The policy applies specifically to the creation of new content, not to previously published material.
Key points:
- Disclosure must be included in the manuscript
- A statement about AI use will appear in the published work
- Disclosure supports compliance with the terms of use of relevant AI tools
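Elsevier also suggests wording for this disclosure. At the time of writing, their guidance recommends a statement along the following lines, placed in a dedicated declaration section of the manuscript: “During the preparation of this work the author(s) used [NAME TOOL / SERVICE] in order to [REASON]. After using this tool/service, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the content of the publication.”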
Springer Nature’s Disclosure Policy
Springer Nature takes a more nuanced approach to disclosure requirements. They specify that the use of an LLM should be properly documented in the Methods section of the manuscript (or a suitable alternative section if no Methods section is available). However, they make an important distinction: “AI assisted copy editing” does not need to be declared.
Key points:
- LLM use must be documented in the Methods section
- “AI assisted copy editing”, defined as improvements to human-generated texts for readability, style, and error correction, is exempt from disclosure requirements
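Springer Nature does not prescribe exact wording for this documentation. Purely as an illustrative sketch, a Methods-section note might read: “An initial draft of the introduction was generated using [LLM name and version]; all text was subsequently reviewed, edited, and verified by the authors.” The tool and purpose shown here are placeholders, not publisher-mandated language; authors should describe their actual use.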
Authorship Rules: Can AI Be an Author?
Elsevier’s Authorship Rules
Elsevier is unequivocal in its stance on AI authorship: generative AI and AI-assisted technologies cannot be listed as an author or co-author, nor can AI be cited as an author. The publisher emphasizes that authorship implies responsibilities and tasks that can only be attributed to and performed by humans, including:
- Ensuring questions related to accuracy or integrity are appropriately investigated
- Approving the final version of the work
- Agreeing to its submission
- Ensuring the work is original and does not infringe third-party rights
Springer Nature’s Authorship Rules
Springer Nature similarly states that Large Language Models do not satisfy their authorship criteria. They emphasize that authorship carries accountability for the work, which cannot be effectively applied to LLMs. The publisher requires human accountability for the final version of the text and agreement from the authors that the edits reflect their original work.
Image and Figure Policies: AI-Generated Visuals
Elsevier’s Image Policy
Elsevier takes a strict approach to AI-generated images, prohibiting the use of generative AI or AI-assisted tools to create or alter images in submitted manuscripts. This includes:
- Enhancing, obscuring, moving, removing, or introducing specific features within an image
- Using AI for artwork such as book covers or graphical abstracts
The only exception is when AI use is part of the research design or methods (such as AI-assisted imaging approaches). In such cases, authors must:
- Describe the use in a reproducible manner in the Methods section
- Explain how the tools were used in the image creation or alteration process
- Provide the name, version, extension numbers, and manufacturer of the model or tool
- Be prepared to provide pre-AI-adjusted versions of images for editorial assessment, if requested
Springer Nature’s Image Policy
Springer Nature similarly restricts AI-generated images, stating that while legal issues relating to AI-generated images remain unresolved, their journals cannot permit the use of such images for publication. They do allow specific exceptions:
- Images or artwork obtained from agencies with which the publisher has a contractual relationship and which created the images in a legally acceptable manner
- Images directly referenced in pieces specifically about AI (reviewed case-by-case)
- Generative AI tools developed with specific scientific data that can be attributed, checked, and verified
All exceptions must be clearly labeled as AI-generated within the image field. The policy covers various image types including video, animation, photography, scientific diagrams, and illustrations, but excludes text-based and numerical display items like tables and simple graphs.
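The labeling wording is not prescribed by the publisher. As an illustration only, a permitted figure might carry a note within the image field such as: “This image was created using [generative AI tool, version].” The bracketed details stand in for the actual tool used.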
Peer Review Considerations
Elsevier’s Peer Review Policy
Elsevier does not explicitly address AI use in peer review in their main AI policy document. Their focus remains on author responsibilities and content creation rather than the review process.
Springer Nature’s Peer Review Policy
Springer Nature provides specific guidance on AI use by peer reviewers. They emphasize that peer reviewers play a vital role in scientific publishing and are selected for their expertise, which is “invaluable and irreplaceable.” The publisher explicitly asks that reviewers not upload manuscripts into generative AI tools due to:
- AI tools’ limitations (lack of up-to-date knowledge, potential for nonsensical or biased information)
- Confidentiality concerns (manuscripts may contain sensitive information)
If reviewers use AI tools to support their evaluation, they must declare this transparently in their peer review report.
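No exact wording is mandated for this declaration either. As an illustration only, such a declaration might read: “[AI tool, version] was used to improve the clarity of this report’s language; the scientific evaluation is entirely my own.” The tool reference is a placeholder; reviewers should state precisely how any AI tool was used.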
Practical Compliance Checklist
For Elsevier Submissions:
- ✓ Use AI only for improving readability and language, not for core scientific content
- ✓ Include disclosure of AI use in your manuscript
- ✓ Ensure no AI tools are listed as authors
- ✓ Avoid AI-generated images unless part of research methodology
- ✓ If AI is used for images as part of methodology, provide detailed documentation
- ✓ Maintain human oversight and accountability for all content
For Springer Nature Submissions:
- ✓ Document LLM use in Methods section (if applicable)
- ✓ No need to disclose “AI assisted copy editing” for grammar and style
- ✓ Ensure human accountability for final text
- ✓ Avoid AI-generated images unless meeting specific exception criteria
- ✓ Clearly label any permitted AI-generated images
- ✓ If serving as a peer reviewer, do not upload manuscripts to AI tools
Conclusion: Navigating the Differences
While Elsevier and Springer Nature share similar fundamental principles regarding AI use in academic writing, their policies differ in important details. Springer Nature offers more nuanced guidance on what types of AI assistance require disclosure, particularly exempting “AI assisted copy editing” from declaration requirements. They also provide more specific guidance for peer reviewers.
Both publishers emphasize human accountability, prohibit AI authorship, and restrict AI-generated images with limited exceptions. The key to successful navigation of these policies lies in transparency, proper documentation, and maintaining human oversight throughout the research and writing process.
As AI technologies continue to evolve, these policies will likely undergo further refinement. Researchers should regularly check for updates to ensure compliance with the latest requirements. By understanding and adhering to these guidelines, authors can ethically leverage AI tools while maintaining the integrity and credibility of their scholarly contributions.
References
1. Elsevier. “The use of AI and AI-assisted technologies in writing for Elsevier.”
2. Springer Nature. “Artificial Intelligence (AI).”