Comparing AI Policies for Academic Authors: Elsevier vs Wiley

Summary
This report examines the distinct approaches taken by two of the world’s leading academic publishers, Elsevier and Wiley, in formulating policies that govern the use of artificial intelligence (AI) in scholarly authorship. As AI technologies continue to evolve, their integration into academic publishing has prompted significant ethical, practical, and technical discussions. Both Elsevier and Wiley have developed comprehensive guidelines that emphasize maintaining the integrity of the scholarly record while adapting to the changing landscape of academic writing.
Elsevier’s AI policy is built on the principle of preserving the ethical standards of academic publishing. The policy mandates that AI tools cannot be credited with authorship, ensuring that the responsibilities traditionally held by human authors remain intact. Within this framework, authors may use AI tools to enhance readability and language, provided that human oversight and accountability are not compromised. By providing clear guidance for authors, reviewers, and editors, Elsevier aims to uphold the quality and reliability of academic work and to ensure that AI technologies complement rather than replace human expertise in the publishing process.
In contrast, Wiley’s AI policy prioritizes full disclosure and transparency in the use of generative AI technologies. The guidelines require authors to disclose any AI tools utilized during the preparation of their manuscripts, thereby fostering an environment of ethical publishing practices. Wiley’s approach includes a robust set of frequently asked questions (FAQs) to assist authors unfamiliar with AI technologies, helping them navigate the integration of AI into their work. This policy reflects Wiley’s commitment to equipping authors with the necessary knowledge to ethically and effectively incorporate AI into academic writing, maintaining the standards expected across the academic community.
The comparison of Elsevier and Wiley’s AI policies highlights their shared focus on transparency, ethical behavior, and responsibility, while showcasing their unique approaches to addressing AI’s role in academic authorship. Both publishers aim to support authors in navigating the complexities of AI technologies, emphasizing the critical importance of human oversight and the preservation of scholarly integrity. As AI continues to influence the academic publishing industry, these policies set a precedent for balancing innovation with ethical and responsible authorship practices.
Elsevier’s AI Policy
Elsevier has implemented a comprehensive AI author policy that aims to ensure the integrity of the scholarly record while providing guidance on the use of generative AI and AI-assisted technologies in academic publications. The policy is designed to promote greater transparency for authors, readers, reviewers, and editors by clearly outlining the roles and responsibilities associated with AI use in the publishing process.
The policy explicitly states that authorship implies responsibilities and tasks that can only be attributed to and performed by humans, and therefore prohibits AI tools from being listed as authors. This aligns with the broader ethical standards set by Elsevier to maintain academic integrity and uphold the ethical behavior expected of all parties involved in the publishing process. The policy also provides a framework for editors, reviewers, and authors to navigate the ethical, practical, and technical questions related to AI use in their work.
By instituting these guidelines, Elsevier aims to safeguard the quality and reliability of academic publications, ensuring that any AI technology used in the process does not compromise the rights or responsibilities of authors, editors, or publishers.
Wiley’s AI Policy
Wiley’s AI guidelines provide comprehensive guidance for authors and were developed based on needs identified through interviews with nonfiction, academic, and business book authors. The guidelines aim to help authors address the ethical, practical, and technical questions raised by the use of AI in their work. A key aspect of Wiley’s policy is its focus on transparency and on guidance for authors, reviewers, editors, and readers in relation to generative AI and AI-assisted technologies.
One of the main components of Wiley’s AI policy is the requirement for full disclosure of any generative AI technologies and tools used in the preparation of a submission. This ensures that all parties involved are aware of the involvement of AI in the creation of the content, thereby maintaining academic integrity. Additionally, Wiley’s guidelines explicitly state that AI-generated content (AIGC) tools cannot be listed as authors, reinforcing the importance of human oversight and authorship in academic publishing.
Wiley also provides a set of frequently asked questions (FAQs) to assist authors who are new to AI. These FAQs cover topics such as the development of effective prompts and where to start when integrating AI into their work. This approach aims to equip authors with the necessary knowledge to navigate the evolving landscape of AI in academic publishing.
Comparative Analysis of Elsevier and Wiley
AI Usage Policies
Elsevier and Wiley have both released detailed policies regarding the use of AI in academic authorship. Elsevier’s policy emphasizes maintaining the integrity of the scholarly record while allowing authors to use AI tools to improve the readability and language of their submissions, provided that the generated output is critically assessed by human authors. Wiley, on the other hand, requires full disclosure of generative AI technologies used in the preparation of submissions, aligning with its ethical standards.
Transparency and Guidance
Both publishers aim to provide greater transparency and guidance to authors, readers, reviewers, and editors concerning the use of generative AI and AI-assisted technologies. Elsevier’s policy highlights the importance of transparency to uphold the integrity of the scholarly record, while Wiley’s policy underscores ethical behavior across all parties involved in publishing.
Ethical Considerations
Ethical considerations are a significant component of the AI policies for both Elsevier and Wiley. Elsevier stresses that authorship responsibilities and tasks can only be attributed to and performed by humans, ensuring that AI serves merely as a tool to aid the writing process rather than replace human authorship. Wiley also emphasizes ethical behavior, necessitating full disclosure of AI tools used, which complements its focus on ethical publishing standards.
Responsibility and Integrity
Authors using AI in their submissions to Elsevier and Wiley are required to take full responsibility for the accuracy and integrity of their work. Elsevier’s policy allows for AI usage as a companion to the writing process but insists on human oversight and accountability for the final content. Similarly, Wiley’s policy promotes transparency and accountability, ensuring that the use of AI does not compromise the scholarly integrity of the work submitted.
Case Studies
Wiley’s AI Guidelines
Wiley has developed comprehensive AI guidelines aimed at supporting authorship in nonfiction, academic, and business publishing. The guidelines were crafted based on needs identified through extensive interviews with authors in these fields. The accompanying FAQs serve as a resource for those new to AI, offering guidance on developing effective prompts and using AI tools efficiently during manuscript preparation. Wiley’s policy emphasizes the thoughtful use of AI tools, allowing authors to maintain high editorial standards while safeguarding their intellectual property, and seeks to offer greater transparency and guidance to authors, readers, reviewers, and editors in relation to generative AI and AI-assisted technologies.
Elsevier’s AI Author Policy
Elsevier has introduced a new AI author policy that concentrates on ensuring the integrity of the scholarly record. The policy outlines the responsible use of AI technologies by authors when preparing manuscripts or other scholarly materials. It aims to provide clear guidance to maintain high editorial standards while integrating AI tools responsibly into the scholarly process.
Reception and Impact
The implementation of AI policies by major academic publishers like Elsevier and Wiley has sparked considerable discussion in the academic community. These policies are primarily aimed at maintaining the standards of expected ethical behavior by all parties involved in publishing, including authors, editors, reviewers, publishers, and societies.
Elsevier and Wiley both stipulate that AI technology should serve only as a companion in the writing process, not as a replacement for human authorship, with authors bearing full responsibility for the accuracy of the content. Used in this way, AI has been recognized as enhancing academic writing across tasks such as idea generation, content structuring, literature synthesis, data management, editing, and ethical compliance.
The impact of these policies is evident in the way they shape academic integrity and the ethical implications of AI in publishing. By framing AI as an assistive tool rather than an author, these guidelines reinforce the accountability and authenticity of scholarly work. Moreover, an examination of AI authorship policies across 300 top academic journals in late spring 2023 highlights how widely these principles have been accepted and adapted within the academic publishing landscape.
References
- https://www.skool.com/story-hacker-free/dont-use-ai-for-academic-research-until-you-have-read-this
- https://libguides.library.ohio.edu/AI/publishers
- https://www.elsevier.com/about/policies-and-standards/the-use-of-generative-ai-and-ai-assisted-technologies-in-writing-for-elsevier
- https://www.sciencedirect.com/journal/artificial-intelligence/publish/guide-for-authors
- https://www.elsevier.com/about/policies-and-standards/publishing-ethics
- https://info.library.okstate.edu/AI/publisher-policies
- https://mrccedtech.com/the-ethical-use-of-ai-in-academic-publishing-explored/
- https://www.wiley.com/en-us/network/publishing/research-publishing/editors/the-implications-of-ai-in-academic-publishing
- https://newsroom.wiley.com/press-releases/press-release-details/2025/Wiley-Releases-AI-Guidelines-for-Authors/default.aspx
- https://www.elsevier.com/about/policies-and-standards/generative-ai-policies-for-journals
- https://onlinelibrary.wiley.com/pb-assets/assets/15405885/Generative%20AI%20Policy_September%202023-1695231878293.pdf
- https://www.quora.com/What-are-the-pros-and-cons-of-publishing-in-Wiley-and-Elsevier
- https://authorservices.wiley.com/ethics-guidelines/index.html
- https://www.researchinformation.info/news/wiley-releases-ai-guidelines-for-authors/
- https://www.wiley.com/en-us/publish/book/ai-guidelines
- https://www.sciencedirect.com/science/article/pii/S2666990024000120
- https://onlinelibrary.wiley.com/doi/full/10.1002/leap.1582