Artificial Intelligence Policy
JE3S recognizes that generative artificial intelligence (AI) tools may be used to support scholarly work. To protect research integrity, transparency, authorship accountability, and confidentiality, JE3S establishes the following policy on AI use for authors, editors, and reviewers.
1) Policy for Authors
- Permitted uses: Authors may use AI tools for language improvement (grammar, clarity, readability), formatting assistance, summarization of their own notes, or non-substantive editing, provided the authors verify accuracy and ensure the final manuscript reflects their own scholarly judgment.
- Accuracy and accountability: Authors are fully responsible for the content of the manuscript, including the validity of claims, correctness of citations, originality of text, and compliance with ethics and copyright. AI output must be checked for factual errors, hallucinated references, and biased or misleading statements.
- Prohibited uses: AI tools must not be used to fabricate or falsify data, results, images, tables, references, peer-review reports, or ethical approvals. AI tools must not be used to generate “fake” citations or sources that cannot be verified.
- Authorship: AI tools (including chatbots) cannot be listed as authors or co-authors because they cannot take responsibility for the work, provide consent, or meet authorship criteria.
- Disclosure requirement: If AI tools were used in manuscript preparation beyond minor language editing, authors must disclose in the manuscript which tool(s) were used and for what purpose (e.g., in the Acknowledgements or a dedicated “Use of AI” statement). Example: “The authors used [tool name] to improve grammar and clarity. The authors reviewed and edited the output and take full responsibility for the content.”
- Data privacy: Authors must not upload confidential, personal, proprietary, or sensitive data to AI tools unless they have the legal right and appropriate permissions to do so, and only if the tool’s terms and settings support confidentiality.
- AI-generated images/media: Any AI-assisted generation or alteration of figures (including photos, microscopy images, charts, or graphical abstracts) must be clearly labeled and described in the figure caption or methods. Image manipulation that misrepresents results is prohibited.
2) Policy for Editors
- Editorial responsibility: Editors remain responsible for editorial decisions and must not delegate acceptance/rejection decisions to AI systems.
- Confidentiality: Editors must not upload submitted manuscripts, reviewer identities, or editorial correspondence into public AI tools or into any AI system that has not been contractually approved for confidential use in scholarly publishing.
- Permitted uses: Editors may use AI tools for administrative support (e.g., drafting non-confidential emails, language polishing of decision letters) and for preliminary checks, provided confidentiality is protected and editorial judgment remains human-led.
- Integrity checks: Editors may request additional documentation (e.g., raw data, analysis scripts, ethics approval) when there are concerns about AI-assisted fabrication, plagiarism, image manipulation, or unreliable references.
- Transparency: When AI-related concerns materially affect an editorial decision, the rationale should be documented in the editorial record.
3) Policy for Reviewers
- Confidentiality is mandatory: Reviewers must treat manuscripts as confidential documents and must not upload, paste, or share any part of a manuscript (including figures, tables, data, or identifying information) into AI tools or external services.
- No AI-generated peer review: Reviewers must not use AI tools to generate or substantially write peer-review reports. Peer review requires expert judgment, accountability, and careful reading of the manuscript.
- Limited assistance: Reviewers may use basic tools (e.g., spelling/grammar checkers) for their own writing as long as no confidential manuscript content is shared.
- Ethical reporting: If reviewers suspect AI-related misconduct (fabricated data, manipulated images, invented citations, or disguised plagiarism), they should inform the editor with specific evidence where possible.
4) Compliance and Actions
JE3S may apply additional screening and verification procedures when AI-related integrity risks are suspected. Non-compliance with this policy may result in editorial action, including rejection, correction requests, retraction (for published articles), or notification to relevant institutions when appropriate.
This policy is intended to align with widely accepted publication ethics principles and to protect the credibility and reliability of the scholarly record.