Artificial Intelligence (AI) Usage Policy

The International Journal of Didactical Studies (IJODS) recognizes the growing role of artificial intelligence (AI) tools in academic research and writing. The journal supports the responsible and transparent use of AI technologies while upholding academic integrity, authorship accountability, and ethical standards, drawing guidance from internationally recognized principles, including those of the Committee on Publication Ethics (COPE) and the International Committee of Medical Journal Editors (ICMJE).

IJODS maintains a dedicated AI policy to ensure that the use of AI tools does not compromise the integrity, originality, or reliability of scholarly work.

1. Use of AI by Authors

Authors may use AI-assisted tools (e.g., for language editing, grammar checking, or improving readability) provided that such use does not replace the authors’ own intellectual contribution.

AI tools must not be used to:

  • Generate original research ideas, hypotheses, or interpretations

  • Produce substantial portions of the manuscript content

  • Fabricate or manipulate data, results, references, or citations

  • Replace critical analysis, methodological decisions, or scholarly judgment

AI tools cannot be listed as authors, as authorship implies responsibility and accountability that can only be assumed by humans.

2. Disclosure of AI Use

Authors are required to clearly disclose any use of AI-assisted tools in the preparation of their manuscript. The disclosure should appear in the manuscript itself (e.g., in the acknowledgements or a separate disclosure statement) and specify the purpose for which AI tools were used, for example: "An AI-based language tool was used to improve the grammar and readability of this manuscript; the authors reviewed and verified all content."

Failure to disclose AI use where required may be considered an ethical concern and will be handled in accordance with the journal's Publication Ethics and Malpractice Statement.

3. Responsibility and Accountability

The use of AI-assisted technologies does not alter the ethical responsibilities of authors, reviewers, or editors. Whatever tools are used, responsibility for the content of a manuscript always rests with its human contributors.

Manuscripts that involve undisclosed, unethical, or inappropriate use of AI may be rejected or retracted in line with the journal's Publication Ethics and Malpractice Statement.

4. Use of AI by Reviewers

Reviewers must not use AI tools to generate peer-review reports, and editors must not use them to generate editorial decisions. Reviewers and editors must not upload manuscripts, or any part of them, to AI systems, as doing so may compromise the confidentiality of the work under review.

The peer-review and editorial decision-making processes rely on human judgment, expertise, and accountability.

5. Use of AI by Editors

Editors may use AI tools to support editorial workflows (e.g., plagiarism detection, administrative screening) but must not rely on AI for final editorial decisions. All decisions regarding manuscript acceptance, revision, or rejection are made by human editors.

6. Ethical Concerns and Misuse

Suspected misuse of AI tools, including undisclosed or unethical use, will be handled in accordance with the journal’s Publication Ethics and Malpractice Statement. This may result in rejection, retraction, or other appropriate editorial action.

7. Alignment with International Standards

This policy is informed by internationally recognized guidelines and recommendations on the responsible use of artificial intelligence in scholarly publishing, including the following resources:

  • STM: Recommendations for a Classification of AI Use in Academic Manuscript Preparation

  • Elsevier: The use of generative AI and AI-assisted technologies in the review process

  • WAME: Chatbots, Generative AI, and Scholarly Manuscripts

The journal acknowledges that policies on generative AI are evolving and commits to periodically reviewing and updating this policy to remain aligned with emerging standards.