Policies on using artificial intelligence adopted by journals in psychiatry and mental health

dc.contributor.author: Baminiwatta, A.
dc.contributor.author: Costa, C.
dc.contributor.author: Weerasinghe, D.
dc.contributor.author: Arafat, S.M.Y.
dc.contributor.author: Lund, B.D.
dc.date.accessioned: 2026-03-05T08:41:08Z
dc.date.issued: 2025-10
dc.description.abstract:
BACKGROUND: The integration of artificial intelligence (AI) tools in academic publishing is expanding rapidly, raising concerns about authorship, transparency, and editorial standards. Although organisations such as the Committee on Publication Ethics and the International Committee of Medical Journal Editors have proposed guidelines on the use of AI, the extent to which these have been adopted by journals in psychiatry and mental health remains unclear.
OBJECTIVES: To examine the adoption and content of AI policies in psychiatry and mental health journals indexed in SCImago, and to determine whether higher-quartile journals are more likely to include AI-related policies.
METHODS: AI-related policies in the guidelines for authors and reviewers were examined for two groups of journals, all indexed under Psychiatry and Mental Health in SCImago in November-December 2024: (1) a stratified random sample of 200 journals (50 per quartile) drawn from a total of 578 journals, and (2) the 25 top-ranked journals in psychiatry and mental health.
RESULTS: In the first group, 78 (39%) journals included AI-related policies in their guidelines or instructions for authors and reviewers, with adoption greater in higher-quartile journals (56% in Q1 versus 20% in Q4; χ² = 14, P = .003). Of these 78 journals, 69 (88.5%) disallowed naming AI tools as authors, an equal number mandated disclosure of AI use, and 58 (74.4%) emphasised author accountability. Peer review policies mostly prohibited AI use (n = 47); AI-assisted copy editing was permitted in 56 journals; and policies on AI-generated images varied. None reported using AI detection tools. Among the top 25 journals, 16 (64%) included AI-related policies; all prohibited AI authorship and required disclosure; and one reported using AI detection tools.
CONCLUSION: Despite the rising use of AI in publishing, most psychiatry and mental health journals, especially lower-quartile journals, lack policies on such use. Wider adoption and standardisation of AI-related policies are crucial to ensuring research integrity and credibility.
dc.identifier.citation: Baminiwatta A, Costa C, Weerasinghe D, Yasir Arafat SM, Lund BD (2025) Policies on using artificial intelligence adopted by journals in psychiatry and mental health. European Science Editing 51: e165365. https://doi.org/10.3897/ese.2025.e165365
dc.identifier.issn: 2518-3354
dc.identifier.uri: http://repository.kln.ac.lk/handle/123456789/31144
dc.language.iso: en
dc.publisher: European Science Editing 51
dc.subject: AI policy
dc.subject: artificial intelligence
dc.subject: large language models
dc.subject: MEDICINE::Psychiatry::Child and adolescent psychiatry
dc.subject: publishing ethics
dc.title: Policies on using artificial intelligence adopted by journals in psychiatry and mental health
dc.type: Article

Files

Original bundle

Name: Policies on using artificial intelligence adopted by journals in psychiatry and mental health.pdf
Size: 187.44 KB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 1.71 KB
Format: Item-specific license agreed upon to submission