Artificial Intelligence (AI)

Revista Opinião Jurídica is monitoring the continuous development of this area and will review and update this policy accordingly, including with regard to the tools and technologies used to detect possible unethical conduct.

Authorship aspects

Authorship is reserved for creations of the human mind and attributes responsibility for the work to its authors. Artificial Intelligence (AI) models based on machine learning, such as Large Language Models (LLMs) like ChatGPT, do not meet this authorship criterion.

AI-assisted tools may be used for foreign-language translation and to check human-generated documents for errors of grammar, spelling, punctuation, and tone. Such AI-assisted improvements may include changes to the wording and formatting of texts, but they may not extend to generative editorial work or autonomous content creation.

AI tools may also provide data-based guidance to support human decision-making.

The use of LLMs should be explicitly disclosed in an explanatory footnote linked to the title of the manuscript. The methodology section of the paper should also describe how the AI tools were employed. Transparency about these processes should ensure technical robustness and rigorous data governance.

The responsible application of this technology requires human supervision, checks, and monitoring. In all cases, a human must take responsibility for the final version of the text, and the author must agree that the edits reflect their original work.

Accountability must ensure that the technology is fair and non-discriminatory, and that any bias in the data sources and potential bias in the design of the tools are identified and corrected. When this is not possible, transparency about the limitations of the technology is essential.

Any ethical violations identified, including in the future through verification technologies yet to be developed, are subject to sanctions proportional to the seriousness of the transgression. These may include warnings, a ban on submitting or publishing articles in Revista Opinião Jurídica, as author or co-author, for a period of up to four (4) years, or retraction, as set out in the retraction policy (see conditions for submission).

Peer review aspects

Revista Opinião Jurídica does not recommend using AI to check the information in manuscripts under evaluation, as generative AI tools have considerable limitations: they may lack up-to-date knowledge and may produce nonsensical, biased, or false information. In addition, manuscripts may contain confidential information that should not be shared outside the peer review process. For these reasons, Revista Opinião Jurídica asks reviewers not to upload manuscripts to generative AI tools. Reviewers are requested to disclose in the peer review report any instance in which an AI tool was used to evaluate the claims made in the manuscript.