Tools such as ChatGPT threaten transparent science - here are our ground rules for their use

260120231922

created at: 19:22 2023-01-26

Related


On 26-01-2023, news broke that the journal Nature drew a line in the sand on AI text generation, such as GPT-3.

Source:

Tools such as ChatGPT threaten transparent science; here are our ground rules for their use

As researchers dive into the brave new world of advanced AI chatbots, publishers need to acknowledge their legitimate uses and lay down clear guidelines to avoid abuse.

  1. Artificial intelligence is gaining the ability to generate fluent language, making it difficult to distinguish from text written by people.
  2. The release of the AI chatbot ChatGPT has brought the capabilities of large language models (LLMs) to a mass audience.
  3. ChatGPT can write presentable student essays, summarize research papers, answer questions, and generate helpful computer code.
  4. There are concerns that students and scientists could deceive others by passing off LLM-written text as their own, or using LLMs in a simplistic fashion.
  5. Several preprints and published articles have already credited ChatGPT with formal authorship.
  6. There is a need for researchers and publishers to establish ethical guidelines for using LLMs.
  7. Nature and other scientific publishers have formulated two principles for ethical use of LLMs: no LLM tool will be accepted as a credited author on a research paper, and researchers using LLMs should document this use in the methods or acknowledgements sections.
  8. It is currently difficult to reliably detect text generated by LLMs, though AI researchers may find ways to address this detection problem in future.
  9. LLMs will improve, and there are hopes that creators of LLMs will be able to watermark their tools’ outputs in some way (a sketch of how such a watermark might work follows this list).
  10. Research must have transparency in methods, and integrity and truth from authors, to maintain trust and advance science.
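To make point 9 concrete, here is a minimal, hypothetical sketch of the "green-list" style of statistical watermarking that has been proposed for LLM outputs; it is not any vendor's actual scheme, and the toy vocabulary, the hash-based partition, and the names `green_list` and `watermark_z_score` are all illustrative assumptions. The idea: the generator biases sampling toward a pseudo-random "green" subset of tokens seeded by the preceding token, and a detector that knows only the seeding rule can check whether green tokens appear more often than chance.

```python
import hashlib
import math
import random

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Pseudo-randomly partition the vocabulary, seeded by the previous token.

    A watermarking generator would boost these 'green' tokens when sampling;
    a detector can recompute the same partition without access to the model.
    """
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = vocab[:]  # copy so the shared vocabulary list is untouched
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def watermark_z_score(tokens: list[str], vocab: list[str]) -> float:
    """Count tokens that fall in their green list and compare against the
    ~50% expected by chance; a large z-score suggests watermarked text."""
    n = len(tokens) - 1
    if n <= 0:
        return 0.0
    hits = sum(
        tok in green_list(prev, vocab)
        for prev, tok in zip(tokens, tokens[1:])
    )
    expected, stddev = 0.5 * n, math.sqrt(0.25 * n)
    return (hits - expected) / stddev

# Toy usage with a tiny word-level "vocabulary".
vocab = "the a cat dog sat ran on mat rug quickly slowly".split()
sample = "the cat sat on the mat".split()
print(f"z-score: {watermark_z_score(sample, vocab):.2f}")
```

Ordinary human text should land near a z-score of 0, while text generated with the matching green-list bias would score several standard deviations higher; a real scheme would hash the model tokenizer's actual token IDs and tune the green fraction and bias strength.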

Point 4 highlights the concern that researchers and students could deceitfully pass off text generated by large language models (LLMs) as their own work, or use LLMs in a simplistic fashion and so produce unreliable results. The concern is pressing because LLMs such as ChatGPT can generate presentable student essays, summarize research papers, answer questions well enough to pass medical exams, and produce helpful computer code. ChatGPT has even produced research abstracts so well written that it is hard to spot that a computer wrote them. This raises ethical questions about the use of such tools in academic research and the potential for deception and unreliable results. To address the concern, Nature and other scientific publishers have formulated guidelines (point 7) that prohibit crediting any LLM as an author on a research paper and require researchers to document their use of LLMs in the methods or acknowledgements sections.