Tools such as ChatGPT threaten transparent science - here are our ground rules for their use
260120231922
created at: 19:22 2023-01-26
Related
- keywords: #promptgpt3 #meta #zettelkasten #1000palavrasoumais #disserte #newsletter #ceticismo #escrita #filosofia #mestredeculturacontemporanea #totalizante #criatividade #episteme
- notes: I asked ChatGPT to recommend Pentiment to a friend on vacation
- Content, by Kate Eichhorn
- it's the content killing the culture - wisecrack edition
- The rise of the content industry
- the long tail theory and the content industry
- philosophy of technology
- Abstracts written by ChatGPT fool scientists
- How to spot AI-generated art, according to artists
- ALARMED BY GPT-3, UNIVERSITIES BEGIN TO REVISE HOW THEY TEACH - article
26-01-2023: the news came out that the journal Nature drew a line in the sand on AI text generation, such as GPT-3
source
Tools such as ChatGPT threaten transparent science; here are our ground rules for their use
As researchers dive into the brave new world of advanced AI chatbots, publishers need to acknowledge their legitimate uses and lay down clear guidelines to avoid abuse.
- Artificial intelligence is gaining the ability to generate fluent language, making it difficult to distinguish from text written by people.
- The release of the AI chatbot ChatGPT has brought the capabilities of large language models (LLMs) to a mass audience.
- ChatGPT can write presentable student essays, summarize research papers, answer questions, and generate helpful computer code.
- There are concerns that students and scientists could deceive others by passing off LLM-written text as their own, or could use LLMs in a simplistic fashion and produce unreliable work.
- Several preprints and published articles have already credited ChatGPT with formal authorship.
- There is a need for researchers and publishers to establish ethical guidelines for using LLMs.
- Nature and other scientific publishers have formulated two principles for ethical use of LLMs: no LLM tool will be accepted as a credited author on a research paper, and researchers using LLMs should document this use in the methods or acknowledgements sections.
- It is currently difficult to reliably detect text generated by LLMs, but AI researchers may eventually develop dependable detection methods.
- LLMs will improve, and there are hopes that creators of LLMs will be able to watermark their tools' outputs in some way; a toy sketch of one proposed watermarking scheme follows this list.
- Research must have transparency in methods, and integrity and truth from authors, to maintain trust and advance science.
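The editorial does not say how output watermarking might actually work. As one illustrative possibility, here is a minimal Python sketch of the "green-list" statistical watermark proposed in the research literature (e.g., Kirchenbauer et al., 2023): the previous token deterministically splits the vocabulary into a favored "green" half, the generator prefers green tokens, and a detector measures how "green" a text is. The tiny vocabulary, the toy generator, and all function names are assumptions for illustration, not any real model's API.

```python
import hashlib
import random

# Toy vocabulary standing in for a real model's token set.
VOCAB = ["the", "a", "science", "model", "text", "research", "data", "is"]

def green_list(prev_token: str, vocab=VOCAB) -> set:
    """Deterministically pick half the vocabulary as 'green', seeded by the
    previous token, so a detector can recompute the same split later."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = rng.sample(vocab, len(vocab))
    return set(shuffled[: len(vocab) // 2])

def generate_watermarked(n_tokens: int, start: str = "the") -> list:
    """Toy 'watermarked' generator: always samples the next token from the
    green list seeded by the previous token."""
    out = [start]
    rng = random.Random(0)
    for _ in range(n_tokens - 1):
        out.append(rng.choice(sorted(green_list(out[-1]))))
    return out

def green_fraction(tokens: list) -> float:
    """Detector side: fraction of tokens that fall in the green list of their
    predecessor. Watermarked text scores near 1.0, ordinary text near 0.5."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev))
    return hits / max(len(tokens) - 1, 1)

if __name__ == "__main__":
    marked = generate_watermarked(20)
    plain = random.Random(1).choices(VOCAB, k=20)
    print(f"watermarked: {green_fraction(marked):.2f}")  # expected 1.00
    print(f"unmarked:    {green_fraction(plain):.2f}")   # expected around 0.50
```

A real scheme would bias the model's logits at sampling time rather than sampling only green tokens, and the detector would use a statistical test (e.g., a z-score on the green-token count) instead of a raw fraction; the sketch only shows why the signal survives into the text and can be checked afterwards.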
The fourth point above highlights the concern that researchers and students could deceitfully pass off text generated by large language models (LLMs) as their own work, or could use LLMs in a simplistic fashion and produce unreliable results. This is a concern because LLMs such as ChatGPT can generate presentable student essays, summarize research papers, answer questions well enough to pass medical exams, and write helpful computer code. ChatGPT has even produced research abstracts written well enough that it is hard to spot that a computer wrote them. This raises ethical questions about the use of these tools in academic research and the potential for deception and unreliable results. To address the concern, Nature and other scientific publishers have formulated guidelines that prohibit crediting LLMs as authors on research papers and require researchers to document any use of LLMs in the methods or acknowledgements sections of their papers.