OpenAI Is Reportedly Working on a Tool to Spot ChatGPT-Generated Text

After some education officials, including in the US, made the decision to block access to the tool in public schools, OpenAI is reportedly working on a way to determine whether a text was produced by its ChatGPT model. Students' use of AI to complete homework assignments has raised concerns among educators about its impact on education; one worry is that such language models can be used to plagiarize existing work or to enable cheating. In response, OpenAI is said to be creating "mitigations" to detect text automatically generated by ChatGPT.

OpenAI has stated that by making ChatGPT available as a research preview, it hopes to learn from real-world use, which it believes is essential to developing and deploying capable, safe AI systems. The company says it is continuously incorporating feedback and lessons learned.

When it comes to AI-generated text, OpenAI has consistently emphasized the value of transparency. The company's policies require users of its API and creative tools to be upfront with their audience about the use of AI. OpenAI says it looks forward to working with educators on useful solutions and on ways to help teachers and students benefit from AI.

The ability to tell whether written content was produced by a human or a machine would change how AI is used in academia. Schools would have a more reliable way to enforce bans on AI-generated essays, or they might become more open to accepting such work once they better understand the benefits these tools can offer students.

ChatGPT, which OpenAI released in November of last year, is built on an updated version of GPT-3, but it still shares the same limitations as other large language models.