Lack of knowledge about ChatGPT’s real capabilities leads teachers to use AI for activities of which it is incapable; misused, it harms students
Last Tuesday (27), a tweet about a student accused of plagiarism in his TCC went viral. In this case, a teacher used ChatGPT to ask if the work was done by him (ChatGPT ). The artificial intelligence confirmed that yes, she was the author of the text — the problem is that ChatGPT does this with practically every text.
If you enter any text and ask ChatGPT “are you the author?” or “is the text yours?” he will probably say yes. I did tests for Tecnoblog using an unpublished text and an old joke. In both situations, he confirmed that he was the author. However, this only happened when the prompt was at the beginning of a conversation.
The story above had a happy ending. The student accused of plagiarism proved that ChatGPT does not identify plagiarism using a professor’s article. artificial intelligence said that the text was his own. Still, the situation shows that there are many people who do not understand what ChatGPT is for.
ChatGPT and Bing Chat are revolutionaries, but they are not gods
artificial intelligence are increasingly revolutionary and… intelligent, being able to help us in different ways. However, they are not being omniscient and omnipotent: they have limitations. ChatGPT and Bing Chat, for example, struggle to solve calculations more difficult math. And the first is trained with information until December 2021.
As we were unable to contact OpenAI, we interviewed ChatGPT so that he could explain his role himself. In the words of theAI, it was “trained to interact with users through text, providing information, answering questions, making suggestions and performing various tasks related to natural language processing.”
In short, ChatGPT is, at the same time, a Google that delivers ready-made answers (without you clicking on multiple links) and an assistant, being able to write codes, provide text summaries and help you with insights — you can even searchin Google how people are using it to make their lives easier.
So, if the user wants a program to identify plagiarism, she must look for specific tools for this. The situation is even worse when teachers, whether university or not, use ChatGPT for something it is not useful for — and whose limitations generate wrong answers. In the tweet replies, other users reported that they were accused of plagiarism by ChatGPT — the Tecnoblog< i=6> tried to contact these people and the other student, but had no response until publication.
The level of intelligence, the “IQ”, of GPT-4, large model language technology that is the engine of < /span>, is equivalent to a 10 year old child — which helps explain how easy it is to bypass some of its restrictions. What’s more, these two AIs can respond with false information.Bing Chat and ChatGPT Plus
In fact, the correct way to find out if content was created by AI is to use a specific tool for this. But not even the tools created for this are reliable in identifying whether the author was a “robot”.
For ChatGPT, even an unpublished text of mine is his
When I found out about the case, I went to test whether ChatGPT “would accuse me of plagiarismconfirmed that he was the author of content that had not even been published.ChatGPT and started with the prompt “is the following text written by you? ”. This time, Tecnoblog”. In the first test, he responded that he had not written the text. However, I opened a new conversation, I took a text that will soon be published on
Afterwards, I did another test using a very old joke. Again, I needed to open a new conversation and follow the prompt mentioned in the previous paragraph. Yes, the “I dig, you dig” “poem” was written by ChatGPT.
The same method was used to declare himself the author of an excerpt from this text. From the tests, it is clear that ChatGPT has a problem: it can consider itself the author of text that opens a conversation.
I swear I tried to get OpenAI to answer me about the case. However, their support bot, like their main product, has problems. He didn’t understand that I wanted to talk to someone. The chatbot claims it can put me in touch with a team member if it can’t resolve my issue. What I got was a loop of “write me your problem and I’ll see how I can help”. It’s better to get a poop emoji in response.