Interesting Exercise with Artificial Intelligence Writing

With a new school year coming up, I decided to play with ChatGPT a bit to see if I could notice any patterns that might help me recognize when students use it to “help” them with their writing assignments. I know there are tools designed to detect AI-generated writing, but they are not as reliable as I would like them to be. This is because you can ask ChatGPT (and other AI programs) to write in a particular style, which can often “humanize” the writing to the point where even those tools can’t recognize it as computer-generated.

For example, I asked ChatGPT to write an explanation of Newton’s Second Law, with an example problem, in the style of Dr. Jay L. Wile (that’s me). Here is what I got. First, I was gratified to see what ChatGPT wrote before giving me the essay:

Dr. Jay L. Wile, known for his clear and accessible explanations in science, would likely present Newton’s Second Law in a straightforward and educational manner.

It also ended with a nice statement:

Dr. Jay L. Wile’s explanations often include practical examples and clear step-by-step problem-solving methods to help students grasp complex concepts effectively.

However, if I look at the essay, I don’t see a lot of my style in it. It does have a less-than-formal tone, which is typical of me, but that’s about it. Interestingly enough, when I put this essay (minus the parts quoted above) into two programs designed to detect whether or not the essay was written by AI, Grammarly’s tool said only 12% of it was AI-generated, while Phrasly said it was 50% AI-generated. Both were wrong, because it was 100% AI-generated.
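(In case you aren’t familiar with the physics: Newton’s Second Law states that the net force on an object equals its mass times its acceleration, F = ma. A typical example problem of the sort I requested might be: a 10-kg cart accelerating at 2 m/s² requires a net force of F = (10 kg)(2 m/s²) = 20 N. That quick illustration is mine, not an excerpt from the AI’s essay.)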

Since I was struck by the fact that ChatGPT added notes about how I write, I wondered if it would recognize other scientific authors. So I asked ChatGPT to write the same essay, but in the style of other authors. Some were authors of homeschool science material, while others were authors of well-known university physics texts. Of the six authors I tried, it added statements about only one, Dr. Raymond A. Serway. It mentioned his organized, methodical style, which I agree with (I used his text in my calculus-based physics courses when I taught at the university level).

Even though ChatGPT didn’t add statements about the other five authors, the style of each essay was slightly different. I don’t know whether ChatGPT failed to recognize those authors and simply varied its writing style, or whether it recognized them but chose not to comment on their writing. On a whim, I decided to ask it to write the same essay in the style of an author who doesn’t write about science, Andrew Pudewa. I got his permission to share what it came up with. I am familiar with Andrew’s work, and once again, I don’t see a lot of his style in this essay. Nevertheless, note that it did add a comment about his writing style:

Andrew Pudewa, the founder of the Institute for Excellence in Writing, is known for his clear, engaging explanations that make complex ideas accessible…This style of explanation emphasizes clarity and relatability, making complex concepts more understandable and engaging.

I would agree with that assessment.

Now, of course, ChatGPT is free, so I am sure there are better AI tools out there. Nevertheless, based on this little test, I am not very impressed with AI writing. At the same time, however, AI does seem to be accurate at evaluating an author’s style, at least when it decides to offer an evaluation.

3 thoughts on “Interesting Exercise with Artificial Intelligence Writing”

  1. I had a student turn in a science research paper that was AI-generated, and obviously so. Some of the hallmarks I noticed were complex sentence structures that are not typical of this student, or of other 14-year-olds, including a sentence with both a semicolon and a colon used correctly. Also, though this was a semester-long project, all the materials cited in the footnotes were accessed on the Monday before the Wednesday deadline. In addition, the sources used were not current, and some were behind the paywalls of journals with significant costs. I also typed the title of the paper into ChatGPT as a prompt, and it spit out an almost identical paper. And, as you mentioned, the writing was not impressive or remarkable, just very generic; with the exception of the information cited from journals behind paywalls, it was nothing someone couldn’t easily Google or find on Wikipedia.

  2. I had a similar experience with these AI detectors not being reliable. One took a completely AI-generated paragraph and reported it as 100% human. Another ingested a math lesson I wrote in 2019 and said it was 96% confident that it was AI-generated. (I’m a former math teacher and a novelist of 30+ years, so I’m trying to take that assessment as a compliment!) You’re right, AI certainly presents dilemmas when it comes to student cheating, and no one has made good detection tools yet. As much as we wish our students would write correctly, it’s certainly a tip-off when it’s too perfect, lol.
