Four Things Teachers Can Do To Avoid AI Cheating

Maya Bialik
8 min read · Jan 19, 2023

With every great leap forward comes an initial panic followed by a slow adjustment. The latest leap forward has come from AI in the form of GPT-3, a language model that can understand and generate language quite convincingly, at least at the level of a K-12 student. The panic in the education world, in this case, comes from the fact that students have already begun using it to write essays for them, and these essays are undetectable by existing plagiarism checkers.

[Header image. Caption: Fun fact, this image was also created with AI.]

As a teacher who has been thinking about AI in Education for a little while, I have a few ideas to help those just entering the panic phase.

First, I want to remind us all that this is not the first innovation to threaten Education As We Know It. Remember when you couldn’t just google any piece of information from your pocket computer? When you didn’t have a calculator with you every day? Technology changes what is available, and with it what is necessary to learn, but it never gets rid of the need to learn. Usually it challenges us to make better learning experiences, and I’m here for that challenge.

For those thinking, “I am not here for that challenge!” — I am certain that there will be TurnItIn-like programs available to us very soon, so in that very concrete way, this problem will soon be at least partially addressed. There are already a few demos out there, but they are still playing catch-up. If that addresses your worry, you can rest easy and stop reading now!

But what about a more fundamental answer to the question?

Here are four ideas to start:

  1. Write questions that involve System 2 thinking
  2. Write questions that catch misconceptions
  3. If you can’t beat ’em, join ’em
    (for students and for teachers)
  4. Create a classroom environment of self-driven learning
    (so they know they are only hurting themselves)

Let’s go through each one.

Write questions that involve System 2 thinking

The way this AI works (at least for now) is actually very similar to how our brains naturally work: it makes associations among all the words it has ever encountered, mapping how each one relates to all the others. It then uses this web of associations to “understand” human language and even generate new combinations of words.
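To make that concrete, here is a toy sketch of a “web of associations” in Python. It only counts which word follows which, which is vastly simpler than a real language model, but the associative core is the same idea (the corpus and function names here are mine, purely for illustration):

    # A toy "web of associations": count which word follows which in a corpus,
    # then predict the next word from those counts. Real language models are
    # far more sophisticated, but the associative core is the same idea.
    from collections import Counter, defaultdict

    corpus = "the bat hit the ball and the ball flew over the fence".split()

    associations = defaultdict(Counter)
    for word, next_word in zip(corpus, corpus[1:]):
        associations[word][next_word] += 1

    def predict_next(word):
        # Return the word most often seen after `word`.
        return associations[word].most_common(1)[0][0]

    print(predict_next("the"))   # 'ball' (seen after 'the' twice)
    print(predict_next("bat"))   # 'hit'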

As much as we like to think we are perfectly rational, psychologists have shown over and over again that we primarily run on quick, snap judgments (using our associations), not slow, deliberate thinking. Take this example:

A bat and a ball together cost $1.10. The bat costs $1 more than the ball. How much does the ball cost?

Even if you chose to pause and calculate that the answer must be 5 cents, you had to stop yourself from mentally blurting out “10 cents”. That’s because “System 1” intuitive thinking accounts for the majority of what our brains actually do. “System 2” thinking, the rational, analytical type of thinking that got you to the correct yet unintuitive answer, is something we have to carefully train in ourselves.
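To spell out the algebra: if the ball costs x, the bat costs x + $1.00, so x + (x + 1.00) = 1.10, which gives 2x = 0.10 and x = 0.05. A quick sanity check:

    # Check the unintuitive answer: ball = $0.05, bat = $1.05.
    ball = 0.05
    bat = ball + 1.00                       # the bat costs $1 more than the ball
    assert abs((ball + bat) - 1.10) < 1e-9  # together they cost $1.10
    print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")

    # The intuitive answer fails: a $0.10 ball means a $1.10 bat,
    # and a total of $1.20 rather than $1.10.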

In fact, one of the purposes of school is to train this System 2 thinking! That’s good news for us.

ChatGPT does not actually fall for the bat-and-ball question. But here is one of my FAVORITE exercises, and it comes from The Upshot at the NYT: you are shown the sequence 2, 4, 8 and must figure out the rule behind it by proposing sequences of your own and being told whether each one follows the rule. Before you read on, please go and try to solve the riddle for yourself. Here are ChatGPT’s responses:

[Screenshots: ChatGPT’s attempts at the riddle]

It fails in EXACTLY the same way that human reasoning fails! The correct answer was just that each number had to be greater than the number before it, but 77% of NYT readers guessed the rule before ever receiving a single “no” for an answer! This means they fell into the trap of confirmation bias: seeking evidence to confirm their theory rather than evidence that would disconfirm it, even though the disconfirming evidence would prove far more useful in coming up with the correct answer.
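To make the trap concrete, here is a tiny sketch of the puzzle in Python (the hidden rule really is this simple; the function name is mine):

    # The hidden rule: each number must be greater than the one before it.
    def follows_rule(seq):
        return all(a < b for a, b in zip(seq, seq[1:]))

    # The seed sequence 2, 4, 8 makes "doubling" feel like the rule.
    print(follows_rule([2, 4, 8]))    # True: also consistent with "doubling"
    print(follows_rule([3, 6, 12]))   # True: a confirming test teaches you little
    print(follows_rule([1, 2, 3]))    # True: not doubling, so "doubling" is wrong
    print(follows_rule([8, 4, 2]))    # False: a disconfirming "no" narrows things down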

Therefore, the first piece of advice is to ask questions that engage that slow, effortful reasoning. Maybe not every single question needs to be like this, but sprinkling in System 2 questions will not only head off easy AI cheating, it will also train in students the parts of their brains that are becoming even more important now that the rest can be automated.

In other words, just as they can google anything but need to know what to google and how to evaluate the results, they will need analytical thinking in order to know what to ask the AI and how to evaluate its responses.

Write questions that catch misconceptions

Because the language model has read a great deal of human-written text, it often has the same misconceptions your students do. For example, when I asked it what would happen if I placed a balloon full of water over a flame, it explained that the water would expand, causing the balloon to pop.

In reality, the water absorbs the heat of the flame and the balloon does not burst. Even an air-filled balloon pops over a flame because the rubber breaks apart, not because the air expands (expansion alone would just make the balloon bigger!).

This answer is remarkably like what the students offer as their predictions when I do this demo with 7th graders.

But maybe this example is particularly difficult because it’s about the physical world, to which AI has limited exposure. Here is an example from the intellectual world, in which GPT confused Hobbes’s and Locke’s beliefs. How could this happen? “Hobbes and Locke are almost always mentioned together, so Locke’s articulation of the importance of the separation of powers is likely adjacent to mentions of Hobbes and Leviathan in the homework assignments you can find scattered across the Internet.” This is the same way the human brain makes mistakes: if two things are mentioned together many times, we might confuse them for each other.

It has always been good practice to track student misconceptions and design questions around them, but with AI this has become even more important. In designing multiple-choice questions, making sure the distractors are not just plausible but represent actual student reasoning will be key to designing a learning environment that teaches students to think for themselves.
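One lightweight way to keep that bookkeeping straight is to store, alongside each distractor, the misconception it is meant to catch. A minimal sketch, with field names that are mine rather than from any particular quiz tool:

    from dataclasses import dataclass

    @dataclass
    class Choice:
        text: str
        correct: bool = False
        misconception: str = ""  # the faulty mental model this distractor catches

    @dataclass
    class Question:
        stem: str
        choices: list

    balloon_q = Question(
        stem="A water balloon is held over a flame. What happens?",
        choices=[
            Choice("It pops because the heated water expands",
                   misconception="liquids expand dramatically when heated"),
            Choice("It pops because the rubber burns through instantly",
                   misconception="flames destroy any material on contact"),
            Choice("It does not pop; the water absorbs the heat", correct=True),
        ],
    )

    # A wrong answer now tells you *which* misconception a student holds.
    for choice in balloon_q.choices:
        if not choice.correct:
            print(f"{choice.text!r} probes: {choice.misconception}")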

If you can’t beat ’em, join ’em

For students:

Because I teach adorable 12- and 13-year-olds, they asked my permission to use “that AI thing that’s banned in some school in Canada” rather than trying to sneak it past me. This gave me the opportunity to give them a purpose-built version of the assignment that incorporated AI.

The assignment in this case was to write a parody song about a scientific concept they had not yet mastered (as evidenced by their last exam). I created a Google Doc with a space for the original lyrics and a space for their science lyrics, making it easy for me to compare the two at a glance. For the students using GPT, I added two more sections before these:

  1. the exact prompt they gave GPT, and
  2. the exact output it gave them.

This way, I could compare the output and the student work and still evaluate them on their scientific knowledge and creativity. There is nothing wrong with starting from some inspiration! The point is for me to see their thinking and help them improve it.

For teachers:

Who says students are the only possible users of AI? It can quite easily help teachers with many tasks that were previously not automatable! As a quick demo, my fiancé and I threw together this tool to play with the idea of feeding the AI a paragraph or so of text and letting it come up with

  • Essential Questions,
  • Learning Objectives, and
  • Aligned Multiple Choice Questions

…that you can then simply export and assign. I’m sure there will be much more in this space that we haven’t even thought of. We called it QuestionWell because it’s a well of questions and also because it should help us teachers figure out how to question students well. Feel free to play with it and let me know what you think!
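Under the hood, the core of this kind of tool can be surprisingly small. Here is a minimal sketch of the “paragraph in, questions out” idea using the openai Python SDK; the prompt wording and model choice are illustrative assumptions on my part, not QuestionWell’s actual implementation:

    # Sketch only: the prompt and model name are illustrative, not QuestionWell's code.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def questions_from_text(passage: str) -> str:
        prompt = (
            "From the passage below, write:\n"
            "1. Two essential questions\n"
            "2. Three learning objectives\n"
            "3. Three aligned multiple-choice questions with plausible distractors\n\n"
            f"Passage:\n{passage}"
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # any chat-capable model works here
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    print(questions_from_text("Water has a high specific heat capacity, ..."))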

I can’t wait for half my job to be automated so I can focus on teaching, learning, and relationships.

Create a classroom environment of self-driven learning (so they know they are only hurting themselves)

This one is a toughie but a goodie if you can do it! Ultimately, it’s very easy for students to lose sight of the purpose of learning and concentrate on pleasing the teacher or earning the highest possible grade. I do not fault students for what some might call “trying to gobble up all the points” or even “grade grubbing.” This is how adults have set up the game, and they’re just playing it! But ultimately, as with any kind of cheating, they are cheating themselves. The trick is getting them to see it that way!

First, it’s important to leave numbers out of the learning process until absolutely necessary. As soon as numbers are assigned, students will add them and average them and come up with a grade for themselves. They will actually go out of their way to use any information you give them to try to come up with a grade.

Avoid it.

Give descriptive feedback and checklists, and top it off with self- and peer-reflections. Assign scaffolded but open-ended work and encourage students to make things they will be proud of. That way, in the end, they could even grade themselves, as long as they do it honestly. (If not, they can have a conversation with you about where your views on their learning differ.)

The more your classroom environment can resemble this (admittedly impossible-to-achieve) utopia, the more students will internalize the idea that learning has intrinsic value, and the less they will feel the urge to cheat, whether with an AI or one of the myriad old-fashioned methods.

What won’t work:

Assign only “truly creative” tasks

We tend to think of machines as incapable of creativity. To some extent this is true: they will not (yet) come up with a truly insightful stand-up routine about something no one has ever thought to point out. However, they are absolutely capable of creativity. After all, creativity is just the act of combining existing concepts in new ways. Any time the model strings words into a sentence and sentences into paragraphs, it is performing a creative act. Mapping ideas, using metaphors, brainstorming: these all play directly to the strengths of this AI.

Assign opinion tasks

The AI is programmed not to say anything controversial, or rather, to always present “both sides”. But this built-in impartiality does not really get around the cheating issue: students can simply delete half of the essay, or ask it for “the reasons for x”, and end up with an “opinion” piece.

Demonize AI as a tool

This is a great leap forward in technology and as such, we haven’t even begun to scratch the surface of what is possible. There are always old people who think any new technology is going to ruin everything. Don’t be like those people. Choose to see the opportunities.


Maya Bialik is the creator of questionwell.ai, and a teacher, author, and speaker making learning meaningful and teaching more enjoyable.