By Bruce H. Tsuji, Instructor, Department of Psychology

Although cheating is one of the “zombie ideas” that emerge repeatedly in conversations about online teaching (Lalonde, 2020), it is important to keep in mind that academic integrity violations happen in both online and offline environments. Yes, of course, cheating happens in proctored face-to-face assessments too! However, since many Carleton instructors will soon be dealing with online assessments for perhaps the first time, I thought a list of some of the approaches I have used to limit the problem could be useful.

The 15,928 students in my online courses at Carleton since 2014 have taught me much more than I have taught them. Be warned, however, that just as there is no silver bullet for academic integrity violations in face-to-face environments, there is no one thing that will suffice online. The following should be viewed as part of an arsenal of tools to battle cheating, but it is unlikely that one can ever hope to eradicate the beast.

(Also, assessment more generally is a huge topic. As a result, I will not deal here with remote proctoring and plagiarism-detection tools like TurnItIn; these kinds of tools are fraught with ethical, technical and possibly even equity issues that deserve a separate and broader discussion. I will also leave out any consideration of cuPortfolio and peer assessment, nor will I discuss the obvious fact that one’s learning outcomes should be paramount when planning assessments.)

First and foremost, talk about cheating in your course! Many students simply do not understand the many types of academic integrity rat holes they can find themselves in. Some specific examples of plagiarism or cheating for your particular assessments can be very useful. One of my students was very surprised when, in repeating one of my courses, her re-submission of a previous assignment resulted in an alleged academic integrity offence. Since that incident, I always make clear that copying one’s own previous work is also considered plagiarism.

I include an honour code to which students must agree before any other elements in cuLearn are opened. The text I use looks like this:

I agree to abide by the following code of conduct in <CourseName>:

  1. My answers to questions, exercises and assignments will be my own work.
  2. I will NOT share questions, answers or assignments with anyone else or post them anywhere on the internet.
  3. I will NOT share course content (videos, lecture slides, or any other material) with anyone else or post them anywhere on the internet.
  4. I am aware of sanctions that may be used if I engage in any activity that will dishonestly improve my results in this course.

You must select an option below then click Submit to continue:

  • I do not agree
  • I agree

When students click on “sanctions” they are taken to the university’s formal statements on academic integrity. While far from guaranteeing compliance, the honour code repeats and reinforces the seriousness of the topic. Unfortunately, many students (and many of us as well) have become anesthetized to End-User License Agreements (EULAs), and the honour code may end up in a similar mental category. Even so, its brevity, coupled with the fact that students cannot progress further without clicking the appropriate choice, may help it achieve at least some of its goals.

While there are many reasons why high-stakes testing fails to serve the needs of students (see, for example, National Academies of Sciences, Engineering, and Medicine, 2018), one more is that exams worth 40, 50, 60% or more of a final grade encourage academic integrity violations by increasing the potential benefit of cheating. Instead, design your course with many small formative assessments and a relatively large number of low-stakes summative ones.

To illustrate, my one-semester Intro Psych course has 87 separate assessments! None of these is worth more than 15% of the final grade and many are worth 0%. Most students, in calculating their subjective risk/benefit ratios, will be somewhat less inclined to cheat on so many assessments of such low individual value. Moreover, key assessments are “daisy-chained,” or linked together, so that students must complete one or more before moving on to the next. For example, each of my tests is daisy-chained to three quizzes, worth 5% each, which together represent three chapters of content. Students must achieve a cumulative score of at least 50% on the three quizzes in order to gain access to the test. This also becomes an important internal check: if students perform surprisingly well on (for example) a set of three quizzes but surprisingly poorly on the summative test, we may be looking at a potential academic integrity violation. The daisy-chain also helps students to understand the formative quizzes as retrieval practice for my summative tests (TeachOnline, 2020).
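For readers who like to see the mechanics spelled out, here is a rough sketch of the two checks just described: gating access to a test on a cumulative quiz score of at least 50%, and flagging attempts where quiz and test performance diverge sharply. This is illustrative only; the field names and the 30% flagging threshold are my own assumptions, not anything built into cuLearn.

```python
# Illustrative sketch only: thresholds and data shapes are assumptions.
QUIZ_MAX = 3 * 10          # three 10-question quizzes feed each test
GATE_THRESHOLD = 0.50      # cumulative quiz score needed to unlock the test
DISCREPANCY_GAP = 0.30     # quiz-minus-test gap that triggers a manual look

def test_unlocked(quiz_scores):
    """Return True if the three prerequisite quizzes total at least 50%."""
    return sum(quiz_scores) / QUIZ_MAX >= GATE_THRESHOLD

def flag_discrepancy(quiz_pct, test_pct):
    """Flag attempts where quiz performance far exceeds test performance."""
    return (quiz_pct - test_pct) >= DISCREPANCY_GAP

# Example: strong quizzes (90%) but a weak test (45%) unlock the test
# but get flagged for a closer look.
print(test_unlocked([9, 9, 9]))        # True
print(flag_discrepancy(0.90, 0.45))    # True
```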

Where possible, introduce a written assignment (or assignments) that requires some element of personal information to be included. This may not be feasible in all programs of study, but in psychology I have asked for personal examples of common psychological concepts, personal introductions describing students’ past experiences or future aspirations, or the classic “ice-breaker” of introducing another student in the class to everyone (this can still be done in a totally online class!). While assignments like these are not immune to academic integrity violations, the risk is the same whether they are assigned online or in a face-to-face classroom.

Although many have decried multiple-choice questions (MCQ), they continue to be a staple of assessment in large classes and particularly in online settings (Bates, 2019). In my own case, I have had online classes of over 1,000 students supported by two or three TAs, so assessment obviously relies heavily on MCQ. Regardless of your personal stance on this subject, there are a number of ways that MCQ may be designed to at least discourage cheating.

First and foremost, I have developed a test bank of over 6,000 MCQ that I use in my online classes, and the questions are constantly being edited and refurbished. Furthermore, every time a student opens a test or quiz that draws from that bank, cuLearn automatically randomizes the order of the questions as well as the order of the alternatives within each question. Even if two students open an exam at the same time, it is unlikely that they will see the same questions in the same order, and even identical questions will not necessarily have the correct alternative (a, b, c, d, etc.) in the same position.
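As a rough approximation of what the LMS does automatically, the sketch below shuffles both the question order and the alternative order independently for each attempt. The question structure and per-attempt seed are invented for illustration; cuLearn’s internal mechanism will differ.

```python
import random

def randomize_attempt(questions, seed):
    """Shuffle question order and, within each question, alternative order."""
    rng = random.Random(seed)                # a different seed per attempt
    qs = questions[:]
    rng.shuffle(qs)                          # question order varies by attempt
    attempt = []
    for q in qs:
        options = q["options"][:]
        rng.shuffle(options)                 # correct answer lands in a random position
        attempt.append({"stem": q["stem"], "options": options})
    return attempt

bank = [{"stem": f"Q{i}", "options": ["a", "b", "c", "d"]} for i in range(10)]
# Two students opening the same test at the same time see different sequences.
print([q["stem"] for q in randomize_attempt(bank, seed=1)][:3])
print([q["stem"] for q in randomize_attempt(bank, seed=2)][:3])
```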

Recognizing that online tests are de facto open-book (see Cormier, 2020), I hired graduate students a few years ago to help me write a set of 600 case-based MCQ. These are MCQ that ask students to apply course concepts to, or derive them from, a paragraph of additional information. They are intentionally written to be relatively resistant to “googling.” Currently, my online tests are conducted via an open-resource protocol: students are allowed to use their texts, their notes and a browser while writing them. My 15% tests normally consist of 30 standard MCQ plus another 12 case-based MCQ and have a 60-minute time limit (with the exception of PMC accommodations). The standard MCQ are drawn at random from the pool of approximately 5,400 questions and the 12 case-based MCQ are drawn at random from the 600-question pool.
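To make the two-pool composition concrete, here is a minimal sketch of how a 42-question test could be assembled. The pool sizes match the numbers above, but the drawing mechanism is my own illustration rather than cuLearn’s actual implementation.

```python
import random

# Hypothetical pools; in practice these would be real question records.
standard_pool = [f"standard-{i}" for i in range(5400)]    # ~5,400 standard MCQ
case_pool = [f"case-{i}" for i in range(600)]             # 600 case-based MCQ

def assemble_test():
    """Draw 30 standard and 12 case-based questions, then interleave them."""
    test = random.sample(standard_pool, 30) + random.sample(case_pool, 12)
    random.shuffle(test)
    return test              # 42 questions; the 60-minute limit is applied by the LMS

print(len(assemble_test()))  # 42
```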

Approximately 8,400 students have written my open-resource assessments since September 2017, and I have distributed a number of surveys to allow students to provide feedback on the concept. Students are not unanimously in favour of my implementation; the most frequent criticisms are the limited time afforded to the tests and the fact that many of the questions require a nuanced understanding of the content. My response to the time criticism is that I have asked several of my TAs to write the tests themselves, and the 60-minute limit is sufficient if one does not look up each and every question. I also tell my students that many of the questions we must answer in job settings are time limited as well, but if they have particular difficulties with respect to English language or cognitive processing, I suggest that they consider making an appointment with the PMC.

I am relatively unconcerned with the second criticism, since I have been able to satisfy myself that the historical proportions of A’s, B’s, and C’s in my courses without the open-resource protocol are not significantly different from the proportions with it. That said, I am constantly editing the items in my test bank, with a particular focus on questions that fail to discriminate well (i.e., questions for which a correct answer is not well correlated with a high overall grade). I repeatedly survey my students, and I have also reassured myself that most of them find the open-resource protocol less stressful and less anxiety-provoking, and that most would prefer their other courses to adopt a similar policy.
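One common way to quantify “fails to discriminate well” is the point-biserial correlation between getting an item right (0/1) and a student’s total score; items with a low or negative value are candidates for editing. The sketch below is a minimal illustration with made-up response data, not the analysis I actually run.

```python
from statistics import mean, pstdev

def point_biserial(item_scores, total_scores):
    """Correlation between a 0/1 item score and the overall test score."""
    mi, mt = mean(item_scores), mean(total_scores)
    si, st = pstdev(item_scores), pstdev(total_scores)
    cov = mean((i - mi) * (t - mt) for i, t in zip(item_scores, total_scores))
    return cov / (si * st) if si and st else 0.0

# Rows = students; columns = items (1 = correct, 0 = incorrect). Invented data.
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
]
totals = [sum(r) for r in responses]
for j in range(4):
    item = [r[j] for r in responses]
    print(f"item {j}: discrimination = {point_biserial(item, totals):.2f}")
```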

Another key to my MCQ is the use of proper names. Once upon a time I used students’ names in my questions as a way of being “cute” and acknowledging students. However, I learned that if students are communicating with each other about particular questions, unique names help them in that endeavour. I now reuse the same generic names (“Jess” and “Taylor”) throughout my MCQ to decrease the probability that someone might successfully ask, “What was your answer for the ‘Bruce’ question?”

Another relatively unpopular choice I have made is to restrict the number of cuLearn questions that are visible at any one time and to turn off the ability to browse forward and backward within tests. I also turn off any indication of the correct answer for any given question. These three design decisions help to reduce students’ ability to take screenshots of my tests and then sell them on the internet (as I discovered to my chagrin in 2016). When students ask why they can’t see the correct answers, I tell them that the evidence is clear: if they are forced to discover the correct answer themselves, they will remember it much longer than if they are simply told the answer. I also encourage them to review their assessments with their TAs or me as a way of initiating a relationship with a member of the university teaching community.

Although it may not reduce cheating per se, I feel that providing a window of time for assessments is an essential element of good online teaching, helping to reduce stress and encourage a sense of agency in students. All of my quizzes (9 x 5%) are available from the beginning of the semester until the end. My tests (3 x 15%) are each available for a 64-hour period (8 a.m. on Day 1 until 11:55 p.m. on Day 3) at an appropriate point in the term. These assessments are still time limited (10 minutes for my 10-question MCQ quizzes and 60 minutes for my 42-question MCQ tests), but the semester-long availability of quizzes also helps to reinforce their use as formative retrieval practice. Since I discuss the concept of distributed practice in the course (the idea that cramming is rarely as effective as memory practice that is distributed over time), this arrangement allows students to select quiz and test times to optimize their memory performance and to best accommodate things like part-time jobs, shared computers or internet, and the challenges of participating from different time zones. One feature I have not yet implemented is asking students to explicitly set their own quiz and test times as a way of honing their planning skills; I hope to do that in the future.

Finally, there are many other types of assessment that may help to mitigate academic integrity violations in online courses, but I have focused here on those that may be most amenable to large classes and their attendant marking challenges. It is also important to note that many of the reports and logs available in cuLearn can be of great assistance. I make my students aware that I know the IP (Internet Protocol) address from which they access the course. I also know the time of day, how long they were online, and how many times they “touched” the course over the semester. Although these data are rather blunt instruments, they do allow me a better perspective on student behaviour than I can ever achieve in a face-to-face classroom.
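To give a flavour of the kind of blunt-instrument summary those logs allow, here is a short sketch that counts “touches” and distinct IP addresses per student. It assumes a hypothetical export of (student, IP, timestamp) rows; cuLearn’s actual report format will differ.

```python
from collections import Counter

# Hypothetical log rows: (student, ip_address, timestamp). Invented for illustration.
log_rows = [
    ("student_a", "134.117.10.5", "2020-10-02 09:15"),
    ("student_a", "134.117.10.5", "2020-10-09 21:40"),
    ("student_b", "99.230.44.17", "2020-10-02 09:15"),
]

def summarize(rows):
    """Print each student's number of course touches and distinct IP addresses."""
    touches = Counter(student for student, _, _ in rows)
    ips = {}
    for student, ip, _ in rows:
        ips.setdefault(student, set()).add(ip)
    for student, count in touches.items():
        print(f"{student}: {count} touches from {len(ips[student])} IP address(es)")

summarize(log_rows)
```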

This brief post barely scratches the surface of this topic, but I hope you might take away one or two ideas. Please let me know what you think at bruce.tsuji@carleton.ca.

Best of luck!

-Bt.

References
