In the fall of 1994, a group of researchers in the United States and Japan, led by the psychologist George M. Diekhoff, conducted a survey of cheating behaviors among students in introductory psychology courses at Midwestern State University, in Texas. A year later, they conducted the same survey in related courses at three Japanese universities. In total, they obtained survey data from close to 700 students, which gave the team a good-sized pool from which to draw a few conclusions about cheating.
The researchers followed the usual procedure for measuring cheating rates in higher education: They listed a menu of academically dishonest behaviors on a survey, and asked students to acknowledge whether or not they had engaged in any of those behaviors in their college courses. The survey also collected some simple demographic data from each respondent. That basic strategy, which has been deployed by cheating researchers for more than 50 years now, allows scholars to understand more clearly which types of students are more likely to cheat.
In the Diekhoff study, the students in the American courses were, on average, younger than their Japanese counterparts. Because survey research on cheating has regularly found that older students cheat less frequently than younger ones, the researchers hypothesized that they would find higher rates of cheating among the American students.
What the study found, however, was quite the opposite. While 26 percent of the American students admitted to cheating on at least one exam, a whopping 55 percent of Japanese students made the same acknowledgment. The results, reported in a 1999 edition of the journal Research in Higher Education, should cause us some head-scratching. What accounts for the substantial difference in cheating rates in that study—a difference that seems to defy a well-established finding from previous research?
We can certainly theorize about cultural differences to explain the findings. The researchers, for example, discuss cultural differences in possible factors such as “social stigma” and “group or team orientation.”
But they also point to a key difference between the American and Japanese classes. A closer look at that difference will help us draw out the first of five contextual features of a learning environment that my own research suggests may affect cheating.
I argued in Part 1 of this series that, to better understand how to respond to cheating, we should focus on the learning environments that we create and the extent to which they might play a role in inducing students to cheat. Research on the demographics of cheating students—that older students cheat less than younger ones, for example—may prove helpful to us in designing academic-integrity campaigns on campus, or in determining the target audiences for those campaigns. But what would prove most helpful to teaching-faculty members would be an understanding of how to design or modify our courses in ways that would encourage academically honest work.
So consider the key difference that Diekhoff and his colleagues describe between the American and Japanese courses they studied: In contrast to colleges in the United States, they wrote, Japanese “professors rarely give regular exams and pop quizzes; therefore, final exams are heavily weighted in determining grades. Thus, studying is not a daily habit for many Japanese students, who study only before major exams. Passing these exams is the primary measure of academic success and the pressure to pass a major exam can be enormous.”
In the conclusion to their article, the researchers noted this key difference as a likely explanation of their unexpected findings: “The Japanese student whose academic success is evaluated by his or her performance on a single major exam may well experience more pressure to cheat than does the American student whose grade is based on a series of shorter exams, quizzes, homework assignments, and the like.”
That explanation strikes me as intuitively sensible. The fewer opportunities that students have to earn their grade in a course, the more pressure they feel to perform on each exam or assignment. And the more pressure they feel on each exam or assignment, the more likely they are to seek success by any means necessary, including cheating.
Flipping that conclusion around allows us to establish our first principle about the types of learning environments that are likely to induce student cheating: ones that depend on infrequent, high-stakes assessments.
I used the more general language of “assessment” deliberately there, since I don’t think this principle depends solely on students taking a heavily weighted exam. Any course that gave students a very small number of high-stakes assessments, with just two or three opportunities to earn their grade, would seem to put greater pressure on them to succeed on each one, and hence would intensify the temptation to cheat.
As a point of comparison for their theories, cheating researchers like to consider the historical example of the Chinese civil-service exams, which ran from the 7th century to the beginning of the 20th century. The civil-service exams offered a route (albeit a long and arduous one) to lucrative and stable positions in the Chinese government, and theoretically opened a pathway to success even to the most humble Chinese peasant. Candidates had to take the exams in multiple locations over many years, from qualifying exams out in the provinces to “final exams” at the palace. The exams took many forms, but typically required test takers to write essays on Confucianism, compose verse on specific subjects, and reproduce imperial documents from memory.
In all cases the exams were held infrequently—sometimes as far apart as two or three years. The stakes, meanwhile, were incredibly high. Passing the exams led to a lifelong career; failing them meant returning home in disgrace, and then a decision about whether to devote another two or three years (without income) to studying for the next round. A small subgenre of Chinese literature is devoted to chronicling the lives of failed exam takers.
Because of the crucial role the exams played in determining the composition of the civil service, measures to prevent cheating on the exams were rigorous and punishments were incredibly harsh—up to and including the death penalty. And yet, in spite of such elaborate measures and draconian punishments, cheating was rampant on the exams. Almost every type of cheating we see in our students today, from purchasing prewritten essays to the use of various types of “cheat sheets” (electronic or otherwise), existed in some form on the civil-service exams.
Researchers Hoi K. Suen and Lan Yu argue, in a 2006 article from the Comparative Education Review, that cheating on the civil-service exams stemmed precisely from their infrequent and high-stakes nature. In fact, they suggest, problems with cheating “appear to be so inherently chronic under high-stakes conditions that they defy preventive measures.” The implication of their analysis seems to be that we have no choice but to accept cheating in environments that feature such high-stakes testing.
That certainly may be true for high-stakes tests administered by testing agencies or required by accrediting bodies, such as college entrance exams or medical boards. Fortunately for most of us in higher education, we do not teach under those conditions. Most of us have the ability to design our courses as we see fit, and to make our own decisions about the number and frequency of assessments.
The research I’ve cited here supports a clear principle: Offer students frequent, low-stakes opportunities to demonstrate their learning to you. The more assessments you provide, the less pressure you put on students to do well on any single assignment or exam. If you maintain a clear and consistent academic integrity policy, and ensure that all students caught cheating receive an immediate and substantive penalty, the benefit of cheating on any one assessment will be small, while the potential consequences will be high.
By contrast, if you limit your assessments to just two or three multiple-choice exams over the course of a 15-week semester, you are putting intense pressure on each of those assessments—and, at least according to this research, that pressure may incline your students toward cheating.
I can anticipate two objections to this argument. First, faculty members in some disciplines might rightly argue that they have to ready their students for high-stakes licensing or admission exams, and that they would do their students a disservice by not helping them prepare for such external exams with high-stakes assessments in their courses.
That makes sense, and so I will not argue that we can or should eliminate high-stakes assessments entirely. Instead, we should prepare students for such high-stakes exams with more frequent, low-stakes tests in and outside of class. If you teach students who are preparing for a high-stakes licensing or admission exam, give them frequent opportunities to practice the skills they will need for that exam. Offer them regular quizzes in the same format as the exam, for example, or use multiple-choice clicker questions in class to help students slowly build up the confidence and skills they will need to succeed on the external exam.
But the more serious objection you might make to my argument runs like this: Why should I bother to redesign my courses to include more frequent, low-stakes assessments, which will require more time and effort on my part, just to reduce the already small numbers of students who may be cheating in my courses?
The short answer: You shouldn’t redesign your courses just to reduce cheating. You should redesign them in order to increase learning. As I will argue in the third and final essay in this series: What the research tells us about how to design our courses to minimize cheating is precisely what cognitive theorists tell us about how to design our courses to increase learning.
Author Bio: James M. Lang is an associate professor of English at Assumption College. His new book, “Cheating Lessons: Learning From Academic Dishonesty,” will be published by Harvard University Press later this year.