The pandemic pushed universities to launch or accelerate plans for delivering examinations online. These forced transitions have often been painful, bringing stress and burnout. Exams, in particular, have been a persistent pain point.
We do need to understand students’ achievements to effectively determine, plan and support student learning. Assessment is meant to inform this understanding.
Exams are high-stakes opportunities for generating big “chunks” of evidence of student achievement. Cheating invalidates this evidence, which has knock-on effects at individual, course and program levels.
Academic program reviews, for example, are often guided by analyses of that year’s exam results. Exam data help staff make changes to the program. If a significant percentage of exam scores result from cheating, this can lead to misjudgments about the curriculum and missteps in designing future exams.
What happened during the pandemic?
It’s understandable, then, why many universities have embraced remote proctoring. This involves the use of artificial intelligence software to identify and monitor students during exams. The value proposition of remote proctoring is that it lets us easily replicate, in a virtual setting, the security of an in-person, seated, invigilated exam, wherever our students may be. It seemed like a solution custom-made for the pandemic.
There is some evidence of remote proctoring working as intended. However, we must also consider emerging concerns.
Many students have been hostile to what they see as inappropriate surveillance practices. There are concerns about universities’ uncritical accusations of cheating in “flagged” cases generated by monitoring software.
On the faculty side, it’s becoming clear that remote proctoring does not necessarily lead to less work for staff. It may even increase exam-related workload.
Working in educational assessment for two decades has taught me that cheating on exams is a serious, complex issue. It defies easy solutions.
So why not go back to the old ways?
With enrolments growing and in-person teaching resuming, it’s tempting to return to familiar exam practices. Bringing back traditional examinations, however, invites back other well-documented, chronic problems.
Orchestrating mass, in-person exams presents a huge challenge. Ensuring traditional exams remain relevant to modern competencies is also problematic.
It’s worth asking ourselves: how satisfied were we really with pre-pandemic exam practices?
Out of the many ways we engage learners in higher education, assessment is typically the slowest area to change. As exams are high-stakes, it is unsurprising that they are especially resistant to change.
We are therefore presented with an unusual and timely opportunity. Right now, there is a strong push for systemic improvement of learning, including better assessment.
Let me suggest two connected ways forward on better exam practices. These are not axiomatic instructions. Rather, they are resource-supported ways for institutions and teaching teams to open dialogues and explore solutions that make sense for them and their students.
Make scholarly decisions
Scholarship informs our disciplines. It must also inform assessment within our disciplines.
Scholarship of teaching and learning (SoTL) in higher education is not new. In my experience, SoTL or SoLT has often de-emphasised or failed to include assessment, as the popular forms of the acronym suggest.
Increasingly, we need to embrace SoLTA, that is, scholarship that includes and promotes evidence- and research-supported assessment practices. Embracing SoLTA involves becoming deeply familiar with the best research in assessment and examination practices in higher education and disciplinary contexts. This includes informing practice through consulting highly reputable journals like Assessment and Evaluation in Higher Education.
As with our disciplines, we should see ourselves not just as consumers of knowledge but creators, too. This presents an opportunity for universities to support teachers, including teaching-focused academics, in applying scholarship to teaching.
Don’t reject exams, make them better
Exploring alternatives to exams is sound general advice, but doing so isn’t always feasible. Programs often have rational imperatives for keeping exams in place, including expectations of external accrediting bodies. In these cases, it’s better to seek improvements to exams rather than alternatives.
One route to improvement is adopting good open-book exam practices. For exams with multiple-choice questions, there are solid guidelines for enhancing these. There are even approaches allowing multiple-choice questions to elicit cognitively complex responses.
Two key problems I have found in online exam practices are students using search engines to look up answers, and collusion. One way to resolve the first issue is adopting case-based approaches that use novel material generated specifically for the exam.
Collusion is a tougher nut to crack, but some educators are adopting new approaches to address it. These include running exams divided into sections, with collaboration an anticipated and welcome part of the process.
Business as usual isn’t good enough
Changing assessment is challenging. Higher stakes mean bigger challenges and greater resistance. As universities find their post-pandemic footing, we have a window of opportunity in which we know we must change.
This allows us to answer the question: what’s next for exams? Clinging to new and hastily adopted practices provides an unsatisfying answer. A return to business as usual is no better.
Instead, let’s adopt a scholarship-informed approach to developing our exams and ourselves to better meet an uncertain and challenging post-pandemic future.
This article is republished from The Conversation, the world's leading publisher of research-based news and analysis and a unique collaboration between academics and journalists. It was written by Christopher Charles Deneen, University of South Australia.
Several linked resources in this article are hosted by The University of Melbourne and University of South Australia. I was employed by The University of Melbourne from 2019 to 2021 and am currently employed by University of South Australia. I am an author of or contributor to several linked resources. Authorship and contributions are clearly identified in each resource.