Time to move on from multiple-choice questioning?

Is multiple-choice questioning still a thing now we've got AI?

This week, David takes a look at the evolution and potential obsolescence of multiple-choice questions in education. Traditionally, these questions served as efficient tools for assessing student knowledge, prized for their speed and ease of grading. However, they fall short in evaluating higher-order thinking skills like critical analysis and creativity. With the advent of AI, we now have the capability to assess complex, open-ended responses with a high degree of accuracy. So the question is: have we reached the end of multiple-choice questioning?

Recently, I found myself in the midst of a creative frenzy during one of our prompt jams, designing a tutor that could ask multiple-choice questions and grade them. I've been a teacher for over two decades, and I'd never really considered the purpose of multiple-choice questions beyond the fact that they existed and we used them. As I dove deeper into this project, a thought struck me: are multiple-choice questions really the best we can do? Or are they simply an artefact of a time before the advent of AI?

Pedagogy

Let's start with the pedagogical reasons behind the use of multiple-choice questions. Historically, they were seen as a practical tool for assessing knowledge across a wide range of subjects. These questions could test recall, comprehension, and even some degree of analysis and application. They allowed educators to quickly gauge student understanding and identify areas of weakness, especially where students may not yet have enough knowledge to construct a full answer for themselves in one go.

Research does suggest some benefits to this method. For example, multiple-choice questions can help reinforce learning through repetition and immediate feedback. According to a study by Roediger and Butler (2011), repeated testing with multiple-choice questions can enhance long-term retention of information. They also standardise testing, which can be crucial for ensuring fairness in large-scale educational settings. However, it's important to note that while multiple-choice questions can be effective for assessing lower-order thinking skills, they often fall short when it comes to evaluating higher-order cognitive processes like critical thinking, synthesis, and creativity (Biggs & Tang, 2011).

Those higher-order concepts from Bloom's taxonomy sit a step above what multiple-choice can assess, and that's a shame. But we all know that once a teacher's done their multiple-choice quiz, we get into the nitty-gritty and start asking those more comprehensive, open-ended questions that really push the learners.

Speed

Speed, I believe, might have been the main driving force behind the widespread adoption of multiple-choice questions. When teachers have to grade each student's work by hand, it makes a lot of sense to reduce the cognitive overhead of giving feedback: the longer the gap between completing work and receiving feedback, the less impact that feedback has on improvement. The efficiency of multiple-choice questions was likely a game-changer. With a simple answer key, teachers could quickly mark tests, saving countless hours.

Beyond manual marking, the introduction of Optical Mark Recognition (OMR) sheets streamlined the process further. These sheets allowed for the rapid assessment of large groups of students with minimal human intervention. This method not only sped up grading but also reduced the potential for human error. According to a report by the National Center for Education Statistics (NCES), standardised tests using multiple-choice questions can be graded up to ten times faster than those requiring essay responses.

It's no shock, then, that in a world that ran primarily on pen and paper, the multiple-choice question had its place and was seemingly king for a very long time.

Computers

When computer-based assessment became popular, multiple-choice questions were a natural fit. Early computer systems struggled to process and understand free-form text. Multiple-choice questions, with their fixed answers, gave computers a straightforward way to evaluate student performance.

The rise of e-learning platforms saw a proliferation of multiple-choice questioning products. Tools like Moodle, Blackboard, and Canvas made it easy for educators to create and administer multiple-choice tests. These platforms allowed for automated grading, immediate feedback, and detailed analytics on student performance. They became a staple in online education, standardised tests, and even professional certification exams.

These assessments are now ubiquitous, but have we turned a corner and outgrown the need for this type of assessment altogether?

AI for the win

But then came AI, and everything changed. AI has the remarkable ability to understand and evaluate complex, open-ended responses. It can assess essays, short answers, and even creative writing with a level of accuracy that rivals human graders. In our testing at Mindjoy we've seen roughly a 10% variance in AI marking consistency, meaning that whilst it might not be ready to do our marking for us, it has a good place in formative assessment. This has opened up a world of possibilities for more nuanced and comprehensive assessment methods.

AI can provide personalised feedback, helping students understand not just what they got wrong, but why they got it wrong. This kind of detailed feedback is invaluable for learning and improvement, and it's something you don't really get with multiple-choice questions; the feedback AI gives on written answers is far more nuanced. Moreover, AI can adapt to each student's learning style and pace, offering customised questions that challenge and engage them in ways that multiple-choice questions simply can't. For example, with Mindjoy it's simple to build a tutor that personalises learning experiences based on student performance and engagement data.
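
To make that concrete, here's a minimal sketch of what LLM-based marking with "why, not just what" feedback might look like. It assumes the OpenAI Python SDK and an API key in your environment; the question, mark scheme, prompt wording, and function name are illustrative inventions for this post, not Mindjoy's actual implementation.

```python
# A minimal sketch of LLM-marked open questions with feedback.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable. The mark scheme and prompts
# are illustrative only, not how Mindjoy actually works.
from openai import OpenAI

client = OpenAI()

MARK_SCHEME = """Award up to 3 marks:
1 mark: identifies that photosynthesis needs light.
1 mark: names chlorophyll as the light-absorbing pigment.
1 mark: states that glucose (and oxygen) is produced."""

def mark_answer(question: str, student_answer: str) -> str:
    """Ask the model for a mark plus 'why', not just 'what', feedback."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model would do here
        messages=[
            {"role": "system", "content": (
                "You are a teacher marking a short-answer question. "
                "Apply the mark scheme strictly, give a mark out of 3, "
                "then explain to the student why each mark was gained "
                "or lost."
            )},
            {"role": "user", "content": (
                f"Question: {question}\n"
                f"Mark scheme:\n{MARK_SCHEME}\n"
                f"Student answer: {student_answer}"
            )},
        ],
    )
    return response.choices[0].message.content

print(mark_answer(
    "Explain how plants make their food.",
    "Plants use sunlight and their green leaves to make sugar.",
))
```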

So, is AI suited to multiple-choice questions? Absolutely. AI can create, administer, and grade these questions with incredible efficiency. Heck, it can even generate them dynamically, on the fly, with varying degrees of similarity between the options. But here's the thing: it can do so much more. AI can analyse patterns in student responses, identify areas where students struggle, and even predict future performance. It can facilitate a deeper, more personalised learning experience that goes beyond the limitations of multiple-choice questions.
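
For the multiple-choice side, dynamic generation is just another prompt away. Here's a hedged sketch, reusing the client from the marking example above; steering difficulty via distractor similarity is the idea being illustrated, and the prompt wording is mine, not a documented Mindjoy feature.

```python
# Sketch: generate an MCQ on the fly, steering difficulty by asking
# for distractors that are more or less similar to the correct answer.
# Reuses `client` from the marking sketch; prompts are illustrative.
def generate_mcq(topic: str, difficulty: str = "hard") -> str:
    """Generate one MCQ, controlling difficulty via distractor similarity."""
    similarity = (
        "very close to the correct answer (common misconceptions)"
        if difficulty == "hard"
        else "clearly distinct from the correct answer"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": (
            f"Write one multiple-choice question on {topic} with four "
            f"options labelled A-D. Make the three distractors "
            f"{similarity}. State the correct option and explain why "
            "each distractor is wrong."
        )}],
    )
    return response.choices[0].message.content

print(generate_mcq("the water cycle", difficulty="hard"))
```

The interesting lever here is the distractor brief: asking for common misconceptions as options turns a simple recall question into a diagnostic one.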

Most importantly, what AI unlocks for us is the speed and efficiency of multiple-choice questions, but with open questions. Long-form, short-form, however you want to present them: AI can tackle those pedagogically tricky higher-order questions from Bloom's taxonomy with ease and, crucially, with the speed of a certain blue hedgehog.

If you've not yet tried these capabilities of AI, why not take a look at our Marking bot? It showcases the assessment of a 6-mark exam question, complete with feedback, and can be adapted for any purpose you need. It works really well too, and unlocks a new form of formative assessment: the iterative draft, where students improve their draft answer and get meaningful, insightful feedback before you ever call on them to share their work.
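
In code terms, that iterative-draft workflow is just the marking sketch above run in a loop; the drafts below are made-up examples of a student converging on a full answer.

```python
# Sketch of the iterative-draft loop: each draft goes to the marker
# and the student revises before the teacher ever sees the work.
# Reuses mark_answer() from the marking sketch; drafts are invented.
drafts = [
    "Plants eat soil to grow.",
    "Plants use sunlight to make sugar in their leaves.",
    "Plants use light absorbed by chlorophyll to make glucose and oxygen.",
]
for i, draft in enumerate(drafts, start=1):
    print(f"--- Feedback on draft {i} ---")
    print(mark_answer("Explain how plants make their food.", draft))
```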

In conclusion, while multiple-choice questions served a purpose in the pre-AI world, their dominance in education may be coming to an end. The rise of AI offers us an opportunity to rethink how we assess learning. By leveraging AI's capabilities, we can move towards more meaningful, engaging, and effective methods of evaluation. It's time to bid farewell to the era of multiple choice and embrace the future of education.

And if you want to see what AI tutors can do for your questioning in the classroom, why not give Mindjoy a whirl?

David Morgan

Cardiff, UK