Overview
Multiple choice question tests (MCQ tests) can be useful for formative assessment and for stimulating students' active, self-managed learning. They can improve students' learning performance and their perceptions of the quality of their learning experience (see Velan et al., 2008).
MCQ tests are strongly associated with assessing lower order cognition such as the recall of discrete facts. Because of this, assessors have questioned their use in higher education. You can design MCQ tests to assess higher order cognition (such as synthesis, creative thinking and problem solving), but you must draft questions with considerable skill if such tests are to be valid and reliable. This takes time and entails significant subjective judgment.
Determine whether assessors are using MCQ tests for summative assessment simply because they are easy to mark, or for sound educational reasons. Where MCQ tests are appropriate, ensure that you integrate them effectively into assessment design.
When to use
MCQ tests are good for assessing lower order cognitive processes, such as the recall of factual information, but poor at assessing higher level critical and creative reasoning. When designing assessment for a course, first determine whether MCQ tests should be used at all, based on the learning objectives and outcomes of your course.
If yes, next ask yourself whether they would be best used for formative assessment (to support students' self-management of their learning), or for summative assessment (to grade the extent of students' learning at a particular point). MCQ tests should never constitute the only or major form of summative assessment in university-level courses.
When using MCQ tests for formative learning purposes, you might still want to assign a small grade for them, to indicate to students that their grasp of the material tested is important.
Use MCQ tests when, for example, you want to:
- assess lower order cognition such as the recall of discrete facts
- gather information about students' pre-course understanding, knowledge gaps and misconceptions, to help plan learning and teaching approaches. MCQ tests can provide valuable assessment feedback on students' recall of the facts and concepts essential to higher order learning.
- provide students with an accessible way to review course material, check that they understand key concepts and obtain timely feedback to help them manage their own learning
- test students' broad knowledge of the curriculum and learning objectives, rather than use more intensive methods such as extended writing.
When not to use MCQ tests
Where the learning outcomes to be assessed are at a high cognitive level, it becomes all the more challenging and resource-intensive to design suitable MCQ tests. The closed-ended nature of MCQ tests makes them particularly inappropriate for assessing originality and creativity in thinking.
Benefits
MCQ tests can aid teaching and learning by, for example:
- providing students with rapid feedback on their learning
- being continually available without increasing the marking load (if delivered online, with automated feedback)
- lending themselves to design using quiz tool software, either within or independently of Learning Management Systems (e.g. Moodle). With such software, you can automate presentation and publication and facilitate quiz administration, scoring and feedback provision.
- allowing objective scoring. A well-designed item has a predetermined correct answer (or set of correct answers), so marker bias is eliminated.
- allowing scoring by anyone, or even automatically, thereby increasing efficiency, particularly in teaching large cohorts
- being unaffected by students' differing capabilities as writers
- containing recyclable questions. Across the discipline, you can progressively develop and accumulate questions in pools or banks for re-use in different combinations and settings.
Challenges
It can be challenging to use MCQ tests because, among other things, they:
- are time-consuming to develop and require skill and expertise to design well
- are generally acknowledged to be poor at testing higher order cognition such as synthesis, creative thinking and problem solving
- have been shown, when they are used for summative assessment, to encourage students to adopt superficial approaches to learning (see Scouller, 1996)
- can be answered correctly by guesswork. If poorly designed, they can provide clues to encourage guessing.
- typically provide students with little direction as to how to improve their understanding—although you can overcome this with carefully designed feedback
- can disadvantage students with weaker reading skills, regardless of how well they understand the content being assessed
- are highly susceptible to cultural bias.
Strategies
Plan a summative MCQ test
If you decide that an MCQ test is appropriate for summative assessment according to the objectives and outcomes for a course, let students know in the course outline that you'll be using it.
Figure 1 sets out a number of questions to consider at the test planning stage.
Figure 1: Factors in planning MCQ test design
- When should it be conducted?
- Where should it be conducted?
- Are the costs justified?
- How will you manage security?
- How will you manage risks?
- How will you score the test?
- How will you provide feedback?
- How will you assure and improve quality?
Construct an MCQ test
Constructing effective MCQ tests and items takes considerable time and requires scrupulous care in the design, review and validation stages. Constructing MCQ tests for high-stakes summative assessment is a specialist task.
For this reason, rather than constructing a test from scratch, it may be more efficient to find out what validated tests already exist and, where appropriate, incorporate one into your course.
In some circumstances it may be worth the effort to create a new test. If you can undertake test development collaboratively within your department or discipline group, or as a larger project across institutional boundaries, you will increase the test's potential longevity and sustainability.
By progressively developing a multiple-choice question bank or pool, you can support benchmarking processes and establish assessment standards that help assure course quality over the long term.
Use a design framework to map how individual MCQ items will assess particular topic areas and types of learning objectives, across a spectrum of cognitive demand, so that the test is balanced overall. As an example, the "design blueprint" in Figure 2 provides a structural framework for planning.
Figure 2: Design blueprint for multiple choice test design (from the Instructional Assessment Resources at the University of Texas at Austin)
Cognitive domains | Topic A | Topic B | Topic C | Topic D | Total items | Percentage of total
Knowledge | 1 | 2 | 1 | 1 | 5 | 12.5
Comprehension | 2 | 1 | 2 | 2 | 7 | 17.5
Application | 4 | 4 | 3 | 4 | 15 | 37.5
Analysis | 3 | 2 | 3 | 2 | 10 | 25.0
Synthesis | – | 1 | 1 | – | 2 | 5.0
Evaluation | – | – | – | 1 | 1 | 2.5
TOTAL | 10 | 10 | 10 | 10 | 40 | 100
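If you keep a blueprint like this in a spreadsheet or a short script, its balance is easy to check: each row's percentage is simply its item count divided by the total number of items (for example, 5 of 40 knowledge items is 12.5%). The Python sketch below is purely illustrative, using the counts as laid out in Figure 2 above (including the assumed placement of the synthesis and evaluation items), to recompute the totals and percentages.

```python
# Illustrative check of an MCQ test blueprint: item counts per cognitive
# domain and topic, with row totals, percentages and per-topic totals
# recomputed from the raw counts (values taken from Figure 2).

blueprint = {
    # domain: items per topic (Topic A, Topic B, Topic C, Topic D)
    "Knowledge":     [1, 2, 1, 1],
    "Comprehension": [2, 1, 2, 2],
    "Application":   [4, 4, 3, 4],
    "Analysis":      [3, 2, 3, 2],
    "Synthesis":     [0, 1, 1, 0],
    "Evaluation":    [0, 0, 0, 1],
}

total_items = sum(sum(counts) for counts in blueprint.values())  # 40

for domain, counts in blueprint.items():
    row_total = sum(counts)
    percentage = 100 * row_total / total_items
    print(f"{domain:<13} total={row_total:>2}  {percentage:4.1f}% of the test")

# Per-topic totals should match the intended weighting (10 items per topic here).
topic_totals = [sum(column) for column in zip(*blueprint.values())]
print("Items per topic:", topic_totals, "| Grand total:", total_items)
```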
Use the most appropriate format for each question posed. Ask yourself, is it best to use:
- a single correct answer
- more than one correct answer
- a true/false choice (with single or multiple correct answers)
- matching (e.g. a term with the appropriate definition, or a cause with the most likely effect)
- sentence completion, or
- questions relating to some given prompt material?
To assess higher order thinking and reasoning, consider basing a cluster of MCQ items on some prompt material, such as:
- a brief outline of a problem, case or scenario
- a visual representation (picture, diagram or table) of the interrelationships among pieces of information or concepts, or
- an excerpt from published material.
You can present the associated MCQ items in a sequence from basic understanding through to higher order reasoning, including:
- identifying the effect of changing a parameter
- selecting the solution to a given problem, and
- nominating the optimum application of a principle.
Add some short-answer questions to a predominantly MCQ test to minimise the effect of guessing, by requiring students to express their understanding and analysis of problems in their own words.
Well in advance of an MCQ test, explain to students:
- the purposes of the test (and whether it is formative or summative)
- the topics being covered
- the structure of the test
- whether aids can be taken into the test (for example, calculators, notes, textbooks, dictionaries)
- how it will be marked, and
- how the mark will contribute to their overall grade.
Compose clear instructions on the test itself, explaining:
- the components of the test
- their relative weighting
- how much time you expect students to spend on each section, so that they can optimise their time.
Quality assurance of MCQ tests
Whether you use MCQ tests to support learning in a formative way or for summative assessment, ensure that the overall test and each of its individual items are well aligned with the course learning objectives. When using MCQ tests for summative assessment, it's all the more critical that you assure their validity.
The following strategies will help you assure quality:
- Use a basic quality checklist when designing and reviewing the test.
- Take the test yourself, and allow students roughly four times your own completion time (for example, if you finish in 15 minutes, allow about an hour).
- Work collaboratively across your discipline to develop an MCQ item bank as a dynamic (and growing) repository that everyone can use for formative or summative assessments, and that enables peer review, evaluation and validation.
Use peer review to:
- consider whether MCQ tests are educationally justified in your discipline
- critically evaluate MCQ test and item design
- examine the effects of using MCQs in the context of the learning setting, and
- record and disseminate the peer review outcomes to students and colleagues.
Engage students in active learning with MCQ tests
Used formatively, MCQ tests can:
- engage students in actively reviewing their own learning progress, identifying gaps and weaknesses in their understanding, and consolidating their learning through rehearsal (e.g. Velan et al., 2008).
- provide a trigger for collaborative learning activities, such as discussion and debate about the answers to questions.
- become collaborative through the use of technologies such as electronic voting systems (Draper, 2009)
- through peer assessment, help students identify common areas of misconception within the class.
You can also create activities that disrupt the traditional agency of assessment. You might, for example, require students to indicate the learning outcomes with which individual questions are aligned, or to construct their own MCQ questions and prepare explanatory feedback on the right and wrong answers (Fellenz, 2004).
Ensure fairness
Construct MCQ tests according to inclusive design principles to ensure equal chances of success for all students. Take into account any diversity of ability, cultural background or learning styles and needs.
- Avoid sexual, racial, cultural or linguistic stereotyping in individual MCQ test items, to ensure that no groups of students are unfairly advantaged or disadvantaged.
- Provide alternative formats for MCQ-type exams for any students with disabilities. They may, for example, need more time to take a test, or to be provided with assistive technology or readers.
- Set up contingency plans for timed online MCQ tests: computers can malfunction and system outages can occur, so be prepared. Also address any issues arising from students being in different time zones.
- Develop processes in advance so that you have time to inform students about the objectives, formats and delivery of MCQ tests and, for a summative test, the marking scheme being used and the test's effect on overall grades.
- To reduce the opportunity for plagiarism, specify randomised question presentation for MCQ tests, so that different students are presented with the same content in a different order (a simple sketch of the idea follows this list).
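Most quiz tools handle this randomisation for you; Moodle, for example, includes settings to shuffle questions and answer options within a quiz. For illustration only, the short Python sketch below shows the underlying idea: a per-student question order that differs between students but stays stable if the same student re-opens the test. The function name, student IDs and question IDs are hypothetical.

```python
import hashlib
import random

def shuffled_question_order(question_ids, student_id):
    """Return question_ids in a per-student order (hypothetical helper).

    The order differs between students but is reproducible for any one
    student, so reloading the test does not re-shuffle their questions.
    """
    # Derive a reproducible seed from the student identifier.
    seed = int(hashlib.sha256(str(student_id).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    order = list(question_ids)
    rng.shuffle(order)
    return order

# Hypothetical usage: the same five-question bank, two different students.
bank = ["Q1", "Q2", "Q3", "Q4", "Q5"]
print(shuffled_question_order(bank, student_id="z1234567"))
print(shuffled_question_order(bank, student_id="z7654321"))
```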
Use technology
You can design and develop MCQ tests quite efficiently using available applications. Particularly in high-stakes summative MCQ assessments, use applications that UNSW supports as part of the TELT platform—for example Moodle.
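As one example of how question banks can be prepared outside the LMS and imported in bulk, Moodle accepts questions written in its plain-text GIFT format, where "=" marks the correct option, "~" marks distractors and "#" attaches feedback to an option. The item below is a made-up illustration, not drawn from any particular course.

```
// A single illustrative MCQ item in Moodle's GIFT import format.
::Photosynthesis site::In which organelle does photosynthesis take place? {
    =Chloroplast #Correct. Chloroplasts contain the light-absorbing pigments.
    ~Mitochondrion #Mitochondria are the site of cellular respiration, not photosynthesis.
    ~Nucleus
    ~Ribosome
}
```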
Additional information
External resources
- Good Practice Guide in Question and Test Design: PASS-IT Project for Preparing Assessments in Scotland using IT.
- Velan, G.M. (2010). Overview of Questionmark Perception (QMP): A best of breed tool for online assessments.
Further readings
Azer, S.A. (2003). Assessment in a problem-based learning course: twelve tips for constructing multiple choice questions that test students' cognitive skills. Biochemistry and Molecular Biology Education 31(6), 428–434.
Draper, S. (2009). Catalytic assessment: understanding how MCQs and EVS can foster deep learning. British Journal of Educational Technology 40(2), 285–294.
Fellenz, M.R. (2004). Using assessment to support higher level learning: the multiple choice item development assignment. Assessment and Evaluation in Higher Education 29(6), 703–719.
Quinn, D. and Reid, I. (2003). Using innovative online quizzes to assist learning. AusWeb (Australasian World Wide Web conference).
Scouller, K.M. (1996). Influence of assessment method on students' learning approaches, perceptions, and preferences: assignment essay versus short answer examination. Research and Development in Higher Education 19, 776–781.
Velan, G.M., Jones, P., McNeil, H.P. and Kumar, R.K. (2008). Integrated online formative assessments in the biomedical sciences for medical students: benefits for learning. BMC Medical Education 8(52).
Acknowledgments
The contributions of staff who engaged with the preparation of this topic are gratefully acknowledged.