Assessing by Multiple Choice Questions

Overview

Multiple choice question tests (MCQ tests) can be useful for formative assessment and to stimulate students' active and self-managed learning. They improve students' learning performance and their perceptions of the quality of their learning experience (see Velan et al., 2008).

MCQ tests are strongly associated with assessing lower order cognition such as the recall of discrete facts. Because of this, assessors have questioned their use in higher education. You can design MCQ tests to assess higher order cognition (such as synthesis, creative thinking and problem solving), but you must draft questions with considerable skill if such tests are to be valid and reliable. This takes time and entails significant subjective judgment.

Determine whether assessors are using MCQ tests for summative assessment merely for the ease of marking them, or for solid educational reasons. Where MCQ tests are appropriate, ensure that you integrate them effectively into assessment design.

When to use

MCQ tests are well suited to assessing lower order cognitive processes, such as the recall of factual information, but tend to neglect higher level critical and creative reasoning. When designing assessment for a course, first determine whether MCQ tests should be used at all, based on the learning objectives and outcomes of your course.

If yes, next ask yourself whether they would be best used for formative assessment (to support students' self-management of their learning), or for summative assessment (to grade the extent of students' learning at a particular point). MCQ tests should never constitute the only or major form of summative assessment in university-level courses.

When using MCQ tests for formative learning purposes, you might still want to assign a small grade for them, to indicate to students that their grasp of the material tested is important.

Use MCQ tests when, for example, you want to:

  • assess lower order cognition such as the recall of discrete facts
  • gather information about students' pre-course understanding, knowledge gaps and misconceptions, to help plan learning and teaching approaches. MCQ tests can provide valuable assessment feedback on students' recall of the facts and concepts essential to higher order learning.
  • provide students with an accessible way to review course material, check that they understand key concepts and obtain timely feedback to help them manage their own learning
  • test students' broad knowledge of the curriculum and learning objectives, rather than use more intensive methods such as extended writing.

When not to use MCQ tests

Where the learning outcomes to be assessed are at a high cognitive level, it becomes all the more challenging and resource-intensive to design suitable MCQ tests. The closed-ended nature of MCQ tests makes them particularly inappropriate for assessing originality and creativity in thinking.

Benefits

MCQ tests can aid teaching and learning by, for example:

  • providing students with rapid feedback on their learning
  • being continually available without increasing the marking load (if delivered online, with automated feedback)
  • lending themselves to design using quiz tool software, either within or independently of Learning Management Systems (e.g. Moodle). With such software, you can automate presentation and publication and facilitate quiz administration, scoring and feedback provision.
  • allowing objective scoring. There can be only one right answer to a well-designed question, so marker bias is eliminated.
  • allowing scoring by anyone, or even automatically, thereby increasing efficiency, particularly in teaching large cohorts
  • being immune to students' diverse capabilities as writers
  • containing recyclable questions. Across the discipline, you can progressively develop and accumulate questions in pools or banks for re-use in different combinations and settings.

Challenges

It can be challenging to use MCQ tests because, among other things, they:

  • are time-consuming to develop and require skill and expertise to design well
  • are generally acknowledged to be poor at testing higher order cognition such as synthesis, creative thinking and problem solving
  • have been shown, when they are used for summative assessment, to encourage students to adopt superficial approaches to learning (see Scouller, 1996)
  • can be answered correctly by guesswork. If poorly designed, they can provide clues to encourage guessing.
  • typically provide students with little direction as to how to improve their understanding—although you can overcome this with carefully designed feedback
  • can disadvantage students with lesser reading skills, regardless of how well they understand the content being assessed
  • are highly susceptible to cultural bias.

Strategies

Plan a summative MCQ test

If you decide that an MCQ test is appropriate for summative assessment according to the objectives and outcomes for a course, let students know in the course outline that you'll be using it.

Figure 1 sets out a number of factors to consider in the test planning stage.

Figure 1: Factors in planning MCQ test design

When should it be conducted?

  • Timing: early / mid / late / post course?
  • Frequency: regularly, weekly, occasionally, once?
  • If in exam week, avoid timetable clashes by formally scheduling the test as an exam.

Where should it be conducted?

  • In a specified physical location, online, by mail?
  • If online, how will you manage different time zones?
  • Are online software tools available to facilitate the construction, conduct and scoring of the MCQ test, and to provide results and feedback?

Are the costs justified?

  • How much time will be needed for development and what will that cost?
  • If the test is on paper, what is the cost of printing exam scripts?
  • What are the costs of exam venues and invigilation?
  • What are the costs of marking, if this is not automated?
  • Is there a more cost-effective method—for example, the use of existing validated questions and question banks or pools?

How will you manage security?

  • How will you ensure the security of exam scripts before the exam?
  • Will exams undertaken in computing laboratories be invigilated?
  • How will you securely manage results information?
  • How will you provide results to students cost-effectively, but without jeopardising their privacy?

How will you manage risks?

  • Which specialist staff / skills do you need to consult (Learning & Teaching staff, IT support staff, examinations unit staff, disability services)?
  • What is the contingency plan? For example, what will happen, in an online test, if system outages occur, or computers malfunction in laboratories?
  • How will you manage test completion by students in different time zones?
  • Have you observed copyright provisions when using existing MCQ tests and questions?

How will you score the test?

  • Should all MCQ items in a test be equally weighted?
  • Will students score zero for uncompleted or wrong answers? (Using negative marks to reduce the incidence of guessing is not recommended.)
  • Should the mark required for a pass be 50% or should it be raised—even to 100%—where you are testing essential factual knowledge?
  • Can scoring be fully or partly automated—for example, by using fully online applications or LMSs such as Moodle, or scanned hard-copy scripts? (A minimal scoring sketch follows this list.)
  • Whose involvement in scoring needs to be secured—for example, teaching teams, casual demonstrators, students?
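
To make the weighting and "zero for blank or wrong" choices above concrete, here is a minimal scoring sketch in Python. The item names, weights and response format are illustrative assumptions, not part of any particular quiz tool.

```python
# Illustrative sketch of automated MCQ scoring: weighted items, with
# wrong and unanswered items scoring zero (no negative marking, per the
# guidance above). All names and data here are hypothetical.

answer_key = {"Q1": "B", "Q2": "D", "Q3": "A"}   # correct option per item
weights    = {"Q1": 1.0, "Q2": 1.0, "Q3": 2.0}   # per-item weighting

def score_test(responses):
    """responses maps item IDs to the chosen option, or None if blank."""
    total = 0.0
    for item, correct in answer_key.items():
        if responses.get(item) == correct:
            total += weights[item]        # full weight for a correct answer
    return total                          # wrong/blank items add nothing

# One student's responses: Q1 right, Q2 wrong, Q3 right -> 1.0 + 2.0 = 3.0
print(score_test({"Q1": "B", "Q2": "A", "Q3": "A"}))
```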

How will you provide feedback?

  • Will feedback be timely enough to allow for learning improvement within the course? For example, can you give feedback immediately by setting up an answer-contingent progression through the test? Or can you guarantee feedback within a specified short period?
  • How can you go beyond simple notification of a student's score to focus feedback on positive learning improvement?
  • How will students be able to use generic feedback about class-wide performances on the MCQ test to interpret their individual results?
  • Who will provide feedback—teachers, or students through peer assessment or self-assessment?

How will you assure and improve quality?

  • How will you manage MCQ test scoring processes? For example, will you train scorers and provide scorer guidelines?
  • Will your team of assessors develop questions and question banks or pools collaboratively, so as to enhance your test's validity?
  • Can you establish peer review processes to check MCQ tests and questions for alignment with course objectives, for logical sequence, timing, item construction and so on?
  • How will you validate MCQ tests and questions—for example, by analysing the difficulty and discriminatory value of individual items? (A basic item-analysis sketch follows this list.)
  • How will the department review and endorse MCQ use and practices?
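
One common way to examine items for discriminatory value is classical item analysis. The sketch below is a minimal illustration rather than a prescribed method: it computes item difficulty (the proportion of students answering correctly) and an upper-lower discrimination index, assuming the conventional 27% group split.

```python
# A minimal item-analysis sketch: item difficulty (proportion correct)
# and an upper-lower discrimination index. The 27% group split is a
# common convention assumed here; it is not prescribed by this page.

def item_statistics(item_scores, total_scores, group_fraction=0.27):
    """item_scores: 1/0 per student for one item; total_scores: each
    student's whole-test score, in the same student order."""
    n = len(item_scores)
    difficulty = sum(item_scores) / n                # share answering correctly
    ranked = sorted(range(n), key=lambda i: total_scores[i])
    k = max(1, int(n * group_fraction))              # size of each group
    lower = sum(item_scores[i] for i in ranked[:k]) / k
    upper = sum(item_scores[i] for i in ranked[-k:]) / k
    return difficulty, upper - lower                 # discrimination index

# Hypothetical data: 10 students; stronger students get this item right
item_correct = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]
test_totals  = [34, 12, 28, 30, 15, 25, 31, 10, 27, 36]
print(item_statistics(item_correct, test_totals))    # (0.7, 1.0)
```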

Construct an MCQ test

Constructing effective MCQ tests and items takes considerable time and requires scrupulous care in the design, review and validation stages. Constructing MCQ tests for high-stakes summative assessment is a specialist task.

For this reason, rather than constructing a test from scratch, it may be more efficient to investigate what validated tests already exist and to incorporate or adapt one for your course.

In some circumstances it may be worth the effort to create a new test. If you can undertake test development collaboratively within your department or discipline group, or as a larger project across institutional boundaries, you will increase the test's potential longevity and sustainability.

By progressively developing a multiple-choice question bank or pool, you can support benchmarking processes and establish assessment standards that have long-term effects on assuring course quality.

Use a design framework to see how individual MCQ items will assess particular topic areas and types of learning objectives, across a spectrum of cognitive demand, to contribute to the test's overall balance. As an example, the "design blueprint" in Figure 2 provides a structural framework for planning (a consistency-check sketch follows the figure).

Figure 2: Design blueprint for multiple choice test design (from the Instructional Assessment Resources at the University of Texas at Austin)

Cognitive domain (Bloom's Taxonomy)   Topic A   Topic B   Topic C   Topic D   Total items   % of total
Knowledge                                1         2         1         1           5           12.5
Comprehension                            2         1         2         2           7           17.5
Application                              4         4         3         4          15           37.5
Analysis                                 3         2         3         2          10           25.0
Synthesis                                –         1         –         1           2            5.0
Evaluation                               –         –         1         –           1            2.5
TOTAL                                   10        10        10        10          40          100
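
As a minimal illustration of checking a blueprint like Figure 2 for internal consistency, the following sketch stores the grid as a nested dictionary (an assumed data structure, not a prescribed tool) and verifies row totals, percentages and equal topic weighting before the blueprint is used to write items.

```python
# A minimal consistency check for a Figure 2-style blueprint, assuming
# the grid is stored as a nested dict (an illustrative data structure).

blueprint = {
    "Knowledge":     {"A": 1, "B": 2, "C": 1, "D": 1},
    "Comprehension": {"A": 2, "B": 1, "C": 2, "D": 2},
    "Application":   {"A": 4, "B": 4, "C": 3, "D": 4},
    "Analysis":      {"A": 3, "B": 2, "C": 3, "D": 2},
    "Synthesis":     {"A": 0, "B": 1, "C": 0, "D": 1},
    "Evaluation":    {"A": 0, "B": 0, "C": 1, "D": 0},
}

total_items = sum(sum(row.values()) for row in blueprint.values())   # 40

# Report items and percentage of the test per cognitive level
for level, row in blueprint.items():
    n = sum(row.values())
    print(f"{level:<13} {n:>3} items  {100 * n / total_items:5.1f}%")

# In this blueprint every topic carries equal weight: 10 items each
for topic in "ABCD":
    assert sum(row[topic] for row in blueprint.values()) == 10
```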

Use the most appropriate format for each question posed. Ask yourself, is it best to use:

  • a single correct answer
  • more than one correct answer
  • a true/false choice (with single or multiple correct answers)
  • matching (e.g. a term with the appropriate definition, or a cause with the most likely effect)
  • sentence completion, or
  • questions relating to some given prompt material?

To assess higher order thinking and reasoning, consider basing a cluster of MCQ items on some prompt material, such as:

  • a brief outline of a problem, case or scenario
  • a visual representation (picture, diagram or table) of the interrelationships among pieces of information or concepts, or
  • an excerpt from published material.

You can present the associated MCQ items in a sequence from basic understanding through to higher order reasoning, including:

  • identifying the effect of changing a parameter
  • selecting the solution to a given problem, and
  • nominating the optimum application of a principle.

Add some short-answer questions to a substantially MCQ test to minimise the effect of guessing by requiring students to express in their own words their understanding and analysis of problems. (On a 40-item test with four options per item, blind guessing alone yields an expected score of 10 items, or 25%.)

Well in advance of an MCQ test, explain to students:

  • the purposes of the test (and whether it is formative or summative)
  • the topics being covered
  • the structure of the test
  • whether aids can be taken into the test (for example, calculators, notes, textbooks, dictionaries)
  • how it will be marked, and
  • how the mark will contribute to their overall grade.

Compose clear instructions on the test itself, explaining:

  • the components of the test
  • their relative weighting
  • how much time you expect students to spend on each section, so that they can optimise their time.

Quality assurance of MCQ tests

Whether you use MCQ tests to support learning in a formative way or for summative assessment, ensure that the overall test and each of its individual items are well aligned with the course learning objectives. When using MCQ tests for summative assessment, it's all the more critical that you assure their validity.

The following strategies will help you assure quality:

  • Use a basic quality checklist when designing and reviewing the test.
  • Take the test yourself, and allow students roughly four times your own completion time.
  • Work collaboratively across your discipline to develop an MCQ item bank as a dynamic (and growing) repository that everyone can use for formative or summative assessments, and that enables peer review, evaluation and validation.

Use peer review to:

  • consider whether MCQ tests are educationally justified in your discipline
  • critically evaluate MCQ test and item design
  • examine the effects of using MCQs in the context of the learning setting, and
  • record and disseminate the peer review outcomes to students and colleagues.

Engage students in active learning with MCQ tests

Used formatively, MCQ tests can:

  • engage students in actively reviewing their own learning progress, identifying gaps and weaknesses in their understanding, and consolidating their learning through rehearsal (e.g. Velan et al., 2008).
  • provide a trigger for collaborative learning activities, such as discussion and debate about the answers to questions.
  • become collaborative through the use of technologies such as electronic voting systems (Draper, 2009).
  • through peer assessment, help students identify common areas of misconception within the class.

You can also create activities that disrupt the traditional agency of assessment. You might, for example, require students to indicate the learning outcomes with which individual questions are aligned, or to construct their own MCQ questions and prepare explanatory feedback on the right and wrong answers (Fellenz, 2004).

Ensure fairness

Construct MCQ tests according to inclusive design principles to ensure equal chances of success for all students. Take into account any diversity of ability, cultural background or learning styles and needs.

  • Avoid sexual, racial, cultural or linguistic stereotyping in individual MCQ test items, to ensure that no groups of students are unfairly advantaged or disadvantaged.
  • Provide alternative formats for MCQ-type exams for any students with disabilities. They may, for example, need more time to take a test, or to be provided with assistive technology or readers.
  • Set up contingency plans for timed online MCQ tests: computers can malfunction and system outages can occur, so be prepared. Also address any issues arising from students being in different time zones.
  • Develop processes in advance so that you have time to inform students about the objectives, formats and delivery of MCQ tests and, for a summative test, the marking scheme being used and the test's effect on overall grades.
  • To reduce the opportunity for copying, specify randomised question presentation for MCQ tests, so that different students are presented with the same content in a different order (a minimal sketch follows this list).
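
The randomised-presentation point above can be implemented with a seeded shuffle, as in this minimal sketch. Seeding with a stable student identifier (an assumption here) keeps each student's order reproducible for review and re-marking while varying between students; in practice an LMS such as Moodle can randomise question order for you.

```python
# A minimal sketch of per-student question randomisation: the same
# question IDs are presented to every student, in a per-student order.

import random

def question_order(question_ids, student_number):
    rng = random.Random(student_number)   # stable per-student seed
    order = list(question_ids)
    rng.shuffle(order)                    # same content, different sequence
    return order

print(question_order(["Q1", "Q2", "Q3", "Q4", "Q5"], student_number=5012345))
```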

Use technology

You can design and develop MCQ tests quite efficiently using available applications. Particularly in high-stakes summative MCQ assessments, use applications that UNSW supports as part of the TELT platform—for example Moodle.
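
If you need to move questions between tools, Moodle supports plain-text import formats; the sketch below emits one item in the GIFT format, where "=" marks the correct option, "~" a distractor, and "#" introduces per-option feedback. The helper function and question content are illustrative assumptions, not a UNSW-specific tool.

```python
# A minimal sketch that emits one MCQ item in Moodle's plain-text GIFT
# import format, with per-option feedback for automated delivery.

def to_gift(title, stem, correct, distractors, feedback):
    lines = [f"::{title}:: {stem} {{"]
    lines.append(f"    ={correct} #{feedback[correct]}")
    for option in distractors:
        lines.append(f"    ~{option} #{feedback[option]}")
    lines.append("}")
    return "\n".join(lines)

print(to_gift(
    title="Photosynthesis-1",
    stem="Which gas is consumed during photosynthesis?",
    correct="Carbon dioxide",
    distractors=["Oxygen", "Nitrogen"],
    feedback={
        "Carbon dioxide": "Correct: carbon dioxide is fixed by the plant.",
        "Oxygen": "Oxygen is released, not consumed.",
        "Nitrogen": "Nitrogen is not involved in photosynthesis.",
    },
))
```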

Additional information

External resources

  • Good Practice Guide in Question and Test Design: PASS-IT Project for Preparing Assessments in Scotland using IT.
  • Velan, G.M. (2010). Overview of Questionmark Perception (QMP): A best of breed tool for online assessments.

Further readings

Azer, S.A. (2003). Assessment in a problem-based learning course: twelve tips for constructing multiple choice questions that test students' cognitive skills. Biochemistry and Molecular Biology Education 31(6), 428–434.

Draper, S. (2009). Catalytic assessment: understanding how MCQs and EVS can foster deep learning. British Journal of Educational Technology 40(2), 285–294.

Fellenz, M.R. (2004). Using assessment to support higher level learning: the multiple choice item development assignment. Assessment and Evaluation in Higher Education 29(6), 703–719.

Quinn, D. and Reid, I. (2003). Using innovative online quizzes to assist learning. AusWeb (Australasian World Wide Web conference).

Scouller, K.M. (1996). Influence of assessment method on students' learning approaches, perceptions, and preferences: assignment essay versus short answer examination. Research and Development in Higher Education 19, 776–781.

Velan, G.M., Jones, P., McNeil, H.P. and Kumar, R.K. (2008). Integrated online formative assessments in the biomedical sciences for medical students: benefits for learning. BMC Medical Education 8(52).

Acknowledgments

The contributions of staff who engaged with the preparation of this topic are gratefully acknowledged.
