A rubric for assessment, usually in the form of a matrix or grid, is a tool used to interpret and grade students' work against criteria and standards. Rubrics are sometimes called "criteria sheets", "grading schemes", or "scoring guides". Rubrics can be designed for any content domain.
A rubric makes explicit a range of assessment criteria and expected performance standards. Assessors evaluate a student's performance against all of these, rather than assigning a single subjective score. A rubric:
- handed out to students during an assessment task briefing makes them aware of all expectations related to the assessment task, and helps them evaluate their own work as it progresses
- helps teachers apply consistent standards when assessing qualitative tasks, and promotes consistency in shared marking.
You can use rubrics to structure discussions with students about different levels of performance on an assessment task. Students can employ the rubric during peer assessment and self-assessment to generate and justify assessments. Once you've familiarised students with the idea of rubrics, you can have them assist in designing rubrics, so that they take more responsibility for their own learning.
When to use
Assessment rubrics can be used for assessing learning at all levels, from discrete assignments within a course through to program-level capstone projects and larger research or design projects and learning portfolios.
Benefits
Assessment rubrics:
- provide a framework that clarifies assessment requirements and standards of performance for different grades. In this way, they support assessment as learning: students can see what is important and where to focus their learning efforts.
- enable very clear and consistent communication with students about assessment requirements and about how different levels of performance earn different grades. They allow assessors to give very specific feedback to students on their performance.
- when students are involved in their construction, encourage them to take responsibility for their performance
- when used for self-assessment and peer assessment, make students aware of assessment processes and procedures, enhance their meta-cognitive awareness, and improve their capacity to assess their own work
- can result in richer feedback to students, giving them a clearer idea of where they sit in an ordered progression towards increased expertise in a learning domain.
- by engaging staff teams in rubric-based conversations about quality, help them develop a shared language for talking about learning and assessment.
- help assessors efficiently and reliably interpret and grade students' work.
- systematically illuminate gaps and weaknesses in students' understanding against particular criteria, helping teachers target areas to address.
Using assessment rubrics can present the following challenges:
- When learning outcomes relate to higher levels of cognition (for example, evaluating or creating), assessment designers can find it difficult to specify criteria and standards with exactitude. This can be a particular issue in disciplines or activities requiring creativity or other hard-to-measure capabilities.
- It can be challenging for designers to encompass different dimensions of learning outcomes (cognitive, psychomotor, affective) within specific criteria and standards. Performance in the affective domain in particular can be difficult to distinguish according to strict criteria and standards.
- Assessment rubrics are inherently indeterminate (Sadler, 2009), particularly when it comes to translating judgments on each criterion of an analytic rubric into grades.
- Breaking down the assessment into complicated, detailed criteria may increase the marking workload for staff, and may lead to:
  - distorted grading decisions (Sadler, 2009), or
  - students becoming over-dependent on the rubric and less inclined to develop their own judgment, unless they are involved in creating, or contributing to the creation of, assessment rubrics.
Design a rubric
An assessment rubric can be analytic or holistic.
- Analytic rubrics have several dimensions, with performance indicators for levels of achievement in each dimension.
- Holistic rubrics assess the whole task according to one scale, and are appropriate for less structured tasks, such as open-ended problems and creative products.
Assessment rubrics are composed of three elements:
- a set of criteria that provides an interpretation of the stated objectives (performance, behaviour, quality)
- a range of different levels of performance between highest and lowest
- descriptors that specify the performance corresponding to each level, to allow assessors to interpret which level has been met.
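These three elements can be pictured as a simple data structure: criteria, an ordered set of levels, and a descriptor for each level. The sketch below is illustrative only — the criteria, descriptors and level counts are invented for the example, not drawn from any particular rubric:

```python
# An analytic rubric as a plain data structure: each criterion maps to an
# ordered list of level descriptors, from lowest (index 0) to highest.
# All criteria and descriptors here are hypothetical examples.
rubric = {
    "argument": [
        "No identifiable thesis.",                       # level 1 (lowest)
        "Thesis stated but weakly supported.",           # level 2
        "Clear thesis with relevant evidence.",          # level 3
        "Compelling thesis, well-integrated evidence.",  # level 4 (highest)
    ],
    "referencing": [
        "Sources absent or uncited.",
        "Some sources cited, inconsistent style.",
        "All sources cited in a consistent style.",
    ],
}

# An assessor's judgment is then simply a chosen level per criterion.
judgment = {"argument": 3, "referencing": 2}

for criterion, level in judgment.items():
    print(f"{criterion}: level {level} - {rubric[criterion][level - 1]}")
```

Note that criteria need not share the same number of levels — as the marking sheet later in this topic shows, one criterion may usefully have more performance bands than another.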
One useful design strategy is to take a generic assessment rubric (for example, Orrell, 2003) that matches the assessment task's objectives, discipline, level and context, and adapt it for your own use. Rewrite the attribute descriptions to reflect the course context, aims and learning outcomes, and to apply to the specific assessment task.
Decide how the judgments at each level of attainment will flow through into the overall grading process and how rubric levels correspond to grades. Does the attainment of "advanced" skill or knowledge mean that a distinction or high distinction will be awarded? Does "developing" mean resubmission or fail?
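One way to make this flow-through explicit is to agree a fixed mapping from rubric levels to grade outcomes before marking begins. The level names and grade bands below are examples only — substitute your institution's own scale:

```python
# Hypothetical mapping from rubric attainment levels to grade outcomes.
# Both the level names and the grade bands are invented for illustration.
LEVEL_TO_GRADE = {
    "advanced":   "High Distinction / Distinction",
    "proficient": "Credit",
    "functional": "Pass",
    "developing": "Fail (resubmission possible)",
}

def grade_for(level: str) -> str:
    """Return the grade band agreed in advance for a rubric level."""
    try:
        return LEVEL_TO_GRADE[level]
    except KeyError:
        raise ValueError(f"Unknown rubric level: {level!r}")

print(grade_for("advanced"))
```

Writing the mapping down in advance, rather than deciding it sheet by sheet, is one way to reduce the grade-translation indeterminacy that Sadler (2009) identifies.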
Assess with rubrics
- Ensure that assessment rubrics are prepared and available for students well before they begin work on tasks, so that the rubric contributes to their learning as they complete the work.
- Discuss assessment rubrics with students in class time. Use these discussions to refine and improve rubrics in response to students' common misunderstandings and misconceptions.
- Practise using rubrics in class. Have students assess their own, their peers' and others' work.
- Involve students in developing assessment rubrics, and involve them more as they become competent in doing so. This encourages them to be independent and to manage their own learning.
- Frame your assessment feedback to students in the terms laid out in the rubric, so that they can clearly see where they have succeeded or performed less well in the task.
Include students in developing an assessment rubric; this can help each student to understand the assessment criteria.
Provide the assessment rubric for a task to students early, to increase its value as a learning tool. For example, you might distribute it as part of the task briefing and guidelines presentation. This helps students understand the task, and allows them to raise any concerns or questions about the task and how it will be assessed.
Write rubrics in plain English, and phrase them so that they are as unambiguous as possible.
A number of technology tools support marking with rubrics:
- Learning management systems (e.g. Moodle) often allow the use of rubrics in assessment, including peer and self-assessment. In Moodle, you can create a rubric and use it to grade online activities such as assignments, discussions, blogs and wikis.
- GradeMark (part of the Turnitin suite of tools) provides a rubric function for online marking.
- Dedicated group peer assessment tools such as iPeer and WebPA also have a rubric function.
- A free online tool, iRubric, allows you to create, adapt and share rubrics online.
Video about iUNSW Rubrik Application
This video shows how the Mechanical Engineering project (ENGG1000, T1 2011) used the iUNSW Rubrik iPad marking app to mark the final project competition, making the process much quicker and more efficient.
UNSW Rubrics in Action - Chemical Engineering
Assessing a final year thesis
As part of his final year undergraduate course in Chemical Engineering, Dr Graeme Bushell has designed and tested the rubric described below over several semesters.
Students in Chemical Engineering, Industrial Chemistry and Food Science programs at UNSW are required to deliver a poster at the end of their final year thesis, explaining their research results. The assessment task aligns with:
- the UNSW graduate capability of producing "scholars who are capable of effective communication", and
- the Engineers Australia stage 1 competency "effective oral and written communication in the professional and lay domains".
The posters are presented over one morning in the final week of semester, with school academic staff and postdoctoral fellows browsing the work. The session runs along the lines of a conference poster session, with students explaining their projects to small groups and/or individuals throughout the session and answering questions as appropriate. Academics are each assigned a set of posters to mark according to specified criteria, using marking sheets with rubrics, which are collected at the end of the poster session. Each student receives at least four assessments. The course convenor then collates the marking sheets and allocates a final mark.
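The collation step can be sketched as an average of the assessors' sheet totals, with a guard for the minimum number of assessments. This is an assumption for illustration — the course's actual collation scheme may weight or moderate scores rather than simply averaging them:

```python
def collate(scores, minimum=4):
    """Collate one student's marking-sheet totals into a final poster mark.

    scores: list of totals, one per assessor's marking sheet.
    Assumes a plain average; a real scheme may weight or moderate scores.
    """
    if len(scores) < minimum:
        raise ValueError(f"Need at least {minimum} assessments, got {len(scores)}")
    return sum(scores) / len(scores)

# Four hypothetical assessors' sheet totals for one student:
print(collate([12, 13, 11, 12]))  # 12.0
```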
A change in assessment scheme
The assessment scheme used for the posters was changed from first semester 2012, because the new convenor for the final year thesis courses (Bushell) felt that the old scheme used too many assessment criteria, and that implementing a standards-based approach would improve practice and align more closely with UNSW recommended practice at the time (now policy).
The rubric lists the criteria, includes a range of performance standards from lowest to highest, and provides a descriptor for each level of performance.
The rubric is first presented to students in the Course Outline and they are encouraged to discuss it with their supervisor. The marking scheme for the rubric is also presented in the Course Outline. The poster assessment is worth 15% of the total marks for the course.
The marking sheet/table
Dr Bushell uses a simple layout for this rubric, as this allows more flexibility than a tabular format in terms of distribution of performance bands within a criterion, the number of performance bands used in each criterion and the weighting of different criteria.
His marking sheet is shown below, followed by an example of how the same criteria might look in a tabular format.
Poster Assessment Marking Sheet
(Original format of rubric, as used by Graeme Bushell, School of Chemical Engineering)
Put a tick next to the description which best describes how well the student explained why the work was done.
□ The student cannot explain why the research was done.
□ The student attempts to explain why the work was done but you don't think they really understand.
□ The student is able to explain why the work was done in direct terms.
□ The student is able to explain the broader context that the work fits into—why it was done and how important it is.
Put a tick next to the description which best describes the quality of the work that was done.
□ The work appears to be incomplete—it fails to address the stated aims.
□ The work contains serious errors—the conclusions are cast into serious doubt.
□ The work contains some minor errors of design or execution that are unlikely to undermine the main conclusions.
□ The work appears to have been completed without errors.
Put a tick next to the description which best describes how well the student presented the work.
□ Taken together, graphical and verbal communication are so poor that you are left unsure what the project is about.
□ Multiple deficiencies: more than one of aims, methods, results and conclusions are not clear.
□ One of the following is not clear: aims, methods, results, conclusions.
□ Aims, methods, results and conclusions are clear but only after probing. Some aspects of the poster or presentation were poorly considered.
□ Aims, methods, results, conclusions are all clear. The poster is adequate.
□ Aims, methods, results, conclusions are all clear. The poster is attractive.
□ Aims, methods, results, conclusions are all clear. The poster is attractive and the presentation engaging.
Put a tick next to the description which best describes how well the student answered questions.
□ The student is effectively unable to answer questions about the project.
□ The student attempts to answer questions about the project but clearly doesn't really understand.
□ The student is able to answer questions about the project—you are fairly sure they understand what they're doing.
□ The student listens carefully and answers questions easily and directly—they are clearly across the project.
Poster Assessment Rubric
(Marking sheet criteria presented in grid format. Each Performance standard cell can be allocated a mark or grade band, as determined in the specific context.)
| Criterion | 4 (highest) | 3 | 2 | 1 (lowest) |
|---|---|---|---|---|
| Why the work was done | The student is able to explain the broader context that the work fits into—why it was done and how important it is. | The student is able to explain why the work was done in direct terms. | The student attempts to explain why the work was done but you don't think they really understand. | The student cannot explain why the research was done. |
| Quality of the work | The work appears to have been completed without errors. | The work contains some minor errors of design or execution that are unlikely to undermine the main conclusions. | The work contains serious errors—the conclusions are cast into serious doubt. | The work appears to be incomplete—it fails to address the stated aims. |
| Presentation of the work | Aims, methods, results, conclusions are all clear. The poster is adequate, attractive, or attractive with an engaging presentation. | Aims, methods, results and conclusions are clear but only after probing. Some aspects of the poster or presentation were poorly considered. | Multiple deficiencies: more than one of aims, methods, results and conclusions are not clear. | Taken together, graphical and verbal communication are so poor that you are left unsure what the project is about. |
| Answering questions | The student listens carefully and answers questions easily and directly—they are clearly across the project. | The student is able to answer questions about the project—you are fairly sure they understand what they're doing. | The student attempts to answer questions about the project but clearly doesn't really understand. | The student is effectively unable to answer questions about the project. |
Further resources
Using Assessment Rubrics Booklet (PDF on Google Docs, 15 MB, 18 pages) from the May 2012 UNSW Learning and Teaching Forum on Assurance of Learning.
Association of American Colleges and Universities, Website with 15 VALUE (Valid Assessment of Learning in Undergraduate Education) rubrics for 'liberal' education
iRubric: online tool for creating and sharing rubrics.
References
Orrell, J. (2003). A Generic Learning Rubric. Flinders University, South Australia.
Baron, J. and Keller, M. (2003). Use of rubrics in online assessment. Evaluations and Assessment Conference, University of South Australia.
Boud, D. (2010). Assessment Futures website. University of Technology Sydney.
Murray-Harvey, R., Silins, H. and Orrell, J. (1996). Assessment for Learning, School of Education, Flinders University, Adelaide.
Reddy, Y.M. and Andrade, H. (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35(4), 435–448.
Sadler, D.R. (2009). Indeterminacy in the use of preset criteria for assessment and grading. Assessment & Evaluation in Higher Education, 34(2), 159–179.
Smith, C., Sadler, R. and Davies, L. Assessment Rubrics. Griffith Institute for Higher Education, Griffith University.
The contributions of staff who engaged with the preparation of this topic are gratefully acknowledged.
This topic was inspired by a resource developed for the Macquarie University Assessment Toolkit.