Here you can browse all the video clips published in the Assessment Toolkit.
Visit the Toolkit homepage to view the videos in context.
Below are the transcripts for the respective videos on the page Standards-Based Assessment.
Assuring Graduate Capabilities: Assessing, measuring and evidencing standards (keynote, edited version)
Professor Beverley Oliver, Pro Vice-Chancellor (Learning Futures), Deakin University
The whole world is changing—and I'm going to do a quick snapshot to various international places—around this stuff, and standards, and where it's come from.
[Slide:
- UK Quality Assurance Agency for Higher Education website: Subject benchmark statements
- Tuning Educational Structures in Europe website
- US Association of American Colleges and Universities web page: Value Rubrics: The Essential Learning Outcomes
- US Qualifications Profile]
We have the rise of the evidence based culture in higher education right around the world. If you go to Australia...
[Slide—Australian Qualifications Framework, Senior Secondary Certificate of Education:
- Certificate I
- Certificate II
- Certificate III
- Certificate IV
- Advanced Diploma/Associate Degree
- Bachelor Degree
- Bachelor Honours Degree/Graduate Certificate/Vocational Graduate Certificate/Graduate Diploma/Vocational Graduate Diploma
- Masters Degree
- Doctoral Degree]
...basically, there's kind of seven things that most universities say most of the time, that they want to promote. And that's them [PPT5: Capabilities: communication, thinking, problem solving, information literacy, self-management, teamwork, civic engagement. See ALTC Good Practice Guide: Assuring Graduate Outcomes]. That's my rough mapping of graduate attributes from 38 universities in May last year.
There's a lot of talk around the sector and has been for the last few years, which is why we've got TEQSA, that we don't have any standards. Yes we do. Every person who teaches in a university has a standard. You do—you've got your private standard. The problem is, we haven't had a public conversation about those standards. And you may well know this story, again, going back to my own experience: Student comes to me, second time in the unit, "Look, I just can't seem to pass this unit. You've marked my essay"—it was last century—"You've marked my essay. Look, I can't—but in Unit X, I'm getting a Distinction." "Oh, that's interesting. Bring me one of your assessments. Okay, so the other person is going, 'Tick-tick-tick-tick-tick-tick, can't write a sentence but I get the idea—that's good enough. That's my standard. I'm marking on content.'" So we had a standard, but we didn't agree on the standard. So you see the problem. And, of course, students will learn to play that game. And sometimes they'll play us off each other.
So, maybe we need to change the game. Or maybe the game has changed anyway.
[Slide—...setting those assessment tasks...?
- an opportunity to demonstrate your standard of performance
- an opportunity to create a professional artefact
- an opportunity for mentoring
- an opportunity to enhance your readiness for the profession]
What about if we change the language? Instead of "assessment", we talked about opportunities to demonstrate your performance towards becoming the professional? Could we talk about feedback differently? Could it be an opportunity to get mentoring and an opportunity to get some feedback on how you can get to the Rock Star status, or the Good Enough status if that's where you want to go?
[Slide—"Expectations" Three standing figures with hands pointing up ("Rock star"), out to the sides ("Good enough") and down ("Not yet")]
So when we're delivering on expectations, if we were clearer, maybe that would help us.
Mantz [Mantz Yorke, in Grading Student Achievement in Higher Education: Signals and Shortcomings, Abingdon: Routledge, 2008] says we should ask students to tell us "How have you satisfied [through your work, the aims [and standards] stated for your course]?" So it's not all about us, it's about them. It's their responsibility. And maybe we could do what he calls...
[Slide—"top down" and graphic of a graduate above dotted line; below dotted line, rows of orange squares, some of which are marked "e", with "learning outcomes" on the left and "judgement" on the right]
...the "top down" approach—I don't know why he chose those words, but his point is this: Set the standard, get the students to gather the evidence. That's what the "e" stands for, it's not what you thought it stood for, it's "e" for "evidence". This is a focus on learning outcomes and making judgements. So the student can say...
[Slide—Rock star/Good enough/Not yet students again, but with "Expectations" replaced by "I meet these expectations and here's my evidence"]
..."This is who I am. I met the expectations, and here's my evidence. Here's something I can hand across the table when I'm in an interview and say, 'Actually, yes, I have done this.'"
Employability is about the what's-in-it-for-me factor, I think. For me, if that's what counts to the student, that's the hook, and we might even sneak up on them and educate them as well. We need to actually shift the needle a little...
[Slide—"Shift the needle" Dial with needle moving from red, through yellow, to green. At left (red), "Less on marks/grades/credits". At right (green), "More on graduate learning outcomes". Below, "Reward me for collecting, creating and sharing personal, portable, digital, warranted evidence of capabilities that helps me achieve my life goals"]
...to move a little bit less, focus a little bit less, on those marks, grades and credits—we still have to do them, that's the system. But we should actually maybe just move it a bit, so that this is the motivating factor. If I'm a student, reward me for collecting, creating, I could say curating and sharing personal, portable, digital evidence, because my employability will be linked to some website somewhere, even if it's LinkedIn or, whatever it is.
That's how I think we can assure standards in the capabilities that count.
Eportfolios: a program-wide approach—Dr Patsie Polly and Thuan Thai
Dr Patsie Polly: So, thanks, Adele, and the organisers of this day, for giving us the opportunity to really speak on behalf of our research team. And what we're presenting today is a program-wide approach to building professional skills and career-readiness in the sciences. So, we've already heard that Medicine has eportfolios, but today we're trying to highlight a new concept for our Science students. And, like Beverley, we like alliteration, and I've tried to jam as many P's as I could into this presentation, but I want you to try and focus on the idea of professional preparedness in science, and the idea is about process, not necessarily product.
So, our project involves the use of eportfolios for science students, and it's a program-wide approach, in that we're trying to build professional skills and develop career readiness and goal-setting, in addition to personal achievement recording for students in stages 1 to 4. It's a collaboration between Medical Science, Science, Learning and Teaching, and Careers, and the development of the project involves the use, or the support, of three seed grants that we've attained, surveys, critical reflection, curricular and co-curricular activities. So in terms of linkage with UNSW learning and teaching priorities, when we think about the present, the project is well matched with embedding graduate attributes—in terms of career awareness and employability—into programs, improving program coherence by integrating courses across stages 1 to 4, improving the quality of both informal and formal learning. In terms of the priorities in the future, we've also got good linkage, in that we're attempting to improve student experience of learning, improve student learning outcomes, in addition to increasing the level of effective use of technology to enable and support student learning and teaching.
So the overall aim of this study is to improve graduate attributes, and at a program level we're talking about professional practice and perspective, and reflective practice among our science students, okay?, and to develop competitive graduate employability and career readiness skills in these students, and also improve students' self-motivation. When we're looking at course-specific aims—in this case we'll look at stage 3A as a case study—we're thinking about career-path understanding in current course learning and teaching. Also the development of research skills, and what this involves is team work, laboratory work, oral and written communication skills.
So the significance of the study is that eportfolio for science undergraduates is a very student-centric resource, for the purposes of reflective practice on learning, professional skills and career pathway awareness. So if we think about eportfolios and the pedagogy behind it, the way we've drawn it out is that we've got this big loop heading upwards, where we have professional skills in scientific skills and ability at the very foundational base of the project and we're trying to build in a process of reflection. And that's what teaches students to better understand, not just the content of their courses, but how they're going to engage with that content—and also understanding their personal learning style, in other words, the strengths and weaknesses.
When we're thinking about educative spaces, we think of the eportfolio as a self-directed, individualised approach to learning, and this promotes lifelong capabilities. Independence, control and engagement are the attributes we want to build in students, and of course these are interrelated. When we're thinking about career development, the first two stages are very important in building an awareness for those students in how to document and reflect on their career development. Once again, this is interrelated with the other two stages.
Thuan Thai, Medicine: So eportfolio as the technology helps students to develop skills that help them to self-regulate learning, and to become responsible for their own learning beyond the walls of the classroom. It also helps them to engage individually as well as collaboratively using the eportfolio, and lastly to give feedback between students, and also for teachers to give feedback to the students.
So, Mahara eportfolio is integrated into the Moodle system, and you could think of it as a professional version of Facebook. It allows communication between the students, and communication between teachers and students as well. So, simply put, it encourages students to collect, select, reflect and connect.
Now there are many benefits of using eportfolio as a form of assessment. For example, eportfolio assessment can act as a sustainable form of assessment, that allows the student to identify their own learning. It also allows them to make judgments about what they learn, and prepare themselves for future learning. Also, it encourages the student to become the assessor of their own work and their peers'. As well, it also encourages the ability to notice quality in their own work, and also be familiar with the standard that is required.
Dr Polly: Okay, so let's think about the program-wide use of eportfolios, and the way we've mapped it out is via a series of arrows, and I hope you can see some of the text that we've put in there.
So at stage 1 we have very foundational courses in medical science and advanced science.
And if we arrow into stage 2—and we're talking about an approach now through the medical sciences, more specifically pathology; we're talking about a course called Processes in Disease, which is a core foundational course offered to medical science students—further on, when we're thinking about professional readiness, we're applying various assessment tasks to build this professional readiness and skill base.
In stage 3, we're using it in stage 3A, Molecular Basis of Inflammation and Infection, and stage 3B, Cancer Sciences. Now, both these courses focus on professional skills in research.
Into stage 4, which is the pointy end of the degree, as Julian would say. It's an issue of career awareness. So by the time students get to Honours, they should have in some ways built an understanding of where they could end up on completion of their degree.
Thuan Thai: So academically these students perform excellently. However, the pathways that are available to them on graduation are not always known. So I've highlighted some of the key pathways that are available to these students, and this includes further studies, such as doing a Masters or a PhD, as well as applying for postgraduate programs such as Medicine, Dentistry or Pharmacy. They also have the option of going into research, and this includes basic research, clinical research and industrial R&D. And other industries that are available to these students include sales and marketing as well.
Dr Polly: Okay, so let's focus in on the course use of eportfolios, and more specifically Molecular Basis of Inflammation and Infection, what we generally refer to as PATH3205. It's a stage 3A course, in other words it's offered in Semester 1. And when we're thinking about professional readiness, we're focusing, again, on research skills. So, the way we try and build some of these research skills is not only scaffold some of the process, but use an assessment strategy. Eportfolios are important in that they allow for recording of laboratory research, interpretation and findings within discipline, okay? So in our case we're thinking about pathology and, more specifically in terms of research, inflammation and infection.
So how do we do this? It is via assessment, and we try and build capabilities in research, team oral presentations and laboratory-based findings.
So, the first assessment task is based on research team presentations and it's worth 20% of the course, in terms of assessment schedule. And we call this "collaborative by teacher design". In other words, we've pre-formed groups of students, and they work collaboratively to come up with their research presentation. The other assessment task we've built into the course involves research lab or laboratory reports, and they're worth 10%, and this is "collaborative by self-directed interest". So in other words, we've started the eportfolio, and the students are allowed to engage with that eportfolio to start discussions between themselves. So it's not really group-related, it's individual based, and it's up to them to start talking about their lab-based findings.
Thuan Thai: So the project outcomes are to improve students' professional skills, which includes being able to build a skills portfolio, as well as developing skills that are transferable as well as discipline-related skills. And I've highlighted the fact that we also want to help students understand the graduate recruitment process and the timelines involved, and be able to write a tailored resume and cover letter. A lot of these elements will feed into their ability to understand career awareness skills as well, such as the items listed here.
So I want to make it clear that we are still going through this research project as we speak, and we're only at the start of the actual research itself. However, so far our results from the pre-eportfolio questionnaire show that the average student's confidence is only about 55% in being able to identify very basic things, such as knowing at least three different jobs they could do after graduation. And another low-confidence area is that they know very little about the graduate recruitment process, which is a focus area for us.
We've also highlighted two other key weaknesses, and these include degree-specific skills, or technical skills, as well as the transferable skills, that I think eportfolios will help us address.
Dr Polly: So in terms of using eportfolios, if we're summing up, we want to build graduate capabilities and professional preparedness. If we look at this triangle, which is comprised of four small triangles representing what we're trying to build in our science undergraduates for completion and entry into the world, we want them to be leaders, scholars, professionals and global citizens, and the types of skills that we're building obviously feed into each of these aspects.
So it just leaves us to conclude that we'd like to acknowledge our colleagues Jia Lin, Nick, Julian, Kate, Adele, Bill, Fiona and Mita. Thank you.
Rubrics and Assessment: the UNSW Experience—Tim White, Mechanical Engineering
G'day everyone. My name's Tim White. I'm from Mech. Eng. and I've been here as a lecturer at uni for just over 12 months now, and the first course I inherited was ENG1000. ENG1000 runs across the Faculty of Engineering; it's a first-year introductory course, and we get about 1300 students across the faculty, of which about 250 choose the Mech. Eng. project. And those students form into groups of somewhere between 4 and 6, so on average about 5 students per group, and that happens in about week 2. Then over the next 10 weeks or so of session, amongst other things what they have to do as a group is build a prototype machine to achieve some particular kind of task. And it's pretty hectic through session, I guess, lots of first years come in. Often it's, well, it's their first time at uni, usually their first session of uni, first time away from home, all that stuff. So there's a lot of hand-holding that goes on through session, as well as trying to introduce some basic skills to them, and when it comes time for assessment, most academics feel the same way that that feller sitting down the front there looks—that's a picture of Clancy [Auditorium] in week 1, with about a thousand students there. And trying to assess all those students in first session, even when they're split up into their individual schools, is quite a challenge.
What we have at the end of session is a competition where the machines are actually tested for performance, and until I came along 12 months ago the way that was usually done was, there'd be not really a rubric per se, but there'd be a marking scheme sheet handed out to tutors who were milling around the room on the day of the competition. And that's an example of the kind of sheet that we'd use. There was a rubric background to that, which tutors would get emailed in PDF form before the day, but because most of the tutors are casual staff they would, of course, just have a cursory look at that, and then on the day itself they'd just have their marking sheet there, and they'd walk around and mark the teams as they competed in various activities. And there are the usual problems with this system—I'm sure most of you who have large classes would be aware of them: if you have several different tutors walking around, and you've got different tutors marking different aspects of the devices—or in some cases there'll be some duplication, 2 different tutors will actually mark the same aspect of a device—because they don't actually have the rubric there to refer back to, there will of course be some variation, and a diligent academic of course will do their best to normalise the marks between tutors, but invariably you'll end up with those inconsistencies between different markers. Also there's lots of paper to collect at the end. Again, because students are new to this they might not realise that they need to get one part of their machine marked by one particular tutor, so sometimes you might get 3 marking sheets come in for essentially the same assessment. So lots and lots of paper at the end, also, to upload into Moodle, or even just for your own record keeping; of course, you need to transcribe those marks into Excel or something like that at the end, and so there's a fair chance of transcription errors.
So basically the whole thing was just a shit fight, and certainly that's one of the reasons I got this job, because I was young and naive and didn't know how to say no. There's an artist's impression of my predecessor, who now is teaching some cruisy fourth-year course, and that's probably what I would have ended up like, until John Paul Posada from the Learning and Teaching team in Engineering came along and introduced me to iUNSW Rubrik. Note the "i"—it's cool, it's hip, and you say that to the students that you're marking with iUNSW Rubrik and they think, "Oh, that's pretty cool—that's something we can relate to."
I can't really do justice to it here—I suppose it's called an "application"; it runs on an iPad—but there's a screenshot down the bottom there. And what we have with iUNSW Rubrik: we have one tutor who gets trained up, and that only takes an hour—again, if they're a young, hip tutor they already know how to use an iPad. So instead of having all these marking sheets out there, maybe getting duplication between marks, we can use tutors out there on the ground essentially just for crowd control; the academic and one tutor can walk around and systematically try to score—usually successfully—each team; the mentor's there with their very dexterous fingers, running through the scoring rubric; then, if I have a brain-freeze or something like that, all the rubric's there on the screen, just a tap away, to go back and find out, oh, they achieved this particular outcome with their machine, so how many marks is that worth? And also if a particular team has a quibble with the mark they got, that information is there for them to have a look at.
But maybe one of the best things with this system—obviously, from an assessment point of view the consistency's a good thing, but we can also actually have the results to the students on the day rather than waiting a week to get their answers back to them. So that means we can hand out a trophy at the end of week 13, and most students go away with a warm, fuzzy feeling.
And if you want to know a bit more about this, there's a link up there to a video I did; it's on UNSWTV. So maybe that can be shared later on. That talks a little bit more about the process, a little bit more about the app. That video was taken about 36 hours after the birth of our first baby, so I wasn't really at my best then, so don't concentrate too much on what I'm saying, just sort of have a look at the content.
But I guess the main message I want to leave you with today: Rubrics, of course—well, you all know what rubrics are basically about, and I'm pretty sure, if we have not already, we're all keen to adopt them in one form or another. But with a little bit of extra effort, by converting them to an electronic format, rubrics can certainly be a powerful weapon in your arsenal to improve the experience of both students and teachers at university. Thanks.
Rubrics and Assessment: the UNSW Experience—Brigette Fyfe and Christine Vella, UNSW Global
Hi. I'm Brigette Fyfe. This is my research partner, Christine Vella, and we teach at the University of New South Wales' Institute of Languages, where we're working on researching student use of rubrics.
So the international students we work with have received offers to take up postgraduate studies here, conditional upon successful completion of the 10-week UEC direct entry course. The program's designed to prepare students for all the rigours of graduate study, with a strong focus on academic writing, and our institute's developed an academic writing rubric for use with all written assessments, a full version of which is provided to the students in their course notes.
Although course evaluation surveys had indicated that our students valued the provision of the rubric, what came through was they felt that it simply wasn't enough, that it needed to be explained more from a student perspective. So Christine and I conducted research. We were examining whether use of the assessment rubric—taking it beyond its principal function as an instrument of measurement and utilising it as a teaching tool—would lead to better understanding of the criteria. A challenge we faced was the fact that many of our students didn't trust their ability to analyse their own work; they preferred the teacher to be the sole appraiser. So in order to give students the skills and confidence to perform self-assessment of their own writing against the rubric, and the ability to act upon this assessment, Christine and I used a scaffolded workshop approach to peer evaluation of student writing samples in class.
So just a little bit about how we did that. We began with the rubric, and working to a design brief outlined in the curriculum spec we developed a series of reflective lessons which unpacked the language in the rubric for accessibility. Just to give you an example: So the reflective lessons focused on our "band 7" of our rubric, which represents a level at which students would be successful writing in postgraduate courses. We developed 4 lessons to target each of the 4 criterion sets contained in the rubric. So, as an example, at band 7, according to the rubric, a "successful task response"—that's one of the criteria—"is relevant to the task; presents, develops and supports main ideas; includes clear evidence of analysis" and so on. Therefore in the lesson students were asked to talk about the elements of a successful task response, deconstruct peer essays and identify successful examples of analysis.
Christine Vella: Okay, so I'd just like to fill you in a little bit about our strategy, our approach to using the rubric as a teaching tool. And it actually aligns really well with what Beverley was saying, two of her key words there, "expectations" and "evidence". So, unpacking the rubric was all about expectations, showing the students what we expected of them, and then assessing peer samples of writing, and determining the success of that writing—how well it met the criteria in the rubric—was all about giving them insight to expectations and showing them the evidence that met those expectations.
So we started by putting students in small groups of 3 or 4. We asked them to assess a student's writing against the rubric and to discuss how well that writing met the criteria outlined in the rubric. The students were also required to give that piece of writing a band or a standard, and to come out and share that with the class and justify why that writing had been assessed at that band or that standard. And then they were asked to give the writer of that text some suggestions and feedback to act on in future writing.
So after several weeks of working in small groups and gaining confidence with both the rubric and with peer assessment, we buddied students up, and they stayed with the partner for the remainder of the semester. And now the requirement was for students to email each other soft copies of their texts, and to assess their buddy's text with in-text feedback, and finally to meet with that buddy and to exchange their feedback, to clarify, negotiate and reach an agreement on each other's writing. And we as teachers were just there to facilitate that process.
So what we found was that once students had the confidence to assess in a group and then one peer's work, they were better able to monitor their own work. And that was a big step, because, mostly—we work with international students. They don't really have the confidence to assess their own work—they think that the teacher has all the answers. And if you get them to assess another student's work, they jump into the grammar. You know? "This sentence isn't grammatically correct." They don't see the big picture; they just focus on the small. So the more that they peer assessed, the better their ability to monitor their own work independently, and this was evidenced in the final stage of the program, where they reviewed their own work against a rubric and then came to the teacher to discuss their self-evaluations. And we noticed a huge improvement in the students' writing. In the summative questionnaires and interviews that we carried out, 95% of the students felt that the use of the rubric was linked to improvement in their writing, and they were also able to articulate in which ways. And the most common response from our students, in these interviews and summative surveys, was that they greatly appreciated knowing the standards that they were supposed to be working to, having the rubric to guide their responses, to inform their responses and to evaluate their own writing.
And their confidence grew over time, in peer and self-assessment, which was possibly the biggest challenge, but also the most worthwhile outcome of our program and our research. And finally, for the Institute of Languages, this use of the assessment rubric as a teaching tool has improved standardisation in marking of summative assessments amongst teachers and given our students greater confidence in the whole assessment practice and assessment process that we use over at the Institute of Languages, because of the transparency.
Rubrics and Assessment: the UNSW Experience—Professor Phil Jones, Medicine
What I want to talk about—I want to continue the spotlight on program-based assessment, but focusing on just one program, which is a Medicine program. In 2004 we introduced a major revision to our program, and one of the features of it was to adopt a program-based approach to assessment. Our program is a 6-year program, divisible into three 2-year phases, and basically across every phase within courses there are obviously assessments, and we also have end-of-phase assessments as well. Basically, we defined at the beginning of all this 8 graduate capabilities that our students need to work towards, obviously for graduation, and as such we defined standards, or certainly defined goals around that. We also broke those down so that in each phase there's a development around those sort of expectations; the students go through phase 1, 2 and 3 etc. Every assessment we do is mapped against those capabilities, and we've recently actually introduced an item bank in which every item, be it a multiple choice question, an essay paper, a [station?], an exam, will actually be mapped against the relevant capabilities as well.
Now, there's a poster we've got today to demonstrate the outline to what we've done. I want to just focus on one particular capability, and one aspect of it within the program, just across Phase 1. This is the capability around students developing skills, directing their own learning and being able to critically evaluate information. There are basically two broad goals in Phase 1. One is around directing learning; I'm not going to detail that at all. The other is around finding, evaluating and synthesising evidence. And you can see the expectations we have there; and basically these are primarily met—particularly the first group obviously—primarily met within written work that students undertake. So that basically, for every piece of written work that is done over the 6 years, it is assessed using the same criteria, around self-directed learning and use of evidence. We will look at their selection and use of information sources, how they have addressed the expectations of the assignment or a group project and how they demonstrate critical thinking in using that evidence.
We use a 4-point grading system in every assessment in the program as well, except for obviously written exams. It's a 4 point 'cause we actually have a Borderline group as well as the Fail, Good and Exceptional. The rubric we basically use is essentially developed around Biggs' SOLO taxonomy, so we sort of use a fairly generic approach to those 4 grades, and we contextualise them, obviously, depending on the particular items/exam that the student is—or essay that they're required to do.
Basically all the work, all assessments, all written work, gets submitted into an electronic portfolio. And this is an illustration of a Phase 1 portfolio from one of our students, reproduced with the permission of the student—not one of our students who's actually sitting in the audience, but just to illustrate what a Phase 1, what this part of a Phase 1 portfolio looks like.
Basically, as I said, every piece of written work is submitted electronically; it is graded electronically, grades are entered electronically, and examiners' feedback on the capabilities, the criteria, is entered electronically. At the end of the phase the student presents their portfolio—I don't mean formally—basically submits their portfolio for a review. That review is undertaken in conjunction with them also submitting a reflective essay in which they discuss how they have been working towards developing each of these capabilities across the phase. So they have the opportunity to discuss where they may have had some initial deficiencies, how they responded to the feedback they were given, where they have performed very well, and where they may still have some deficiencies, and to outline their plans for the next phase. And that portfolio review, as I said, happens at the end of every phase. At the final phase, end of sixth year, we actually interview the student as well. We interview students to get into Medicine; we interview them to let them out of Medicine. So basically they come to that, and have the opportunity to discuss their portfolio with us.
We do actually review the whole portfolio, because we are seeking to make a collective judgment. We are seeking to judge by looking at patterns of performance, and also to look for evidence of development. The portfolio examiners don't look at the individual items within the portfolio; they're looking at the pattern. And this is the sort of pattern they see. Basically, for Phase 1 there are 8 courses. In each course the student has to complete an individual assignment—it's an essay they do. We give them a list, typically of about 6 assignments, that they choose from, and if you go across the row you'll see there is a grade lining up with different columns, and those columns reflect which capabilities the assignment the student has done focused on. So, if you look at the first one, it's Developing a Clinical Understanding of Pneumonia, and it has a focus on using basic and clinical sciences, so the student would obviously be talking about aspects of, say, the pathology of pneumonia; and then, for the other capability there, patient assessment and management, they'd be talking about how they're relating their understanding of that pathology to what happens to a patient clinically, what patients experience when they have pneumonia.
So they get a grade for that. And they choose these assignments according to how they have to develop their portfolio. So if they've been weak in a particular capability, they'll choose an assignment in the next course which allows them to address that capability again.
You can't see it, but there is one assignment there—it's the fifth one down, I think—that we refer to as a "negotiated assignment". In Phase 1 the students have to design their own assignment as well: they have to describe a task they're going to do, what the topic is, what the task is; and they also have to describe the assessment criteria that will be used for that assignment. Provided we approve that, it then becomes their assignment for that course. This student did an assignment on breastfeeding amongst different cultures. So they get to choose a topic, but they have to actually define the assignment and define the assessment criteria.
I want to direct your attention to the last 3 columns, 'cause it comes back to that capability I was referring to, self-directed learning. The final 3 columns are generic capabilities that are assessed in all assignments, all written work: effective communication (so, their writing, can they write a sentence etc.); their use of the evidence, what they're citing; and also what's referred to as "reflective practitioner", which is one of the capabilities where they have to demonstrate what they have learnt from this and how it has impacted their understanding of the area. And what we look for when we come to look at this—
Well, firstly just to say what function this fulfils.
- One function is, it's an excellent mechanism for providing feedback to students—clearly there's a grade, but the portfolio also consists of all the written comments there. And so the student, if they have underperformed in one course, even though they may have passed that assignment, they see that they need to address this particular capability in the next course and they've got feedback to guide them in that.
- It also serves a summative function for us, because we do make a collective judgment; if we see that a student, even though they have performed well across the particular courses, is underperforming, for instance, in self-directed learning across the board, they will not progress past that phase. If they are underperforming in the whole portfolio, they will be excluded from the program. This in fact has become our major mechanism for identifying poorly performing students—even though they're passing knowledge-based exams, if they are underperforming in all this work, our rules allow us to exclude them from the program.
- And it also serves an educative function: because we've been doing this for 8 years now, we have a lot of data on how students are performing in these different capabilities. It allows us to look at the curriculum, and where we need to improve it. It allows us to look at student groups—so international students, who may have issues particularly around writing or using literature, and issues around plagiarism—to identify those issues and work specifically on them in the curriculum.
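The pattern-based portfolio review described above—a collective judgment made from grades across assignments and capabilities, rather than from any single item—could be sketched roughly as follows. The 4-point scale matches the grades mentioned earlier (Fail, Borderline, Good, Exceptional), but the point values, capability names and threshold are illustrative assumptions, not the actual UNSW Medicine rules.

```python
# Hypothetical sketch of a pattern-based portfolio review.
# Point values for the 4-point scale are an assumption.
GRADE_POINTS = {"Fail": 0, "Borderline": 1, "Good": 2, "Exceptional": 3}

def flag_weak_capabilities(portfolio, threshold=1.5):
    """portfolio: one dict per assignment, mapping capability -> grade.
    Returns capabilities whose average across all work falls below
    the threshold, i.e. underperformance across the board rather
    than a one-off bad assignment."""
    totals, counts = {}, {}
    for assignment in portfolio:
        for capability, grade in assignment.items():
            totals[capability] = totals.get(capability, 0) + GRADE_POINTS[grade]
            counts[capability] = counts.get(capability, 0) + 1
    return sorted(c for c in totals if totals[c] / counts[c] < threshold)

portfolio = [
    {"use of evidence": "Borderline", "effective communication": "Good"},
    {"use of evidence": "Fail", "effective communication": "Good"},
    {"use of evidence": "Borderline", "effective communication": "Exceptional"},
]
print(flag_weak_capabilities(portfolio))  # -> ['use of evidence']
```

In the scheme described, a student flagged in one capability would choose their next assignments to address it; a student flagged across the whole portfolio would not progress.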
Rubrics and Assessment: the UNSW Experience—Carolyn Cousins, Australian School of Business
I'm going to give you just a little peek into our experience in the ASB, using rubrics over the last few years, and give you a little show and tell, as well.
Now, we were propelled into using rubrics by our Assurance Of Learning requirement for International Business School accreditation. And that involved assessing and assuring students' performance against program learning goals, the sort of graduate capabilities that Beverley's talking about. We have 22 programs, so we ended up with about 150 learning goals and rubrics. Our aim, then, in developing these rubrics was to try to combine consistency with flexibility. By consistency I mean using the rubrics to develop a consistent or shared understanding of expectations, and a shared language to be able to discuss those, between staff and students and across disciplines. And we're talking about not just within courses, but across courses in a major, within and across disciplines, within and across programs—so it's quite a challenge. But if you can manage to get your consistent criteria and some agreement on standards, then you also get more consistent marking, feedback, and a more consistent sense, for students, of what's required throughout their degree, not just in individual courses.
On the other hand, rubrics have to be—they're not one-size-fits-all, they have to be adaptable. You have to be able to adapt them for a particular course, for a discipline, for levels, for an assessment task, and they have to be user-friendly.
So we had quite a lot of challenges. We've been doing this for about 2 years now, and as I said it was determined and driven by Assurance Of Learning processes, quite a rigorous system that we were following. So just to give you a little bit of an indication of where we are in this journey, and where we're going—this never-ending journey, or—it feels more like a merry-go-round that you can never get off [general laughter]. In the last couple of years we've been developing rubrics, sort of a base rubric and all of these 150 rubrics, and we've been trialling them. What we've been doing—what we had to do for Assurance Of Learning—is assessing random samples of students, just piggy-backing on existing assessment tasks, and assessing them against the rubric. Sometimes staff have used the rubric as their regular assessment guide, so it's just been embedded, and sometimes it's just been used alongside—so sometimes the students have seen the rubric, sometimes they haven't. So this is just our development/trial stage, and what we've actually learnt from doing this, in many disciplines and at many levels, is that we can already see enormous benefits from it. It's enabled us to define much more exactly what it is we're actually wanting students to be able to do, and by assessing and analysing the results we've been able to see how well they and we are going, and where we all need to do better or do more. And I'll just leave the last two bits there for later; that's sort of where we're up to—at the moment we're at the reviewing and streamlining stage, and just about to launch into the next stage, which is going to be rolling these out as marking guides across the Faculty, which we're hoping to do next year. So that'll be interesting.
But now, I'll give a little show and tell. One example, Bachelor of Commerce and Ethics learning goal. First you have to make your learning goal, you might have thoughts about that learning goal—we certainly had reservations and thoughts about that learning goal—but that's what we came up with—well, that "A" is actually a learning objective; 6 is the goal, A is the objective. So that's our Ethics one.
So then we had to make our rubric. So, deciding on our criteria and after discussion with staff, looking at what people are currently doing, examples of good practice in the Faculty and elsewhere, we came up with those 3 criteria. And we've decided on—well, for Assurance Of Learning we need 3 levels, at the moment we're still working with that—I think we're going to have to tease it out into 4 or 5—but they're Beverley's "rock star", "just okay" and "not cutting it".
Now, so, that looks good, but now you get to the nitty-gritty of actually measuring this learning goal, and that's where the fun starts. So here's an example, and I chose this specifically for Accounting, just to show Beverley that our accountants are really, really good at ethics—and they are. Now, so, what is it exactly—you know, this is the sort of je ne sais quoi of a good student answer, and the not-so-good and, you know, you have to articulate, and you need to be able to be specific, but not too prescriptive, not giving students the answer, all the sorts of, some of the considerations that people are already mentioning. So, here's an example across the top. So, in deciding—say, for example, the first criterion—is it going to be qualitative or quantitative? The difference there, it's a qualitative one. I didn't fill it all in, 'cause I didn't want to hit you with a wall of text, but in the second criterion there's also a discrimination between the range of stakeholders that are identified, so there's a bit of quantitative, as well as talking about the quality of the analysis.
So that's what we did for that. Now, that was—we sort of had a bit of a base rubric, we discussed it with the Accounting lecturer, he actually used this to mark an accounting case study exam question, which he made a 3-part question. A B C, 1 2 3—it worked really neatly. We just took his results, and that was our AOL sort of reporting for that learning goal in that program.
Now, just to show you that we are trying to be flexible, we move into another undergrad program, another discipline and another task. This is a team project report in Information Systems. You'll notice that the criteria are pretty well the same, and where you're getting the difference, and the ability to modify and be flexible, is in the descriptors, in the performance standards. And they actually did have another criterion there, number 2, which was "applying the Professional Code of Conduct to actually classify the types of unethical behaviour"—but I left that out for aesthetic reasons. But we had that all filled in too.
So, what did we do with this? I went along, talked to the Information Systems lecturer, and I think the program director was there as well. We looked at [inaudible term], we just used the Accounting one, we looked at that and some others, we sort of discussed it and tweaked it and modified it and ended up with that, and then after it was used we got feedback, which we've been doing, getting feedback from all the staff who've been using them, and we're sort of continually sort of trying to tweak them. You might also notice there, little difference, we've moved from noun to verbs, we're sort of flirting with our rubrics here, so instead of "identification" we've got "identify" 'cause we sort of thought that gives students a more specific idea of what they're actually supposed to be doing—on the other hand, it's a bit too wordy, so I think we might go back to nouns, I don't know. So, I mean, if obsessing sort of anally about words is your thing, then you'll really like making rubrics.
And just as another benefit of this, so if we come back to our original program learning goals, you might think, looking at that, well, that's a bit vague. And we sort of thought it was too. But we weren't sure what we wanted to say. But using different rubrics, going through this process, we were able to clarify a bit more exactly what we wanted, and this semester we've been looking, taking stock, reviewing, revising the rubrics and the learning goals, and here's our new improved Ethics learning outcome, which I think is much more doable, much more specific, much more helpful for students.
So it is really a process of continual improvement, and I suspect that'll change again, but that's life.
And if we could just go back to the first slide, and back to the bottom there—so, what we're doing this semester: we're reviewing our learning goals, and also reviewing the rubrics. We're trying to streamline them—I mean, 150 is just too many—so we're trying to standardise the wording a bit, looking at where the commonalities are, and simplify the rubrics, make them cleaner, simpler, more user-friendly—and more consistent, so that if you're doing a course in a BComm, but your actual program is something else, then your rubric and your learning goal [inaudible word] are pretty close. So we're sending these out now for feedback from the staff—so far it's been a fairly small development process, but now they're being disseminated to staff. Next semester, program and discipline teams will be working on modifying rubrics for their contexts and agreeing on performance standards in their programs. And then next year is where the fun really, really starts: we're not going to talk about ASB graduate attributes any more; we're going to replace those with program learning goals in course outlines and assessment instructions. We're going to try and roll the rubrics out. And what we're hoping to do is to enable people to combine elements of these program-specific rubrics into their own holistic marking guides, so that there'll be a sort of common language: if you've got critical thinking, written communication and discipline knowledge in your report marking guide, then the language and the standards will have some similarity with somebody else's standards in another course, in another program. So that's what we're doing there.
Yep, so, and it involved lots of resources, lots of support, both for staff and for students, in making this happen, but that's for next year. Okay, thank you.
Assessment As Learning—Dr Adele Flood, Learning and Teaching Unit
Below is the transcript for the video on page Designing Assessment As Learning.
When we decided that we would call this the Assessment Project, we had to think about what did we mean by "assessment" and so we coined the phrase "Assessment as Learning" to indicate just that: that the student, when they undertake assessment, they are actually learning to do something. They are not trying to regurgitate what a teacher might want them to say back to them, they are not trying to become the teacher, they are not trying to interpret things in the way that they think other people need them to interpret them, but rather that assessment is there to establish what they need to know themselves, so they can identify their gaps in their learning and they can then pursue those gaps to fill them up, really, and to encourage them to understand the process of how to then enter a new phase of understanding.
It’s really important that assessment is made relevant to the student. The student has to understand why they are being assessed in a particular kind of way. A lot of teachers think that it’s a secret. They know that the students need to pass these tests and they set them up so that the students are surprised or a bit anxious, and I don’t agree with that at all. I think that assessment should be a known quantity in the classroom, that it should be talked about initially with the students, and that it should be made clear what they are being assessed on, why they are being assessed, and how that is going to stand them in good stead in the future.
Assessment is a two-way conversation really and it’s the teacher’s responsibility to make sure that the students understand what the learning is for them and how they can then take that with them and use it in their future life once they have left the university.
Below are the transcripts for the videos on the page Designing Assessment as Learning.
The activities that we did throughout the course were, nearly completely, the same activities that I got students to do last time I taught it [the course] two years ago—the same sorts of activities in design, with one exception I'll tell you about in a sec. What was really different was that they weren't for marks any more. I wanted them to reproduce the normal course, but I wanted, at every point, them to be taking a path not because I was chasing them behind with a stick, but because they were striding forward and that was a good way of going. And if it wasn't a good way of going then they should think about it, and if they were thinking it was a waste of time they should tell their tutors and the tutors should tell me and we'll engage in a discussion—it'd be ridiculous for them to do something that's not good.
So in some sense I wanted them to feel complete freedom. In the end they ended up—I reckon it's a good way we've got of doing the course, and certainly from the feedback I've got from lots of other students, it's incorporated lots of suggestions from students in the past—it's over time evolved, I think, to be reasonably okay—that's not to say it couldn't be infinitely better, but it's not a terrible thing. So I wasn't, at the end of the day, forcing them to do pointless or banal activities that a moment's thought would make you realise were ridiculous, like lots of the stuff I did at school. But nonetheless I wanted them to be choosing to do the things, so if they didn't want to do something, they really didn't have to do it. I didn't say it in those words, but that was the story: you didn't have to do anything you didn't want to do. That was really the big change—can you see that? I'm not explaining it very well, but it's a conceptual change. It wasn't as though I changed the assessment activities in some dramatic way—though I did, in one way that I'll tell you about in a sec—it was rather that I wanted to change their attitudes towards assessment. And really, more than their attitudes towards assessment, I wanted to change their attitude towards their learning; I wanted them to feel that they were in charge of their learning. And I don't think I can say that to them and then micromanage them as well, and force them to do things—they really have to be in charge of their learning.
Like my children—I try and give them the ability to make their own choices whenever possible. I hope that they'll do the right thing, and I'll try and set it up so they will do the right thing, but at the end of the day they did have the choice to do the wrong thing, and I think that makes them doing the right thing a far more meaningful activity when they do it, both from my point of view and from theirs, because they're given more autonomy and control. You know, if you have more ownership of something—certainly for me, if I have more ownership of something, I'm much more interested in it, and in my own learning. When I've done something that I've wanted to do, and it's worked, I've felt so good, and if someone else is just shepherding me through and it works at the end, I think, "Ah, yeah". And I've felt so good, and I never forget that—well, I forget everything, but certainly I have all these glowing moments from my education where I remember I did this and I did that, and that was my idea and I tried this and it worked! You know, they're fantastic and I'm really proud of that, and that's the sort of positive reinforcement and jolt to the system and flush of emotion that I want them to have constantly. Because at some point they're going to leave us, you know? They're going to leave university and leave me and leave all the teachers that care for them so much. And when they leave us, they've just got to keep going on that same trajectory; it can't be that everything goes bleurgh. It's like staking a tree—you shouldn't be staking a tree, it should be growing strong by itself, because at some point you've got to take the stake away.
Engagement as gamification
I'll tell you some of the fun things we did, because we did a whole range of things to try and motivate the students. Once the marks motivation had been taken away, here are some of the things we did. And each appealed to different groups of students differently.
One was, in the online environment where the tasks are all distributed and the weekly activities are distributed and the tutor exercises are distributed each week, each student can see on their home page, on their own personal page, what tasks they have to do, which ones they've done, which ones are still outstanding, and as they do each one it gets ticked off, and there's little coloured symbols that show them if they've started it or haven't started it yet, or if it's late now, or if it's on time still, and if they've done it but it wasn't quite correct, and if they want to they go back and do it again and get it more correct, fix it up. And as you do each task, we had a big green bar that appeared everywhere they went on the site, a bar that travelled round with them at the top of each page, their own personal progress bar, and if you'd done all the activities you had to have done up to a certain point it was completely green, but if there were ones that were late that you hadn't done you got a bit of red, and it was all done percentage-wise, basically. So everyone could see when they hadn't got it perfectly right or hadn't got everything perfectly done—it was sort of visible on every page you went to, so a whole lot of this is gamification, these ideas of drawing from game strategies, from, you know, online games. So there was a large group of people who were really just interested in keeping that bar green. So whenever a new task came out they did it because then the bar was green everywhere they went—it wasn't worth any marks, but it was worth a green bar, and for them that was fantastic.
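As a rough illustration of how such a progress bar might be computed—a sketch under assumptions, since the talk only says it was "done percentage-wise"; the exact rule here, counting only tasks already due, is a guess:

```python
from datetime import date

def progress_bar(tasks, today):
    """tasks: list of (due_date, done) pairs.
    Returns the green percentage: of the tasks due on or before
    today, the share that are completed. Tasks not yet due don't
    count against the student (the 'on time still' state);
    due-but-undone tasks show up as the red portion."""
    due = [done for due_date, done in tasks if due_date <= today]
    if not due:
        return 100.0  # nothing due yet: the bar is all green
    return 100.0 * sum(due) / len(due)

tasks = [
    (date(2012, 3, 5), True),
    (date(2012, 3, 12), True),
    (date(2012, 3, 19), False),  # late: contributes red
    (date(2012, 3, 26), False),  # not yet due: ignored
]
print(progress_bar(tasks, today=date(2012, 3, 20)))  # about 66.7
```

The design point is that the bar resets to fully green the moment a student catches up, which is exactly what sent students back to four-week-old exercises.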
And this incredible thing happened, that—some of the tutors told me that—one tutor said to me, "In the lab this week, a student came up and showed me activities from four weeks ago that he hadn't done then, and he'd done them since, and I needed to tick him off for some of them"—for some of the activities the tutor needs to tick the student off; it's qualitative marking. And he said, "I've never seen this happen before." Normally, if people don't do an exercise—'cause you don't get a mark for it now—no one ever goes back to it, so what you don't do one week is just gone forever. But these students were going back over what they hadn't yet done, and making sure they'd done everything. That was just fantastic.
Another thing we did was, I kept track of the correctness of their programs. Like, if you submitted something and it was incorrect, and you submitted it again and got it correct, then we recorded that as one unsuccessful submission and one successful submission, and the proportion of your submissions which were successful was called your correctness, and there was a little bar, your correctness bar, that said, essentially, the proportion of things you've submitted [that were correct]. Because it's really important in computing to make things correct—not just to keep submitting, submitting, submitting, hoping it's correct, hoping it's correct, and never actually getting it right, but to actually have—this is a scepticism thing—some sort of confidence that what you've done is actually right, and to actually test it before you submit it and so on. And for some students, some of the better students, it was easy for them to keep the bar green, but they were going for the 100% correctness. And there were a whole range of stats that appeared on your page, and depending on how strong or weak a student you were and how much experience you had, you'd just pick different stats to try and optimise. So everyone, potentially—I shouldn't say everyone, but certainly many, many people—were trying to optimise their statistics. Not for any marks, but— How great is that? So, I really like that.
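The correctness bar as described is a straightforward ratio over all submissions; a minimal sketch (the display details are assumptions):

```python
def correctness(submissions):
    """submissions: one boolean per submission, True if it passed.
    An incorrect attempt followed by a correct one counts as one
    unsuccessful and one successful submission, so testing your
    work before submitting pushes the figure towards 100%."""
    if not submissions:
        return None  # nothing submitted yet: no bar to show
    return 100.0 * sum(submissions) / len(submissions)

# Three attempts at one exercise (only the last correct) plus one
# exercise right first time: 2 successful out of 4 submissions.
print(correctness([False, False, True, True]))  # -> 50.0
```

Because every failed attempt permanently dilutes the ratio, the metric rewards exactly the test-before-you-submit habit the lecturer wanted.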
Every time you helped someone else or gave advice to someone, if they liked they could click a thank-you button, or a I-like-what-you-just-did button. There was, like, essentially, a thumbs-up—and there was also a thumbs-down, if you thought that someone was being mean or anti-social or you didn't like what they'd done. It was anonymous if you liked someone or disliked them. And whenever you helped someone and got a thumbs-up, it built up your karma; we had a complicated formula to build up karma, and [inaudible] your built-up karma, and that's displayed with your profile picture wherever you go, is your karma level, so those people who really liked helping people were starting to get big karma, so they were trying to optimise their karma!
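The karma formula itself isn't given in the talk—only that it was "complicated"—so the sketch below is a deliberately simple, hypothetical stand-in showing the shape of the mechanic: thumbs-ups build a score that travels with the profile.

```python
def karma(thumbs_up, thumbs_down):
    """Purely illustrative stand-in for the course's unspecified
    formula: each thumbs-up adds a point, each thumbs-down
    subtracts one, floored at zero so nobody displays negative
    karma next to their profile picture."""
    return max(0, thumbs_up - thumbs_down)

print(karma(thumbs_up=12, thumbs_down=3))  # -> 9
```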
As you did more and more of the harder activities, I let your icons start to get a bit of colour in them. Everything was initially black-and-white. So some people were trying really hard to get lots of colour in things. Does this make sense? So there were these little teensy things; they weren't worth anything. You wouldn't miss a family birthday because it would affect the colour on your icon, the way you might if it's worth marks for an assignment that's due in and there's a date and a lateness and all that sort of stuff. Nonetheless, in the absence of everything else, you had these little incentives to do things.
For the really smart, or really keen, students, though, I had this other thing, which I called Puzzle Quest. And this is the thing I'm most proud of—I keep breaking into a smile, it's 'cause I'm thinking of Puzzle Quest. What Puzzle Quest was, was inside the course I hid, like, a mystery, a puzzle, and I mentioned it in the first lecture, but then I tried never to talk about it again. And you stumbled across it by accident, or if you were looking around or following instructions, and then you started to [realise]—and there was this whole alternate-reality game, essentially, we were playing, where some of the things I would say in lectures, if you started doing the Puzzle Quest you would realise that they were hints to various things; and certain students in the course weren't real students, they were fictional students, some of the identities on the forum; and some of the things I said in lectures were wrong. This is to do with my scepticism thing—I like saying wrong things in lectures. And if you started noticing which of the things I said were wrong, they'd form clues about various other things. And if you assembled the clues you realised that there was some sort of mysterious, elusive person that was on the run, and he was leaving clues for you in some sort of treasure hunt, and he wanted you to come and help him. And as you assembled the clues you started getting these little awards.
The students could form teams or however they wanted to solve them—but it was in your interests to form a team. It was quite hard to solve it by yourself. So group formation started happening, but not controlled by me. And they all had to learn how to work together and operate, and without me talking about it at all, I ended up with three teams that were obsessively trying to solve Puzzle Quest towards the end. And each team had five or six members in them; one team was bigger than that. And they—for them it was, I think, the main thing in the course. I was watching their discussion page as they set up; they were—every week they were thinking "Did this mean this?" And to solve the puzzles you had to do every activity because some of the activities had little hidden Easter Eggs in them that let you do things inside, and so it was completely enmeshed and entwined in the course. And it was the most fantastic thing ever. Yeah.
Well, I think there were challenges for the tutors. I think the tutors, who were the front-line staff, found it really hard, because my tutors are largely people who are undergraduates themselves, who have recently done the course. They are frontline—they are spending face-to-face time with the students, much more than me; they are the ones making the change, on the ground; and I'm asking them to do something that's not something they've experienced themselves already. We had some ex-students who were working at Google and [Alassion?] and really high-prestige firms—excellent students, a lot of University Medallists and amazing people like that, who had acquired these skills since leaving university—come back, and they were taking lots of the tutes. And we had some guest lectures from a couple of people who were really good. So there were lots of good role models around.
Nonetheless, probably at least half the tutors were younger than that and hadn't encountered the things that, you know, they were being custodians of, in a way, and that was a really big challenge. But they were good-natured about it all, and they were as interested as the students—or some of them were. And so that was good.
Other challenges are, I kept drifting—it's very—I have to fight this temptation to put marks on things. My whole background in marks is: you think of these very elaborate, highly fair marking schemes with a mark for this and a mark for that, and we take the greater of these two, then we peer review it, and we then vote on it, and if there's a standard deviation greater than that we fall back to a fall-back procedure, and there's arbitration procedures—all these mechanical ways of making the marks absolutely equitable. And under this new approach we had, it wasn't so important that the marks were so equitable. If you got 16 out of 20, and you got 17 out of 20, 'cause you were in a different tute and had a different marker, and perhaps if we'd swapped you and you'd been in the other tute the marks would have gone the other way around...actually, it's a "Who cares?" situation: if the marks don't mean anything, they're just to tell you how you're going. So a lot of that stuff just fell by the wayside, and realising that, I found quite hard.
Also, the traditional things about worrying about plagiarism and copying and cheating and all that sort of stuff, again that's irrelevant. If you're doing a task that you don't have to do just 'cause you want to do it, and 'cause you think it's a useful thing to do, there's no point in cheating, because you can just not do it. I guess there's marginal points in cheating, 'cause you could fake up a portfolio entry, but, gee, it's so tenuous and such a long, sort of, complicated path from doing the cheating to getting maybe an extra mark in your portfolio, who would bother going to all that effort; you could just not do it.
So all these mechanisms that we have in place for detecting cheating and plagiarism and things, which were still sort of half in operation throughout the course—it took, for me, a while before I realised I actually didn't need those any more, and that was no good.
What were the other challenges? I think, probably, because marks are a really effective way of getting people to do things, because they've been Pavlov'd to—when someone says there's a mark on this, or it could be an exam, you know, everyone sort of does it, or at least thinks about doing it. I think probably the biggest challenge of this is, once we take the marks away, we have to find other ways of motivating the students to do the stuff that I want them to do, that I think they should be doing. And, 'cause students are quite a diverse bunch, I think many things are needed. So, for example, weaker students need to be treated differently to stronger students; students who are already highly motivated perhaps need different things to students who are struggling or aren't sure that it's right for them. So I think there's a lot more thinking to be done up front in this particular approach to make it right. But that's just a scale thing—it's not a quality thing, I don't think. It's not that any of the things you have to think about are any harder than any of the other things you have to think about; you've just got quite a lot of things to think about in a big rush when you first make the change. But they're all solvable, and we're competent problem-solvers, academics—that's what our whole life is, solving problems; I'm sure anyone can do it. And once it's in place and set up, then of course the second time it's easier and the next time it's easier and it's just, really, just...bringing about a culture change, I think, is quite hard.
What they had to produce at the end of the course, they had to do a final exam, and they had to produce one piece of work that was graded, because one of the objectives of the course was, I wanted them to be able to write a serious-sized program that worked, in a group, in a team, to make it work, so at the end we graded one piece of work they produced, 'cause that was really my objective of the course, that they be able to do that.
And the other thing was a portfolio. So they had to assemble—they had to—I told them the qualities that I wanted them to develop over the course, this notion of project management, time management, you know, team work, sort of thing, and—in computing, we're not very good at that, normally, we're solo people, especially just out of school, we're geeks, we're sceptical and distrustful of others; and we're a bit obsessive, so things can go massively over time 'cause you don't want to let go of them; it's very hard to make the right design trade-off decisions and get time management right if you're a computing geek—I think, anyway. When you come out of school it's quite hard. So, I told them, "Look, really, guys, unless you can do this it's not going to work, 'cause you're not going to be able to do an interesting and fulfilling project. So we really want you to have time management and good group work and all that sort of stuff. And I really want you to have good style and write really beautiful programs, and here's what I sort of mean by that."
I wanted them to develop as a sceptical person, and in particular in computing that relates a lot to testing. When you write a program you tend to hope that it's correct. And as you know, probably whenever you turn your computer on, always you're downloading patches—programs made by even the best, largest, most well-funded companies with thousands of people are riddled with errors. So any individual person, especially someone just starting out, is going to produce code that's riddled with errors, but a natural human thing is to not believe that our work is as flawed as it is.
So I wanted them to have this sceptical approach to their own work, to test it rigorously, and we had a whole sort of testing methodology and all sorts of things like that.
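The sceptical habit described above—not trusting your own code until the edge cases have been probed—can be sketched with a minimal example. This is an illustration only, not the course's actual testing methodology (which isn't shown in the transcript); the `median` function and its tests are hypothetical:

```python
# A minimal sketch of testing your own code sceptically: probe the
# edge cases you *hope* are fine, rather than only the happy path.
# (Illustrative only; not from the course's own materials.)

def median(values):
    """Return the median of a non-empty list of numbers."""
    if not values:
        raise ValueError("median() of empty list")
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

# Sceptical tests: single element, even length, unsorted input,
# duplicates and negatives, and the explicit failure mode.
assert median([3]) == 3
assert median([1, 2, 3, 4]) == 2.5
assert median([4, 1, 3, 2]) == 2.5   # order must not matter
assert median([-5, -5, 10]) == -5    # duplicates and negatives
try:
    median([])
except ValueError:
    pass  # failing loudly on empty input, not silently
```

The point is the attitude, not the function: each assertion encodes a way the author's first draft could plausibly be wrong.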
So with these three attributes (I wanted them to get this scepticism/testing, this style/craftsmanship, and this teamwork/project management/time management thing), I wanted them to each week say something about how they'd developed or what they'd thought or insights they'd had or how they'd changed, in relation to these three things—and to back it up with evidence. And it had to be concrete evidence. And that was a portfolio. So that was quite cool. And then at the end we'd mark it, and we'd mark each of the three things, and we'd just give them a grade from fantastic on down, you know, an HD, a D, and so on and so on and so on.
It was quite interesting, because then that was a qualitative thing in a course full of people that loved quantitative stuff. It didn't happen till the end, but you got constant feedback from showing it to your tutor and seeing how it was going, and if you wanted you could make your blog public and other people could comment on it too. But it all sort of directly focused them on what they were doing, so although you didn't have to do any of the activities, how were you going to show you've got good time management, that you're good at project management, if you don't get your tasks and assignments in?
But if someone says, "Well I already knew this so I didn't need to do it," well, that's okay, you write that on your blog! You say on the blog, "I chose to do this activity and I got it in on time; I chose not to do this one because I already knew it"—but then, we're assessing them on scepticism, so we need to see some sort of "and here's how I know I didn't need to do it; here's the evidence I didn't need to do it". And so, in looking at the diaries and talking about them a little bit each week, and having them evolve, hopefully we started to directly talk about the properties and attributes we wanted them to have. Yes, so that was the new assessment we had, and that was quite freaky, yes. Quite scary to do.
I tried from the very beginning—I'd planned the first couple of lectures more intensely than I've ever planned lectures before. I must have thought about them on and off for two or three months, really, and I took scads of notes and I planned and I rehearsed it this way and that way trying all sorts of things, because I just wanted it to be that they walked into this environment where it was sort of taken for granted that this was how everything was going to be, and it seemed completely natural and normal, and, and, yeah.
So they didn't start—I don't think it's good if I say to the students, "I'm trying this new experimental thing" [laughs and gestures wildly]—you know, I'm goofy!—"I don't know if it's going to work or not. Hope it does, but, hey, it's just your education I'm fooling around with." You know, I didn't feel that was the right vibe, though I would like to give the students—you know, I don't like to keep secrets from them or anything like that, but I do think, especially when they start uni they're looking up to us as experts, and they have this sort of unquestioning acceptance of what we're doing, and I didn't want to unsettle them straight away. Though actually one of the objectives of the course was to make them sceptical, and start to disbelieve me and distrust me, but I just didn't want to do everything in the very first lecture, so I wanted them to come in and just take it for granted that that's how things were going to be.
And so that took a whole lot of planning, not only in planning how I was going to run the first few lectures, but also in the tutor training and the tutor selection; I had to make sure the tutors, who spend more time face-to-face with the students than I do, were on side and understood what I was going to do, and weren't going to undermine it in some sort of subtle way, even unthinkingly.
And then I had to make sure that the activities themselves weren't going to fall over in a big heap and leave everyone in a mess, so— There's nothing magic, really, I can't suggest a magic solution. It was really just hard work. It was just lots of thinking and careful planning, just like you do with your research or something when you're doing something bold and new and exciting. Just lots of thinking about it, thinking about all the things that can go wrong, getting lots of different opinions. I asked lots of students, not only people who'd done the course in recent years but people who'd been tutoring the course, and also graduates, asking them what they thought about all these ideas—I bounced ideas off my wife—I just listened to every idea I could about what might go wrong and the best way to pull it off.
I didn't teach them very well how to do time management. I didn't teach them very well how to do reflection. I didn't teach them very well how to do any of those softer things. That's, I mean, partly my fault for not really even knowing myself how to do those things, and partly my fault for not realising that it would be just as hard, if not harder, for them to do them as me.
I did some things along those ways. So, there was a lot of talking about project management, of giving examples of things that had succeeded and failed and talking about why, of personal reflections from my own life, of looking at students' blogs, of people being able to see them and us discussing issues. People occasionally would talk about problems they were having at work, and we could brainstorm about why the groups were being dysfunctional and not working. Because they all have a tendency—or, I think they all have a tendency, certainly there's a large tendency—that when things aren't working in a group, they all try and blame each other, and it turns into an exercise in allocating blame. And I wanted them to see, and we talked a lot about this, about how it had to change, not into working out who's at fault, but into working out what I can do to fix it up. You know, we all want this team to succeed; how do I... And seeing it really as a problem just like, how do I make this program work is: "Hm, I've got this sort of person with these weird flaws, and this sort of person with these weird flaws and this one who doesn't like talking to him, and this one who never reads his emails, and—hmm, they're all my ingredients I have to assemble. And I'm not in charge of the team; I'm just a team member, who can make comments and they can be ignored. What can I do to best enable all these things to be assembled so that together we do fantastic things?"
So we talked about that a little bit in class. We talked a bit about reflection. We talked a bit about all of that stuff, but next time I do it, we're going to make it explicit, and I'll probably get guest lecturers to come in and do it. Yeah.
Passing on the baton
One thing that was funny was, when the students left this course, and went on to the next course, it was taught again in the traditional way, with the traditional assessments and so on. And that was hard on the students, because I think, because they'd been sort of inculcated into this way of thinking, and then bang!, thrown back into the other way. And this excellent thing happened—I was so proud of them—the students set up their own open learning site for the second course, and they essentially replicated the structures of the first course in the second course themselves. They got people to give lectures, they ran up alternate and parallel tutorial classes, they set up their own assignments and exercises and things like that, and discussion groups, and they took course notes collaboratively. And they essentially mirrored all the things we were doing in the first course themselves in the second course.
So, although it was sad that they had to do it, I was so proud of them, because I thought, "These guys are resilient now. You know, I mean, they can do that after one course." Imagine when we get a whole syllabus lined up like this and they're all being empowered the whole way through—and their portfolio builds up each time, so after three courses you've got this most incredible portfolio you've ever seen, you know. How wonderful will that be?
At the end, they haven't got this intangible thing. They've got actual evidence that they can look at and be proud of, of what they've done. They've got something they could take to a boss—I couched it in terms of going for a job or a promotion, that when you go for the promotion, you've got to convince your boss that you deserve it. He doesn't want clever, smart-aleck reasons why you've technically filled the criteria for a promotion; he won't promote you then. And he doesn't need you to tick every box; if you're outstanding in some and have gaps in others, that's okay too, as long as you're honest and have a good discussion. And he doesn't have time to read a hundred pages of stuff and do all the analysis for you to see that you really deserve it. It's up to you to show him that you need it, that you deserve it, that you're right for it. It's up to you to assemble the evidence, to analyse the evidence, to summarise the evidence, to make a selection about what goes in and what goes out, and to assemble it into a coherent thing.
And by virtue of doing that, it actually forces you to reflect on the course; it actually forces the students I think to reflect on the course, and think about what they're doing—and at the end they have this thing that shows what they've done, that they should be very proud of. Yeah-yeah-yeah, so they leave the course saying, you know, example with the style sort of thing, some people were saying, "Here's what I did in the first couple of weeks; here's a piece of code I wrote; I now see these seventeen things wrong with it; here's some blog entries I wrote early on; you can see that I wasn't really understanding what style was about and I was really confused. Here's a comment I got back from my tutor on the first assignment, where he gave this praise and made these suggestions; here's a blog entry where I'm complaining 'cause I didn't understand what he was saying, and it really looked fine to me— And then here, in Week 7, is the blog entry where I suddenly realised what was going on, because I looked at John's assignment when I had to do a peer review, and I realised I couldn't understand it, even though to him it was clear as day, I suddenly realised, 'Of course, it's the same with my stuff.'" You know, you can see what you've already written yourself, but that doesn't mean other people can see it, and we're writing programs to communicate. "I just suddenly got it, and suddenly it was so clear, and then here's my second assignment, and look how it's different—bomp-a-bomp-a-bom, and here's my tutor's comment."
And it's fantastic! So you get to see, at the end of the course, how you have changed as a person over the course, and you get to think about your learning. And this is all to do with that thing that I was talking about at the very beginning, about them taking control of their own learning, and that necessarily I think means thinking about their own learning and understanding it and then monitoring it and—they become the teacher, really, as well as me. Yep.
Supporting self-directed learning
I wanted at the end of the course them to have, sort of, changed their attitude about computing. I wanted them to have fallen in love with it, really, and to start to think like a computer scientist, and to know what that meant, and to start to get a feel and a taste for it. And I wanted to give them, really, a longing to do computer science. So that, you know, uni can be a hard time, and there can be unsatisfying courses you do, and various down-times in your life, and various, you know—they're teenagers and they're, everything's a turmoil about them. What I wanted to have was that, I wanted just to fill them with this longing to do computing, so that even when they hit the hard times—if they're doing a course they're not interested in, or they can't see the point of doing, or they happen to have some crisis at home or something like that—that they would keep going, that they would remember, "I'm here for a reason, and I think it's a good reason. I really want to do this."
So I decided that I—in the past I had done things that had led to that happening a little bit, and I was really pleased with those, but that hadn't ever been a primary objective, and I thought this time, "Actually, that really is my primary objective, more than just about anything else." I figured that if I can get them to really want to do computing, to really want to learn computing, to really want to change in the ways that I hope they'll change, then they'll do lots of the work themselves, and it won't be so much up to me to make sure that I've made them put their foot here and do this and do that, you know, walk the path for them with them following me step by step, but I'll have just sort of given them a map and the ability to navigate, and I'll have given them a strong desire to get to the mountain and then they will do all the work of working out how to get there. And it might be different for each student, because everyone's different.
So that was my realisation and the reason that I wanted to change the assessment.
Authentic Assessment through Student Based Learning—Dr Patsy Pollie (School of Medical Sciences) and Gwyn Jones, Learning Advisor, Learning Centre
Below is the transcript for the video on page Assessing Authentically.
Dr Patsy Pollie: … university students doing pathology were not able to present their work clearly or communicate their work clearly in a written format, and it was also the case for oral communication. So the learning outcomes that I found that were going to need development were communication, both in the written and oral format.
So we set about putting in assessment tasks, or actually modifying pre-existing assessment tasks, to make them more dynamic rather than static, giving students ownership of their role in pathology, and, most specifically, coming into more of a research focus, because that's the way the Faculty of Medicine was heading and we tailored some of the assessment tasks such that they would address those learning outcomes and capabilities. So all of a sudden these students came from being very static researchers to being very active and dynamic researchers and taking on that role. So, yes, we started with honours students and thinking, "Why can't these guys communicate? Why can't they do this stuff?" And then we figured we'd assess them and integrate tasks early on, second year, into third year with the idea that they would become enabled, ready, ready to go, for honours. So that was the premise of what we started with.
So the relationship with the Learning Centre and specifically Gwyn was that I recognised that Gwyn had very specific sort of traits in communication, and talent in bringing out those, also, those traits in students. So she was a very keen learning adviser who was interested in developing student attributes like communication. I'd met Gwyn through a colleague and I figured, "Well, I need this person on board, to assist me." Coming in from a research environment, I knew how to do it, but how was I going to get my students to do it? And Gwyn had that experience.
So what we've managed to do was merge our backgrounds and our roles, and we've become quite an interesting unit now. Rather than two people working in separation we work together and in tandem.
Gwyn Jones: What we do is just talk to each other, and find out, and help work through solutions. In a sense the Learning Centre—well, I like to see myself as a little Bunnings person, because there are strategies often you can engage to support and embed these literacies that are contextualised, and if they're contextualised, and they have a need, if students need something, they'll use them. [Patsy: Of course.] So if you have an assessment, and you have a need to do that assessment well, and you have all the tools to help them, it's magic. And so, sometimes I work full-time in the Learning Centre, sometimes you pop in and pop out ...
I think the most successful pattern is embedment, so you embed it throughout your course. [Patsy: Which has worked for us.] So we come in, we sort of [whispers] silently do things, and then we back off because it's already embedded all through the program.
So, anybody can give us a ring, and chat through and maybe we can help, in any way, shape or form.
Authentic Assessment as Performance—Dr Kerry Thomas
Below are transcripts for the respective videos on page Assessing Authentically.
Dr Kerry Thomas: Teaching's all about building desire. So how do you build desire for the things that you think are important? I used to talk to the kids about fishing: I want to catch you; I want to bring you in; you want to do that with the kids. The worst thing that could happen would be if the kids don't take the bait, if I can't catch you—because this is the sort of thing you want to be caught by. But the kids have to be ready to be caught. So what can you do to make things attractive for the catching to take place?
And part of my way of getting to that was to actually look at what happened with expert teachers and then bring that into the discussions in the classroom for what was possible for these students as prospective teachers.
Arts student Melanie Crawford: We were all ... we were all, just—
Arts student Stan Toohey: Totally engaged.
Melanie Crawford: Yeah, and we wanted to be there, and we wanted to do well.
Preparing Students to Become Teachers
Dr Kerry Thomas: The focus in the course is on communication and language. And so, the whole idea is the students begin to think about what it is to teach, beyond just the delivery of content or that initial identity as a teacher.
In other words, they begin to really think about what's that relationship like with students for learning to take place.
Melanie Crawford: It's vitally important for our development professionally to be prepared to be in a classroom, in front of a class, to be confident and, yeah, I guess, just, to know what we need in order to be put in that situation on a day-to-day basis. If we don't get that in university, then it's a very steep learning curve when you get out there, almost too steep, in a way. You need the practice.
Dr Kerry Thomas: I wanted them to take, in the second assignment, a view of an art work, and then compare that view with another view. And they would take on, they would represent, different points of view about the art work. Some took on the role of artists; I have this fabulous video of a student who becomes Salvador Dali and compares it to another position. I've had other students who've taken on roles of critics. I've had students who have taken on roles of curators in an exhibition in another language as well as their own language. So you're really trying to get this whole idea of, "This is the work, this is the work that we want to focus on and they choose that. But how could you actually represent different points of view about this work?"
In the third assignment they had to do a little teaching performance. We would pick up on issues to do with questioning, listening, discussion, building stories, providing that vicarious experience as I spoke of, and really thinking about how the students are part of what is enacted through that authority being enacted.
Stan Toohey: You don't have the option of brushing things under the carpet. You know, these things—if you're going to have problems in the future, you'll see where those problems are going to lie at that point, in that safe environment.
Kerry Thomas: So they had between 10 and 15 as their class, and the other 10 to 15 looked on. So what I was trying to do there was to model the idea of cultivating that judgment of the teacher in the evaluation that others gave, while half the class were the class. And then we flipped this around from student to student, so these performances were done in the classroom.
Stan Toohey: Because you're up there and you're, sort of, your head's full of what you're going to do next, you don't really stop to think, "How's this going?" So you get—there it is, recorded on film—how people are enjoying or not enjoying the process, and, you hear their banter, you know, you just overhear comments. So that was really good, I thought.
Dr Kerry Thomas: Only when you stop thinking you know can we start to do something about building what you know, because your knowing as a high-school student won't be the thing that makes you a good teacher. Even though that was the thing that got you here and made you want to do this, that's not the thing that will make you a good teacher. So how can we turn that around?
Melanie Crawford: There was an assignment where we had to film ourselves, with an audience, in character. And then we had to submit that in some form.
Stan Toohey: And how many people just did not want the class to see it? They were quite willing for Kerry to take it away—
Melanie Crawford: Yeah, to hand it in...
Stan Toohey: I was thinking to myself, "This is nothing! You're going to be in front of these glaring eyes; you're going to be scanned—"
Melanie Crawford: The snippets we saw of people's films were brilliant. They just had great ideas—
Stan Toohey: I know! And these are the people who were embarrassed! Some of them were embarrassed and I just thought, "Why? That was so well done! Don't be embarrassed about it." But—Anyway, I think she sort of knocked that out of them.
Melanie Crawford: Yeah.
Dr Kerry Thomas: There are immediate benefits in thinking about the performative role of the teacher and the kind of subtlety that is transacted between teachers and students in those social relations in the classroom. And then, secondly, that this is something which is absolutely critical for their development and professional practice as teachers.
Melanie Crawford: It's more practical, and we're practical, I mean, being Arts students, we're more...
Stan Toohey: There's more of theory.
Melanie Crawford: So an exercise like that, to me, feels so much more achievable, and exciting.
Stan Toohey: It was fun.
Melanie Crawford: Rather than having to sit down and write another 2000-word essay. You walk away from a semester of a course like that and you know you've learnt something, and you know you've changed, and you'll never forget it.
Authentic Assessment in Engineering: Building a Pump—Associate Professor Sami Kara, School of Mechanical and Manufacturing Engineering
Below are transcripts for the respective videos on page Assessing Authentically.
This course is structured around a real product development project, in a much, much simplified manner. So I design a very simple product, actually a pump, and the assessment is scattered around that project just to mimic what goes on in real-life product development projects. In real-life product development projects, we don't talk about assessment, we talk about reviews, so we replicate that: at each stage of the project we give them a task, and then go through a review process—for instance, a concept sketch, a detailed design, manufacturability, and then the final, you know, the pump itself, which is nothing more than building a prototype and proving that their design idea actually works.
So everything is about a review process and getting a tick, and if they are not successful they are supposed to go back and revise it, so, reflect on what they have done and resubmit it because the next assessment will be built on the previous one.
The secondary part, the higher learning—sometimes they can't even see that they are actually learning it—is the process itself that they need to go through: how to take a one-page product description to an actual physical product, which won't change no matter where they end up as an engineer. Every company goes through the same process, but on a bigger scale, in a much, much more complicated manner.
Real World Processes
This is my personal experience over the years: giving a test or exam is not realistic in real life. You know, if you go outside and work as an engineer, the tasks are given, or the problem is given, so that the engineers go away and solve it. We never give them a test and have them sit there and solve it in an hour or so. The students appreciate, in fact, that they are given time to think about what would be the actual solution, as opposed to putting them in a classroom and giving them half an hour or an hour to address a problem. And that also solves the problem of exam anxiety. It allows them to think in a broader perspective, you know, bring together all the other knowledge that they are developing in other classes.
The whole assessment, the whole course, starts with what we call the functional requirement: a one-page written document giving a vague description of what the pump should look like, and that's real industrial practice. Then you start developing the product, but first you need to come up with a requirement analysis. And you build the actual pump on that basis. In that requirement page, there is also a section that says the company wants to build the pump in higher volumes, such as 25–30,000 a year; the students need to write a feasibility report on how they would go about building those 25–30,000 a year. What would be the process implications? Because not every manufacturing process is suitable for high volumes, and if they are to choose a high-volume process, what will be the implications of that on their design? That's actually a higher-level learning process, through which they come to appreciate that design is not just about sitting in front of the computer but about being able to take into account what goes on downstream in the product development process, basically.
They're in teams. The product is structured in such a way that there are 5 components, so every pump is built by a group of 5, and each student is responsible for one component, from concept through to building it; however, there is a very strong interface. If they don't talk to each other, the pump will never come together, so at the end of the semester, when we test the pump, that's one thing they realise—how critical that interaction is to good teamwork.
Yes, it would have been much easier if it was a group assessment—for instance, if I asked them to submit 50 design projects as opposed to 300—but then it's quite obvious that some of those students would get away without developing some of those skills, and we don't want that, because of the core skills we are trying to teach: the CAD and engineering drawing that every student in our school must develop, at least at a basic level, otherwise they will have serious difficulty continuing. Because of that, I chose that path.
Frequent review, and quick feedback, is the key. Every time we set an assessment, very detailed assessment criteria are uploaded to the Moodle web page. So they know how they will be assessed, and they also know the purpose of that assessment, because it's there contributing to the final project outcome. And we take about a week to assess everything and then get back to them. So, before they even forget what they have done, that provides quick feedback for them to revise their mistakes.
And the second thing we built into the course is that every assessment builds on the previous assessment. So what they need to do is go away and fix the previous, you know, assessment. That strategy in itself allows them to revise their mistakes and learn and keep building on them, and that allows us also to have a continuum in the whole assessment process from, you know, start to finish. And when we do CATEI results, for instance, the first thing they comment on is being able to, you know, revise their mistakes, and they learn from that, and keep repeating the same thing, and by the time they get to the end of the semester, they end up repeating the same thing perhaps 4 or 5 times—and that in itself is a learning process. So, in a way, they are not penalised for their mistakes, once they make them, because they get a second chance to fix the problem.
We integrate the course itself with an external entity, which is TAFE. There is a reason for that: TAFE is extremely qualified to teach or train our students when it comes to hands-on manufacturing processes, so that was quite useful from our perspective. However, being an external entity, I don't have any control over how they schedule their subjects, so they have their own limitations. So every semester, it brings an extra administration load that I need to go through.
The second issue is the class size; I mean, this is something we've talked over again and again: if you're running a project-based course, what will be the implication of larger class sizes, especially in this particular course, because we don't have exams, we have reviews. Even a very simple design review, for instance, is going to take half an hour for each group, and then we are talking about spending 25 hours providing feedback, so the review process, or student reflection, becomes harder and harder and more time-consuming.
One way to get around that is having properly trained tutors. When we started implementing this course about 5–6 years ago, I picked groups of students who had done the subject that year and then trained them, and they've been in the system for the last 3–4 years. So they not only know how the course runs, but they were also at the receiving end in terms of their experience. Because of that, they're extremely useful, and they do reduce my workload; overall, we are now at a stage where, even if I go away, my undergraduate students can run the course without me being there.
What I would actually suggest is finding a project, a real industrial case, and then turning that into a learning exercise, rather than the outcome being the individual product itself.
Assessment by Simulation and Role Play—Anthony Billingsley
Below are transcripts for the respective videos on the page Assessing with Role Play and Simulation.
One of the real strengths of the simulation is that the students, even though it's a game, the things that are happening in the simulation have some relation to reality. I wouldn't approve of events taking place unless I thought they were plausible. So here we have the students dealing with almost real-life situations, and drawing on the theory that they've learnt and the other experiences that they've had. And I think that gives the whole exercise a very authentic touch, which is very difficult to convey otherwise.
The simulation is totally online, so we have a dedicated site [video of site "Middle East and International Law"] created by a colleague at UNSW, and the site contains all the various facilities the students need, so there's a place for them to place their biographical details, there's an email facility, there's a chat room—various things like that. It also has, then, a capacity for me to monitor what's going on, so I can actually jump into any student's correspondence and see what's going on and follow what they're doing.
Last semester we had 130 students taking part. It runs for 10 days, and it is 24 hours a day over those 10 days.
I've allocated 40 per cent of the total mark to the simulation, which is quite a lot, but I think the amount of work that the students have to put into it warrants it.
The students are divided into groups of perhaps 3 or 4, each one assuming the role of a player in the Middle East. This year we had 40 characters—these included the media, American characters (so, the US President, the US Secretary of State) plus lots of characters from the Middle East. It also included organisations like the Red Cross, Al Qaeda and UNRWA (the UN Relief and Works Agency for Palestine Refugees). What I try to do in that simulation is ensure that all the characters have something to start with—so there'll be, for example, a bombing in Jerusalem, that gets the Israelis and the Palestinians worked up, or there'll be something happening in New York, or whatever. And so everybody has something to start the simulation with.
[video of the screen: "Welcome to Monday, week six. You have now entered the dark world of the Middle East simulation. Please ensure you are conversant with the simulation rules. You have now received the scenario and that should provide the beginning point of your studies over the next 11 days. Good luck. Control."]
At 9 o'clock on the Monday morning they enter the simulation, and they have to ignore everything else that happens outside. So the real world ceases to exist.
What I'm asking them to do is to get inside the skin of the character. So they become that person, and then they write, oh, perhaps 500, maybe 700 words about that person, and how that person is relevant to the simulation, how that person might react to different situations in the simulation.
[video of text "Role Profiles: Creating a Role profile", with sentence highlighted: "An accurate profile of your role is often the first indicator of success in a simulation."
Video of "New Diary Entry" screen, with text being entered: "I wonder if I should shaft Al qaeda this morning."]
More fun is my ability to step in like a Greek god and disrupt people's plans. So I can go in and look at all the email traffic and say, "These characters think they've got some great scheme developed; I'm going to muck that up by issuing a press release through the media, or something like that, or warning somebody else that this is about to happen." Then, I look at all the email traffic and I try to determine if they've been active throughout the simulation, and that they've actually been making significant contributions to the simulation. And again I mark them as a group.
And finally, the report that the students do is marked on an individual basis, and so the students are then invited to attach to the report any emails that they think demonstrate the role that they played, and reinforce their argument that they contributed.
It is a practical experience of international relations that normally students would never get. International relations tends to be a relatively theoretical subject, but here they are actually practising and experiencing some of the things that we've been teaching them over the previous three-and-a-half years or so. What I see is students who know nothing about the region, nothing about the subject at all, finishing it with considerable knowledge, and considerable knowledge that's really been deeply ingrained, because they've actually had to work at this very intensely.
And the feedback, of course, is wonderful, because most of the students really enjoy it.
I find it a very satisfying way of conveying a lot of information, in a very short space of time, to the students.
I spend a lot of time online monitoring what the students do, and that is very time-consuming, especially when you have other classes to run.
It's also time-consuming to mark, so you imagine there's, say, 50 profiles; last semester I had nearly 4900 emails to wander through; and then I have 130 reports—now, they're not long reports, but they still have to be gone through and graded etc. And so that's quite demanding, and I'm also giving lectures to these students as well as to other classes.
As far as the students are concerned, well, it's very demanding again. They're spending a lot of time on this thing, and it becomes quite obsessive. They do get themselves caught up in it, to the cost of other courses, and so I guess I'm not very popular with some of my colleagues when this is running. And I try to avoid clashing with other assignments, but it's very difficult to avoid. So the students have to manage that themselves, and they have to keep on top of their other classes, plus the lectures that I'm giving in my own class at the same time. So there's a lot going on in this 10-day period.
Tips—Moderating simulation etiquette
There was a case, in the last simulation, where an Israeli Mossad agent was captured by Palestinian groups and executed. I certainly found that confronting, and I had to approve it. And the students found it confronting as well. And so with those sorts of things, when they stop and think about it, they think, "Well, this is what happens in the Middle East. This is what happens in real life." And whilst I want reality, I also want people to be careful that they don't overstep the mark and offend people. We have groups of 3 people playing one individual character, and they have to get together, preferably in person, although sometimes that's not easy, so they do it online or using their mobile phones and things. But other than that, people don't necessarily know who is who. And I discourage them from finding out, because you actually find people spying on others in lectures and all sorts of places—they really get up to mischief.
Assessing Authentic Tasks (Role Plays)—Chris Walker, School of Social Sciences and International Studies
Below are transcripts for the respective videos on the page Assessing Authentically.
The cases are also used to highlight the theory of policy practice. So, for example, we will talk to the students while they are doing their case about understanding stakeholders, looking at power, looking at some of the issues around how different participants in the process might organise as coalitions and influence the policy outcome.
One of the ways we do that is we use a role play, where the students might engage in a particular activity where they will all have different roles around a particular problem and they have, say, up to an hour to resolve a problem and come up with a solution. The students will be in different groups and one group might be responsible for coming up with that answer so they will have to interact with a whole series of other groups.
Then I might, mmm what’s the word, intensify the pressure in the room by changing some of the circumstances at the time. So, in one of my case studies, students were looking at how to develop better regulations for P plate drivers—all the students have got pretty good experience of being a P plate driver and they have got some ideas about how to improve it.
So one of the groups is asked to be the Minister for Transport and they are supposed to come up with the new regulation and different student groups come and lobby them during the hour of the seminar.
Then, during that seminar, I would give them a fax from the Premier, for example, that says there has just been a terrible traffic accident involving all these P plate drivers, and the Premier wants to tighten the rules. So they need to come up with a solution, and I really intensify the pressure on the ideas they are developing, so that they can see that a lot of policy problems in practice change suddenly, or there is a specific event that accelerates the urgency of trying to come up with a solution.
Example of a Successful Role Play
In a case study about 2 years ago, the minister was so obnoxious in his manner, and wouldn't listen to the groups, and the groups felt they had been treated a bit unfairly. Outside in the hallway, with no advice from me or anyone else, the groups formed a coalition, they all agreed on what they wanted in terms of policy change, and they knocked on the door and said to me, "We want to see the minister."
I said, "Yeah, who wants to see the minister?"
They said, "It's all of us, or none of us."
I said, "Okay."
So I went to the minister, and I said, "You've got a delegation to see."
And he goes, "Oh, you know, I don't know, I'll just see so-and-so."
I said, "You can't. You can only see all of them."
He says, "Oh, all right."
So they all came in and they basically said, "This is what we want. And if you don't agree with what we want, we're all going to cause trouble for you."
And this is the classic, like—the students were acting out a real policy problem. This is what happens in practice! You know, you get coalitions of groups—you know, the mining industry or whatever—going in and saying, "This is what we want, and if you don't do it we're going to cause trouble for you."
And so the next week when we have a reflection, I say to the students, "What happened?"
And they said, "Well, we weren't listened to."
And I said, "What happens in practice? What do you see in the real world?"
And they could suddenly see that the case study and the role play, even though it was in the classroom, had all these elements that are actually reflected in the literature about policy, which helps them understand what's happening outside in the world of practice. So they become more analytical, and they go, "Oh, I think I understand what's going on now, you know?"
Benefits of Authentic Assessment through Student Research Tasks—Dr Patsy Pollie (School of Medical Sciences) and Gwyn Jones, Learning Advisor, Learning Centre
Below are transcripts for the respective videos on the page Assessing Authentically.
Dr Patsy Pollie: They're able to work and talk like research scientists, quite comfortably, very easily, no dramas involved in stressful situations. They're very comfortable in that skin, all of a sudden, and their expectations are very … real.
Gwyn Jones: I think what is really, really interesting is when you get comments that are based on, "No one ever asked me to be a researcher before. And I really had fun. But it's very frustrating 'cause it takes a lotta time." And I think that's fantastic, 'cause that student has grabbed the whole idea that they're in control of this. It's empowered them, and I think this is one aspect of the feedback we do get, is that the students that get it are empowered by it, and they certainly take it on to their honours year, and it certainly has been highlighted in a lot of the honours presentations. The students who have had this training and support in pathology in their third year stand out from the other guys who haven't had the same kind of support. So that kind of embedding, I think, has really paid off.
Patsy: We're now facing the situation where academics and researchers are finding their presentations very hard to mark, and their project manuscripts and theses hard to mark, because everything has been done to a T, all the boxes are sort of ticked—and beyond, in terms of the communication. So we feel like it's made a difference in the way they deal with their tasks and communicate. They're empowered. They're just able, they're ready to do it, and they're operating at the standard of a researcher who's been out there for about 3 to 5 years already.
Gwyn: Yes, the levels—since we started this, the levels in their writing tasks and their presentation tasks have risen, and it was really funny because one of the tutors at our monitoring meeting was saying, "But look, some of these guys are getting 75s and 80s!" And I said, "Yeah, hopefully they will all get 75." And they should. You know, we should be there supporting these guys that make the effort, and certainly in the presentation, their SOMS honours presentations now are at such a standard, they are at a professional conference standard, which certainly wasn't the case 5 years ago. It's about engaging with everything that is good academic practice, and they realise, "Oh, that's good, I can take that to another course." So it's quite transferable, and it certainly has been noticed in other courses these guys are taking. And some of these strategies are now starting to actually be recognised and utilised within the other streams as well. So, yeah, it's about the learning, it's about the learning as opposed to the pathology; it's the learning of the pathology.
Patsy: They're very happy when, especially in the honours year, they realise that this is all for a serious reason, you know: to get their honours grades in the end, but also as a lifelong approach, coming into a PhD in research, coming into a research assistant position in research; they're all skills that are used constantly. And I wish someone had taught me. We learnt by fire, and that was okay for that time, but the thing that drove this interaction as a starting point was like, "Why didn't I get this? This is so good." And the students are so thankful by the end of it, because they realise that that early intervention at second year has paid off at the end.
Gwyn: But do you know what I think is the biggest thing for these guys, is they come up with the realisation that "Oh my God, this is developmental. I can continue strengthening these skills. These are skills I can actually play with. These academic literacy things, they're soft stuff, but I can play with them, and I can continue to develop." And I see that in the honours because I get emails, "Can I come in? I need a little bit of brushing up in this." So they have come away with a recognition that they are, that they do have the power to continue strengthening their own learning. Which is just the happiest news. And it really doesn't have much to do with the marks. At all. It is seeing these guys take this mantle. And then also seeing these students wanting to be tutors, on the program. So there's kind of, "Can I? Can I be a tutor, please? Can I?"
Patsy: They want heavy involvement, 'cause they see where it's heading.
Gwyn: Which is great. And now we have very well-informed, and very well-skilled tutors. So these kids are getting really valuable feedback, because now it's legitimate, enculturated, fabulous pathologists talking to baby pathologists. So it's kind of exciting, I think.
Patsy: It is. I think we've finally gotten to the point where the soft stuff's becoming hard stuff.
Gwyn: [laughs] Yes!
Patsy: Real stuff.
Gwyn: There's a quote from a medical student I worked with. He says, "I had no idea all that soft stuff was soooo important!" 'Cause the scientists see hard stuff as the thing.
Patsy: The data.
Patsy: Rather than communicating it. Which is what it's all about.
Authentic Assessment through Collaboration
Dr Patsy Pollie: The tutors are very important. So what we have asked the students to do is engage in this conversation, as Gwyn has already pointed out, but in the group format. They're already in their research teams as part of the course itself, learning content, but as part of that team they give feedback to each other, so they have this whole network of thinking all of a sudden, and then they bounce that off the tutors rather than coming to us each time. Okay, they do initially, and then they recognise that the tutors are very integral to the way they are developing their thinking, and they value that the tutors are actual researchers at the moment, so they go to them.
Gwyn Jones: There is also the collaboration that starts right at their first encounter, and that whole concept of establishing that when you have multiple authors in science it's a group task; so from the outset that "groupness" is, in a sense, in our first encounter with the written task. They have a research group that they work with within their tutor group, so it's not a formal grouping as such for the assignment, 'cause that assignment is an individual task. In the second year, when they're doing their oral presentation, that is a group presentation, where they do their research as a group, so the written task is a group effort. They then negotiate their script, and then they collectively present it. So it's one voice, but a multiple group. And that is very much similar to what they have to do in their labs, and when they get into their honours years, and further on in research. So I think that collaboration is another incredibly important aspect, of the research teams, of the collaborative nature of the assignments, because they don't work unless they work together.
Designing and Scaffolding an Authentic Research Task
Gwyn Jones: Well, to design the task was really about deconstructing what the students needed for written tasks. What do they do in pathology? So that's when I would yell at Pats and we would say, "What does a pathologist really need? What writing skills do they need?" And that's when we came up with the—
Dr Patsy Pollie: The role of the researcher—
Gwyn: Well, actually the media assignment—
Patsy: —and the media assignment, because they had to take on the role of the researcher in order to accomplish that assignment. Now, pathology has many different aspects, but what I've come in to do is teach content, of course, but also bring in a research flavour. So the media assignment was interesting in that it was designed so that the students take ownership: we asked the students to take a topic from the media—so it could have been a disease outbreak, a cure for cancer, something like this. So it had a pathology theme to it, but the students chose their own topic and then designed a question, a research question, about that topic, and then the rest of the assessment flowed. So we had very guided research, well, criteria for the research assignment that the students adhered to, but the topic itself was their own. So there was a structure or framework that we had designed together, but the topic was their own, and that's the whole individual aspect of the writing task.
Gwyn: And the key to this is that there were transparent criteria, not only for the students but for the tutors as well. So the skills focus then supported the tutors and supported the students, so they were in a position to help each other. And those transparent criteria, I think, have been one of the items in the assessment that allow the tutors to have a conversation with the students, so it's not a mystery thing, it's actually out there, very, very exposed, and they know how to structure it within a creative framework, but they can choose their own topics—
Patsy: —choose their own topics. So, in addition to that, we support that development. We have 3 skills workshops and students attend those; they get the feedback; they have the opportunity to ask us questions on how to go about doing the task. There are follow-up meetings with tutors to get them enabled for marking, for example, to get them on the same page in terms of what the skills focus can offer the students, in addition to how it can support them when they are marking. That's more or less the way we've framed it: a support structure for the students and for the tutors, so it becomes this big community of support, all on the same page, all using the same criteria, by the way, not only us, but the tutors and the students, and ... it works.
Gwyn: Well, the scaffolding is actually designed within, embedded within, the program, and a lot of the support lectures are scaffolded progressively. We start early; we have a dream that the students too will start early; that's our dream. And some do; some are fantastic. Of course, some of the students wait until the very end. And that's exactly how a lot of people function, so that's fine. The scaffolding is then engaged in their second year, for example, when the role of the researcher is established with them. In the third year they're seeing that same role, but they're extending it from a script and a process into a group task where they present, so they're actually writing about their processes, but presenting the processes in an oral way. And then when they come to their honours years, everything just slots in, 'cause they're doing their thesis, they're doing their oral presentations, and I continue to work with them—independently, actually. They come in groups, but they self-select; the ones that want to come for continued support in their honours year are the ones that come. So we scaffold it, really, through those 3 years, and they're kind of on their own by the time they get to honours.
Below are the transcripts of the respective videos on the page Assessing Large Classes.
I'm quite "famous" around the university, because when we talk about "large" classes, I'm famous because I take the first-year undergraduate core Accounting unit, and that's up to 2000 students. Not all in the one room at the one time. So it's broken into classes of maybe up to 400 to 500, and the small class that we have in the evening is about 150 students. And I think it's really interesting to frame what we mean by large classes, because I can remember, in the first university I worked at when I first came to Australia, I thought it was huge: it was 240 students, and I was going, "Oh my goodness, how am I going to cope with that?" And then I went to Sydney Uni, and then I came here, and, yes, as I say, the numbers are just quite huge.
And it did become a challenge when the numbers went up to 2000, because we hadn't planned the rooms and the course requirements to cope with that number. And I actually had to put a lock on the numbers that were enrolling, because physically, if you have a lecture room that can seat 500, you can't put 500 bodies in that room. Even though on paper it looked like we could take a much larger capacity, the actual logistics are not quite the same. So if you have, like, 1800 students, you really need room capacity to be taking maybe 2200, because as we all know, when students come into a room, they're not going to sit at the front, and they're not going to go into the middle of the seats. So in large classes, even trying to facilitate where the students sit, or to accommodate them to get in, is quite a challenge.
Tips, Using Technology
My first thing to say to anybody is, "Technology is your friend." You may be fearful of it, and you won't even know what it's capable of, but don't treat it like, "Oh, God, I don't belong to that era and I don't get into that." Technology's your friend.
So you've got Blackboard—put things on Blackboard. Use Blackboard as your discussion; use Blackboard as the only communication that the student can have—on issues that concern the whole student base. Obviously they can still use email to you personally if it's a private, confidential matter, but questions? Use Blackboard, use that. And if you're lucky enough to have a senior tutor or somebody else, you could have somebody manage your discussion board. And that cuts down on a lot of the interruptions, and also, you're not like—could you imagine me having to tell 2000 students the same thing, every time something comes up?
So it's a great tool. And if you're not comfortable with it, book an appointment with your school IT specialist, or somebody who is comfortable and familiar and understands what our technologies are capable of. And then you can learn ways to manage your time and activities that save duplication.
The other thing I'd strongly suggest is that, find out—'cause every school knows who are the good lecturers, who are the lecturers that excel or that get the awards or whatever—identify who they are, talk to them, have a coffee with them and ask them, "How do you manage it?", because it's really important for you to learn, efficiently—like, learn from other people, so you don't have to make the same mistakes that we all made.
You're going to need to learn really fast how to manage your time effectively. So use whatever tools you have: use your Outlook for your appointments so you don't miss them, or, I don't know whether you use a hard-copy diary, but do your to-do lists. It's very, very critical that you're able to manage, because you'll get overwhelmed if you don't have some way of controlling what you need to do and when. Like, use your Outlook or something to put in your deadlines, and put your deadline a couple of days before, so that you've got signals and warnings about "Oh God, the exam deadline's due and I have to get that paper ready."
So, it's very much, as I say, technology is your friend.
Using Lecture Recording Technology
Even conscientious students cannot make every single lecture. That's the reality of it. So why disadvantage them? So one is—because with 2000 students we do do repeat lectures—so we actually, on the big semester with the large numbers we repeat the lecture 5 times, and one of them's in the evening. I also have them podcast and, with the podcast, screen capture. So you don't get the lecturer, but you get the sound and you get the screens, and as the tutor's doing any calculations or drawing attention to anything, that's all on screen. I don't say that it takes away anything from us. I just think, if it helps a student learn?
Because the other thing is, I did some statistical analysis at the beginning when I started to do it, and what I found—and this is with the help of our IT people—was that only about 25% of students actually used it, so not the whole class, and the peak times of use were just after the lecture, so the hour after the lecture. That was even for the evening one; it was an hour later. So it's not so much the people that aren't going that are using it; it's the people that do go. What they're likely to be doing is downloading it and going back over a point, maybe because they've been writing, or maybe I spoke too fast, so they're going back over the material; there's obviously only a small number that will use it if they don't go.
So they're using it to supplement—it's a tool in addition to the normal learning. We also noticed that peak times of use were, obviously, the week before the mid-semester and the week before the final, so they're using it for revision. Well, aren't those all things that we want students to do? So I don't see any argument for why you would not podcast, particularly with the screen capture. It's just an additional revision tool for students. So I'm afraid that's one of my pet subjects. Podcasting? I don't know why you wouldn't do it.
Particularly when you've got large classes and large numbers, and you're really testing more the qualitative-type work—that's where I think the rubric idea lends itself, and speeds up the marking. I've worked on a few different sorts of rubrics, including at another uni when we've done oral presentations and case studies. So we set up the rubric and you can go, basically, across the five grades—and it's the instruction to the tutor that needs to be quite clear, so you're giving the tutor the example of, "Well, this would be an A, and this would be a C, and this would be a D". Though we also make it transparent: we provided it to the students as well, so they know up front, "This is good practice; this is the language, this is what I need to do if I want to get an A, and this is what I need to do if I want to get a C."
And it's also much easier when you hand the rubric back with the assignment to a student who gets an E or a D, you know, a fail, because they can go, "Oh, that's what I needed to do, and that's what I did do, and I thought I could get away with not referencing properly, and I thought I could get away with just repeating the reference and not analysing." So I think it softens the blow to the student when they get the rubric back and they haven't done well. That's a nice clean one. The borderline Cs, Bs or As, those ones—on the form when we give them the rubric back there's a space where you can write something, and the tutor needs to be very aware that if they're borderline, they need to say, "Well, what did you not do there?" So it helps coach the tutor on the language to use to give the feedback as well. So, you know, I think they're useful, really, really helpful with large classes and over a number of markers.
We do a case study in the exam, and we all know that students in the exam, handwriting goes to pot. So, it's like, having the rubric, and we kind of know what we're looking for, rather than going, "What are they writing about?" So it kind of speeds up, and again gives that consistency in the marking.
Unfortunately—and I haven't solved this one yet, but I'm working on it—students don't necessarily get feedback on their final exam. That's one of the things for the longer term: I'm working on a project about giving feedback to the masses, finding a way to give them feedback about their final exam, because I know in some faculties students get their final exam back, but in our Faculty they don't. And I guess, just on feedback, because we kind of went there: we provide feedback through Blackboard, and I've done some focus groups asking students, "What type of feedback are you looking for?", and they're saying, "We like the statistical stuff that says, 'This is how the class did, and this is where I fitted, and this is, you know, how I did.'" And they said they don't necessarily want the individual detail, but they like to see where they fare in the group analysis. And I would do a collective, you know, "This question was done well, these were the general answers; more importantly, this question wasn't, and this is what was missed." So you're giving them group, collective feedback, and students seem to think that that's sufficient.
So the rubrics I think are great, to—particularly for a qualitative question.
It is time-consuming, but again it comes back to some of the assessments that, you know, you design—like, we can use some multiple-choice quizzes, because we're assessing some of the lower-level knowledge. So I wouldn't be saying, "Do your whole course with that", but certainly some things lend themselves to multiple-choice quizzes. The student gets instant feedback about how they did. We also do our multiple-choice quizzes over a number of weeks, so they're getting periodic feedback, you know, as they go along through the course.
And then, using Blackboard, I would also put an Announcement about, "This is what was done well, these were the averages", you know, just a bit of the generic about the quiz and the generic statistics—they do like getting that kind of statistical feedback. So that's one of the ones.
When we've done a mid-semester exam, we make sure the marking's done within 2 weeks. They then have part of the tutorial, so every student will get their paper back, in the tutorial, and the tutor goes through and shows what the answer should have been, and the student's got their paper there and they can see what they did and what they didn't do. They also see the marking that we've given it, 'cause we've shown them, and we show them the marking guide. So if they have any disputes, anything they're not clear about, they can ask, and we'll take the time and explain.
And so they get that, and then on Blackboard they'll also have the generic sort of feedback. We don't post the solution on Blackboard; that's only covered in the tutorials, so it's a bit of an incentive for the student to go. And the tutor will go through that on an overhead or a PowerPoint, and point out where the common errors and mistakes were.
Generally—and I know it's a general criticism—students tend to say, you know, on the course evaluation form that they don't get feedback. We don't score badly on that, because we make a point of ensuring that each tutor gives feedback on the mid-semester exam, and the multiple-choice quizzes are given feedback on Blackboard.
Those are some of my ideas. As I say, I think the main gap is final exams. Students, unless they make an appointment to come and see their final exam paper, don't get feedback, and that's one of my objectives, to try and find a way around that. I think the way to do it is to post it on Blackboard: the day the exam results come out, have a notice on Blackboard, and they'd still have access to their old Blackboard course before they start going into the new one. And I think that's a good strategy to give them feedback.
Feedback from Tutors
At the end of every session, I'll get the tutors together and get some feedback from them about what went well, what didn't go well, how could we improve—and even at the end of each task, 'cause I don't necessarily want to leave things to the end of the semester. Like at the end of our, say, mid-semester, I'll ask the tutors, "What worked there? What didn't work? How could we improve? What are we going to do?" And I capture that in a kind of spreadsheet sort of thing so that I've got it. I also ask the tutors, after each tutorial, to, you know, just send us a little email to say—sometimes questions are great, they get the class engaged, and sometimes the students just don't get it—and that gives us the feedback so that, one, the next time we teach that topic, we can pick that up. But also, maybe the question wasn't a good question, and it lets us go and identify, you know, a replacement type of question to get at that learning outcome.
So I get feedback at the end of the semester, and I get feedback from them at the end of a significant task. Before we go and do any of the assessments, and also at the beginning of semester, I'll meet with them and give out the materials, and discuss with them and ask their comments—so I'm getting feedback. So it's very important that the tutor knows that they can give us feedback, that, like, we don't get it right all the time, and there's always the capacity to improve. And sometimes the tutors, particularly the younger ones, who're not that far out of, you know, being a student themselves, or are probably our Honours students anyway, they get a perspective that we don't always have, and so I encourage them and they know that it's okay to provide feedback. I may not always act on it; I may decide, "No, I've already thought about that and we don't do it like that because..." and I'll explain that. But I also don't want it to come across like I'm being defensive, like I'm defending why I'm doing it this way. So I think it's important that they know that it's okay to give you feedback, 'cause you don't always get it right.
Managing Effectively, Validly and Reliably
One is upfront with the planning, and ensuring that your assessment helps you assess the learning goals. Then, depending on the assessment that you've chosen, I would give it out to a couple of permanent staff and ask them to look at it. I'd also give it to—we hire our Honours students as tutors, and so I also would give it to them and ask them, you know, "Is there any ambiguity there? Does this make sense? Is there any language—" Because also in the lectures quite often we'll use some language, and in the tutorials some other informal language comes up, so we're making sure that the language is compatible. Any of the questions that go into the exam or the assessment, I will also put back to the lecturer who's maybe covered that, and say, "How have you covered that point? Are you sure that you've covered that point?" Nothing goes in that I'm not confident has been covered in the lecture or the tutorials. So, having put it out to a couple of trusted people—obviously our permanent staff members, and I use two of my key tutors—once I'm comfortable with that, then, before we have the assessment, I'll have a meeting with the tutors. Give it out again: "Anything there we're not sure about?" I'll go through a detailed rundown of how the exam should run, and again, if anything has actually slipped through and gone into the exam paper, then I'd say to the tutors, "This is the advice; you make sure that every class, every student, gets that information."
And once the exam goes and they're all [crosses fingers] following the same instructions, everybody's got the same information, the real key thing is when those exams come back and they need to be marked, and I guess that's one of the questions we're going to cover anyway. We break the questions up—there'll be teams of tutors that mark certain questions, and I would tend to identify one of my more experienced tutors or the permanent staff member, and I'll go, "You're team leader." So for each question there'll be a team leader, and there is a small group of tutors working on the question. When we're going to commence the marking, the group sits together and marks for a couple of hours, until there's a level of comfort, and so if there's any ambiguity or any questions or variations in the way something's been approached, then they get to discuss that and decide, so that they've got a plan of action: if the student's done it this way, how will we interpret that? I also ensure that the team have all exchanged mobile numbers, so they're in constant contact if something comes up after—like, they may mark about 20 scripts or something together, and if something comes up afterwards, there has to be an agreement within the group of how to handle it.
The other thing we do is, once the tutors have all marked, and before we release anything to the students, I have a small team that will do data analysis: each script is recorded against the tutor who marked it, and we do a variance analysis across the tutors and across the questions. You know, if there's a range of just one, one and a half marks, we go, "I don't think there's a big problem there", but if it's more than that, we will re-mark a tutor's work if it's considerably below, you know, everybody else's.
The other thing with the data analysis, and identifying the tutors—oh yes, sorry—what I also wanted to mention was, we will randomly, out of all of the big scripts, do about a 20% re-marking, just for validity, so just checking that there's consistency within the range and there's no mistakes. And that's something that's come from the professional bodies, and I've worked with them. They do this very stringent marking, and I've taken that into my school, and I think in Accounting it kind of lends itself to making sure you have that very quantitative statistical analysis. And not every discipline will do that, but it's still a practice that's worth considering. So I guess that gives me a level of comfort, and I'm confident that there's consistency across the papers, there's consistency across the marking, and that all the students are getting an equitable share of good marking, and we have this kind of belts and braces in place that safeguards consistency.
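The two consistency checks described above, comparing tutor averages on a question and randomly re-marking a slice of scripts, can be sketched roughly as follows. All the numbers (the marks, the 1.5-mark tolerance, the 100-script cohort) are hypothetical; only the 20% sampling fraction comes from the transcript.

```python
import random
from statistics import mean

# Hypothetical marks per tutor for the same exam question, out of 20.
marks_by_tutor = {
    "tutor_a": [14, 14, 13, 15, 14],
    "tutor_b": [13, 14, 15, 13, 14],
    "tutor_c": [10, 11, 9, 10, 11],   # a noticeably harsher marker
}

def flag_outlier_tutors(marks, tolerance=1.5):
    """Flag tutors whose average mark sits outside the tolerance band."""
    averages = {tutor: mean(scores) for tutor, scores in marks.items()}
    overall = mean(averages.values())
    return [t for t, avg in averages.items() if abs(avg - overall) > tolerance]

def remark_sample(script_ids, fraction=0.2, seed=1):
    """Randomly pick a fraction of scripts for consistency re-marking."""
    rng = random.Random(seed)                      # seeded so the draw is auditable
    k = max(1, round(len(script_ids) * fraction))
    return sorted(rng.sample(script_ids, k))

print(flag_outlier_tutors(marks_by_tutor))   # only tutor_c is flagged for re-marking
print(remark_sample(list(range(1, 101))))    # 20 script IDs out of 100
```

Flagging against the group average, rather than a fixed cut-off, matches the transcript's idea that a spread of a mark or so is fine but a consistently low marker gets re-marked.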
Strategies, Planning, Logistics and Administration
Particularly with large classes—everybody needs to plan, but with large classes you need to be meticulous in your planning, and to break the pieces of the plan up. Like, you know, first of all you're looking at what the goals are for your course, your learning objectives, and then thinking about, well, what's the best assessment to meet that learning objective. And I also come from a philosophy that we need to give every student an opportunity to shine in their assessment. So I deliberately, and I think it's good practice, have various types of assessment, you know, not just, oh, it's all essay-writing or it's all number-crunching, because your learning objectives are much broader than that.
So a very detailed plan of how you're going to meet each objective and what's the best assessment to meet it, and then it's the logistics. In large class teaching, it's all about being able to plan the detail of the logistics. For example, we recently went to using multiple-choice quizzes online, 'cause that could meet a couple of the learning objectives, and then we had to think, "Well, if you've got 2000 students, you can't have 2000 students going onto the Internet at the same time to do a quiz." So then it had to be broken up into, you know, "How long will we leave it open for? If we leave it open for a week, what if they have a technical difficulty? What's our backup plan? How do we help them resolve that without us having to be available 24/7?" So we then have to write very detailed instructions to the students: you know, "If you're doing your quiz and your system crashes or falls over, or you have a technical difficulty, do a screen capture and send that to us."
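The staggering problem described above is simple to reason about with ceiling division. A rough sketch, where the cohort size of 250 and two sessions per day are invented numbers, not the course's actual settings:

```python
def quiz_schedule(n_students, cohort_size=250, sessions_per_day=2):
    """How many cohorts, and how many days, are needed to run everyone
    through an online quiz without all of them hitting the server at once."""
    cohorts = -(-n_students // cohort_size)      # ceiling division
    days = -(-cohorts // sessions_per_day)
    return cohorts, days

print(quiz_schedule(2000))   # (8, 4): eight cohorts spread over four days
```

With these assumed settings, the 2000-student example splits into eight cohorts over four days, which is the shape of calculation behind deciding how long to leave the quiz window open.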
The other thing that, you know, I had to do—again it's about the logistics—is set up a mailbox for those student queries, so it doesn't clog up my own, you know, emails that we have to work and deal with. And then I had an assistant who had a job every day to go in there and check those and send an email back to the student. So it's very—the administrative part of large class teaching is very, very significant, and a person who has that role needs to either be a good administrator or learn those skills very quickly, and the whole kind of time management and "what's the most efficient way to do this?", is very, it comes part and parcel with planning for anything of managing the large class teaching.
So, as I say, identify various sorts of assessments to meet your learning objectives, and then work through in detail how that's going to get administered. If you're going to do an in-class test, you can't have all the students sitting together in one lecture room, so you're going to have to plan to have it on a separate day. You have to work out, from timetabling, is there any gap in the timetable that those students can fit into, that you could run your class in, and we often had to do it on a Saturday—and then you have students who can't do it on a Saturday for various reasons, and then you have to have your contingency. So not only do you have your big plan in detail, you also have an alternative contingency plan to mop up those people that fall through the cracks.
So the lecturer in charge of a large class also has to have very good, very strong administrative skills, and use every possible tool and resource available to help them manage that.
Benefits of providing feedback: Tim Hanna, Arts and Social Sciences
Below is the transcript for the video on page Giving Assessment Feedback.
With the issue of feedback, again, there are some issues that I think are specific to students in the first semester of their degree, because these are their first signals of how they're going at university and what university's expecting of them. And so I saw my role—particularly with the first assessment task of the first year undergraduate English course—as being to make it very clear what they were doing right, what they could be doing better, and what was expected of them in general. I think, consistently, one of the things I found students respond to, above anything else, above any of the specifics of your feedback, is if they can see that you care about how they're going. If they feel that you value what they contribute, they respond enormously better than if they're just being given a diagnosis of the rights and wrongs of what they're doing.
So there's the quite typical techniques of: Whatever comment you write at the end, begin with something good, talk about what could be improved and then point them to resources that could help them improve in those areas. That I would consider a bare minimum. I think, if you put marks across their actual text, which is what they're submitting in an English course, without it being overwhelmingly full of crosses, they can see that you've paid really careful attention to it, and again, based on their feedback it seemed to be the thing that really made a difference.
It's not necessarily that you want to compete with students in other subjects at the expense of those other tutors, but if a student senses that you care particularly about them and their development, in my experience they reward you with greater levels of preparation for class, and a really significant effort in their next assessment task.
Giving Assessment Feedback: Danny Carroll, Educational Designer, ASB
Below is the transcript for the video on page Giving Assessment Feedback.
That gap between the feedback we present online and what students actually do with it—it is a difficult problem to ascertain what the true effect of our feedback is. Some of the key issues are things like thorough pre-planning. We are always talking to academics about trying to plan as early as possible, and to engage with us as early as possible about the implementation of a technology into a course assessment pattern: to make sure they understand it, that it gets into the course outlines, that they explain to students why we are doing it a particular way, and so on. So early planning is quite vital.
The nature of the assessment itself is quite important, in that we are always recommending that assessment types be as far as possible related to the learning activities of the course, and that the type of assessment task is authentic and situated. These are all very desirable things from the learning and teaching aspect, but they have a natural complement on the technology implementation side: sometimes particular technologies have a particularly good connection to a task type and an assessment type.
Academics have designed integrative tasks. It’s basically a part one / part two task, where the feedback on the early part directly relates to the completion of the second part, and I think that’s one good thing: to build in a cyclic aspect to the feedback.
The next thing would be trying to widen the adoption of good practices so that they become a new standard, and this is something that perhaps conversations can be had about at School and Faculty levels. It would be highly desirable if we had a sort of basic level of accepted, standard common practices—assessment submission through Turnitin, regular feedback to students early in the course—if those sorts of things were adopted as standards.
Selecting Assessment Tasks: Dr Adele Flood
Below is the transcript for the video on page Selecting Assessment Tasks.
Course development and assessment should be synonymous with each other. It should not be that an academic sets out to design a course and then the assessment is the add-on at the end, where we suddenly think we have to find a mark so we can prove we have taught the course and the students have learnt everything we know.
Assessment as learning is about setting a series of tasks across a course where we scaffold the knowledge the students have. We then lead them to an understanding of what they have learnt, what they need to learn and how to go about finding out what they need to know.
Most people tend to begin their course design by thinking about the content and then thinking about what they need to teach, and then at the end they think ‘Ooh, I’ve got to do some examining here to see if they have learnt what they need to know’. I believe — and I think there is a lot of research now that supports this — that the first thing you should really be thinking about, after you have thought about the learning outcomes, is: How am I going to know that those students have achieved those outcomes? That is the assessment part of learning. That is when you start to think: How am I going to test this? How am I going to find out if the students have actually understood the concepts, the practicalities of their work? How am I going to be valid and reliable in ascertaining that they have learnt what they need to know?
If you are really thinking about assessment as learning, and thinking about it in terms of being student-centred, then the assessment needs to be authentic. That is, the students need to see why this assessment is relevant. They need to understand what you are asking them and why you are asking it. If they can’t see that, they do not, sometimes or mostly, feel that there is any really valid reason for doing it, except that you are going to give them a mark.
The assessment tasks need to be diverse, they need to be flexible, they need to be fit for purpose. For example, if you want to test the creativity or thinking of students, the assessment tasks need to reflect a desire for creativity in learning to come to the forefront. So if you are espousing that you want the students to be creative but you are giving them a set of multiple-choice questions, where there is absolutely no opportunity for any valid input from the student, then your actions are not supporting your words.
All students in a class need to feel that they can answer those questions to the best of their ability. That means that you need to take into account, in your assessment, inclusivity of the student population. So there is no point in setting assessment tasks that one group of the students feel comfortable with if you are alienating the rest of them and causing grief for students who just don’t understand what the concepts are that you are examining.
Assessing Authentic Tasks: Overview of Case Studies—Chris Walker, School of Social Sciences and International Studies
Below are transcripts for the respective videos on page Assessment by Case Studies and Scenarios.
The course has a very strong focus on trying to ensure that the students are able to get an understanding of what practice is like and how the theory that they are covering in their course at university is useful for understanding and interpreting practice. So, how they can use theory to help them resolve problems or analyse problems and then come up with possible solutions.
So over the period of a semester, the students will stay in the same group but they have three different teachers and they will do three different case studies. For example, the students might do a case study on obesity for four weeks, then they might follow that up with a case study on bikie gangs and then they might do a case study on why men purchase sex.
The purpose is in the four weeks the students should immerse themselves in that particular topic area, research the issue, get to understand it as a policy problem and then, within that four week period, at the end, propose some solutions to manage it.
We are limiting them to four weeks to try and give them a sense of what happens in practice, that in the world of policy practice problems come up, you might be in an organisation or department and you have to propose a solution, you have got a short amount of time, you are not familiar with the topic, you might have just been working on another topic, you have to drop it and start focusing on the current problem before you. So the structure of the course is to try to give them that sense of reality.
Postgraduate students bring their own history of experience and practice in organisations. So you can do a case in a one hour session. You know, here is the material, go home and read it, they come back and you kind of do a question and answer session and you analyse the case and pick it to pieces.
With undergraduates I just find that is much, much harder. They don’t have the organisational experience, they don’t have the sort of assumed knowledge about how organisations and individuals interact in an applied setting. So stretching this, their immersion in this topic over a period of weeks, helps. I think it gives them the opportunity to think a bit more, to reflect on what we are doing at uni and what they are hearing in the press, or what’s on the news, or talk amongst their friends. I just think the time helps them make up for the deficit that they have in practice.
Assessing Case Studies: Chris Walker
The assessment for this course is also quite different from other standard courses. We are asking the students to complete two written pieces of work. They do a quiz in the middle of the semester, and they also have a high participation mark: 10% of their mark is determined in each workshop, so at the end of the course 30% relates to participation. We do that because the workshops are quite intense; they have to get across the topic quite quickly. If they are doing the role play, it requires a lot of participation and engagement, and they will do that three times over the semester.
At the end of the semester we ask them to write a reflective journal and the journal has to demonstrate how they link the practical experience in the workshop to the theory in the course that has been covered in the lectures.
So we are looking for them to draw on the literature and reflect on their case study and bring the two together, and the quiz, by just focusing on the lecture material, means the students have done a lot of preparation. So by the time they get to about week 12, where they are writing their reflective journal, they have got a good body of knowledge on the lecture material, the theory, they have done three case studies and they have seen the practice.
Introduction to Oral Presentations through Pecha Kucha—Associate Professor Ed Scheer, School of the Arts & Media
Below are transcripts for the respective videos on page Oral Presentations.
It's a multimedia performance course, so they do a group project where they're looking at how certain artists use technologies, and then they do a presentation on audio-visual technologies and their use in the performing arts, essentially. So we started doing PowerPoint slide presentations and they became rather unwieldy, rather open-ended.
And then the idea of doing the Pecha Kucha became much more useful because it's a much more, sort of, contained format. It was created for designers in Tokyo about 10 years ago, and it's designed for multiple presenters one after the other to go bang-bang-bang, go through the work they're doing and allow for some discussion afterwards. But you get through a lot of presentations in a short period of time, so it is quite useful for large-enrolment subjects where you have got a large number of students that have to present, and everyone's got the same rules: 20 slides, 20 seconds each, no presentation exceeds 6 minutes 40 seconds. So, it's perfect for undergraduate teaching, really.
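The format's timing constraint is just arithmetic: 20 slides at 20 seconds each is 400 seconds, or 6 minutes 40 seconds. A small sketch shows how quickly a session budget adds up; the one-minute changeover gap between speakers is an assumption, not part of the format.

```python
SLIDES = 20
SECONDS_PER_SLIDE = 20

def presentation_seconds(slides=SLIDES, seconds=SECONDS_PER_SLIDE):
    """Running time of one Pecha Kucha presentation, in seconds."""
    return slides * seconds

def session_minutes(presenters, changeover_seconds=60):
    """Rough class time needed, allowing a changeover gap between speakers."""
    return presenters * (presentation_seconds() + changeover_seconds) / 60

print(presentation_seconds())   # 400 seconds, i.e. 6 minutes 40 seconds
print(session_minutes(10))      # about 77 minutes for ten presenters
```

So under these assumptions, ten presenters fit comfortably inside a two-hour class, which is why the format suits large-enrolment subjects.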
And it's also—because there are a few variables—it's a good way of testing people's presentation skills. So not only are you capturing some of the research that people, the students, have done, you're also working with transferable skills, presentation skills and things like that. So it ticks a few boxes at the same time.
Challenges of Oral Presentations
The challenges are often very prosaic things like, "How do I actually work with unfamiliar software?" So, PowerPoint's pretty straightforward. There are other presentation tools that you could use, but ... So the juggle then is how much class time you want to actually spend going through how you use PowerPoint in this way, so I try to do it fairly minimally: just have a quick, you know, 20 minutes in the class saying, "Okay, this is how you do it, you go into these menu items," and actually show them how to do it.
Challenges for students are also things like, if they are doing a bit of ethnographic-style work, going out and videoing examples of the practice they're interested in exploring, they want to embed video somewhere. That's obviously—Pecha Kucha actually isn't great for that, 'cause you've only got 20 seconds per slide, so I discourage students from using video in the presentations: just speak to the still images. Occasionally a little bit of audio can run through the presentation as well. So, yeah, I think those are primarily the challenges that students face.
Benefits of Pecha Kucha Oral Presentations
It's efficient, you know? It's precise. Everyone knows what the task is. Okay? 20 slides, 20 seconds each; you're using PowerPoint or some presentation software; you can go in and play with the parameters a little bit—you are encouraged to be creative with that; you get experience talking about your ideas in front of your peers, so I think that's an important transferable skill, as we've said.
And without being ... I think we have all had those presentations in tutorials, for instance, where a student's done a lot of work and they have written a long text and they read every word. And in some cases that could be an adequate way of engaging student interest in a topic and generating discussion, but all too often it's actually something that shuts discussion down, because it's rather dense, rather overdone in some cases.
So this is actually saying presentation is something different. It's not necessarily just reading a paper. It's using a presentation tool, a software in a creative format that's becoming sort of recognised in creative industries around the world as a kind of vehicle for presenting new ideas and new work, in a time-efficient format. So, it's incredibly useful, I think, as a proto-professional task, and also as a way of engaging students in each other's own research work, 'cause remember this is material that hasn't been covered elsewhere in the course, so this is students doing research work, or facilitated research.
So I tend to use it at third year, because that's the level where you really want the students to start doing their own, kind of, student-centred learning in earnest. And I find it useful for that, although, as I said, my colleagues in Media are using it for campaign strategy related tasks, and that's happening at levels other than third year.
Using Pecha Kucha Oral Presentations to Assess Creativity
The idea about saying, "Take PowerPoint and use it creatively"—so, using transitions, using a lot of imagery, embed some sound, you know, play around with the format but do it in a contained way—is a fantastic way of actually assessing creative work in a classroom situation when you're short on time and you might have a large number of students to get through. So for me it's an ideal tool in that sense: presentation tool, creative, multi-media performance tool, which in my course is perfect because we are talking about multimedia performance, and a way of assessing student learning based on their own, kind of, research task.
Yes, they're given a rubric, which just takes the assessment criteria for that task and indicates how they performed across a range of things like use of imagery, use of animations and transitions, how they actually engage with the software, how they're engaging with the topic, the actual mode of presenting, speaking alongside the images—obviously we are trying to discourage them from presenting a text and then reading it out word for word. So you're trying to set up very clearly what good presentation skills are, as opposed to bad presentation skills. So, yes, the rubrics accord with the assessment criteria, and the assessment criteria accord with what is a good presentation.
In my course, also, I link it back to the appropriateness of the choice of material; what kind of artist are they working with; are they using the software in a way that's kind of emphasising aspects of the artist's work (how sympathetically are they designing their slides, for instance). So it does get into the detail of the creative part of the presentation, as well as what constitutes a good presentation.
How Pecha Kucha Oral Presentations Work
Well, they do a group project, as I said, where they're looking into some artists that we haven't covered in the course, but ... They go and find some artists whose work they're interested in, and who've foregrounded the use of technology in their performances in particular ways. So they've done that work, and then they have to present that to the group, so what I do is, the last 2 weeks of the course are devoted to these student presentations. And you need a lot of time to get through them. But if you limit the time in the way I've suggested, in the way the Pecha Kucha format allows, you can get through everyone's presentations in plenty of time. And they're readily assessable, because, as I said, everyone's using the same format.
So you basically go into PowerPoint; you do your 20 slides and you just adjust the transitions so that they move automatically through the slides after 20 seconds. So it's a, kind of, very contained way of checking that students are doing the research, have actually done this work, and know what they're talking about, while you're also checking that they've learnt something about the presentation software like PowerPoint, and that they have some experience in speaking to it in a classroom situation.
Sometimes, depending on what else is happening in the course, students will accompany the presentation with a report, but if there are sufficient assessment tasks already in the course, then I just assess the Pecha Kucha and I don't necessarily ask for a report.
Assessing with Technologies—Dr Carol Russell, Faculty of Engineering
Below are transcripts for the respective videos on page Using Technologies to Support Assessment.
Some of the teaching issues include things like, I need to run project-based design problem-solving activities in very large classes, several hundred and sometimes larger than that. There is a lot of theory as well, a lot of engineering science that the students have to learn at an individual level, and sometimes there is a little bit of tension between teaching theory and teaching the design problem-solving activities. There is also a need to develop a lot of graduate attributes, communication skills, the ability to do teamwork and to relate that work to real situations.
Students often work on projects in small groups and teams of 4 or 5 and as well as having the output of their design work they are expected to learn, for example, about the design process and to articulate what they learned about the design process, and to comment on each other's work. So that involves peer review of each other's work, it involves commenting on each other's—on the team processes and how they contributed and how their colleagues contributed to the team process, and that's part of an assessed activity in a number of courses, and that starts with first year.
There's increasing use of mobile technologies for things like classroom voting and for peer marking in the classroom, and using things like iPods, iPod Touches for—or even just SMS and web services to give feedback to the teacher from a large class on whether they understand something or not. Sometimes that's done by clickers, but we have actually chosen to use iPods because iPods are multipurpose devices and they can be used for other things and we have been developing applications for other things. Some of them have marking rubrics—we have software for marking rubrics built in and linked to a web service so that people can select an option from a rubric for all the marking criteria and different standards. So again it helps build up understanding of what the criteria are for a particular activity like a presentation, a student presentation to their peers—the peers can give immediate feedback on criteria.
Simulations are widely used in engineering in the discipline, and can be used to give students the experience of real contexts. One example is in mining engineering where the consequences of a mine design decision can have very long-term consequences that students are not going to be able to experience—but the simulation can take them through what might happen. So there's a possibility of giving students something that they couldn't actually get in practice.
The other way that technology can be used is online role plays—that's been used in a couple of cases, where students take on roles and go into an online environment and have a discussion online, in role, and have to negotiate with somebody else, so they learn negotiation skills.
Things like, how do I know if students are understanding things in my lectures? They need to understand these concepts. You can run online quizzes with feedback, then you can monitor the results. There is one project that's developed a whole suite of adaptive tutorials where students get feedback that the teacher has programmed in beforehand, and then they can monitor how the class, as a whole, goes through those tutorials, and get the pattern of where people are getting stuck, and where there needs to, maybe, be some remedial teaching done, or additional feedback in the tutorials. And maybe that's one area where some of the engineering academics at least have an interest in technology per se, and are comfortable with developing technical fixes to teaching problems, so, some of the things we've developed have been kind of bottom-up, practical, pragmatic solutions.
Challenges of Assessing with Technologies
The other important thing is the amount of time it takes to prepare things, and it is worth putting a lot of effort in for something that is going to benefit a large number of students over a long period. But you are not going to want to do a lot of work preparing a simulation or an activity for a few students for one semester that is never going to be used again. So there is a cost-benefit analysis to be done on that aspect of it, and most of the applications that have been developed in engineering have been for large classes, and where they are quite difficult problems that are ongoing like high failure rates or things that can't be done any other way.
The other challenge, which occurs in all disciplines, is, of course, academics' staff time is limited, so they're not going to opt for solutions that take a huge amount of their time or need them to develop skills that they're not really interested in developing.
Benefits of Assessing with Technologies
Sometimes the technology can be used to help you do repetitive tasks that are time-consuming. I mean, at a very simple level, there is setting up a question-and-answer forum instead of answering individual student emails. That's ... given, and I hope that most people realise that now.
You can put quite a bit of effort into complex quiz design, if it's going to be used again and it is robust enough to be used again. I was talking to somebody the other day who was concerned about students cheating on a quiz, but I think that may not be a very well-designed quiz—if they can just copy the answers and share them with other students—and there are tools to prevent that happening.
Tools to put people into groups, project groups, so that the academics don't have to do that manually—that can happen automatically online. The students choose their projects and then are allocated to teams; then they have team spaces to work in. There is WebPA software—software we've been using—for them to give each other feedback on their team contributions. We use Calibrated Peer Review for peer marking of design project reports, and again that's a fully online system where the students give feedback to each other. And that has a number of educational values; in particular, the Calibrated Peer Review helps them understand the marking criteria and what they're being required to do, and it is much more effective than them just being told to do it by the teacher. And also it saves the teacher's time!
Some of the online tools—and there are a number of them—for marking things like essays or reports can save a lot of time and help the consistency of the marking, and they can help you articulate the criteria to the students as well.
You can embody a lot of routine work in the technologies, and it maybe just takes a little bit of time to work out, “I'm spending a huge amount of time doing this, and it's routine—is there a tool that will do it for me?" That's a good place to start in terms of time-saving.
Strategies for Assessing with Technologies
Be clear about what you're trying to do, what kind of learning it is—whether it's an individual or a group activity. What's the context? Where are the students? Are they in a classroom, are they at home? Or do you have an opportunity to give them something to do at home that will help the problem? Just really start with the problem and what it is you are trying to assess.
There are lots of tools for all sorts of things. There are rubrics to help clarify the teacher marking; you can use some mobile technologies for teacher marking, for peer marking; and if it's teamwork—"How do I assess teamwork; how do I know that people aren't cheating; how do I know that there aren't freeloaders in a team project?"—then the peer feedback on team contributions will help; it's very specifically for that and to help people learn how to contribute to teams. On situated learning it tends to be again very context-specific, where there may be some reason why people can't go into a real environment, like in mining engineering or in medicine. There are some things that students probably can't be let loose to do on their own, so there needs to be some support, and technologies can help with that, provide them with information, guidance, support or a substitute, or something to practise on that isn't real, that is safe.
You can look around for those tools or ask for advice. We fortunately have, in Engineering, an Educational Technologist who's helped to set up a lot of those things and likes doing the problem-solving, so if somebody comes and says, "Look, I'm a bit unstuck with this—I'm looking for something that will do X," then quite often he'll be able to help with that. And I think, probably, Learning & Teaching staff would be able to provide a similar function.
Assessing Students with a Disability: Dr Leanne Dowse, School of Social Sciences and International Studies
Below is the transcript for the video on the page Assessing Inclusively.
We see disability in a much broader framework than we ever have, and we see that framework as really to do with both the capabilities and capacities of the individuals themselves, but also with what we as a community, as a society and as a university do to assist that person to participate in their education on an equal basis with all other students.
Disability is covered under both legislation at the federal level and legislation at the state level. So at the federal level we see the Disability Discrimination Act, which is the main act that oversees the way that education is delivered. We have a set of standards under that act called the Disability Standards for Education, and they set out broadly what universities and their staff need to do to create inclusive environments, where students with disabilities can participate on an equal footing with other students.
One of the key issues for a student with a disability, if you can put yourself in their shoes for a minute, is their right to privacy. So we don’t require that students who have a disability come and tell us. In fact they have a right to their own privacy. People are protected under privacy legislation. So we have a system here at the University of NSW where the disclosure of a disability is in no way mandatory. It’s about balancing rights and responsibilities. So a student’s rights are to privacy but their responsibilities are also to the rest of us in the sense that if we are to make reasonable accommodations or to ensure that we can assist them correctly then they need to be able to disclose, safely, to someone in the university and that is what the Student Equity and Disability Unit does.
So one of the things we do as academics is of course do our best to assist our students by providing as many inclusive opportunities as we can. We create our assessments particularly so that, regardless of the skill set the person brings, all students should be able to take part. So we don't concentrate on one particular kind of assessment in a course, we don't always demand a particular sort of delivery, and we have some flexibility.
If an academic is concerned that a student is having an issue then the first port of call is to direct them to get some assistance and I think to be clear that the duty of care that academics have towards their students is to all students and also to themselves. It is not the role of the academic to provide that assistance, we are not trained counsellors, we are not trained to intervene in those kinds of ways, but most certainly to be able to point out to students when you’re concerned that whatever their issue is, is actually affecting their performance on their assessment tasks.
Assessing Inclusively, from the student perspective: Felix Rodrigues (BA Design student, College of Fine Arts)
Below is the transcript for the video on page Assessing Inclusively.
As a deaf student coming from the deaf community, I use sign language, which has a different grammar from English. So, when it comes to doing essays or large parts of written work, it can be very difficult for me in terms of a language challenge. I will often need assistance in terms of understanding English, and also in producing it, particularly in long essay writing.
I go to The Learning Centre and have one-on-one tutorials to help me deal with language issues that I might be having, but the person who has been working with me didn’t really have a full understanding of sign language, the deaf community and the issues that I actually face. I had to do a lot of explaining about what issues arise from being deaf, and what Australian sign language is about, via an interpreter. I think that the people that work in a place like The Learning Centre, where they are helping students with their English and helping them with their assessments and their essays, those people really ought to be afforded the training and the education so they are prepared to deal with a wide diversity of students, including deaf students like myself, and give us the strategies to navigate and negotiate. And also, the one hour at The Learning Centre is up pretty quickly and sometimes it’s not really enough time for me to finish my essays. It’s frustrating for me as I’m often having to ask for an extension and not being understood.
I’m often asking my lecturers for extensions to my essays at COFA. Obviously the design elements of the assignments are fine for me, the same as the other students, but when it comes to written essays, yes, I am constantly having to ask for extensions due to the language issues I face as a deaf person.
All of my teachers are great when you take the time to explain to them. I know that a lot of my lecturers have contacted SEDU [Student Equity and Disabilities Unit] here at the university. My lecturers are always very nice when you take the time to explain to them about the interpreters, you know: there are going to be two interpreters, I'm deaf, here is how it works. And some teachers go the extra mile; they prepare extra notes for me, and they give them to me in advance, so that not only can I prepare but my interpreters can prepare too. That is a system that works really well, that really helps back up my learning.
I think the first thing staff should do, if they want information about the deaf community and about working with deaf students, is contact SEDU. All my lecturers actually have meetings where they discuss strategies for working with me as a deaf student. So in terms of working with interpreters—what the guidelines are, how it works, how the classroom dynamics are going to work, what it is like having a deaf student—they should really get in touch with SEDU; they are the ones who are going to provide them with that sort of information.
Assessing in the First Year: Professor Prue Vines, Faculty of Law
Below is the transcript for the video on page Assessing First Year Students.
When you have very bright students, as we mostly do, they need to feel that their brains are being exercised. And so one of the things that I think is a very important principle is to have high expectations with your assessment and with your teaching, rather than low expectations, because people nearly always, in my experience, rise to a high expectation.
So we're trying to get first years to start to see themselves as part of a community of scholars, instead of a bunch of kids who are coming in with their mouths open and expecting us to stuff things in it. That's the very big shift that has to be done in first year. Later on they should already feel they're part of that community, feel more able to challenge and so on, because they've got the base from first year.
Aligning course content with assessment
Make sure that whatever assessment you're doing is really congruent with whatever your objectives for teaching are. So it really is silly of us to teach a whole bit of a course and then not have that in the exam, or, you know, do things like that. Because even though we would really like our students to be really intrinsically interested—and often they actually are—it's human nature to sort of slide out of what's easy. You know, if it looks as though you're not going to be tested, it's human nature not to put too much effort into that.
So I think, if you have something in your course, you should assess it.
Choosing the assessment tasks
[In the classroom, Prue Vines: Today we're going to talk about the ever-present subject of death, and I think it's always useful to point out that death is present in everyday life anyway, but most of us prefer not to think of it too often. But right now, it's very important for the purposes of succession law...]
Voiceover: Law's a language-based discipline, so it's actually very important that we do both writing things, and verbal things, because we are dealing with a discipline that is all about language; its nuances, its connotations, its definitions and so on. And so, our assessment needs to be—that's why multiple choice assessment is rarely much use, because it just doesn't allow us to look at those fine gradings of meaning and things like that, and that's why the written and verbal assessment's really important.
We have told them in advance, for example, that they will need to read, say, 3 theoretical articles beforehand...
[In the classroom, Prue Vines: We've got a presumed death. When was that presumed?]
...but we won't tell them the question. They need to have—and they're told that they have to think about what that article tells them about the legal system, so they've done some thinking before they come in, and then we give them a question that's lateral, that asks them to apply a theoretical thing to a concrete position, because of course that's often the best way of working out whether people really understand the theory.
[In the classroom, Prue Vines: If I say to you that this happened on or after—that I presumed something on or after—between 2000 and 2006, does it say when it happened?]
And so one of the techniques that I use in class participation is to keep asking the same person to go deeper into their answer...
[In the classroom, Prue Vines: ...carbon monoxide into your body faster or slower than a young person? Slower? Why?]
...and each time you say "Why?" they're being pushed back into a justification, and keep on having to articulate that justification, until I'm satisfied. I try to do this without being terribly scary. I don't think it is terribly scary if you set up a good classroom environment, where they know that the point of the class participation is to do some risky things in a safe sort of place.
[In the classroom, Prue Vines: Can we use section 35 in a situation where the death had been presumed? So what are the steps that the judge goes through, in determining that? So first of all, well Marcus, you were telling me quite a good account of this...]
I mark them for engagement rather than mastery. I regard class participation assessment as being about the level of energy in engagement they're willing to put into the class, that is: Are they ready? Are they listening to others and being respectful in their listening? If somebody says something and I ask another person what they said, they will know, because they're actually all in this community of scholars that we hope we're in together.
[In the classroom, Prue Vines: At present guardianship is for life, not for death, so that's one of the problems. Yes?
Student: So what happens if you've missed the point at which people still had capacity, but refused to give—
Prue: Well, in that case it has to go to the Guardianship—]
Voiceover: The fact that we assess class participation forces them to do their reading before they come to class. You know, they are human, and if nobody says this matters, then it's natural to not do it. So we are very lucky that, by the use of class participation over the last 4 years, we have actually managed to create more or less a culture where students do read before they come to class.
And that, of course, is an enormous benefit because it means when they come to study for an assignment or something, they've actually already done their study so they shouldn't need a real, you know, big bash at it, and because they've learned slowly, they've really learned, instead of just sticking it in the forebrain in order to regurgitate it on an exam paper. So that's one thing that classroom participation does: it actually enhances their learning dramatically, and it just doesn't work as well unless you assess it, so that really is using an assessment as the reinforcement, to make sure that the learning happens.
And then there's the techniques and the skills that you learn by doing that, which are thinking skills and also, the process of getting used to speaking. And we're not suggesting here that everyone needs to be able to speak like a barrister, but everybody who is any kind of professional—not necessarily a lawyer, but anyone who has to deal with the public, or gives advice of any kind, etc.—should be able to say, "This is the advice that I'm giving you, and this is the reason that I'm giving it." And that doesn't come without practice, and class participation means you can take a person who's very shy, and you can work with them until, by the end of their law degree, they are reasonably comfortable—they may still not want to get up in front of a 200-seat lecture hall and give a lecture, but they will be comfortable with talking to the person next to them, somebody they haven't spoken to before, and explaining what they're about. And that's a very valuable skill.
[In the classroom, one student explaining something to another over a text book.]
Early on in the piece we give them a piece of assessment which is a diagnostic piece of assessment. So we give them a test. And we're looking for a number of things there. When they get the thing back, at the bottom of the test there is a little thing saying, you know, what you got and the mark and so on, but also a number of tick boxes of places we think they should go for extra help or things like this, and the things that we look for are:
- problems with expressing English, which may not be because you're an International student but might just be because you've done a lot of science and maths
- problems that seem to be of general understanding of the topic and so on
- being a person who seems to be really good at detail and no good at framework, or being a person who seems to be really good at framework and no good at detail.
And just explaining to them that that's an issue is often enough to turn it around. And so we try and get that back to them by about week 6, if we can.
And we also send them off to our peer tutor program, to the Learning Centre, to English classes and some particular workshops that we run ourselves, in order to deal with the problems that we identify at that point. I see no point in wasting all of our time, with somebody struggling along, when by a simple workshop or something they can sort out the issue and reach their potential. No, it seems stupid to me to just try to knock them out.
Some people will say, "Look, I'm really shy. I shouldn't have to do this." They sort of see it as a kind of a right. Some people who come from countries where it is not normal to challenge the teacher in any way can find it extremely difficult, and sometimes they will say, "I shouldn't have to do this because I'm from this particular culture." I don't actually accept that argument. I accept that I need to be sensitive to where people have come from, and that they may—I don't mean in terms of their country or whatever but in terms of their background and where they're moving from in a learning fashion. But I don't accept that anybody is entitled to say, "I've come to this law school, and I'm going to decide what I learn"—not until you get to the elective stage. At this stage we know that this is valuable; we have very good pedagogical reasons for why we do it.
Strategies for classroom engagement
I also tell them how I assess it, that I keep a roll, and after every class, or during the class, whenever somebody says something, I have a scale which I explain, of marks that I stick in the roll, depending on whether I thought the person did something really spectacular, or whether they just did what they should have, and I also have a mark for where I think people have—you know, they've offered to do something and they haven't done it, there's a bad mark system as well. And that includes the listening component, you know, just thinking round the class, "Were these people listening?"
I think the other advantage of doing that and explaining how I do it, is that they lose their fear that it's an entirely arbitrary process. So they know that I'm actually—they can see me doing it, they can see I've got the roll out, when somebody's doing something I write it down there. So it's actually very transparent, even though I don't actually hold it up and show them what the thing is.
I also—in first year in particular, not in later years—but around week...9? I will give them an interim class participation mark, with the proviso that it won't go down, but it may go up, if they improve their performance in the next few weeks. That's a wonderful incentive, and it can help the people who really have opted out a bit to come back in again. I think it's very important in first year to get that going, because my experience is that if you don't get someone really to speak and to be involved, speaking, in class in the first 3 weeks, they may not ever do it. So it's really important.
Using Role plays in Formative Assessment—Dr Benjamin Barry and Dr Gail Trapp, School of Medical Sciences (Exercise Physiology)
Below are the transcripts for the respective videos on the page Assessing with Role Play and Simulation.
Gail Trapp: We use role play specifically in two courses, a first-year course called Exercise Programs and Behaviour, and in a third-year course called Physical Activity and Health. The dominant reason for using role play is to allow the students to have the opportunity to practise their clinical skills in a quasi-clinical environment with real clients or patients.
Ben Barry: And we certainly have got really good feedback from the students that that experience of what work is like, in the first year, was really valuable and genuinely did, at their own description, provide them with some motivation for the future years of study.
Gail Trapp: The assessment is a written form of assessment. As I say, the actual clinic, the role plays are not directly assessed, but they have an assignment that is written, it's very comprehensive. So they have to report on their testing, their decision-making process, the program that they devise for their client, and then their results, and they have to evaluate the program. In both courses it's cut into 3 sections, so that they get 2 lots of formative assessment before a final summative score on the assignment. So they get lots of feedback in the whole process.
Well, the feedback is instantaneous; they get it from the tutors at the time. They're not actually assessed on that clinical environment, because we feel that they will eventually get assessed for their direct clinical skills; we have what we call OSCEs—Objective Structured Clinical Examinations, which are objective measures of clinical skills—in their last year, so this is more an environment for them to develop their skills under tutelage, so...it's a very supportive environment.
Ben Barry: Formative assessment, I suppose, where they're really just taking skills that they've learnt in the labs and trying them out in a simulated workplace setting.
Interviewer: Are they given, sort of, guidance before they do this in things like communication—
Gail Trapp: Oh yes, yes.
Interviewer: And is that in that course, or throughout?
Gail Trapp: No, it's embedded in the course. It's embedded in both courses. So they get both lectures and lab sessions on motivational interviewing, behavioural change and on their clinical skills as well, so, you know, simple things like measuring blood pressure and what's an appropriate test to give to a certain client, and those sorts of issues.
Beforehand I discuss with the tutors the behaviours and what we're looking for and how we expect their clinical interactions to be structured, from the student's point of view. And then they observe the student going through their processes. If the student requires help they're allowed to intervene in any way, shape or form that is suitable and supportive of the student, of course. And they then sit down afterwards and give them oral feedback, but that's pretty much it.
Tips, Ensuring Authenticity
Gail Trapp: Well, I think in the third year it becomes much more authentic, simply because they're working with a real client; they're not working with one of their peers, and that changes the milieu quite considerably. In the first year I'm not sure that it's that critical, because, I mean, it is an artificial environment but it's very much placed within a clinical setting, so they get, the idea is to introduce it to them, to allow them to get the feel for the environment. We very much stress that this is about behaving and thinking like a professional, it's not interacting so much with your peers—and they're pretty good about it, you know, they understand that this is a time for them to learn how to be good clinicians.
Ben Barry: Yeah, their approach is pretty good and, look, we do pair this activity up with them doing some very similarly scheduled observational work and tutorials within an actual workplace within our university clinic, so they get the idea of what happens in that type of environment and then they are modelling that. Whether or not it's essential that they've seen it in the workplace before they're doing it, I don't know, but it probably helps.
Gail Trapp: I think the fact that the two go hand in hand is very useful, so they're actually seeing a real clinical environment and then role playing it themselves in a quasi-clinical environment, which, as I say, they take the task on very well and it doesn't seem to be an issue, the authenticity.
Ben Barry: And good tutors, who are themselves typically recent graduates who are doing some additional study or are out in the workplace—that really supports it well. The students find them really good and approachable to ask questions, because they're much nearer, I suppose, in their stages of study and learning. And as well as that, the tutors keep it on track—they really emphasise what the important things about the task are.
Ben Barry: This type of activity is probably, even as we've described it, fairly generally applicable to anyone whose students are going to wind up working with clients, and devising some sort of program, be it in health care or not, and then wants to see them again and follow it up. For that to occur in this small group setting, it's a lot of logistic stuff. We had to work out a facility that resembled the workplace environment, get that built and equipped. Then we had to make sure that cohorts of a hundred students for the first years and 60 to 70 students for the third years were actually scheduled in, in groups of 2 to 4 students, and then matched up with tutors. And in the initial stages, Gail put in huge effort to try to allocate students to those herself, but often it was hard to get them all to actually turn up and get it done in the end.
So we worked with some particularly helpful people in Timetabling—Nicola Bloom, especially—to actually get this locked into the timetable. The students enrol themselves in these activities as part of their integrated course activities and it all just happens from there. Timetable the tutors in and it works out.
Gail Trapp: It works very well now. I mean, initially, I think—it was a bit of a nightmare for you, wasn't it, organising it all?
Ben Barry: It did—it required a lot of extra groups—but once that planning was done, it gets rolled over each year with pretty minimal effort.
Gail Trapp: Well, in the first case the first year students are paired up with a fellow student. So they both act as the client and as the exercise physiologist, and they work in the small clinical rooms under the supervision of the qualified exercise physiologist who's working as a tutor. So it's very much a genuine sort of situation. So they're actually assessing their clients; they're making decisions about what sort of assessments to use, they're interacting with the clients and they're making health assessments and physical activity assessments on the client.
Then they go away and they write a program for that client and then they come back and re-test. And generally the feedback from the tutors is that they're very much more adept and at home in the environment on the second, on the repeat session.
And the third year is very similar except they're required to do more sophisticated testing and their program that they have to develop is much more based on behavioural change. So there's an exercise component, but really what we're looking at is their interviewing skills and their capacity to interact and guide their client into the appropriate lifestyle change that they need to do.
I like the idea that those role plays are very much a learning environment rather than an assessment environment; if they do get assessed on that, then it'll change the environment. And I'm not sure that I want to do that. My gut feeling is it's better to leave it as a supportive learning experience. They get lots of assessment elsewhere.
Field Based Learning—Associate Professor Elizabeth Fernandez, School of Social Sciences and International Studies
Below is the transcript for the video on page Assessing Work-Integrated Learning.
The Purpose and Nature of Field Based Learning
An important aspect of field-based learning is its purpose, which is to give students the opportunity to apply their theoretical knowledge and skills gained in the classroom in an actual field setting, and through that process basically to test that knowledge, and to build on that knowledge, and to reinforce and consolidate that knowledge, through practice.
Field-based learning in certain disciplines—particularly professional disciplines like social work, maybe medicine, psychology—is a kind of transformative approach or tool in the student's learning because it entails the student acquiring not just knowledge or learning intellectually or cognitively, but also developing attitudes, testing their values.
It has also an affective component, because, if you consider certain disciplines like social work, students' feelings and reactions to the situations that they deal with are a big part of their learning. So processing their feelings, and processing their values and attitudes, are part of the learning, and that's a major challenge for the field teachers.
Processes for Assessing Field Based Learning
There are a number of steps involved in assessing students on placement or field-based learning, and one has to lay the foundation at the beginning of the placement. It isn't something that you do at the end of the placement, or late in the placement; the foundation is laid right at the beginning. And one of the strategies that we recommend and support is that the student and the supervisor work together to develop a learning contract. This learning contract will incorporate the expectations of the university, the expectations of the field teacher and also the expectations of the student, especially their personal learning goals. That contract then becomes the blueprint for the placement and also the reference point for later assessment. It also becomes the basis on which the field teacher selects learning experiences and learning tasks for the student, so that they are directed at the particular goals that have been set out in the learning contract.
Now, when I talk about university goals, as part of curriculum we have what we might describe as a curriculum for field-based learning, or for the placement. This would identify a number of criteria for learning that would cover the knowledge, the skills, the values and attitudes that are part of learning in the practicum setting. They are also designed in conjunction with the professional expectations or requirements or the standards that they have set. So it's a combination of professional standards as well as the academic curriculum that is followed in the course—and that becomes, I guess, a reference point for developing the learning contract, and also the document on which the evaluation or assessment is based later on.
Now, the second thing, I think, is to keep in mind an assessment of where the student is at, at the beginning of the placement, and by this I mean trying to establish a baseline, so that that baseline can be used to look at the student's progression from the beginning of the placement to the middle of the placement, to the end of the placement. Another issue is determining what level of skill and competence the student possesses in the context of that particular setting. Here, for example, if a student is on their first placement, their level of competence, their level of learning, might be limited to what they have learnt in the classroom. If it is a final placement, one can presume that they've gained some skills and some level of competence from the previous placement. Even so, those assumptions cannot be made without taking into account that the particular setting that they are experiencing, even as a final placement student, may have particular, specific knowledge areas that they have to acquire, and that places different demands on the student. So one needs to take into account, I guess, an assessment of the student's level of skill, knowledge and competence in the context of the particular setting.
And each setting would have different demands and different types of new knowledge that the student has to acquire in order to function in that particular setting. It may relate to the client group, it may relate to the community, it may relate to the organisation, it may relate to the particular therapeutic models that are being used in that particular setting. So here again, this is another level of assessment that is part of the groundwork.
The other issue to take into account is working out how the field teacher is going to monitor the student's work or, in other words, what types of evidence the student will provide during the placement that will be the basis of the assessment. And so this would include deciding—the field teacher and the student deciding—what would be recorded in terms of their, say, interactions with clients, what kind of reports they would produce, what kind of methods or mechanisms they would use for monitoring the student's work so that it becomes a basis for supervision and ongoing feedback.
For example, in some settings students may be required or requested to tape an interview—with appropriate permissions—or they may be asked to produce a verbatim report of an interview, and that then can become the basis for the student and field teacher to explore how the student is progressing, areas of strength, areas of difficulty and areas where change is needed. Sometimes in some placements—in fact we would encourage many of the placements to have the student maintain a reflective journal, a practice journal. And this is for the student's own self-evaluation; there might be some mutual negotiation as to whether this journal becomes a basis for assessment or it's a mechanism for the student to monitor their own learning, and some of the issues that they are confronting. So those are some of the steps, initially.
Also, the other thing is, how are you going to communicate your assessment to the student? Is it going to be through verbal feedback or a written report? Different programs have different criteria and formats for recording the assessment or even conducting the evaluation process. Some have very structured formats—for example, in social work what we currently use involves a listing of different expectations, with a provision for rating the student, as well as provision for a narrative about how the student is going. That format also includes provision for the student to have their input to summarise their progress, and for the field teacher to also comment and summarise the student's progress. So in this sense there is a mechanism for incorporating the student's assessment of themselves.
The next step is how you put the feedback into writing, and what are some of the principles and steps that should be taken? For example, we would stress that in assessing the student, apart from, I guess, statements about how the student is performing, to include evidence, practical evidence, examples, examples that illustrate how the student carries out particular tasks, or particular skills or behaviours that they are reflecting. Whether it be a positive performance or whether it be a negative performance, we would encourage detailed examples to illustrate the student's performance. And the student might provide some of those too, in their narrative.
And finally, after that assessment report is written, something that the field teacher needs to take into account is sharing it with the student and being available to respond to the student's reaction to the report. Because quite often, even though there has been an evaluative discussion in which the student has participated and feedback has been provided, when the student sees it in black and white it has quite an impact, so it's important that the student has the opportunity to reflect on it and be able to respond to it with the supervisor. And of course we put in place mechanisms for dealing with differences of opinion, if they arise.
So, essentially that's the process.
Complex Partnerships in Field Based Learning
It's also unique in the sense that it is a complex partnership involving a number of stakeholders: the university, the student, the agency and the professional who is part of that agency. So it's a kind of multi-layered partnership, which has its own complexities, because there are particular tensions arising from divergent and competing interests and goals into which the student's learning has to fit. The student's learning needs to be the primary focus, but nevertheless the agency—and the field teacher, who is a professional—have their own responsibilities as well. So there are particular tensions that arise from that, and particular complexities that one has to deal with, not only in the learning process but also in the assessment process.
Challenges of Field Based Learning
There are a number of challenges, one arising from what I referred to as this multi-dimensional partnership, which involves different stakeholders: the university, the field teacher, the student and the agency. And in this sense there can be a number of tensions arising from those different parties to the assessment process. For example, from the field teacher's point of view, there is that transition from being a practitioner to being an educator and an assessor, and ... it's a different role entirely for a practitioner, so it's combining that practitioner role with an educator and assessor's role.
There are particular dynamics also that come into it, based on different disciplines. For example, in social work some of the professional values that we emphasise when practitioners go through their own training are to do with being supportive, enabling, caring ... and in the assessment process they are having to make a judgment. They are having to make a decision about students' performance, and this is a little bit different to their professional orientation, which is to do with being supportive. However, in social work, supervising students is conceptualised in terms of different dimensions. There is an administrative dimension, there's an educational and teaching dimension and there's also a supportive dimension. But there can be issues in the interaction of these different components of supervision.
The other issue is also to do with the fact that the students who come to a placement, or the way placements are organised, is such that there's variability between the different settings. Students are placed with individual field teachers for the most part. Sometimes there are groups of students with a field teacher, but essentially it's individualised learning based on a learning–teaching relationship with a clinical teacher, and so in that context the teacher who is assessing does not have the benefit of comparability of their assessments with other students. If the student is performing marginally, there isn't another student in the setting to compare with, except the field teacher's previous experience of having students. Also, there's variability in the different settings in terms of what is offered by way of learning, by way of challenge, by way of exposure. In some agencies there may be particular demands, in some others there may not be, so the learning environment to which the student is exposed can vary from place to place and that's something that needs to be taken into account in the assessment, and I'll come back to that in a moment.
There is also the issue that an important aspect of field learning is not just about skills and behaviours but it's also about confronting ethical dilemmas, and field teachers have a very important role in facilitating the student and enhancing the student's ability to move away from their personal value dispositions to a more anti-oppressive approach, because of the diversity of clients that they are dealing with and the diversity of situations that they are dealing with. And this I guess can trigger personal issues for students. So, in that assessment process there are particular challenges for the field teacher that arise from this particular dimension of learning as well.
I think the other issue is also to do with the disjuncture between academic learning (classroom learning, or the academic curriculum) and the field curriculum. Sometimes students may encounter content in the placement before being exposed to that content in the academic curriculum. In that case they may not have had the support of the academic content in the placement. At other times it's an advantage: if they are learning something ahead of time, then it benefits them when they actually do the course, because they've got a practical experience to relate it to when they approach it.
So, the point I'm making is that learning in the field and in the curriculum may not always be simultaneous; there may be some differences in the sequence in which they are exposed to that learning, and that can present particular challenges.
One of the things that students and the university expect when they go on a placement or a practicum is the opportunity and the context to link theory and practice. And while that is a challenging thing for students, and quite often something that's delegated to the field, it is a challenge for field teachers too, especially because they may not be aware of the theories that are being taught in the classroom, and in that sense they may not feel confident to facilitate that process of integrating theory and practice. On the other hand, they may develop particular strategies for ensuring that they update their knowledge in order to perform that role and assist the student in that process.
So these are some of the issues and some of the challenges that arise in field based learning.
The other issue is the consistency of assessments, because students are performing in different field learning settings and so there's the potential for inconsistency in assessing students and applying particular standards. We try to overcome this by having a tutor from the university visit the practicum at least once, so there is an opportunity for a face-to-face, 3-way communication to discuss some of the issues that might arise in the placement.
Another issue for field teachers is the fact that they may not have support from their agency for the teaching role, because of the pressures of the agency; quite often there is no allocation in their workload for teaching students, and this may create issues in terms of time available for supervision of students. That can also impact on the student's learning, if the field teacher isn't able to offer enough time, or isn't receiving support for providing time for the student.
Benefits of Field Based Learning
Well, for the student, first of all, it is an opportunity for confirmation of their achievements. It's an opportunity to let them know if they are on track. It also gives them some indications of areas where they need to improve. It builds their confidence and gives them the opportunity to consolidate what they know and repeat those behaviours, as well as look at ways in which they can enhance their skills. So the assessment feedback is extremely important for the student in terms of also setting goals for the rest of the placement. For example, if this assessment occurs at mid-placement it gives the student the opportunity to set goals for the rest of the placement.
There are advantages for the field teacher, too, because it gives them some direction for their teaching when they've assessed the student and they've arrived at some decisions about where the student is at, what are some of the gaps, what are some of the strengths. It gives them a focus for future teaching, for future development of learning experiences in the placement for the student. And this is recognising that the learning process in a placement is also dependent on what is available in the setting, and the kind of experiences that the student is exposed to. So in this process the field teacher can do more planning, for future teaching.
It also confirms their strengths as a teacher, because many field teachers would see it also as a validation of their performance as a supervisor and as a teacher. So, in that sense, it has value for the field teacher as well.
It's important for the agency and for the community, because it ensures quality service to the client system if the student is performing well, and if the student's work is being monitored and being assessed, and in that case the quality of the service to the community and the client is enhanced as well.
So there are benefits on all sides.
Important Principles of Field Based Learning
There are a number of principles that we advocate with field teachers when assessing students.
First of all, that evaluation and feedback should be seen as something that is continuous and it isn't something that occurs just at the time of evaluation or at the end of a placement. So we encourage field teachers in their weekly supervisory sessions with the student to provide feedback on a regular basis, so that when it comes to evaluation it's more of a recapitulation of the feedback that the student is receiving on an ongoing basis, and it also then helps to desensitise that whole issue of being assessed for the student if it's happening on an ongoing basis.
Another important principle that we stress is openness with the student: to discuss with the student ahead of time when the evaluation is taking place; how; who will be involved; what information is needed; how they can prepare for it. And this helps to lower the student's anxiety about the assessment process. So, in all settings I think it is important that students' anxiety and students' investment in the process are recognised, because assessment in field learning, in clinical placements, has particular impacts on the student: it is in many ways assessing them as a person and as a professional, and it's quite different from assessment in the classroom or assessment of an assignment. The assessment of their practice has a major impact on them, so preparing them and giving them the opportunity to assemble their evidence of how they are performing is really important.
Also, it should be a shared process. We encourage field teachers to suggest to the student that they might undertake a self-evaluation prior to the formal evaluation with the supervisor, and this is the reason why assessment formats are sometimes constructed to allow particular space where students can have some input. So it's a shared process between student and field teacher. It's also important, from the point of view of the student and the validity of the assessment, that it is something the student has contributed to; it means that there is some level of ownership by the student, and therefore a more effective document for the student.
Another important aspect that is to be stressed is that assessment should focus on strengths as well as difficulties. So the student's achievement and strengths need to be highlighted and labelled and validated so that they can improve on those and use those particular skills in other situations. Secondly, that it should highlight areas for improvement. So, in other words, a balanced approach, which recognises their achievements as well as their difficulties.
The other issue is also recognising contextual factors in the evaluation process, again referring to the fact that settings are variable and the types of experiences students are given tend to vary from setting to setting. And the context of an agency is not as predictable as learning in the classroom. So one of the principles that I think is important here is to look past atypical performances by students, and recent events that might colour the evaluation because they were of either a positive or negative nature, and instead to look at the whole context of the placement and to recognise some of the factors that might inhibit the student's learning or limit their learning opportunities.
For example, the student might have had a difficult case, a very challenging case, or they are in an agency where the staff morale is very low, or it was a particular stage when there was a poor flow of work, and that then limits the opportunities for the student to be able to learn. So, taking into account some of those contextual factors in making the assessment. The other important issue is also focusing on the—not just the behaviours or the achievements of the student, but being able to interpret them in the context of why certain behaviours and certain skills produce certain outcomes. And a nice way of putting it comes from Kadushin, who has written widely about supervision, and he says evaluation should not be about the final game score, but rather how the game was played. And so in this sense sometimes students come away not knowing why they received a very positive evaluation, or what was good about the particular assessment, but needing to know why particular behaviours contribute to particular outcomes.
Consistency is the other issue, because students are very conscious and very attuned to being treated fairly. So if there are a number of students in the same setting, consistency in the assessment across students is fairly important—similarly if it's an individual student, consistency across the whole placement—so that issues are raised wherever they surface, rather than overlooking them and then clobbering the student with a lot of negative feedback at a time when they cannot address that feedback.
The other important issue, particularly, as I've mentioned earlier, a lot of the learning takes place in the context of an individual relationship with a teacher, and that relationship between the field teacher and the student is the main context or the vehicle for learning. So it needs to be a positive relationship, and I think field teachers who invest some time in developing that rapport right from the beginning of the placement could be laying the groundwork for a good assessment which is carried out in the context of that positive relationship. So, looking at some of the relational aspects in the learning–teaching situation is also important.
We encourage field teachers to invite assessment about themselves, evaluation of their own role as a supervisor, so that the supervisor also learns from feedback, and the student engages in the process of giving feedback as well as receiving feedback.
And another final principle, perhaps, is emphasising to students the tentativeness of the evaluation, that it is how they are performing at this point in time, and that the good student can become better, and the student who is not performing so well can improve. Because as I mentioned earlier they have a very high investment in how they are assessed on a practicum, because of its links to their future professional practice.
Yes, those would seem to me to be some of the important principles in conducting assessments.
How Field Based Learning Works
How is it organised? Students—here again, I'll use the example of social work, where the students are actually placed with a professional social worker who supervises their work on a day-to-day basis. The students in the BSW program—Bachelor of Social Work—have to do 900 hours of fieldwork as required by the professional association, and this is split into two placements of 65 days and 75 days, one in the third year, one in the final year. The third year is a concurrent placement where they are in the agency for 3 days a week and they've got coursework on the other 2 days, and the final placement is a block placement where they are fully immersed 5 days a week in the placement setting.
They are attached to a social worker or, if not a social worker, somebody who has the experience in the area to be able to supervise a student; this might be supplemented by university supervision. It is a requirement of the association that they have a social-worker-supervised placement. And this particular field teacher (we refer to them as field teachers or supervisors) has the responsibility of designing learning experiences and learning tasks, allocating and delegating tasks to the student, monitoring that, and supervising the student on a weekly basis: at least 1 hour per week of formal supervision, and informal day-to-day supervision as well.
So that's essentially the context of field-based learning.
Below is the transcript for the video on the page Using Assessment Rubrics.
Hi, my name is Tim White. I'm the lecturer looking after the Mech Eng part of ENG1000, which is the faculty-wide Engineering Design and Innovation course in the Faculty of Engineering at the Uni of New South Wales.
As part of ENG1000 students are required to take part in a team-based design-and-build activity. They're put into teams of 4 or 5; the teams are randomly selected at the start of session, and over the next 12 weeks or so of session they need to work together and build a vehicle that is required to transport some kind of material from Point A to Point B on a track.
[footage of 3 such machines being tested]
To reward the students for their efforts throughout the session, irrespective of whether they placed all the balls in the compartments or not, we developed quite a complicated marking rubric based on the performance of their vehicle. Some marks would be allocated for whether the vehicle moved at all, whether it moved under an obstacle (a bridge in the path of the track), and then whether it actually delivered any balls into the compartments at the end of the track.
In the past, when competitions like this have been run, at the end of the competition you'd end up with a whole heap of sheets of paper, one for each team. In ENG1000, in Mech Eng, usually that's about 40 teams, and each of those sheets has the marks for the various criteria on it [onscreen text: 40 Sheets 12 marks]. Somebody then has to sit down and transfer all those marks into a spreadsheet [onscreen text: 480 Transcriptions]. The most obvious problem with that system is that it's quite time-consuming to transfer the marks across; also, almost always, there'll be at least one or two numbers that aren't transcribed properly; and paper can easily go missing, especially when you have 4 or 5 markers. That's something we were really keen to address with the new marking system.
For Session 1, 2011, the Engineering L&T team suggested that we try iUNSW Rubrik [shown on screen] as an alternative to this old paper-based system.
Setting up the activity on the web-based interface was quite intuitive; it was mostly just a case of ticking some boxes and putting in a few descriptive comments where necessary. Also, whilst setting up the activity, it forces you to have a bit of a think about how you want to allocate marks, where the best places to give marks are and the best places to take them away.
The competition itself, or Project [inaudible word] in Session 1 of 2011 was run across 2 days in week 13 of session. When we were setting up the activity, it became pretty clear that the user interface for the marking app was quite intuitive, and because of that we decided to only run with one marker on the day.
Vivienne Wong: I'm Vivienne Wong. I'm a tutor for ENG1000 this semester, and I'm part of the Mechanical stream. So 30 teams came through [footage of Vivienne marking using iRubrik, during the competition] and, yeah, the marking was really successful, it was really easy, it was really intuitive to use. Yeah, and it made everything really quick.
Tim White: Maybe the best benefit of being able to have the marks available instantly and being able to be collated in real time was that at the end of the second day of competition the winner was able to be announced [still photos of winners, competitors and judges] literally within a minute or so of when the competition ended, and we were able to hand out a trophy, take photos and all that stuff, without the students needing to come back for another day for a separate presentation ceremony.
[Production & App development by UNSW Faculty of Engineering / L&T Educational Technology Team
Special thanks: UNSW School of Mechanical Engineering / Dr Tim White / Dr Alex Churches / Andrew Pratley / Vivienne Wong
Music: Artist - Ricaud Damien / Album - Trancendam-Hybride / Song - Electrolife (CC BY-NC-SA 2.0)]
Assessing Classroom Participation—Nico Roenpegaal, COFA
Below is the transcript for the video on page Interpreting and Grading Learning.
We’re seeing an emergence of oral literacy and social literacy in future working environments: it’s paramount to be able to interact, to articulate your ideas and your questions, spontaneously, instantly. To deliver your point not just through written means, but through your whole body presence.
I mean we all know that some people are simply present in the classroom but when you read their stuff it’s just horribly structured or, you know, they just can’t get to the point, but in the classroom they are able to show themselves.
The most important aspect when it comes to class participation is to create a non-judgemental space. That is maybe the most difficult one as well, but also for me the most encouraging and the most stimulating. So how can we measure class participation if there is judgement in the room, if there is hierarchy in the room, if people are afraid to express their insecurities?
Someone who is very shy does not participate whatsoever, but after the first assignment: whoa, a very coherent writer, an ‘HD’. Do I treat that person in the classroom afterwards differently than another, less active student? So is it possible for me, when I mark classroom participation, to treat them equally? Do I have my favourite students? Or do I over-encourage, over-support, weaker students?
So what I do, in my first class, is let them design the criteria. They are responsible for how they are marked, how they are evaluated. And then of course I can complement and complete this list, and the single points serve as prompts for discussion as well. Halfway through the session I give a self-evaluation feedback form, and this rubric is part of that form; they also have an open comment section. And that is helpful to them.
What I do then with their self-evaluation is give them feedback. In most cases, I would say 95%, I agree; there might be one difference out of 10, and if there are two or three, which rarely happens, then I speak to the student. Then at the end of the session I offer them the chance to do the self-evaluation again, and that’s more on a voluntary basis.
Assessing Classroom Participation in Practice—Dr Iain Skinner, School of Electrical Engineering and Telecommunications
Below are transcripts for the videos on page Assessing Classroom Participation.
We don't design the assessment as an extra; we actually design a learning activity, the learning activity being that they have to do something that reflects what would actually take place in the workplace for them as engineers. So that we are assessing their authentic—or as near as we can get, their authentic—activities.
As they come in here they look at their equipment, they look at their outcomes, they look at why things don't work. And we observe, and we make notes and we make records about how they approach their tasks, so what we are trying to do, in the best approximation we can, is assess their professional approach, their professional attitude. Simultaneously, of course, you're assessing their base-level skills, but those things are really tick-the-boxes. The other part, the professional attitude, is a little bit more ephemeral, but that's the critical part in terms of what students actually have to achieve after 4 years.
These students will be graduates, and then they'll be at the point where they'll have responsibility for supervising summer vacation students and other graduates, and they'll be engaging in formative assessment tasks with students in the workplace. So they're not very far away from having to accept more responsibility.
They have to undertake certain experiments. We specify the outcomes of the experiments; we've got a list of the laboratory skills that the students are expected to demonstrate. Part of it is formative, part of it is entirely, "Whoops, you've not connected this piece of equipment to that piece of equipment correctly." You demonstrate, you show them, you go away and they do it again. In the end, though, if you want students to actually engage in something in a serious way, you've got to have a bit of a carrot at the other end of it. It's very crude, manipulating what students do, but hey, we're teachers, we have to do that.
[Teacher–student group interaction:
Teacher: Okay, so how did you do section C? What settings did you use and why?
Student: We used KP to be 2.5, and KV to be 0.05.
Teacher: Why did you pick those two?
Student: We wanted a KP above 1...
Student: And ... 0.05 seems to be in the middle.
Teacher: Okay, but what are you trying to do, like, you know—This experiment's all about the trade-off between all of the different characteristics of the feedback response, so...]
Iain Skinner: So depending upon the course, and—The one I teach, they get 15 per cent of their summative mark based upon their classroom participation in the laboratory activities. In some courses that could be as high as 25 per cent, depending upon how practical the actual laboratory-based skills would be for that particular subject.
[Teacher: So have a go at that, and pick whichever parameter you like; it doesn't matter; whichever parameter you think's the most interesting.
Students: Thank you.]
Iain Skinner: So you're correcting them, you're encouraging them, you're setting examples, you're teaching them basically how to have the attitude and the mindset and the approach of the professional engineer. When they have a problem, they have to solve the problem. Now, when they leave school they've got a different mindset; they leave school and if they've got a problem, they ask the teacher to fix it. By the time they finish 4 years here, they're getting about in the workplace; if they've got a problem they're expected to fix it. People actually look to them to fix problems, so they need to make that transition. So that's the benefit in terms of their longer-term learning.
[Video of teacher giving student feedback in front of a computer.]
As a formal assessment it gives direct feedback to the teacher in this particular space, because you're getting comments from the students in real time, before the end of the course. You can actually modify what you're doing, if not this year then next year. Exams don't provide that level of valuable control for the educator.
The points at which the students struggle are, first, that they need a little bit of time management, so they come in knowing what they're meant to do. I guess that's very much aligned with the need for some students in other parts to do their pre-reading before they turn up to their tutorial class. The other challenge is for the students to actually take some initiative and responsibility for what they're doing in the space. I've never been a fan of what are called join-the-dots activities, because they don't require students to move outside a limited scope.
[Video of students in lab explaining things to each other.]
You learn far more by debugging things when they go wrong, and finding out why this is not good, why that is not good, and therefore why this is good for that reason. Whereas if it all works first up, you never understand why it's actually done the way it is. That's a challenge for some students. They just don't like the idea that they could spend a lot of time on something and it still doesn't work. They resent it, in fact, in some cases: "Just give me the answer."
When students commence as first-year students in an engineering degree they have very little awareness of what their professional expectations, their professional behaviour patterns are. For that reason, they will get a lot of formative feedback, together with lots of little tasks in first and second year, to develop that mindset.
One of the advantages of working with fourth-year laboratory students is that by this stage they actually know what is expected of them; they know they're expected to be actively safe in what they're doing, to think about the hazards; they know they're supposed to have appropriate ways of keeping records of their laboratory work. So it does become easy to see when they're not complying with the professional expectations, because they really do know what the pattern is. A first-year student, you ask them, "Where's your lab book?" They may be a little confused about exactly what you mean. If I ask a fourth-year student or a postgraduate student "Where's your lab book?", they will know—even if sometimes they'll pretend they don't—they'll know what I'm talking about in the need for that lab book. So the answer is, yes, they do know, by this stage; they need a little bit of nudging, a little bit of polishing, but they certainly have the core values of a professional engineer by the time they get to fourth year.
Well, the strategy's very, very simple: by putting some marks on the activity, the students will undertake the activity; if the students undertake the activity, they will actually learn something. That's not rocket science, to use that terrible cliché.
I think it's important when you have a form of classroom participation, or any form of assessment, you've got to have some sort of vision in your head as to what the ideal is. And you have to have an agreement between all the staff involved in the course, that that's a common ideal. Now, if your ideal and someone else's doesn't match, then you've got to negotiate, and that's probably a good thing, because you remove some of the biases or overemphasising one thing that one person may do. But you've got to have a clear ideal in your head against which you're measuring the students' professional attitudes.
Peer assessment—Dr Louise Lutze-Mann, School of Biotechnology and Biomolecular Sciences
Below is the transcript for the respective videos on page Student Peer Assessment.
The peer assessment idea, where we ask them to look at each other’s work: I like to make it primarily not so much assessment as feedback, for making them realise and improve. So it’s more that I think the quality of the work they produce improves if they get to look at what other people have done, what was easy and difficult, and what they did. Some people are resistant to that of course – they are going to steal my ideas and it’s not fair – and some really appreciate being able to learn from others.
When we have done it for the assignment, we say the assignment is due in two weeks’ time. Next week, if you bring your assignment in, we will instigate peer assessment among those who bother to participate: we will give you a set of criteria against which to judge it, we will have you assess each other’s, and then write out an assessment and return that document to the student. You cannot participate if you haven’t already done the assignment. Because some of the students are like, "Well, someone will do no work, read mine and go home and use it." And I understand that notion.
The students who participate record that it is fantastic for two reasons. One, it actually gets them to do the assignment early, and they have a week to reflect on it and improve. Two, they get some very good feedback from their colleagues who are often far more critical than we are. They are quite happy to be ruthless with their friends and colleagues: "This is crap! I couldn’t read this! This didn’t make sense!" It can be a bit bruising. But the quality is improved.
I had a colleague who said to me, "What am I going to do if they all get 90%?" I said, "Celebrate! Wouldn’t it be great if they had all learnt enough that you felt they all did a fantastic job on this assignment."
Assessing Peer Review—Dr Arianne Rourke, COFA
Below are transcripts for the respective videos on page Student Peer Assessment.
Well, what they do is they are given two students to assess, it’s anonymous, and then two students assess their work, and they also assess themselves. Each of these then averages out to become a mark out of 10, so really only 2% rides on each of these five different things. So it also takes the onus off being about the mark. It’s not about the mark, because the mark gets down to really only being 2%; it’s really about giving quality feedback and taking that on board, trying to improve your writing and to give very good critiques of other students’ work, so you do learn a lot from doing that process as well.
Train the students how to do it; let them practise doing it; give them criteria they can mark to; show them how they can use those criteria to mark works; give them examples of poor, average and good works so they know what each looks like; give them some simple steps to follow to get that end product; give them links to sites maybe, but don’t take them to too many. I think it’s actually better to have what they need on the one site and keep it as simple as possible.
Also allow them, if they want to, to have one to one contact with you because I think that’s still important. Don’t let them feel isolated from you because you have got to deal with people on a need to know basis as individuals. We have got international students, we have got students that come with all sorts of concerns and problems about even the idea of writing such a large amount of word content, 10,000 words. A lot of them get very worried about that because this is the first time they have ever done this, because they haven’t had to do that in their other courses and a lot of them haven’t done honours or PhDs, so it’s very confronting for them.
So the best thing you can do with peer review is use it to break down the assessment into manageable chunks so that they gradually move into it and build it up and don’t feel overwhelmed by the process. Peer review can give that because you feel not alone, because you are in there working with your peers and you are all in there together and you are all going for the same common goal.
One of the challenges of doing peer review is that sometimes students do feel that their assessor has been a bit harsh. The way that I deal with that is I say, "Well, what you have got to do is take on board that, see how you can change your writing, but at the same time put a lot of effort into giving somebody else some really good comments on their work. It really all averages out and you can still get a very good mark, by taking on board the criticism, giving back to others and then reflecting on yourself, on how you might improve." The worst scenario, I can step in and mark it but believe it or not, over the years, that has never actually had to happen. Because somehow they then get it into perspective. I think initially students get it way out of perspective about the mark, they get too hung up about the mark.
Online Peer Assessment
An important part of using the online mode is to have the first meeting in person, so that you can go through it and explain it to them. They can ask you any questions, and they get that human person in front of them first, whom they are talking to, so that when they see a picture of you on the site they feel connected. Also, there is a welcome on the site, the information is set out in a very easy-to-follow way with all the assignments colour-coded so they can work through them, and it is easy to access the criteria and the examples.
Most students find it pretty easy because it starts off very basic, asking a few questions, and then things build up and I show them how to do it. So I kind of go in as a student myself in how I put up the examples and build them. That way they have got something to gauge themselves against, examples to say, "Well, am I doing this right? Yes, I will have a look at what has been set out there," and to see what is happening. But also, using online means they have access to other students’ work as well, which you don’t often get to do in a face-to-face classroom.
One of the benefits is they learn to be critical about their own writing by gauging it against other people’s writing. Another benefit is they become very collaborative in the way that they work, and they have a lot to contribute, because they are post-graduate students who come with a lot of experience. They do this course towards the end of the degree, so they have already had a lot of experience in other courses working with these students. They get a sense of community, like a little community of practice happening online together, and they really want to help each other along, so they get the feeling they are not alone. A lot of the time when you do your assessment in the classroom, you go home and you do it and you feel a bit isolated and on your own. You ask what other students are doing, but you don’t get to really see their work, and then you get feedback, often at the end. This gives you feedback progressively, as you do it, so it gives you a chance to change it, improve it and get another set of eyes to look at it as you go through the peer review process.
There are a lot of challenges for the instructor when it’s online. We were making it anonymous because the students knew each other from doing coursework; they needed to be free to say what they really thought, and not feel they had to mark high just because the work was a friend’s. So one of the challenges to start with was that I was giving them ID numbers and emailing these out. That was very time-consuming, and of course when students dropped out of the course I had to change who was reviewing whom. I have actually found it easier to just use the class utilisation system through MyUNSW and tell everybody to use the last three digits of their student number; then I can do one mass mail-out to everybody, and they just have to look on a list, find their student number, and find the other two or three numbers that they have to review.
So it’s got to have flexibility. With some online courses, if you set them up too early, and in a way that can’t be changed, it can become very frustrating for the students, because they want the flexibility to be able to have extensions. They want the flexibility, if something happens with the reviews they get, for you to at least be able to change the reviewers, say if students find out who they are. And the same goes for the lecturers themselves: things happen, people get sick or whatever. You need to allow for the fact that we are human, just as you would in a classroom.
Peer Review—Associate Professor Julian Cox, Faculty of Science
Below are transcripts for the respective videos on the page Student Peer Assessment.
Assessing to build graduate attributes
It's always been important to me that we have a range of assessment tasks that do target particular graduate attributes or capabilities—certainly, I guess in terms of the essays, group video assignments, all sorts of other tasks that I've used.
I guess really we tend to be targeting three things. One is written communication, the second is oral communication and the third is group or team work. Having come from working in a professional program for quite a number of years, we really see that the students who succeed when they leave university are not necessarily those that have the deepest technical knowledge of the discipline or area, but those that have a good working knowledge in a technical sense but also well-developed graduate capabilities or skills or attributes. And certainly, those students that go into industry, it's clear that those who are able to work well in a group or team and communicate clearly and professionally, both with regard to the oral and written modes, succeed very well.
So, that's always been my incentive for having assessment tasks that really target those attributes.
Guiding peer review
It's a really interesting concern around the guidance that students are given with regard to how they should assess a piece of work. And I guess I've tended to use a variety of processes from very open and creative to very explicit rubrics. So if I think about oral communication I'll have a very structured rubric, that still allows open comment, but really points the students towards a range of criteria by which they can assess their colleagues. The essays that I tend to use have a slightly more open rubric, with only several questions guiding their assessment or evaluation of the essay. Through to a group video assignment where really what I'm looking for is more creativity, both in what the students do and also in how the reviewing students see it. And there I've left that quite open.
And I haven't had the students complain about any form of rubric—and even being completely open, they seem comfortable with that. Although I have thought that as a learning process it might be useful for the students to negotiate with me or with the tutor in class, the sorts of criteria by which they might assess those videos, and I'm thinking I'll at least try that and see how comfortable people feel with a very open marking framework and with a more constrained marking framework.
Peer review in Moodle
In terms of written work, technology's been a real saviour, because I've used—in various platforms, but currently in Moodle—the peer review or Workshop activity/tool that really facilitates the process in various ways.
Certainly most importantly, the students can upload the work themselves. I don't have to collect the task—the essay, if you will. The really valuable part of technology is that it randomly allocates the peer reviewers to each piece of student work. So I don't have to mess around with having to try to allocate three students to each essay—and when we're talking about a class that has 230 students in, that saves hours and hours. Then of course the students are able to provide the feedback and re-upload that, so the authors are able to see the peer review comments through that tool—I don't have to receive those and redistribute those.
So the students get the full value of that process, while reducing the workload for me immensely through the use of that technology.
Strategies and benefits
Well, to me, peer review is the process of engaging students in the review of work that they do. I find it's important that we generally just keep to review, because many students don't want to engage in peer assessment, that is, actually giving their colleagues a mark. But certainly there's a lot of value in the students being able to provide each other with critical, qualitative feedback.
Peer review strategies
One of the strategies that I use that engages peer review very solidly is in a Year 1 course, where we're trying to acculturate students to the scientific community, and the assessment really aligns with the idea of peer-reviewed publications, so, the scientific publication process. The students actually have to prepare an essay that is a prospective biography, that is, at the end of the course they're really looking at where they might be going in terms of their scientific career, and trying to imagine what it's going to be like for them in ten to fifteen years post-graduation—so even as first-year students really looking forward and thinking about what they might become. So that's the context or the task.
Now, the students complete a draft of that essay. That's then submitted online for students for peer review, and each essay is reviewed by three students. At no time during that first process do I or any of the tutors engage with that material; it's purely that the other students serve, as with publication, as peer reviewers. The author of the particular essay then gets that feedback; they have the opportunity to modify that essay based on the feedback from their peers, and then the final essay is submitted, and that's marked by either myself or one of the tutors in the course.
So what we find is that that's a very valuable process because the students understand the alignment between what they're doing as a process and the scientific review process in the professional sense.
Peer review as active learning
I think it's a very good active learning process. Rather than individually getting feedback from the teacher, as they're performing peer review they're thinking about all the facets of the particular task that are important to good performance in that task. And then they're able to take those ideas on board and build them back into their own work, perhaps for the current task—certainly where there is the iterative process with the prospective biography essay. But even if they're not able to employ it immediately, they still have the benefit of thinking about what makes a good essay, or a good presentation, and can take that forward into similar tasks within the same course, or perhaps in another course they're doing in that semester or year.
Peer review challenges
The students only find it challenging when it comes to assigning marks to each other. Often they inflate marks, giving a higher mark than the work really deserves, because I think there's a fear of [laughs] maybe reprisal among students. I think they also feel that it is ultimately the job of the academic, as an expert, to provide the mark; they don't feel they should be marked, in that numerical sense, by their colleagues.
That said, they find the process really valuable, so in terms of being able to provide qualitative feedback, to provide critique to each other, they seem to have no fear and in some cases can be brutally honest but still constructively critical. And so they see the benefit in that process, in itself, within the class context but again also they see their future, particularly in science and in advanced science, that they're going to be involved in the scientific community where peer review is an essential part of life, so they just really have to get used to it, so they find it's valuable to engage in that process early.
Peer review benefits
I think the advantage of peer review is that it probably does make assessment a more pleasurable and potentially more efficient process, particularly where there is the chance for the students to build feedback into the final submission. When staff receive those submissions, typically the work is of higher quality, so it's more enjoyable to assess: the essay, say, is easier and more pleasurable to read. And because there are, hopefully, fewer mistakes, because the peers have corrected those with the author, or the performer, the staff have less to comment on. So in effect everyone wins: the students will certainly end up with a better mark, and they've learnt what they need to learn in terms of the process; and the staff, again, have a more pleasurable, more efficient time in actually giving the mark and grade to the task.
Introduction to Plagiarism, its Definition and Importance—Associate Professor Sue Starfield, Director of the Learning Centre
Below are transcripts for the respective videos on page Reducing Plagiarism.
What Plagiarism Is
Plagiarism is quite a complex and complicated issue because there are a number of definitions of it and it's quite wide ranging. So one of the things I'd suggest that you do, first of all, if you're trying to understand what plagiarism is, is to actually go to our Academic Integrity and Plagiarism Workshop on the Learning Centre website and take the little quiz that we have there, and see if you agree with the definition of plagiarism. You will see that—you may be surprised to see that things that you might not have thought constituted plagiarism are actually considered to be plagiarism. So it's important that you yourself as an academic understand what the university considers to be plagiarism, and it can range from cheating, copying, collusion to just poor citation and referencing practices.
The approach at UNSW towards plagiarism is educative in the first instance. So we're not setting out to catch students, or punish them; we're really setting out in the first instance to make students aware of what our expectations are, to design assessment tasks that help students develop the skills to reference and cite accurately.
Why Plagiarism Occurs
A lot of the time it is out of ignorance. Students just aren't aware of what the referencing conventions are. They lack experience in referencing; they are unsure at times of what the assignment is asking them to do; and, if English isn't their first language, it can be an issue of not having adequate linguistic resources. Students are asked to paraphrase, to put things in their own words, and they often just feel that they don't have enough language to do it, so it's easier to copy and paste the words of others, the authorities, and not reference them appropriately.
And another factor that we in the Learning Centre have identified as being a potential cause of plagiarism is poor time management. So often when students are referred to us because they have been found to be plagiarising and they are sent to us for some counselling or help with understanding what's involved in appropriate academic practice, they tell us that they were under pressure, it was the night before, they didn't have time to really do the reading and think about the writing, so they just cut and paste. So perhaps we can also help our students develop good time management.
The perceived increase in plagiarism is probably in part due to the World Wide Web and just the availability of material, and the ease with which one can cut and paste.
How to minimise plagiarism
So what can academics do to help reduce the amount of plagiarism? It's quite a challenge; everyone's under pressure; we don't want to do too much marking, but I think that sometimes if we look at the assessments that we're setting and review them we can look at ways in which we can design out opportunities for plagiarism.
- If we don't change our assessment tasks regularly, the likelihood that there will be copies of students' work from previous years floating around is quite high.
- We should try and encourage elements in the assessment task that draw on students' local and current experience, so they really have to come up with something that's a little bit original in their assessment.
- We should set assignment tasks that perhaps link to one another. So they might start off with a draft of something that we look at, then they're given feedback and they go on to further develop the tasks, so we have evidence that it's their own work, or it's more likely to be their own work.
- We show them how to acknowledge online sources, because sometimes students will think that if something's online it doesn't have an author, it's just there and available.
- With group work there's a stronger likelihood, possibly, of collusion or copying, so we need to try and minimise that, possibly by asking students to submit reflective accounts of what they did, what their contribution was.
- We need to give students opportunity to practise their writing skills, receive feedback and improve their writing.
- It's probably a really good idea, particularly with first-year students, to devote time in the early lectures to explaining clearly what the referencing conventions are, in your particular subject, what you expect from the students, and show them examples of appropriate referencing. You can make these available in your Blackboard site or your Moodle site. It's a really good idea to provide examples of successful assignments, done previously, in previous years, if you get the students' permission. You can annotate these. For many students, particularly those coming from different countries, or with poorer English language skills, they may not have been exposed to the conventions that Australian universities require.
UNSW Learning Centre services and resources on plagiarism
At the Learning Centre we do have resources online that help students develop those skills. We also run workshops throughout the semester on all these areas: writing skills, reading skills, note-taking skills, how to reference, how to cite, how to paraphrase, how to summarise. So you're not alone, and you can certainly refer your students to our workshops. And, as I said, you can ask us to come into your classes as well and run workshops for your students if you're really concerned about their ability to avoid or minimise plagiarism in their work.
Using the Turnitin Similarity Detection Tool
Now, Turnitin is not a plagiarism detection tool; it's a similarity detection tool. What does that mean? It means that Turnitin has a huge database of previously submitted assignments from all over the world, journal articles and databases, and it compares the student's assignment to all these texts and highlights text that is identical or similar. Then you have the possibility of looking at the student's assignment and deciding whether it is in fact plagiarism, and whether it is inadvertent or deliberate. You can call the student in; in fact, the university has a whole set of procedures, through academic misconduct, for dealing with plagiarism.
So I would say the first port of call is to use Turnitin if you are concerned, and make the students submit through it.
There are other ways that you can use Turnitin, too, not necessarily to detect but to help students educate themselves, particularly with new students in first year or perhaps new postgraduate coursework students who are coming from a diversity of backgrounds. They can use Turnitin, I think, for themselves, to put their drafts in it, and see to what extent it highlights similarity with some other texts that are in the database, and this can help students as well become more aware of what's required, because sometimes it can just be an issue of poor referencing, poor citing, or perhaps they need to paraphrase more or summarise more. So I think that Turnitin can also be a bit of a developmental and educational tool for students, and I'd like to encourage academics to think about using it in that way.