Thursday, June 25, 2015

The Assessment in Higher Education conference 2015

I am writing this on a sunny evening, sitting in a pub overlooking Old Turn Junction, part of the Birmingham Canal Navigations, with a well-earned beer after two fascinating and exhausting days at the Assessment in Higher Education conference.

It was a lovely conference. The organising committee had set out to try to make it friendly and welcoming and they succeeded. There was a huge range of interesting talks and since I could not clone myself I was not able to go to them all. I am not going to describe individual talks in detail, but rather draw out what seemed to me to be the common themes.

A. It is all just assessment

The first keynote speaker (Maddalena Taras) said this directly, and there were a couple of other things along the same lines: the split between formative and summative assessment is a false dichotomy. If an assessment does not actually evaluate the students (give them a grade, hence summative) then it misses the main function of an assessment. This is not the same as saying that every assessment must be high stakes. Conversely, in the words of a quote Sally reminded me of:

“As I have noted, summative assessment is itself ‘formative’. It cannot help but be formative. That is not an issue. At issue is whether that formative potential of summative assessment is lethal or emancipatory. Does formative assessment exert its power to discipline and control, a power so possibly lethal that the student may be wounded for life? … Or, to the contrary, does summative assessment allow itself to be conquered by the student, who takes up a positive, even belligerent stance towards it, determined to extract every human possibility that it affords?” (Boud & Falchikov (2007) Rethinking Assessment in Higher Education: Learning for the Longer Term)

The first keynote was a critique of Assessment for Learning (AfL). Not that assessment should not help students learn. Of course it should. Rather, the speaker questioned some of the specific recommendations from the AfL literature in a thought-provoking way.

The 'couple of other things' were a talk from Jill Barber of the School of Pharmacy at Birmingham, about giving students quite detailed feedback after their end-of-year exams; and Sally Jordan’s talk (which I did not go to, since I have heard it internally at the OU) about the OU Science Faculty's semantic wranglings over whether all their assessment gets called “summative” or “formative”, and hence how the marks for the separate assignments are added up, without changing what the assessed tasks actually are.

B. Do students actually attend to feedback?

The second main theme came out many times. On the one hand, students say they like feedback and demand more of it. On the other hand, there is quite a lot of evidence that many students don’t spend much time reading it, or that when they do, it does not necessarily help them to improve. So, there were various approaches suggested for getting students to engage more with feedback, for example:

  • giving feedback via a screen-cast video, talking the student through their essay, highlighting with the mouse (David Wright & Damian Kell, Manchester Metropolitan University). Would students spend 19 minutes reading and digesting written feedback on an essay? Well, they got a 19-minute (on average) video - one of the few cases where some students thought it was too much feedback!
  • making feedback a dialogue. That is, encouraging students to write questions on the cover sheet when they hand the work in, for their tutor to answer as part of the feedback. That was what Rebecca Westrup from the University of East Anglia was doing.
  • Stefanie Sinclair from the OU religious studies department talked about work she had done with John Butcher & Anactoria Clarke assessing reflection in an access module (a module designed to help students with limited prior education to develop the skills they need to study at Level 1). Again, this was to encourage students to engage in a dialogue with their tutor about their learning.
  • Using peer and self-assessment, so that students spend more time engaging with the assessment criteria by applying them to their own and others’ work. Maddalena Taras also suggested initially handing students’ work back without the marks or feedback (albeit a couple of weeks later, once it has been marked), so that they read it with fresh eyes before they receive first the feedback, then the marks.
  • There was another peer assessment talk, by Blazenka Divjak of the University of Zagreb, using the Moodle Workshop tool. The results were along the same lines as other similar talks I have seen (for example at the OU, where we are also experimenting with the same tool). Peer assessment activities do help students understand the assessment criteria, and help them appreciate what teachers do. Students’ grading of their peers, particularly in aggregate, is reliable and comparable to the teacher’s grade.
  • A case of automated marking (in this case of programming exercises) where students clearly did engage with the feedback, because they were allowed to submit repeatedly until they got it right. In computer programming this is authentic: it is what I do when doing Moodle development. (Stephen Nutbrown, Su Beesley, Colin Higgins, University of Nottingham and Nottingham Trent University.) There is a sketch of this submit-and-resubmit loop after this list.
  • It was also something Sally touched on in her part of our talk. With the OU's computer-marked questions with multiple tries, students say the feedback helps them learn and that they like it. However, if you look at the data or usability-lab observations, you see that in some cases some students are clearly not paying attention to the feedback they get.
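
To make that Nottingham example concrete, here is a minimal sketch of how test-based automatic marking with resubmission works. To be clear, this is my own illustration in Python, not the Nottingham team's system: the exercise, test cases and function names are all invented for the example.

    # A toy auto-marker: run the student's code against fixed test cases,
    # give specific feedback on each failure, and let them resubmit.
    TEST_CASES = [
        # (input, expected output) for a made-up "reverse the words" exercise.
        ("hello world", "world hello"),
        ("one", "one"),
        ("", ""),
    ]

    def mark_submission(student_fn):
        """Run student_fn against each test case; return score and feedback."""
        passed, feedback = 0, []
        for given, expected in TEST_CASES:
            try:
                actual = student_fn(given)
            except Exception as err:
                feedback.append(f"Input {given!r}: your code raised {err!r}.")
                continue
            if actual == expected:
                passed += 1
            else:
                feedback.append(f"Input {given!r}: expected {expected!r}, got {actual!r}.")
        return passed, feedback

    def attempt(student_fn):
        """One submission: immediate marks plus specific feedback, then resubmit."""
        passed, feedback = mark_submission(student_fn)
        if not feedback:
            print(f"All {len(TEST_CASES)} tests passed. Well done!")
        else:
            print(f"{passed}/{len(TEST_CASES)} tests passed. Fix these and resubmit:")
            for line in feedback:
                print("  -", line)

    # A buggy first attempt (forgot to reverse), then a corrected resubmission.
    attempt(lambda s: " ".join(s.split()))
    attempt(lambda s: " ".join(reversed(s.split())))

The point is that the feedback is specific and the cost of acting on it is low, so reading it is the fastest route to a better mark - exactly the engagement the other approaches above are trying to create.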

C. The extent to which transparency in assessment is desirable

This was the main theme of the closing keynote by Jo-Anne Baird from the Oxford University Centre for Educational Assessment. The proposition is that if assessment is not transparent enough, it is unfair because students don’t really understand what is expected of them. A lot of university assessment is probably towards this end of the spectrum.

Conversely, if assessment is too transparent it encourages pathological teaching to the test. This is probably where most school assessment is right now, and it is exacerbated by the way school exams are made excessively high stakes, for the student, the teacher and the school. Too much transparency (and risk-averseness) in setting assessment can lead to exams that are too predictable, so students can get a good mark by studying just those things that are likely to be on the exam. This damages validity, and more importantly damages education.

Between these extremes there is a desirable balance where students are given enough information about what is required of them to enable them to develop as knowledgeable and independent learners, without causing pathological behaviour. That, at least, is the hope.

While this was the focus of the last keynote, it resonated with several of the talks I listed in the previous section.

D. The NSS & other acronyms

The National Student Survey (NSS) is clearly a driver for change initiatives at a lot of other universities (as it was two years ago). It is, or at least is perceived to be, a big deal. Therefore it can be used as a catalyst or lever to get people to review and change their assessment practices, since assessment and feedback are things that students often rate poorly. This struck me as odd, since I am not aware of this happening at the OU. I assume that is because the OU has so far scored highly in the NSS.

The other acronym floating around a lot was TESTA. This seems to be a framework for reviewing the assessment practice of a whole department or degree programme. In one case, however (a talk by Jessica Evans & Simon Bromley of the OU Faculty of Social Sciences), the review was done before TESTA was invented, though along similar lines.

Finally

A big thank-you to Sue Bloxham and the rest of the organising team for putting together a great conference. Roll on 2017.
