Assessment think tank, HEA, 2008-01-31

Assessment think tank, at The Higher Education Academy, York, 31st January to 1st February 2008

Several of these events appear to have been arranged; this one was held jointly with the Subject Centres for History, Classics and Archaeology (HCA) and for Philosophy and Religious Studies (PRS).

Around 20 delegates were present, mostly from the host subject areas, but also including someone from the JISC’s Plagiarism Advisory Service. The only people I recognised were Sharon Waller from the HEA and Costas Athanasopoulos (PRS Subject Centre), whom I had talked with at the JISC CETIS conference: he was the one who invited me.

I won’t try to document the whole event, but will pick out a few things which were highlights for me.

The discussion around plagiarism was inspiring. There was very little on the mechanics and technology of plagiarism detection (Turnitin is popular now) and plenty on good practice to remove the motive for plagiarising in the first place. This overlaps greatly with other good practice, which I found heartening. George MacDonald Ross gave us links to some of his useful resources.

Also from George MacDonald Ross came an interesting treatment of multiple-choice questions: best used in formative self-assessment, avoiding purely factual questions, and focusing instead on different possible interpretations (in his example, within philosophy).

As I’m interested in definitions of ability and competence, I brought up the issue of subject benchmarks, but there was little interest in that specifically. However, for archaeology fieldwork, Nick Thorpe (University of Winchester) uses an assessment scheme with several practical criteria, each with descriptors for five levels. This perhaps comes closest to practice in vocational education and training, though to me it doesn’t quite reach the clarity and openness of UK National Occupational Standards. Generally, people don’t yet seem ready to define clearly the characteristics of graduates of their courses, or they feel that attempts to do so have been poor. And yet, what can be done to provide an overall positive vision, acceptable to staff and students alike, without a clear, shared view of aims? Just as MCQs don’t have to test factual knowledge, learning outcomes don’t have to be pitched at a prosaic, instrumental level. I’d be interested to see more attempts to define course outcomes at relatively abstract levels, as long as those remain assessable, formally or informally, by learner, educator and potential employer alike.

One of the overarching questions of the day was: what assessment-related resources are wanted, and could be provided either through the HEA or JISC? In one of our group discussions, the group I was in raised the prior question of what counts as a resource anyway. At the end, the question came back, and given the wide range of the discussion throughout the day and a half, there was no clear answer. But one thing came through in any case. Teaching staff have a sense that much good, innovative practice around assessment is constrained by HEI (and sometimes wider) policies and regulations. Materials which can help overcome these constraints would be welcome. Perhaps these could be case studies documenting how innovative good practice was reconciled with prevailing policy and regulations. Good examples here, presented somewhere easy to find, could spread quickly, virally even. Elements of self-assessment, peer assessment, collaboration, relevance to life beyond the HEI, clarity of outcomes and assessment criteria, etc., if planted visibly in one establishment, could help others argue the case for better practice elsewhere.

Identity as a programming language

If Sam can do it (and at the same time claim that Scott and Adam have as well), then I guess we all can…

You are C++. You are very popular and open to suggestions. Many have tried to be like you, but haven't been successful
Which Programming Language are You?

It is very interesting to note how compulsive these kinds of tests are: it seems we all want to know how we are rated by others. Very natural. Perhaps we can get hold of this and link it into the domain of assessment and the issue of identity?

Also, there should be an easy way of presenting the results of such tests (OK, perhaps more serious ones) in an e-portfolio, and making that available for others to search on. Perhaps I’m saying no more than something about FOAF and another way in which it could be used: this certainly links to Scott’s approach to e-portfolios. A rough sketch of what I mean is below.
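To make that a little more concrete, here is a minimal sketch (my own invention, not any established e-portfolio format) of how a quiz result might be hung off a FOAF profile as extra RDF statements, written in Python with rdflib. The "quiz" vocabulary, its property names, and all the identifiers are hypothetical, made up purely for illustration.

```python
# A minimal sketch, assuming rdflib is installed. The QUIZ namespace and
# its terms are invented for this example; they are not a real vocabulary.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

QUIZ = Namespace("http://example.org/quiz#")  # hypothetical vocabulary

g = Graph()
g.bind("foaf", FOAF)
g.bind("quiz", QUIZ)

# A hypothetical person, described with ordinary FOAF.
me = URIRef("http://example.org/people/alice#me")
g.add((me, RDF.type, FOAF.Person))
g.add((me, FOAF.name, Literal("Alice Example")))

# Attach a quiz result to the same person, so anything that already
# crawls FOAF could in principle find it alongside the profile.
result = URIRef("http://example.org/quiz/results/1")
g.add((result, RDF.type, QUIZ.Result))
g.add((result, QUIZ.takenBy, me))
g.add((result, QUIZ.quizName, Literal("Which Programming Language are You?")))
g.add((result, QUIZ.outcome, Literal("C++")))

print(g.serialize(format="turtle"))
```

Something along these lines is roughly the e-portfolio search I have in mind: the profile and the result live in the same graph, so a FOAF-aware searcher gets both for free.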