ePortfolios, Y|N?

I retweeted a link to this post yesterday, and promptly found myself in the middle of a storm of debate about the validity and legitimacy of the points it raises.  As it’s not exactly a topic that lends itself to discussion in 140-character chunks, I thought I’d bring it here to see if people want to continue what turned out to be a pretty impassioned and heated discussion.

For my part, I think there are some good points made here.  While I think there’s a definite role for eportfolio technology in certain contexts, I’m not sold on the whole ‘lifelong portfolios for lifelong learners’ rhetoric, and I don’t think it necessarily meets the needs or desires of learners or teachers.

My biggest issue is that there is a lack of distinction between a portfolio of work that is ultimately intended as an assessment resource to be externally viewed and evaluated, and a student’s body of work which he is supposed to reflect on and learn from.  The intrusion of workplace CPD into this space simply exacerbates this lack of focus and conflicting motivations.  While it may be possible for a single system to fully meet the technical requirements of these very different competing interests, I don’t think that’s necessarily the appropriate approach.  Learning is all about having the freedom and safety to fail, and about taking ownership of our successes and failures in order to grow as learners and as experts in the subject we’re studying.  Having authority over our own work is a fundamental part of that, and something that has to be handed over when that work is used for formal evaluation.

I don’t think we need specialised software in order to retain a record of our learning and progress.  A personal blog can be a powerful tool for reflection, a pen drive of files can be more portable and accessible than a dedicated tool, and your YouTube or Vimeo or Flickr channel is more than adequate for preserving your creations.  All of these have permanence beyond the duration of a course: although some institutions will allow continued access to institutional portfolio systems after a student has finished his course of study, it’s not a given and is always subject to change.  Using existing services ironically offers far more opportunity for true lifelong learning than a dedicated system.  And such distributed systems mirror the ways in which people reflect on and share their work outside the walls of the university.  I still have my ‘portfolio’ of my undergraduate work: the printed-out essays I handed in with my lecturers’ comments written on them.  That was exactly what I needed as a learner, and that’s exactly what I need now should I ever wish to reflect on that period.

For material to be used for assessment, yes, there is a need for secure and reliable storage systems and appropriate standards such as Leap2A and BS8518 to support the exchange of evidence, but the systems and processes should be appropriate to the subject and the material to be assessed rather than assessment being tailored to suit the available systems.

Many thanks to @drdjwalker, @dkernohan, @mweller, @markpower, @jamesclay, @ostephens, @jontrinder and @asimong for joining the discussion on Twitter.

Under development: QTI-IPS

A couple of months ago, JISC released an Invitation to Tender for a QTI v2.1 implementation and profiling support project.  The successful bid came from a consortium bringing together some of the leading QTI experts in UK HE, and the project formally kicked off this week.  It concludes in mid-September this year.

The consortium is led by the University of Glasgow, and includes experts from the University of Edinburgh and Kingston University, contributions from the IMS QTI working group chairs and tool developers, independent consultants Sue Milne, Graham Smith and Dick Bacon, and input from us here at JISC CETIS.  QTI experts at the University of Southampton are advisors to the project.

A project blog has been set up, which will provide a central point for dissemination to the wider QTI community.  Information on how to get involved with the QTI interoperability testing process is also available there.

The project aims include:

  • Contributing to the definition of the main profile of QTI 2.1;
  • Implementing the main profile in at least one existing open source test rendering/responding system;
  • Providing support in the use of QTI 2.1 and the conversion of other question and test formats to QTI 2.1 to those developing assessment tools and authoring questions;
  • Providing a publicly available reference implementation of the QTI main profile that will enable question and test item authors to test whether their material is valid, and how it renders and responds.
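
To give a flavour of the material involved – and purely as an illustrative sketch rather than anything produced by the project – a QTI 2.1 item is an XML document that declares a response variable, presents an interaction, and states how responses are scored.  The fragment below uses Python’s standard library to assemble a minimal single-choice item; all identifiers and question text are invented, and the official schema (or the project’s reference implementation, once available) remains the authority on what is actually valid.

```python
# A minimal sketch of a QTI 2.1 single-choice item, built with the Python
# standard library.  Identifiers, prompt and choices are invented for
# illustration; real items should be validated against the official QTI 2.1
# schema, not this fragment.
import xml.etree.ElementTree as ET

NS = "http://www.imsglobal.org/xsd/imsqti_v2p1"
ET.register_namespace("", NS)

def q(tag):
    """Qualify a tag name with the QTI 2.1 namespace."""
    return f"{{{NS}}}{tag}"

item = ET.Element(q("assessmentItem"), identifier="example001",
                  title="Capital of France", adaptive="false",
                  timeDependent="false")

# Declare the response variable and its correct value.
response = ET.SubElement(item, q("responseDeclaration"), identifier="RESPONSE",
                         cardinality="single", baseType="identifier")
correct = ET.SubElement(response, q("correctResponse"))
ET.SubElement(correct, q("value")).text = "C1"

# Declare an outcome variable to hold the score.
ET.SubElement(item, q("outcomeDeclaration"), identifier="SCORE",
              cardinality="single", baseType="float")

# The item body: one single-choice interaction with three options.
body = ET.SubElement(item, q("itemBody"))
interaction = ET.SubElement(body, q("choiceInteraction"),
                            responseIdentifier="RESPONSE", shuffle="true",
                            maxChoices="1")
ET.SubElement(interaction, q("prompt")).text = "Which city is the capital of France?"
for identifier, text in [("C1", "Paris"), ("C2", "Lyon"), ("C3", "Marseille")]:
    ET.SubElement(interaction, q("simpleChoice"), identifier=identifier).text = text

# Score the response using the standard 'match_correct' processing template.
ET.SubElement(item, q("responseProcessing"),
              template="http://www.imsglobal.org/question/qti_v2p1/rptemplates/match_correct")

print(ET.tostring(item, encoding="unicode"))
```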

Follow the project blog for future developments!

CAA Conference 2011 call for papers out now

The 2011 International Computer Assisted Assessment (CAA) Conference: Research into eAssessment will be held on 5 and 6 July 2011 at the De Vere Grand Harbour Hotel, Southampton.  Jointly hosted by the School of Electronics and Computer Science at the University of Southampton and the Institute of Educational Technology at the Open University, this is a two-day research-led conference which aims to advance the understanding and application of information technology to the assessment process through rigorous peer-reviewed research.

The 2011 Call for Papers is now available, together with detailed guidance on conference themes and the proposal submission process.  The deadline for receipt of submissions through the conference’s EasyChair web interface is Friday 15 April 2011.

Papers from the 2010 conference and earlier are also available.

Draft briefing paper on IMS Question and Test Interoperability v2.1 now available

With IMS Question and Test Interoperability v2.1 almost ready for final release, this draft briefing paper provides an introduction to the specification based on the most recent public draft available.  It covers the structure and purpose of the specification, its history and background, the reasons for its adoption, and some common concerns and criticisms.  A final version will be released with the final version of the specification; in the meantime, we hope this paper will provide a useful guide to this significantly improved specification.  It is likely to be of particular interest to IT managers, learning technologists and developers interested in online and electronic assessment and new to QTI.

Any comments, corrections or requests for additional content are very welcome, either by commenting here on this blog or by email.

Mobile assessment: MCQs on Android

Mobile learning and mobile assessment are recurring topics of interest, and with the huge popularity of smartphones capable of running highly sophisticated applications, they’re increasingly viable.  Both Google’s App Inventor and Apple’s iOS Dev Center allow non-experts to create applications for these platforms, enabling the delivery of highly focused activities that can nevertheless be easily shared and adapted for different circumstances.

One example of this can be seen in Liam Green-Hughes’s RefSignals Android app, which assesses users’ knowledge of the meaning of various signals and gestures used by ice hockey referees to indicate penalties.  It’s a fully formed MCQ test: introductory rubric, a series of questions with feedback on incorrect answers and score keeping, and a final score display.  And, as he says, all produced without writing a line of code.
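
Liam built this in App Inventor’s visual blocks rather than in conventional code, but for readers curious what the underlying flow amounts to, here is a minimal, hypothetical Python sketch of the same shape of test – rubric, questions with feedback on wrong answers, a running score, and a final score display.  The question content below is invented placeholder text, not taken from the app or from the ice hockey rulebook.

```python
# A minimal, hypothetical sketch of the MCQ flow described above: rubric,
# questions with feedback on wrong answers, a running score and a final
# score display.  The questions are invented placeholders, not real
# ice hockey referee signals.
QUESTIONS = [
    {
        "prompt": "Placeholder: which penalty does this signal indicate?",
        "options": ["Tripping", "Hooking", "Slashing"],
        "answer": 0,
        "feedback": "In this placeholder question, the signal means tripping.",
    },
    # ...further questions follow the same structure...
]

def run_quiz(questions):
    # Introductory rubric.
    print("Answer each question by typing the number of your choice.\n")
    score = 0
    for question in questions:
        print(question["prompt"])
        for number, option in enumerate(question["options"], start=1):
            print(f"  {number}. {option}")
        choice = int(input("Your answer: ")) - 1   # no input validation, for brevity
        if choice == question["answer"]:
            score += 1
            print("Correct!\n")
        else:
            print(f"Not quite.  {question['feedback']}\n")
    # Final score display.
    print(f"Final score: {score} out of {len(questions)}")

if __name__ == "__main__":
    run_quiz(QUESTIONS)
```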

The source code for the app is available from the link above, allowing the test system to be adapted to any subject or purpose, although given the philosophy of simplicity Google designed into App Inventor, teachers – and learners – can easily create a similar tool themselves should they wish.  And because it’s Android, there are no app store oddities to present a barrier to the sharing and exchange of such simple but potentially invaluable developments :)

Many thanks to my (iPhone using) colleague John Robertson for the tip.

Under development: Peer Evaluation in Education Review (PEER)

The Peer Evaluation in Education Review (PEER) project based here at the University of Strathclyde is one of five projects funded in round 5 of the JISC Learning and Teaching Innovation Grants programme.  Running from 1 June 2010 to 30 June 2011, the project explores a range of issues around the implementation and delivery of peer assessment within higher education.

PEER is led by David Nicol and Catherine Milligan, building on the highly influential Re-engineering Assessment Practices in Higher Education (REAP) project.  The interplay between the two projects is clear from the extensive information available through the umbrella site that links the two, offering a wealth of information and resources around assessment, peer assessment and feedback.  The website is constantly under development, so is well worth revisiting regularly.

The project’s first phase involves the development of a framework for peer review and a detailed evaluation of existing peer review software.  A range of tools was evaluated in relation to a list of desirable features, and outcomes from this exercise are being added to the website for future reference.  The second phase involves a series of small scale pilots in a range of subject areas and institutions: the project team are also very interested in hearing from others piloting peer review software for potential inclusion within this research activity.  The final phase will see the development of a number of resources including guidelines on implementing peer review within a course of study and a literature review.
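
One small but essential mechanism inside any peer review tool is the allocation of reviewers to submissions.  Purely by way of illustration – this is not the PEER project’s framework, and the function and cohort names below are invented – a simple allocation in which every student reviews a fixed number of peers and never themselves might be sketched as follows:

```python
# Illustrative only: a simple round-robin peer-review allocation in which
# every student reviews k of their peers and nobody reviews themselves.
# This is not the PEER project's framework, just a toy example.
import random

def allocate_reviews(students, k):
    """Return a dict mapping each student to the k peers they will review."""
    if k >= len(students):
        raise ValueError("k must be smaller than the number of students")
    order = students[:]
    random.shuffle(order)          # randomise so allocation isn't alphabetical
    n = len(order)
    allocation = {}
    for i, student in enumerate(order):
        # Review the next k students around the shuffled circle (wrapping),
        # which guarantees no self-review and exactly k reviews per person.
        allocation[student] = [order[(i + offset) % n] for offset in range(1, k + 1)]
    return allocation

if __name__ == "__main__":
    cohort = ["Alice", "Bob", "Chioma", "Dai", "Elena"]
    for reviewer, reviewees in allocate_reviews(cohort, k=2).items():
        print(f"{reviewer} reviews {', '.join(reviewees)}")
```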

Unlike some LTIG projects, PEER limits technical development to the work needed to integrate the systems chosen for the pilot phase with the institutional LMS, Moodle.  Both the PeerMark functionality within Turnitin and Aropa, developed by the University of Auckland, New Zealand, will be tested during the pilots.

Testing is the best teacher

In an educational world where there is a constant drive to find new and entertaining ways to support learning, it’s intriguing to see a major research project publishing its findings that, actually, traditional tests of recalled knowledge are significantly more beneficial for learning than either ‘cramming’ or the increasingly popular concept mapping.

The research, published in this month’s issue of Science and summarised in an excellent recent New York Times article, highlights the clear improvement in performance that learning supported by assessment provides over these other approaches.  It’s also noteworthy that cramming, although markedly less effective than testing, was found to be more effective than concept mapping.

The article also highlights what is for me the most fascinating part of this research: an inverse relationship between learners’ confidence in their knowledge and the quality and quantity of what they actually recalled.  Students who repeatedly reviewed a text or who drew concept maps while consulting the text had higher levels of confidence in their knowledge of that text than those students who read the text once and then sat a test based on their recall of it.  However, in subsequent testing a week later the students who had learned the text through testing consistently performed significantly better than those who had learned by concept mapping or cramming.

One commentator suggests that this is because ‘the struggle [to recall] helps you learn, but it makes you feel like you’re not learning.’  This really does seem to get at the core tension between making learning ‘fun’ and feel productive, and maximising what students actually learn.  Concept mapping can allow learners to illustrate in great detail what they (think they) know, but it provides no challenge to that knowledge, no way of pointing out the gaps in it.  It’s a world of unknown unknowns, with the misplaced confidence it inspires demonstrating the Dunning-Kruger effect in unfortunate action.

Whatever the underlying reasons behind the findings of this research, it’s good to see the debate being opened up again and the encouragement for a re-evaluation of received wisdom about the best ways to learn.

WebPA and Moodle integration

WebPA continues to benefit from its lively and creative community, the latest development being a very elegant Moodle-WebPA plug-in developed by John Tutchings at Coventry University.

John has produced two videos to demonstrate the plug-in in action, the first illustrating the single sign-on between the two systems, which allows the WebPA course to be populated with students drawn from Moodle.  The second demonstrates the migration of existing Moodle groups to WebPA, again utilising the single sign-on across both systems.

This plug-in is still at the beta stage, but anyone interested in helping test it is welcome to contact John, who can be reached via his website.

Assessment of games based learning

It was well worth the early start today to attend a fascinating webinar presented by Nicola Whitton of MMU on ‘assessment of game based learning’.  Part of the successful series of webinars hosted by the Transforming Assessment project funded by the Australian Learning and Teaching Council and based at the University of Adelaide, this was the second of two events focusing particularly on games in education.

While the previous seminar looked less at assessment and more generally at games and pseudo-games such as Second Life, Nicola’s talk drew a sharp distinction between play, play worlds and simulations.  Games don’t need to have awesome graphics or vast budgets to succeed: great learning designs may be gamelike without the author ever consciously intending to design a game.  Games might ‘mashup the real world and the game world’ in imaginative and creative ways, but

lecture theatres aren’t particularly effective in real life, so reproducing them in a world where you can fly just seems really strange

I feel as though I’m SL-bashing again, but I found it really refreshing to have someone state so clearly that no, just because something’s virtual doesn’t make it a game; it’s the nature of the interaction between learner and content that does.  It doesn’t make it a more or less legitimate learning tool either, but the distinction is important as they both have valuable but different things to offer, and represent very different learning models.

Nicola distinguished between the use of games as an assessment tool and the assessment of games based learning, an important distinction that often seems to be overlooked.

Assessment within games offers some valuable elements: it can be automated, repeatable, potentially integrated into the learning process, and impartial.  External assessment, defined here as any non-game assessment activity, by contrast allows greater creativity and more tutor control, but is also more time-intensive and can be unconsciously partial.  Higher levels of learning such as analysis and critical thinking are far more difficult to assess by any automated method, including games, as these methods attempt to use quantitative measures to assess qualitative outcomes.

Games often have a binary, win or lose outcome that doesn’t accurately reflect the subtleties of degrees of competence or ability, and which can be counterproductive to learning through play when used as assessment.  By using external assessment processes and disassociating game performance from course grade, games can provide a safe learning environment in which failure in the immediate game context can actually be invaluable for further learning and growth.

As with any other form of assessment, including pen and paper tests, expertise in the assessment format – in this case, gaming literacy – can significantly alter the outcome of the assessment.  As always, assessment must genuinely assess the intended learning outcomes and not, for example, the ability to navigate effortlessly through the game world (a major issue even for experienced gamers when it comes to Second Life) or familiarity with general gaming conventions.  This suggests that assessing game based learning within the game environment would be a preferred approach, but while teachers may find it relatively easy to integrate innovative approaches within their teaching practice, applying this to assessment, particularly higher-stakes assessment, can provoke hostility from higher authorities.  Nicola did, however, reference the SQA’s GamesSpace initiative, presented at a CETIS special event earlier this year, as an example of a national assessment authority embracing such technologies for a major qualification strand.  GamesSpace is particularly worth noting as it allows the assessment of process and not simply product and incorporates human rather than automated marking: the candidate uses an avatar to progress through a series of role-related tasks, priorities and activities which are recorded in a format identical to the pen and paper alternative for manual human marking.

Learners too may demonstrate some hostility towards games as teaching aids – but this resistance is something that has been observed in relation to other innovative approaches too.  Anything that appears to trivialise learning or that can be interpreted as trying to make learners ‘not feel they’re learning’ can provoke scepticism and resistance in learners.  It can be hard to get away from the privileging of traditional models of teaching and learning, from the scholars seated at the feet of the master, from the three essays in three hours make-or-break finals paper.  When learners can see the value in using such approaches, they are generally very willing to engage with them – learners are in general pragmatic, strategic and outcomes-orientated, whether their teachers like it or not.  Interestingly, Nicola’s research has demonstrated that a ‘propensity to play games for fun is in no way related to an inclination to play games for learning.’  She also cast some healthy scepticism on the oft-quoted finding that women play puzzles while men play shooters, pointing out that these findings come from surveys completed by self-selecting groups and can’t be taken as gospel; as a woman who’d far rather shoot pigs than click cows I find it good to have my preferences acknowledged :)

The two sessions offered very different views of a sometimes controversial field, and regardless of personal opinion these varied perspectives were invaluable.  This excellent series of seminars will be continuing for the rest of this year and into 2011 and is well worth engaging with, as is the rest of the project’s extensive and highly informative site.

WebPA Resource Pack now available

A Resource Pack designed to support those considering adopting the award-winning WebPA peer assessment system has been developed by the project team and is now available online for free download.  Various sections of the guide address different users – management, academic staff, learning technologists and IT support – and a range of resources is included.  This is an excellent example of the benefits of the strong and active community of practice built up around this project, and will help to inform others who are considering adopting this system.