Making Assessment Count Evaluation project

The Making Assessment Count (MAC) project ran from November 2008 to October 2010, funded by JISC as part of their Transforming Curriculum Delivery through Technology programme and led by the University of Westminster.  Focused on engaging students with the assessment feedback provided to them, it explored processes for encouraging and guiding student reflection on feedback and for developing a dialogue between learners and teachers around it.  Participants at a joint Making Assessment Count/JISC CETIS event on Assessment and Feedback back in February 2011 heard not only from project staff but also from students who were actively using the system, and whose enthusiasm for it and recognition of its impact on their development as learners were genuinely inspiring.

Although the project team developed the eReflect tool to support the Making Assessment Count process, the primary driver of the project was the conceptual model underlying the tool; developing eReflect was a pragmatic decision prompted by the lack of suitable alternative technologies.  The team also contributed a service usage model (SUM) describing their feedback process to the eFramework.

The Making Assessment Count Evaluation (MACE) project will see the Westminster team overseeing the implementation of the MAC model in a number of partner institutions, with the eReflect tool in use at the Universities of Bedfordshire and Greenwich and at Cardiff Metropolitan University (formerly the University of Wales Institute Cardiff).  By contrast, City University London and the University of Reading will be focusing on implementing the MAC model within Moodle (City) and Blackboard (Reading), and exploring how the components already provided within those VLEs can be used to support this very different process and workflow.

It is perhaps the experiences of City and Reading that will be of most interest.  It’s becoming increasingly evident that there are very strong drivers for the use of existing technologies, particularly extensive systems such as VLEs, for new activities: there is already clear institutional buy-in and commitment to these technologies, institutional processes will (or at least, should!) have been adapted to reflect their integration in teaching practice, and embedding innovative practice within these technologies increases its sustainability beyond a limited funding or honeymoon period.  The challenges around getting MAC into Moodle and Blackboard are those that led to the need for the eReflect tool in the first place: traditional assessment and survey technology simply isn’t designed to accommodate the dialogue and student engagement with feedback that the MAC process drives.  Of course, Moodle and Blackboard, representing community-driven open source software and proprietary commercial systems respectively, present very different factors in integrating new processes, and the project teams’ experiences and findings should be of great interest to future developers more generally.

Deployment of the MAC process in such a wide range of institutions, subject areas and technologies will provide a very thorough test of the model’s suitability and relevance for learners, as well as providing a range of implementer experiences and guidance.  I’m looking forward to following the progress of what should be a fascinating implementation and evaluation project, and hope to see learners in other institutions engage as enthusiastically and with such good outcomes as the participants in the original MAC project.

Under development: Sharing Higher Education Data

Meaningful work placements and graduate employability have always been an important part of university education and preparation for a professional future in certain disciplines, and are arguably even more so today in a climate of limited employment opportunities, with high university fees and loans positioning students as customers investing in their future careers.  Certain subject areas enjoy good relationships with industry, providing industrial placements to give students real-world experience in their future fields, while local companies benefit from the expertise and cutting edge knowledge these students can bring to the workplace.  Universities and colleges similarly benefit from this ongoing engagement with industry, ensuring their courses remain relevant and meaningful.

Shrinking university staff numbers have increased workloads, limiting the time staff have to spend assisting individual students in seeking suitable placements and opportunities for work-based learning.  In any case, reliance on university staff is not necessarily the best way for students to prepare themselves for seeking suitable, fulfilling employment on graduation, or to establish fruitful relationships with potential employers.

The Sharing Higher Education Data (SHED) project attempts to address these issues through the delivery of a ‘matchmaking’ service for students and employers, which will both facilitate communication between them and enable students to plan their learning paths in the light of the expectations and requirements of their chosen profession.  Sample case studies included in the student and employer information sheets about the service help illustrate the range of ways in which SHED can benefit both user groups while increasing interaction between academia and industry.

SHED uses the popular Mahara open source eportfolio tool to allow students to develop their profiles and, vitally, gives them strict control over what information is made visible to potential recruiters.  Students can also view common employer search terms within their particular field in order to better understand the employment market in that area and to support the review and revision of their profiles to enhance their employability.  The integration of the XCRI (eXchanging Course Related Information) information model and specification provides a common framework for describing and sharing course information, while Leap2A and InterOperability provide support for the sharing of eportfolio and competence information.
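As a purely illustrative aside, the matchmaking idea can be pictured as nothing more exotic than checking a student-controlled profile against an employer’s search terms.  The sketch below is a minimal Python illustration under my own assumptions: the field names, the public flag and the scoring are invented for the example, and are far simpler than SHED’s actual implementation or the full XCRI and Leap2A specifications.

```python
# Purely illustrative sketch: the field names and scoring below are my own
# assumptions, far simpler than SHED's actual implementation or the full
# XCRI and Leap2A specifications.
from dataclasses import dataclass


@dataclass
class StudentProfile:
    name: str
    course_title: str        # the kind of detail XCRI-style course data describes
    skills: set[str]         # the kind of detail a Leap2A-style export might carry
    public: bool = False     # the student decides whether recruiters can see this


@dataclass
class EmployerSearch:
    company: str
    required_skills: set[str]


def match_score(profile: StudentProfile, search: EmployerSearch) -> float:
    """Score a student against an employer search; 0.0 if the profile is private."""
    if not profile.public or not search.required_skills:
        return 0.0
    overlap = profile.skills & search.required_skills
    return len(overlap) / len(search.required_skills)


student = StudentProfile(
    name="A. Student",
    course_title="BSc Computer Science",
    skills={"python", "databases", "teamwork"},
    public=True,
)
search = EmployerSearch(company="Local Firm Ltd",
                        required_skills={"python", "teamwork", "agile"})
print(f"{match_score(student, search):.2f}")  # 0.67
```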

As a partnership between the Centre for International ePortfolio Development at the University of Nottingham and Derby College, SHED will also be able to demonstrate how the system can be used across a number of different institutions without compromising privacy while maximising opportunities for placement and project work and professional development.  Although small-scale and local to begin with, it is intended that the system be scalable to include many institutions, subject areas and locations, and provide both a valuable service for students and employers and insight into regional and national trends in industry and development.

Badges, identity and the $2 million prize fund

You’ll almost certainly have noticed some of the excitement that’s suddenly erupted around the use of badges in education.  Perhaps you’ve heard that it’s the latest in a long line of ‘game changers for education’, or maybe you’re even hoping for a slice of that $2 million prize fund the HASTAC Initiative, Mozilla and the MacArthur Foundation are offering for work around their adoption and development through the Digital Media and Learning Badges for Lifelong Learning competition.  Supported by a number of significant entities, including Intel, Microsoft and various US Government departments, the competition offers up to $200k each for a number of projects around content and infrastructure for badges for lifelong learning, as well as an $80k award for a research project in ‘Badges, trophies and achievements: recognition and accreditation for informal and interest-driven learning’ together with two smaller doctoral student grants, and student and faculty prizes.  That’s a decent amount of cash available for – what?

This is all based around Mozilla’s Open Badges Initiative, which attempts to provide an innovative infrastructure to support the recognition of non-traditional learning and achievement for professional development and progress.  Drawing upon the widespread use of badges and achievements in gaming and the current trend for gamification, the project is described in gamified language, claiming that badges can help adopters ‘level up’ in their careers via their acquisition and display (sharing).  There’s a fair point being made here: gamers can develop a profile and express their individual identity as gamers through the display of achievements they earn as they play, which can then be shared in-game through the use of special titles or on appropriate fora through signatures and site profiles.  Achievements reflect the different interests a player has (their weighting on the Bartle scale, for example) as well as their skill.  Within a fairly closed community such as a single game, a suite of games or a website, these achievements have significant value as the viewers are other gamers for whom the achievements have meaning and value.

LarsH’s response on Stack Overflow to the question ‘why are badges motivating?’, asked over a year ago but still very relevant, sums this up eloquently:

We like other people to admire us.  As geeks we like others to admire us for our skills.  Badges/achievements stay visible in association with our online identity long-term, unlike individual questions and answers which quickly fade into obscurity.

If I play a game and get a great score, it’s nice, but it means little to others unless they have the context of what typical scores are for that game (and difficulty level etc).  Whereas an achievment is a little more compact of a summary of what you’ve accomplished.

Badges also give us a checklist whereby we can see how far we’ve come since we joined the web site – and how far we have to go in order to be average, or to be exceptional.

LarsH’s comments were in the context of participation in an online community which awards badges for numbers of ‘helpful’ answers to questions and other contributions, but the underlying theme is the same for all contexts: the notion of building a persistent persona associated with achievements and success that endures beyond a single assessed instance (one play through a game, one helpful answer) with which it is specifically associated.  It creates a sense of status and implies competence and trustworthiness, which in turn can inspire others to emulate that behaviour in the hope of gaining similar recognition, or indicate that this is a trusted individual to ask for advice or guidance.  Badges not only provide recognition of past contributions but also imply that future contributions can be trusted, and offer an incentive to participate usefully.

Being able to capture and reflect this sometimes quite fine-grained information in other contexts would indeed have some advantages.  But as soon as these awards and achievements are looked at by someone outside their immediate context, they immediately lose a large part of their value, not because they’re worthless outside their original context but because the viewer lacks the expertise in the field to be able to trust that the badge reflects what it claims or to understand the implications of what it claims.  The value of the badge, therefore, isn’t inherent in the badge itself but in the assertions around it: that it was issued by a trustworthy party on reliable evidence to the specific individual who claims it.  A lot like, say, a traditional certificate for completing an accredited course, perhaps…
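To make that concrete, a minimal sketch of what those assertions might look like as data is given below.  The field names are illustrative assumptions of my own rather than the Open Badges specification itself, but they show the three things the value of a badge actually rests on: who issued it, to whom, and on what evidence.

```python
# Illustrative only: these field names are assumptions, not the Open Badges spec.
badge_assertion = {
    "badge": {
        "name": "Ace Teacher",                        # what is being claimed
        "criteria": "https://example.org/criteria",   # hypothetical URL describing the claim
        "issuer": "https://example.org/issuer",       # who stands behind the claim
    },
    "recipient": "learner@example.org",               # who the claim is about
    "evidence": "https://example.org/evidence/123",   # why the claim should be believed
    "issued_on": "2011-11-01",
}
```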

As Alex Reid (no, not that one) says, ‘passing a high stakes test to get a badge is no different than the system we already have’, and a lot of the problems around developing a trustworthy system are those that have already been faced by traditional awarders.  Comparisons to diploma mills swiftly emerged in the aftermath of the competition announcement, and it’s not difficult to see why: if anyone can issue a badge, how do we know that a badge reflects anything of merit?  Cathy Davidson’s vision of a world where employers hand out badges for ‘Great Collaborator!’ or ‘Ace Teacher!’ is nice (if far too cutesy for my tastes), but it’s not exactly hard to see how easily it could be abused.

At the heart of the badges initiative is the far older issue of identity management.  As our badge ‘backpack’ is intended to gather badges awarded in a range of different contexts, how are we to be sure that they all belong to the one person?  As the example above of Alex Reid, American academic, versus Alex Reid, cage fighter, cross-dresser, Celebrity Big Brother contestant and ex-husband of Jordan, demonstrates, names are useless for this, particularly when the same person can be known by a number of different names, all equally meaningful to them in the different contexts the backpack is intended to unify.  Email addresses have often been suggested as a way of identifying individuals, yet how many of us use a single address from birth to death?  In the US, social security numbers are far too sensitive to be used, while UK National Insurance numbers aren’t unique.  Similarly, how is a recruiter to know that a badge has been issued by a ‘respectable’ provider on the basis of actual performance rather than simply bought from a badge mill?  Unique identification of individuals and awarders, and accreditation of the accreditors themselves, whether through a central registry or a decentralised web of trust, is at the heart of making this work, and that’s not a small problem to solve.  With the momentum behind the OER movement growing and individuals having more reason and opportunity to undertake free ad hoc informal learning, being able to recognise and credit this is important.  As David Wiley notes, however, there’s a difference between a badge awarded simply for moving through a learning resource, and one awarded as an outcome of validated, quality-assured assessment specifically designed to measure learning and achievement, and this needs to be fully engaged with for open or alternative credentialing to fulfil its potential.
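On the identity side of this, one loosely sketched approach is to bind an assertion to a salted hash of an email address rather than to a name, so that a claim can be checked against an address the claimant controls without the assertion carrying a human-readable identity at all.  The function and field names below are my own assumptions for illustration, not a description of how the Open Badges backpack actually works, and the sketch deliberately exposes the limitation raised above: it ties the badge to an address, not to a person.

```python
# Loose illustration of binding a badge to an email address via a salted hash;
# the names and structure here are assumptions, not an actual backpack implementation.
import hashlib


def hash_identity(email: str, salt: str) -> str:
    """Hash an email address with a salt so the assertion need not carry it in clear."""
    return hashlib.sha256((email.lower() + salt).encode("utf-8")).hexdigest()


def claim_matches(assertion: dict, claimed_email: str) -> bool:
    """Does the person claiming this badge control the hashed address it was issued to?"""
    return assertion["recipient_hash"] == hash_identity(claimed_email, assertion["salt"])


assertion = {
    "recipient_hash": hash_identity("learner@example.org", "s3cret-salt"),
    "salt": "s3cret-salt",
}
print(claim_matches(assertion, "learner@example.org"))       # True
print(claim_matches(assertion, "someone.else@example.org"))  # False
```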

There’s also a danger that badges and achievements can be used to legitimise bad or inadequate content by turning it into a Skinner box, where candidates will repeatedly undertake a set task in the expectation of eventually getting a reward, rather than because the task itself is engaging or they’re learning from it.  Borrowing from games can be good, but gamers can be very easily coaxed into undertaking the most mindless, tedious activities long after their initial value has been exhausted if the eventual reward is perceived as worth it.

Unlike, say, augmented reality or other supposed game changers, it’s not the underlying technology itself that has the potential to be transformative – after all, it basically boils down to a set of identity assertion and management problems that the IDM people have been wrestling with for a long time, plus image exchange and suitable metadata – but rather the cultural transformation it expresses, with the recognition that informal or hobbyist learning and expertise can be a part of our professional skillset.  Are badges the right way of doing this?  Perhaps; but what’s much more important is that the discussion is being had.  And that has to be a good thing.

ePortfolios, Y|N?

I retweeted a link to this post yesterday, and promptly found myself in the middle of a storm of debate about the validity and legitimacy of the points it raises.  As it’s not exactly a topic that lends itself to discussion in 140-character chunks, I thought I’d bring it here to see if people want to continue what turned out to be a pretty impassioned and heated discussion.

For my part, I think there are some good points made here.  While I think there’s a definite role for eportfolio technology in certain contexts, I’m not sold on the whole lifelong portfolios for lifelong learners rhetoric, and I don’t think it necessarily meets the needs or desires of learners or teachers.

My biggest issue is that there is a lack of distinction between a portfolio of work that is ultimately intended as an assessment resource to be externally viewed and evaluated, and a student’s body of work which he is supposed to reflect on and learn from.  The intrusion of workplace CPD into this space simply exacerbates this lack of focus and conflicting motivations.  While it may be possible for a single system to fully meet the technical requirements of these very different competing interests, I don’t think that’s necessarily the appropriate approach.  Learning is all about having the freedom and safety to fail, and about taking ownership of our successes and failures in order to grow as learners and as experts in the subject we’re studying.  Having authority over our own work is a fundamental part of that, and something that has to be handed over when that work is used for formal evaluation.

I don’t think we need specialised software in order to retain a record of our learning and progress.  A personal blog can be a powerful tool for reflection, a pen drive of files can be more portable and accessible than a dedicated tool, and your YouTube or Vimeo or Flickr channel is more than adequate for preserving your creations.  All of these have permanence beyond the duration of a course: although some institutions will allow continued access to institutional portfolio systems after a student has finished his course of study, it’s not a given and is always subject to change.  Using existing services ironically offers far more opportunity for true lifelong learning than a dedicated system.  And such distributed systems mirror the ways in which people reflect on and share their work outside the walls of the university.  I still have my ‘portfolio’ of my undergraduate work: the printed-out essays I handed in, with my lecturers’ comments written on them.  That was exactly what I needed as a learner, and it’s exactly what I need now should I ever wish to reflect on that period.

For material to be used for assessment, yes, there is a need for secure and reliable storage systems and appropriate standards such as Leap2A and BS8518 to support the exchange of evidence, but the systems and processes should be appropriate to the subject and the material to be assessed rather than assessment being tailored to suit the available systems.

Many thanks to @drdjwalker, @dkernohan, @mweller, @markpower, @jamesclay, @ostephens, @jontrinder and @asimong for joining the discussion on Twitter.

Public draft consultation on standard for transfer of assessment data

You may remember a proposed standard for the transfer of qualification assessment data and evidence that was previously covered on this blog.

Work on this has been ongoing since then, and a draft standard is now available for public consultation and comment.  The public draft can be accessed via the BSI website, and comments may be submitted by following the instructions there.

All comments must be submitted by 30 November to be considered for the final version of the standard.  Depending on the nature and extent of comments received, the standard is likely to be released in the first quarter of 2010.