Making Assessment Count Evaluation project

The Making Assessment Count (MAC) project ran from November 2008 to October 2010, funded by JISC as part of their Transforming Curriculum Delivery through Technology programme and led by the University of Westminster.  Focused on engaging students with the assessment feedback provided to them, it explored processes for encouraging and guiding student reflection on feedback and for developing a dialogue between learners and teachers around it.  Participants at a joint Making Assessment Count/JISC CETIS event on Assessment and Feedback in February 2011 heard not only from project staff but also from students who were actively using the system, and whose enthusiasm for it and recognition of its impact on their development as learners were genuinely inspiring.

Although the project team developed the eReflect tool to support the Making Assessment Count process, the primary driver of the project was the conceptual model underlying the tool; the development of eReflect was a pragmatic decision based on the lack of suitable alternative technologies.  The team also contributed a service usage model (SUM) describing their feedback process to the eFramework.

The Making Assessment Count Evaluation (MACE) project will see the Westminster team overseeing the implementation of the MAC model in a number of partner institutions, with the eReflect tool in use at the Universities of Bedfordshire and Greenwich and at Cardiff Metropolitan University (formerly the University of Wales Institute Cardiff).  By contrast, City University London and the University of Reading will focus on implementing the MAC model within Moodle (City) and Blackboard (Reading), exploring how the components already provided within those VLEs can be used to support this very different process and workflow.

It is perhaps the experiences of City and Reading that will be of most interest.  It’s becoming increasingly evident that there are very strong drivers for the use of existing technologies, particularly extensive systems such as VLEs, for new activities: there is already clear institutional buy-in and commitment to these technologies, institutional processes will (or at least, should!) have been adapted to reflect their integration in teaching practice, and embedding innovative practice within these technologies increases its sustainability beyond a limited funding or honeymoon period.  The challenges around getting MAC into Moodle and Blackboard are those that led to the need for the eReflect tool in the first place: traditional assessment and survey technology simply isn’t designed to accommodate the dialogue and engagement with feedback that the MAC process drives.  Of course, Moodle and Blackboard, representing community-driven open source software and proprietary commercial systems respectively, present very different factors in integrating new processes, and the project teams’ experiences and findings should be of great interest to future developers more generally.

Deployment of the MAC process in such a wide range of institutions, subject areas and technologies will provide a very thorough test of the model’s suitability and relevance for learners, as well as providing a range of implementer experiences and guidance.  I’m looking forward to following the progress of what should be a fascinating implementation and evaluation project, and hope to see learners in other institutions engage as enthusiastically and with such good outcomes as the participants in the original MAC project.

Under development: Sharing Higher Education Data

Meaningful work placements and graduate employability have always been an important part of university education and preparation for a professional future in certain disciplines, and are arguably even more so today in a climate of limited employment opportunities, with high university fees and loans positioning students as customers investing in their future careers.  Certain subject areas enjoy good relationships with industry, providing industrial placements to give students real-world experience in their future fields, while local companies benefit from the expertise and cutting edge knowledge these students can bring to the workplace.  Universities and colleges similarly benefit from this ongoing engagement with industry, ensuring their courses remain relevant and meaningful.

Shrinking university staff numbers have increased workloads, limiting the time staff have to spend assisting individual students in seeking suitable placements and opportunities for work-based learning.  In any case, reliance on university staff is not necessarily the best way in which students can prepare themselves for seeking suitable, fulfilling employment on graduation, or establish fruitful relationships with potential employers.

The Sharing Higher Education Data (SHED) project attempts to address these issues through the delivery of a ‘matchmaking’ service for students and employers, which will both facilitate communication between them and enable students to plan their learning paths in the light of the expectations and requirements of their chosen profession.  Sample case studies included in the student and employer information sheets about the service help illustrate the range of ways in which SHED can benefit both user groups while increasing interaction between academia and industry.

SHED uses the popular Mahara open source eportfolio tool to allow students to develop their profiles and, vitally, gives them strict control over what information is made publicly viewable by potential recruiters.  Students can also view common employer search terms within their particular field in order to better understand the employment market in that area and to support the review and revision of their profiles to enhance their employability.  The integration of the XCRI (eXchanging Course Related Information) information model and specification provides a common framework for describing and sharing course information, while Leap2A supports the interoperable sharing of eportfolio and competence information.
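To give a flavour of the kind of structured course information XCRI is designed to carry, here’s a minimal sketch in Python.  The namespace URI and element names below are illustrative placeholders, not a faithful rendering of the XCRI-CAP schema:

```python
# A minimal, illustrative XCRI-style course record.  The namespace URI and
# element names are placeholders, not the actual XCRI-CAP vocabulary.
import xml.etree.ElementTree as ET

XCRI_NS = "http://example.org/xcri"  # placeholder namespace

def course_record(provider: str, title: str, description: str) -> str:
    catalog = ET.Element("catalog", {"xmlns": XCRI_NS})
    prov = ET.SubElement(catalog, "provider")
    ET.SubElement(prov, "title").text = provider
    course = ET.SubElement(prov, "course")
    ET.SubElement(course, "title").text = title
    ET.SubElement(course, "description").text = description
    return ET.tostring(catalog, encoding="unicode")

print(course_record("Derby College", "Engineering with placement",
                    "Work-based engineering programme with industrial placement."))
```

The point of a common model like this is that any partner institution or employer-facing service can consume the same record without bespoke translation.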

As a partnership between the Centre for International ePortfolio Development at the University of Nottingham and Derby College, SHED will also be able to demonstrate how the system can be used across a number of different institutions without compromising privacy while maximising opportunities for placement and project work and professional development.  Although small-scale and local to begin with, it is intended that the system be scalable to include many institutions, subject areas and locations, and provide both a valuable service for students and employers and insight into regional and national trends in industry and development.

Badges, identity and the $2 million prize fund

You’ll almost certainly have noticed some of the excitement that’s suddenly erupted around the use of badges in education.  Perhaps you’ve heard that it’s the latest in a long line of ‘game changers for education’; maybe you’re even hoping for a slice of the $2 million prize fund the HASTAC Initiative, Mozilla and the MacArthur Foundation are offering for work around their adoption and development through the Digital Media and Learning Badges for Lifelong Learning competition.  Supported by a number of significant entities, including Intel, Microsoft and various US Government departments, the competition offers up to $200k each for a number of projects around content and infrastructure for badges for lifelong learning, as well as an $80k award for a research project in ‘Badges, trophies and achievements: recognition and accreditation for informal and interest-driven learning’, together with two smaller doctoral student grants and student and faculty prizes.  That’s a decent amount of cash available for – what?

This is all based around Mozilla’s Open Badges Initiative, which attempts to provide an innovative infrastructure to support the recognition of non-traditional learning and achievement for professional development and progress.  Drawing upon the widespread use of badges and achievements in gaming and the current trend for gamification, the project is described in gamified language, claiming that badges can help adopters ‘level up’ in their careers via the acquisition and display (sharing) of badges.  There’s a fair point being made here: gamers can develop a profile and express their individual identity as gamers through the display of achievements they earn as they play, which can then be shared in-game through the use of special titles, or on appropriate fora through signatures and site profiles.  Achievements reflect the different interests a player has (their weighting on the Bartle scale, for example) as well as their skill.  Within a fairly closed community such as a single game, a suite of games or a website, these achievements have significant value, as the viewers are other gamers for whom the achievements have meaning and value.

LarsH’s response on Stack Overflow to the question ‘why are badges motivating?’, asked over a year ago but still very relevant, sums this up eloquently:

We like other people to admire us.  As geeks we like others to admire us for our skills.  Badges/achievements stay visible in association with our online identity long-term, unlike individual questions and answers which quickly fade into obscurity.

If I play a game and get a great score, it’s nice, but it means little to others unless they have the context of what typical scores are for that game (and difficulty level etc).  Whereas an achievement is a little more compact of a summary of what you’ve accomplished.

Badges also give us a checklist whereby we can see how far we’ve come since we joined the web site – and how far we have to go in order to be average, or to be exceptional.

LarsH’s comments were in the context of participation in an online community which awards badges for numbers of ‘helpful’ answers to questions and other contributions, but the underlying theme is the same for all contexts: the notion of building a persistent persona associated with achievements and success, one that endures beyond the single assessed instance (one playthrough of a game, one helpful answer) with which it is specifically associated.  It creates a sense of status and implies competence and trustworthiness, which in turn can inspire others to emulate that behaviour in the hope of similar recognition, or signal a trusted individual from whom to seek advice or guidance.  Badges thus provide not only recognition of past contributions but also an implication that future contributions can be trusted, and an incentive to participate usefully.

Being able to capture and reflect this sometimes quite fine-grained information in other contexts would indeed have some advantages.  But as soon as these awards and achievements are viewed by someone outside their immediate context, they lose a large part of their value: not because they’re worthless outside their original context, but because the viewer lacks the expertise in the field to trust that the badge reflects what it claims, or to understand the implications of what it claims.  The value of the badge, therefore, isn’t inherent in the badge itself but in the assertions around it: that it was issued by a trustworthy party, on reliable evidence, to the specific individual who claims it.  A lot like, say, a traditional certificate for completing an accredited course, perhaps…
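To make that concrete, here’s a rough sketch of the sort of assertion that gives a badge its value.  The field names are loosely modelled on the Open Badges assertion format rather than quoted from it, and all the values are invented:

```python
# A rough sketch of the assertions that give a badge its value: who issued it,
# to whom, against what criteria and on what evidence.  Field names are
# loosely modelled on the Open Badges assertion format, not quoted from it.
import json

assertion = {
    "recipient": "sha256$0123abcd...",  # hashed identity, invented value
    "badge": {
        "name": "Peer Mentor",
        "criteria": "https://example.org/badges/peer-mentor/criteria",
        "issuer": {
            "name": "Example University",
            "origin": "https://example.org",
        },
    },
    "evidence": "https://example.org/portfolio/jo/mentoring",
    "issued_on": "2011-11-01",
}

print(json.dumps(assertion, indent=2))
```

Everything of value here is a claim about the issuer, the criteria, the evidence and the recipient; the badge image itself is just decoration.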

As Alex Reid (no, not that one) says, ‘passing a high stakes test to get a badge is no different than the system we already have’, and a lot of the problems around developing a trustworthy system are those that have already been faced by traditional awarders.  Comparisons to diploma mills swiftly emerged in the aftermath of the competition announcement, and it’s not difficult to see why: if anyone can issue a badge, how do we know that a badge reflects anything of merit?  Cathy Davidson’s vision of a world where employers hand out badges for ‘Great Collaborator!’ or ‘Ace Teacher!’ is nice (if far too cutesy for my tastes), but it’s not exactly hard to see how easily it could be abused.

At the heart of the badges initiative is the far older issue of identity management.  As our badge ‘backpack’ is intended to gather badges awarded in a range of different contexts, how are we to be sure that they all belong to the one person?  As the example above of Alex Reid, American academic, versus Alex Reid, cage fighter, cross-dresser, Celebrity Big Brother contestant and ex-husband of Jordan, demonstrates, names are useless for this, particularly when the same person can be known by a number of different names, all equally meaningful to them in the different contexts the backpack is intended to unify.  Email addresses have often been suggested as a way of identifying individuals, yet how many of us use a single address from birth to death?  In the US, social security numbers are far too sensitive to be used, while UK National Insurance numbers aren’t unique.  Similarly, how is a recruiter to know that a badge has been issued by a ‘respectable’ provider on the basis of actual performance, rather than simply bought from a badge mill?  Unique identification of individuals and awarders, and accreditation of the accreditors themselves, whether through a central registry or a decentralised web of trust, is at the heart of making this work, and that’s not a small problem to solve.  With the momentum behind the OER movement growing and individuals having more reason and opportunity to undertake free, ad hoc, informal learning, being able to recognise and credit this learning is important.  As David Wiley notes, however, there’s a difference between a badge awarded simply for moving through a learning resource and one awarded as the outcome of validated, quality-assured assessment specifically designed to measure learning and achievement, and this needs to be fully engaged with if open or alternative credentialing is to fulfil its potential.
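Returning to the email address point for a moment: the Open Badges approach, as I understand it, is to publish not the recipient’s address itself but a salted hash of it.  A minimal sketch of the idea (the salt and addresses below are invented):

```python
# A minimal sketch of binding a badge to a recipient via a salted hash of an
# email address.  The salt and addresses are invented for illustration.
import hashlib

def hash_recipient(email: str, salt: str) -> str:
    return "sha256$" + hashlib.sha256((email + salt).encode("utf-8")).hexdigest()

def matches(candidate_email: str, salt: str, stored: str) -> bool:
    # A verifier re-computes the hash and compares; the address itself never
    # needs to be published alongside the badge.
    return hash_recipient(candidate_email, salt) == stored

stored = hash_recipient("alex@example.org", salt="f8f2a0")
print(matches("alex@example.org", "f8f2a0", stored))           # True
print(matches("alex@new-employer.example", "f8f2a0", stored))  # False
```

This avoids exposing the address, but it also illustrates the fragility described above: change your email address and the link between you and your badges silently breaks, even though you are the same person.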

There’s also a danger that badges and achievements can be used to legitimise bad or inadequate content by turning it into a Skinner box, where candidates will repeatedly undertake a set task in the expectation of eventually getting a reward, rather than because the task itself is engaging or they’re learning from it.  Borrowing from games can be good, but gamers can be very easily coaxed into undertaking the most mindless, tedious activities long after their initial value has been exhausted if the eventual reward is perceived as worth it.

Unlike, say, augmented reality or other supposed game changers, it’s not the underlying technology itself that has the potential to be transformative – after all, it basically boils down to a set of identity assertion and management problems with which the IDM people have been wrestling for a long time, plus image exchange and suitable metadata – but rather the cultural transformation it expresses, with the recognition that informal or hobbyist learning and expertise can be a part of our professional skillset.  Are badges the right way of doing this?  Perhaps; but what’s much more important is that the discussion is being had.  And that has to be a good thing.

Public draft consultation on standard for transfer of assessment data

You may remember a proposed standard for the transfer of qualification assessment data and evidence that was previously covered on this blog.

Work on this has been ongoing since then, and a draft standard is now available for public consultation and comment.  The public draft can be accessed via the BSI website, and comments may be submitted by following the instructions there.

All comments must be submitted by 30 November to be considered for the final version of the standard.  Depending on the nature and extent of comments received, the standard is likely to be released in the first quarter of 2010.

Navigating through the competences maze

[Image: Relativity, by M. C. Escher]

Around 35 delegates struggled through Wednesday’s sweltering heat and the baffling mysteries of Manchester Metropolitan University Business School’s internal layout to discuss a range of issues around competences for learning, assessment and portfolio.  Delegates represented a wide range of knowledge and expertise, from novices looking to find out ‘what it’s all about’ to experienced practitioners and developers.

It was an impressively international turnout, with delegates from Norway, Greece, Austria, Spain and Belgium joining the UK contingent, mainly representing the iCoper project, which is exploring the linking of assessment with competences.  Assessment interests were also represented by the University of Southampton, who are working on the automatic construction of statements of competency from QTI XML, exploring the underlying modelling of competencies for machine processing.  The majority of delegates came from a strong (e)portfolio background, with interests in the movement of information into and out of eportfolios.  JISC and CETIS participants also highlighted the relevance of this work to JISC’s Curriculum Design projects.
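As an illustration of the general idea behind the Southampton work (this is not their actual code), one might pull the title and outcome declarations out of a QTI item and render them as human-readable competency statements.  Real QTI items carry far richer structure than this sketch shows:

```python
# An illustrative sketch: read a QTI 2.1 item's title and outcome
# declarations and render them as readable competency statements.
import xml.etree.ElementTree as ET

QTI_NS = "{http://www.imsglobal.org/xsd/imsqti_v2p1}"

ITEM_XML = """
<assessmentItem xmlns="http://www.imsglobal.org/xsd/imsqti_v2p1"
                identifier="hist101-q3" title="Interpreting primary sources">
  <outcomeDeclaration identifier="SCORE" cardinality="single" baseType="float"/>
</assessmentItem>
"""

item = ET.fromstring(ITEM_XML)
for outcome in item.findall(QTI_NS + "outcomeDeclaration"):
    print("Can demonstrate '%s' (recorded as %s)"
          % (item.get("title"), outcome.get("identifier")))
```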

The morning session featured a number of short presentations (all presentations from the day can be found here) on competence requirements in the field of medical education, an area which is relatively advanced in the use of competence frameworks.  Claire Hampshire (MMU), Julie Laxton (ALPS CETL), Karen Beggs (NHS Education for Scotland) and Jad Nijjar (iCoper and Synergetics) covered a range of topics, including the desire for non-hierarchical representations, the management of massive amounts of data, and the various points in a student’s career at which information can move between one system and another.  The ownership of data in portfolios, including competency information, remains an unresolved issue, with at least three actors involved: the data subject, the data controller and the data processor.  Three main points of interoperability were identified: across time (for example, undergraduate to postgraduate), across specialities (for example, from psychiatry to gynaecology), and from elearning experiences to portfolios.

After coffee, Paul Horner (Newcastle University), Shane Sutherland (PebblePad), Dave Waller (MyKnowledgeMap) and Tim Brown (NHS Education for Scotland) delivered short presentations on various tools for handling competence information.  One key issue that emerged from this session was the strong need for a specification to enable the sharing of profiles between systems: while evidence can be exported as HTML, entire profiles cannot be moved between systems except in unwieldy formats such as PDF.  Interoperability is needed for both import and export.  There is a noticeable move by customers away from monolithic approaches towards using a variety of (Web 2.0) tools, and developers are working on building open APIs to support this.
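The import/export point is worth dwelling on: a profile serialised to a structured, machine-readable format can be reconstructed faithfully by the receiving system, which an HTML page or PDF export cannot guarantee.  A hypothetical round-trip sketch, with a format and field names invented purely for illustration:

```python
# A hypothetical round-trip: a profile serialised to a structured format can
# be reconstructed exactly by another system.  Format and field names invented.
import json
from dataclasses import dataclass, asdict

@dataclass
class CompetenceClaim:
    competence: str
    evidence_url: str
    asserted_by: str

def export_profile(owner: str, claims: list) -> str:
    return json.dumps({"owner": owner, "claims": [asdict(c) for c in claims]})

def import_profile(payload: str) -> list:
    return [CompetenceClaim(**c) for c in json.loads(payload)["claims"]]

payload = export_profile("jo", [
    CompetenceClaim("wound care", "https://example.org/jo/placement-report",
                    "placement mentor"),
])
print(import_profile(payload))
```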

What struck me most from both sessions was the way in which developments around eportfolios and competence recording are very firmly rooted in actual teaching and learning practice, with requirements emerging directly from real-world practice and tool developments directly benefiting teachers and learners.

In the afternoon the meeting split into four groups, ostensibly to work on identifying and representing information structures for a proposed competences specification.  In practice, my group spent most of our time in wide-ranging discussion across the whole area of competences, eportfolios and assessment, but as a newbie in this field I found this hugely helpful.  Overall conclusions from the groups identified the following requirements and issues:

  • ability to transfer information between different tools and systems
  • transition
  • curriculum progression pathways
  • relationship between competences and evidence versus qualifications
  • repeatable pattern of description at the core
  • fairly simple structure
  • identifiers for defining authority
  • a definable core structure enables extension for extra semantics
  • able to express the relationship between a learning object and skills, competences and knowledge
  • collection of outcomes
  • architectural issues: data is created and needed in many locations instead of at a central point
  • competences are highly context dependent

The meeting concluded with asking delegates what they want CETIS to focus on in taking forward work on competences.  Suggestions included:

  • development of a data model
  • business case for interoperability
  • look beyond HE/FE to workplace standards, particularly in the HR domain
  • look for connections to the HIRA progress reports due out by November
  • look at what has failed so far in order to learn from past experiences
  • look at defining competences in such a way that a specification can be combined with XCRI
  • have loosely defined competences that can be moved between systems
  • need a high level map of the competency domain in comparison with curriculum description and learning objects.

CETIS will be looking at how best we can take this work forward and, as always, we very much welcome input and suggestions from our community – please feel free to leave comments here, follow up via the wiki or contact Simon or me!

Assessment, Portfolio and Enterprise too

A recent joint meeting of the JISC CETIS Assessment, Enterprise and Portfolio SIGs drew a wide range of participants to discuss topics of interest to all three SIGs.  The morning sessions covered a range of topics that touched on all three domains, while the afternoon was given over to a special session on student retention.

John Winkley of AlphaPlus Consultancy, who has been working with JISC as an expert consultant in the area of assessment, opened the meeting by introducing delegates to a number of funding opportunities in the domain that JISC will be releasing in the next few weeks.  These opportunities include at least two and up to four demonstrator projects, funded to build on and further develop outputs from earlier JISC toolkit activities, and two Invitations to Tender for desktop research studies.  These studies will look at advanced eassessment techniques and at quality concerns around eassessment.  The demonstrator projects must be led by a HEFCE-funded institution, while the ITTs will be open to all bidders, including Scottish institutions, FE colleges with fewer than 400 HE students, and the private sector.  All work is due to be completed by March 2009, and will add considerably to JISC’s portfolio of work in this area.

One project which has benefited from JISC funding for part of its lifetime is the WebPA project based at Loughborough University.  Nic Wilkinson presented the successful peer assessment system to delegates, illustrating some of the reasons for its success at the recent IMS Learning Impact Awards in Austin, Texas.  One of the most significant factors in the system’s ongoing success is the effort the project team have put into attracting and supporting a significant number of participating organisations that have now integrated the system into their own teaching practice.  It was also extremely interesting to learn how positively the students themselves have responded to the system, and about their attitudes towards the anonymity of peer marks: the system awards each member of a group an aggregated mark derived from the individual scores awarded by their peers, and students are reported to prefer not to receive individual marks, in order to avoid potential clashes outside the classroom.
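For the curious, here’s a sketch of one common way of deriving individual marks from peer ratings: normalise each assessor’s scores so they sum to one, total the share each member receives, and scale the group mark accordingly.  I’m not claiming this is WebPA’s exact algorithm, just the general shape of the technique:

```python
# One common way of turning peer ratings into individual marks.  Assumes every
# group member also acts as an assessor.  Not claimed to be WebPA's algorithm.

def individual_marks(ratings, group_mark):
    """ratings: {assessor: {member: score}}; returns {member: mark}."""
    members = list(ratings)
    weights = {m: 0.0 for m in members}
    for assessor, scores in ratings.items():
        total = sum(scores.values())
        for member, score in scores.items():
            weights[member] += score / total  # this assessor's normalised share
    mean = sum(weights.values()) / len(members)
    # A member with an exactly average share receives exactly the group mark.
    return {m: group_mark * w / mean for m, w in weights.items()}

ratings = {
    "ann": {"ann": 3, "bob": 4, "cat": 5},
    "bob": {"ann": 4, "bob": 4, "cat": 4},
    "cat": {"ann": 2, "bob": 4, "cat": 6},
}
print(individual_marks(ratings, group_mark=65.0))
# ann gets 48.75, bob 65.0, cat 81.25
```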

After the break, Karim Derrick of TAG Learning discussed a proposed British Standard for managing the transmission of coursework marks and portfolios of digital evidence of coursework between schools and awarding bodies.  Based on TAG’s extensive experience in this area, the proposed standard includes ‘an XML schema for describing the relationship between components, options and exam specifications’ and a ‘universal translator’ API to support data exchange between the various systems used by exam centres and awarding bodies.  Although the current focus for this work is firmly on the schools sector, if adopted it’s not hard to see how it could be extended to support the universities admissions process and external marking at all levels, particularly in vocational courses where a single accrediting body has to deal with substantial amounts of data.
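The draft schema itself isn’t reproduced here, but the relationships it is described as capturing might be sketched along these lines – a purely hypothetical model, with invented names:

```python
# A purely hypothetical sketch of the relationships the draft standard is
# described as capturing: an exam specification offers options, each made up
# of assessed components against which marks and evidence are returned.
from dataclasses import dataclass, field

@dataclass
class Component:
    code: str        # e.g. a coursework unit
    max_mark: int

@dataclass
class Option:
    code: str
    components: list = field(default_factory=list)

@dataclass
class ExamSpecification:
    awarding_body: str
    code: str
    options: list = field(default_factory=list)

spec = ExamSpecification("Example Board", "GCSE-ICT-2008", [
    Option("coursework-route", [Component("unit-1-portfolio", 60)]),
])
print(spec)
```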

Alan Paull of APS Ltd closed the morning with a lively journey round the admissions domain landscape and the DELIA project.  DELIA enables the sharing of enhanced learner information as part of the admissions process, enabling admissions officers to make more informed decisions when evaluating borderline applications.  This not only improves the quality of the admissions process, enabling a closer match between applicants and course requirements, but can have a positive impact on the subsequent retention of such students.

The afternoon featured a special session on student retention, looking at a range of issues around the topic and attempting to capture requirements for work in the domain.  Simon Grant of JISC CETIS and the Centre for Recording Achievement led an interactive session that asked participants to consider self-assessment of suitability for courses and the different personas we adopt as our contexts change.  Simon also touched on some of the problems that arise when our different personas come into conflict, a situation which can be exacerbated by the widespread use of social networking services and individuals’ lack of awareness of the potential implications of forgoing privacy when using them. 

Helen Richardson, also of JISC CETIS and the Centre for Recording Achievement, closed the day by discussing some of the findings of the STAR project and the National Audit Office’s report on student retention.  The STAR project produced a detailed series of guidelines to help support students both before and during their university careers, including the use of technologies such as SMS messaging to aid this.

We’re grateful to all our presenters for sharing their work with us and for being so willing to respond to questions and comments from the audience, and to all those who attended on the day and helped to make it a success.

Assessment meets Enterprise meets Portfolio: three way SIG meeting ahead

The room’s booked, the agenda’s confirmed and lunch has been ordered, so it must be time for another SIG meeting.  This time, the Assessment SIG is joining up with the Enterprise and Portfolio SIGs on 22 May at the University of Strathclyde to look at issues that affect all three domains and areas of overlap between the domains. 

The agenda includes the usual mix of news and updates, project presentations and discussion sessions, plus a special themed requirements gathering session focused on the pressing issue of student retention.  Myles Danson of JISC opens the day with a heads-up on forthcoming Invitations to Tender in the assessment domain, a topic that is always of great interest.  Nicola Wilkinson of the WebPA project, based at Loughborough University, will introduce their Learning Impact Award-nominated system, while Alan Paull will discuss the University of Nottingham’s DELIA project on admissions.

The admissions process is also the focus of proposed BSI standardisation work for the transmission of digital evidence and assessment data between schools and awarding bodies to be presented by Karim Derrick of TAG Learning.

The afternoon will feature presentations and discussions on student retention aimed at gathering requirements, recommendations and priorities for future activities, led by our own Simon Grant and Helen Richardson and building on the work of the STAR project and the National Audit Office.

As always, the meeting is free to attend, with lunch and refreshments provided.  It’s open to all, and we just ask that you register in advance to secure your place.  We look forward to seeing you there!