The evaluation of assessment diaries and GradeMark at the University of Glamorgan

Two major, institution-wide innovations introduced in recent years at the University of Glamorgan are the subject of this project, funded as part of the JISC Assessment and Feedback Programme Strand B.

Arising from a Change Academy project that ran from 2008-10, the use of assessment diaries for scheduling and planning assessment, and of GradeMark for online marking, has been adopted across the institution to varying extents within different schools and faculties.  This new JISC project will examine the reasons for this variation in adoption, explore staff and student experiences of these technologies, and investigate staff development strategies to encourage wider uptake.

The assessment diary system is a very simple but very elegant approach to the issue of assessment bunching, identified by Glamorgan students as a major problem undermining learning and assessment performance.  Initially the problem was addressed simply by sharing assessment due dates for all courses widely through staff Outlook calendars, but considerable development work since then has resulted in an MS Access database backend with a web interface in Blackboard.  Seamless integration with Blackboard through a building block provides a single point of access to this information, which can be fully personalised by both students and staff.  This provides a highly visual way of understanding the rhythms of course workload and has been very successful in helping students manage their time and plan their work in relation to assessment deadlines.  Staff have also found that the ease of access to detailed information on course timings has facilitated dialogue amongst staff across a range of courses and helped them reflect more effectively on the student learning experience when redesigning or rescheduling modules.  Those departments that were early adopters of the diaries have seen significant improvement in their National Student Survey scores for feedback, and the diaries are also believed to have had a positive impact on student retention rates.
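For the technically curious, the underlying architecture is pleasingly mundane: a shared deadlines database with personalised views rendered inside the VLE.  As a purely illustrative sketch (the table and column names below are my own inventions, not Glamorgan’s actual schema), a web layer could pull a student’s personalised deadline list from the Access backend via ODBC like this:

```python
# A sketch only: the schema is invented for illustration, as the real
# Glamorgan diary database structure hasn't been published.
from datetime import date

import pyodbc  # generic ODBC bindings; the Access driver ships with Windows

conn = pyodbc.connect(
    r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"DBQ=C:\data\assessment_diary.accdb"
)

def upcoming_deadlines(student_id):
    """Return (module_code, title, due_date) rows from today onwards."""
    sql = """
        SELECT m.module_code, a.title, a.due_date
        FROM (assessments AS a
              INNER JOIN enrolments AS e ON a.module_id = e.module_id)
              INNER JOIN modules AS m ON a.module_id = m.module_id
        WHERE e.student_id = ? AND a.due_date >= ?
        ORDER BY a.due_date
    """
    return conn.cursor().execute(sql, student_id, date.today()).fetchall()

# A personalised diary view is then just a rendering of this list.
for module, title, due in upcoming_deadlines("S1234567"):
    print(f"{due:%d %b %Y}  {module}  {title}")
```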

By contrast, GradeMark is a commercial online marking tool within Turnitin that is seeing increasing adoption in the UK.  In use across the University of Glamorgan since 2009, it is seen as addressing a number of student concerns about feedback, including timeliness, level of detail and, importantly, the development and maintenance of a meaningful dialogue between students and teachers, and amongst teachers themselves.  Both students and staff responded very positively to the tool in an initial evaluation of its impact, and the current project will be able to explore that impact in greater detail.

Project outputs will include a series of video interviews with students and both teaching and administrative staff which will be freely available as OERs on YouTube, together with Panopto recordings of staff development and training sessions for asynchronous viewing.  The team is also exploring the use of online avatars for staff development discussions and scenario roleplaying, an exciting approach that has worked very successfully at the University of Hong Kong and which I hope to follow up in a later post.

The project’s blog is well worth a read to see how the team go about their evaluation and some of the issues they encounter on their journey.

Making Assessment Count Evaluation project

The Making Assessment Count (MAC) project ran from November 2008 to October 2010, funded by JISC as part of their Transforming Curriculum Delivery through Technology programme and led by the University of Westminster.  Focused on engaging students with the assessment feedback provided to them, it explored processes for encouraging and guiding student reflection on feedback and for developing a dialogue between learners and teachers around it.  Participants at a joint Making Assessment Count/JISC CETIS event on Assessment and Feedback back in February 2011 heard not only from project staff but also from students who were actively using the system; their enthusiasm for it, and their recognition of its impact on their development as learners, was genuinely inspiring.

Although the project team developed the eReflect tool to support the Making Assessment Count process, the primary driver of the project was the conceptual model underlying the tool; the development of eReflect was a pragmatic decision based on the lack of suitable alternative technologies.  The team also contributed a service usage model (SUM) describing their feedback process to the eFramework.

The Making Assessment Count Evaluation (MACE) project will see the Westminster team oversee the implementation of the MAC model in a number of partner institutions, with the eReflect tool in use at the Universities of Bedfordshire and Greenwich and at Cardiff Metropolitan University (formerly the University of Wales Institute Cardiff).  By contrast, City University London and the University of Reading will be focusing on implementing the MAC model within Moodle (City) and Blackboard (Reading), and exploring how the components already provided within those VLEs can be used to support this very different process and workflow.

It is perhaps the experiences of City and Reading that will be of most interest.  It’s becoming increasingly evident that there are very strong drivers for the use of existing technologies, particularly extensive systems such as VLEs, for new activities: there is already clear institutional buy-in and commitment to these technologies, institutional processes will (or at least, should!) have been adapted to reflect their integration in teaching practice, and embedding innovative practice within these technologies increases its sustainability beyond a limited funding or honeymoon period.  The challenges around getting MAC into Moodle and Blackboard are those that led to the need for the eReflect tool in the first place: traditional assessment and survey technology simply isn’t designed to accommodate the process of dialogue and student engagement with feedback that the MAC process drives.  Of course, Moodle and Blackboard, representing community-driven open source software and proprietary commercial systems respectively, present very different factors in integrating new processes, and the project teams’ experiences and findings should be of great interest to future developers more generally.

Deployment of the MAC process in such a wide range of institutions, subject areas and technologies will provide a very thorough test of the model’s suitability and relevance for learners, as well as a range of implementer experiences and guidance.  I’m looking forward to following the progress of what should be a fascinating implementation and evaluation project, and hope to see learners in other institutions engage as enthusiastically and with such good outcomes as the participants in the original MAC project.

Under development: xGames

The xGames project, a collaboration between Reid Kerr and Anniesland colleges, has been running for nearly a year and is currently in the final stages of piloting its innovative use of wireless Xbox 360 controllers for classroom quizzes.  Funded as part of the JISC Learning and Teaching Innovation Grants: SWaNI (Scotland, Wales and Northern Ireland) FE programme, the project has produced a highly user-friendly question editor that allows complete novices in quiz and game design to author questions easily.  These questions can then be played in one of several games designed by the project on a large screen linked to a standard Windows PC fitted with USB receivers for up to four wireless Xbox controllers.  Using wireless controllers is crucial, as the range of the sensors allows a great deal of flexibility in classroom set-up, permitting the use of breakout groups to discuss topics and feedback, for example.  Additionally, Xbox controllers are familiar to many learners, who are more confident using them than PC gaming controls.
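The project’s code hasn’t been released yet, so the sketch below is only my rough illustration of the input side: up to four pads on one PC, with each button press mapped to a quiz answer.  It uses pygame for brevity (xGames itself is built with XNA Game Studio), and the button-to-answer mapping is an assumption:

```python
# Sketch: poll up to four Xbox 360 pads for quiz answers using pygame.
# The button-to-answer mapping is assumed for illustration; the real
# xGames titles are written in XNA Game Studio, not Python.
import pygame

pygame.init()
pads = [pygame.joystick.Joystick(i)
        for i in range(min(4, pygame.joystick.get_count()))]
for pad in pads:
    pad.init()

# On Windows the 360 pad reports its face buttons as A=0, B=1, X=2, Y=3.
ANSWER_FOR_BUTTON = {0: "A", 1: "B", 2: "C", 3: "D"}

answers = {}  # first answer locked in per pad
while len(answers) < len(pads):
    for event in pygame.event.get():
        if (event.type == pygame.JOYBUTTONDOWN
                and event.button in ANSWER_FOR_BUTTON):
            answers.setdefault(event.joy, ANSWER_FOR_BUTTON[event.button])

print(answers)  # e.g. {0: 'B', 2: 'A'}, keyed by pad index
```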

The video below demonstrates the system’s use in a primary school classroom.  The engagement and enthusiasm of the children is immediately obvious, with lively discussions about the quiz questions and clear enjoyment of the session, and the immediate indication of correct and incorrect answers provides instant feedback to the pupils.  The use of the large screen means the teacher constantly maintains a clear overview of the progress of the entire class, allowing her to identify topics that are generally not understood and require whole-class revision, or to spot struggling individuals within the group.  Discussion amongst the older group of college students is more muted, but their focus on the game mechanics and subject matter is evident.

[youtube]http://www.youtube.com/watch?v=ZRZZj1u9KQ0[/youtube]

The games, screenshots for which can be found under the games menu on the project site (software will be available from this site in due course), are designed using industry standard software such as XNA Game Studio, 3D Studio Max, Fireworks and Illustrator, with the question editor using a Visual Basic form to generate plain text files containing the question stem, distractors and correct response.  Unlike in a commercial system such as Quia, questions are stored in a shared public folder so they can easily be shared and reused by teachers in different institutions.  xGames has generated interest from FE and, particularly, schools, and may well see further uptake as an affordable and easily adopted way of bringing game based learning into more classrooms.
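That plain-text storage is what makes the shared-folder reuse so painless; a question file needs only a handful of lines of code to load.  The layout below is my guess at a plausible format rather than the project’s published one:

```python
# Sketch: load a shared-folder quiz file into (stem, options, answer)
# records. The one-block-per-question layout is invented for illustration.
from dataclasses import dataclass

SAMPLE = """\
Q: What is the capital of Scotland?
A: Glasgow
A: Aberdeen
A: Edinburgh
A: Dundee
CORRECT: 3
"""

@dataclass
class Question:
    stem: str
    options: list
    correct: int  # 1-based index into options

def parse(text):
    questions, stem, options = [], None, []
    for line in text.splitlines():
        if line.startswith("Q: "):
            stem, options = line[3:], []
        elif line.startswith("A: "):
            options.append(line[3:])
        elif line.startswith("CORRECT: "):
            questions.append(Question(stem, options, int(line[9:])))
    return questions

print(parse(SAMPLE)[0].options[2])  # -> Edinburgh
```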

First CFP for CAA 2012 now out

The first Call for Papers for the 2012 International CAA Conference: Research into E-Assessment is now available on the conference website, which also includes links to previous proceedings and a wide range of other important information about the event.

The conference will take place on 10-11 July in Southampton, and is jointly organised by Electronics and Computer Science at the University of Southampton and the Institute of Educational Technology at The Open University.

Thanks to @drdjwalker for the heads up :)

JISC Assessment and Feedback Programme Strand C

The final part of the current JISC Assessment and Feedback Programme, Strand C provides support for technical development work to ‘package a technology innovation in assessment and feedback for re-use (with associated processes and practice), and support its transfer to two or more named external institutions’.  This will see a number of innovative systems that have reached sufficient maturity, including those developed over recent years with direct support from JISC, adopted outside their originating institutions and used to directly enhance teaching and learning.

The Open Mentor Technology Transfer (OMtetra) project will see The Open University’s Open Mentor system packaged and transferred to the University of Southampton and King’s College London.  This unique system profiles tutor feedback to enhance the tutor’s ability to reflect on the feedback she provides, enhancing both the quality of the feedback students receive and the tutor’s own professional development and understanding.

QTI Delivery Interaction (QTIDI), led by the University of Glasgow in partnership with the Universities of Edinburgh, Southampton and Strathclyde, Kingston University and Harper Adams University College, will package the JISC-funded MathAssessEngine (itself a development of earlier JISC-funded work) for deployment in the partner institutions through Moodle and other VLEs, developing a thin Learning Tools Interoperability (LTI) layer to launch assessments from within the VLE and return scores to the VLE gradebook.  As the name suggests, the system will be fully compliant with the IMS Question and Test Interoperability (QTI) v2.1 specification.
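LTI is deliberately thin: the VLE signs an ordinary form POST with OAuth 1.0, and the assessment engine later posts a score back to the outcome service URL named in the launch.  Here is a rough sketch of the consumer side (the key, secret and URLs are placeholders, though the parameter names come from the IMS LTI specification):

```python
# Sketch: sign an LTI 1.x basic launch request as a tool consumer (VLE)
# would. Key, secret and URLs are placeholders for illustration.
import requests
from oauthlib.oauth1 import SIGNATURE_TYPE_BODY
from requests_oauthlib import OAuth1

LAUNCH_URL = "https://assessment.example.ac.uk/lti/launch"  # placeholder

params = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "quiz-42",   # which assessment to launch
    "user_id": "s1234567",
    "roles": "Learner",
    # Where and how the tool should send the score back to the gradebook:
    "lis_outcome_service_url": "https://vle.example.ac.uk/lti/outcomes",
    "lis_result_sourcedid": "course1:quiz-42:s1234567",
}

# LTI carries the OAuth 1.0 signature in the form body itself.
auth = OAuth1("consumer-key", "consumer-secret",
              signature_type=SIGNATURE_TYPE_BODY)

response = requests.post(LAUNCH_URL, data=params, auth=auth)
print(response.status_code)
```

In a real deployment the signed parameters are rendered into an auto-submitting HTML form in the learner’s browser rather than POSTed server to server, but the signing works the same way.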

Uniqurate is led by Kingston University, working in collaboration with the Universities of Southampton and Strathclyde, to produce and share a high quality QTI assessment content authoring system.  Again building on extensive previous work in this area, the project will focus on producing user-friendly interfaces that allow newcomers to eassessment, and those unfamiliar with QTI, to create high quality interoperable content easily and confidently.  As a sister project to QTIDI, the two project teams are working closely together to provide a suite of tools for interoperable eassessment.

The Rogo OSS project will see the University of Nottingham package their in-house eassessment system and support its implementation at De Montfort University and the Universities of East Anglia, Bedfordshire, Oxford and the West of Scotland.  This is an extensive and mature open source system with over seven years’ development and deployment behind it, supporting a wide range of question types, QTI import and export, LaTeX, foreign languages, accessibility, a range of multimedia formats, LDAP authentication, invigilator support and embedded workflows covering the range of activities within the eassessment lifecycle.

All four projects represent an exciting stage in the work JISC has been funding, directly and indirectly, for several years, and will see these tools becoming available to the wider and non-specialist HE community supported by detailed resources and lively and engaged user communities.

JISC Assessment and Feedback Programme Strand B

Where the JISC Assessment and Feedback Programme Strand A projects are focused on identifying and introducing new assessment and feedback processes and practices, the Strand B Evidence and Evaluation projects are reviewing the impact and implications of innovations and transformations that have already taken place, and exploring how these can be extended to new contexts and environments.  These eight projects cover a broad range of approaches and will provide invaluable insight into the value of such changes for institutions, learners and staff.

The EFFECT: Evaluating feedback for elearning: centralised tutors project at the University of Dundee will examine the success of esubmission and their TQFE-Tutor system, a centralised email account, blog and microblog supporting their online PDP programme for tertiary tutors.  The initiative aimed to significantly improve response times and teaching quality through the use of this centralised system, as well as to provide opportunities for peer interaction and collaboration for both students and staff.  As well as rigorously evaluating the impact of the programme, EFFECT will produce valuable guidance on how to adapt the system for implementation by other institutions and courses.

Student-Generated Content for Learning (SGC4L) at the University of Edinburgh is evaluating the impact of PeerWise, a freely available web-based system which not only allows students to author questions to share with their peers but also supports extensive social discussion tools around that content, providing a greatly enhanced learning experience to which students have responded enthusiastically.  The project will emphasise the production of generic information that will be applicable across a wide range of subject areas and institutional contexts.

OCME: Online Coursework Management Evaluation, based at the University of Exeter, is rolling out and evaluating a fully paperless coursework management system, involving the integration of Moodle and Turnitin.  This will inform the development of best practice guidelines for the deployment and management of online assessment.

The problems of assessment bunching and timeliness of feedback are being considered by the Evaluation of Assessment Diaries and GradeMark at the University of Glamorgan project.  This project will evaluate the use of assessment diaries to highlight to both staff and students when assignments are due, encouraging good time management and planning.  The project is also looking at the use of GradeMark, an online grading system available as part of Turnitin, to provide timely, personalised and high quality feedback on assignments.

Evaluating the Benefits of Electronic Assessment Management (EBEAM) at the University of Huddersfield is also looking at the impact of Turnitin and GradeMark on student satisfaction, retention, progression and institutional efficiency.  This project will benefit from their early adoption of the systems to provide detailed insights and recommendations for their implementation in a wide range of subject areas and across different institutions.

The University of Hertfordshire’s Evaluating Electronic Voting Systems for Enhancing Student Experience (EEVS) project will provide an exhaustive examination of the use of electronic voting systems in formative and summative assessment, the benefits they offer, and best practice around them.

The eFeedback Evaluation Project (eFEP) builds on The Open University’s extensive experience in providing distance learning to explore the use of written and spoken feedback in modern language studies.  The value of such feedback in face-to-face courses will be examined through deployment in similar classes at the University of Manchester.  Detailed reports on the value of different feedback techniques together with training resources for staff and students will provide valuable advice for other institutions considering adopting such approaches.

The University of Westminster’s Making Assessment Count Evaluation (MACE) project builds on the success of the Making Assessment Count project, which will be familiar to those who attended our joint event with them this February.  This project will not only evaluate the impact of the MAC eReflect self-review questionnaire within Westminster, but also pilot implementations at six other institutions, including the transformation of the MAC SOS (Subject, Operational and Strategic) feedback principles into Moodle and GradeBook at City University London.  By demonstrating the effectiveness of the system in a wide variety of subject areas and institutional contexts, the project will provide a valuable resource for those considering adopting the system.

JISC Assessment and Feedback Programme Strand A

JISC has a long tradition of providing support and encouragement for innovative assessment activities, recognising the crucial role assessment plays in education and the significant concerns about the current state of university assessment and feedback repeatedly revealed by the National Student Survey, which provided the stimulus for the National Union of Students’ recent high-profile Feedback Amnesty campaign.

Their latest work in this area is focused on a substantial programme of projects funded under the three strands of the current Assessment and Feedback Programme, covering institutional change, evidence and evaluation, and technology transfer.  The twenty projects that successfully bid for funding under this programme address a wide range of assessment and feedback processes and educational contexts, illustrated by Grainne Hamilton’s excellent Storified account of the programme start-up meeting earlier this month.  These projects are focused on using technology to increase the quality and efficiency of assessment and feedback practice on a large scale.  Crucially, there is a strong element of sharing running throughout the programme, both in supporting the transfer of technology to new institutions, and in sharing outcomes and learning from previous work to help support future practice – literally feeding forward to the future.

Strand A focuses on institutional change, with the eight projects funded reviewing and revising assessment and feedback processes and practices, and using technology to support major changes and best practice in their chosen area.

Feedback and Assessment for Students with Technology (FASTECH) is working with the Higher Education Academy-funded Transforming the Experience of Students through Assessment (TESTA) project to implement technology-enhanced assessment and feedback processes in a large number of degree programmes at Bath Spa and Winchester Universities.  The project will learn about the approaches teachers and learners take to assessment and how technology can be used to affect this.  In addition, a range of resources and support will be made available to support practitioners in transforming individual and institutional practice.

COLLABORATE: working with employers and students to design assessment enhanced by the use of digital technologies at the University of Exeter is focused on employability issues, and on ensuring that assessment is designed with students’ future career prospects firmly in mind.  The project is structured around a series of collaborations: with employers, with programme and institutional teams, and with students and recent graduates, redesigning assessment to ensure that it is pedagogically sound and realistic in preparing students for life beyond graduation.

FAST (Feedback and Assessment Strategy Testing), led by Cornwall College, is facing the intriguing issue of embedding technology-supported assessment in a geographic area that lacks full broadband roll-out, and with students whose ability to physically visit the college campus is limited by poor transport links or employment.  A pilot with a small cohort on a single campus will be followed by full-scale roll-out in a Personal and Employability Development module studied by over 700 students on 43 different courses across seven campuses.  Information on the technical and support issues encountered and the methods adopted to overcome them will be disseminated to the wider community, as will model CPD packages for potential adoption in other institutions.

InterACT at the University of Dundee is also dealing with an unusual student cohort.  Providing continuing education and CPD for practising doctors, their courses are entirely distance taught and, crucially, the timing of assessment submission is entirely at the discretion of the individual student.  This leads to issues around the timeliness of feedback and feed forward, which may have an impact on learner satisfaction and, subsequently, on retention.  This project will examine the ways in which a range of technologies such as blogs, FriendFeed, VOIP, webinars and chat tools can enable personalised, timely and focused feedback that encourages reflection and engagement, and enhances the student experience.

The timeliness and effectiveness of feedback is also a focus of the Integrating Technology-Enhanced Assessment Methods (iTEAM) project at the University of Hertfordshire, which is exploring the ways in which electronic voting systems and increased embedding of QuestionMark Perception can be used to provide prompt personalised feedback.  A student dashboard will be developed to integrate information from EVS, QMP, the institution’s Managed Learning Environment (MLE) and other relevant sources to provide a central point for information on a student’s performance across all subjects, enabling personal tutors to provide meaningful and personalised support and students to better understand their own learning behaviours.

The Institute of Education’s Assessment Careers: enhancing learning pathways through assessment project will explore the construction of assessment frameworks incorporating multi-stage assessment and structured feedback and feed forward.  There is a strong emphasis on assessment as a holistic process rather than a series of single, stand-alone events, with each assessment instance part of a larger learning journey rather than marking the end point of a phase of study.  The framework will be piloted in a number of Masters modules, with learner and tutor experiences then informing the model as it is scaled up for use at an institutional level.

TRAFFIC: TRansforming Assessment + Feedback for Institutional Change at Manchester Metropolitan University builds on MMU’s work in the JISC Curriculum Design and Delivery programme to implement an institution-wide assessment transformation programme.  The project will undertake an extremely thorough review of assessment across the institution, explore ways in which technology can enhance and support assessment and feedback processes, and provide a very rich range of resources for the broader community.

eAFFECT: eAssessment and Feedback for Effective Course Transformation is the culmination of a range of activities examining assessment and feedback processes undertaken recently at Queen’s University Belfast.  The project will examine staff and student approaches to assessment, in particular addressing concerns about workload, learning styles and how technology can support transformation in assessment processes.  A practice-based website and extensive supporting documentation will help individual practitioners and institutional change managers apply the lessons of this project to their own contexts.

Badges, identity and the $2 million prize fund

You’ll almost certainly have noticed some of the excitement that’s suddenly erupted around the use of badges in education.  Perhaps you’ve heard that it’s the latest in a long line of ‘game changers for education’; maybe you’re even hoping for a slice of the $2 million prize fund that the HASTAC Initiative, Mozilla and the MacArthur Foundation are offering for work around badge adoption and development through the Digital Media and Learning Badges for Lifelong Learning competition.  Supported by a number of significant entities, including Intel, Microsoft and various US Government departments, the competition offers up to $200k each for a number of projects around content and infrastructure for badges for lifelong learning, as well as an $80k award for a research project in ‘Badges, trophies and achievements: recognition and accreditation for informal and interest-driven learning’, together with two smaller doctoral student grants and student and faculty prizes.  That’s a decent amount of cash available for – what?

This is all based around Mozilla’s Open Badges Initiative, which attempts to provide an innovative infrastructure to support the recognition of non-traditional learning and achievement for professional development and progress.  Drawing upon the widespread use of badges and achievements in gaming and the current trend for gamification, the project is described in gamified language, claiming that badges can help adopters ‘level up’ in their careers via the acquisition and display (sharing) of badges.  There’s a fair point being made here: gamers can develop a profile and express their individual identity as gamers through the display of achievements they earn as they play, which can then be shared in-game through the use of special titles, or on appropriate fora through signatures and site profiles.  Achievements reflect the different interests a player has (their weighting on the Bartle scale, for example) as well as their skill.  Within a fairly closed community such as a single game, a suite of games or a website, these achievements have significant value, as the viewers are other gamers for whom the achievements have meaning and value.

LarsH’s response on Stack Overflow to the question ‘why are badges motivating?’, asked over a year ago but still very relevant, sums this up eloquently:

‘We like other people to admire us.  As geeks we like others to admire us for our skills.  Badges/achievements stay visible in association with our online identity long-term, unlike individual questions and answers which quickly fade into obscurity.

If I play a game and get a great score, it’s nice, but it means little to others unless they have the context of what typical scores are for that game (and difficulty level etc).  Whereas an achievement is a little more compact of a summary of what you’ve accomplished.

Badges also give us a checklist whereby we can see how far we’ve come since we joined the web site – and how far we have to go in order to be average, or to be exceptional.’

LarsH’s comments were in the context of participation in an online community which awards badges for numbers of ‘helpful’ answers to questions and other contributions, but the underlying theme is the same for all contexts: the notion of building a persistent persona associated with achievements and success that endures beyond the single assessed instance (one play through a game, one helpful answer) with which it is specifically associated.  It creates a sense of status and implies competence and trustworthiness, which in turn can inspire others to emulate that behaviour in the hope of gaining similar recognition, or indicate that this is a trusted individual from whom to seek advice or guidance.  Badges not only provide recognition of past contributions but also imply that future contributions can be trusted, and offer an incentive to participate usefully.

Being able to capture and reflect this sometimes quite fine-grained information in other contexts would indeed have some advantages.  But as soon as these awards and achievements are looked at by someone outside their immediate context, they immediately lose a large part of their value, not because they’re worthless outside their original context but because the viewer lacks the expertise in the field to trust that the badge reflects what it claims, or to understand the implications of what it claims.  The value of the badge, therefore, isn’t inherent in the badge itself but in the assertions around it: that it was issued by a trustworthy party, on reliable evidence, to the specific individual who claims it.  A lot like, say, a traditional certificate for completing an accredited course, perhaps…

As Alex Reid (no, not that one) says, ‘passing a high stakes test to get a badge is no different than the system we already have’, and a lot of the problems around developing a trustworthy system are those that have already been faced by traditional awarders.  Comparisons to diploma mills swiftly emerged in the aftermath of the competition announcement, and it’s not difficult to see why: if anyone can issue a badge, how do we know that a badge reflects anything of merit?  Cathy Davidson’s vision of a world where employers hand out badges for ‘Great Collaborator!’ or ‘Ace Teacher!’ is nice (if far too cutesy for my tastes), but it’s not exactly hard to see how easily it could be abused.

At the heart of the badges initiative is the far older issue of identity management.  As our badge ‘backpack’ is intended to gather badges awarded in a range of different contexts, how are we to be sure that they all belong to the one person?  As the example above of Alex Reid, American academic, versus Alex Reid, cage fighter, cross dresser, Celebrity Big Brother contestant and ex-husband of Jordan, demonstrates, names are useless for this, particularly when the same person can be known by a number of different names, all equally meaningful to them in the same different contexts the backpack is intended to unify.  Email addresses have often been suggested as a way of identifying individuals, yet how many of us use a single address from birth to death?  In the US, social security numbers are far too sensitive to be used, while UK National Insurance numbers aren’t unique.  Similarly, how is a recruiter to know that a badge has been issued by a ‘respectable’ provider on the basis of actual performance, rather than simply bought from a badge mill?  Unique identification of individuals and awarders, and accreditation of the accreditors themselves, whether through a central registry or a decentralised web of trust, is at the heart of making this work, and that’s not a small problem to solve.  With the momentum behind the OER movement growing and individuals having more reason and opportunity to undertake free ad hoc informal learning, being able to recognise and credit this learning is important.  As David Wiley notes, however, there’s a difference between a badge awarded simply for moving through a learning resource and one awarded as the outcome of validated, quality-assured assessment specifically designed to measure learning and achievement, and this needs to be fully engaged with for open or alternative credentialing to fulfil its potential.
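The early Open Badges assertion format does gesture at part of this: a recipient can be recorded as a salted hash of an email address, so a badge displayer can at least check whether a claimed address matches.  A toy sketch (all values invented) shows both how the check works and how brittle email-based identity is:

```python
# Sketch of the salted-hash recipient check from the early Open Badges
# assertion format; emails and salt are invented for illustration.
import hashlib

def hashed_recipient(email, salt):
    """Issuer side: hide the email behind a salted hash."""
    return "sha256$" + hashlib.sha256((email + salt).encode("utf-8")).hexdigest()

def matches(claimed_email, recipient_field, salt):
    """Displayer side: does a claimed email match the assertion?"""
    return hashed_recipient(claimed_email, salt) == recipient_field

# A badge issued to one address verifies for that address only...
recipient = hashed_recipient("learner@example.ac.uk", salt="deadsea")
assert matches("learner@example.ac.uk", recipient, "deadsea")
assert not matches("learner@gmail.com", recipient, "deadsea")
# ...which is exactly the problem: change your address, lose your badges.
```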

There’s also a danger that badges and achievements can be used to legitimise bad or inadequate content by turning it into a Skinner box, where candidates will repeatedly undertake a set task in the expectation of eventually getting a reward, rather than because the task itself is engaging or they’re learning from it.  Borrowing from games can be good, but gamers can be very easily coaxed into undertaking the most mindless, tedious activities long after their initial value has been exhausted if the eventual reward is perceived as worth it.

Unlike, say, augmented reality or other supposed game changers, it’s not the underlying technology itself that has the potential to be transformative – after all, it basically boils down to a set of identity assertion and management problems with which the IDM people have been wrestling for a long time, plus image exchange and suitable metadata – but rather the cultural transformation it expresses, with the recognition that informal or hobbyist learning and expertise can be a part of our professional skillset.  Are badges the right way of doing this?  Perhaps; but what’s much more important is that the discussion is being had.  And that has to be a good thing.

QTI v2.1 briefing paper now available

An updated version of our QTI Briefing Paper is now available.  It provides an introduction to the specification’s structure and purpose, some details of its history, and a discussion of the pros and cons of adoption.

This is an updated version of the draft document released in March, and will be replaced with a final version after the final release of QTI v2.1 by IMS.

eAssessment Scotland 2011 details now available

The website for this year’s eAssessment Scotland event is now up, and has a wealth of information about this popular, free two-day event.

Running on 25-26 August at the University of Dundee, this third conference features a packed programme including a range of workshops and seminars, poster displays and exhibitions, and keynotes from Steve Wheeler, Becka Colley and Donald Clark, as well as the presentation of this year’s Scottish eAssessment Awards.

Proposals for poster presentations can be submitted until 1 August, while entry for the Scottish eAssessment Awards closes on 16 August.

Registration will open shortly.