Under development: eMargin

When I was studying English at university, one of the more engaging and intriguing sites of discussion and debate was the margins of printed texts.  These are the ultimate asynchronous discussions, taking place over decades in some cases, rarely revisited by their participants once they’d left their comment on previous comments.  It was fascinating to encounter often very different perceptions of both primary and secondary texts, and they encouraged me to reflect on my own interpretations and arguments as well as to articulate them in the form of comments added to those already there.  These serendipitous discoveries definitely enhanced my learning experience, providing the opportunity to discuss texts and solidify my understanding significantly beyond that provided by limited tutorial time and the very few other opportunities for debate available.  Similarly, I encouraged my students to write on their books to increase engagement with the texts they read and legitimise their interpretations and opinions, although that was often met with looks that clearly said, ‘sod that, I’m selling them later.’

So I was very interested to learn about the eMargin project, which is developing an online collaborative textual annotation resource as part of the JISC Learning and Teaching Innovation Grants funding round six.  The eMargin system allows a range of annotation activities for electronic editions of texts, encompassing notes and comments on individual sections, highlighting, underlining and so on, all personalisable to support different tastes and access requirements.  What takes this beyond the usual functionality offered by ebook readers is the ability to share these annotations with classmates and students from other institutions, enabling their use as educational resources by design rather than chance.  Teachers are able to control the degree of exposure of annotations in line with institutional policies on student IPR, and the system may be developed further to allow students to control which comments they wish to share and which to keep private, allowing them to use the same system for personal study as well as class work.  By providing an easy means for sharing ideas, together with a wiki feature for building and capturing consensus, this system will be of value in all disciplines, not just English Literature, where it is being developed.
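
To make the sharing model a little more concrete, here is a rough sketch of how an annotation record with owner-controlled visibility might be represented.  This is purely illustrative: none of the names below come from the eMargin codebase, and the three-level visibility scheme is an assumption based on the description above.

    from dataclasses import dataclass
    from enum import Enum

    class Visibility(Enum):
        PRIVATE = "private"   # personal study notes
        CLASS = "class"       # shared with classmates
        PUBLIC = "public"     # visible to other institutions too

    @dataclass
    class Annotation:
        author: str
        text_id: str          # which electronic edition
        start: int            # character offsets of the annotated span
        end: int
        note: str
        visibility: Visibility = Visibility.PRIVATE

    def visible_to(ann: Annotation, reader: str, classmates: set) -> bool:
        """Apply the sharing rules: owners always see their own notes."""
        if ann.author == reader:
            return True
        if ann.visibility is Visibility.CLASS:
            return reader in classmates
        return ann.visibility is Visibility.PUBLIC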

The project team, Andrew Kehoe and Matt Gee of the Research and Development Unit for English Studies at Birmingham City University, are developing the system through a number of iterations in the light of feedback from teachers and learners, and engaging participants at other institutions and in other disciplines to demonstrate its versatility.  The team is also exploring the possibility of integrating eMargin with VLEs, and its potential as an e-assessment tool; it may also have value in tracking the development of learners’ ideas in order to reduce opportunities for plagiarism.

The project runs until the end of May 2012, when source code, user guides and an archive of textual annotations will be available via the project site.  You can also visit their Facebook page.

Under development: Peer Evaluation in Education Review (PEER)

The Peer Evaluation in Education Review (PEER) project based here at the University of Strathclyde is one of five projects funded in the JISC Learning and Teaching Innovation Grants programme round 5.  Running from 1 June 2010 to 30 June 2011, the project explores a range of issues around the implementation and delivery of peer assessment within higher education.

PEER is led by David Nicol and Catherine Milligan, building on the highly influential Re-engineering Assessment Practices in Higher Education (REAP) project.  The interplay between the two projects is clear from the extensive information available through the umbrella site that links them, offering a wealth of information and resources around assessment, peer assessment and feedback.  The website is constantly under development, so it is well worth revisiting regularly to see the latest additions.

The project’s first phase involves the development of a framework for peer review and a detailed evaluation of existing peer review software.  A range of tools was evaluated against a list of desirable features, and the outcomes from this exercise are being added to the website for future reference.  Stage two involves a series of small-scale pilots across a range of subject areas and institutions; the project team are also very interested in hearing from others piloting peer review software, with a view to including them in this research activity.  The final phase will see the development of a number of resources, including guidelines on implementing peer review within a course of study and a literature review.

Unlike some LTIG projects, PEER limits its technical development to the work needed to integrate the systems chosen for the pilot phase with the institutional LMS, Moodle.  Both the PeerMark functionality within Turnitin and Aropa, developed by the University of Auckland, New Zealand, will be tested during the pilots.

WebPA and Moodle integration

WebPA continues to benefit from its lively and creative community, the latest development being a very elegant Moodle-WebPA plug-in developed by John Tutchings at Coventry University.

John has produced two videos to demonstrate the plug-in in action: the first illustrates the single sign-on across the two systems, which allows a WebPA course to be populated with students drawn from Moodle; the second demonstrates the migration of existing Moodle groups to WebPA, again using the single sign-on.
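
For a flavour of what the group-migration half involves, here is a rough Python sketch that pulls group lists out of Moodle via its standard web-services API.  The WebPA side is entirely hypothetical – the real plug-in runs inside Moodle itself, and create_webpa_group below is an invented stand-in rather than anything from John’s code.

    import requests

    MOODLE = "https://moodle.example.ac.uk/webservice/rest/server.php"
    TOKEN = "your-web-services-token"

    def moodle_call(function, **args):
        """Call a Moodle REST web-service function and return parsed JSON."""
        params = dict(wstoken=TOKEN, wsfunction=function,
                      moodlewsrestformat="json", **args)
        response = requests.get(MOODLE, params=params, timeout=30)
        response.raise_for_status()
        return response.json()

    def create_webpa_group(name):
        # Hypothetical stand-in: the real plug-in creates the
        # corresponding group inside WebPA itself.
        print(f"would create WebPA group: {name}")

    def migrate_groups(course_id):
        # core_group_get_course_groups is a standard Moodle
        # web-service function returning the groups in a course.
        for group in moodle_call("core_group_get_course_groups",
                                 courseid=course_id):
            create_webpa_group(group["name"])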

This plug-in is still at the beta stage, but anyone interested in helping to test it is welcome to contact John, who can be reached via his website.

WebPA Resource Pack now available

A Resource Pack designed to support those considering adopting the award-winning WebPA peer assessment system has been developed by the project team and is now available online for free download.  Various sections of the guide address different users – management, academic staff, learning technologists and IT support – and a range of resources is included.  This is an excellent example of the benefits of the strong and active community of practice built up around this project, and it will help to inform others considering adopting the system.

Moodle XML Converter

Moodle XML Converter is a simple, free, online tool for creating Moodle quizzes and glossaries from human-readable text files.  Developed by Olga Tikhonova, Yulia Ivanova and Alekzandr Ivanov at Yakutsk State University, the tool supports a number of item types (MCQ, MRQ, short answer, essay, description, true/false, cloze, numerical and order) as well as feedback and formatting.  The team have also set up a Google group to support the tool.
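
To show what the converter’s target format looks like, here is a minimal illustrative script that builds a Moodle XML multichoice question with Python’s standard library.  The input convention (a stem plus a list of option/correctness pairs) is invented for this sketch and is not the tool’s own syntax.

    import xml.etree.ElementTree as ET

    def mcq_to_moodle_xml(name, stem, options):
        """Build Moodle XML for one single-answer multichoice question.

        options is a list of (text, is_correct) pairs; exactly one
        option should be marked correct.
        """
        quiz = ET.Element("quiz")
        q = ET.SubElement(quiz, "question", type="multichoice")
        ET.SubElement(ET.SubElement(q, "name"), "text").text = name
        qtext = ET.SubElement(q, "questiontext", format="html")
        ET.SubElement(qtext, "text").text = stem
        for text, is_correct in options:
            answer = ET.SubElement(q, "answer",
                                   fraction="100" if is_correct else "0")
            ET.SubElement(answer, "text").text = text
        ET.SubElement(q, "single").text = "true"
        return ET.tostring(quiz, encoding="unicode")

    print(mcq_to_moodle_xml(
        "Example question",
        "Which tool converts text files to Moodle XML?",
        [("Moodle XML Converter", True), ("R2Q2", False)]))

The resulting XML can be imported directly through Moodle’s question-bank import screen, which is what makes a plain-text front end like this so convenient.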

In concept it’s similar to MCQFM, led by Steve Bennett of the University of Hertfordshire, which provided a human-readable method for creating QTI items and linked up with the University of Southampton’s R2Q2 renderer.

Qyouti: MCQ testing with QTI and scanners

Developed in response to frustration with existing high-stakes MCQ testing options, Qyouti combines IMS QTI with scanning technology to provide robust, inexpensive and flexible assessment.  It is now available for free download from the tool’s SourceForge site.

Jon Maber, who developed Qyouti for Leeds Metropolitan University, describes how it works:

Qyouti is software which takes an IMS QTI file containing questions, a class list and prints the questions on an ordinary colour laser printer with areas for the student responses to be made in pencil or pen. I.e. the responses are marked as crosses or ticks (or just about any other kind of mark) in boxes that are right next to the options in the question paper. Every page is bar-coded with the candidate’s name and ID so it is impossible to give the marks to the wrong person. At the end of the exam the papers are scanned with an ordinary desktop scanner.  Then Qyouti processes the scanned images and produces a list of candidates with their marks […]  Each individual script has metrics encoded on it using square barcodes and so there is potential for customising font and layout for candidates with visual impairment or dyslexia.  A proper statistical analysis is done on the question items too.
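
Qyouti itself does far more than this – layout, barcoding and optical mark recognition – but for a flavour of the QTI side, here is a minimal sketch of pulling the prompt and options out of a QTI 2.1 choice item with Python’s standard library.  Note that it’s an assumption that this matches the QTI version Qyouti consumes.

    import xml.etree.ElementTree as ET

    QTI = {"qti": "http://www.imsglobal.org/xsd/imsqti_v2p1"}

    def read_choice_item(path):
        """Extract the prompt and options from a QTI 2.1 choice item."""
        root = ET.parse(path).getroot()
        interaction = root.find(".//qti:choiceInteraction", QTI)
        prompt = interaction.findtext("qti:prompt", namespaces=QTI)
        options = [(choice.get("identifier"),
                    "".join(choice.itertext()).strip())
                   for choice in interaction.findall("qti:simpleChoice", QTI)]
        return prompt, options

From data like this, the printing stage can lay out one tick-box per option and pair each page with the candidate barcode Jon describes above.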

Jon is keen to find volunteers to test the tool and contribute to its further development, and is offering free staff training in the use of MCQs in return for significant contributions.  He can be contacted through his homepage.