A Word in Your Ear 2009

A Word in Your Ear 2009 – Audio Feedback is a one-day conference on the use of audio for providing assessment feedback to university students, being held at Sheffield Hallam University on Friday 18 December.  There has been some interesting work in this area recently, such as the JISC-funded Sounds Good project (Bob Rotheram, who led that project, is the keynote speaker at this event), and this event looks like an excellent opportunity to learn more about initiatives in this area.

Discussion on innovative ways of online assessing

There’s a lively discussion (beginning here) under way on the VLEs JISCMail discussion list around ‘innovative ways of online assessing’.  Although in some cases ‘innovative’ seems to be equated with ‘electronic’, there are some interesting activities and comments emerging:

  • MS Word documents uploaded and marked by comments; Google Docs may provide greater preservation of comments.
  • Ongoing submission process over the academic year managed by Moodle’s iterative assessment feature.
  • ePortfolios – although Emma Duke-Williams observes that ‘mostly… they’re used as summative – and staff, rather than student, controlled repositories.’
  • Awarding marks for posting in discussions on the VLE.
  • Wiki contributions.
  • Audio feedback.
  • Video feedback.
  • Peer assessment.
  • Self-assessment.
  • Need to think beyond just summative assessment to consider formative and diagnostic assessment.
  • Ways of seeing whether learners are engaging with feedback provided and acting on it.
  • Knowing one’s students.

Under development: Eric Shepherd’s assessment maturity model

Eric Shepherd of Questionmark has been developing an assessment maturity model ‘that provides a way for organisations’ executives and managers to monitor, chart and improve their use of assessments’, and has recently begun to formalise this in an online model.

The model separates the entire assessment process into three key areas (development, delivery and reporting), each comprising six measures (stakeholder satisfaction, security, strategic alignment, maturity, data management and communication).  Shepherd also identifies four phases of assessment maturity which can help identify the needs and requirements of an organisation (ad hoc, managed, refined and aligned).  Each of these elements is being, or will be, expanded and further developed as the model itself matures, with ongoing development focused on the project wiki.
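To give a more concrete feel for the shape of the model, here is a minimal sketch of how the areas, measures and phases might be represented in code.  The area, measure and phase names come from Shepherd’s description; everything else (the class, the field names and the roll-up rule) is my own illustrative assumption, not part of the model itself.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, Tuple

# The three key areas and six measures described above; the names follow
# Shepherd's model, but this representation is purely illustrative.
AREAS = ("development", "delivery", "reporting")
MEASURES = ("stakeholder satisfaction", "security", "strategic alignment",
            "maturity", "data management", "communication")

class Phase(Enum):
    """The four phases of assessment maturity in Shepherd's model."""
    AD_HOC = 1
    MANAGED = 2
    REFINED = 3
    ALIGNED = 4

@dataclass
class MaturityAssessment:
    """Hypothetical structure: one phase score per (area, measure) pair."""
    scores: Dict[Tuple[str, str], Phase] = field(default_factory=dict)

    def overall_phase(self) -> Phase:
        # Assumed roll-up rule: an organisation is only as mature as its
        # least mature area/measure combination.
        return min(self.scores.values(), key=lambda p: p.value)

# e.g. an organisation that is 'managed' everywhere except reporting/security
assessment = MaturityAssessment(
    {(a, m): Phase.MANAGED for a in AREAS for m in MEASURES})
assessment.scores[("reporting", "security")] = Phase.AD_HOC
print(assessment.overall_phase())  # Phase.AD_HOC
```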

There is also a great deal of useful information available on the site, such as learning resources to help assessment managers understand their own processes’ maturity and helpful links to relevant material.

The model is still in development but should already be of value to users, and will continue to be refined over time.

Thanks to Steve Lay for the heads up!

Turnitin win copyright decision in US

A group of students have finally lost their case against plagiarism detection service Turnitin with the judgement that Turnitin does not breach students’ copyright by retaining copies of submitted essays.  In particular, The Chronicle reports that

the district-court judge said Turnitin’s actions fell under fair use, ruling that the company ‘makes no use of any work’s particular expressive or creative content beyond the limited use of comparison with other works.’  He also said the new use ‘provides a substantial public benefit.’

While the reasoning is clear, I have always felt a bit uncomfortable about the tension between the undoubted benefits to be gained from Turnitin and the fact that, as a commercial company, Turnitin’s parent company iParadigms is undeniably profiting, albeit indirectly, from other people’s intellectual property.  However, institutions frequently claim IPR in their students’ work, and understandably see considerable benefits in requiring the submission of essays to Turnitin.

Plagiarism is undoubtedly a serious issue facing HE, but I think my biggest issue with this is the way in which so many institutions choose to use Turnitin exclusively as a tool to detect and evidence plagiarism.  Inherent in this approach is an assumption that all students may attempt to cheat.  Retaining papers in the database once they have been checked also suggests an assumption that students cannot be trusted not to pass their work on to others.

Other institutions take a more collaborative approach, allowing students to submit their work to Turnitin multiple times and using it as a teaching aid to help students avoid inadvertent plagiarism, develop their essay writing and argumentation skills, and understand their relationship to their research sources.  In this approach all students receive a direct benefit from the software, profiting significantly as learners and writers, and those who may be tempted to plagiarise intentionally may look for other ways to cheat instead.

Bridging the tool development gap

A post this morning on the WebPA discussion list raised an issue that I’ve long felt has a negative impact on the uptake of JISC project outputs: how to support the use of tools produced by JISC projects in an institutional environment that is not interested in supporting them.

WebPA is a great project success story, having been adopted by a number of institutions and winning a Bronze Award at the 2008 IMS Learning Impact awards.  It provides an innovative approach to peer assessment, allowing individual marks to be awarded to each participant in a piece of group work.  The system generated a lot of interest when the team presented it at a CETIS event last year.

The poster has identified WebPA as a possible tool to support his teaching, but says:

Unfortunately there is little if any chance of the application ever being hosted on my University servers, I won’t even waste my time trying to get this on their radar […] I should say that while I am not completely IT illiterate I am not going to install the application myself since this is well beyond my personal skill level.

This highlights what I feel is a gap in the project lifecycle: the support needed between the production of usable tools and enabling those outputs to be used in real educational contexts.  Although some lecturers have a high level of technical confidence and competence, this cannot be expected of the vast majority, and there seems to be a lack of support for those who are keen to use these innovative tools but lack the confidence or expertise to do so.  How do we encourage institutions to broaden their horizons and support those lecturers who wish to use what they feel are the best tools for their teaching practice?  The poster references commercial companies which host open source systems such as Moodle, but what about newer systems that lack the wide uptake that makes providing support and hosting services commercially attractive?

So who should be responsible for supporting projects after the end of their formal funding period, and supporting lecturers and institutions in using these tools?  We’ve addressed this issue before in relation to supporting emerging developer communities in an open source model, but what about tools that are ready for use in actual teaching practice?

Navigating through the competences maze

[Image: Relativity, by M. C. Escher]

Around 35 delegates struggled through Wednesday’s sweltering heat and the baffling mysteries of Manchester Metropolitan University Business School’s internal layout to discuss a range of issues around competences for learning, assessment and portfolio.  Delegates represented a wide range of knowledge and expertise, from novices looking to find out ‘what it’s all about’ to experienced practitioners and developers.

It was an impressively international turnout, with delegates from Norway, Greece, Austria, Spain and Belgium joining the UK contingent, mainly representing the iCoper project, which is exploring the linking of assessment with competences.  Assessment interests were also represented by the University of Southampton, who are working on the automatic construction of statements of competency from QTI XML, exploring the underlying modelling of competencies for machine processing.  The majority of delegates came from a strong (e)portfolio background, with interests in the movement of information into and out of eportfolios.  JISC and CETIS participants also highlighted the relevance of this work to JISC’s Curriculum Design projects.

The morning session featured a number of short presentations (all presentations from the day can be found here) on competences requirements in the field of medical education, an area which is relatively advanced in the use of competence frameworks.  Claire Hampshire (MMU), Julie Laxton (ALPS CETL), Karen Beggs (NHS Education for Scotland) and Jad Nijjar (iCoper and Synergetics) covered a range of topics, including the desire for non-hierarchic representations, the management of massive amounts of data, and the various points in a student’s career at which information can move from one system to another.  The ownership of data in portfolios, including competency information, is an ongoing issue that remains unclear, with at least three actors involved: the data subject, the data controller and the data processor.  Three main points of interoperability were identified: across time (for example, undergraduate to postgraduate), across specialities (for example, from psychiatry to gynaecology), and from elearning experiences to portfolios.

After coffee, Paul Horner (Newcastle University), Shane Sutherland (PebblePad), Dave Waller (MyKnowledgeMap) and Tim Brown (NHS Education for Scotland) delivered short presentations on various tools for handling competence information.  One key issue that emerged from this session was the strong need for a specification to enable the sharing of profiles between systems: while evidence can be exported as HTML, entire profiles cannot be moved between systems except in unwieldy formats such as PDFs.  Interoperability is needed for both import and export.  There is a noticeable move by customers away from monolithic approaches towards using a variety of (Web 2.0) tools, and developers are working on building open APIs to support this.
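No such specification exists yet, so purely to illustrate the kind of round trip the presenters were asking for, here is a sketch of a profile being exported from one system and re-imported by another as structured data rather than HTML or PDF.  Every field name below is hypothetical and invented for the example.

```python
import json

# Hypothetical competence profile: the structure and field names are invented
# for illustration only -- there was no agreed specification at the time.
profile = {
    "owner": "student-1234",
    "competences": [
        {
            # identifier minted by the defining authority
            "id": "http://example.org/frameworks/clinical/history-taking",
            "title": "Clinical history taking",
            "evidence": ["http://eportfolio.example.org/items/42"],
            "assessed_on": "2009-06-17",
        }
    ],
}

# Export from system A as structured data...
exported = json.dumps(profile, indent=2)

# ...and import into system B is simply the mirror operation, so nothing
# is lost in transit -- the property the presenters were asking for.
imported = json.loads(exported)
assert imported["competences"][0]["title"] == "Clinical history taking"
```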

What struck me most from both sessions was the way in which developments around eportfolios and competence recording are very firmly rooted in actual teaching and learning practice, with requirements emerging directly from real-world practice and tool developments directly benefiting teachers and learners.

In the afternoon the meeting split into four groups, ostensibly to work on identifying and representing information structures for a possible competences specification.  In practice, my group spent most of our time ranging widely over the whole area of competences, eportfolios and assessment, but as a newbie in this field I found this hugely helpful.  Overall conclusions from the groups identified the following requirements and issues (a rough sketch of what such a record might look like follows the list):

  • ability to transfer information between different tools and systems
  • transition
  • curriculum progression pathways
  • relationship between competences and evidence versus qualifications
  • repeatable pattern of description at the core
  • fairly simple structure
  • identifiers for defining authority
  • a definable core structure enables extension for extra semantics
  • able to express the relationship between a learning object and skills, competences and knowledge
  • collection of outcomes
  • architectural issues: data is created and needed in many locations instead of at a central point
  • competences are highly context dependent

The meeting concluded by asking delegates what they would like CETIS to focus on in taking forward work on competences.  Suggestions included:

  • development of a data model
  • business case for interoperability
  • look beyond HE/FE to workplace standards, particularly in the HR domain
  • look for connections to the HIRA progress reports due out by November
  • look at what has failed so far in order to learn from past experiences
  • look at defining competences in such a way that a specification can be combined with XCRI
  • have loosely defined competences that can be moved between systems
  • need a high level map of the competency domain in comparison with curriculum description and learning objects.

CETIS will be looking at how best we can take this work forward and, as always, we very much welcome input and suggestions from our community – please feel free to leave comments here, follow up via the wiki or contact Simon or me!

More on MCQs

There’s been an interesting discussion over the last couple of days on the Computer Assisted Assessment JISCMail list around delivery of multiple choice questions. 

The question of how long should be allowed for multiple choice questions produced a consensus of around ‘a minute per question plus a wee bit’ for a ‘typical’ MCQ, though difficulty level, negative marking or more sophisticated question types would all affect this.  Sandra Gibson cited research by Case and Swanson which suggests that

good students know the answers and … select the right one in very little time (seconds), poor students try and reason out the answers which takes longer. It depends how long you want to give the poorer students to try to work it out, which then impacts on the validity, reliability and differentiation of your assessment.
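As a rough worked example of that rule of thumb: the ‘wee bit’ was not quantified in the discussion, so the 25% uplift below is purely my own assumption.

```python
def estimated_test_minutes(num_questions: int,
                           minutes_per_question: float = 1.0,
                           uplift: float = 0.25) -> float:
    """'A minute per question plus a wee bit' -- the uplift figure is assumed."""
    return num_questions * minutes_per_question * (1.0 + uplift)

# e.g. a 40-item paper of 'typical' MCQs comes out at about 50 minutes;
# negative marking or more complex item types would push this up further.
print(estimated_test_minutes(40))  # 50.0
```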

Discussion broadened to cover the issue of sequential delivery, i.e. when a candidate is unable to return to a question and revise their response once they have moved on to the next question in the test.  There were some compelling educational arguments in favour of this, for example a series of questions building on, or even containing the answers to, previous questions; and some less satisfactory justifications, such as technical limitations of the delivery software.  Fascinatingly, a number of posters reported the same (sadly anecdotal) finding that where students do revise a response, the likelihood is that they have changed a correct answer to a wrong one.  It was also noted that tests which do not permit candidates to revise their responses need a shorter maximum time than those that do.

It’s a good discussion that’s still going on, so well worth following or contributing to!

‘Assessment when ready’ failing pupils?

The Guardian is reporting that Single Level Tests, the replacement for the controversial Sats exams that has been piloted over the last eighteen months, are plagued with ‘substantial and fundamental’ problems.  The tests, which pupils can take ‘when ready’ at any age between seven and fourteen as part of the larger personalisation agenda, produced what the Guardian calls ‘extraordinary results’, with primary school pupils consistently outperforming those in secondary school in certain areas.

This variation in performance across age groups is explained by the fact that the tests themselves are based on the primary school curriculum, which younger pupils have freshly been taught while older pupils have forgotten much by the time they sit the tests.  This is a fundamental flaw in these tests which raises a number of questions around the area of assessment when ready and assessment on demand; it is ironic that a system intended to recognise individual needs and abilities could actually undermine individual performance.

IMS withdraw QTI v2.1 draft specification

Over the last few days a new notice has appeared on the IMS Question and Test Interoperability webpage in place of the QTI v2.1 draft specification:

The IMS QTIv2.1 draft specification has been removed from the IMS website. Adequate feedback on the specification has not been received, and therefore, the specification has been put back into the IMS project group process for further work.

QTI v2.1 was under public review for more than 2 years and did not achieve sufficient implementation and feedback to warrant being voted on as a final specification. Therefore it has been withdrawn for further work by the IMS membership. IMS cannot continue to publish specifications that have not met the rigors of the IMS process.

IMS GLC has convened a set of leading organizations to take the lead on this new work – which will be considered to be in the CM/DN draft phase in the IMS process.  Therefore, we are very encouraged and hopeful that a new version will be available in due time, possibly a QTI v2.2, along with the necessary conformance profiles. However, we cannot assume that it will be a linear evolution from QTI v2.1.

Until that time the only version of QTI that is fully endorsed by IMS GLC is v1.2.1, that is supported under the Common Cartridge Alliance: http://www.imsglobal.org/cc/alliance.html . While QTI version 2.0 has been voted on as a final specification by the IMS members, its deficiencies are well known and IMS does not recommend implementation of it.

This was completely unexpected, not only for us at CETIS but also for a number of commercial and academic developers who have been working with the specification, as can be seen from posts to the technical discussion list hosted by UCLES.  In particular, I’d encourage you to read Wilbert’s response on behalf of CETIS.

Concerns from the developer community addressed a number of the issues raised in IMS’s statement.  In response to the claim that ‘adequate feedback on the specification has not been received’, several commentators argued that this simply reflects the high standard of the specification, which left implementers with little to report; while the suggestion that ‘QTI v2.1 … did not achieve sufficient implementation … to warrant being voted on as a final specification’ sparked the addition of a number of implementations to Wikipedia’s QTI page.

There is agreement that work will progress on the basis of the public draft, so it is still perfectly possible that the outcome will be a mildly amended version of that draft, with some small profiles.

CETIS will be following this up, and will of course keep you all informed about progress.  In the meantime, we’d be very keen to hear any thoughts or comments you have.  I would encourage you to sign up for both the UCLES list and the official IMS QTI list to ensure your voice is heard as widely as possible, although I feel it would be most beneficial for the wider QTI community if discussion were focused in one place, i.e. the UCLES list.