Inaugural Open Badges (Scotland) Working Group Meeting

Bill Clinton isn’t the only one creating a buzz about the open badges movement at the moment. Perhaps with slightly less coverage than the Clinton initiative, yesterday saw the first (Scottish) open badges working group meeting.

Organised by Grainne Hamilton at RSC Scotland, following the success of their recent Open Badges Design Day, the meeting was very well attended by a group of really enthusiastic practitioners from across the Scottish education sector, many of whom are already implementing badges. There was also good representation from key agencies such as the SQA and the Colleges Development Network.

What struck me about the meeting was how much real buy-in and activity there was for badges, from schools to colleges to universities. Whilst there was a lot of diversity in approaches (most people implementing badges are still at the pilot stage), there were also a number of common themes of interest for future developments, including badges for staff development purposes and the sharing of experiences of implementing “badging” through VLEs, in particular Moodle and Blackboard.

One of the great selling points of badges is their potential to bridge the gap between achievement and the attainment of formal qualifications, giving people (and in particular students) more opportunities to present accomplishments which aren’t recognised through formal qualifications. This was a prime motivator for many at the working group: they want to give students more ways to showcase/sell themselves to potential employers, without having to rely solely on formal qualifications. This of course links to developments around e-portfolios.
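Part of that bridging power comes from the fact that a badge is more than an image: behind it sits a machine-readable assertion tying the achievement to an earner, an issuer's criteria and, optionally, evidence. A rough sketch in Python (field names follow the Open Badges specification of the time; all URLs and identifiers are hypothetical):

```python
import json

# A rough sketch of an Open Badges assertion: the machine-readable record
# that links a specific badge (which in turn describes criteria and issuer)
# to a specific earner and their evidence. All URLs/identifiers are invented.
assertion = {
    "uid": "abc123",
    "recipient": {
        "type": "email",
        "identity": "learner@example.ac.uk",
        "hashed": False,
    },
    "badge": "https://example.ac.uk/badges/staff-development.json",
    "evidence": "https://example.ac.uk/portfolio/learner/project",
    "issuedOn": "2013-02-14",
    "verify": {
        "type": "hosted",
        "url": "https://example.ac.uk/assertions/abc123.json",
    },
}

print(json.dumps(assertion, indent=2))
```

It is that `evidence` link, pointing at real work rather than just a grade, that makes badges attractive for showcasing achievements formal qualifications miss.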

There was also a lot of interest in using badges for staff development within colleges and universities. RSC Scotland is already paving the way in this respect, as they have developed a range of badges for their online courses and events, and a number of colleges are beginning to use badges for staff development activities.

Over the coming months a number of sub-groups will be forming around some of the key areas identified at yesterday’s meeting, setting up a shared workspace and, most importantly, sharing their work with each other, the wider working group and, of course, the rest of the community.

If yesterday afternoon was anything to go by, there will be lots more to share around the development and implementation of badges. I’m certainly looking forward to being part of this exciting new group, and thanks again to Grainne and Fionnuala and the RSC for bringing this group together and their commitment to supporting it over the coming year.

Badges? Certificates? What counts as succeeding in MOOCs?

Oops, I did it again. I’ve now managed to complete another MOOC, bringing my completion count to a grand total of 3 (the non-completion number is quite a bit higher, but more on that later). And I now have 6 badges from #oldsmooc and a certificate (or “statement of accomplishment”) from Coursera.

My #oldsmooc badges

Screenshot of Coursera record of achievement

But what do they actually mean? How, if ever, will/can I use these newly gained “achievements”?

Success, and how it is measured, continues to be one of the “known unknowns” for MOOCs. Debate (hype) about success is heightened by the now recognised and recorded high drop-out rates. If “only” 3,000 registered users complete a MOOC then it must be failing, mustn’t it? If you don’t get the certificate/badge/whatever then you have failed. Well, in one sense that might be true – if you take completion to equate with success. For a movement that is supposed to be revolutionising the (HE) system, the initial metrics some of the big xMOOCs are measuring and being measured by are pretty traditional. Some of the best-known successes of recent years have been college “drop outs”, so why not embrace that difference and the flexibility that MOOCs offer learners?

Well, possibly because doing really new things and introducing new educational metrics is hard, and even harder to sell to venture capitalists, who don’t really understand what is “broken” with education. Even those who supposedly do understand education, e.g. governments, find any change to educational metrics (and in particular assessments) really hard to implement. In the UK we have recent examples of this with Michael Gove’s proposed changes to GCSEs, and in Scotland the introduction of the Curriculum for Excellence has been a pretty fraught affair over the last five years.

At the recent #unitemooc seminar at Newcastle, Suzanne Hardy told us how “empowered” she felt by not submitting a final digital artefact for assessment. I suspect she was not alone. Suzanne is confident enough in her own ability not to need a certificate to validate her experience of participating in the course. Again I suspect she is not alone. From my own experience I have found it incredibly liberating to be able to sign up for courses at no risk (cost) and then equally have no guilt about dropping out. It would mark a significant sea change if there was widespread recognition that not completing a course didn’t automatically equate with failure.

I’ve spoken to a number of people in recent weeks about their experiences of #oldsmooc and #edcmooc, and many of them have, in their own words, “given up”. But as discussion has gone on, it has become apparent that they all gained something from even cursory participation, either in terms of their own thinking about possible involvement in running a MOOC-like course, or in realising that although MOOCs are free, they still demand the same time commitment as a paid course.

Of course I am very fortunate that I work and mix with a pretty well-educated bunch of people, who are for the most part really interested in education and hold all the recognised achievements of a traditional education. They are also digitally literate and confident enough to navigate the massive online social element of MOOCs, and they probably don’t need any more validation of their educational worth.

But what about everyone else? How do you start to make sense of the badges and certificates you may or may not collect? How can you control the way you show these to potential employers/universities as part of any application? Will they mean anything to those not familiar with MOOCs – which is actually the vast majority of the population? I know there are some developments in California in terms of trying to get some MOOCs accredited into the formal education system – but it’s at a very early stage.

Again based on my own experience, I was quite strategic in terms of #edcmooc: I wrote a reflective blog post for each week, which I was then able to incorporate into my final artefact. But actually the blog posts were of much more value to me than the final submission or indeed the certificate (though I do like the spacemen). I have seen an upward trend in my readership, and more importantly I have had lots of comments and pingbacks. I’ve been able to combine the experience with my own practice.

Again, I’m very fortunate in being able to do this. In so many ways my blog is my portfolio, which brings me, in a very convoluted way, to the point of this post. All this MOOC-ery has really started me thinking about e-portfolios. I don’t want to use the default Coursera profile page (partly because it shows the course I have taken and “not received a certificate” for), but more importantly it doesn’t allow me to incorporate other, non-Coursera courses, or my newly acquired badges. I want to control how I present myself. This relates quite a lot to some of the thoughts I’ve had about using Cloudworks and my own educational data. Ultimately, I think what I’ve been alluding to there is the development of a user-controlled e-portfolio.

So I’m off to think a bit more about that for the #lak13 MOOC. Then Lorna Campbell is going to start my MOOC de-programming schedule. I hope to be MOOC free by Christmas.

eAssessment Scotland – focus on feedback

Professor David Boud got this year’s eAssessment Scotland Conference off to a great start with his “new conceptions of feedback and how they might be put into practice” keynote presentation, by asking the fundamental question “what is feedback?”

David’s talk centred on what he referred to as the “three generations of feedback”, and was a persuasive call to arms for educators to move from the “single loop” or “control system” industrial model of feedback to a more open, adaptive system where learners play a central and active role.

In this model, the role of feedback changes from being passive to one which helps students develop their own judgement, standards and criteria – capabilities which are key to success outside formal education too. The next stage from this is to create feedback loops which are pedagogically driven and considered from the start of any course design process. Feedback becomes part of the whole learning experience and not just something vaguely related to assessment.

In terms of technology, David gave a familiar warning that we shouldn’t let digital systems allow us to do more “bad feedback more efficiently”. There is a growing body of research around developing the types of feedback loops David was referring to. Indeed, the current JISC Assessment and Feedback Programme is looking at exactly the issues brought up in the keynote, and is based on the outcomes of previously funded projects such as REAP and PEER. The presentation from the interACT project, which I went to immediately after the keynote, gave an excellent overview of how JISC funding is allowing the Centre for Medical Education in Dundee to re-engineer its assessment and feedback systems to “improve self, peer and tutor dialogic feedback”.

During the presentation the team illustrated the changes to their assessment /curriculum design using an assessment time line model developed as part of another JISC funded project, ESCAPE, by Mark Russell and colleagues at the University of Hertfordshire.

Lisa Gray, programme manager for the Assessment and Feedback programme, then gave an overview of the programme, including a summary of the baseline synthesis report, which gives a really useful summary of the issues the projects (and the rest of the sector) are facing in terms of changing attitudes, policy and practice in relation to assessment and feedback. These include:
* formal strategy/policy documents lag behind current development
* educational principles are rarely enshrined in strategy/policy
* learners are not often actively engaged in developing practice
* assessment and feedback practice doesn’t reflect the reality of working life
* admin staff are often left out of the dialogue
* traditional forms of assessment still dominate
* timeliness of feedback is still an issue.

More information on the programme and JISC’s work in the assessment domain is available here.

During the lunch break I was press-ganged/invited to take part in the live edutalk radio show being broadcast during the conference. I was fortunate to be part of a conversation with Colin Maxwell (@camaxwell), lecturer at Carnegie College, where we discussed MOOCs (see Colin’s conference presentation) and feedback. As the discussion progressed we talked about the different levels of feedback in MOOCs. Given the “massive” element of MOOCs, how and where does effective feedback and engagement take place? What are the affordances of formal and informal feedback? As I found during my recent experience with the #moocmooc course, social networks (and in particular twitter) can be equally heartening and disheartening.

I’ve also been thinking more about the subsequent twitter analysis Martin has done of the #moocmooc twitter archive. On the one hand, I think these network maps of twitter conversations are fascinating and allow the surfacing of conversations, potential feedback opportunities etc. But, on the other, they only surface the loudest participants – who are probably the most engaged, self directed etc. What about the quiet participants, the lost souls, the ones most likely to drop out? In a massive course, does anyone really care?

Recent reports of plagiarism, and failed attempts at peer assessment in some MOOCs have added to the debate about the effectiveness of MOOCs. But going back to David Boud’s keynote, isn’t this because some courses are taking his feedback mark 1, industrial model, and trying to pass it off as feedback mark 2 without actually explaining and engaging with students from the start of the course, and really thinking through the actual implications of thousands of globally distributed students marking each others work?

All in all, it was a very thought-provoking day, with two other excellent keynotes: Russell Stannard sharing his experiences of using screen capture to provide feedback, and Cristina Costa on her experiences of network feedback and feeding forward. You can catch up on all the presentations and join in the online conference, which is running for the rest of this week, at the conference website.

Enhancing engagement, feedback and performance webinar

The latest webinar from the JISC Assessment and Feedback programme will take place on 23 July (1-2pm) and will feature the SGC4L (Student Generated Content for Learning) project. Showcasing the PeerWise online environment, the project team will show participants how it can be used by students to generate their own original assessment content in the form of multiple-choice questions. The team will discuss their recent experiences of using the system to support teaching on courses at the University of Edinburgh, and the findings of the project. The webinar will include an interactive session offering participants the opportunity to get first-hand experience of interacting with others via a PeerWise course set up for the session.

Further details and links to register for this free webinar are available by following this link.

Binding explained . . . in a little over 140 characters

Finding common understandings is a perennial issue for those of us working in educational technology, and the lack of understanding between techies and non-techies is something we all struggle with. Explaining to some of the developers I used to work with the difference between formative and summative assessments became something of an almost daily running joke. Of course it works the other way round too, and yesterday I was taken back to the days when I first came into contact with the standards world and its terminology, and in particular ‘bindings’.

I admit that for a while I really didn’t have a scoobie about bindings – what they were, what they did, etc. Best practice documentation I could get my head around, and I would generally “get” an information model – but bindings, well, that’s serious techie stuff, and I will admit to nodding a lot whilst conversations took place around me about these mysterious “bindings”. However, I did eventually get my head around them and their purpose.

Yesterday I took part in a catch up call with the Traffic project at MMU (part of the current JISC Assessment and Feedback programme). Part of the call involved the team giving an update on the system integrations they are developing, particularly around passing marks between their student record system and their VLE, and the development of bindings between systems came up. After the call I noticed this exchange on twitter between team members Rachel Forsyth and Mark Stubbs.

I just felt this was worth sharing as it might help others get a better understanding of another piece of technical jargon in context.
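To put the jargon in context another way: an information model says what data a record contains, while a binding says exactly how that data is written down so two systems can exchange it. A minimal sketch (all field names invented) of the same mark record under two different bindings:

```python
import json
import xml.etree.ElementTree as ET

# Illustrative only: the same abstract information model (a student's mark
# for a module) expressed through two different bindings.
mark = {"student_id": "S123456", "module": "ED101", "score": 72}

# JSON binding: the model serialised for, say, a REST web service
# between a VLE and a student record system.
json_binding = json.dumps(mark)

# XML binding: the same model serialised as XML, the way a specification
# such as QTI defines its bindings.
root = ET.Element("mark", studentId=mark["student_id"], module=mark["module"])
root.text = str(mark["score"])
xml_binding = ET.tostring(root, encoding="unicode")

print(json_binding)
print(xml_binding)
```

Same information, two bindings; agreeing the binding is what lets the student record system and the VLE actually pass marks to each other.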

Couple of updates from the JISC Assessment and Feedback Programme

As conference season is upon us projects from the current JISC Assessment and Feedback programme are busy presenting their work up and down the country. Ros Smith has written an excellent summary post “Assessment and Feedback: Where are we now?” from the recent International Blended Learning Conference where five projects presented.

Next week sees the CCA Conference and there is a pre-conference workshop where some of the assessment related standards work being funded by JISC will be shared. The workshop will include introductions to:

• A user-friendly editor called Uniqurate, which produces questions conforming to the Question and Test Interoperability specification, QTIv2.1,

• A way of connecting popular VLEs to assessment delivery applications which display QTIv2.1 questions and tests – this connector itself conforms to the Learning Tools Interoperability specification, LTI,

• A simple renderer, which can deliver basic QTIv2.1 questions and tests,

• An updated version of the comprehensive renderer, which can deliver QTIv2.1 questions and tests and also has the capability to handle mathematical expressions.

There will also be demonstrations of the features of the QTI Support site, to help users to get started with QTI.
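For anyone new to the specification, it may help to see the shape of the content these tools author, store and render. Below is a much-simplified, hypothetical QTI v2.1 multiple-choice item, along with the kind of parsing a simple renderer would do; real items also carry response declarations and response processing, omitted here:

```python
import xml.etree.ElementTree as ET

# A deliberately minimal (not schema-complete) QTI v2.1 choice item.
QTI_ITEM = """\
<assessmentItem xmlns="http://www.imsglobal.org/xsd/imsqti_v2p1"
                identifier="demo001" title="Demo question"
                adaptive="false" timeDependent="false">
  <itemBody>
    <choiceInteraction responseIdentifier="RESPONSE" maxChoices="1">
      <prompt>Which specification describes this item?</prompt>
      <simpleChoice identifier="A">QTI 2.1</simpleChoice>
      <simpleChoice identifier="B">LTI</simpleChoice>
    </choiceInteraction>
  </itemBody>
</assessmentItem>
"""

# Roughly what a basic renderer does first: pull out the prompt and the
# choices from the item body before displaying them to the candidate.
NS = {"qti": "http://www.imsglobal.org/xsd/imsqti_v2p1"}
item = ET.fromstring(QTI_ITEM)
prompt = item.find(".//qti:prompt", NS).text
choices = [c.text for c in item.findall(".//qti:simpleChoice", NS)]
print(prompt)
print(choices)
```

Because every tool in the chain reads and writes this one format, an item authored in one application can be stored and delivered by entirely different ones.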

The workshop will also provide an opportunity to discuss participants’ assessment needs and to look at the ways these might be addressed using the applications we have available and potential developments which could be part of future projects.

If you are interested in attending the conference, email Sue Milne with your details as soon as possible.

Design Studio update: Transforming Assessment and Feedback

Those of you who regularly read this blog will (hopefully) have noticed lots of mentions of and links to the Design Studio. Originally built as a place to share outputs from the JISC Curriculum Design and Delivery Programmes, it is now being extended to include outputs from a number of other JISC funded programmes.

The Transforming Assessment and Feedback area of the Design Studio now has a series of pages which form a hub for existing and emergent work of significant interest on assessment and feedback. Under a series of themes, you can explore what this community currently knows about enhancing assessment and feedback practice with technology, find links to resources, and keep up to date with outputs from the Assessment and Feedback and other current JISC programmes.

Assessment and Feedback themes/issues wordle

This is a dynamic set of resources that will be updated as the programme progresses. Follow this link to explore more.

My memory of eAssessment Scotland

Along with around another 270 people, I attended the eAssessment Scotland Conference on 26 August at the University of Dundee. It was a thought-provoking day, with lots of examples of innovative approaches to assessment within the sector.

Steve Wheeler got the day off to a great start, talking us through some of the “big questions” around assessment – for example, is it knowledge or wisdom that we should be assessing, and what are the best ways to do this? Steve also emphasised the evolving nature of assessment and the need to share best practice, and introduced many of us to the term “ipsative assessment”. The other keynotes complemented this big-picture view, with Becka Coley sharing her experiences of the student perspective on assessment and Pamela Kata taking us through some of the really innovative serious games work she is doing with medical students. The closing keynote from Donald Clark went back to some of the more generic issues around assessment, in particular assessment in schools and the current UK government’s obsession with maths.

There is some really great stuff going on in the sector, and there is a growing set of tools and, more importantly, evidence of the impact of using e-assessment techniques (as highlighted by Steve Draper, University of Glasgow). However, it does still seem quite small scale. As Peter Hartley said, e-assessment does seem to be a bit of a cottage industry at the moment, and we really need more institution-wide buy-in for things to move up a gear. I particularly enjoyed the wry, slightly self-deprecating presentation from Malcolm MacTavish (University of Abertay Dundee) about his experiments with giving audio feedback to students. Despite his now being able to evidence the impact of audio feedback and show that there were some cost efficiencies for staff, the institution has implemented a written-feedback-only policy.

Perhaps we are on the cusp of a breakthrough, and certainly the new JISC Assessment and Feedback programme will allow another round of innovative projects to get some more institutional traction.

I sometimes joke that twitter is my memory of events – an “I tweet therefore I am” mentality :-) And those of you who read my blog will know I have experimented with the Storify service for collating tweets from events. But for a change, here is my twitter memory of the day via the memolane service.

Assessment and Feedback – the story from 2 February

Many thanks to my colleague Rowin Young and the Making Assessment Count project at the University of Westminster for organising a thoroughly engaging and thought-provoking event around assessment and feedback yesterday. I just got my Storify invite through this morning, so to give a flavour of the day, here is a selected tweet story.

A capital day for assessment projects

Last Monday CARET, University of Cambridge hosted a joint workshop for the current JISC Capital Programme Assessment projects. The day provided an opportunity for the projects to demonstrate how the tools they have been developing work together to provide the skeleton of a complete assessment system from authoring to delivery to storage. Participants were also encouraged to critically review progress to date and discuss future requirements for assessment tools.

Introducing the day, Steve Lay reminded delegates of some of the detail of the call under which the projects had been funded. This included a focus on “building and testing software tools, composite applications and/or implementing a data format and standards to a defined specification” – in this case QTI. The three funded projects have built directly on the outcomes of previous toolkit and demonstrator activities of the e-framework.

The morning was given over to demos from the three teams – from Kingston, Cambridge and Southampton Universities respectively – showing how their tools interoperated: authoring a question in AQuRAte, then storing it in Minibix, and finally delivering it through ASDEL.

Although the user interfaces still need a bit of work, the demo did clearly show that a standards-based approach leads to interoperable systems, and that the shorter, more iterative development funding cycle introduced by JISC can actually work.

In the afternoon there were two breakout sessions, one dealing with the technical issues around developing and sustaining an open source community, the other looking at innovations in assessment. One message that came through from both sessions was the need for more detailed feedback on which approaches and technologies work in the real world – perhaps some kind of gap analysis between the tool-set we have just now and the needs of the user community, combined with more detailed use cases. I think this approach would certainly help to roadmap future funding calls in the domain, as well as helping to inform actual practice.

From the techie side of the discussion there was a general feeling that there is still a lot of uncertainty about the development of an open source community. How/will/can the 80:20 rule of useful code be reversed? The JISC open source community is still relatively immature, and the motivations for being part of it are generally that developers are paid to be part of it – not that it is the best option. There was a general feeling that more work is needed to help develop, extend and sustain the community, and that it is at quite a critical stage in its life-cycle. One suggestion to help with this was the need for a figurehead to lead the community – so if you fancy being Mr/Mrs QTI, do let us know :-)

More notes from the day are available on the projects’ discussion list.