Here Be Dragons

It’s something of a rarity for me to go to a conference or meeting in Glasgow, so I was very glad that I managed to get to the JISC RSC Scotland Annual Conference, “Here Be Dragons”, last Friday (8 June). It was a thoroughly engaging, entertaining and educational event covering topics from cutting edge neuroscience research to mind-reading.

Congratulations to all colleagues at the RSC for organising such a great event, which gave colleagues from Scottish colleges and universities the opportunity to come together, be inspired by future developments through the keynotes and sessions, and share and celebrate their experiences of using technology in education through the iTech Awards. I was also delighted that the ExamView project from the JISC DVLE programme (which CETIS has been supporting) won the highly commended award in the Assessment category.

Below is a Twitter summary of the event.

[View the story “Here Be Dragons – JISC RSC Scotland annual conference” on Storify]

Relating IMS Learning Design to web 2.0 technologies

Last week I attended the “Relating IMS Learning Design to web 2.0 technologies” workshop at the EC-TEL conference. The objectives of the workshop were to explore what has happened in the six years since the release of the specification, both in terms of developments in technology and pedagogy, and to discuss how (and indeed whether) the specification can keep up with these changes.

After some of the discussions at the recent IMS meeting, I felt this was a really useful opportunity to redress the balance and spend some time reflecting on what the spec was actually intended for and how web 2.0 technologies are now actually enabling some of the more challenging parts of its implementation – particularly the integration of services.

Rob Koper (OUNL) gave the first keynote presentation of the day, starting by taking us all back to basics and reminding us of the original intention of the specification: to create a standardized description of adaptive learning and teaching processes that take place in a computer-managed course (the LD manages the course, not the teacher). Learning and support activities, not content, are central to the experience.

The spec was intentionally designed to be as device neutral as possible, to provide an integrative framework for a large number of standards and technologies, and to allow a course to be “designed” once (in advance of the actual course) and run many times with minimal changes. The spec was never intended to handle just-in-time learning scenarios, or situations where little automation of online components (such as time-based activities) is necessary.
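
To make the “designed once, run many times” idea a bit more concrete, here is a very rough sketch of the shape of a unit of learning – roles and activities declared up front, with a method of plays, acts and role-parts that the runtime steps through. It is written in Python rather than the spec’s XML binding, and the class and field names are my own simplification of the spec’s vocabulary, not its actual elements.

```python
from dataclasses import dataclass

# Very rough sketch (my own simplification, not the IMS LD XML binding):
# roles and activities are declared once; a method of plays -> acts ->
# role-parts says which role does which activity, and in what order.

@dataclass
class Role:
    identifier: str            # e.g. "learner" or "staff"

@dataclass
class Activity:
    identifier: str
    description: str           # the activity, not the content, is the unit

@dataclass
class RolePart:
    role: Role
    activity: Activity

@dataclass
class Act:
    role_parts: list           # role-parts within an act happen together

@dataclass
class UnitOfLearning:
    roles: list
    activities: list
    plays: list                # method: plays in parallel, acts in sequence

    def run(self):
        """'Designed once, run many times': the design drives the course."""
        for play in self.plays:
            for act in play:
                for rp in act.role_parts:
                    print(f"{rp.role.identifier}: {rp.activity.description}")

# Example: one learner role working through two sequential activities.
learner = Role("learner")
read = Activity("a1", "read the briefing document")
post = Activity("a2", "post a response to the forum")
uol = UnitOfLearning(
    roles=[learner],
    activities=[read, post],
    plays=[[Act([RolePart(learner, read)]), Act([RolePart(learner, post)])]],
)
uol.run()
```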

However, as Rob pointed out, many people have tried to use the spec for things it was really never intended to do. It wasn’t built to manage highly adaptive courses. It wasn’t intended for courses where teachers were expected to “manage” every aspect of the course.

These misunderstandings are, in part, responsible for some of the negative feelings about the spec from some sectors of the community. However, it’s not quite as simple as that. Lack of usable tools, technical issues with integrating existing services (such as forums), the lack of meaningful use cases, political shenanigans in IMS, and indeed the enthusiasm of potential users to extend the spec for their own learning and teaching contexts have all played a part in initial enthusiasm being replaced by frustration, disappointment and eventual disillusionment.

It should be pointed out that Rob wasn’t suggesting that the specification was perfect and that there had just been a huge misinterpretation by swathes of potential users. He was merely pointing out that some criticism has been unfair. He did suggest some potential changes to the specification, including incorporating dynamic group functionality (although it isn’t really clear if that is a spec or run-time issue), and minor changes to some of the elements, particularly moving some of the attribute elements from properties to method. However, at this point in time there doesn’t seem to be a huge amount of enthusiasm from IMS to set up an LD working group.

Bill Olivier gave the second keynote of the day, reflecting on “where are we now and what next?”. Using various models, including the Gartner hype cycle, Bill reflected on the uptake of IMS LD and explored whether it was possible to get it out of the infamous trough of disillusionment and onto the plateau of productivity.

Bill gave a useful summary of his analysis of the strengths and weaknesses of the spec. Strengths included:
* learning flow management
* multiple roles for multiple management
* powerful event-driven declarative programming facilities (see the sketch after this list)
Weaknesses included:
* limited services
* the spec is large and monolithic
* it is hard to learn and hard to implement
* it doesn’t define a data exchange mechanism or an engine output XML schema
* there is no spec for service instantiation and set-up
* it is hard to ensure interoperability
* run-time services are difficult to set up
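
The “event-driven declarative programming” strength refers to the Level B-style properties and conditions: a designer declares rules such as “when this property reaches this value, reveal that activity” rather than writing procedural code. Below is a purely illustrative analogy in Python – it is not IMS LD syntax, and the property and rule names are made up for the example.

```python
# Illustrative analogy only (not IMS LD syntax): Level B-style properties
# and conditions let a designer *declare* rules; the runtime applies them
# whenever a property changes, rather than the designer scripting each step.
properties = {"quiz-score": 0, "feedback-visible": False}

# Each rule pairs a condition over the properties with the change to apply
# when that condition becomes true.
rules = [
    (lambda p: p["quiz-score"] >= 70, {"feedback-visible": True}),
]

def on_property_change(name, value):
    """Re-evaluate the declared rules whenever a property changes."""
    properties[name] = value
    for condition, effect in rules:
        if condition(properties):
            properties.update(effect)

on_property_change("quiz-score", 85)
print(properties)  # {'quiz-score': 85, 'feedback-visible': True}
```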

Quite a list! So, is there a need to modularize the spec, or add a series of “speclets”, to allow for a greater range of interoperable tools and services? Would this solve the “service paradox”, where if you have maximum interoperability you are likely to have few services, whereas for maximum utility you need many services?

Bill then outlined where he saw web 2.0 technologies as being able to contribute to greater use of the specification. Primarily this would involve making IMS LD appear less like programming, through easier and better integration of authoring and runtime environments. Bill highlighted the work that the TENCompetence team at the University of Bolton have been doing around widget integration, and the development of the Wookie widget server in particular. In some ways this does begin to address the service paradox, in that it is a good example of how to instantiate once and run many services. Bill also highlighted that, alongside technological innovations, more (market) research really needs to be done on the institutional and human constraints around implementing what is still a high-risk technological innovation into existing processes. There is still no clear consensus around where an IMS LD approach would be most effective. Bill also pointed out the need for more relevant use cases and player views – something I commented on almost a year ago too.
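
To give a flavour of what “instantiate once, run many services” can look like in practice, here is a hypothetical sketch of a runtime asking a widget server for a widget instance over HTTP and getting back a URL to embed for each user. The endpoint and parameter names here are assumptions for illustration, loosely modelled on how Wookie-style widget servers behave, rather than the documented Wookie API.

```python
import urllib.parse
import urllib.request

# Hypothetical sketch: an LD runtime requests a running widget instance
# (e.g. a chat or forum widget) from a widget server and embeds the URL it
# gets back. The endpoint and parameter names below are illustrative
# assumptions, not the documented Wookie API.

WIDGET_SERVER = "http://widgets.example.org/widgetinstances"  # assumed endpoint

def request_widget_instance(widget_id, user_id, shared_key):
    params = urllib.parse.urlencode({
        "widgetid": widget_id,        # which widget to instantiate
        "userid": user_id,            # who is using it
        "shareddatakey": shared_key,  # users sharing a key share one instance
    }).encode()
    with urllib.request.urlopen(WIDGET_SERVER, data=params) as response:
        return response.read().decode()  # e.g. XML/JSON containing the widget URL

# One shared instance per activity: every learner in the same run of the
# course gets the same shared key, so they all land in the same discussion.
# print(request_widget_instance("chat", "learner-42", "course-101-run-3"))
```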

During the technical breakout session in the afternoon, participants had a chance to discuss in more detail some of the emerging models for service integration and how IMS LD could integrate with other specifications, such as course-information-related ones like XCRI. Scott Wilson also raised the point that business workflow management systems might actually be more appropriate than our current LD tools in an HE context, as they have developed more around document workflow. I’m not very familiar with these types of systems so I can’t really comment, but I do have a sneaking suspicion that we’d probably face a similar set of issues with user engagement and the “but it doesn’t do exactly what I want it to do” syndrome.

I think what was valuable about the end of the discussion was that we were able to see that significant progress has been made in making service integration significantly simpler for IMS LD systems. The Wookie widget approach is one way forward, as is the service integration that Abelardo Pardo, Luis de la Fuente Valentin and colleagues at the University of Madrid have been undertaking. However, there is still a long way to go to make the transition out of “that” trough.

What I think we always need to remember is that teaching and learning is complex and, although technology can undoubtedly help, it can only do so if used appropriately. As Rob said, “there’s no point trying to use a coffee machine to make pancakes” – which is what some people have tried to do with IMS LD. We’ll probably never have the perfect learning design specification for every context, and in some ways we shouldn’t get too hung up about that – we probably don’t really need one. However, integrating services based on web 2.0 approaches can allow for a far greater choice of tools. What is crucial is that we keep sharing our experiences and integrations with each other.

EC SIG OER Meeting 27 February

Last Friday the EC SIG met at the OU, Milton Keynes for a really interesting day of presentations and discussion around OER. The meeting was in part timed to coincide with the JISC OER call and to give an overview of some current developments in OER from a range of perspectives, from the institutional to the individual.

Andy Lane and Patrick McAndrew started the day with an overview of the institutional impact of the OpenLearn project. One of the key institutional barriers was (unsurprisingly) trying to get over the assumption that providing open content was “giving away the family silver”, and the fear of not being able to control what others might do with your content. OpenLearn has fundamentally been about de-bunking these perceptions and illustrating how making content open can actually bring a range of benefits to the institution. The ethos of the OpenLearn project has been to enhance the student experience, and the student, not the institution, has been central to all developments. In terms of institutional benefit, perhaps the most significant one is that there is now a clear trail showing that a significant number of OpenLearn students do actually go on to register for a fee-paying course.

Sarah Darnley, from the University of Derby, gave an overview of the POCKET project, which is using OpenLearn materials and repurposing/repackaging them for their institutional VLE. They are also creating new materials and putting them into OpenLearn. Russell Stannard, University of Westminster, rounded off the morning’s presentations with a fascinating account of his multimedia training videos. To quote Patrick McAndrew, Russell is a bit of a ‘teacherpreneur’. While teaching his multimedia course Russell saw that it would be easier for him to create short training videos for various software packages, which students could access at any time, thus freeing up actual class time. Russell explained how the fact that his site was high in Google rankings has led to a huge number of visits and, again, increased interest in the MSc he teaches on. Although not conceived as an OER project, this is a great example of how just “putting stuff out there” can increase motivation/resources for existing students and bring in more. However, I do wonder, as Russell starts producing more teaching resources to go with his videos and his institution gets more involved, how open he will be able to keep things.

The afternoon session started with Liam Earney of the CASPER project sharing the experiences of the RePRODUCE programme. CASPER has recently surveyed the projects to find out about their experiences of dealing with copyright and IPR issues when repurposing material. A key finding is that within the HE sector there is generally an absence of rights statements, and only 14% of the projects found it easy to clear copyright. Ambiguity abounds within institutions about the who, where, what and how of content reuse. Of course this is a key area for the upcoming JISC OER call.

The rest of the afternoon was spent in discussion around the call. Four of the programme managers involved were at the meeting and were able to answer questions relating to it. It is important to note that the JISC call is a pilot, not an end in itself. It will not, and is not trying to, solve all the issues around OER; what it will do is allow the community to continue to explore and move forward with the various technical and IPR/copyright issues in the context of previous experience.

Copies of the presentations from the day are available from the CETIS wiki, and a great summary of the day is available via Cloudworks (a big thanks to Patrick McAndrew for pulling this together).