Relating IMS Learning Design to web 2.0 technologies

Last week I attended the “Relating IMS Learning Design to web 2.0 technologies” workshop at the EC-TEL conference. The objectives of the workshop were to explore what has happened in the six years since the release of the specification, both in terms of developments in technology and pedagogy, and to discuss how (and indeed whether) the specification can keep up with these changes.

After some of the discussions at the recent IMS meeting, I felt this was a really useful opportunity to redress the balance and spend some time reflecting on what the spec was actually intended for, and how web 2.0 technologies are now actually enabling some of the more challenging parts of its implementation – particularly the integration of services.

Rob Koper (OUNL) gave the first keynote presentation of the day, starting by taking us all back to basics and reminding us of the original intentions of the specification, i.e. to create a standardized description of adaptive learning and teaching processes that take place in a computer-managed course (the LD manages the course, not the teacher). Learning and support activities, and not content, are central to the experience.

The spec was intentionally designed to be as device neutral as possible, to provide an integrative framework for a large number of standards and technologies, and to allow a course to be “designed” once (in advance of the actual course) and run many times with minimal changes. The spec was never intended to handle just-in-time learning scenarios, or situations where little automation of online components (such as time-based activities) is necessary.
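To make that “design once, run many times” idea concrete: an IMS LD unit of learning separates the components (roles, activities) from the method that orchestrates them in plays, acts and role-parts. A heavily simplified, hand-written sketch follows – the identifiers and titles are invented, and a real unit of learning wraps this in an IMS content package with many more required elements:

```xml
<learning-design identifier="LD-example" level="A"
    xmlns="http://www.imsglobal.org/xsd/imsld_v1p0">
  <title>Example unit of learning</title>
  <components>
    <roles>
      <learner identifier="R-learner"/>
      <staff identifier="R-tutor"/>
    </roles>
    <activities>
      <learning-activity identifier="LA-discuss">
        <title>Discuss the readings</title>
      </learning-activity>
    </activities>
  </components>
  <method>
    <play>
      <act>
        <role-part>
          <role-ref ref="R-learner"/>
          <learning-activity-ref ref="LA-discuss"/>
        </role-part>
      </act>
    </play>
  </method>
</learning-design>
```

Because the design references roles and activities rather than named people or content, the same structure can be re-run with different cohorts and resources.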

However, as Rob pointed out, many people have tried to use the spec for things it was really never intended to do. It wasn’t built to manage highly adaptive courses. It wasn’t intended for courses where teachers were expected to “manage” every aspect of the course.

These misunderstandings are, in part, responsible for some of the negative feelings towards the spec from some sectors of the community. However, it’s not quite as simple as that. Lack of usable tools, technical issues with integrating existing services (such as forums), the lack of meaningful use-cases, political shenanigans in IMS, and indeed the enthusiasm of potential users to extend the spec for their own learning and teaching contexts have all played a part in initial enthusiasm being replaced by frustration, disappointment and eventual disillusionment.

It should be pointed out that Rob wasn’t suggesting that the specification was perfect and that there had just been a huge misinterpretation by swathes of potential users. He was merely pointing out that some criticism has been unfair. He did suggest some potential changes to the specification, including incorporating dynamic group functionality (although it isn’t really clear whether that is a spec or a run-time issue) and minor changes to some of the elements, particularly moving some of the attribute elements from properties to method. However, at this point in time there doesn’t seem to be a huge amount of enthusiasm from IMS to set up an LD working group.

Bill Olivier gave the second keynote of the day, reflecting on “where are we now and what next?”. Using various models, including the Gartner hype cycle, Bill reflected on the uptake of IMS LD and explored whether it was possible to get it out of the infamous trough of disillusionment and onto the plateau of productivity.

Bill gave a useful summary of his analysis of the strengths and weaknesses of the spec. Strengths included:
* learning flow management,
* multiple roles for managing multiple users,
* powerful event-driven declarative programming facilities.
Weaknesses included:
* limited services,
* the spec is large and monolithic,
* it is hard to learn and hard to implement,
* it doesn’t define a data exchange mechanism or an engine output XML schema,
* no spec for service instantiation and set-up,
* it is hard to ensure interoperability,
* run-time services are difficult to set up.

Quite a list! So, is there a need to modularize the spec or add a series of speclets to allow for a greater range of interoperable tools and services? Would this solve the “service paradox”, whereby if you have maximum interoperability you are likely to have few services, whereas for maximum utility you need many services?

Bill then outlined where he saw web 2.0 technologies as being able to contribute to greater use of the specification. Primarily this would involve making IMS LD appear less like programming, through easier/better integration of authoring and runtime environments. Bill highlighted the work that the TenCompetence team at the University of Bolton have been doing around widget integration, and the development of the Wookie widget server in particular. In some ways this does begin to address the service paradox, in that it is a good example of how to instantiate once and run many services. Bill also highlighted that, alongside technological innovations, more (market) research really needs to be done on the institutional/human constraints around introducing what is still a high-risk technological innovation into existing processes. There is still no clear consensus around where an IMS LD approach would be most effective. Bill also pointed out the need for more relevant use cases and player views – something I commented on almost a year ago too.

During the technical breakout session in the afternoon, participants had a chance to discuss in more detail some of the emerging models for service integration, and how IMS LD could integrate with other specifications such as XCRI for course information. Scott Wilson also raised the point that business workflow management systems might actually be more appropriate than our current LD tools in an HE context, as they have developed more around document workflow. I’m not very familiar with these types of systems so I can’t really comment, but I do have a sneaking suspicion that we’d probably face a similar set of issues with user engagement and the “but it doesn’t do exactly what I want it to do” syndrome.

I think what was valuable about the end of the discussion was that we were able to see that significant progress has been made in making service integration significantly simpler for IMS LD systems. The Wookie widget approach is one way forward, as is the service integration that Abelardo Pardo, Luis de la Fuente Valentín and colleagues at Universidad Carlos III de Madrid have been undertaking. However, there is still a long way to go to make the transition out of “that” trough.

What I think we always need to remember is that teaching and learning are complex, and although technology can undoubtedly help, it can only do so if used appropriately. As Rob said, “there’s no point trying to use a coffee machine to make pancakes”, which is what some people have tried to do with IMS LD. We’ll probably never have the perfect learning design specification for every context, and in some ways we shouldn’t get too hung up about that – we probably don’t really need it. However, integrating services based on web 2.0 approaches can allow for a far greater choice of tools. What is crucial is that we keep sharing our experiences and integrations with each other.

Amplification and online identity (or wot I do and wot it looks like)

I don’t know if it’s just me, but do you ever dread answering the question, “so what is it you do?” For those who don’t work in education, or in a technology-related field, it can take quite a while for me to explain just exactly what it is that I do. Often, I just opt for the slightly tongue-in-cheek “I type and go to meetings” option. However, over the last year I have found myself increasingly using the term “amplify” when describing what I do. Over the past two years, blogging and twittering have become an integral part of my working life. Without my actually realising it, the ins and outs of my working life have become increasingly amplified, visible and searchable through technology.

This week I’ve been reading (via a link from twitter of course) “The Future of Work Perspectives” report. This report starts with a section on the amplified individual worker of the future and outlines their four main characteristics, which I (and, I suspect, many of my colleagues/peers) found myself identifying with.
*Social: “They use tagging software, wikis, social networks and other human intelligence aggregators to understand what their individual contributions mean in the context of the organization.” (which made me think of Adam’s presentation at the CETIS conference last year, where he used our collective blog posts to illustrate connections across all the organization’s areas of interest).
*Collective: “taking advantage of online collaboration software, mobile communications tools, and immersive virtual environments . . ..” (which made me reflect on how a 3G dongle has made me a true road warrior).
*Improvisational: “capable of banding together to form effective networks and infrastructures” (what would we do without “dear lazyweb” and the almost instant answers we can get from the twitterati?).
*Augmented: “they employ visualization tools, attention filters, e-displays and ambient presence systems to enhance their cognitive abilities and coordination skills, thus enabling them to quickly access and process massive amounts of information.” (I’m not sure I’m there yet, but I can see that coming too and the presentation from Adam mentioned above was I think the first time I really saw a coherent visualization of the collective intelligence of CETIS being represented through the collation of individual contributions).

The report is worth a read, if a bit scary in parts. I’m not sure I really want my laptop to be recording my biometric data and telling me to go home if I’m coughing too much. However, three years ago, if anyone had told me I would be regularly broadcasting 140-character messages throughout the day, I would have told them just where to stick their Orwellian Big Brother ideas.

I’m actually very comfortable with being an “amplifier”; it has been a natural progression for me. However, I have started to think more about the “amplified” student, and how (or indeed whether) we can and should translate these traits to students and lecturers. It has been relatively easy for me to integrate social networking into my working life. I’m pretty much desk bound – and even when I’m travelling, as long as I have my laptop + dongle and/or mobile phone I am pretty much always online. I don’t have to teach x hours a week or write research papers to secure my position. Using and being part of online social networks is easy and, crucially, imho, relevant and useful to me. My direct professional peer network is in the same position. We’re all pretty much research as opposed to teaching focused.

Although I can see how the traits of an amplified individual equate with the kind of students we’d ideally like, I think we still have a way to go to persuade students and teachers alike of the real benefits of social networking in an educational context. There needs to be a fundamental change in the recognition system for staff and the assessment process for students to recognise/integrate these types of activities. I know there are many pockets of innovative work going on which are starting to address these issues, and hopefully these will become more and more commonplace. It’s also up to us, the amplifiers, to continue to reach out and show how useful and relevant the use of collaborative technologies can be in an educational context.

We also need to highlight the need to maintain and protect online identities as we gain more and more presence and professional recognition through them. As I’ve been musing around this post and thinking of ways of visualizing (or augmenting) some of this, I came across the Personas project at MIT, which analyzes your personal profile and creates a ‘DNA’ of your online character. Of course I had to have a go. I’d also just read about a new free web-based screencasting tool, screenr, that links directly to twitter. So, in the interests of killing two birds with one stone, I signed up for screenr using my twitter username and password and recorded my online DNA being built. Halfway through I realised that I had just glibly given my twitter ID to an unknown third party without actually knowing if it was secure. Scott Wilson’s voice was in my head saying “remember OAuth, never sign in anywhere without it”, or words to that effect.
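Scott’s point is worth unpacking: with OAuth you never hand your actual password to the third party; instead, requests are signed with secret keys that can be revoked without changing your password. A minimal sketch of the OAuth 1.0a HMAC-SHA1 signing step, with the URL and parameter values invented for illustration:

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_request(method, url, params, consumer_secret, token_secret=""):
    """Build an OAuth 1.0a HMAC-SHA1 signature for a request.

    A real client also includes oauth_nonce, oauth_timestamp, oauth_token
    etc. among the params; this sketch only shows the signing itself.
    """
    enc = lambda s: urllib.parse.quote(str(s), safe="")
    # 1. Normalize the parameters: sorted, percent-encoded key=value pairs.
    normalized = "&".join(f"{enc(k)}={enc(v)}" for k, v in sorted(params.items()))
    # 2. The signature base string ties the method, URL and params together,
    #    so a tampered request no longer matches its signature.
    base_string = "&".join([method.upper(), enc(url), enc(normalized)])
    # 3. Sign with the shared secrets; the user's password is never sent.
    key = f"{enc(consumer_secret)}&{enc(token_secret)}".encode()
    digest = hmac.new(key, base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

signature = sign_request(
    "POST", "https://api.example.com/statuses/update",
    {"oauth_consumer_key": "my-app", "status": "hello"},
    consumer_secret="app-secret")
print(signature)
```

The service only ever sees the signature and a revocable token, which is exactly why handing screenr my actual twitter password was the wrong move.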

My online identity is now, more than ever, key to my professional identity. I should be more careful when I sign up for new toys – or should I say, amplification opportunities – and I should be reminding others too.

Just in case you’re curious about my online DNA here’s a picture or you can watch the screencast of it being created.

picture of Sheila MacNeill's online DNA

The changing nature of technology innovation

I’ve just watched Clay Shirky’s recent talk on ted.com on “how twitter can change history”. Although the content of the talk is very topical, there are added nuances this week in particular, with the explosion of community-driven social media interactions around the Iranian election.

One of the key premises of the presentation is that the nature of technological innovation is changing, primarily due to the participatory and social nature of collaborative web 2.0 technologies. He goes so far as to say technologies “don’t get socially interesting until they get technologically boring”. And once they become socially interesting, the impact they have can be profound. (Clay references China in the talk, but there are obvious parallels with the current situation in Iran.) The real power is not at the “shiny” developer end, but at the point where technologies become ubiquitous and can be harnessed by communities in ways not realised by developers.

This really got me thinking about the nature of an innovation centre such as CETIS. Traditionally we have been right at the “shiny edge” of things: playing with all the new things, then leaving them behind once they become close to mainstream and moving on to the next glittering thing on the horizon. But are we missing a trick? Maybe we should be sticking around a bit longer with certain technologies to see how, and if, ubiquity fosters innovation in education.

In some ways I think we are starting to be more engaged at the socially innovative end of things. Undoubtedly, for those of us who twitter, it has provided an added communication dimension, both with our direct work colleagues and with our wider community. I’ve been on it for about 2 years now and still can’t see anything at the moment that is going to replace it.

In our outward facing service role, activities such as the widget working group are allowing us to be more engaged with teaching practitioners. From the outset we realised that there would be two distinct phases of this work, beginning with the technical infrastructure and then moving on to the user creation and use stage. I really hope that we can continue in this vein and that our own use of social technology can help us become more a part of the everyday experience for educators and not just the geek ship on the horizon.

Semantic Technologies in education survey site now available

The next stage of the SemTech project (as reported earlier in Lorna’s blog) is now underway. The team are now conducting an online survey of relevant semantic tools and services. The survey website provides a catalogue of these tools and services, along with information on how they relate to education.

If you have an interest in the use of semantic technologies in teaching and learning, you can register on the site and add any relevant technologies you are using, or add tags to the ones already documented. As the project is due for completion by the end of February, the project team are looking for feedback by 2 February.

Reports and being part of a wider conversation

Reports – love ‘em or hate ‘em, there doesn’t really seem to be any escape from ‘em. They are generally very long, text based and, in my case, printed out and left hanging around in my handbag for far too long without being read :-)

One of the things that always strikes me is why we so often report on new technologies in the time-honoured, text-based format and don’t use the technologies we are reporting on. A case in point being this morning, when I printed out a 150-page review of “current and developing international practice in the use of social networking (Web 2.0) in higher education”. At this point I should add a disclaimer – this post is not passing any judgement on the content or the authors of this report. What really struck me this morning was the conversation that took place around a comment I made via twitter, and the ideas of back-channels, participation and feedback.

So if you will, follow me back in time to about 9am this morning when I posted this:

“is it just me or does a 150 page report on web2.0 technologies miss the point on some level . . .” A couple of tweets later Andy Powell came in with “yes, totally – why don’t people explicitly fund a series of blog posts and/or tweets instead?” Good point – why not indeed? Which was answered to an extent by this: “ostephens @sheilmcn Depends what the point of the report is. It could conclude that Web 2.0 does not significantly add to the value of communication”. A few more tweets later, it was this response that really got me thinking: “psychemedia @sheilmcn one fn of a report is so that many people can be part of ‘that conversation’; but here it’s easy to be part of a wide conversation”. Yes, that is true, but is it the ‘right’ conversation? I seemed to have tapped into something, but the responses weren’t really about the report. I wonder how much of a conversation there would have been if someone actually connected to the report had posted a comment specifically looking for feedback.

To some extent I know that, in the small twittersphere I inhabit, there would probably have been a lot of comment and conversation. However, with the proliferation of twitter extensions, I’m starting to become a bit worried that the serendipitous nature of the tool may be about to be destroyed by people trying to organise it and use it in a more structured way. I wonder how often I would take the time to take part in organised twitter conversations?

I guess this kind of brings me back to an analogy related to web2.0 and education. I’ve certainly heard many times that the “kids” don’t want to use facebook in school because it’s part of their “real” life and not part of their education; and when we as educators try to integrate social software we fail, because the kids have moved on to the next cool thing. So if we try to use twitter in a structured way, will we all have moved on to the next big thing? I guess on that note I should actually now go and read that report and get twittering.

Cloud computing testbed – new research centre announced by Yahoo, Hewlett Packard and Intel

A new research centre for cloud computing initiatives has just been announced by Yahoo, Hewlett Packard and Intel. The center will create “a global, multi-data center, open source test bed for the advancement of cloud computing research and education. The goal of the initiative is to promote open collaboration among industry, academia and governments by removing the financial and logistical barriers to research in data-intensive, Internet-scale computing.”

At CETIS we have started our first venture into cloud computing with our content packaging transcoder service. It will be interesting to see if any upcoming JISC projects and the eFramework will be able to utilise the new services announced by Yahoo et al.

More coverage of the announcement is available from Techcrunch including a live blog from a conference call about the announcement, and the BBC.

I love sprouts!

And not just the green ones :-) David Sherlock in our Bolton office put me onto Sprout Builder, a very simple widget builder. I have had a play with some other so-called simple widget-building tools, which lost my interest in about 5 minutes, or as soon as I realised that they didn’t work with Macs, but I have to say this one has really got me hooked.

In about half an hour I had built a widget which displays the outputs of the JISC Design for Learning programme (just taking a feed from the programme delicious site), and published it onto the Design for Learning wiki and onto my netvibes page. I’ve now just created a widget for my last SIG meeting, with audio/video files embedded and a location map, which I put into facebook and the CETIS wiki.

Now, I’m not claiming that these examples are anything unique, or particularly well designed. However, what I really like about this particular tool is its simplicity and the way it integrates services that I use, such as rss feeds, maps, polldaddy polls, video, audio etc. Publishing is really straightforward, with links to all the main sites such as facebook, bebo, netvibes, pageflakes, igoogle, blogger . . . the list goes on. You can also make changes on the fly, and when you republish it automatically updates all copies.

Tools like this really do put publishing (across multiple platforms/sites) and remixing content into the hands of us non-developers. There are many possibilities for education too, from simple things like creating a widget of a reading list/resources from a delicious feed to a simple countdown for assignments. (OK, that might be a bit scary, but heck a ticking clock works for most of us!). Simple tools like this combined with the widgets that the TenCompetence project are building (and showed at a recent meeting) are really starting to push the boundaries, and show the potential of how we can mix and match content and services to help enhance the teaching and learning experience.

Go Swurl yourself

Taking a break from ICALT 2008, I’ve just discovered Swurl, a site that visualizes your digital life stream. You can add feeds from services like flickr, facebook, twitter, delicious, last.fm etc, and it aggregates them and provides a timeline view of your online activities. Unfortunately my timeline is a bit twittertabulous at the moment, as I’ve been at conferences for the last week or so, so it’s not that visually exciting. However, if you do upload photos it’s probably a lot more visually appealing. I’m also thinking that it might be a good lightweight time-recording mechanism too.
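The aggregation idea itself is simple enough to sketch: pull the RSS feed from each service and merge the items into one reverse-chronological stream. A rough illustration using only the Python standard library, with made-up feed snippets standing in for the real flickr and twitter feeds:

```python
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

def timeline(feeds):
    """Merge RSS feed XML strings into one newest-first timeline.

    `feeds` maps a source name (e.g. "flickr") to that feed's RSS XML.
    """
    events = []
    for source, xml_text in feeds.items():
        root = ET.fromstring(xml_text)
        for item in root.iter("item"):
            # RSS pubDate uses the RFC 822 date format.
            when = parsedate_to_datetime(item.findtext("pubDate"))
            events.append((when, source, item.findtext("title")))
    return sorted(events, reverse=True)

FLICKR = """<rss><channel>
  <item><title>Photo: conference badge</title>
    <pubDate>Mon, 23 Jun 2008 09:00:00 +0000</pubDate></item>
</channel></rss>"""

TWITTER = """<rss><channel>
  <item><title>Off to ICALT 2008</title>
    <pubDate>Tue, 24 Jun 2008 08:30:00 +0000</pubDate></item>
</channel></rss>"""

for when, source, title in timeline({"flickr": FLICKR, "twitter": TWITTER}):
    print(when.date(), source, title)
```

A real aggregator would fetch the feeds over HTTP and cope with Atom as well as RSS, but the merge-and-sort at the heart of it is just this.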

Tweet Clouds

A post from Martin Weller put me onto Tweet Clouds – a new tag cloud generating service for twitter. As someone who uses twitter mainly for work purposes, I was curious to see what kind of cloud my account would generate. As expected (particularly after a relatively heavy twitter session at the OAI-ORE open day on Friday), there are a lot of “resources” and “aggregations” in my cloud :-)

I’m not sure just how much of a gimmick this is, and just how useful it is to have another view on what you are writing about. As Martin points out, the addition of more filtering and links would certainly help. But I think, because I twitter in bursts at selected times, it may well be of more value to someone like me than to a more regular twitter user, as any clouds I generate might be a bit more focussed. Then again, for a more regular user it may well be useful to get an overview of what you have been talking about . . . or is it just another ‘neat’ web 2.0 application that you use once, smile at the results and never use again?
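Under the hood, a service like this is little more than tokenising tweets and counting word frequencies. A toy version in Python – the sample tweets are invented, and a real service would also strip URLs and @mentions and use a much longer stopword list:

```python
import re
from collections import Counter

# A tiny, illustrative stopword list; real cloud generators use far more.
STOPWORDS = {"the", "a", "an", "to", "of", "and", "in", "is", "at", "on", "for", "rt"}

def tweet_cloud(tweets, top=5):
    """Return the `top` most frequent non-stopword words across tweets."""
    words = []
    for tweet in tweets:
        for word in re.findall(r"[a-z']+", tweet.lower()):
            if word not in STOPWORDS:
                words.append(word)
    return Counter(words).most_common(top)

tweets = [
    "at the OAI-ORE open day, lots of talk about resources and aggregations",
    "resources, aggregations and more aggregations #oaiore",
    "back to thinking about resource maps and aggregations",
]
# As in my own cloud, "aggregations" dominates this sample.
print(tweet_cloud(tweets, top=3))
```

Martin’s suggested filtering would slot in naturally here: a better stopword list and linking each word back to the tweets it came from.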