Blackboard moving towards IMS standards integration

Via Downes this morning, I came across Ray Henderson's blog post, Blackboard's Open Standards Commitments: Progress Made. Ray gives a summary of the work being done with IMS Common Cartridge and IMS LTI.

Having BB on board in developments to truly "free the content" (as is the promise of standards such as IMS CC) is a major plus for implementation and adoption. From a UK perspective it's good to see that implementation is being driven by one of our own – Stephen Vickers from the University of Edinburgh, who has been developing powerlinks for BB and Basic LTI. See the OSCELOT website for more information.

There are a number of models now emerging which show how learning environments can become more distributed. If you are interested in this area, we are holding a meeting in Birmingham on 4th March to discuss the notion of the distributed learning environment. There will be demos of a number of systems and we'll also be launching a briefing paper on new approaches to composing learning environments. More information, including a link to register for the event, is available here.

SHEEN Sharing – getting the web 2.0 habit

Sometimes I forget how integral web 2 technologies are to my working life. I blog, facebook, twitter, bookmark, aggregate RSS feeds, do a bit of 'waving' – you know, all the usual suspects. And I'm always up for trying any new shiny service or widgety type thing that comes along. There are certain services that have helped to revolutionize the way I interact with my colleagues, peers and that whole "t'internet" thang. They're a habit, part of my daily working life. So, last week I was fascinated to hear about the journey the SHEEN Sharing project has been on over the last year, exploring the use of web 2.0 tools with a group of practitioners who have barely got into the web 1 habit.

SHEEN, the Scottish Higher Education Employability Network, was set up in 2005. Employability is one of the SFC's enhancement themes and almost £4 million was made available to Scottish HE institutions towards developments in this area. This led to a network of professionals – the ECN (employability co-ordinators network) – who had some fairly common needs. They all wanted to reduce duplication of effort in finding resources, to share and comment on resources being used, and to work collaboratively on creating new resources. As the actual posts were on fixed-term contracts, there was the additional need to capture developing expertise in the field. So, they started the way most networks do, with an email list. Which worked to a point, but had more than a few issues, particularly when it came to effectively managing resource sharing and collaboration.

One of the members of this network, Cherie Woolmer, is based in the same department as a number of us Scottish Cetisians. So, in true chats-over-coffee style, we had a few discussions around what they were trying to do. They did have a small amount of funding, and one early idea was to build their own repository. However, we were able to offer an alternative view: they didn't actually need a full-blown repository, and there were probably quite a few freely available services that could more than adequately meet their needs. So, the funding was used to conduct a study (SHEEN Sharing) into the potential of web 2.0 tools for sharing.

Sarah Currier was hired as a consultant and her overview presentation of the project is available here. Over a period of just about a year (there was an extension of funding to allow some training at the end of last year/early this year), and without any budget for technology, Sarah, along with a number of volunteers from the network, explored which web tools/services would actually work for this community.

It was quite a journey, as the presentation linked to above shows. Sarah used videos (hosted on Jing) of the volunteers to illustrate some of the issues they were dealing with. However, I think a lot of it boiled down to habit: getting people to be confident in using tools such as bookmarking, shared document spaces, RSS feeds etc. It was also interesting to see the tension between professional/formal use of technology and informal use. Web 2 does blur boundaries, but for some people that blurring can be an uncomfortable space. One thing that came through strongly was the need for face-to-face training and support to help (and maybe very gently force!) people to use, or at least try, new technologies and, more importantly, to see for themselves how they could use them in their daily working lives. In effect, how they could get into the habit of using some technologies.

The project explored a number of technologies including scribd (for public document sharing), google docs (for collaborative working), twitter (which actually ended up being more effective at a project level in terms of extending connections/raising awareness) and diigo for bookmarking and sharing resources. Diigo has ended up being a core tool for the community: as well as providing bookmarking services, the group and privacy functions it offers gave the flexibility that this community needed. Issues of online identity were key to members of the network – not everyone wants to have an online presence.

I hadn’t really explored diigo before this and I was really taken with the facility to create webslides widgets of bookmarked lists which could be embedded into other sites. A great way to share resources and something I’m playing around with now myself.

I think the SHEEN Sharing journey is a great example of the importance of supporting people in using technology. Yes, there is "loads of stuff" out there ready to be used, but to actually make choices and create effective sharing and use, we rely on human participation. Supporting people is just as, if not more, important if we want to really exploit technology to its fullest potential. It also shows the growing need to share expertise in the use of web 2.0 technologies. You don't need a developer to create a website/space to share resources – but you do need experience in how to use these technologies effectively to allow groups like SHEEN to exploit their potential. I was struck by how many tools I could see Sarah had used throughout the evaluation phase. Only a couple of years ago it would have been almost impossible for one person to easily (and freely) capture, edit and replay video, for example. A good example to highlight the changing balance of funding from software to "peopleware" perhaps?

More information about SHEEN Sharing can be found on their recently launched web resources site – a great example of a community-based learning environment.

CETIS 09 the video – some thoughts on the process

Regular visitors to the CETIS website may have noticed that we now have a video from the CETIS09 conference on the front page. As the content consists of “talking heads” from delegates, we hope that it gives a flavour as to why people came to the conference and what their expectations and overall impressions of the event were.

Although we have traditionally got, and continue to get, feedback via questionnaires, we have been toying with the idea of using video to capture some more anecdotal feedback for some time now. The old adage of a picture being worth a thousand words rings particularly true for an organization such as CETIS. It can take quite a while to explain what we are, what we do, and most importantly what impact we have on others – hell, it can take about five minutes to even say Centre for Educational Technology and Interoperability Standards :-) So, using video has the potential to let others explain the benefits of what we do, e.g. why do people take two days out of busy schedules to come to our annual conference?

However, as with anything, getting to the point of the final video has been a bit of a journey, which started, as these things often do, with a serendipitous meeting. Mark Magnate from 55 Degrees had a meeting with my colleague Rowin Young about assessment-related activities which I joined, and during the course of the meeting he talked about the Voxur video capture system they had been developing. One thing led to another, and we decided that this "lite" video system might just provide a way for us to actually start getting some video feedback from our community – and the obvious place to start was at our conference.

There are a number of video capture booths/systems on the market now, but the things that I particularly liked about the Voxur system were:
*Size – it's small, basically a macbook in a bright yellow flight case with a bit of additional built-in lighting. So it doesn't need much space – just a table and somewhere relatively quiet.
*Level of user control – a take is only saved when the person speaking is happy with it and they choose to move on. As it is basically an adapted laptop, it looks pretty familiar to most people too.
*Editing – the user control above means that you don't have all the "outtakes" to sift through, and the system automatically tags and relates answers. There are still, of course, editing decisions to be made, but initial sifting time is cut down dramatically.
*Q&A style – with this system you have the option to have a real person record and ask the questions, so people aren't just reading a question on screen then responding. Hearing and seeing someone ask you a question is a bit more personal and engaging.

In terms of actually preparing to use the system at the conference, the most time-consuming, head-scratching part was getting a set of questions which people would be able to answer. Making the switch from getting written to spoken answers did take some time. We also had to bear in mind that this wasn't like an interview, where you could interject and ask additional/follow-up questions: once someone is sitting in front of the laptop they just work their way through the set of questions. In the end I know that we did leave in a couple of quite challenging questions – but the responses we got were fantastic and we didn't have to bribe anyone.

During the event, as it was our first time using the system, we did have Mark "manning" the box. And this is something we will continue to do, really just to explain to people how the system works, and basically to reassure them that all they really have to do is hit the space bar. We had to do a wee bit of persuading to get people to come in, but overall most people we asked were happy to take part. A mixture of natural curiosity and a lack of fear of technology among our delegates probably did help.

It was quite a learning curve, but not too extreme and hopefully it is something that we can build into future events as a way to augment our other feedback channels.

Reviewing the VLE

One of the hot topics at this year's ALT conference was "the VLE is dead" debate. Following on from this, ALT, with colleagues at the University of Bradford, have set up a new Learning Environments Review SIG (LERSIG), which has just had its inaugural meeting. Today's event, "reviewing the VLE: sharing experiences", brought together about 60 people in total (online via Elluminate and physically at the University of Bradford).

The morning was given over to presentations from representatives of five institutions (Nottingham Trent, City, LSE, UCL and York) who have changed, or are in the process of changing, their VLE. The afternoon was given over to discussion/group work. As I was participating remotely (and, like everyone else, multitasking) I didn't join in the discussion session. However, there were a number of key points that did come through.

The early incarnations of VLEs may well be dead, but the notion of, and need for, some kind of learning environment is still very much alive. The HE community is, I think, becoming much clearer about articulating requirements for all the technologies (not just the VLE) used in institutions to support teaching and learning. A number of questions were raised about the use of portals and using other 'non-traditional' VLE systems for teaching and learning purposes. What also came through loudly today was the recognition that user requirements and continual user involvement in the change process are key to making successful transitions in technology use.

Currently, many institutions in the UK are in the process of reviewing their technology provision, and it would appear a growing number are migrating from proprietary systems to open source platforms. There seems to be quite a bit of moving from Blackboard to Moodle, for example. There was some discussion around the lack of take-up of Sakai in the UK. From participants it would seem that, at the moment, the overheads for Sakai are higher and it is less well supported than Moodle. Most of the big implementations are within more research-led institutions, and it is perhaps not as well developed for a teaching and learning focus. However, there was a recognition that this could well change, and that looking at the "broader framework" is key for future developments so that new elements can be added to existing systems. This is where I would see the developments we discussed at the Composing Your Learning Environment session at the recent CETIS conference being of relevance to this group.

What came through strongly from today's meeting was that there is an appetite to share experiences of these processes. Stakeholder engagement is also vital, and sharing strategies for engaging the key people (staff and students) emerged as another area where experiences could usefully be pooled. The SIG is in the process of setting up an online community where experiences, case studies etc can be shared. You can join the SIG at their crowdvine site.

Expert expectations of IMS LD systems – pre-publication report now available

As part of the iCOPER project, an IMS Learning Design Expert Workshop was held at the University of Vienna on November 20 & 21, 2008. The methodology and findings from the workshop have been written up in a paper which will be published (April 2010) in the International Journal of Emerging Technologies in Learning (iJET). A pre-publication version of the report is now available from D-Space.

The report focuses on the outcomes of group working around two key issues – usability and the lifecycle of a unit of learning. The proposed solutions to the usability and utility problem were to investigate how teachers' and learners' representations of a learning design can be brought together, and to set up a research programme to identify how teachers cognitively proceed when designing courses and to map this knowledge to IMS LD. Regarding the lifecycle of a unit of learning, the group suggested a system that continually exchanges information between runtime and editing systems so that units of learning can be updated accordingly.

UKOER session at LAMS 2009 Conference

The 2009 International LAMS Conference is being held today in Sydney. The focus of the conference is on Open Education, “looking at technologies, applications and approaches that support sharing, collaboration and open access to knowledge and resources. What are the differing implications for individuals and organisations?”.

Although very tempting, going to Sydney for a day wasn't really possible. However, the conference organisers have kindly allowed David Kernohan (JISC Programme Manager) and me to do a remote presentation on the current UKOER programme. The presentation gives an overview of the programme and some of the emerging issues (technical and cultural) which are coming through now that projects are at the halfway point of this funding cycle. The presentation can be viewed here.

Il Foro

Last week I attended the Il Foro Conference in Baeza, Spain. This annual meeting brings together staff from the 10 universities in Andalusia who are involved in creating and running a shared virtual campus – Campus Andaluz Virtual. The focus of this conference was sharing best practice around teaching and learning strategies. The conference committee asked me to present about the role of standards in elearning.

Part of my reason for accepting the invitation to present was around my own PDP. I am trying to learn Spanish just now, so this seemed an ideal opportunity to practice. The thought of a bit of winter sunshine may have helped sway the decision too! Even with my limited Spanish it was interesting to see how many similar issues – around student engagement, creativity, web 2.0, mobile technologies and, most importantly, effective use of technology – were being debated during the three days.

The virtual campus is a totally online option, with each of the universities offering a selection of courses to students in any of the participating universities. There is a main portal which then links to each institution's learning environment.

In terms of my own learning and use of technology, having a Spanish dictionary on my ipod and google translate to hand allowed me to follow the bits I didn't understand more quickly, easily and discreetly than I would have managed without them. Though most web 2.0 terminology seems to be universally in English, reminding me of debates around the "e" in elearning standing for English.

The contrast between the conference surroundings and those of the CETIS conference in Birmingham the week before was also quite stark. Nothing against the Lakeside Centre, but it really can't compete with this:
Baeza

And the wifi worked perfectly:-)

My presentation gave an overview of CETIS and our development from a JISC project to our current status as an innovation support centre, our work with standards, and our changing working practices. I think it's only when you explain to people outwith the UK the level of support JISC provides to our sector that you really start to (re)appreciate what a valuable contribution it makes to developing infrastructure and to the take-up and use of technology within our sector.

Another personal reflection was my decision to use a 'traditional' PowerPoint presentation with the oh-so-unfashionable bulleted list. I know, in one of Martin Weller's futures I would have been condemned for that. I had thought I would do a prezi with lots of pictures etc, but as my presentation was being simultaneously translated and I had to send it in advance, I think it actually made more sense to be a bit text heavy. It meant that my translator knew in advance that I was going to use some not very common words and acronyms. It also meant that those in the audience could do what I had been doing earlier and use their ipods etc to translate for themselves. If you are interested, the slides are available from slideshare.

The headless VLE (and other approaches to composing learning environments)

CETIS conferences are always a great opportunity to get new perspectives and views around technology. This year it was Ross MacKenzie’s somewhat pithy, but actually pretty accurate “so what you’re really talking about is a headless VLE” during the Composing Your Learning Environment sessions that has resonated with me the most.

During the sessions we explored 5 models for creating a distributed learning environment:
1 – system in the cloud, many outlets
2 – plug-in to VLEs
3 – many widgets from the web into one widget container
4 – many providers and many clients
5 – both a provider and a client
Unusually for a CETIS conference, the models were based on technologies and implementations that are available now. (A PDF containing diagrams for each of the systems is available for download here)

Warwick Bailey (Icodeon) started the presentations by giving a range of demos of the Icodeon Common Cartridge platform, showing us examples of the plug-in to existing VLEs model. Using content stored as IMS Common Cartridges and utilising IMS LTI and web services, Warwick illustrated a number of options for deploying content. By creating a unique url for each resource in the cartridge, it is possible to embed specific sections of content onto a range of platforms. So, although the content may be stored in a VLE, users can choose where they want to display it – a blog, wiki, web page, facebook, ebooks etc. Hence the headless VLE quote. Examples can be seen on the Icodeon blog. Although Warwick showed an example of an assessment resource (created using IMS QTI, of course), they are still working on a way to feed user responses back to the main system. However, he clearly showed how you can extend a learning environment through the use of plug-ins and how, by identifying individual content resources, you can allow for maximum flexibility in terms of deployment.
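To make the "unique url for each resource" idea a bit more concrete, here is a minimal sketch – emphatically not Icodeon's actual implementation; the player host and URL scheme are invented for illustration – that walks a cartridge's imsmanifest.xml and derives one stable URL per resource:

```python
# Hypothetical sketch: derive a stable, embeddable URL for each resource
# declared in a Common Cartridge manifest. The URL scheme is invented;
# a real platform would define its own addressing convention.
import xml.etree.ElementTree as ET

def resource_urls(manifest_path, base_url):
    tree = ET.parse(manifest_path)
    urls = {}
    for el in tree.iter():
        # match <resource> elements regardless of the manifest's XML namespace
        if el.tag.endswith("}resource") or el.tag == "resource":
            ident = el.get("identifier")
            if ident:
                urls[ident] = f"{base_url}/{ident}"
    return urls

# Each URL could then be dropped into a blog, wiki or VLE page as an embed.
for ident, url in resource_urls("imsmanifest.xml",
                                "https://player.example.org/cc/course42").items():
    print(ident, "->", url)
```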

Chuck Severance then gave us an overview of IMS Basic LTI and his vision for it (model 2), describing Basic LTI as his "escape route" from existing LMSs. LTI allows an LMS to launch an external tool and securely provide user identity, course information and role information to that tool. It uses an HTTP POST through the browser, secured with OAuth. This tied in nicely with Warwick's earlier demo of exactly that. Chuck explained his vision of how LTI could provide the plumbing to allow new tools to be integrated into existing environments. As well as the Icodeon player, there is progress being made with a number of systems including Moodle, Sakai and Desire2Learn. Also highlighted were the Blackboard building block and powerlink from Stephen Vickers (University of Edinburgh).
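For a flavour of what that launch looks like in practice, here is a hedged sketch of signing a Basic LTI launch. The lti_* parameter names and the OAuth 1.0a HMAC-SHA1 signing follow the spec, but the tool URL, key, secret and parameter values below are all made up:

```python
# Sketch of a Basic LTI launch: the LMS (consumer) builds a set of form
# parameters carrying user/course/role information, signs them with
# OAuth 1.0a HMAC-SHA1, and POSTs them to the tool via the browser.
import base64, hashlib, hmac, time, uuid
from urllib.parse import quote

def sign_lti_launch(url, params, consumer_key, consumer_secret):
    oauth = {
        "oauth_consumer_key": consumer_key,
        "oauth_nonce": uuid.uuid4().hex,
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": str(int(time.time())),
        "oauth_version": "1.0",
    }
    all_params = {**params, **oauth}
    enc = lambda s: quote(str(s), safe="~")  # RFC 3986 percent-encoding
    # OAuth base string: method, URL, then the sorted, encoded parameters
    param_str = "&".join(f"{enc(k)}={enc(v)}" for k, v in sorted(all_params.items()))
    base = "&".join(["POST", enc(url), enc(param_str)])
    key = enc(consumer_secret) + "&"  # Basic LTI uses no token secret
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    all_params["oauth_signature"] = base64.b64encode(digest).decode()
    return all_params  # rendered as hidden fields in an auto-submitting form

launch = sign_lti_launch(
    "https://tool.example.org/lti/launch",   # invented tool endpoint
    {
        "lti_message_type": "basic-lti-launch-request",
        "lti_version": "LTI-1p0",
        "resource_link_id": "course-101-unit-3",
        "user_id": "s1234",
        "roles": "Learner",
        "context_id": "course-101",
    },
    consumer_key="vle.example.ac.uk",
    consumer_secret="shared-secret",
)
```

The tool at the other end recomputes the signature with the shared secret; if it matches, it can trust the identity and role information without any separate login step.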

Chuck hopes that by providing vendors with an easy-to-implement spec, we will get to the stage where there are many more tools available for teachers and learners, allowing them to be really innovative when creating their teaching and learning experiences.

Tony Toole then presented an alternative approach to building a learning (and/or teaching) environment using readily available (and generally free or low-cost) web 2 tools (model 3). Tony has been exploring tools such as Wetpaint, Ning and PBworks to create aggregation sites with embed functionality. For example, Tony showed us an art history course page he has been building with Oxford University, which pulls in resources such as videos from museums, photos from flickr streams etc. Tony has also been investigating the use of conferencing tools such as FlashMeeting. One of the strengths of this approach is that it takes a relatively short time to pull together resources (maybe a couple of hours). Of course, a key drawback is that these tools aren't integrated with existing institutional systems, and more work on authorization integration is needed. However, the ability to quickly show teachers and learners the potential for creating alternative ways to aggregate content in a single space is clearly evident and, imho, very appealing.

Our last presentation of day one came from Stuart Sim, who showed us the plugjam system he has been developing (another version of model 1). Using a combination of open educational standards such as IMS LTI and CC, and open APIs, plugjam allows faculties to provide access to information across a variety of platforms. The key driver for developing this platform is to help 'free' data trapped in various places within an institution and make it available at the most useful point of delivery for staff and students.

So, after an overnight break involving uncooked roast potatoes (you probably had to be at the conference dinner to appreciate that :-) we started the second half of our session with a presentation from Scott Wilson (CETIS and University of Bolton) on the development of the Wookie widget server and its integration into the Bolton Moodle installation (another version of model 1). More information about Wookie and its Apache Incubator status is available here. In contrast to a number of the approaches demoed in the previous session, Scott emphasised that they had chosen not to go down the LTI road as it wasn't a generic enough specification. By choosing the W3C widget approach, they were able to build a service which provides much greater flexibility to build widgets which can be deployed on multiple platforms and utilise other developments such as the Bondi security framework.

Pat Parslow, University of Reading, then followed with a demo of Google Wave (model 4) and showed some of the experimental work he has been doing incorporating various bots and using it as a collaborative writing tool. Pat also shared some of his thoughts about how it could potentially be used to submit assignments through the use of private waves. However, although there is potential, he did emphasise that we need much more practice to effectively judge the affordances of using it in an educational setting. Although the freedom it gives is attractive in one sense, in an educational setting that freedom could be its undoing.

We then split into groups to discuss the merits of each of the models and do a 'lite' SWOT analysis of each. And the result? Well, as ever, no one model came out on top. Each had various strengths and weaknesses, and a model 6 taking the best bits of each was proposed by one group. Interestingly, tho' probably unsurprisingly, authentication was the most commonly identified risk. This gave rise to an interesting discussion in my group about whether we worry too much about authentication – where and why we actually need it – but that's a whole other blog post.

Another weakness was the lack of ability to sequence content for learners in spaces like blogs and wikis. Mind you, as a lot of content is fairly linear anyway, that might not be too much of a problem for some :-) The view of students was also raised. Although we "in the know" in the learning technology community are chomping at the bit to destroy the last vestiges of the VLE as we know it, we have to remember that lots of students actually like them, don't have iphones, don't use RSS, don't want to have their facebook space invaded by lecturers, and value the fact that they can go to one place and find all the stuff related to their course.

We didn’t quite get round to model 5 but the new versions of Sakai and Blackboard seem to be heading in that direction. However, maybe for the rest of us, the next step will be to try being headless for a while.

Presentations and models from the session are available here.

Relating IMS Learning Design to web 2.0 technologies

Last week I attended the "relating IMS Learning Design to web 2.0 technologies" workshop at the EC-TEL conference. The objectives of the workshop were to explore what has happened in the six years since the release of the specification, both in terms of developments in technology and pedagogy, and to discuss how (and indeed whether) the specification can keep up with these changes.

After some of the discussions at the recent IMS meeting, I felt this was a really useful opportunity to redress the balance and spend some time reflecting on what the spec was actually intended for, and on how web 2.0 technologies are now actually enabling some of the more challenging parts of its implementation – particularly the integration of services.

Rob Koper (OUNL) gave the first keynote presentation of the day, starting by taking us all back to basics and reminding us of the original intentions of the specification, i.e. to create a standardized description of adaptive learning and teaching processes that take place in a computer-managed course (the LD manages the course, not the teacher). Learning and support activities, not content, are central to the experience.

The spec was intentionally designed to be as device neutral as possible, to provide an integrative framework for a large number of standards and technologies, and to allow a course to be "designed" once (in advance of the actual course) and run many times with minimal changes. The spec was never intended to handle just-in-time learning scenarios, or situations where little automation of online components, such as time-based activities, is necessary.

However, as Rob pointed out, many people have tried to use the spec for things it was never really intended to do. It wasn't built to manage highly adaptive courses. It wasn't intended for courses where teachers were expected to "manage" every aspect of the course.

These misunderstandings are, in part, responsible for some of the negative feelings towards the spec from some sectors of the community. However, it's not quite as simple as that. Lack of usable tools, technical issues with integrating existing services (such as forums), the lack of meaningful use-cases, political shenanigans in IMS, and indeed the enthusiasm of potential users to extend the spec for their own learning and teaching contexts have all played a part in initial enthusiasm being replaced by frustration, disappointment and eventual disillusionment.

It should be pointed out that Rob wasn't suggesting that the specification was perfect and that there had just been a huge misinterpretation by swathes of potential users; he was merely pointing out that some criticism has been unfair. He did suggest some potential changes to the specification, including incorporating dynamic group functionality (though it isn't really clear if that is a spec or a run-time issue) and minor changes to some of the elements, particularly moving some of the attribute elements from properties to method. However, at this point in time there doesn't seem to be a huge amount of enthusiasm from IMS to set up an LD working group.

Bill Olivier gave the second keynote of the day, reflecting on "where are we now and what next?". Using various models, including the Gartner hype cycle, Bill reflected on the uptake of IMS LD and explored whether it was possible to get it out of the infamous trough of disillusionment and onto the plateau of productivity.

Bill gave a useful summary of his analysis of the strengths and weaknesses of the spec. Strengths included:
*learning flow management,
*multiple roles for multiple management,
*powerful event-driven declarative programming facilities.
Weaknesses included:
*limited services,
*the spec is large and monolithic,
*it is hard to learn and hard to implement,
*it doesn't define a data exchange mechanism or an engine output XML schema,
*there is no spec for service instantiation and set-up,
*it is hard to ensure interoperability,
*run-time services are difficult to set up.

Quite a list! So, is there a need to modularize the spec or add a series of 'speclets' to allow for a greater range of interoperable tools and services? Would this solve the "service paradox", whereby maximum interoperability means you are likely to have few services, whereas maximum utility needs many services?

Bill then outlined where he saw web 2.0 technologies as being able to contribute to greater use of the specification. Primarily this would involve making IMS LD feel less like programming, through easier/better integration of authoring and runtime environments. Bill highlighted the work that the TENCompetence team at the University of Bolton have been doing around widget integration, and the development of the Wookie widget server in particular. In some ways this does begin to address the service paradox, in that it is a good example of how to instantiate once and run many services. Bill also highlighted that, alongside technological innovations, more (market) research really needs to be done on the institutional/human constraints around implementing what is still a high-risk technological innovation into existing processes. There is still no clear consensus around where an IMS LD approach would be most effective. Bill also pointed out the need for more relevant use cases and player views – something which I commented on almost a year ago too.

During the technical breakout session in the afternoon, participants had a chance to discuss in more detail some of the emerging models for service integration and how IMS LD could integrate with other specifications, such as course-information-related ones like XCRI. Scott Wilson also raised the point that business workflow management systems might actually be more appropriate than our current LD tools in an HE context, as they have developed more around document workflow. I'm not very familiar with these types of systems so I can't really comment, but I do have a sneaky suspicion that we'd probably face a similar set of issues with user engagement and the "but it doesn't do exactly what I want it to do" syndrome.

I think what was valuable about the closing discussion was that we were able to see that significant progress has been made in making service integration significantly simpler for IMS LD systems. The Wookie widget approach is one way forward, as is the service integration that Abelardo Pardo, Luis de la Fuente Valentin and colleagues at the University of Madrid have been undertaking. However, there is still a long way to go to make the transition out of "that" trough.

What I think we always need to remember is that teaching and learning is complex, and although technology can undoubtedly help, it can only do so if used appropriately. As Rob said, "there's no point trying to use a coffee machine to make pancakes" – which is what some people have tried to do with IMS LD. We'll probably never have the perfect learning design specification for every context, and in some ways we shouldn't get too hung up about that – we probably don't really need one. Integrating services based on web 2.0 approaches can allow for a far greater choice of tools. What is crucial is that we keep sharing our implementations, integrations and real experiences with each other.