Communicating technical change – the Trojan horse of technology

As the JISC funded Curriculum Design Programme is now entering its final year, the recent Programme meeting focused on effective sharing of outputs. The theme of the day was “Going beyond the obvious, talking about challenge and change”.

In the morning there were a number of breakout sessions exploring different methods and approaches for telling project stories effectively. I co-facilitated the “Telling the Story – representing technical change” session.

Now, as anyone who has been involved in a project implementing or changing technology systems knows, one of the keys to success is actually not to talk too much about the technology itself, but to highlight the benefits of what it does or will do. Of course there are times when projects need to have in-depth technical conversations, but in terms of the wider project story, the technical details don’t need to be at the forefront. What is vital is that the project can articulate change processes in both technical and human workflow terms.

Each project in the programme undertook an extensive baselining exercise to identify the processes and systems (human and technical) involved in the curriculum design process (the PiP Process workflow model is a good example of the output of this activity).

Most projects agreed that this activity had been really useful in allowing wider conversations around the curriculum design and approval process, as there actually weren’t any formal spaces for these types of discussions. In the session there was also a feeling that technology was actually the Trojan horse around which the often trickier human process issues could be discussed. As with all educational technology related projects, the projects have all had issues with language and common understandings.

So what are the successful techniques or “stories” around communicating technical change? Peter Bird and Rachael Forsyth from the SRC project shared their experiences of using an external consultant to run stakeholder engagement workshops around the development of a new academic database. They have also written a comprehensive case study on their experiences. The screenshot below captures some of the issues the project had to deal with – and I’m sure it could represent views in practically any institution.
[Screenshot: issues raised during the SRC stakeholder engagement workshops]

MMU have now created their new database and the accompanying documentation, which is being rolled out. You can see a version of it in the Design Studio. There was quite a bit of discussion in the group about how they managed to get a relatively minimal set of fields (5 learning outcomes, 2 assessments) – some of that was down to that well-known BOAFP (back of a fag packet) methodology . . .

Conversely, the PALET team at Cardiff are now having to add more fields to their programme and module forms, as they are integrating with SITS and have more feedback from students. Again, you can see examples of these in the Design Studio. The T-Sparc project have also undertaken extensive stakeholder engagement (in which they used a number of techniques, including video, which was part of another breakout session) and are now starting to work with a dedicated SharePoint developer to build their new web forms. To aid collaboration, the user interface will have discussion tabs; the system will then create a definitive PDF for a central document store, and it will also be able to route the data into other relevant places such as course handbooks, KIS returns etc.

As you can see from the links in the text, we are starting to build up a number of examples of course and module specifications in the Design Studio, and this will only grow as more projects start to share their outputs in this space over the coming year. One thing the group discussed, which the support team will work with the projects to try and create, is some kind of checklist for course documentation creation based on the findings of all the projects. There was also a lot of discussion around the practical issues of course information management and general data management, e.g. data creation, storage, workflow, versioning and instances.

As I pointed out in my previous post about the meeting, it was great to see so much sharing going on, and these experiences are now being shared via a number of routes including the Design Studio.

Corporate memory, timelines and memolane

This week we had one of our quite rare all-of-CETIS staff meetings. During the discussions over the two days, communication – and how to be smarter and better at sharing what we do amongst ourselves – was a recurring theme. If you keep up with the CETIS news feed you’ll probably realise that we cover quite a large range of activities, and that’s only the “stuff” we blog about. It doesn’t represent all of our working activities.

This morning I was reminded of another time-based aggregation service called memolane. Being a bit of a sucker for these kinds of things I had signed up to it last year, but had actually forgotten all about it. However, I had another look today, set up a JISC CETIS account with RSS feeds from our twitter account and our news and features, and was pleasantly surprised. The screenshot below gives an indication of the timeline.

[Screenshot: JISC CETIS memolane timeline]

The team at memolane have released embedding functionality, but as we are super-spam-conscious here, our wordpress installation doesn’t like to embed things. I also have to add that the memolane team were super quick at picking up on a tweet about embedding and have been really helpful. Top marks for customer service.

As a quick and easy way to create and share aspects of a collective memory of organisational activities and outputs, I think this has real potential. I also think this could be useful for projects as a collective memory of their activities (you can of course add multiple feeds from youtube, slideshare etc too). I’d be interested in hearing your thoughts – is this something that actually might only make sense for us and our funders? Or do you think this type of thing would be useful in a more visible section of the CETIS website?

**Update August 2011**

The embed feature now works!

Technologies update from the Curriculum Design Programme

We recently completed another round of PROD calls with the current JISC Curriculum Design projects. So, what developments are we seeing this time around?

[Image: Wordle of technologies and standards used in the Curriculum Design Programme, April 2011]

Well, in terms of baseline technologies, integrations and approaches, the majority of projects haven’t made any major deviations from what they originally planned. The range of technologies in use has grown slightly, mainly due in part to the addition of software being used for video capture (see my previous post on the use of video for capturing evidence and reflection).

The bubblegram below gives a view of the number of projects using a particular standard and/or technology.

XCRI is our front runner, with all 12 projects looking at it to a greater or lesser extent. But we are still some way off all 12 projects actually implementing the specification. From our discussions with the projects, there isn’t really a specific reason for not implementing XCRI; it’s more that it isn’t a priority for them at the moment, whilst for others (SRC, Predict, Co-educate) it is firmly embedded in their processes. Some projects would like the spec to be more extensive than it currently is, which we have known for a while, and the XCRI team are making inroads into further development, particularly with its inclusion in the European MLO (Metadata for Learning Opportunities) developments. As with many education-specific standards/specifications, unless there is a very big carrot (or stick), widespread adoption and uptake is sporadic, however logical the argument for using the spec/standard is. On the plus side, most are confident that they could implement the spec, and we know from the XCRI mini-projects that there are no major technical difficulties in implementation.
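For those who haven’t seen it, XCRI-CAP is essentially an XML format for advertising courses. The fragment below is a rough, illustrative sketch of its overall shape only – element names are simplified and namespaces omitted, so consult the specification itself for the real schema:

```xml
<!-- Illustrative sketch of the shape of an XCRI-CAP catalogue only;
     real documents use the spec's namespaces and richer elements. -->
<catalog>
  <provider>
    <title>Example University</title>
    <course>
      <identifier>DES101</identifier>
      <title>Introduction to Curriculum Design</title>
      <description>Advertised course description.</description>
      <presentation>
        <start>2011-09-26</start>
        <venue>Main Campus</venue>
      </presentation>
    </course>
  </provider>
</catalog>
```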

Modelling course approval processes has been central to the programme, and unsurprisingly there has been much interest in and use of formal modelling languages such as BPMN and ArchiMate. Indeed, nearly all the projects commented on how useful having models, however complex, has been in engaging stakeholders at all levels within institutions. The “myth-busting” power of models – i.e. “this shows what actually happens, and it’s not necessarily how you believe things happen” – was one anecdote that made me smile, and I’m sure it resonates in many institutions/projects. There is also growing use of the Archi tool for modelling, and a growing sharing of experience between a number of projects and the EA (Enterprise Architecture) group. As Gill has written, there are a number of parallels between EA and Curriculum Design.

Unsurprisingly for projects of this length (4 years), and perhaps heightened by “the current climate”, a number of the projects have gone (or are still going) through fairly major institutional senior staff changes. This has had some impact on purchasing decisions for potential institution-wide systems, which are generally out of the control of the projects. There is also the issue of loss of academic champions for projects. This is generally manifesting itself in projects working on other areas, and lots of juggling by project managers. In this respect the programme clusters have also been effective, with representatives from projects presenting to senior management teams in other institutions. The more agile development processes some teams have been using have also helped them be more flexible in their approaches to development work.

One very practical development which is starting to emerge from work on rationalising course databases is the automatic creation of course instances in VLEs. A common issue in many institutions is that there is no version control for courses within VLEs, and it’s very common for staff to just create a new instance of a course every year and not delete older instances, which, apart from anything else, can add up to quite a bit of server space. Projects such as SRC are now at the stage where their new (and approved) course templates are populating the course database, which then triggers the automatic creation of a course in the VLE. Predict and UG-Flex have similar systems. The UG-Flex team have also done some additional integration with their admissions systems so that students can only register for courses which are actually running during their enrolment dates.
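To make the pattern concrete, here is a minimal sketch of the kind of trigger involved, assuming (hypothetically – the projects’ actual VLEs and integration layers differ) a Moodle target with web services enabled. core_course_create_courses is a genuine Moodle web-service function; the host, token and course data below are invented for illustration:

```typescript
// Sketch of a course-database-to-VLE trigger: when an approved course
// template lands in the course database, create the course in Moodle.
// Runs under Node 18+ (global fetch).
const MOODLE_URL = "https://vle.example.ac.uk/webservice/rest/server.php"; // hypothetical host
const WS_TOKEN = "YOUR_WS_TOKEN"; // token for a user with course-creation rights

interface ApprovedCourse {
  fullname: string;   // e.g. taken from the approved course template
  shortname: string;  // must be unique within Moodle
  categoryid: number; // target course category in Moodle
}

async function createVleInstance(course: ApprovedCourse): Promise<void> {
  const params = new URLSearchParams({
    wstoken: WS_TOKEN,
    wsfunction: "core_course_create_courses",
    moodlewsrestformat: "json",
    "courses[0][fullname]": course.fullname,
    "courses[0][shortname]": course.shortname,
    "courses[0][categoryid]": String(course.categoryid),
  });
  const res = await fetch(MOODLE_URL, { method: "POST", body: params });
  console.log("Moodle response:", await res.json());
}

// e.g. called when an approved template is written to the course database
createVleInstance({ fullname: "Introduction to Design", shortname: "DES101-2011", categoryid: 1 });
```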

SharePoint is continuing to show a presence. Again, there are a number of different approaches to using it. For example, in the T-Sparc project, the major workflow developments will be facilitated through SharePoint. They now have a part-time SharePoint developer in place who is working with the team and central IT support. You can find out more at their development blog. SharePoint also plays a significant role in the PiP project; however, the team are also looking at integrations with “bigger” systems such as Oracle, and are developing a number of user interfaces and forms which integrate with SharePoint (and potentially Oracle). As most institutions in the UK have some flavour of SharePoint deployed, there is significant interest in approaches to utilising it most effectively. There are some justifiable concerns relating to its use for document and data management, the latter being seen as not one of its strengths.

As ever, it is difficult to give a concise and comprehensive view of such a complex set of projects, each taking a slightly different approach to their use of technology and the methods they use for system integration. However, many projects have said that the umbrella of course design has allowed them to discuss and develop the use of institutional administration and teaching and learning systems far more effectively than they were able to previously. A growing number of resources from the projects are available from The Design Studio, and you can view all the information we have gathered from the projects in our PROD database.

The Learning Design Support Environment (LDSE) and Curriculum Design

This morning I spent an hour catching up with a seminar the LDSE team gave last week as part of a series of online seminars being run by the Curriculum Design and Delivery support project. You can view the session by following the link at the bottom of this page.

The LDSE (Learning Design Support Environment) is an ESRC/EPSRC TEL funded project involving the Institute of Education, Birkbeck College, University of Oxford, London Metropolitan University, London School of Economics and Political Science, Royal Veterinary College and ALT. The project builds on the work of the previous JISC-funded Phoebe and LPP projects and aims to discover how to use digital technologies to support teachers in designing effective technology-enhanced learning (you can read more on the project summary page or watch this video).

There are a number of overlapping interests between the LDSE and the current JISC-funded Curriculum Design programme, and the session was designed to give an overview of the system and an opportunity to discuss some common areas, such as: how to model principles in educational design, guidelines and toolkits for staff, joining up systems, and how to join up institution-level business processes with learning-level design.

During the tour of the system, Marion Manton explained how the system is underpinned by a learning design ontology which helps enhance the user experience. The system is able to “understand” relationships between learning design properties (such as teaching styles) and provide the user with analyses of, and different views of, the pedagogical make-up of their design/learning experience.

[Screenshot: LDSE pedagogy pie chart]

The system also allows for a timeline view of designs, which again is something practitioners find very useful. There is some pre-population of fields (based on the ontology), but these are customisable. Each of the fields also links to further guidance and advice based on the Phoebe wiki-based approach.

The ontology was created using Protégé, and the team will be making the latest version of the ontology publicly available through the Protégé sharing site.
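The team’s actual ontology is the one to consult once it is published, but purely as an illustration of the approach, a learning design ontology declares classes and relationships of roughly the following kind, which is what lets a tool reason about the pedagogical make-up of a design. All names below are invented for the sketch and are not the LDSE ontology:

```turtle
# Purely illustrative sketch – NOT the actual LDSE ontology. A tool can
# query structures like this to work out, e.g., what proportion of a
# design is discussion-based.
@prefix ld:   <http://example.org/ldse-sketch#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

ld:LearningActivity  a owl:Class .
ld:TeachingApproach  a owl:Class .
ld:Discussion        a owl:Class ; rdfs:subClassOf ld:LearningActivity .
ld:Practice          a owl:Class ; rdfs:subClassOf ld:LearningActivity .

ld:hasTeachingApproach a owl:ObjectProperty ;
    rdfs:domain ld:LearningActivity ;
    rdfs:range  ld:TeachingApproach .
```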

I think the ontology-based approach, the different views it provides, and the guidance the system gives are all major steps forward in terms of developing useful tools to aid practitioners in the design process. I know when I gave a very short demo of the LDSE at a seminar in my department a couple of weeks ago, there was a real sense of engagement from staff. However, in terms of joining up systems and integrating a tool like the LDSE into wider institutional systems and processes, I did feel that there was something missing.

The team did point out that the system can import and export XML, but I’m still unclear exactly how/where/what a system would do with the XML from the LDSE. How could you make it into either a runnable design in your VLE, or an “official” course document for use in a course approval process or a course handbook, or both? One of the final outputs CETIS produced for the Design for Learning Programme was a mapping of programme outputs to IMS LD, and we were able to come up with a number of common fields; this could be a starting point for the team.

There was some discussion about perhaps integrating XCRI; however, the developers in the session didn’t seem to be familiar with it. And to be fair, why should computer scientists know about a course advertising spec? Probably most teachers, and a fair few institutional marketing departments, don’t know about it either. This is one area where hopefully the Design Programme and the LDSE can share experiences. Most of the design projects are in the process of rolling out new course approval documents, so maybe a list of common fields from them could be shared with the LDSE team to help build a generic template. We already know that the XCRI-CAP profile doesn’t include the depth of educational information most of the design projects would like to gather; however, this is starting to be addressed with XCRI being integrated into the CEN MLO work.

Hopefully the LDSE team will now be able to make some inroads into allowing the system to produce outputs which people can start to re-use and share effectively across a number of systems. This has got me thinking about the possibility of the next CETIS Design Bash being based around a number of challenges for importing/exporting course approval documents into systems such as the LDSE and the systems being used by the Design projects. I’d be really interested in hearing any more thoughts around this.

Widget Bash – what a difference two days make

“I got more done here in a day than I would have in three or four days in the office.” Just one of the comments during the wrap-up session at our widget bash (#cetiswb).

And judging from the comments from the other delegates, having two days to work on developing “stuff” is one of the best ways to actually move past the “oh, that’s interesting, I might have a play with that one day” stage to getting something up and running.

The widget bash was the latest in our series of “bash” events, which began many years ago with code bashes (back in the early days of IMS CP) and has evolved to cover learning design with our design bashes. This event was an opportunity to share, explore and extend practice around the use of widgets/apps/gadgets, and to allow delegates to work with the Apache Wookie (Incubating) widget server, which deploys widgets built to the W3C widget specification.

We started with a number of short presentations from most of the projects in the current JISC-funded DVLE programme. Starting with the rapid innovation projects, Jen Fuller and Alex Walker gave an overview of their Examview plugin, then Stephen Green from the WIDE project at Teesside University explained the user-centred design approach they took to developing widgets. (More information on all of the rapid innovation projects is available here.) We then moved to the institutionally focused projects, starting with Mark Stubbs from the W2C project, who took us through their “mega-mash-up” plans. The DOULS project was next, with Jason Platts sharing their mainly Google-based approach. Stephen Vickers from the ceLTIc project then outlined the work they have been doing around tools integration using the IMS LTI specification, and we also had a remote presentation around LTI implementation from the EILE project. Rounding off the DVLE presentations, Karsten Lundqvist from the Develop project shared the work they have been doing, primarily around building an embedded-video Blackboard building block. Mark Johnson (University of Bolton) then shared some very exciting developments coming from the iTEC project, where smartboard vendors have implemented Wookie and have widget functionality embedded in their toolset, allowing teachers to literally drag and drop collaborative activities onto their smartboards at any point during a lesson. Our final presentation came from Alexander Mikroyannidis on the ROLE project, which is exploring the use of widgets and developing a widget store.

After lunch we moved from “presentation” to “doing” mode. Ross Gardler took everyone through a basic widget-building tutorial, and despite dodgy wifi connections and issues with downloading the correct version of Ant, most people seemed to be able to complete the basic “hello world” tutorial. We then split into two groups, with Ross continuing the tutorials and moving on to creating geo-location widgets, and Scott Wilson working with some of the more experienced widget builders in what almost became a troubleshooting surgery. However, his demo of repackaging a pac-man game as a W3C widget did prove very popular.
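For reference, the “hello world” of the W3C widget specification really is this small: a config.xml along the following lines plus an index.html, zipped up with a .wgt extension, is enough to deploy to a Wookie server (the id URI below is just an example):

```xml
<!-- Minimal W3C widget descriptor (config.xml); package this with an
     index.html in a zip renamed to hello.wgt and upload it to Wookie. -->
<widget xmlns="http://www.w3.org/ns/widgets"
        id="http://example.org/widgets/hello"
        version="1.0">
  <name>Hello World</name>
  <content src="index.html"/>
</widget>
```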

The sun shone again on day two, and with delegates more familiar with Wookie, how to build widgets, and potential applications for their own contexts, the serious bashing began.

One of the great things about working with open-source projects such as Apache Wookie (Incubating) is the community sharing of code and problem solving. We had a couple of really nice examples of this in action, starting with the MMU drop-in PC location widget. The team had managed to work out some IE issues that the Wookie team were struggling with (see their blog post), and, inspired by the geo-location templates Ross showed on day one, managed to develop their widget to include geo-location data. Now, if users access the service from a geo-location-aware device, it returns a list of free computers nearest to their real-time location. The team were able to successfully test this on an iPad, Galaxy Tab, iPhone and Android phone. For non-location-aware devices the service returns an alphabetical list. You can try it out here.
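I didn’t see the MMU code itself, but the pattern is straightforward to sketch: the standard browser geolocation API provides the fix, and the widget sorts the labs list by distance, falling back to the alphabetical list otherwise. Everything below (names, data, the haversine helper) is illustrative rather than MMU’s implementation:

```typescript
// Illustrative sketch: sort computer labs by distance from the user's
// current position, or alphabetically on non-location-aware devices.
interface Lab { name: string; lat: number; lon: number; freePcs: number; }

// Great-circle distance (haversine), good enough for campus-scale sorting.
function distanceKm(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1), dLon = toRad(lon2 - lon1);
  const a = Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 6371 * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
}

function render(labs: Lab[]): void {
  document.body.innerHTML =
    "<ul>" + labs.map(l => `<li>${l.name}: ${l.freePcs} free</li>`).join("") + "</ul>";
}

function showLabs(labs: Lab[]): void {
  const alphabetical = () => render([...labs].sort((a, b) => a.name.localeCompare(b.name)));
  if (!("geolocation" in navigator)) { alphabetical(); return; }
  navigator.geolocation.getCurrentPosition(
    pos => render([...labs].sort((a, b) =>
      distanceKm(pos.coords.latitude, pos.coords.longitude, a.lat, a.lon) -
      distanceKm(pos.coords.latitude, pos.coords.longitude, b.lat, b.lon))),
    alphabetical // fall back if the user declines or the fix fails
  );
}
```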

Sam Rowley and colleagues from Staffordshire University decided to work on some DOM and jQuery issues. Whilst downloading the Wookie software they noticed a couple of bugs, so they fixed them and submitted a patch to the Wookie community.

Other interesting developments emerged from discussions around ways of getting data out of VLEs. The team from Strathclyde realised that by using the properties settings in Wookie they could pass a lot of information fairly easily from Moodle to a widget. On day two they converted a Moodle reading-list block to a Wookie widget with an enhanced interface allowing users to specify parameters (such as course code etc). The team have promised to tidy up the code and submit it to both the Wookie and Moodle communities. Inspired by this, Stephen Vickers is going to have a look at developing a PowerLink for WebCT/Blackboard with similar functionality.
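The widget-side half of this pattern is worth sketching. The W3C widget interface exposes host-set properties through widget.preferences (a Storage-like object), so once the Moodle plugin has set instance properties via Wookie, the widget can read them as below. The key names and reading-list URL are hypothetical, not Strathclyde’s actual code:

```typescript
// Sketch: read instance properties the host (e.g. a Moodle plugin
// talking to Wookie) has set, via the W3C widget preferences object.
declare const widget: { preferences: Storage };

const courseCode = widget.preferences.getItem("courseCode") ?? "";
const listUrl = widget.preferences.getItem("readingListUrl") ?? "";

async function renderReadingList(): Promise<void> {
  // Fetch the reading list for whichever course the VLE told us about.
  const res = await fetch(`${listUrl}?course=${encodeURIComponent(courseCode)}`);
  const items: string[] = await res.json();
  document.body.innerHTML =
    "<ul>" + items.map(i => `<li>${i}</li>`).join("") + "</ul>";
}

renderReadingList();
```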

On a more pedagogical note, some members of the Coeducate project worked on developing a widget version of the 8LEM-inspired Hybrid Learning Model from the University of Ulster. By the end of the second day they were well on the way to developing a drag-and-drop sequencer and were also exploring multiuser collaboration opportunities through the Google Wave API functionality which Wookie has adopted.
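For the multiuser side, the Wave gadget API that Wookie adopted gives widgets a shared state object: any participant can submit a delta, and every widget instance gets a callback when the state changes. A rough sketch (the "sequence" key and data format are invented for illustration):

```typescript
// Sketch of shared state via the Wave gadget API (which Wookie
// implements): one participant submits a delta, every widget instance
// receives the state callback and can re-render the sequence.
interface WaveState {
  get(key: string): string | null;
  submitDelta(delta: Record<string, string>): void;
}
declare const wave: {
  getState(): WaveState | null;
  setStateCallback(cb: () => void): void;
};

function addEvent(eventType: string): void {
  const state = wave.getState();
  if (!state) return; // shared state not yet initialised
  const current = state.get("sequence");
  state.submitDelta({ sequence: current ? `${current},${eventType}` : eventType });
}

wave.setStateCallback(() => {
  const seq = wave.getState()?.get("sequence") ?? "";
  console.log("Shared sequence is now:", seq.split(",")); // re-render here
});
```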

Overall there seemed to be a real sense of accomplishment from delegates, who managed to do a huge amount despite having to fight with very temperamental wifi connections. Having two experts on hand proved really useful, as delegates were able to ask the “stupid” – and, more often than not, not so stupid – questions. Having the event run over two days also seemed to be very popular, as it allowed delegates to move from thinking about doing something to actually doing it. It also highlighted the positive side of contributing to an open-source community, and hopefully the Apache Wookie community will continue to see the benefit of increased users from the UK education sector. We also hope to run another similar event later in the year, so if you have any ideas or would like to contribute please let me know.

For another view of the event, I’ve also created a storify version of selected tweets from the event.

Using video to capture reflection and evidence

An emerging trend coming through from the JISC Curriculum Design programme is the use of video, particularly for capturing evidence of, and reflection on, processes and systems. Three of the projects (T-Sparc, SRC, OULDI) took part in an online session yesterday to share their experiences to date.

T-Sparc at Birmingham City University have been using video extensively with both staff and students as part of their baselining activities around the curriculum design process. As part of their evaluation processes, the SRC project at MMU have been using video (flipcams) to get student feedback on their experiences of using e-portfolios to help develop competencies. And the OULDI project at the OU have been using video in a number of ways to get feedback from their user community around their experiences of course design and the tools that are being developed as part of the project.

There were a number of commonalities identified by each of the projects. On the plus side, the immediacy and authenticity of video was seen as a strength, allowing, in the case of SRC, the team to integrate student feedback much earlier. The students themselves also liked the ease of use of video for providing feedback. Andrew Charlton-Perez (a lecturer who is participating in one of the OULDI pilots) has been keeping a reflective diary of his experiences. This is not only a really useful, shareable resource in its own right, but Andrew himself pointed out that he has found it a really useful self-reflective tool which has helped him re-engage with the project after periods of non-involvement. The T-Sparc team have been particularly creative in using video clips as part of their reporting process, both internally and with JISC. Hearing things straight from the horse’s mouth, so to speak, is very powerful and engaging. Speaking as someone who has to read quite a few reports, this type of multimedia reporting makes for a refreshing change from text-based reports.

Although hosting of video is becoming relatively straightforward and commonplace through services such as YouTube and Vimeo, the projects have faced some perhaps unforeseen challenges around consistency of file formats which can work both in external hosting sites and internally. For example, the version of Windows streaming used institutionally at BCU doesn’t support the native MP4 file format from the flip-cams the team were using. The team are currently working on getting a codec update and have also invested in additional storage capacity. At the OU, the team are working with a number of pilot institutions who are supplying video and audio feedback in a range of formats from AVI to MP3 and almost everything in between, some of which need considerable time to encode into the systems the OU team are using for evaluation. So the teams have found that there are some additional unforeseen resource implications (both human and hardware) when using video.
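A typical normalisation step for this kind of mixed input, purely as an example of the extra work involved, is transcoding everything to one format with a tool such as ffmpeg before it goes into the evaluation or streaming systems:

```sh
# Example only: normalise a clip to H.264 video / AAC audio in an MP4
# container with ffmpeg (assuming it is installed) before upload.
ffmpeg -i feedback-clip.avi -c:v libx264 -c:a aac feedback-clip.mp4
```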

Another common issue to come through from the presentations and discussion was around data storage. The teams are generating considerable amounts of data, much of which they want to store permanently – particularly if it is being incorporated into project reports etc. How long should a project be expected to keep evaluative video evidence?

However, despite these issues, there seemed to be a general consensus that the strengths of using video made up for some of the difficulties it brought with it. The teams are also developing experience and knowledge in using software such as Xtranormal and Overstream for creating anonymised content and subtitles. They are also creating a range of documentation around permissions for use of video, which will be shared with the wider community.

A recording of the session is available from The Design Studio.

Personal publishing – effective use of networks or just noise?

If you follow my twitter stream you may have noticed that every day at about 9am you’ll see a tweet with a link to my daily paper and a number of @mentions of people featured in it. You may even have been one of those @mentions.

I’ve actually had a paper.li account since last year, but it’s only recently that I’ve set the “automagic” tweet button live. This was partly because I’ve found it quite interesting following links to other paper.li sites where I’ve been mentioned, and partly a bit of a social experiment to see (a) if anyone noticed and (b) what reactions, if any, it would elicit. In fact, this post is a direct response to Tore Hoel’s tweet at the weekend asking if I was going to reflect on its use.

Well, here goes. Being one of those people who likes to play (and follows every link Stephen Fry tweets), I was intrigued when I first came across paper.li and signed up. For those of you unfamiliar with the service, it basically pulls in links from your twitter feed, categorises them and produces an online paper. Something – and I’m not sure what it was – prevented me from making the links public from the outset. On reflection, I think I wanted to see how the system worked, and whether it actually provided something useful.

There’s no editorial control with the system. It selects and classifies articles and links, randomly generates your online paper, and (if you choose) sends a daily tweet from your twitter account with a URL and @mentions for selected contributors. Sometimes these are slightly odd – you might get mentioned because you tweeted a link to an article in a “proper” paper, a blog entry or a link to a flickr stream. It’s not like getting a by-line in a proper paper by any stretch of the imagination. The website itself has an archive of your paper, and there’s also the ability to embed a widget into other sites such as blogs. Other services I’ve used which utilise twitter (such as storify) generate more relevant @mention tweets, i.e. only for those you actually quote in your story. You also have the option not to send an auto-tweet – something I missed the first time I used it, and so tweeted myself about my own story :-)

So, without editorial control, is this service useful? Well, like most things in life, it depends. Some people seem to find it irritating, as it doesn’t always link to things they have actually written, rather links they have shared. So for the active self-promoter it can detract from getting traffic to their own blog/website. Actually, that’s one of the things I like – it collates links that I often haven’t seen, and I can do a quick skim and scan and decide what I want to read more about. Sometimes they’re useful, sometimes not. But on the whole that’s the thing with twitter too – some days really useful, others a load of footballing nonsense. I don’t mind being quoted by other people using the service either. It doesn’t happen that often, I don’t follow too many people, and – guess what – sometimes I don’t actually read everything in my twitter stream, and I don’t follow all the links people post. Shocking confession, I know! However, when you post to twitter it’s all publicly available, so why not collate it? If it’s a bit random, then so be it. But some others see it differently.

If you don’t like being included in these things then, like James Clay, get yourself removed from the system.

There have been a couple of other instances where I have found the service useful too. For the week after the CETIS10 conference last year, we published the CETIS10 daily via the JISC CETIS twitter account. As there was quite a lot of post-conference activity on blogs etc., it was another quite useful collation tool – but only for the short period when there was enough activity related to the conference hashtag for the content to be nearly always conference-related. Due to the lack of editorial control, I don’t think a daily JISC CETIS paper.li would be appropriate: the randomness that I like in my personal paper isn’t really appropriate at an organisational communication level.

I recently took part in the LAK11 course, and one of the other participants (Tony Searle) set up a paper.li using the course hashtag. I found this useful as it quickly linked me to other students, articles etc. which I might not have seen or connected with, and vice versa. Again, the key here was having enough relevant activity. Tore asked if it would be useful for projects. I’m in two minds about that – on the one hand it might be, in terms of marketing and getting followers; but again, the lack of editorial control might lead to promotion of something not as closely related to the project as you would like. If, however, you have an active project community then it might work.

For the moment the Sheila MacNeill daily will continue but I’d be interested to hear other thoughts and experiences.

The University of Southampton opening up its data

The University of Southampton have just launched their Open Data Home site, providing open access to some of the University’s administrative data.

The Open Data Home site provides a number of RDF data sets, from teaching room features to on-campus bus stops, a range of apps showing how the University itself is using the data, and its own SPARQL endpoint for querying the data. As well as links to presentations from linked data luminaries Tim Berners-Lee and Nigel Shadbolt, the site also contains a really useful FAQ section. This question in particular is one I’m sure lots of institutions will be asking – and what a great answer:

“Aren’t you worried about the legal and social risks of publishing your data?
No, we are not worried. We will consider carefully the implications of what we are publishing and manage our risk accordingly. We have no intention of breaking the UK Data Protection Act or other laws. Much of what we publish is going to be data which was already available to the public, but just not as machine-readable data. There are risks involved, but as a university — it’s our role to try exciting new things!”
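If you want to explore the data yourself, having a SPARQL endpoint means you can start with something as generic as the query below – the rdfs:label predicate here is a standard one, not anything specific to the Southampton datasets, for which you would consult their own documentation:

```sparql
# A generic starter query for any SPARQL endpoint: list a sample of
# resources and their human-readable labels.
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?resource ?label
WHERE { ?resource rdfs:label ?label }
LIMIT 10
```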

Let’s hope we see many more Universities following this example in the very near future.

IMS Global Learning Consortium announces release of Common Cartridge v1.1

IMS has announced the final release of Common Cartridge v1.1.

According to the press release: “The Common Cartridge standard provides a means for interoperability, reusability, and customization of digital learning content, assessments, collaborative discussion forums, and a diverse set of learning applications. The standard offers both end-users and vendors the possibility of greater choice in both content and platforms. This latest version of Common Cartridge includes support for Basic Learning Tools Interoperability which provides a standard way of integrating rich learning applications or premium content with platforms such as Learning Management Systems, portals, or other systems.”
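For anyone who hasn’t looked inside one, a cartridge is a zip file built on IMS Content Packaging: an imsmanifest.xml organises the resources, and with v1.1 a Basic LTI tool can be referenced as a resource in its own right. The sketch below is heavily simplified (namespaces and required metadata omitted) and purely illustrative of that structure:

```xml
<!-- Simplified, namespace-free sketch of the IMS Content Packaging
     structure a Common Cartridge builds on; real cartridges must use
     the CC schemas and required metadata. -->
<manifest identifier="example-cartridge">
  <organizations>
    <organization>
      <item identifierref="res1"><title>Week 1 content</title></item>
      <item identifierref="res2"><title>External tool</title></item>
    </organization>
  </organizations>
  <resources>
    <resource identifier="res1" type="webcontent" href="week1/index.html"/>
    <resource identifier="res2" type="imsbasiclti_xmlv1p0" href="tool.xml"/>
  </resources>
</manifest>
```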

The standard is available for download from the IMS website.