Design Bash: moving towards learning design interoperability

Question: How do you get a group of projects with a common overarching goal, but with disparate outputs, to share those outputs? Answer: hold a design bash. . .

Codebashes and CETIS are quite synonymous now, and they have proved to be an effective way for our community to feed back to specification bodies and to increase our own knowledge of how specs actually need to be implemented to allow interoperability. So we decided that, with a few modifications, the general codebash approach would be a great way for the current JISC Design for Learning Programme projects to share their outputs and start to get to grips with the many levels of interoperability the varied outputs of the programme present.

To prepare for the day the projects were asked to submit resources which fitted into four broad categories (tools, guidelines/resources, inspirational designs and runnable designs). These resources were tagged into the programme’s del.icio.us site and, using the DFL SUM (see Wilbert’s blog for more information on that), we were able to aggregate resources and use RSS feeds to pull them into the programme wiki. Over 60 resources were submitted, offering a great snapshot of the huge level of activity within the programme.
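The mechanics of that aggregation are simple enough to sketch. The snippet below is a minimal, hypothetical illustration of grouping tagged resources pulled from an RSS 2.0 feed by category — the feed content and category names are invented stand-ins, not real programme data or the actual DFL SUM:

```python
# Minimal sketch of pulling tagged resources out of an RSS 2.0 feed and
# grouping them by category, in the spirit of aggregating tagged del.icio.us
# resources into a wiki page. The feed below is illustrative only.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Design for Learning resources</title>
    <item>
      <title>Example runnable design</title>
      <link>http://example.org/design1</link>
      <category>runnable-designs</category>
    </item>
    <item>
      <title>Example planning tool</title>
      <link>http://example.org/tool1</link>
      <category>tools</category>
    </item>
  </channel>
</rss>"""

def resources_by_category(feed_xml):
    """Group (title, link) pairs by their <category> tag."""
    root = ET.fromstring(feed_xml)
    grouped = {}
    for item in root.iter("item"):
        category = item.findtext("category", default="uncategorised")
        grouped.setdefault(category, []).append(
            (item.findtext("title"), item.findtext("link"))
        )
    return grouped

print(resources_by_category(SAMPLE_FEED))
```

In practice a feed would be fetched over HTTP rather than embedded as a string, but the grouping logic is the same.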

One of the main differences between the design bash and the more established codebashes was the fact that there wasn’t really much code to bash. So we outlined three broad areas of interoperability to help begin conversations between projects. These were:
* conceptual interoperability: the two designs or design systems won’t work together because they make very different assumptions about the learning process, or are aimed at different parts of the process;
* semantic interoperability: the two designs or design systems won’t work together because they provide or expect functionality that the other doesn’t have, e.g. a learning design that calls for a shared whiteboard presented to a design system that doesn’t have such a service;
* syntactic interoperability: the two designs or design systems won’t work together because required or expected functionality is expressed in a format that is not understood by the other.
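The semantic case in particular boils down to a simple mismatch check. Purely as a hypothetical sketch (the service names and the function are invented for illustration, not part of any specification), it can be thought of like this:

```python
# Hypothetical sketch of the semantic-interoperability check described above:
# a design declares the services it expects, a system declares the services
# it provides, and any shortfall means the two won't work together.
# The service names here are illustrative, not from any real specification.

def missing_services(design_requires, system_provides):
    """Return the services a design needs that the system cannot supply."""
    return sorted(set(design_requires) - set(system_provides))

design = {"forum", "shared-whiteboard", "file-upload"}
system = {"forum", "file-upload", "chat"}

# Reproduces the shared-whiteboard example from the list above.
print(missing_services(design, system))
```

Conceptual and syntactic mismatches are harder to mechanise, which is why they needed a day of conversation rather than a script.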

So did it work? Well, in a word, yes. As the programme was exploring general issues around designing for learning, and not just looking at, for example, the IMS LD specification, there wasn’t as much ‘hard’ interoperability evidence as one would expect from a codebash. However, there were many levels of discussion between projects. It would be nigh on impossible to convey the depth and range of discussions in this article, but using the three broad categories above, I’ll try to summarize some of the emerging issues.

In terms of conceptual interoperability, one of the main discussion points was the role of context in designing for learning. Was the influence coming from the bottom up or the top down? This has a clear effect on the way projects have been working, the tools they are using and the outcomes produced. In some cases the tools didn’t really fit with the pedagogical concepts of certain projects, which led to a discussion around the need to start facilitating student design tools – what would these tools look like, and how would they work?

In terms of semantic interoperability there were wide-ranging discussions around the levels of granularity of designs, from the self-contained learning object level to the issues of extending and embellishing designs created in LAMS by using IMS LD and tools such as Reload and SLeD.

At the syntactic level there were a number of discussions, not just around the more obvious interoperability issues between systems such as LAMS and Reload, but also around the use of wikis and how best to access and share resources. It was good to hear that some of the projects are now thinking of looking at the programme SUM as a possible way to access and share resources. There was also a lot of discussion around the incorporation of course description specifications such as XCRI into the pedagogic planner tools.

Overall a number of key issues were teased out over the day, with lots of firm commitment shown by all the projects to continue to work together and increase all levels of interoperability. There was also the acknowledgement that these discussions cannot take place in a vacuum and we need to connect with the rest of the learning design community. This is something which the CETIS support project will continue during the coming months.

More information about the Design Bash and the programme in general can be found on the programme support wiki.

Winning Learning Objects Online

The winning entries in the 2007 ALT-C Learning Object Competition are now available to view on the Intrallect website.

The winners are:

    *1st Prize – All in a day’s work (Colin Paton, Social Care Institute for Excellence, Michael Preston-Shoot, University of Luton, Suzy Braye, University of Sussex and CIMEX Media Ltd)
    *2nd Prize – Need, Supply and Demand (Stephen Allan and Steven Oliver, IVIMEDS)
    *3rd Prize – Enzyme Inhibition and Mendelian Genetics (Kaska Hempel, Jillian Hill, Chris Milne, Lynne Robertson, Susan Woodger, Stuart Nicol, Jon Jack, Academic Reviewers, CeLLS project, Dundee University, Napier University, Interactive University and Scottish Colleges Biotechnology Consortium)

Shortlisted entries (in no particular order):

    *Photographic composition (David Bryson, University of Derby)
    *Human Capital Theory (Barry Richards, Dr. Joanna Cullinane, Catherine Naamani, University of Glamorgan)
    *Tupulo Array Manipulation (Tupulo project team at Dublin City University, Institute of Technology Tallaght, Institute of Technology Blanchardstown, Ireland, System Centros de Formacion, Spain, Societatea Romania pentru Educatie Permanenta, Romania)
    *Introduction to Pixel Group Processing (Peter McKenna, Manchester Metropolitan University).

Getting virtual – joint Eduserv/CETIS meeting, 20/09/07

Last Thursday (20 September) Eduserv and CETIS held a joint event at the Institute of Education’s London Knowledge Lab, primarily to showcase the four projects to which Eduserv has awarded its annual research grant. The common theme for the projects is the use of Second Life.
A common complaint, or should I say issue :-) , with using Second Life in many institutions is actually getting access to it from institutional networks. After some frantic efforts by Martin Oliver (and the judicious use of cables) we were able to connect to Second Life so our presenters could give some in-world demos. However, the irony of almost not being able to do so from the wireless network wasn’t lost on any of us.

Andy Powell started the day with an overview of Eduserv and the rationale behind this year’s research grant. He then gave his view on Second Life through the use of his extensive (and growing) wardrobe of Second Life t-shirts. The ability to create things is a key motivator for most users of virtual worlds such as SL, and these worlds can be seen as the ultimate in user-generated content. However, there are many issues that need to be explored in relation to the educational use of spaces like SL, such as the commercial nature of SL, and what the effects of the ban on gambling might be. What will be the effect of the increasing use of voice? It’s relatively simple to change your ‘persona’ just now when communication is text based, but the increasing use of real voices will have a dramatic impact and could fundamentally affect some users within the space. There is a huge amount of hype around SL; however, Andy proposed that in education we are a bit more grounded and are starting to make some inroads into the hype – which is exactly what the Eduserv projects have been funded to do.

Lawrie Phipps followed with an overview of some JISC developments related to virtual worlds. Although JISC are not funding any projects directly working in Second Life this may change in the near future as there is currently a call in the users and innovations strand of the elearning programme which closes in early October. The Emerge project (a community to help support the users and innovations strand) does have an island in Second Life and there is a bit of activity around that. Lawrie did stress that it is JISC policy to fund projects which have clear, shareable institutional and sectoral outputs and aren’t confined to one proprietary system.

We then moved to the projects themselves, starting with Hugh Denard (King’s College, London) on the Theatron project. In a fascinating in-world demo, Hugh took us to one of the 20 theatres the project is going to create in-world. Building on a previous web-based project, Second Life is allowing the team to extend the vision of the original project into a 3-D space. In fact, the project has been able to create versions of sets which until now had existed only as drawings, never realised within the set designers’ lifetimes. Hugh did point out a potential pitfall of developing such asset-rich structures within Second Life – they take up lots of space. Interestingly, the team have chosen to build their models outside SL and then import and ‘tweak’ them in-world. This of course highlights the need to think about issues of interoperability and asset storage.

Ken Kahn (University of Oxford) followed, giving us an outline of the Modelling for All project he is leading. Building on the work of the Constructing2Learn project (part of the current JISC Design for Learning programme), Ken and his team are proposing to extend the functionality of their toolset so that scripts of models of behaviours constructed by learners can be exported and then realised in virtual worlds such as Second Life. The project is at a very early stage; Ken gave an overview of their first seven weeks, and then a demo of their existing web-based modelling tool.

We started again after lunch with our hosts, Diane Carr and Martin Oliver (London Knowledge Lab), talking about their project, “Learning in Second Life: convention, context and methods”. As the title suggests, this project is concerned with exploring the motivations and conventions of virtual worlds such as Second Life. Building on previous work undertaken by the team, the project is going to undertake some comparative studies between World of Warcraft and Second Life to see what the key factors are in providing successful online experiences in such ‘worlds’, and also to see what lessons need to be taken into mainstream education when using such technologies.

The final project presentation came from Daniel Livingstone (University of Paisley). Daniel’s “Learning support in Second Life with Sloodle” project is building links between the open source VLE Moodle and SL – hence ‘Sloodle’. Once again we were taken in-world on a tour of their Sloodle site as Daniel explained his experiences with using SL with students. Daniel has found that students do need a lot of support (or scaffolding) to be able to exploit environments such as SL within an educational context – even the digital natives don’t always ‘get’ SL. There are also issues in linking virtual environments with VLE systems – authentication being a key issue even for the open source Moodle.

The day ended with a discussion session chaired by Paul Hollins (CETIS). The discussion broadened out from the project-specific focus of the presentations into a more general discussion about where we are with Second Life in education. Does it (and other similar virtual worlds) really offer something new for education? Are the barriers too high, and can we prove the educational benefits? Should we make students use this type of technology? Unsurprisingly, it seemed that most people in the room were convinced of the educational benefits of virtual worlds, but as with all technology it should only be used as and when appropriate. Issues of accessibility and FE involvement were also brought up during the session.

Personally I found the day very informative and reassuring – practically all the speakers noted their initial disappointment and lack of engagement with Second Life – so I’m now going to go back in-world and try to escape from orientation island :-) It will be interesting to follow the developments of all the projects over the coming year.

Further information about the day and copies of the presentations are available from the [http://wiki.cetis.org.uk/EduservCETIS_20Sep2007 EC wiki].

The problem with pedagogic planners . . .

. . .is the fact we can’t decide what we want them to be and who and what they are really for. Although this is said with my tongue firmly in my cheek, I’ve just been at a meeting hosted by Diana Laurillard (IOE) and James Dalziel (LAMS Foundation) where a group of people involved in developing a number of tools which could be collectively described as “pedagogic planners” spent the day grappling with the issues of what exactly a pedagogic planner is and what makes it different from any other kind of planning/decision making tool.

Unsurprisingly we didn’t arrive at any firm conclusions – I did have to leave early to catch my (delayed) flight home, so I missed the final discussion. However, the range of tools/projects demonstrated clearly illustrated that there is a need for such tools; and the drivers are coming not just from funders such as the JISC (with their Phoebe and London projects), but from teachers themselves, as demonstrated by Helen Walmsley (University of Staffordshire) with her best practice models for elearning project.

The number of projects represented showed the growing international interest in, and need for, some kind of pre-(learning)design process. Yet key questions remain unanswered in terms of the fundamental aims of such tools. Are they really about changing practice by encouraging and supporting teachers to expand their knowledge of pedagogic approaches? Or is this really more about some fundamental research questions for educational technologists and their progression of knowledge around e-learning pedagogies? What should the outputs of such tools be – XML, Word documents, a LAMS template? Is there any way to begin to draw out some common elements that can then be used in learning systems? Can we do the unthinkable and actually start building schemas of pedagogic elements that are common across all learning systems? Well, of course I can’t answer that, but there certainly seems to be a genuine willingness to continue the dialogue started at the meeting, to explore these issues further and, most importantly, a commitment to building tools that are easy to use and useful to teachers.

SUMs = the eFramework?

Over the last year or so, as the vision of the international eFramework has started to take shape, I’ve been hearing more and more about SUMs (service usage models). I went along to the SUMs workshop to see if I could find out exactly what a SUM is.

The event was run by the international eFramework, so we had the benefit of having Dan Rehak (consultant to the eFramework), Phil Nichols (one of the eFramework editors) and Lyle Winton (of DEST, who has been involved in creating SUMs) facilitating the workshop. This was particularly useful (for me anyway) as it helped to distinguish the aims of the international eFramework from those of the partners involved. The partners in the international eFramework have common goals of interoperability and the use of service-orientated approaches, but each country has its own priorities and interpretations of the framework. The eFramework does not mandate any one approach; it should be seen as a reference point for developers, where proven technical interoperability scenarios are documented using a set of standard (hotly debated – for example, ‘reference model’ has been blacklisted) terms. (Copies of Dan and Lyle’s presentations are available from the e-Framework website.)

Although the aim of the day was to actually create some SUMs, we started with an overview from Dan Rehak on the eFramework and SUMs. Services provide the technical infrastructure to make things work – they describe interfaces between applications. A SUM is the description of a combination of services which meets a specific requirement (or business need). So in some respects a SUM is analogous to a blueprint, as it (should) describe the overall ‘business story’ (i.e. what it is supposed to do), with a technical description of the process(es) involved, e.g. the services used, the bindings for service expressions, and then examples of service implementations. Ideally a SUM should be developed by a community (e.g. JISC or a subset of JISC-funded projects working in a specific domain area). That way, it is hoped, the best of top down (in terms of describing high level business need) and bottom up (in terms of having real instances of deployment) can be combined. I can see a role for JISC CETIS SIGs in helping to coordinate our communities in the development of SUMs.

At this point no official modelling language has been adopted for the description of SUMs. To an extent this will probably evolve naturally as communities begin to develop SUMs and submit them to the framework. Once a SUM has been developed it can be proposed to the eFramework SUM registry and hopefully it will be picked up, reused and/or extended by the wider eFramework community.

Some key points came out of a general discussion after Dan’s presentation:
*SUMs can be general or specific – but have to be one or the other.
*SUMs can be described in terms of other SUMs (particularly in the case of established services such as OpenID and Shibboleth).
*SUMs can be made up of overlapping or existing SUMs.
*Hopefully some core SUMs will emerge which will describe widespread common reusable behaviours.

So what are the considerations for creating a SUM? Well, there are three key areas – the description, the functionality and the structure. The description should provide a non-technical, narrative or executive summary of what the SUM does, what problem it solves and its intended function. The functionality should outline the individual functions provided within the SUM – but with no implementation details. The structure should give the technical view of the SUM as a whole, illustrating how the functions are integrated, e.g. services, data sources, coordination of services; it can also include a diagrammatic illustration of any coordination. There are a number of SUMs available from the eFramework website, as well as more detailed information on actually developing SUMs.
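To make those three parts concrete, here is an entirely hypothetical sketch of a SUM as a data structure; the eFramework had adopted no official modelling language at the time, so the field names and the example content below are my own invention, not any official format:

```python
# Illustrative (hypothetical) data structure for the three parts of a SUM:
# description, functionality and structure. Not an official eFramework format.
from dataclasses import dataclass, field

@dataclass
class SUM:
    # Non-technical narrative: what it does and what problem it solves.
    description: str
    # Individual functions provided, with no implementation detail.
    functionality: list = field(default_factory=list)
    # Technical view: services, data sources, coordination of services.
    structure: dict = field(default_factory=dict)

example = SUM(
    description="Aggregate tagged resources and republish them as a single feed.",
    functionality=[
        "collect resource feeds",
        "merge and de-duplicate entries",
        "publish a combined feed",
    ],
    structure={
        "services": ["feed-fetch", "merge", "publish"],
        "data_sources": ["tagged bookmark feeds"],
    },
)
print(example.description)
```

The point of the separation is that the description and functionality can be agreed by a community before anyone commits to a particular structure or implementation.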

The main part of the workshop was devoted to group working where we actually tried to develop a SUM from a provided scenario. Unsurprisingly each group came up with very different pseudo SUMs. As we worked through the process the need for really clear and concise descriptions and clear boundaries on the number of services you really need became glaringly obvious. Also, although this type of business process may be of use for certain parts of our community, I’m not sure if it would be of use for all. It was agreed that there is a need for best practice guides to help contextualise the development and use of SUMs for different domains/communities. However that is a bit of a chicken and egg situation at the moment.

One very salient point was made by Howard Noble (University of Oxford) when he pointed out that maybe what we should be documenting are ‘anti-SUMs’, i.e. the things that we do now and the reasons why we take non-SOA approaches in certain circumstances. Hopefully, as each community within the eFramework starts to build SUMs, the potential benefits of collecting, documenting and sharing ways for people, systems and services to interoperate will outweigh other approaches. But what is needed most of all (imho) are more real SUMs, so that developers can really start to see the usefulness of the eFramework SUMs approach.

Free tools to create online games and animations – no coding required

Despite the recent shenanigans surrounding the BBC Jam project, the corporation continues to be a key player in interactive web developments – even if just as a conduit for providing information. Yesterday (15th May) the BBC website posted a story about Scratch, a free set of tools which allows anyone to “create their own animated stories, video games and interactive artworks” without having to write any code. Developed by the MIT Media Lab, the site is primarily aimed at children, but that’s no reason for grown-ups not to use it, particularly in an educational setting. One of the downsides of being featured on the BBC site is that the Scratch website has been inundated with traffic and so isn’t working to capacity (I still haven’t been able to access it). However, the blogosphere is full of the story and there are a number of videos on YouTube about it. Looking at these, it certainly does look like a fairly intuitive system. Tony Hirst has an interesting article on it too on his blog.

Joint Pedagogy Forum and EC SIG meeting, 26 April

Liverpool Hope University hosted the joint meeting of the Pedagogy Forum and EC SIG last Thursday. The meeting focused on design for learning, with presentations from a number of the projects involved in the current JISC Design for Learning Programme.

The team from Liverpool Hope started the meeting with an overview of their experiences of using IMS Learning Design with teachers and students. Mark Barrett-Baxendale, Paul Hazelwood and Amanda Oddie explained the work they are doing in the LD4P (learning design for practitioners) project, where they are working on a more user-friendly interface for the RELOAD LD editor; the DesignShare project (part of the current JISC tools and demonstrators projects), where they are linking a learning design repository into RELOAD; and the D4LD (developing for learning design) project, where they are working on improving the presentation of the OU learning design player. The team are working with practitioners in both HE and FE (and are running a number of courses with students) and so far have received positive feedback about using learning design. The screenshots they showed of the interface they are working on for RELOAD certainly looked much more user friendly and intuitive. The team are also looking at the role of web 2.0 in the DesignShare project, as it will link RELOAD and the Opendoc repository using widget-like technology.

Professor Diana Laurillard then gave us an overview of the London Pedagogic Planner tool. This system, although still very much a prototype, has been designed to help scaffold the planning process for staff. Taking a process driven approach, the system prompts the user to input all the factors relating to a course/session/lesson design, i.e. room availability, number of teaching hours, number of student hours available outside the classroom. It is hoped that this scaffolded approach to planning can help to exploit the pedagogic value of learning technology, as it allows users to ensure that their designs (whatever their pedagogical approach and whatever technology they exploit) are workable within the institutional constraints they have to sit in. An important focus of the tool is to put control back into the hands of teachers and so, in turn, help the wider teaching community come to more informed decisions about how to integrate learning technology into their own practice.

After lunch we were joined remotely by James Dalziel – thanks to James for staying up very late due to the audio gremlins having lots of fun in the morning :-) . James gave us an overview of LAMS v2 and some of his thoughts on the need and potential for pedagogic planners. LAMS v2 is based on a new modular architecture which the team hope will stand them in good stead for the foreseeable future. Whilst retaining the core concepts of the original system, this version introduces a number of new and improved features including: improved support for branching; live editing of sequences – no more runtime lock-in; and the ability to export sequences as IMS LD Level A. (There’s no support for importing IMS designs as yet, but it’s on the team’s to-do list.) One other interesting feature is the inclusion of a portfolio export. Basically, this feature allows a student to keep a record of all their activities: the system creates and exports a zip file which contains HTML copies of the activities. Through work with the New Zealand Ministry of Education, v2 can now provide joint classes using the Shibboleth federation system. In terms of pedagogic planners, James outlined his thoughts on current needs. He believes that we need lots of different versions of planners and more research on the decision making process for designers and teachers. This is obviously an area of increasing focus, but hopefully the two JISC planners are making a good start in this area and it’s something that will be discussed at the LAMS UK conference in July.
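The portfolio export idea is worth a quick sketch. The following is a minimal, hypothetical illustration of bundling HTML copies of a learner's activities into a single zip file, in the spirit of the LAMS v2 feature described above; the activity names and content are invented, and this is in no way LAMS code:

```python
# Minimal sketch of a portfolio export like the one described for LAMS v2:
# HTML copies of a learner's activities bundled into one zip file.
# Activity names and content are invented for illustration; not LAMS code.
import io
import zipfile

def export_portfolio(activities):
    """Write each activity as an HTML file inside an in-memory zip archive."""
    buffer = io.BytesIO()
    with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as archive:
        for name, record in activities.items():
            html = "<html><body><h1>%s</h1><p>%s</p></body></html>" % (name, record)
            archive.writestr("%s.html" % name, html)
    return buffer.getvalue()

portfolio = export_portfolio({
    "forum-discussion": "Posted three replies on assessment design.",
    "quiz-attempt": "Scored 8/10 on the genetics quiz.",
})
print("exported %d bytes" % len(portfolio))
```

The appeal of a plain zip of HTML is that the student's record stays readable with nothing more than a browser, independent of the system that produced it.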

Marion Manton and David Balch then gave us an overview of the Phoebe planner tool they have been developing at the University of Oxford. In contrast to the London planner tool, Phoebe has taken a wiki-based approach, with more emphasis given to providing advice and support on potential pedagogic approaches. There is no reason why the two systems couldn’t be used together, and that is something both projects are exploring. As with the London planner, Phoebe is now entering phase 2 and is looking at ways to improve the interface for users. David and Marion outlined the approaches they have been considering, which hopefully will provide looser connections between the content in the wiki and the notes that users create when they are using the system. They are hoping to take a more drag-and-drop, web 2.0 approach so that users can feel more in control of the system.

Dai Griffiths from the University of Bolton rounded up the day by giving an overview of his impressions of the learning design space. Dai has been involved in many projects relating to IMS Learning Design – notably the UNFOLD project. He began by questioning whether IMS is agile enough to take advantage of the web 2.0 world; increasingly, specifications such as content packaging and learning design seem to be at odds with developments in social software. He then went on to highlight some of the confusion that exists around the purpose of IMS Learning Design. Because it is both a modelling language and an interoperability system, there is still confusion about the purpose of the specification, and projects often focus on one area and forget about the other. There is still a need for interoperability, but perhaps now we need to move to thinking about looser couplings between content, activities and infrastructure, and not try to do everything by following one complex specification. IMS LD as it currently stands deals with formal education systems, but what about informal learning – can it play a role there too? This is something the TenCompetence project is investigating, and they are hoping to have a number of extensions to RELOAD launched later this year which start to address that space. Dai closed by reiterating the need for community engagement and the sustaining and building of contacts within the learning design space, which is one of the aims of the support wiki for the Design for Learning programme.

Thanks to everyone who presented and attended for making it such a worthwhile meeting. Also a big thank you to everyone at Liverpool Hope for being such generous hosts and having the patience to work through all our technological gremlins. Copies of all the presentations from the day are available @ http://wiki.cetis.org.uk/April_2007_Meeting.

Communities more important than models in developing learning design (thoughts on the Mod4L final report)

The final report from the Mod4L project is now available online. The project is part of the current JISC Design for Learning Programme.

The aim of Mod4L was to “develop a range of practice models that could be used by practitioners in real life contexts and have a high impact on improving teaching and learning practice.” The project adopted a working definition of practice model as “common but decontextualised, learning designs that are represented in a way that is usable by practitioners (teachers, managers etc).”

The report is structured into six main categories covering issues of representation (which I talked about in a previous post), granularity, an evaluation of several types of representation, the sharing of learning designs and the role of taxonomies.

The report highlights the difference between design as a product and design as a process. It questions the current metaphor for learning design (particularly IMS Learning Design) as being too product driven and reliant on stable components and the linkages between them, which often don’t accurately reflect the unstable processes that take place in most teaching and learning situations. It goes on to suggest that we may be better off thinking of design for learning as a loosely coupled system, which can provide access to the stable components of a design but also allow for the richer, more adaptive processes that take place within a learning context. A map of the London Underground is given as an analogy: the map can show you (the learner) the entry points and how to get from point A to point B (even giving you some degree of flexibility, or personalisation, of route), but what the map doesn’t give you is the other information which could make your journey much more successful, e.g. letting you know that it might be quicker to get off the train one stop earlier, or (as a teacher) how to drive the train.

A detailed evaluation of a number of practice types is contained within the report; however, it also points out that “providing support for communities may be more important in changing practice, than developing particular representation types which will, inevitably, have limited audiences, and have prescribed forms.” Let’s hope that the current projects within the programme can help to build and sustain these types of communities.