It only takes about half an hour . . .

said Tony Hirst as he took us on a mini journey through just a few of the mashups he has been creating with OU OpenLearn content and (generally) freely available tools, at the Mashup Market session at the JISC-CETIS conference yesterday. From creating the almost obligatory Google map, to mini federated searches, to scraping content for video, audio and URLs, to daily feeds of course content, Tony showed just some of the possibilities mash-up technologies can offer educators. He also highlighted how (relatively) simple these things now are and how little time (generally half an hour) they take. He did concede that some half hours took a bit longer than others :-) A number of the tools Tony talked about are listed on the session's conference webpage.

Of course, having well-structured, open content has helped enormously in allowing someone like Tony to begin to experiment. In terms of reuse, the content scraping Tony has been doing was really exciting, as it showed a simple way to get at the things people (I think) would actually want to reuse – videos, URLs and so on. Also, using an embedded iframe currently lets you display just the video without any surrounding advertising, though this may well change over time as advertising becomes more embedded into the content itself.
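To give a feel for how little machinery this kind of scraping needs, here is a minimal sketch in Python using only the standard library. The page fragment, file paths and helper names are invented for the example – this is not Tony's actual toolchain, just an illustration of the pattern:

```python
from html.parser import HTMLParser


class MediaLinkScraper(HTMLParser):
    """Collect href/src attributes that point at media files."""

    MEDIA_EXTENSIONS = (".mp4", ".mov", ".mp3", ".ogg")

    def __init__(self):
        super().__init__()
        self.media_urls = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag
        for name, value in attrs:
            if name in ("href", "src") and value and \
                    value.lower().endswith(self.MEDIA_EXTENSIONS):
                self.media_urls.append(value)


def embed_iframe(url, width=480, height=360):
    """Wrap a media URL in an iframe so only the player is displayed."""
    return f'<iframe src="{url}" width="{width}" height="{height}"></iframe>'


# Hypothetical fragment of a course page
page = '<p>Lecture: <a href="/media/unit1.mp4">video</a> '\
       'and <a href="/notes.html">notes</a></p>'
scraper = MediaLinkScraper()
scraper.feed(page)
print(scraper.media_urls)  # → ['/media/unit1.mp4']
```

The same handful of lines, pointed at a real course page, is roughly the "half an hour" Tony was talking about.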

So if it’s so simple to remix, reuse and republish content now, why aren’t we all doing it? Partly, I guess, it’s down to people (teachers, learning technologists, students) actually knowing what they can do and how to do it. But there are also wider issues around getting people and institutions to create and open up well-structured data. Issues of privacy and our conceptions of what it actually means to us and to students – particularly relevant given the current government debacle over lost data – and (as ever) IPR and copyright were discussed at length.

Clearly the implications of this type of technology challenge institutions, not only in terms of which IT services they support for users, but also how and to whom they open their data – if at all. Paul Walk suggested that institutions and individuals need to start with the non-contentious things first, to show what can be done without risk. Brian Kelly pointed out that there could be a tension between a mash-up based approach and a more structured semantic approach. Unfortunately this session clashed with the semantic technologies session; but maybe it’s a theme for next year’s conference, or something we can explore at a SIG meeting in the coming months.

There was a really full and frank discussion around many issues, but in general there is a clear need for strategies that allow simple exposure of structured data, let people get at small pieces of it, and provide easy tools to put it back together and republish it in accessible ways. Again the need for clear guidelines around rights issues was highlighted. Some serious thought also needs to be given to the economic implications, for our community, of creating and sustaining truly open content.

Thoughts on OpenLearn 2007

Last week I attended the OU OpenLearn conference in Milton Keynes. Presentations will be available from the conference website (augmented with audio recordings), as well as links to various blogs about the conference.

There were a couple of presentations I’d like to highlight. Firstly, Tony Hirst’s on the use of RSS feeds and OPML bundles to distribute OpenLearn material really gave an insight into how easy it should be to create delivery mechanisms on demand from open content. I also really enjoyed Ray Corrigan’s talk “Is there such a thing as sustainable infodiversity?”. Ray highlighted a number of issues around the sustainability of technology: energy consumption, disposable hardware. It’s all too easy to forget just how much of our natural resources are consumed by technology that is now so commonplace. (As an aside, this was another conference where delegates were given a vast amount of paper as well as conference proceedings on a memory stick – something we are trying to avoid at the upcoming JISC CETIS conference.) He also highlighted some recent applications of copyright law that cut to the core of any ‘open’ movement. This view was nicely complemented by Eric Duval’s presentation, in which he encouraged the educational community to be more assertive and aggressive about copyright and the use of materials for educational purposes – encouraging more of a ‘bring it on’ attitude. All well and good, but only if academics have the security of institutional backing to do that. On that note, it’s been interesting to see this weekend that the University of Oregon is refusing to hand over the names of students downloading music to the RIAA (see Slashdot for more information on that one).
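The OPML side of Tony's approach is less exotic than it sounds: an OPML bundle is just an XML list of feed URLs, so assembling a course "reading list" on demand takes only a few lines. This is a hedged sketch with an invented bundle – the element layout follows OPML's `outline`/`xmlUrl` convention, not the actual OpenLearn feeds:

```python
import xml.etree.ElementTree as ET


def feed_urls_from_opml(opml_text):
    """Return the xmlUrl attribute of every outline in an OPML document."""
    root = ET.fromstring(opml_text)
    return [node.get("xmlUrl")
            for node in root.iter("outline")
            if node.get("xmlUrl")]


# A minimal, hypothetical OPML bundle for one course
opml = """<opml version="1.1">
  <head><title>Example course bundle</title></head>
  <body>
    <outline text="Unit 1" xmlUrl="http://example.org/unit1.rss"/>
    <outline text="Unit 2" xmlUrl="http://example.org/unit2.rss"/>
  </body>
</opml>"""

print(feed_urls_from_opml(opml))
# → ['http://example.org/unit1.rss', 'http://example.org/unit2.rss']
```

Point a feed reader (or an aggregator script) at each URL in the list and you have a delivery mechanism built to order.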

Design Bash: moving towards learning design interoperability

Question: How do you get a group of projects with a common overarching goal, but with disparate outputs to share outputs? Answer: Hold a design bash. . .

Codebashes and CETIS are almost synonymous now, and they have proved to be an effective way for our community to feed back into specification bodies and increase our own knowledge of how specs actually need to be implemented to allow interoperability. So we decided that, with a few modifications, the general codebash approach would be a great way for the current JISC Design for Learning Programme projects to share their outputs and start to get to grips with the many levels of interoperability the varied outputs of the programme present.

To prepare for the day the projects were asked to submit resources fitting four broad categories (tools, guidelines/resources, inspirational designs and runnable designs). These resources were tagged into the programme’s del.icio.us site and, using the DFL SUM (see Wilbert’s blog for more information on that), we were able to aggregate resources and use RSS feeds to pull them into the programme wiki. Over 60 resources were submitted, offering a great snapshot of the huge level of activity within the programme.
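As a rough sketch of what that aggregation step involves (the feed content below is invented, and the real SUM pipeline no doubt does more), pulling tagged resources out of an RSS 2.0 feed and re-emitting them as wiki-style links needs only the standard XML parser:

```python
import xml.etree.ElementTree as ET


def items_from_rss(rss_text):
    """Extract (title, link) pairs from the items of an RSS 2.0 feed."""
    root = ET.fromstring(rss_text)
    items = []
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        link = item.findtext("link", default="")
        items.append((title, link))
    return items


# Hypothetical feed of tagged project resources
rss = """<rss version="2.0"><channel>
  <title>DFL resources</title>
  <item><title>Planner tool</title><link>http://example.org/planner</link></item>
  <item><title>Design guide</title><link>http://example.org/guide</link></item>
</channel></rss>"""

# Re-emit each resource as a markup-friendly link for a wiki page
for title, link in items_from_rss(rss):
    print(f"* [{title}]({link})")
```

Run against the del.icio.us tag feeds, the same loop would keep a wiki page of submitted resources up to date automatically.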

One of the main differences between the design bash and the more established codebashes was the fact that there wasn’t really much code to bash. So we outlined three broad areas of interoperability to help begin conversations between projects. These were:
* conceptual interoperability: the two designs or design systems won’t work together because they make very different assumptions about the learning process, or are aimed at different parts of the process;
* semantic interoperability: the two designs or design systems won’t work together because one provides or expects functionality that the other doesn’t have, e.g. a learning design that calls for a shared whiteboard presented to a design system that doesn’t have such a service;
* syntactic interoperability: the two designs or design systems won’t work together because required or expected functionality is expressed in a format that is not understood by the other.

So did it work? Well, in a word, yes. As the programme was exploring general issues around designing for learning, and not just looking at, for example, the IMS LD specification, there wasn’t as much ‘hard’ interoperability evidence as one would expect from a codebash. However, there were many levels of discussion between projects. It would be nigh on impossible to convey their depth and range in this article, but using the three broad categories above I’ll try to summarize some of the emerging issues.

In terms of conceptual interoperability, one of the main discussion points was the role of context in designing for learning. Was the influence coming bottom-up or top-down? This has a clear effect on the way projects have been working, the tools they are using and the outcomes produced. In some cases the tools didn’t really fit the pedagogical concepts of particular projects, which led to a discussion around the need to start facilitating student design tools – what would these tools look like, and how would they work?

In terms of semantic interoperability there were wide-ranging discussions around the levels of granularity of designs, from the self-contained learning object level to the issues of extending and embellishing designs created in LAMS by using IMS LD and tools such as Reload and SLeD.

At the syntactic level there were a number of discussions, not just around the more obvious interoperability issues between systems such as LAMS and Reload, but also around the use of wikis and how best to access and share resources. It was good to hear that some of the projects are now thinking of looking at the programme SUM as a possible way to access and share resources. There was also a lot of discussion around incorporating course description specifications such as XCRI into the pedagogic planner tools.

Overall a number of key issues were teased out over the day, with lots of firm commitment shown by all the projects to continue to work together and increase all levels of interoperability. There was also the acknowledgement that these discussions cannot take place in a vacuum and we need to connect with the rest of the learning design community. This is something which the CETIS support project will continue during the coming months.

More information about the Design Bash and the programme in general can be found on the programme support wiki.

Winning Learning Objects Online

The winners of the 2007 ALT-C Learning Object Competition are now available to view from the Intrallect website.

The winners are:

    *1st Prize – All in a day’s work (Colin Paton, Social Care Institute for Excellence, Michael Preston-Shoot, University of Luton, Suzy Braye, University of Sussex and CIMEX Media Ltd)
    *2nd Prize – Need, Supply and Demand (Stephen Allan and Steven Oliver, IVIMEDS)
    *3rd Prize – Enzyme Inhibition and Mendelian Genetics (Kaska Hempel, Jillian Hill, Chris Milne, Lynne Robertson, Susan Woodger, Stuart Nicol, Jon Jack, Academic Reviewers, CeLLS project, Dundee University, Napier University, Interactive University and Scottish Colleges Biotechnology Consortium)

Shortlisted Entries (in no particular order):

    *Photographic composition (David Bryson, University of Derby)
    *Human Capital Theory (Barry Richards, Dr. Joanna Cullinane, Catherine Naamani, University of Glamorgan)
    *Tupulo Array Manipulation (Tupulo project team at Dublin City University, Institute of Technology Tallaght, Institute of Technology Blanchardstown, Ireland, System Centros de Formacion, Spain, Societatea Romania pentru Educatie Permanenta, Romania)
    *Introduction to Pixel Group Processing (Peter McKenna, Manchester Metropolitan University).

Content is infrastructure – latest in Terra Incognita series

David Wiley is the current contributor to the excellent Terra Incognita series on Open Source Software and Open Educational Resources in Education. In his article, titled ‘Content is Infrastructure’, David puts forward the somewhat controversial view that for any experimentation to take place within education systems, “we must deploy a sufficient amount of content, on a sufficient number of topics, at a sufficient level of quality, available at sufficiently low cost”. Only then will we be able to “realize that content is infrastructure in order to more clearly understand that the eventual creation of a content infrastructure which is free to use will catalyze and support the types of experiments and innovations we hope to see in the educational realm”.

I feel this is a very timely article, refocusing on the role of content and content-related services within the education sector. It does seem to me that the role of content is often overlooked, particularly in our (UK) HE sector, and that there is a somewhat pervasive ‘been there, done that’ attitude. But as David points out, if we are to fully reap the potential rewards of open content initiatives then we really need to start looking at content as an infrastructure on which we can build and experiment.

There have already been a number of comments (and replies) to David’s post, all of which are available from the Terra Incognita blog.

Getting virtual – joint Eduserv/CETIS meeting, 20/09/07

Last Thursday (20 September) Eduserv and CETIS held a joint event at the Institute of Education’s London Knowledge Lab, primarily to showcase the four projects to which Eduserv has awarded its annual research grant. The common theme for the projects is the use of Second Life.
A common complaint – or should I say issue :-) – with using Second Life in many institutions is actually getting access to it from institutional networks. After some frantic efforts by Martin Oliver (and the judicious use of cables) we were able to connect to Second Life so our presenters could give some in-world demos. However, the irony of almost not being able to do so from the wireless network wasn’t lost on any of us.

Andy Powell started the day with an overview of Eduserv and the rationale behind this year’s research grant. He then gave his view on Second Life through the use of his extensive (and growing) wardrobe of Second Life t-shirts. The ability to create things is a key motivator for most users of virtual worlds such as SL, and these worlds can be seen as the ultimate in user-generated content. However, there are many issues to explore in relation to the educational use of spaces like SL, such as the commercial nature of SL and what the effects of the ban on gambling might be. What will be the effect of the increasing use of voice? It’s relatively simple to change your ‘persona’ just now, when communication is text based, but the increasing use of real voices will have a dramatic impact and could fundamentally affect some users within the space. There is a huge amount of hype around SL; however, Andy proposed that in education we are a bit more grounded and are starting to make some inroads into the hype – which is exactly what the Eduserv projects have been funded to do.

Lawrie Phipps followed with an overview of some JISC developments related to virtual worlds. Although JISC is not funding any projects working directly in Second Life, this may change in the near future as there is currently a call in the users and innovations strand of the elearning programme which closes in early October. The Emerge project (a community to help support the users and innovations strand) does have an island in Second Life and there is a bit of activity around that. Lawrie did stress that it is JISC policy to fund projects which have clear, shareable institutional and sectoral outputs and aren’t confined to one proprietary system.

We then moved to the projects themselves, starting with Hugh Denard (King’s College London) on the Theatron project. In a fascinating in-world demo, Hugh took us to one of the 20 theatres the project is going to create in-world. Building on a previous web-based project, Second Life is allowing the team to extend the vision of the original project into a 3-D space. In fact the project has been able to create versions of sets which until now had existed only as drawings, never realised within the set designer’s lifetime. Hugh did point out the potential pitfalls of developing such asset-rich structures within Second Life – they take up a lot of space. Interestingly, the team have chosen to build their models outside SL and then import and ‘tweak’ them in-world, which of course highlights the need to think about issues of interoperability and asset storage.

Ken Kahn (University of Oxford) followed, giving us an outline of the Modelling for All project he is leading. Building on the work of the Constructing2Learn project (part of the current JISC Design for Learning programme), Ken and his team are proposing to extend the functionality of their toolset so that scripts of models of behaviours constructed by learners can be exported and then realised in virtual worlds such as Second Life. The project is at a very early stage; Ken gave an overview of its first seven weeks, followed by a demo of their existing web-based modelling tool.

We started again after lunch with our hosts, Diane Carr and Martin Oliver (London Knowledge Lab), talking about their project, “Learning in Second Life: convention, context and methods”. As the title suggests, this project is concerned with exploring the motivations and conventions of virtual worlds such as Second Life. Building on previous work undertaken by the team, the project is going to undertake some comparative studies between World of Warcraft and Second Life to see what the key factors are in providing successful online experiences in such ‘worlds’, and also to see what lessons need to be taken into mainstream education when using such technologies.

The final project presentation came from Daniel Livingstone (University of Paisley). Daniel’s “Learning support in Second Life with Sloodle” project is building links between the open source VLE Moodle and SL – hence ‘Sloodle’. Once again we were taken in-world on a tour of their Sloodle site as Daniel explained his experiences of using SL with students. Daniel has found that students need a lot of support (or scaffolding) to be able to exploit environments such as SL in an educational context – even the digital natives don’t always ‘get’ SL. There are also issues in linking virtual environments with VLE systems – authentication being a key one, even for the open source Moodle.

The day ended with a discussion session chaired by Paul Hollins (CETIS). The discussion broadened out from the project-specific focus of the presentations into a more general discussion of where we are with Second Life in education. Does it (and other similar virtual worlds) really offer something new for education? Are the barriers too high, and can we prove the educational benefits? Should we make students use this type of technology? Unsurprisingly, it seemed that most people in the room were convinced of the educational benefits of virtual worlds, but as with all technology it should only be used as and when appropriate. Issues of accessibility and FE involvement were also brought up during the session.

Personally I found the day very informative and reassuring – practically all the speakers noted their initial disappointment and lack of engagement with Second Life: so I’m now going to go back in-world and try to escape from orientation island :-) It will be interesting to follow the developments of all the projects over the coming year.

Further information about the day and copies of the presentations are available from the EC wiki (http://wiki.cetis.org.uk/EduservCETIS_20Sep2007).

ALT-C 2007 – moving from ‘e’ to ‘p’?

Another year, another ALT-C . . . As usual, this year’s conference was a great opportunity to catch up with colleagues and to see and hear some new things, and some not quite so new things. There has been a lot of coverage of this year’s conference, and ALT-C themselves have produced an RSS feed aggregating the blogs of people who have commented on it – nice to see another useful example of mash-up technology.

One of the overriding messages I took away from the conference was the move from talking about ‘e’ learning initiatives towards discussions of the issues surrounding the process of learning – presence, persistence and play, to name a few.

It was great to see so many projects from JISC’s Design for Learning programme presenting. I couldn’t get to see all the presentations, but I did go to a couple of the more evaluation-led projects (DeSila and eLidaCamel). Both projects are focusing on the practitioner experience of designing for learning, and both highlight the strengths and weaknesses of the current tools and the need for more support mechanisms to allow ‘ordinary’ teachers to use them. However, both projects (and other findings from the programme) illustrate how engaging in dialogue around designing for learning can make an impact on practitioners, as it really does make them reflect on their practice.

The first keynote, from Dr Michelle Selinger (CISCO), reminded us all of the chasms that exist within education systems, between education and industry, and of course the wider social, cultural and economic chasms which exist in the world today. Technology can provide mechanisms to start bridging these gaps, but it can’t do everything; we need to consider seriously how we take the relevant incremental steps towards achieving shared goals. Our education systems are key to providing opportunities for learners to gain the global citizenship skills which industry is now looking for, and if we really want lifelong learners then we need to ensure that the relevant systems (such as eportfolios) interoperate. Michelle also highlighted the need to move from the 3 ‘r’s to the 3 ‘p’s, which she described as persistence, power tools and play. The challenge to all involved in education is how to allow this shift to occur. The final chasm Michelle broached was assessment, and the growing gap between the types of learners we ideally want (technology-literate, lifelong learners, team workers) and the assessment systems that our political leaders impose on us, which really don’t promote any of these aspirations.

This led nicely into the second keynote, from Professor Dylan Wiliam of the Institute of Education, who gave a really engaging talk around issues of ‘pedagogies of engagement and of contingency classroom aggregation technologies’. Dylan gave an insightful overview of the challenges of creating effective schools and of quality control of learning – a huge challenge when we consider how chaotic a classroom really is. He then went on to describe some innovative ways in which technology-enhanced formative assessment techniques could help teachers to engage learners and create effective learning environments – well worth a listen if you have the time.

The final keynote came from Peter Norvig, Director of Research at Google. I have to say I was slightly disappointed that Peter didn’t give us some inside information on Google developments; however, he did give an entertaining talk around ‘learning in an open world’. Taking us through a well-illustrated history of education systems, he highlighted the need for projects based on engaging real-world scenarios explored through group tasks. Copies of all the keynotes (including audio) are available from the conference website.

This year also marked the first ALT-C Learning Object Competition (sponsored by Intrallect). The prize winners were announced at the conference dinner and full details are available on the Intrallect website.

The problem with pedagogic planners . . .

. . . is the fact that we can’t decide what we want them to be, and who and what they are really for. Although this is said with my tongue firmly in my cheek, I’ve just been at a meeting hosted by Diana Laurillard (IOE) and James Dalziel (LAMS Foundation), where a group of people involved in developing a number of tools which could be collectively described as “pedagogic planners” spent the day grappling with what exactly a pedagogic planner is, and what makes such tools different from any other kind of planning or decision-making tool.

Unsurprisingly, we didn’t arrive at any firm conclusions – though I did have to leave early to catch my (delayed) flight home, so I missed the final discussion. However, the range of tools and projects demonstrated clearly illustrated that there is a need for such tools; and the drivers are coming not just from funders such as the JISC (with their Phoebe and London projects), but from teachers themselves, as demonstrated by Helen Walmsley (University of Staffordshire) with her best practice models for e-learning project.

The number of projects represented showed the growing international interest in, and need for, some kind of pre-(learning-)design process. Yet key questions remain unanswered in terms of the fundamental aims of such tools. Are they really about changing practice by encouraging and supporting teachers to expand their knowledge of pedagogic approaches? Or is this really more about some fundamental research questions for educational technologists and the progression of their knowledge around e-learning pedagogies? What should the outputs of such tools be – XML, Word documents, a LAMS template? Is there any way to begin to draw out some common elements that can then be used in learning systems? Can we do the unthinkable and actually start building schemas of pedagogic elements that are common across all learning systems? Well, of course I can’t answer that, but there certainly seems to be a genuine willingness to continue the dialogue started at the meeting and to explore these issues further – and, most importantly, a commitment to building tools that are easy to use and useful to teachers.