We need to make more mistakes – MUVEs session @ CETIS conference

Making mistakes and sharing experiences was one of the key points made by Mark Bell at the MUVEs (multi-user virtual environments) session at the JISC CETIS conference last week.

The aim of the session was to take a closer look at some of the issues emerging in this area, “including an examination of the range of systems available, technical interoperability and the current and future challenges it poses, and whether there’s more to teaching in MUVEs than hype…” The three presenters (Daniel Livingstone, Mark Bell and Sarah Robbins) shared their experiences of working in such environments, the challenges they’ve faced and the potential for the future.

Daniel Livingstone (University of Paisley) started the session with a presentation about the SLOODLE (Second Life and Moodle) project he is currently working on (funded by Eduserv). The SLOODLE project is exploring the integration of the two environments, to see whether they can offer a richer learning and teaching experience combined than they currently do individually. So the team are exploring how, why and where you would want a 3-D representation of a Moodle course, which bits of each environment should be used at what stage, and so on. For example, should assignments be posted in SL or in Moodle? Although Moodle is the primary focus, Daniel did explain that the project is now beginning to think in a more generic fashion about the applicability of their scripts to other environments, but this more interoperable approach is at a very early stage. At the moment the key challenges for the project are: authentication between the two environments and ensuring that roles are propagated properly; and the need to support flexibility – what can be added to Moodle to make SLOODLE more ‘standard’ in terms of features that can be exported into SL, and vice versa.
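To give a flavour of what that authentication challenge involves, here is a minimal sketch – purely my own illustration in Python, not the project’s actual code (SLOODLE itself links Second Life’s in-world scripting to Moodle’s PHP side), and all the names and fields below are invented – of linking an avatar to a VLE account and carrying the course role across:

```python
# A minimal sketch (my own invention, not SLOODLE code) of the two problems
# Daniel described: authenticating an avatar against the VLE's user database
# and propagating the user's course role back into the virtual world.
import hashlib
import hmac

SHARED_SECRET = b"change-me"  # secret shared between in-world objects and the VLE

# Stand-ins for the Moodle user database; all names and fields are hypothetical.
MOODLE_USERS = {"jsmith": {"courses": {"INFO101": "student"}}}
AVATAR_LINKS = {}  # avatar name -> Moodle username, once linked


def sign(message: str) -> str:
    """Sign a request so the VLE can trust that it came from an in-world object."""
    return hmac.new(SHARED_SECRET, message.encode(), hashlib.sha256).hexdigest()


def link_avatar(avatar: str, moodle_user: str, signature: str) -> bool:
    """Authenticate and link a Second Life avatar to an existing Moodle account."""
    if not hmac.compare_digest(signature, sign(avatar + moodle_user)):
        return False  # reject calls that don't carry a valid signature
    if moodle_user not in MOODLE_USERS:
        return False  # no such VLE account
    AVATAR_LINKS[avatar] = moodle_user
    return True


def role_for_avatar(avatar: str, course: str):
    """Propagate the Moodle course role into the in-world environment."""
    user = AVATAR_LINKS.get(avatar)
    return None if user is None else MOODLE_USERS[user]["courses"].get(course)


if __name__ == "__main__":
    link_avatar("Ariel Resident", "jsmith", sign("Ariel Residentjsmith"))
    print(role_for_avatar("Ariel Resident", "INFO101"))  # -> student
```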

Mark Bell (Indiana University) then gave a presentation on his research and experiences of developing rich, multi-user experiences within an educational context. The overriding message Mark gave us was that mistakes are being made in this area, but we need to make more and share our experiences so we can all learn from them and move our practice forward. Mark has been involved in a number of projects trying to create rich and complex multi-user environments, and he gave a very honest evaluation of the mistakes that had been made – like an environment not being able to support more than one avatar, which rather defeats the point of a MUVE :-)

Mark’s research is looking at testing economic theories within virtual worlds, and he used the analogy of microbiologists using petri dishes and then extrapolating their findings to describe this approach to research within virtual worlds. According to Mark, there is no such thing as the real world any more, as the boundaries between real and virtual are becoming ever more blurred. I’m not sure I can fully go along with that theory – but maybe that’s more to do with my personal virtual-world Luddite tendencies.

Mark argued that there currently isn’t a good platform available for academic researchers to develop large-scale virtual games and simulations, and that the academic development model doesn’t fit the industry way of building things (one or two part-time developers versus teams of full-time ones). What is needed, he argued, are more small-scale projects and experiments – not the creation of vast new worlds – along with more work on co-creation and collaboration with the commercial sector.

After the break Sarah Robbins (Ball State University) gave us an extremely informative description of her experiences of using Second Life to enhance her teaching, and of how harnessing students’ use of web 2.0 technology can enhance the learning process. One of the concepts she discussed was that of the ‘prosumer’ – the producer and consumer. With web 2.0 technologies we are all increasingly becoming prosumers, and educators need to acknowledge and utilise this. Sarah was keen to stress that everything she does is driven by pedagogy, not technology, and she only uses technologies such as SL to teach topics and concepts that are difficult to illustrate in a classroom setting. However, being a keen gamer and user of technology, she can see ways in which technology can enhance learning and wants to use new technologies wherever and whenever it is possible and appropriate. One example she highlighted was radius.im, which is a mash-up of a chat client, Google Maps and user profiles. A screenshot comparing that interface with a typical VLE chat client clearly illustrated how much richer the former is. Sarah’s vision is of future learning environment(s) being some kind of mash-up of things like Twitter, Facebook and Second Life, allowing everyone to benefit from the opportunities afforded by participatory and immersive networks.

There is clearly lots of interest in MUVEs in education, but we are still at the early stages of discovering what we can and can’t do with them. It would seem we are also just beginning to have the technical conversations about interoperability between systems, and there is a clear need for these issues to be discussed in as much depth as the pedagogical ones.

Copies of the presentations and podcasts are available from the conference website.

It only takes about half an hour . . .

said Tony Hirst as he took us on a mini journey of exploration through just a few of the mashups he has been creating with OU OpenLearn content and (generally) freely available tools, at the Mashup Market session at the JISC CETIS conference yesterday. From creating the almost obligatory Google map, to mini federated searches, to scraping content for videos, audio and URLs, to daily feeds of course content, Tony showed just some of the possibilities mash-up technologies can offer educators. He also highlighted how (relatively) simple these things now are and how little time (generally half an hour) they take. He did concede that some half hours took a bit longer than others :-) A number of the tools Tony talked about are listed on the session’s conference webpage.
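As a rough illustration of the sort of half-hour hack Tony described – this is a sketch of my own, with a placeholder feed URL rather than a real OpenLearn endpoint – pulling the media links out of an RSS feed of course content takes only a few lines with a feed library such as feedparser:

```python
# Sketch: scrape the media links out of an RSS feed of course content.
# The feed URL is a placeholder, not a real OpenLearn endpoint.
import feedparser

FEED_URL = "http://example.org/openlearn/course/rss"

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    print(entry.title, "->", entry.link)
    # Enclosures carry any media attached to the item (video, audio, etc.)
    for enclosure in entry.get("enclosures", []):
        print("  media:", enclosure.get("type"), enclosure.get("href"))
```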

Of course, having well-structured, open content has helped enormously in allowing someone like Tony to begin to experiment. In terms of reuse, the content scraping Tony has been doing was really exciting, as it showed a simple way to get at the stuff that people (I think) would actually want to re-use – videos, URLs and so on. Also, using an embedded iframe currently allows you to display just the video, without any surrounding advertising; however, this may well change over time as advertising becomes more embedded into the content itself.

So if it’s so simple to remix, reuse and republish content now, why aren’t we all doing it? Partly, I guess, it’s down to people (teachers, learning technologists, students) actually knowing what they can do and how to do it. But there are also wider issues around getting people and institutions to create and open up well-structured data. Issues of privacy and our conceptions of what it actually means to us and to students – particularly relevant given the current government debacle over lost data – and (as ever) IPR and copyright were discussed at length.

Clearly the implications of this type of technology challenge institutions, not only in terms of which IT services they support for users, but also in terms of how, and to whom, they open up their data – if at all. Paul Walk suggested that institutions and individuals need to start with the non-contentious things first, to show what can be done without risk. Brian Kelly pointed out that there could be a tension between a mash-up based approach and a more structured semantic approach. Unfortunately this session clashed with the semantic technologies session; but maybe it’s a theme for next year’s conference, or something we can explore at a SIG meeting in the coming months.

There was a really full and frank discussion around many issues, but in general there is a clear need for strategies that allow simple exposure of structured data, let people get at small pieces of data, and provide easy tools to put it back together and republish it in accessible ways. Again, the need for clear guidelines around rights issues was highlighted. Some serious thought also needs to be given to the economic implications for our community of creating and sustaining truly open content.

Thoughts on OpenLearn 2007

Last week I attended the OU OpenLearn conference in Milton Keynes. Presentations will be available from the conference website (augmented with audio recordings), as well as links to various blogs about the conference.

There were a couple of presentations I’d like to highlight. Firstly, Tony Hirst’s on the use of RSS feeds and OPML bundles to distribute OpenLearn material really gave an insight into how easy it should be to create delivery mechanisms on demand from open content. I also really enjoyed Ray Corrigan’s talk, “Is there such a thing as sustainable infodiversity?”. Ray highlighted a number of issues around the sustainability of technology: energy consumption, disposable hardware and so on. It’s all too easy to forget just how much of our natural resources are being consumed by all the technology which is so commonplace now. (As an aside, this was another conference where delegates were given a vast amount of paper as well as the conference proceedings on a memory stick – something we are trying to avoid at the upcoming JISC CETIS conference.) He also highlighted some of the recent applications of copyright law that cut to the core of any ‘open’ movement. This view was nicely complemented by Erik Duval’s presentation, in which he encouraged the educational community to be more assertive and aggressive about copyright and the use of materials for educational purposes – encouraging more of a ‘bring it on’ attitude. All well and good, but only if academics have the security of institutional backing to do that. On that note, it’s been interesting to see this weekend that the University of Oregon is refusing to hand over to the RIAA the names of students downloading music (see Slashdot for more information on that one).
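Coming back to Tony’s point about RSS feeds and OPML bundles: as a purely illustrative sketch of my own (the unit feed URLs below are invented placeholders, not actual OpenLearn addresses), bundling a set of course feeds into an OPML file that any aggregator can subscribe to takes only a few lines:

```python
# Sketch: bundle a set of course unit feeds into an OPML file.
# The feed URLs are invented placeholders, not actual OpenLearn addresses.
import xml.etree.ElementTree as ET

feeds = {
    "Unit 1": "http://example.org/openlearn/unit1/rss",
    "Unit 2": "http://example.org/openlearn/unit2/rss",
}

opml = ET.Element("opml", version="2.0")
head = ET.SubElement(opml, "head")
ET.SubElement(head, "title").text = "OpenLearn course bundle"
body = ET.SubElement(opml, "body")
for title, url in feeds.items():
    # One <outline> element per feed is all an aggregator needs.
    ET.SubElement(body, "outline", text=title, type="rss", xmlUrl=url)

ET.ElementTree(opml).write("bundle.opml", xml_declaration=True, encoding="utf-8")
```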

Design Bash: moving towards learning design interoperability

Question: How do you get a group of projects with a common overarching goal but disparate outputs to share those outputs? Answer: Hold a design bash. . .

Codebashes have become almost synonymous with CETIS, and they have proved an effective way for our community to feed back into specification bodies and to increase our own knowledge of how specs actually need to be implemented to allow interoperability. So we decided that, with a few modifications, the general codebash approach would be a great way for the current JISC Design for Learning programme projects to share their outputs and start to get to grips with the many levels of interoperability the programme’s varied outputs present.

To prepare for the day, the projects were asked to submit resources which fitted into four broad categories (tools, guidelines/resources, inspirational designs and runnable designs). These resources were tagged in the programme’s del.icio.us account and, using the DFL SUM (see Wilbert’s blog for more information on that), we were able to aggregate them and use RSS feeds to pull them into the programme wiki. Over 60 resources were submitted, offering a great snapshot of the huge level of activity within the programme.
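For the curious, the aggregation pattern itself is straightforward. This is a rough sketch of my own, not the actual DFL SUM code, and the account and tag names are invented – each del.icio.us tag had its own RSS feed, which could be fetched and re-emitted as wiki list markup:

```python
# Rough sketch (not the actual DFL SUM code): pull each category's tagged
# bookmarks from its del.icio.us RSS feed and emit wiki-style list items.
# The account name and tag names are invented for illustration.
import feedparser

ACCOUNT = "dfl-programme"
CATEGORIES = ["tools", "guidelines", "inspirational-designs", "runnable-designs"]

for tag in CATEGORIES:
    feed = feedparser.parse("http://del.icio.us/rss/%s/%s" % (ACCOUNT, tag))
    print("== %s (%d resources) ==" % (tag, len(feed.entries)))
    for entry in feed.entries:
        # Wiki-style external link: [url title]
        print("* [%s %s]" % (entry.link, entry.title))
```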

One of the main differences between the design bash and the more established codebashes was the fact that there wasn’t really much code to bash. So we outlined three broad areas of interoperability to help begin conversations between projects. These were:
* conceptual interoperability: the two designs or design systems won’t work together because they make very different assumptions about the learning process, or are aimed at different parts of the process;
* semantic interoperability: the two designs or design systems won’t work together because one provides or expects functionality that the other doesn’t have – e.g. a learning design that calls for a shared whiteboard presented to a design system that doesn’t offer such a service (a toy illustration of this kind of mismatch is sketched after this list);
* syntactic interoperability: the two designs or design systems won’t work together because required or expected functionality is expressed in a format that is not understood by the other.
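To make the semantic level a little more concrete, here is a toy illustration (entirely my own, with invented service names): a design declares the services it needs, a system declares the services it provides, and a mismatch at this level is simply the gap between the two.

```python
# Toy illustration of a 'semantic' interoperability check: does the runtime
# system provide every service the learning design expects? (invented names)
design_services = {"forum", "shared_whiteboard", "quiz"}  # what the design calls for
system_services = {"forum", "quiz", "chat"}               # what the system offers

missing = design_services - system_services
if missing:
    print("Semantic mismatch - missing services:", ", ".join(sorted(missing)))
else:
    print("All services the design expects are available.")
```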

So did it work? Well, in a word, yes. As the programme was exploring general issues around designing for learning, and not just looking at, for example, the IMS LD specification, there wasn’t as much ‘hard’ interoperability evidence as one would expect from a codebash. However, there were many levels of discussion between projects. It would be nigh on impossible to convey the depth and range of the discussions in this article, but using the three broad categories above I’ll try to summarise some of the emerging issues.

In terms of conceptual interoperability, one of the main discussion points was the role of context in designing for learning. Was the influence coming from the bottom up or the top down? This has a clear effect on the way projects have been working, the tools they are using and the outcomes produced. In some cases the tools didn’t really fit with the pedagogical concepts of particular projects, which led to a discussion around the need to start facilitating student design tools – what would such tools look like, and how would they work?

In terms of semantic interoperability, there were wide-ranging discussions around the levels of granularity of designs – from the self-contained learning object level, to the issues of extending and embellishing designs created in LAMS using IMS LD and tools such as Reload and SLeD.

At the syntactic level there were a number of discussions, not just around the more obvious interoperability issues between systems such as LAMS and Reload, but also around the use of wikis and how best to access and share resources. It was good to hear that some of the projects are now thinking of looking at the programme SUM as a possible way to access and share resources. There was also a lot of discussion around the incorporation of course description specifications such as XCRI into the pedagogic planner tools.

Overall a number of key issues were teased out over the day, with lots of firm commitment shown by all the projects to continue to work together and increase all levels of interoperability. There was also the acknowledgement that these discussions cannot take place in a vacuum and we need to connect with the rest of the learning design community. This is something which the CETIS support project will continue during the coming months.

More information about the Design Bash and the programme in general can be found on the programme support wiki.

Winning Learning Objects Online

The winning entries in the 2007 ALT-C Learning Object Competition are now available to view on the Intrallect website.

The winners are:

    *1st prize – All in a day’s work (Colin Paton, Social Care Institute for Excellence, Michael Preston-Shoot, University of Luton, Suzy Braye, University of Sussex and CIMEX Media Ltd)
    *2nd prize – Need, Supply and Demand (Stephen Allan and Steven Oliver, IVIMEDS)
    *3rd prize – Enzyme Inhibition and Mendelian Genetics (Kaska Hempel, Jillian Hill, Chris Milne, Lynne Robertson, Susan Woodger, Stuart Nicol, Jon Jack, Academic Reviewers, CeLLS project, Dundee University, Napier University, Interactive University and Scottish Colleges Biotechnology Consortium)

Shortlisted entries (in no particular order):

    *Photographic composition (David Bryson, University of Derby)
    *Human Capital Theory (Barry Richards, Dr. Joanna Cullinane, Catherine Naamani, University of Glamorgan)
    *Tupulo Array Manipulation (Tupulo project team at Dublin City University, Institute of Technology Tallaght, Institute of Technology Blanchardstown, Ireland, System Centros de Formacion, Spain, Societatea Romania pentru Educatie Permanenta, Romania)
    *Introduction to Pixel Group Processing (Peter McKenna, Manchester Metropolitan University).

Content is infrastructure – latest in Terra Incognita series

David Wiley is the current contributor to the excellent Terra Incognita series on Open Source Software and Open Educational Resources in education. In his article, titled ‘Content is Infrastructure’, David puts forward the somewhat controversial view that for any experimentation to take place within education systems “we must deploy a sufficient amount of content, on a sufficient number of topics, at a sufficient level of quality, available at sufficiently low cost”. Only then will we be able to “realize that content is infrastructure in order to more clearly understand that the eventual creation of a content infrastructure which is free to use will catalyze and support the types of experiments and innovations we hope to see in the educational realm”.

I feel this is a very timely article, refocusing attention on the role of content and content-related services within the education sector. It does seem to me that the role of content is often overlooked, particularly in our (UK) HE sector, and that there is a somewhat pervasive ‘been there, done that’ attitude. But as David points out, if we are to fully reap the potential rewards of open content initiatives then we really need to start looking at content as an infrastructure on which we can build and experiment.

There have already been a number of comments (and replies) to David’s post, all of which are available from the Terra Incognita blog.

Getting virtual – joint Eduserv/CETIS meeting, 20/09/07

Last Thursday (20 September) Eduserv and CETIS held a joint event at the Institute of Education’s London Knowledge Lab, primarily to showcase the four projects to which Eduserv has awarded its annual research grants. The common theme for the projects is the use of Second Life.
A common complaint – or should I say issue :-) – with using Second Life in many institutions is actually getting access to it from institutional networks. After some frantic efforts by Martin Oliver (and the judicious use of cables) we were able to connect to Second Life so our presenters could give some in-world demos. However, the irony of almost not being able to do so from the wireless network wasn’t lost on any of us.

Andy Powell started the day with an overview of Eduserv and the rationale behind this year’s research grant. He then gave his view on Second Life through the medium of his extensive (and growing) wardrobe of Second Life t-shirts. The ability to create things is a key motivator for most users of virtual worlds such as SL, and these worlds can be seen as the ultimate in user-generated content. However, there are many issues that need to be explored in relation to the educational use of spaces like SL, such as the commercial nature of SL and what the effects of the ban on gambling might be. What will be the effect of the increasing use of voice? It’s relatively simple to change your ‘persona’ just now, when communication is text-based, but the increasing use of real voices will have a dramatic impact and could fundamentally affect some users within the space. There is a huge amount of hype around SL; however, Andy proposed that in education we are a bit more grounded and are starting to make some inroads into the hype – which is exactly what the Eduserv projects have been funded to do.

Lawrie Phipps followed with an overview of some JISC developments related to virtual worlds. Although JISC is not funding any projects working directly in Second Life, this may change in the near future, as there is currently a call in the users and innovations strand of the e-learning programme which closes in early October. The Emerge project (a community to help support the users and innovations strand) does have an island in Second Life, and there is a bit of activity around that. Lawrie did stress that it is JISC policy to fund projects which have clear, shareable institutional and sectoral outputs and aren’t confined to one proprietary system.

We then moved on to the projects themselves, starting with Hugh Denard (King’s College London) on the Theatron project. In a fascinating in-world demo, Hugh took us to one of the 20 theatres the project is going to create in-world. Building on a previous web-based project, Second Life is allowing the team to extend the vision of the original project into a 3-D space. In fact, the project has been able to create versions of sets which until now had existed only as drawings, never realised within the set designer’s lifetime. Hugh did point out a potential pitfall of developing such asset-rich structures within Second Life – they take up a lot of space. Interestingly, the team have chosen to build their models outside SL and then import and ‘tweak’ them in-world. This, of course, highlights the need to think about issues of interoperability and asset storage.

Ken Kahn (University of Oxford) followed, giving us an outline of the Modelling for All project he is leading. Building on the work of the Constructing2Learn project (part of the current JISC Design for Learning programme), Ken and his team are proposing to extend the functionality of their toolset so that scripts of models of behaviours constructed by learners can be exported and then realised in virtual worlds such as Second Life. The project is at a very early stage, and Ken gave an overview of the team’s first seven weeks, followed by a demo of their existing web-based modelling tool.

We started again after lunch with our hosts, Diane Carr and Martin Oliver (London Knowledge Lab), talking about their project, “Learning in Second Life: convention, context and methods”. As the title suggests, this project is concerned with exploring the motivations and conventions of virtual worlds such as Second Life. Building on previous work undertaken by the team, the project is going to undertake some comparative studies of World of Warcraft and Second Life, to see what the key factors are in providing successful online experiences in such ‘worlds’, and also what lessons need to be taken into mainstream education when using such technologies.

The final project presentation came from Daniel Livingstone (University of Paisley). Daniel’s “Learning support in Second Life with Sloodle” project is building links between the open source VLE Moodle and SL – hence ‘Sloodle’. Once again we were taken in-world, on a tour of the Sloodle site, as Daniel explained his experiences of using SL with students. Daniel has found that students need a lot of support (or scaffolding) to be able to exploit environments such as SL in an educational context – even the digital natives don’t always ‘get’ SL. There are also issues in linking virtual environments with VLE systems – authentication being a key issue, even for the open source Moodle.

The day ended with a discussion session chaired by Paul Hollins (CETIS). The discussion broadened out from the project-specific focus of the presentations into a more general discussion of where we are with Second Life in education. Does it (and other similar virtual worlds) really offer something new for education? Are the barriers too high, and can we prove the educational benefits? Should we make students use this type of technology? Unsurprisingly, it seemed that most people in the room were convinced of the educational benefits of virtual worlds, but, as with all technology, it should only be used as and when appropriate. Issues of accessibility and FE involvement were also brought up during the session.

Personally, I found the day very informative and reassuring – practically all the speakers noted their initial disappointment and lack of engagement with Second Life: so I’m now going to go back in-world and try to escape from Orientation Island :-) It will be interesting to follow the developments of all the projects over the coming year.

Further information about the day and copies of the presentations are available from the EC wiki (http://wiki.cetis.org.uk/EduservCETIS_20Sep2007).

The problem with pedagogic planners . . .

. . . is the fact that we can’t decide what we want them to be, and who and what they are really for. Although this is said with my tongue firmly in my cheek, I’ve just been at a meeting hosted by Diana Laurillard (IOE) and James Dalziel (LAMS Foundation), where a group of people involved in developing a number of tools which could collectively be described as “pedagogic planners” spent the day grappling with what exactly a pedagogic planner is and what makes such tools different from any other kind of planning or decision-making tool.

Unsurprisingly, we didn’t arrive at any firm conclusions – though I had to leave early to catch my (delayed) flight home, so I missed the final discussion. However, the range of tools and projects demonstrated clearly illustrated that there is a need for such tools; and the drivers are coming not just from funders such as the JISC (with their Phoebe and London projects), but from teachers themselves, as demonstrated by Helen Walmsley (University of Staffordshire) with her best practice models for e-learning project.

The number of projects represented showed the growing international interest in, and need for, some kind of pre-(learning)-design process. Yet key questions remain unanswered about the fundamental aims of such tools. Are they really about changing practice by encouraging and supporting teachers to expand their knowledge of pedagogic approaches? Or is this really more about some fundamental research questions for educational technologists and their progression of knowledge around e-learning pedagogies? What should the outputs of such tools be – XML, Word documents, a LAMS template? Is there any way to begin to draw out some common elements that can then be used in learning systems? Can we do the unthinkable and actually start building schemas of pedagogic elements that are common across all learning systems? Of course I can’t answer that, but there certainly seems to be a genuine willingness to continue the dialogue started at the meeting and to explore these issues further; and, most importantly, a commitment to building tools that are easy to use and useful to teachers.
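Just to make that last question a little more tangible – and this is pure speculation on my part, every name and field below is invented rather than drawn from any of the projects – a shared ‘pedagogic element’ might look as simple as this:

```python
# Pure speculation: what a system-neutral 'pedagogic element' might look like
# if such a common schema existed. Every field and name here is invented.
from dataclasses import dataclass, field
from typing import List


@dataclass
class PedagogicElement:
    """One step in a planned learning activity, independent of any one system."""
    title: str
    approach: str          # e.g. "collaborative", "didactic"
    learner_activity: str  # what the learners actually do
    tools: List[str] = field(default_factory=list)  # services the step requires

    def to_xml(self) -> str:
        """Export to a simple XML form that a planner, LAMS or an IMS LD tool
        could in principle map onto its own format."""
        tools = "".join("<tool>%s</tool>" % t for t in self.tools)
        return ("<element approach='%s'><title>%s</title><activity>%s</activity>%s</element>"
                % (self.approach, self.title, self.learner_activity, tools))


step = PedagogicElement("Debate", "collaborative",
                        "argue assigned positions in small groups", tools=["forum"])
print(step.to_xml())
```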