Widget working group start-up meeting

The first meeting of our new widgets working group took place last Wednesday at CETIS HQ, University of Bolton. The group was formed in response to the widget session at the CETIS conference, and part of the day was spent trying to define the group's aims and objectives.

Some broad overarching objectives were identified after the conference: primarily, to investigate an infrastructure whereby a collaborative widget server is available to anyone in the H/FE sector – including developing widget server plug-ins for all major web platforms and making widgets easily discoverable and embeddable – followed by determining models of widget use in teaching practice and supporting the development, sharing and embedding of teaching and learning specific widgets. Whilst these remain pretty much intact, some short-term aims were decided upon, and these were prioritized by the participants.

In terms of infrastructure, two main approaches to widget development and deployment were discussed. These came from the TenCompetence team at the University of Bolton and the CARET team at the University of Cambridge. The Bolton team have developed their own widget server (called Wookie) and have been developing a number of collaborative, shared-state widgets (such as chat, voting, etc.) using (and extending) the W3C widget specification. These can be deployed in a number of systems including Moodle and Elgg (see previous blog and David's post on creating widgets for Wookie).
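For context, a W3C widget is essentially a packaged web application described by a config.xml manifest. A minimal sketch might look like the following (the id, names and file names here are invented for illustration, and the exact elements depend on which draft of the specification is being followed):

```xml
<!-- Hypothetical config.xml for a simple shared-state chat widget -->
<widget xmlns="http://www.w3.org/ns/widgets"
        id="http://example.org/widgets/chat"
        width="320" height="240">
  <name>Chat</name>
  <description>A collaborative chat widget.</description>
  <content src="index.html"/>
  <icon src="icon.png"/>
</widget>
```

The manifest plus the referenced HTML/JavaScript are zipped into a single package, which is what a server like Wookie deploys.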

The team from Cambridge, on the other hand, have taken the Shindig/OpenSocial approach and are starting to embed “gadgets” into their new (under development) Sakai environment. So a short-term goal is going to be to try and get an instance of the Wookie server running alongside Shindig, and vice versa, and report back on any issues. Over lunch a couple of “lite” interoperability tests showed that widgets built by each team could run in both systems. It was also agreed that some kind of overview briefing paper on widgets (the whys, whats, wheres, etc.) would be a valuable community output.

Once these initial tasks have been completed we can then look at some of the other issues that arose during the day, such as a common widget engine interface, deployment and security of widgets, as well as accessibility, how to write widgets and, of course, actually using widgets in teaching and learning.

The next meeting is provisionally scheduled for 23rd March in Edinburgh (to tie in with the JISC conference the next day). More information will be available soon and if you are interested please let us know and come along to the meeting.

Are there compelling use cases for using semantic technologies in teaching and learning?

On Monday the SemTech project had a face-to-face meeting in London to update on progress with the project and their survey of semantic technologies being used in education.

The day started with Thanassis Tiropanis giving an overview of the project to date and in particular the survey site (see previous blog post), which has collated 40 semantic applications that can be, or are being, used in teaching and learning. The team are now grappling with trying to make sense of the data collected. Some early findings, perhaps not surprisingly, show that there is most activity around information collection, publishing and data gathering. However, there are some examples of more collaborative activities being supported through semantic technologies. There is still time to contribute to the survey if you want to add any of your experiences.

I found the afternoon group discussions the most interesting part of the day. I chaired a group looking at the institutional perspective around using/adopting semantic technologies in respect of the following four questions:

1 What are the most important challenges in HE today?
2 How might semantic technology be part of the solution?
3 What are the current barriers to semantic technology adoption?
4 What areas of semantic technology require additional investment of effort?

As you would expect we had a fairly wide-ranging discussion, but ultimately agreed that the key to getting some institutional traction would be to have examples/use cases of how semantic technologies could help with key institutional concerns such as student retention. We came to the consensus that if data were more rigorously defined, categorized and normalized, i.e. in RDF/triple stores, then it would be easier to query disparate data sources with added intelligence and so provide more tailored feedback/early warning signs to teachers and administrators. However, at the moment most institutions suffer from having numerous data empires which don't see the need to communicate with each other and which don't always have the most rigorous approach to data quality. Understanding data workflow within the institution is central to this. It will be interesting to see if any of the current JISC Curriculum Design projects decide to adopt a more semantic approach to workflow issues.

So the answers we came up with were:

1 External influences, e.g. HEFC, student retention, recruitment, course provisioning, research profile
2 Let us think of the questions we haven’t thought of yet.
3 Data empires, lack of knowledge, use cases, good examples in practice
4 Demonstrators to show value of adding semantic layer to existing data
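As a toy illustration of the “rigorously defined data” point above, the sketch below models two disparate institutional data sources as RDF-style triples and runs a simple pattern-match query across the merged graph to flag possible retention risks. It is a deliberately simplified stand-in for a real triple store and SPARQL engine, and all the predicate names and figures are invented:

```python
# Toy triple store: each fact is a (subject, predicate, object) tuple,
# mimicking how RDF lets separate data silos share one queryable model.

# Triples from two hypothetical institutional systems
attendance = [
    ("student:101", "attendedSessions", 3),
    ("student:102", "attendedSessions", 28),
]
vle_logs = [
    ("student:101", "vleLogins", 1),
    ("student:102", "vleLogins", 40),
]
triples = attendance + vle_logs  # merged into one "graph"

def query(triples, subject=None, predicate=None):
    """Return triples matching a simple (subject, predicate, ?) pattern."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)]

def at_risk(triples, min_sessions=5, min_logins=5):
    """Flag students who look disengaged in *both* data sources."""
    low_attend = {s for s, _, n in query(triples, predicate="attendedSessions")
                  if n < min_sessions}
    low_logins = {s for s, _, n in query(triples, predicate="vleLogins")
                  if n < min_logins}
    return sorted(low_attend & low_logins)

print(at_risk(triples))  # → ['student:101']
```

In a real deployment the triples would live in an RDF store and the pattern matching would be a SPARQL query, but the point is the same: once the silos share one data model, cross-source questions become one query rather than one integration project.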

The view from behind the bike sheds, joint Eduserv/CETIS meeting, 16 January 2009

Last Friday, we held a joint meeting with our colleagues from the Eduserv Foundation entitled “maximising the effectiveness of virtual worlds in teaching and learning”. The meeting was a follow-on from the joint meeting we held in 2007, and the presenters gave a range of perspectives on the challenges and affordances of using virtual worlds in teaching and learning.

As expected, the challenge of getting institutional systems “geared up” to allow access to virtual worlds was a theme for discussion – particularly during the discussion session around Ren Reynolds' presentation. As with the last meeting, we as organisers had to struggle to get access to Second Life so we could stream audio in-world. Limited wired access points, weird log-in configurations, sound card issues, emergency USB dongles, etc. all came into play. Although the room we used has wifi access, users can't log into Second Life over the wireless network. In fact some of our audience had to struggle even to get any access to the internet. I suspect that, from experience, none of our presenters actually needed to go “in world”. But I do wonder if we will ever have ubiquitous access to the internet on campus – wired or not. Conversely, we almost had a one-woman fail whale situation when Lorna Campbell was kicked out of Twitter for sending too many messages in one hour :-)

During most of the presentations, notions of identity and presence arose. Of course, one of the unique features of virtual worlds is that they allow users to experience different identities. Peter Twining raised some very interesting points about this in the work he has been doing with school children in Second Life on the Schome Park project, particularly relating to some role-play exercises the children participated in. One group of children wanted to “get married” (deliberate quotation marks) and Peter was asked to “give the bride away”. A long discussion ensued with the children about the activity and the consequences if certain quarters of the media got hold of the story. The children's reaction to this: “but you do realise that it's not real.” So they seemed to have a very clear idea of their real and virtual identities. However, I think this raises a number of complex issues – most of which I'm not really qualified to comment on. There are many people who are immersed in virtual worlds and are increasingly blurring the boundaries between the virtual and the non-virtual worlds. I'm sure they would argue that they have experiences in virtual worlds that are in every sense real. David White (Oxford University) also discussed this in terms of the acceptance/normalization of different types of communication, e.g. telephone/MSN/Twitter, etc.

One of the best quotes of the day came from Peter when he told us about an inspection the project had. The Schome Park Second Life island was described as a dangerous learning environment. The children have a very high level of autonomy in the environment, which led one inspector to comment, “it's like being behind the bike sheds all the time.” But maybe that's where we as educators need to be sometimes.

All in all it was a very stimulating day – thanks to everyone who took part and who was patient whilst we fought with the technical gremlins. Copies of the presentations are available from the wiki.

Ada Lovelace Day – sign up today.

“I, Sheila MacNeill, will publish a blog post on Tuesday 24th March about a woman in technology whom I admire but only if 1,000 other people will do the same.”

Ada Lovelace Day is an international day of blogging to draw attention to women excelling in technology. Women’s contributions often go unacknowledged, their innovations seldom mentioned, their faces rarely recognised. We want you to tell the world about these unsung heroines. Entrepreneurs, innovators, sysadmins, programmers, designers, games developers, hardware experts, tech journalists, tech consultants.

Go on – you know you want to – let’s hear it for the girls :-) More information is available here.

Semantic Technologies in education survey site now available

The next stage of the SemTech project (as reported earlier in Lorna’s blog) is now underway. The team are now conducting an online survey of relevant semantic tools and services. The survey website provides a catalogue of relevant semantic tools and services and information on how they relate to education.

If you have an interest in the use of semantic technologies in teaching and learning, you can register on the site and add any relevant technologies you are using, or add tags to the ones already documented. As the project is due for completion by the end of February, the project team are looking for feedback by 2 February.

Mapping outputs of Design for Learning programme to IMS LD – a level of interoperability for learning designs

As an additional output to the JISC Design for Learning programme, we were asked to produce a mapping of programme outputs to IMS Learning Design. Six different outputs, including a LAMS sequence, outputs from the pedagogy planner tools, and poster designs, were transformed into IMS LD using the ReCourse and RELOAD tools.

In most cases it was possible to identify common IMS LD elements, and a working UoL (unit of learning) was produced for each example. However, the completeness of these varied considerably depending on the level of description provided about the actual details of the activities to be undertaken.

The findings illustrate the need for more explicit descriptions of activities to allow designs to be “run” either online or offline. An almost natural point also appears to emerge where the planning process (e.g. what is my design called, who is it for, what activities will I use?) ends and the design process (e.g. how will a learner actually participate in activities, what is the sequence of activities?) begins.

The report is available to download from the Design for Learning support wiki. A discussion topic on the report has also been started in the pedagogical planners group on Facebook. Please feel free to join the group and contribute to the discussion.

Reports and being part of a wider conversation

Reports: love 'em or hate 'em, there doesn't really seem to be any escape from 'em. They are generally very long, text-based and, in my case, printed out and left hanging around in my handbag for far too long without being read :-)

One of the things that always strikes me is why we so often report on new technologies in the time-honoured, text-based format and don't use the technologies we are reporting on. A case in point came this morning when I printed out a 150-page review of “current and developing international practice in the use of social networking (Web 2.0) in higher education”. At this point I should add a disclaimer: this post is not passing any judgement on the content or the authors of this report. What really struck me this morning was the conversation that took place around a comment I made via Twitter, and the ideas of back-channels, participation and feedback.

So if you will, follow me back in time to about 9am this morning when I posted this:

“is it just me or does a 150 page report on web2.0 technologies miss the point on some level . . .” A couple of tweets later Andy Powell came in with “yes, totally – why don’t people explicitly fund a series of blog posts and/or tweets instead?” Good point – why not indeed? Which was answered to an extent by this: “ostephens @sheilmcn Depends what the point of the report is. It could conclude that Web 2.0 does not significantly add to the value of communication”. A few more tweets later it was this response that really got me thinking: “psychemedia @sheilmcn one fn of a report is so that many people can be part of ‘that conversation'; but here it’s easy to be part of a wide conversation”. Yes, that is true, but is it the ‘right’ conversation? I seemed to have tapped into something, but the responses weren’t really about the report. I wonder, if someone actually related to the report had posted a comment specifically looking for feedback, how much of a conversation there would have been.

To some extent I know that in the small twittersphere I inhabit, there would probably have been a lot of comment and conversation. However, with the proliferation of Twitter extensions I'm starting to become a bit worried that the serendipitous nature of the tool may be about to be destroyed by people trying to organise it and use it in a more structured way. I wonder how often I would take the time to take part in organised Twitter conversations?

I guess this kind of brings me back to an analogy related to Web 2.0 and education. I've certainly heard many times that the “kids” don't want to use Facebook in school because it's part of their “real” life and not part of their education; and when we as educators try to integrate social software we fail, because the kids have moved on to the next cool thing. So if we try to use Twitter in a structured way, will we all have moved on to the next big thing? I guess on that note I should actually now go and read that report and get twittering.

Widgets, web 2.0 and learning design @ CETIS08

As I've already publicly declared my love for widgets, I was delighted to co-chair the “planning and designing learning in a world of widgets and web 2.0” session with Wilbert Kraan at this year's CETIS conference.

Wilbert started the session with an overview of his experiences of building an assessment widget using Google Docs and Sprout Builder and integrating it into Moodle. The gory details are available from his blog.

One of the many appealing traits of widgets is that they are relatively simple to create and integrate into websites (particularly using WYSIWYG builders such as Sprout Builder). However, integrating widgets into closed systems such as VLEs can be problematic and generally requires a level of admin rights which most teachers (and learners) are unlikely to have.

This is one area that the TenCompetence project team at the University of Bolton has been investigating. Scott Wilson demoed the Wookie widget server they have built. By adding the Wookie plug-in to Moodle it is possible to seamlessly integrate widgets into the learning environment. Scott explained how they have been using the W3C widget guidelines to build widgets, and also to convert widgets/gadgets from the Apple apps collection to put into Wookie and therefore into Moodle. This potentially gives teachers (and students) a whole range of additional tools/activities on top of the standard features that most VLEs come with. A USP of the Bolton project is that they are the first to build collaborative widgets (chat, forums, etc.) using the W3C guidelines.

Scott made a very salient point when he reminded us that VLEs are very good at allowing us to group people. Most Web 2.0 tools don't provide the same ability to group or distinguish between groups of people – there are few distinctions between “friends”, if you like. So by creating and/or converting widgets and integrating them with VLEs, we can potentially extend functionality at relatively low cost (no system upgrade needed – just add the plug-in) whilst retaining the key elements that VLEs are actually good at, e.g. grouping, tracking, etc. This is where the learning design element comes into play.

Dai Griffiths, University of Bolton, outlined how the IMS LD specification can provide a way to orchestrate widgets with a set of rules and roles. He showed examples of UoLs (units of learning) which use the chat widget in the Wookie server. This is one of the most exciting developments for IMS LD (imho) as it is starting to show how the authoring process can become much easier for the average teacher. You just choose which widget you want to use, decide which students can use it, and when, add other content – widgets, whatever – and you have your runnable learning design. I know it's not quite that simple, but I do think this goes a long way to addressing some of the “lack of easy-to-use tools” barriers that the learning design community has been facing.

There was a lot of discussion about students creating widgets. Tony Toole told us of one of his projects where they are trying to get students to build widgets. At the moment the students don't seem that keen, as they (probably quite rightly) see it as secondary to their “real” learning tasks. However, they do like and make use of more informational widgets which use RSS feeds to push out content such as reading lists, course announcements, etc. Ross McLarnon from Youthwire showed us the desktop widget application they have just started to roll out to over 130 universities. Interestingly, the university news widget is the most popular so far.

As ever, it is practically impossible to summarize the discussions from the session, but one other key issue that arose was authorization. There was a discussion around the emerging OAuth specification, which is based on a ticketing system so that users don't have to give username/password details to third parties. It was also agreed that some kind of national educational widget server based on the Wookie system would be useful. As well as storing relevant, safe widgets, it could hold information/examples of widgets being used in practice, relevant system plug-ins, etc.
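To make the OAuth point a little more concrete, the sketch below shows the HMAC-SHA1 request-signing step from OAuth 1.0: the consumer proves it holds the shared secrets without ever sending a password, and the token acts as the "ticket" standing in for the user's credentials. All the keys, secrets and URLs here are invented for illustration:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def pct(s):
    """Percent-encode as OAuth 1.0 requires (RFC 3986 unreserved set only)."""
    return quote(str(s), safe="")

def oauth_signature(method, url, params, consumer_secret, token_secret=""):
    """Build an OAuth 1.0 HMAC-SHA1 signature for a request."""
    # 1. Normalise parameters: encode, sort, join as key=value pairs
    pairs = sorted((pct(k), pct(v)) for k, v in params.items())
    param_str = "&".join(f"{k}={v}" for k, v in pairs)
    # 2. The signature base string ties method, URL and params together
    base = "&".join([method.upper(), pct(url), pct(param_str)])
    # 3. The signing key combines the two secrets; no password is ever sent
    key = f"{pct(consumer_secret)}&{pct(token_secret)}"
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Hypothetical signed request to a widget server's protected API
sig = oauth_signature(
    "GET",
    "http://widgets.example.org/api/widgets",
    {"oauth_consumer_key": "demo-key", "oauth_token": "user-ticket",
     "oauth_nonce": "abc123", "oauth_timestamp": "1232000000"},
    consumer_secret="demo-secret",
)
print(sig)
```

The signature travels with the request in place of any password, which is exactly the property that makes OAuth attractive for letting third-party widgets act on a user's behalf.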

As a follow-on from this session we would like to form a working group to explore all these issues in more detail and provide feedback to JISC on potential funding opportunities. If you would like to be involved, please let me know either by leaving a comment or by sending me an email (s.macneill@strath.ac.uk).

More details of the session can be found on the wiki.

Conference stories

There was a huge amount of Twitter activity at this year's CETIS conference. In fact, at one point yesterday, one of the session hashtags was second only to “happy thanksgiving”.

Prior to the conference there had been a bit of negativity in the twittersphere about the number of tags and hashtags we had set up for the conference. Whilst I can see why just having one tag makes life simpler, having tags for each of the sessions allows us to aggregate and separate out the comments into the relevant session areas (like here, where we have a feed displaying tweets related to that session). I think the CETIS conference is just about the right size, and has just about the right level of geek-type audience, to appreciate the decision to have individual session tags. For larger conferences multiple tags might not work.
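The per-session aggregation described above essentially boils down to bucketing posts by hashtag. A toy sketch (the tag names and messages are made up):

```python
import re
from collections import defaultdict

def group_by_hashtag(tweets):
    """Bucket each tweet under every #hashtag it contains."""
    buckets = defaultdict(list)
    for tweet in tweets:
        for tag in re.findall(r"#(\w+)", tweet):
            buckets[tag.lower()].append(tweet)
    return dict(buckets)

tweets = [
    "Great demo of the Wookie server #cetis08widgets",
    "Interesting OER discussion starting #cetis08oer",
    "Widgets + IMS LD = easier authoring? #cetis08widgets",
]
feeds = group_by_hashtag(tweets)
print(feeds["cetis08widgets"])  # the two widget-session tweets
```

With one hashtag per session, each session feed falls out of a single grouping pass; with only a conference-wide tag, this separation isn't possible after the fact.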

I've tried out various lifestreaming tools before (like Swurl and Dipity), but last week I came across the Storytlr site, which adds a nice twist to the lifestream idea by allowing users to create date-specific “stories” from their various digital presence sites (such as blogs, Twitter, Flickr, etc.). I think this time-specific story idea has potential – particularly for conferences. I've created two “stories” from my tweets over the past two days just to try it out.

*Day One (sorry to Dai for cutting his head off in the first picture)
*Day Two

This morning I have been trying to add other RSS feeds, such as the feeds for the individual sessions and some photo feeds, but it didn't seem to like any of the RSS feeds I was giving it. It's probably something Tony Hirst could sort out in the blink of an eye :-) But if anyone else has some stories from the conference, or anywhere else, I'd love to see them.
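For anyone wanting to roll their own conference story, merging session feeds is mostly a matter of parsing the RSS items and sorting them by date. A minimal sketch using only the standard library (the feed content here is a made-up stand-in for the real session feeds):

```python
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

# A made-up RSS 2.0 document standing in for one session feed
SAMPLE_RSS = """<rss version="2.0"><channel><title>Session feed</title>
<item><title>Widget session kicks off</title>
<pubDate>Tue, 25 Nov 2008 09:30:00 +0000</pubDate></item>
<item><title>Wrap-up discussion</title>
<pubDate>Tue, 25 Nov 2008 16:00:00 +0000</pubDate></item>
</channel></rss>"""

def items(rss_text):
    """Extract (datetime, title) pairs from one RSS 2.0 document."""
    root = ET.fromstring(rss_text)
    return [(parsedate_to_datetime(i.findtext("pubDate")), i.findtext("title"))
            for i in root.iter("item")]

def merged_story(*feeds):
    """Combine several feeds into one chronological 'story'."""
    return sorted(entry for feed in feeds for entry in items(feed))

story = merged_story(SAMPLE_RSS)
print([title for _, title in story])  # titles in time order
```

Real-world feeds are messier (missing dates, different date formats, namespaced Atom elements), which may well be why the Storytlr importer was choking on them.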

It’s not what you share, but how you share

Scott Leslie has written a really interesting post about some of the issues he has with institutional collaboration projects. I'm sure anyone who has tried to share any kind of “stuff” will find resonance in what he says.

The post is particularly timely for me and others in CETIS, as we are working closely with JISC colleagues who are planning the pilot OER call for next year. This is a major investment by JISC and the HEA, with £5 million worth of funding being made available.

We have been having extensive discussions around the types of architectures/sharing solutions that should be in place. Hopefully we can avoid the scenarios that Scott describes and allow as flexible an approach as possible, ensuring people can use existing tools and networks and that we don't re-invent another unnecessary technical layer/network. However, some decisions need to be made to ensure that any resources funded through the programme can be found and tracked.

We’ll be discussing these issues at the CETIS conference in a couple of weeks at the OER scoping session. If you would like to attend the session and haven’t had an invitation yet, please get in touch. Or just leave any thoughts about what you think here.