Design Bash 11 pre-event ponderings and questions

In preparation for this year’s Design Bash, I’ve been thinking about some of the “big” questions around learning design and what we actually want to achieve on the day.

When we first ran a design bash, four years ago as part of the JISC Design for Learning Programme, we outlined three areas of activity/interoperability that we wanted to explore:
* System interoperability – looking at how the import and export of designs between systems can be facilitated;
* Sharing of designs – ascertaining the most effective way to export and share designs between systems;
* Describing designs – discovering the most useful representations of designs or patterns and whether they can be translated into runnable versions.

And to be fair, I think these are still valid and summarise the main areas where we still need more exploration and sharing – particularly the translation into runnable versions.

Over the past three years, there has been lots of progress in terms of the wider context of learning design in course and curriculum design contexts (i.e. through the JISC Curriculum Design and Delivery programmes) and also in terms of how best to support practitioners in engaging with, developing and reflecting on their practice. The evolution of the pedagogic planning tools from the Design for Learning programme into the current LDSE project is a key exemplar. We’ve also seen progress each year as a direct result of discussions at previous Design Bashes, e.g. the embedding of LAMS sequences into Cloudworks (see my summary post from last year’s event for more details).

The work of the Curriculum Design projects in looking at the bigger picture – the processes involved in formal curriculum design and approval – is making progress in bridging the gaps between formal course descriptions and their representations/manifestations in areas such as course handbooks and marketing information, and what actually happens at the point of delivery to students. There is a growing set of tools emerging to help provide a number of representations of the curriculum. We also have a more thorough understanding of the wider business processes involved in curriculum approval, as exemplified by this diagram from the PiP team, University of Strathclyde.

PiP Business Process workflow model

Given the multiple contexts we’re dealing with, how can we make the most of the day? Well, I’d like to try and move away from the complexity of the PiP diagram and concentrate a bit more on the “runtime” issue, i.e. transforming and importing representations/designs into systems which can then be used by students. It still takes a lot to beat the integration of design and runtime in LAMS, imho. So, I’d like to see some exploration of potential workflows between the systems represented and how far inputs and outputs from each can actually go.

Based on some of the systems I know will be represented at the event, the diagram below makes a start at illustrating some workflows we could potentially explore. N.B. This is a very simplified diagram and is meant as a starting point for discussion – it is not a complete picture.

Design Bash Workflows

So, for example, starting from some initial face-to-face activities such as the workshops being so successfully developed by the Viewpoints project, or the Accreditation! game from the SRC project at MMU, or the various OULDI activities, what would be the next step? Could you then transform the mostly paper-based information into a set of learning outcomes using the Co-genT tool? Could the file produced there then be imported into a learning design tool such as LAMS or LDSE or CompendiumLD? And/or could the file be imported to the MUSKET tool and transformed into XCRI CAP – which could then be used for marketing purposes? Can the finished design then be imported into a course database and/or a runtime environment such as a VLE or LAMS?

Or alternatively, working from the starting point of a course database – e.g. SRC, where they have developed a set template for all courses – would using the learning outcomes generating properties of the Co-genT tool enable staff to populate that database with “better” learning outcomes which are meaningful to the institution, teacher and student? (See this post for more information on the Co-genT toolkit.)

Or, another option: what is the scope for integrating some of these tools/workflows with other “hybrid” runtime environments such as PebblePad?

These are just a few suggestions, and hopefully we will be able to start exploring some of them in more detail on the day. In the meantime if you have any thoughts/suggestions, I’d love to hear them.

Betweenness Centrality – helping us understand our networks

Like many others I’m becoming increasingly interested in the many ways we can now start to surface and visualise connections on social networks. I’ve written about some aspects of social connections and the measurement of networks before.

My primary interest in this area just now is more at the CETIS ISC (Innovation Support Centre) level: to explore ways in which we can better use technology to surface our networks, connections and influence. To this end I’m an avid reader of Tony Hirst’s blog, and really appreciated being able to attend the recent Metrics and Social Web Services workshop organised by Brian Kelly and colleagues at UKOLN to explore this topic further.

Yesterday, prompted by a tweet of a visualisation of the twitter community at the recent eAssessment Scotland conference, the phrase “betweenness centrality” came up. If you are like me, you may well be asking yourself “what on earth is that?” Thanks to the joy of twitter, this little story provides an explanation (the zombie reference at the end should clarify everything too!)

View “Betweenness centrality – explained via twitter” on Storify
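If, like me, you find a worked example helps, betweenness centrality is easy to poke at in code. Here’s a minimal sketch using Python’s networkx library on a made-up follower graph (all the names and edges are invented purely for illustration):

```python
import networkx as nx  # pip install networkx

# A toy directed "who follows whom" graph - invented for illustration.
G = nx.DiGraph()
G.add_edges_from([
    ("ann", "bob"), ("bob", "ann"), ("ann", "cat"), ("cat", "bob"),  # cluster 1
    ("dan", "eve"), ("eve", "dan"),                                  # cluster 2
    ("cat", "dan"),                      # cat is the only bridge between them
])

# Betweenness centrality: for each node, the fraction of shortest paths
# between all other pairs of nodes that pass through it.
for name, score in sorted(nx.betweenness_centrality(G).items(),
                          key=lambda kv: -kv[1]):
    print(f"{name}: {score:.3f}")
```

Run it and “cat” comes out on top: she sits on the only route between the two clusters, so most cross-cluster shortest paths run through her. That, in miniature, is what the zombie story is getting at.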

In terms of CETIS, being able to illustrate aspects of our betweenness centrality is increasingly important. Like others involved in innovation and community support, we often find it difficult to qualify and quantify impact and reach, and we often have to rely on anecdotal evidence. On a personal level, I do feel my own “reach” and connectedness has been greatly enhanced via social networks. And through various social analysis tools such as Klout, PeerIndex and SocialBro I am now gaining a greater understanding of my network interactions. At the CETIS level, however, we have some other factors at work.

As I’ve said before, our social media strategy has arisen more through default than design, with twitter being our main “corporate” tool. We don’t have a CETIS presence on the other usual suspects (Facebook, LinkedIn, Google+), and we’re not in the business of developing any kind of formal social media marketing strategy. Rather, we want to enhance our existing network and let our community know about our events, blog posts and publications. At the moment twitter seems to be the most effective tool to do that.

Our @jisccetis twitter account has a very “lite” touch. It primarily pushes out notifications of blog posts and events, and we don’t follow anyone back. Again this is more by accident than design, but it has resulted in a very “clean” twitter stream. On a more serious note, our main connections are built and sustained through our staff and their personal interactions (both online and offline). However, even with this limited use of twitter (and I should point out here that not all CETIS staff use twitter) Tony has been able to produce some visualisations which start to show the connections between followers of the @jisccetis account and their connections. The network visualisation below shows a view of those connections sized by betweenness centrality.

@jisccetis twitter followers betweenness centrality
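The sizing trick itself is straightforward to reproduce if you have a follower graph to hand – a rough sketch, again with Python’s networkx, using one of its stock example graphs as a stand-in for real follower data:

```python
import networkx as nx
import matplotlib.pyplot as plt

# Stand-in for a real follower graph: the Krackhardt kite, a small
# stock example graph that ships with networkx.
G = nx.krackhardt_kite_graph()

bc = nx.betweenness_centrality(G)

# Scale each node's size by its betweenness so the "bridges" jump out.
sizes = [300 + 3000 * bc[n] for n in G.nodes()]
nx.draw_spring(G, node_size=sizes, with_labels=True)
plt.savefig("betweenness_sized.png")
```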

So, using this notion of betweenness centrality we can start to see, understand and identify some key connections, people and networks. Going back to the twitter conversation, Wilbert pointed out that “. . . innovation tends to be spread by people who are peripheral in communities”. I think this is a key point for an Innovation Support Centre. We don’t need to be heavily involved in communities to have an impact, but we need to be able to make the right connections. One example of this type of network activity is illustrated through our involvement in standards bodies. We’re not always at the heart of developments, but we know how and where to make the most appropriate connections at the most appropriate times. It is also increasingly important that we are able to illustrate and explain these types of connections to our funders, as well as gaining a greater understanding of where we make connections, and any gaps or potential for new connections.

As the conversation developed we also spoke about the opportunities to start to show the connections between JISC funded projects. Where/what are the betweenness centralities across the e-Learning programme, for example? What projects, technologies and methodologies are cross-cutting? How can the data we hold in our PROD project database help with this? Do we need to do some semantic analysis of project descriptions? But I think that’s for another post.

Initial thoughts on “Follower networks and ‘list intelligence’ list contexts” for @jisccetis

As many of you will probably know, Tony Hirst has been doing some really interesting work recently around data visualisation. Last week he blogged about some work he had been doing visualising his twitter network, and at the end of his post offered to “spend 20 minutes or so” creating visualisations for others – for a donation to charity. Coincidentally, we had an internal CETIS communications meeting last week where we were talking about our reach/networks etc., so I decided to take up Tony’s offer:

Hi Tony
happy to make a donation to ovacome if you would do a map for me -well actually for CETIS, Would like to see if we can make sense of (any) links between our corporate “jisccetis” twitter account and our individual ones.
Sheila

and the results are here. Tho’ I suspect it probably took more than 20 minutes :-)

I haven’t spent a great deal of time yet analysing the graphs in detail, but there are a couple of thoughts that Tony’s post triggered which I feel merit a bit more contextualisation.

Firstly, yes, the @jisccetis account has a relatively low number of followers (currently 193) and doesn’t follow anyone. This is partly due to the way we manage (or perhaps mis-manage) the account. As most of the staff in CETIS have personal twitter accounts, we haven’t really been using the corporate one much. However, recently we have been making more of an effort to use it, and have now set up automated tweets from the RSS feeds for our news and events, augmented with other notable items, e.g. the joint UKOLN/CETIS survey on institutional use of mobile web services.
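For anyone wondering what that kind of automation involves, the pattern is simple: poll the feed, skip anything already posted, tweet the rest. A hypothetical Python sketch – the feed URL, state file and credentials are all made up, and the feedparser/tweepy libraries are just one way to do it:

```python
import json
import feedparser  # pip install feedparser
import tweepy      # pip install tweepy

FEED_URL = "http://example.org/news/feed"  # hypothetical feed URL
SEEN_FILE = "tweeted_items.json"           # remembers what's been posted

def load_seen():
    try:
        with open(SEEN_FILE) as f:
            return set(json.load(f))
    except FileNotFoundError:
        return set()

def main():
    seen = load_seen()

    # Placeholder credentials - these would come from your twitter app settings.
    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
    api = tweepy.API(auth)

    for entry in feedparser.parse(FEED_URL).entries:
        item_id = entry.get("id", entry.link)
        if item_id in seen:
            continue  # already tweeted this item
        api.update_status(f"{entry.title[:110]} {entry.link}")
        seen.add(item_id)

    with open(SEEN_FILE, "w") as f:
        json.dump(sorted(seen), f)

if __name__ == "__main__":
    main()
```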

We haven’t really put an awful lot of time or effort into a corporate twitter strategy – other than reckoning we should have a “corporate” account and use it :-) We didn’t take a decision about not following anyone; that just sort of happened. We (the small group of us who are ‘the keepers’ of the @jisccetis login details) don’t really look at the actual account page much now, as most of the output is automated. Actually, I feel that this approach works well for this type of account. As it isn’t ‘owned’ by one person it doesn’t (and won’t) build the kinds of relationships more personal accounts have. Not following people doesn’t seem to stop people following the account – and if you don’t follow @jisccetis, then quick plug: please do – we don’t spam, and we send out pretty useful info for edutechie types.

Tony’s work on the lists for the account is interesting too. TBH I hadn’t really had a close look at what lists the account was on – and thanks to all eight of you for listing the account. As so many CETIS staff are on twitter, people may wonder why we don’t have our own CETIS list. Well, there is a bit of historical background there too. When lists first came out, some members of staff did create such a list; however there were other staff members who didn’t want to be listed in that way, so we never really took the list idea any further forward in a corporate sense. As anyone who uses twitter knows, there is a fine line between personal and work use (personally I tend to use it more for the latter now) and our twitter accounts are personal accounts. Like most of our use of web 2.0 communication tools, we take a very light touch approach – no one has to tweet, and we don’t have, and wouldn’t want to have, editorial control. We rely on common sense and judgement, which for the most part works remarkably well. We use the same policy for blogging too.

The visualisations are really fascinating and my colleagues and I will be taking a much closer look at them over the coming weeks. I’m sure they will be key for us in our continuing development and (mis)management of the @jisccetis twitter account. One thing I would now love to do is hire Tony for a week or so to get him to do the same for all our individual accounts and cross-reference them all. However, given “the current climate”, I think we may have to do that ourselves, but there is certainly plenty of food for thought to be going on with.

Approaching The Learning Stack case study

Over the past couple of years, I’ve seen a number of presentations by various colleagues from the Universitat Oberta de Catalunya about the development of their learning technology provision. And last September I was privileged to join other international colleagues for their OpenEd Tech summit.

Eva de Lera (Senior Strategist at UOC) has just sent me a copy of a case study they have produced for Gartner (Case Study: Approaching the Learning Stack: The Third Generation LMS at Universitat Oberta de Catalunya). The report gives an overview of how and why UOC have moved from a traditional monolithic VLE to their current “learning stack”, which is based on a SOA approach. NB you do have to register to access the report.

The key findings and recommendations are salient and resonate with many of the findings that are starting to come through from, for example, the JISC Curriculum Design programme and many (if not all) of the JISC programmes which we at CETIS support. The findings and recommendations focus on the need to develop the community collaboration which UOC has fostered, both in terms of the internal staff/student community and the community-driven nature of open source software development. Taking this approach has ensured that their infrastructure is flexible enough to incorporate new services whilst still maintaining tried and trusted ones, and has allowed them the flexibility to implement a range of relevant standards and web 2.0 technologies. The report also highlights the need to accept failure when supporting innovation – and, importantly, the need to document and share any failures. It is often too easy to forget that much (if not most) of the best innovation comes from the lessons learned from the experience of failure.

If we want to build flexible, effective systems (both in terms of user experience and cost) then we need to foster a culture which supports open innovation. I certainly feel that that is one thing which JISC has enabled the UK HE and FE sectors to do, and long may it continue.

Corporate memory, timelines and memolane

This week we had one of our quite rare all-of-CETIS staff meetings. During the discussions over the two days, communication – and how to be smarter and better at sharing what we do amongst ourselves – was a recurring theme. If you keep up with the CETIS news feed you’ll probably realise that we cover quite a large range of activities, and that’s only the “stuff” we blog about. It doesn’t represent all of our working activities.

This morning I was reminded of another time-based aggregation service called memolane. Being a bit of a sucker for these kinds of things I had signed up to it last year, but had actually forgotten all about it. However, I did have another look today and set up a JISC CETIS account with RSS feeds from our twitter account and our news and features; and I was pleasantly surprised. The screenshot below gives an indication of the timeline.

JISC CETIS memolane timeline

The team at memolane have released embedding functionality, but as we are super-spam-conscious here, our wordpress installation doesn’t like to embed things. I should also add that the memolane team were super quick at picking up on a tweet about embedding and have been really helpful. Top marks for customer service.

As a quick and easy way to create and share aspects of a collective memory of organisational activities and output, I think this has real potential. I also think that this could be useful for projects as a collective memory of their activities (you can of course add multiple feeds from youtube, slideshare etc. too). I’d be interested in hearing your thoughts – is this something that actually might only make sense for us and our funders? Or do you think this type of thing would be useful in a more visible section of the CETIS website?

**Update August 2011**

The embed feature now works!

Widget Bash – what a difference two days make

“I got more done here in a day than I would have in three or four days in the office”. Just one of the comments during the wrap-up session at our widget bash (#cetiswb).

And judging from comments from the other delegates, having two days to work on developing “stuff” is one of the best ways to actually move past the “oh, that’s interesting, I might have a play with that one day” stage to getting something up and running.

The widget bash was the latest in our series of “bash” events, which began many years ago with code bashes (back in the early days of IMS CP) and have evolved to cover learning design with our design bashes. This event was an opportunity to share, explore and extend practice around the use of widgets/apps/gadgets, and to allow delegates to work with the Apache Wookie (Incubating) widget server, which deploys widgets built to the W3C widget specification.

We started with a number of short presentations from most of the projects in the current JISC funded DVLE programme. Starting with the rapid innovation projects, Jen Fuller and Alex Walker gave an overview of their Examview plugin, then Stephen Green from the WIDE project, University of Teesside, explained the user-centred design approach they took to developing widgets. (More information on all of the rapid innovation projects is available here.) We then moved to the institutionally focused projects, starting with Mark Stubbs from the W2C project, who took us through their “mega-mash-up” plans. The DOULS project was next, with Jason Platts sharing their mainly google-based approach. Stephen Vickers from the ceLTIc project then outlined the work they have been doing around tools integration using the IMS LTI specification, and we also had a remote presentation around LTI implementation from the EILE project. Rounding off the DVLE presentations, Karsten Lundqvist from the Develop project shared the work they have been doing, primarily around building an embedded-video Blackboard building block. Mark Johnson (University of Bolton) then shared some very exciting developments coming from the iTEC project, where smartboard vendors have implemented wookie and have widget functionality embedded in their toolset, allowing teachers to literally drag and drop collaborative activities onto their smartboards at any point during a lesson. Our final presentation came from Alexander Mikroyannidis on the ROLE project, which is exploring the use of widgets and developing a widget store.

After lunch we moved from “presentation” to “doing” mode. Ross Gardler took everyone through a basic widget building tutorial, and despite dodgy wifi connections and issues downloading the correct version of Ant, most people seemed to be able to complete the basic “hello world” tutorial. We then split into two groups, with Ross continuing the tutorials and moving on to creating geo-location widgets, and Scott Wilson working with some of the more experienced widget builders in what almost became a troubleshooting surgery. However, his demo of repackaging a pac-man game as a W3C widget did prove very popular.
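For anyone who missed the tutorial, the “hello world” of W3C widgets is pleasingly small: a config.xml in the W3C widgets namespace plus an HTML start file, zipped into a .wgt package. A rough sketch of building one in Python (the widget id and file names are just placeholders):

```python
import zipfile

# Minimal W3C widget metadata: a config.xml in the
# http://www.w3.org/ns/widgets namespace, plus an HTML start file.
CONFIG_XML = """<?xml version="1.0" encoding="UTF-8"?>
<widget xmlns="http://www.w3.org/ns/widgets"
        id="http://example.org/widgets/hello"
        version="1.0" height="200" width="200">
  <name>Hello World</name>
  <content src="index.html"/>
</widget>
"""

INDEX_HTML = """<!DOCTYPE html>
<html><body><p>Hello, world!</p></body></html>
"""

# A .wgt package is just a zip archive with config.xml at its root.
with zipfile.ZipFile("hello.wgt", "w") as wgt:
    wgt.writestr("config.xml", CONFIG_XML)
    wgt.writestr("index.html", INDEX_HTML)

print("Wrote hello.wgt - ready to deploy to a widget server such as Wookie")
```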

The sun shone again on day two, and with delegates more familiar with wookie, how to build widgets, and potential applications for their own contexts, the serious bashing began.

One of the great things about working with open source projects such as Apache Wookie (Incubating) is the community sharing of code and problem solving. We had a couple of really nice examples of this in action, starting with the MMU drop-in pc-location widget. The team had managed to work out some IE issues that the wookie team were struggling with (see their blog post), and, inspired by the geo-location templates Ross showed on day 1, managed to develop their widget to include geo-location data. Now if users access the service from a geo-location aware device it will return a list of free computers nearest to their real-time location. The team were able to successfully test this on an ipad, galaxy tab, iphone and android phone. For non-location aware devices the service returns an alphabetical list. You can try it out here.
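The logic behind that behaviour is a nice little algorithm in itself: sort labs by great-circle distance when you know where the user is, fall back to alphabetical order when you don’t. A hedged sketch of how the server side might work (lab names, counts and coordinates all invented):

```python
from math import asin, cos, radians, sin, sqrt

# Invented sample data: (lab name, free PCs, latitude, longitude)
LABS = [
    ("Aytoun Library", 12, 53.4751, -2.2387),
    ("Didsbury Lab",    3, 53.4166, -2.2310),
    ("John Dalton",     7, 53.4720, -2.2412),
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def free_pcs(user_lat=None, user_lon=None):
    """Nearest-first if we know where the user is, alphabetical otherwise."""
    if user_lat is None or user_lon is None:
        return sorted(LABS, key=lambda lab: lab[0])
    return sorted(LABS,
                  key=lambda lab: haversine_km(user_lat, user_lon, lab[2], lab[3]))

for name, free, *_ in free_pcs(53.4735, -2.2400):
    print(f"{name}: {free} free")
```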

Sam Rowley and colleagues from Staffordshire University decided to work on some DOM and jQuery issues. Whilst downloading the wookie software they noticed a couple of bugs, so they fixed them and submitted a patch to the Wookie community.

Other interesting developments emerged from discussions around ways of getting data out of VLEs. The team from Strathclyde realised that by using the properties settings in wookie they could pass a lot of information fairly easily from Moodle to a widget. On day two they converted a Moodle reading list block to a wookie widget with an enhanced interface allowing users to specify parameters (such as course code etc.). The team have promised to tidy up the code and submit it to both the wookie and moodle communities. Inspired by this, Stephen Vickers is going to have a look at developing a powerlink for WebCT/Blackboard with similar functionality.

On a more pedagogical note, some of the members of the Coeducate project worked on developing a widget version of the 8LEM-inspired Hybrid Learning Model from the University of Ulster. By the end of the second day they were well on the way to developing a drag and drop sequencer and were also exploring multiuser collaboration opportunities through the google wave api functionality which wookie has adopted.

Overall there seemed to be a real sense of accomplishment from delegates, who managed to do a huge amount despite having to fight with very temperamental wifi connections. Having two experts on hand proved really useful to delegates, as they were able to ask the “stupid” and, more often than not, not so stupid questions. Having the event run over two days also seemed to be very popular, as it allowed delegates to move from thinking about doing something to actually doing it. It also highlighted the positive side of contributing to an open-source community, and hopefully the Apache Wookie community will continue to see the benefit of increased users from the UK education sector. We also hope to run another similar event later in the year, so if you have any ideas or would like to contribute please let me know.

For another view of the event, I’ve also created a storify version of selected tweets from the event.

Personal publishing – effective use of networks or just noise?

If you follow my twitter stream you may have noticed that every day at about 9am, you’ll see a tweet with a link to my daily paper and a number of @mentions of people featured in it. You may even have been one of those @mentions.

I’ve actually had a paper.li account since last year, but it’s only recently that I’ve set the “automagic” tweet button live. Partly this was because I’ve found it quite interesting following links to other paper.li sites where I’ve been mentioned, and partly as a bit of a social experiment to see (a) if anyone noticed and (b) what reactions, if any, it would elicit. In fact this post is a direct response to Tore Hoel’s tweet at the weekend asking if I was going to reflect on its use.

Well, here goes. Being one of those people who likes to play (and follows every link Stephen Fry tweets), I was intrigued when I first came across paper.li and signed up. For those of you unfamiliar with the service, it basically pulls in links from your twitter feed, categorizes them and produces an online paper. Something, and I’m not sure what it was, prevented me from making the links public from the outset. On reflection I think it was that I wanted to see how the system worked, and if it actually did provide something useful.

There’s no editorial control with the system. It selects and classifies articles and links, randomly generates your online paper, and (if you choose) sends a daily tweet from your twitter account with a url and @mentions for selected contributions. Sometimes these are slightly odd – you might get mentioned because you tweeted a link to an article in a “proper” paper, a blog entry or a link to a flickr stream. It’s not like getting a by-line in a proper paper by any stretch of the imagination. The website itself has an archive of your paper and there’s also the ability to embed a widget into other sites such as blogs. Other services I’ve used which utilise twitter (such as storify) generate more relevant @mention tweets, i.e. only for those you actually quote in your story. You also have the option not to send an auto tweet – something I missed the first time I used it, and so ended up tweeting myself about my own story :-)

So, without editorial control, is this service useful? Well, like most things in life, it depends. Some people seem to find it irritating, as it doesn’t always link to things they have actually written, rather links they have shared. So for the active self-promoter it can detract from getting traffic to their own blog/website. Actually, that’s one of the things I like – it collates links that I often haven’t seen, and I can do a quick skim and scan and decide what I want to read more about. Sometimes they’re useful – sometimes not. But on the whole that’s the thing with twitter too – some days really useful, others a load of footballing nonsense. I don’t mind being quoted by other people using the service either. It doesn’t happen that often, and I don’t follow too many people, and guess what – sometimes I don’t actually read everything in my twitter stream, and I don’t follow all the links people post – shocking confession, I know! However, when you post on twitter it’s all publicly available anyway, so why not collate it? If it’s a bit random, then so be it. But some others see it differently.

If you don’t like being included in these things then, like James Clay, get yourself removed from the system.

There have been a couple of other instances where I have found the service useful too. For the week after the CETIS10 conference last year, we published the CETIS10 daily via the JISC CETIS twitter account. As there was quite a lot of post-conference activity on blogs etc., it was another quite useful collation tool – but only for the short period when there was enough activity related to the conference hashtag for the content to be nearly always conference-related. Due to the lack of editorial control, I don’t think a daily JISC CETIS paper.li would be appropriate. The randomness that I like in my personal paper isn’t really appropriate at an organisational communication level.

I recently took part in the LAK11 course, and one of the other participants (Tony Searle) set up a paper.li using the course hashtag. I found this useful as it quickly linked me to other students, articles etc. which I might not have seen/connected with, and vice versa. Again, the key here was having enough relevant activity. Tore asked if it would be useful for projects? I’m in two minds about that – on the one hand it might be, in terms of marketing and getting followers. But again the lack of editorial control might lead to promotion of something that wasn’t as closely related to the project as you would like. If, however, you have an active project community then it might work.

For the moment the Sheila MacNeill daily will continue but I’d be interested to hear other thoughts and experiences.

What technologies have been used to transform curriculum delivery?

The Transforming Curriculum Delivery through Technology (aka Curriculum Delivery) Programme is now finished. Over the past two years, the 15 funded projects have all been on quite a journey and have between them explored an array of technologies (over 60), from excel to skype to moodle to google wave.

The bubblegram and treegraph below give a couple of different visual overviews of the range of technologies used.
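If you fancy rolling your own version of this kind of overview, the idea is simple: one bubble per technology, sized by the number of projects using it. A toy matplotlib sketch (the technologies and counts below are invented, not the programme’s actual figures):

```python
import matplotlib.pyplot as plt

# Invented usage counts - not the programme's actual figures.
tech_counts = {"Moodle": 9, "Excel": 4, "Skype": 3, "Google Wave": 2, "Blackboard": 6}

names = list(tech_counts)
counts = [tech_counts[n] for n in names]

# A crude "bubblegram": one bubble per technology, with the bubble's
# area scaled by the number of projects using it.
plt.scatter(range(len(names)), [1] * len(names),
            s=[c * 200 for c in counts], alpha=0.5)
for x, (name, c) in enumerate(zip(names, counts)):
    plt.annotate(f"{name} ({c})", (x, 1), ha="center", va="center")
plt.axis("off")
plt.savefig("bubblegram.png")
```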

As has been reported before, there’s not been anything particularly revolutionary or cutting edge about the technologies being used. The programme did not mandate any particular standards or technical approaches. Rather, the projects have concentrated on staff and student engagement with technology. Which of course is the key to having real impact in teaching and learning. The technologies themselves can’t do it alone.

The sheer number of technologies being used does, I think, show an increasing confidence and flexibility, not only from staff and students but also in developing institutional systems. People are no longer looking for the magic out-of-the-box solution and are more willing to develop their own integrations based on their real needs. The ubiquity of the VLE does come through loud and clear.

There are still some key lessons coming through.

* Simple is best – don’t try and get staff (and students) to use too many new things at once.
* Have support in place for users – if you are trying something new, make sure you have the appropriate levels of support in place.
* Tell people what you are doing – talk about your project wherever you can and share your objectives as widely as possible. Show people the benefits of what you are doing. Encourage others to share too.
* Talk to institutional IT support teams about what you are planning – before trying to use a new piece of software, make sure it does work within your institutional network. IT teams can provide invaluable information and advice about what will/won’t work. They can also provide insights into scalability issues for future developments. A number of the projects found that although web 2.0 technologies can be implemented relatively quickly, there are issues when trying to increase the scale of trial projects.

A full record of the technologies in use for the projects is available from our PROD project database. More information on the projects and a selection of very useful shareable outputs (including case studies and resources) is available from the Design Studio.

Happy New Media Year

Over the holidays I’ve tried to take a proper break from twitter. It’s becoming such an integral part of my work life, I wanted a break. However, twitter is one of those things that does cross work/life boundaries so it is hard to keep completely away and tonight (again) twitter and the BBC illustrated the power of the social web and data visualisation.

In case you weren’t aware, tonight was the 60th anniversary of the equally loved and maligned radio soap “The Archers”. Tension has been building in the press over the past few weeks. Being Radio 4, it’s been in all the broadsheets!

Ultimately this extended half-hour episode was a bit of a let down (no thud, not that much screaming). But the twitter stream using the #sattc (shake Ambridge to the core) hashtag more than made up for the script’s deficits. And the live website, mashing up tweets and plot lines with some great visualisations, really showed how real-time social data from an engaged and (mostly) articulate community can be used.

I’m hoping in 2011 we’ll be able to see some similar experiments within the educational community. What’s our equivalent of the sattc hashtag? What messages can we effectively visualise – innovation? impact? itcc (in the current climate)? And how can we ensure that the people making decisions about funding for HE can see the collective thoughts of our equally engaged and articulate community?

SHEEN Sharing – getting the web 2.0 habit

Sometimes I forget how integral web 2 technologies are to my working life. I blog, facebook, twitter, bookmark, aggregate RSS feeds, do a bit of ‘waving’ – you know, all the usual suspects. And I’m always up for trying any new shiny service or widgety type thing that comes along. There are certain services that have helped to revolutionize the way I interact with my colleagues, peers and that whole “t’internet” thang. They’re a habit, part of my daily working life. So, last week I was fascinated to hear about the journey the SHEEN Sharing project has been on over the last year, exploring the use of web 2.0 tools with a group of practitioners who had barely got into the web 1 habit.

SHEEN, the Scottish Higher Education Employability Network, was set up in 2005. Employability is one of the SFC’s enhancement themes, and almost £4 million was made available to Scottish HE institutions towards developments in this area. This led to a network of professionals – the ECN (employability co-ordinators network) – who had some fairly common needs. They all wanted to reduce duplication of effort in finding resources, to share and comment on resources being used, and to work collaboratively on creating new resources. As the actual posts were on fixed-term contracts, there was the additional need to capture developing expertise in the field. So, they started the way most networks do, with an email list. This worked to a point, but had more than a few issues, particularly when it came to effectively managing resource sharing and collaboration.

One of the members of this network, Cherie Woolmer, is based in the same department as a number of us Scottish Cetisians. So, in true ‘chats that happen when making coffee’ style, we had a few discussions around what they were trying to do. They did have a small amount of funding, and one early idea was to look at building their own repository. However, we were able to offer an alternative view: they didn’t actually need a full-blown repository, and there were probably quite a few freely available services that could more than adequately meet their needs. So, the funding was used to conduct a study (SHEEN Sharing) into the potential of web 2.0 tools for sharing.

Sarah Currier was hired as a consultant, and her overview presentation of the project is available here. Over a period of just about a year (there was an extension of funding to allow some training at the end of last year/early this year), and without any budget for technology, Sarah, along with a number of volunteers from the network, explored which web tools/services would actually work for this community.

It was quite a journey, as summarized in the presentation linked to above. Sarah used videos (hosted on Jing) of the volunteers to illustrate some of the issues they were dealing with. However, I think a lot of it boiled down to habit, and getting people to be confident in using tools such as bookmarking, shared document spaces, RSS feeds etc. It was also interesting to see the tension between professional/formal use of technology and informal use. Web 2 does blur boundaries, but for some people that blurring can be an uncomfortable space. One thing that came through strongly was the need for face to face training and support to help (and maybe very gently force!) people to use, or at least try, new technologies and, more importantly, for them to see for themselves how they could use them in their daily working lives – in effect, how they could get into the habit of using some technologies.

The project explored a number of technologies including scribd (for publicly sharing documents), google docs (for collaborative working), twitter (which actually ended up being more effective at a project level in terms of extending connections/raising awareness) and diigo for bookmarking and sharing resources. Diigo has ended up being a core tool for the community: as well as providing bookmarking services, the group and privacy functions it offers gave the flexibility that this group needed. Issues of online identity were key to members of the network – not everyone wants to have an online presence.

I hadn’t really explored diigo before this and I was really taken with the facility to create webslides widgets of bookmarked lists which could be embedded into other sites. A great way to share resources and something I’m playing around with now myself.

I think the SHEEN Sharing journey is a great example of the importance of supporting people in using technology. Yes, there is “loads of stuff” out there ready to be used, but to actually make choices and create effective sharing and use, we rely on human participation. Supporting people is just as, if not more, important if we want to really exploit technology to its fullest potential. It also shows the growing need to share expertise in the use of web 2.0 technologies. You don’t need a developer to create a website/space to share resources – but you do need experience in how to use these technologies effectively to allow groups like SHEEN to exploit their potential. I was struck by how many tools Sarah had used throughout the evaluation phase. Only a couple of years ago it would have been almost impossible for one person to easily (and freely) capture, edit and replay video, for example. A good example to highlight the changing balance of funding from software to “peopleware”, perhaps?

More information about SHEEN Sharing can be found on their recently launched web resources site – a great example of a community-based learning environment.