Innovation, sustainability and community – reflections on #cetis13

The theme of this year’s CETIS conference was Open for Education: Technology Innovation in Universities and Colleges. As usual we had a wide and diverse range of sessions, but if there was one theme that underpinned them all it was this: how can we sustain innovation in the face of the challenges currently facing the sector?

Sustainability was the explicit theme of the Open Practice and OER Sustainability session Phil and I ran. Three years of HEFCE UKOER funding came to an end last autumn and, while there’s no denying that the programmes produced a significant quantity of open educational resources, did they also succeed in changing practice and embedding open education innovation across the English HE sector? Judging by the number of speakers and participants at the session, I think it’s fair to say that the answer is a resounding “Yes”. At least in the short term. Patrick McAndrew, who has been involved in organising this year’s OER13 conference, pointed out that while they expected a drop in numbers this year, as UKOER funding has ended and the event is not running in conjunction with OCWC, in actual fact numbers have risen significantly. Practice has changed and many institutions really are more aware of the potential and benefits of open educational resources and open educational practices. Though as several participants pointed out, MOOCs have rather eclipsed OERs over the last 12 months and the relationship between the two is ambiguous to say the least. As Amber Thomas put it: “MOOCs stole OERs girlfriend”.


David Kernohan used the memorable image of a teddy bear lecturer playing happily on a seesaw with his friends and with lots of open educational resources and innovative technologies until all the money ran out and all that was left was the teddy bear and the resources. However I can’t help thinking that the real threat to OER sustainability is that the next thing to disappear might be the teddy bear, and after all it’s the teddy bears, or rather the people, that sustain communities of innovation and practice. With this in mind, there was some discussion of the importance of subject communities in sustaining innovative educational practice, and Suzanne Hardy of Newcastle reminded us that Humbox, an excellent example of an innovative and sustainable development presented by Yvonne Howard of Southampton, was originally a collaboration between four HEA subject centres. The legacy of the subject centres is certainly still visible in the sector; however, with many talented people having had to move into other roles, and those that have managed to hang on increasingly under threat, how much longer will the community of open educational innovation be able to sustain itself?

The latter half of Scott Wilson’s session on Open Innovation and Open Development also focused on sustainability, and again the discussion circled round to how we can sustain the community of developers that drives innovation forward. It’s more years than I can recall since their demise, but the CETIS SIGs were put forward yet again as a good model for sustaining innovative communities of developers and practitioners. I also suggested that it was still possible to see the legacy of the SHEFC Use of the MANs Initiative in the sector, as a surprising number of people still working in educational technology innovation first cut their teeth on UMI projects.

There was some discussion of the emergence of “boundary spanning people and blended professionals”, but also a fear that institutions are increasingly falling back on very traditional and strictly delineated professional roles. At a time when innovation is increasingly important, many institutions are shedding the very people who have been responsible for driving innovation forward in the sector. At the end of the session, Scott asked what is the one thing that organisations such as CETIS and OSS Watch should do over the next six months to help sustain open innovation and open development? The answer that came back was Survive! Just survive, stay alive, keep the innovation going, don’t lose the people. The fact that Scott was wearing a zombie t-shirt while facilitating the session was verging on the poignant :}

Meanwhile over in Martin Hawksey and David Sherlock’s Analytics and Institutional Capabilities session, Ranjit Sidhu of SiD was laying into all manner of institutional nonsense, including the sector-wide panic that followed clearing, the brutal reality of the competitive education market, the millions spent on Google advertising, the big data projects that are little more than a big waste of money and, last but not least, the KIS. Ranjit showed the following slide, which drew a collective murmur of horror, though not surprise, from the audience.


If you look carefully you’ll notice that the number of daily requests to Unistats for data is… 9. Yep. 9. It hasn’t even hit double figures. One colleague who was responsible for KIS returns recently estimated that the cost to their institution was in the region of a hundred thousand. Multiply that across the sector… Does anyone know what the total cost of the KIS has been? And the return on investment? As one participant commented in response to Ranjit’s presentation, KIS is not a tool for students, it’s a tool to beat VCs over the head with. I’ll leave you to draw your own conclusions…

I think it’s fair to say that a lot of us went to CETIS13 not knowing quite what to expect, and even fewer of us know what the future holds. Despite these uncertainties the conference had a noticeably positive vibe, which more than a few people remarked on over the course of the event. We’re all living in “interesting times”, but the brutal reality of the crisis facing HE has done little to dent people’s belief that sustaining open innovation, and the community of open innovators, is a fundamental necessity if the sector is to face these challenges. I certainly felt there was a real spirit of determination at CETIS13; here’s hoping it will see us through the “interesting times”.

#chatopen Open Access and Open Education

Do open access and open education need to work together more? That was the question posed by Pat Lockley and discussed on twitter on Friday evening by a group of open education folks using the hashtag #chatopen.

Open access in this instance was taken to refer to open access repositories of peer-reviewed papers and other scholarly works and associated open access policies and agendas. There was general agreement that open access and open education proponents should work together but also recognition that it was important to be aware of different agendas, workflows, technical requirements, etc. Suzanne Hardy of the University of Newcastle added that it was equally important to take heed of open research data too.

Although the group acknowledged that open access still faced considerable challenges, there was a general consensus that it was more mature, both in terms of longevity and uptake, and that it was embedded more widely in institutions. Amongst other factors, the relative success of open access was attributed to the fact that most universities already had policies and repositories for publishing and managing scholarly outputs, while few had comparable strategies for managing teaching and learning materials. Phil Barker added that research outputs were always intended for publication whereas teaching and learning materials were generally kept within the institution. Nick Sheppard of Leeds Met also pointed out that most institutional repositories could not handle teaching and learning resources and research data without significant modification. This led to the suggestion that while institutional repositories fit the culture of scholarly works and open access well, research data and OERs are much harder to manage and share.

In terms of uptake and maturity, although there was general agreement that open access was some way ahead of open education, it appears that open data is catching up fast due to institutional drivers such as the REF and high-level policy support. Funding council mandates were also recognised as an important driver in this regard.

Different interpretations of the term “open” were discussed, as the “open” in open access and the “open” in open education were felt to be quite different. The distinction between gratis and libre was felt to be useful, though it is important to recognise more subtle variations of open.

There was some consensus that teaching and learning resources tend to be regarded as being of lesser importance to institutions than scholarly works and research data and that this was reflected in policy developments, staff appointments and promotion criteria. Furthermore, until impact measures, funding and business models change this is likely to remain the case. Open access and open education both reflect institutional culture but they are separate processes and this separation reflects university polices, priorities and funding streams.

The group also felt that different communities had emerged around open access and open education, with open access mainly being the concern of librarians and open education the domain of eLearning staff. Phil refined this distinction by suggesting that open access is driven by researchers but managed by librarians. However Nick Sheppard of Leeds Met suggested that the zeitgeist was changing and that open access, open education and open research data are starting to converge.

In response to the question “what could open education learn from open access?” one lesson may be that top-down policy can help. Although open education processes are more complex and diverse than open access, the success of open access could aid open education.

Pat wrapped up the session by asking: where next for open education? What do we do? Lis Parcell of RSC Wales cautioned against open education becoming the domain of “experts” and emphasised the importance of enabling new audiences to join the open debate, by using plain language where possible, meeting people where they are and providing routes to help them get a step on the ladder. There was also some appetite for open hackdays and codebashes that would bring teachers, researchers and developers together to build OA/OER mashups. Nick put forward the following use case:

“I want to read a research paper, text mined & processed, AI takes me to relevant OER to consolidate learning!”
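As a very rough sketch of the kind of OA/OER mashup Nick has in mind, a hackday team might start with simple keyword overlap between a paper and OER descriptions before reaching for anything as ambitious as text mining or AI. Everything below (the titles, descriptions and stopword list) is invented for illustration:

```python
# Naive OA/OER matching sketch: rank OERs against a research paper by
# counting shared keywords. All data here is made up for illustration.

def keywords(text, stopwords={"the", "a", "of", "and", "to", "in", "for", "on"}):
    """Lowercase, split on whitespace, strip punctuation, drop stopwords."""
    return {w.strip(".,!?") for w in text.lower().split()} - stopwords

def rank_oers(paper_text, oers):
    """Return OER titles ordered by keyword overlap with the paper."""
    paper_kw = keywords(paper_text)
    scored = [(len(paper_kw & keywords(o["description"])), o["title"]) for o in oers]
    return [title for score, title in sorted(scored, reverse=True) if score > 0]

paper = "A study of photosynthesis and chlorophyll absorption in vascular plants"
oers = [
    {"title": "Intro to plant biology",
     "description": "photosynthesis chlorophyll basics for plants"},
    {"title": "Roman history podcast",
     "description": "lectures on the roman empire"},
]
print(rank_oers(paper, oers))  # → ['Intro to plant biology']
```

Naive as it is, a prototype along these lines would at least demonstrate the plumbing needed to connect an open access repository to an OER collection, which is exactly the sort of thing a hackday could deliver.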

Finally everyone agreed that it’s important to keep talking, to keep open education on the agenda and try to transform open practice into open policy.

So there you have it! A brief summary of a wide-ranging debate conducted using only 140 characters! Who says you can’t have a proper conversation on twitter?! If you’re interested in reading the full transcript of the discussion, Martin Hawksey has helpfully set up a TAGS Viewer archive of the #chatopen discussion here.

If you want to follow up any of the points or opinions raised here then feel free to comment below or send a mail to

Many thanks once again to Pat Lockley for setting up the discussion and to all those who participated.

Back to the Future – revisiting the CETIS codebashes

As a result of a request from the Cabinet Office to contribute to a paper on the use of hackdays during the procurement process, CETIS have been revisiting the “Codebash” events that we ran between 2002 and 2007. The codebashes were a series of developer events that focused on testing the practical interoperability of implementations of a wide range of content specifications current at the time, including IMS Content Packaging, Question and Test Interoperability, Simple Sequencing (I’d forgotten that even existed!), Learning Design and Learning Resource Meta-data, IEEE LOM, Dublin Core Metadata and ADL SCORM. The term “codebash” was coined to distinguish the CETIS events from the ADL Plugfests, which tested the interoperability and conformance of SCORM implementations. Over a five-year period CETIS ran four content codebashes that attracted participants from 45 companies and 8 countries. In addition to the content codebashes, CETIS also ran additional events focused on individual specifications such as IMS QTI, or the outputs of specific JISC programmes, such as the Designbashes and Widgetbash facilitated by Sheila MacNeill. As there was considerable interest in the codebashes and we were frequently asked for guidance on running events of this kind, I wrote and circulated a Codebash Facilitation document. It’s years since I’ve revisited this document, but I looked it out for Scott Wilson a couple of weeks ago as potential input for the Cabinet Office paper he was in the process of drafting together with a group of independent consultants. The resulting paper Hackdays – Levelling the Playing Field can be read and downloaded here.

The CETIS codebashes have been rather eclipsed by hackdays and connectathons in recent years, however it appears that these very practical, focused events still have something to offer the community so I thought it might be worth summarising the Codebash Facilitation document here.

Codebash Aims and Objectives

The primary aim of CETIS codebashes was to test the functional interoperability of systems and applications that implemented open learning technology interoperability standards, specifications and application profiles. In reality that meant bringing together the developers of systems and applications to test whether it was possible to exchange content and data between their products.

A secondary objective of the codebashes was to identify problems, inconsistencies and ambiguities in published standards and specifications. These were then fed back to the appropriate maintenance body in order that they could be rectified in subsequent releases of the standard or specification. In this way codebashes offered developers a channel through which they could contribute to the specification development process.

A tertiary aim of these events was to identify and share common practice in the implementation of standards and specifications and to foster communities of practice where developers could discuss how and why they had taken specific implementation decisions. A subsidiary benefit of the codebashes was that they acted as useful networking events for technical developers from a wide range of backgrounds.

The CETIS codebashes were promoted as closed technical interoperability testing events, though every effort was made to accommodate all developers who wished to participate. The events were aimed specifically at technical developers and we tried to discourage companies from sending marketing or sales representatives, though I should add that we were not always successful! However, managers who played a strategic role in overseeing the development and implementation of systems and specifications were encouraged to participate.

Capturing the Evidence

Capturing evidence of interoperability during early codebashes proved to be extremely difficult, so Wilbert Kraan developed a dedicated website built on a Zope application server to facilitate the recording process. Participants were able to register the tools and applications that they were testing and to upload content or data generated by these applications. Other participants could then take this content and test it in their own applications, allowing “daisy chains” of interoperability to be recorded. In addition, developers had the option of making their contributions openly available to the general public or visible only to other codebash participants. All participants were encouraged to register their applications prior to the event and to identify specific bugs and issues that they hoped to address. Developers who could not attend in person were able to participate remotely via the codebash website.

IPR, Copyright and Dissemination

The IPR and copyright of all resources produced during the CETIS codebashes remained with the original authors, and developers were neither required nor expected to expose the source code of their tools and applications to other participants.

Although CETIS disseminated the outputs of all the codebashes, and identified all those that had taken part, the specific performance of individual participants was never revealed. Bug reports and technical issues were fed back to the relevant standards and specifications bodies, and a general overview of the levels of interoperability achieved was disseminated to the developer community. All participants were free to publish their own reports on the codebashes; however, they were strongly discouraged from publicising the performance of other vendors and potential competitors. At the time, we did not require participants to sign non-disclosure agreements, and relied entirely on developers’ sense of fair play not to reveal their competitors’ performance. Thankfully no problems arose in this regard, although one or two of the bigger commercial VLE developers were very protective of their code.

Conformance and Interoperability

It’s important to note that the aim of the CETIS codebashes was to facilitate increased interoperability across the developer community, rather than to evaluate implementations or test conformance. Conformance testing can be difficult and costly to facilitate and govern and does not necessarily guarantee interoperability, particularly if applications implement different profiles of a specification or standard. Events that enable developers to establish and demonstrate practical interoperability are arguably of considerably greater value to the community.

Although CETIS codebashes had a very technical focus they were facilitated as social events and this social interaction proved to be a crucial component in encouraging participants to work closely together to achieve interoperability.


These days the value of technical developer events in the domain of education is well established, and a wide range of specialist events have emerged as a result. Some are general in focus, such as the hugely successful DevCSI hackdays; others are more specific, such as the CETIS Widgetbash, the CETIS / DevCSI OER Hackday and the EDINA Wills World Hack running this week, which aims to build a Shakespeare Registry of metadata for digital resources relating to Shakespeare, covering anything from his work and life to modern performance, interpretation, or geographical and historical contextual information. At the time, however, aside from the ADL Plugfests, the CETIS codebashes were unique in offering technical developers an informal forum to test the interoperability of their tools and applications, and I think it’s fair to say that they had a positive impact not just on developers and vendors but also on the specification development process and the education technology community more widely.


Facilitating CETIS CodeBashes paper
Codebash 1-3 Reports, 2002 – 2005
Codebash 4, 2007
Codebash 4 blog post, 2007
Designbash, 2009
Designbash, 2010
Designbash, 2011
Widgetbash, 2011
OER Hackday, 2011
QTI Bash, 2012
Dev8eD Hackday, 2012

NPG adopts Creative Commons licence

Last month the National Portrait Gallery changed their image licensing policy to allow free downloads for non-commercial and academic purposes.

Writing in Museums Journal today Rebecca Atkinson explained that:

The change means that more than 53,000 low-resolution images are now available free of charge to non-commercial users through a standard Creative Commons licence.

Atkinson quotes Tom Morgan, head of rights and reproductions at the NPG, saying:

“Obviously this is quite complex – on one hand, if people are making money from a museum’s content then it’s right the museum should share that profit but we also want to support academic and education activity. So we took the opportunity to look at the way in which we could deliver this service and automate it.”

A new automated interface on all the NPG’s collection item pages now leads users to a “Use this image” page with links to request three different licences. Each licence is accompanied by clear and concise information on how the image can be used:

Professional licence: can be used in books, films, TV, merchandise, commercial and promotional activities, display and exhibition.

Academic licence: can be used in your research paper, classroom or scholarly publication.

Creative Commons licence: can be used in non-commercial, amateur projects (e.g. blogs, local societies and family history).

In order to apply for a Professional or Academic licence, users must register to use the NPG’s lightbox and then apply for the appropriate licence. For print works, the academic licence covers images for non-commercial publications with a print run of less than 4,000; images must also be used inside the publication.

To access the lower resolution Creative Commons licensed image, users are not required to register, but they must submit a valid e-mail address before they can download the image in the form of a zip file. The images themselves do not appear to carry any embedded licence information or watermarks, but they are accompanied by the following text file:

Please find, attached, a copy of the image, which I am happy to supply to you with permission to use solely according to your licence, detailed at

It is essential that you ensure images are captioned and credited as they are on the Gallery’s own website (search/find each item by NPG number at

This has been supplied to you free of charge. I would be grateful if you would please consider making a donation at in support of our work and the service we provide.

Now I should probably point out that I have a personal interest in this change of policy, as I recently contacted the NPG to request permission to use some of their images in an academic publication. I was delighted when they pointed me to the new automated licence interface and confirmed that the images in question could be used free of charge. What really struck me at the time though was what a valuable resource this could prove to be for open education, as the NPG has effectively released 53,000 free and clearly licensed images that could serve as open educational resources. The CC licence chosen by the gallery may be on the restrictive side, but it certainly demonstrates a growing and very welcome commitment to openness from the cultural heritage sector that could be of direct benefit to education.

Come to Dev8D and tell JISC what you think!

Are you going to Dev8D next week? Would you like to give JISC a piece of your mind?

On Wednesday 15th there will be an opportunity to tell JISC what you think the key opportunities and challenges are in supporting the creation, sharing and management of learning materials. Lorna Campbell (JISC CETIS) and Amber Thomas (JISC Programme Manager) will be circulating on the day to gather views from delegates.

We want to know from you:

  • What are the most common requests you get from the people you develop for and support?
  • What are their greatest needs?
  • What software and formats would you relegate to Room 101?
  • What would be the killer app for learning content?

Stop us for a chat anytime throughout the day, or pop along to see us at the Digital Infrastructure Directions for Educational Content drop-in from 2-4 on Wednesday 15th February and help to shape JISC’s priorities for the future.

Alternatively if you are so brimful of thoughts and ideas you can post them in the comments below or blog them with the tag #deved.

Look forward to seeing you at Dev8D!

CETIS OER Visualisation Project

As part of our work in the areas of open educational resources and data analysis CETIS are undertaking a new project to visualise the outputs of the JISC / HEA Open Educational Resource Programmes and we are very lucky to have recruited data wrangler extraordinaire Martin Hawksey to undertake this work. Martin’s job will be to firstly develop examples and workflows for visualising OER project data stored in the JISC CETIS PROD database, and secondly to produce visualisations around OER content and collections produced by the JISC / HEA programmes. Oh, and he’s only got 40 days to do it! You can read Martin’s thoughts on the task ahead over at his own blog MASHe:

40 days to let you see the impact of the OER Programme #ukoer

PROD Data Analysis

A core aspect of CETIS support for the OER Phase 1 and 2 Programmes has been the technical analysis of the tools and systems used by the projects. The primary data collection tool used for this purpose is the PROD database. An initial synthesis of this data has already been completed by R. John Robertson; however, there is potential for further analysis to uncover richer information about the technologies used to create and share OERs.
This part of the project will aim to deliver:

  • Examples of enhanced data visualisations from OER Phase 1 and 2.
  • Recommendations on use and applicability of visualisation libraries with PROD data to enhance the existing OER dataset.
  • Recommendations and example workflows including sample data base queries used to create the enhanced visualisations.
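To give a flavour of what a “sample database query” workflow might look like, here is a hypothetical sketch: PROD’s actual schema isn’t reproduced here, so the table and column names below are invented for illustration. Counting how many projects report each technology is the kind of raw data an enhanced visualisation would be built from:

```python
# Hypothetical query-to-visualisation workflow. The schema below is
# invented; it stands in for whatever the real PROD database holds.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prod (project TEXT, technology TEXT)")
conn.executemany("INSERT INTO prod VALUES (?, ?)", [
    ("Project A", "WordPress"),
    ("Project B", "WordPress"),
    ("Project B", "RSS"),
    ("Project C", "Jorum"),
])

# Count how many projects report each technology: the raw numbers a
# bar chart or bubble visualisation would be drawn from.
rows = conn.execute("""
    SELECT technology, COUNT(DISTINCT project) AS n
    FROM prod
    GROUP BY technology
    ORDER BY n DESC, technology
""").fetchall()
print(rows)  # → [('WordPress', 2), ('Jorum', 1), ('RSS', 1)]
```

The same shape of query, pointed at the real PROD data, would feed directly into whichever visualisation library the project settles on.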

And we also hope this work will uncover some general issues including:

  • Issues around potential workflows for mirroring data from our PROD database and linking it to other datasets in our Kasabi triple store.
  • Identification of other datasets that would enhance PROD queries, and some exploration of how to transform and upload them.
  • General recommendations on wider issues of data, and observed data maintenance issues within PROD.

Visualising OER Content Outputs

The first two phases of the OER Programme produced a significant volume of content; however, the programme requirements were deliberately agnostic about where that content should be stored, aside from a requirement to deposit or reference it in Jorum. This has enabled a range of authentic practices to surface regarding the management and hosting of open educational content, but it also means that there is no central directory of UKOER content, and no quick way to visualise the programme outputs. For example, the content in Jorum varies from a single record for a whole collection, to a record per item. Jorum is working on improved ways to surface content and JISC has funded the creation of a prototype UKOER showcase; in the meantime, though, it would be useful to be able to visualise the outputs of the Programmes in a compelling way. For example:

  • Collections mapped by geographical location of the host institution.
  • Collections mapped by subject focus.
  • Visualisations of the volume of collections.
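Whichever of these is attempted, the first step is the same: aggregating collection records into counts per institution or subject. A minimal sketch of that aggregation, using entirely invented records and coordinates rather than real UKOER data, might look like this:

```python
# Aggregate collection records per host institution, then scale the
# counts into marker sizes at each institution's coordinates, as a map
# visualisation would need. All records and coordinates are invented.
from collections import Counter

collections = ["Newcastle", "Southampton", "Newcastle", "Newcastle"]
coords = {"Newcastle": (54.98, -1.61), "Southampton": (50.91, -1.40)}

counts = Counter(collections)
markers = [
    {"lat": coords[inst][0], "lon": coords[inst][1], "size": 10 * n}
    for inst, n in counts.items()
]
print(markers)
```

A subject-focused map would follow exactly the same pattern, counting by subject classification instead of host institution.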

We realise that the data that can be surfaced in such a limited period will be incomplete, and that as a result these visualisations will not be comprehensive; however, we hope that the project will be able to produce compelling, attractive images that can be used to represent the work of the programme.

The deliverables of this part of the project will be:

  • Blog posts on the experience of capturing and using the data.
  • A set of static or dynamic images that can be viewed without specialist software, with the raw data also available.
  • Documentation/recipes on the visualisations produced.
  • Recommendations to JISC and JISC CETIS on visualising content outputs.