Back to the Future – revisiting the CETIS codebashes

As a result of a request from the Cabinet Office to contribute to a paper on the use of hackdays during the procurement process, CETIS have been revisiting the “Codebash” events that we ran between 2002 and 2007. The codebashes were a series of developer events that focused on testing the practical interoperability of implementations of a wide range of content specifications current at the time, including IMS Content Packaging, Question and Test Interoperability, Simple Sequencing (I’d forgotten that even existed!), Learning Design and Learning Resource Meta-data, IEEE LOM, Dublin Core Metadata and ADL SCORM. The term “codebash” was coined to distinguish the CETIS events from the ADL Plugfests, which tested the interoperability and conformance of SCORM implementations. Over a five year period CETIS ran four content codebashes that attracted participants from 45 companies and 8 countries. In addition to the content codebashes, CETIS also ran additional events focused on individual specifications such as IMS QTI, or on the outputs of specific JISC programmes, such as the Designbashes and Widgetbash facilitated by Sheila MacNeill.

As there was considerable interest in the codebashes and we were frequently asked for guidance on running events of this kind, I wrote and circulated a Codebash Facilitation document. It’s years since I’ve revisited this document, but I looked it out for Scott Wilson a couple of weeks ago as potential input for the Cabinet Office paper he was drafting together with a group of independent consultants. The resulting paper, Hackdays – Levelling the Playing Field, can be read and downloaded here.

The CETIS codebashes have been rather eclipsed by hackdays and connectathons in recent years; however, it appears that these very practical, focused events still have something to offer the community, so I thought it might be worth summarising the Codebash Facilitation document here.

Codebash Aims and Objectives

The primary aim of CETIS codebashes was to test the functional interoperability of systems and applications that implemented open learning technology interoperability standards, specifications and application profiles. In reality that meant bringing together the developers of systems and applications to test whether it was possible to exchange content and data between their products.

A secondary objective of the codebashes was to identify problems, inconsistencies and ambiguities in published standards and specifications. These were then fed back to the appropriate maintenance body in order that they could be rectified in subsequent releases of the standard or specification. In this way codebashes offered developers a channel through which they could contribute to the specification development process.

A tertiary aim of these events was to identify and share common practice in the implementation of standards and specifications and to foster communities of practice where developers could discuss how and why they had taken specific implementation decisions. A subsidiary benefit of the codebashes was that they acted as useful networking events for technical developers from a wide range of backgrounds.

The CETIS codebashes were promoted as closed technical interoperability testing events, though every effort was made to accommodate all developers who wished to participate. The events were aimed specifically at technical developers and we tried to discourage companies from sending marketing or sales representatives, though I should add that we were not always successful! However, managers who played a strategic role in overseeing the development and implementation of systems and specifications were encouraged to participate.

Capturing the Evidence

Capturing evidence of interoperability during the early codebashes proved to be extremely difficult, so Wilbert Kraan developed a dedicated website built on a Zope application server to facilitate the recording process. Participants were able to register the tools and applications that they were testing and to upload content or data generated by these applications. Other participants could then take this content and test it in their own applications, allowing “daisy chains” of interoperability to be recorded. In addition, developers had the option of making their contributions openly available to the general public or visible only to other codebash participants. All participants were encouraged to register their applications prior to the event and to identify specific bugs and issues that they hoped to address. Developers who could not attend in person were able to participate remotely via the codebash website.
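For readers curious about what that recording process boiled down to, here is a minimal sketch, in Python, of the kind of data model involved: applications, the test artefacts they export, and the imports that link them into a “daisy chain”. The class and field names are my own illustration and are not taken from the original Zope site.

from dataclasses import dataclass

# Illustrative model only: names are assumptions, not the original site's schema.

@dataclass
class Application:
    name: str
    vendor: str
    specs_implemented: list[str]

@dataclass
class TestArtefact:
    """A content package or data file exported by one application."""
    produced_by: Application
    spec: str
    public: bool  # visible to everyone, or only to codebash participants

@dataclass
class ImportResult:
    """One link in a 'daisy chain': an artefact imported into another application."""
    artefact: TestArtefact
    imported_by: Application
    success: bool
    notes: str = ""

def daisy_chain(results: list[ImportResult], artefact: TestArtefact) -> list[Application]:
    """A daisy chain is just the sequence of successful imports an artefact accumulates."""
    return [r.imported_by for r in results if r.artefact is artefact and r.success]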

IPR, Copyright and Dissemination

The IPR and copyright of all resources produced during the CETIS codebashes remained with the original authors, and developers were neither required nor expected to expose the source code of their tools and applications to other participants.

Although CETIS disseminated the outputs of all the codebashes, and identified all those that had taken part, the specific performance of individual participants was never revealed. Bug reports and technical issues were fed back to the relevant standards and specifications bodies, and a general overview of the levels of interoperability achieved was disseminated to the developer community. All participants were free to publish their own reports on the codebashes; however, they were strongly discouraged from publicising the performance of other vendors and potential competitors. At the time, we did not require participants to sign non-disclosure agreements, and relied entirely on developers’ sense of fair play not to reveal their competitors’ performance. Thankfully no problems arose in this regard, although one or two of the bigger commercial VLE developers were very protective of their code.

Conformance and Interoperability

It’s important to note that the aim of the CETIS codebashes was to facilitate increased interoperability across the developer community, rather than to evaluate implementations or test conformance. Conformance testing can be difficult and costly to facilitate and govern and does not necessarily guarantee interoperability, particularly if applications implement different profiles of a specification or standard. Events that enable developers to establish and demonstrate practical interoperability are arguably of considerably greater value to the community.

Although CETIS codebashes had a very technical focus, they were facilitated as social events, and this social interaction proved to be a crucial component in encouraging participants to work closely together to achieve interoperability.

Legacy

These days the value of technical developer events in the domain of education is well established, and a wide range of specialist events have emerged as a result. Some are general in focus, such as the hugely successful DevCSI hackdays; others are more specific, such as the CETIS Widgetbash, the CETIS / DevCSI OER Hackday and the EDINA Wills World Hack running this week, which aims to build a Shakespeare Registry of metadata about digital resources relating to Shakespeare, covering anything from his work and life to modern performance, interpretation, or geographical and historical contextual information. At the time, however, aside from the ADL Plugfests, the CETIS codebashes were unique in offering technical developers an informal forum in which to test the interoperability of their tools and applications, and I think it’s fair to say that they had a positive impact not just on developers and vendors but also on the specification development process and the education technology community more widely.

Links

Facilitating CETIS CodeBashes paper
Codebash 1-3 Reports, 2002 – 2005
Codebash 4, 2007
Codebash 4 blog post, 2007
Designbash, 2009
Designbash, 2010
Designbash, 2011
Widgetbash, 2011
OER Hackday, 2011
QTI Bash, 2012
Dev8eD Hackday, 2012

The Learning Registry at #cetis12

Usually after our annual CETIS conference we each write a blog post that attempts to summarise each session and distil three hours of wide-ranging discussion into a succinct synthesis and analysis. This year, however, Phil and I have been extremely fortunate, as Sarah Currier of the JLeRN Experiment has done the job for us! Over at the JLeRN Experiment blog Sarah has written a detailed and thought-provoking summary of the Learning Registry: Capturing Conversations About Learning Resources session. Rather than attempting to replicate Sarah’s excellent write-up we’re just going to point you over there, so here it is: The Learning Registry and JLeRN at the CETIS Conference: Report and Reflections. Job done!

Well, not quite. Phil and I do have one or two thoughts and reflections on the session. There still seems to be growing interest and enthusiasm in the UK ed tech community (if such a thing exists) for both the Learning Registry development in the US and the JLeRN Experiment at Mimas. However, in some instances the interest and expectations are a little way ahead of the actual projects themselves. So it perhaps bears repeating at this stage that the Learning Registry is still very much under development. As a result the technical documentation may be a little raw, and although tools are starting to be developed, it may not be immediately obvious where to find them or how they fit together. Having said that, there is a small but growing pool of keen developers working and experimenting with the Learning Registry, so expertise is growing.

That cautionary note aside, one of the really interesting things about the Learning Registry is that people are already coming up with a wide range of potential use cases. As Sarah’s conference summary shows, Terry McAndrew of TechDis suggested that Learning Registry nodes could be used for capturing accessibility data about resources; Scott Wilson of CETIS and the University of Bolton thought the LR would be useful for sharing user ratings between distributed widget stores; a group from the Open University of Catalunya were interested in the possibility of using the LR as a decentralised way of sharing LTI information; and Suzanne Hardy of the University of Newcastle was keen to see what might happen if Dynamic Learning Maps data was fed into an LR node.

Paradata is a topic that also appears to get people rather overexcited. Some people, me included, are enthusiastic about the potential ability to capture all kinds of activity data about how teachers and learners use and interact with resources. Others seem inclined to write paradata off as unnecessary coinage. “Why bother to develop yet another metadata standard?” is a question I’ve already heard a couple of times. Bearing this in mind, it was very useful to have Learning Registry developer Walt Grata over from the US to remind us that although there is indeed a Learning Registry paradata specification, it is not mandated, and that users can express their data any way they want, as long as it’s a string and as long as it’s JSON.
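To make that a little more concrete, here is a minimal sketch of what publishing a paradata record to a Learning Registry node might look like. The envelope and paradata field names are only loosely based on the draft documentation and should be treated as illustrative rather than definitive, and the node URL is a made-up placeholder; check a real node’s documentation before relying on any of this.

import json
import urllib.request

# Illustrative paradata "activity": field names are assumptions, not the official spec.
paradata = {
    "activity": {
        "actor": {"objectType": "teacher group", "description": ["primary maths"]},
        "verb": {"action": "rated", "measure": {"scaleMin": 1, "scaleMax": 5, "value": 4}},
        "object": "http://example.org/resources/fractions-lesson",
    }
}

# A simplified resource-data envelope wrapping the paradata as its inline payload.
envelope = {
    "doc_type": "resource_data",
    "resource_data_type": "paradata",
    "active": True,
    "identity": {"submitter": "Example College", "submitter_type": "agent"},
    "resource_locator": "http://example.org/resources/fractions-lesson",
    "payload_placement": "inline",
    "resource_data": paradata,
}

# Publish to a (hypothetical) node; replace the URL with a real node before running.
request = urllib.request.Request(
    "http://node.example.org/publish",
    data=json.dumps({"documents": [envelope]}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))

The point Walt was making is visible here: the registry cares that the payload is well-formed JSON, not which vocabulary you used to describe the activity.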

We’re aware that the JLeRN Experiment were hoping to get a strong steer from the conference session as to where they should go next and I had hoped to round off this post with a few ideas that Phil and I had prioritised out of the many discussed. However Phil and I have completely failed to come to any kind of agreement on this so that will have to be another blog post for another day!

Finally, we’d like to thank all those who contributed to the Learning Registry session at CETIS12, and in particular our speakers: Stephen Cook, Sarah Currier, Walt Grata, Bharti Gupta, Pat Lockley, Terry McAndrew, Nick Syrotiuk and Scott Wilson. Many thanks also to Dan Rehak for providing his slides and for allowing Phil to impersonate him!