UK Government Crosses the Rubicon with Open Document Formats

Last week (July 22nd 2014), the UK Government announced the open document formats to be used by government: PDF/A, HTML, and ODF. This is the second tranche of open standards to be adopted following open consultation, detailed work by technical panels, and recommendation by the Open Standards Board. The first tranche, which I wrote about in October 2013, was rather prosaic in dealing with HTTP, URL, Unicode, and UTF-8, and these do not really affect people outside government, whether citizens or suppliers. Document formats – both for viewing documents and two-way exchanges – are an entirely different matter, and particularly with ODF, I have a real sense of government crossing the Rubicon of open standards.

This is a move that is likely to affect what we all do with documents in five years' time, and to affect procurement of software and services inside and outside government. It will take some time for this policy, which is described in the policy paper “Sharing or collaborating with government documents”, to work its way through, and the transition will require care and support, but the signs are that it has been well thought through and that the Government Digital Service (GDS) has both the plans and the expertise to see the transition through. They are not, for example, naive about the practicalities of achieving interoperability across different pieces of software, and GDS is publicly committed to the idea of citizens and businesses having choice in the software they use (the ODF move, for example, is not a “LibreOffice by the back door” tactic).

Microsoft, naturally enough, has been critical, as reported in Computer Weekly, but an article in CIO online magazine (strapline “Informing the UK’s business technology leaders”) is quite neutral in tone, and I hope that this is more reflective of the wider IT community. Actually, I perceive that Microsoft has been improving its ODF support for several years, and I doubt that this announcement will have much business impact for them; the writing has been on the wall for a while, and there is a sales opportunity in product updates that help government departments meet their obligations while continuing to use MS Office products. And yet, things will not be the same… and we might even get practically useful levels of interoperability between LibreOffice and MS Office.
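One practical consequence of ODF being an openly documented format is that its structure can be inspected, and even generated, with entirely standard tooling: an ODF document is a ZIP package whose first entry is an uncompressed `mimetype` file. As a minimal illustrative sketch (not any department's actual workflow, and deliberately skeletal – a fully conformant ODT also needs a manifest and styles), the following Python builds a bare-bones ODT package and reads back its declared MIME type:

```python
import zipfile

ODF_TEXT_MIME = "application/vnd.oasis.opendocument.text"

# A minimal content.xml carrying a single paragraph.
CONTENT_XML = """<?xml version="1.0" encoding="UTF-8"?>
<office:document-content
    xmlns:office="urn:oasis:names:tc:opendocument:xmlns:office:1.0"
    xmlns:text="urn:oasis:names:tc:opendocument:xmlns:text:1.0"
    office:version="1.2">
  <office:body><office:text>
    <text:p>Hello, open standards.</text:p>
  </office:text></office:body>
</office:document-content>"""


def write_minimal_odt(path: str) -> None:
    """Write a skeletal ODT: a ZIP whose first entry is an
    uncompressed 'mimetype' file, per ODF packaging rules."""
    with zipfile.ZipFile(path, "w") as z:
        z.writestr("mimetype", ODF_TEXT_MIME,
                   compress_type=zipfile.ZIP_STORED)
        z.writestr("content.xml", CONTENT_XML,
                   compress_type=zipfile.ZIP_DEFLATED)


def detect_odf_mime(path: str) -> str:
    """Return the MIME type declared inside an ODF package."""
    with zipfile.ZipFile(path) as z:
        return z.read("mimetype").decode("ascii")


write_minimal_odt("minimal.odt")
print(detect_odf_mime("minimal.odt"))
# application/vnd.oasis.opendocument.text
```

The point of the sketch is the openness itself: because the packaging is specified rather than proprietary, any supplier can implement reading and writing of the format without reverse engineering.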


Two members of Cetis have contributed to the process that informed this policy: Wilbert Kraan is a member of the Technical Standards Panel, and I am a member of the Open Standards Board.

Open Learning Analytics – progress towards the dream

In 2011, a number of prominent figures in learning analytics and educational data mining published a concept paper on the subject of Open Learning Analytics (PDF), which they described as a “proposal to design, implement and evaluate an open platform to integrate heterogeneous learning analytics techniques.” This has the feel of a funding proposal: a grand vision of an idealised future state. I was, therefore, a little wary of the possibility that the recent Open Learning Analytics Summit (“OLA Summit”) would find it hard to get any traction, given the absence of a large pot of money. The summit was, however, rather interesting.

The OLA Summit, which is described in a SoLAR press release, immediately followed the Learning Analytics and Knowledge Conference and was attended by three members of the LACE project. A particular area of shared interest between LACE and the OLA Summit is open standards (interoperability) and data sharing.

One of the factors that contributed to the success of the event was the combined force of SoLAR, the Society for Learning Analytics Research, with the Apereo Foundation, which is an umbrella organisation for a number of open source software projects. Apereo has recently started a Learning Analytics Initiative, which has quite open-ended aims: “accelerate the operationalization of Learning Analytics software and frameworks, support the validation of analytics pilots across institutions, and to work together so as to avoid duplication”. This kind of soft-edged approach is appropriate for the current state of learning analytics; while institutions are still finding their way, a more hard-edged aim, such as building the learning analytics platform to rule the world, would be forced to anticipate rather than select requirements.

The combination of people from the SoLAR and Apereo communities, and an eclectic group of “others”, provided a balance of perspectives; it is rare to find deep knowledge of both education and enterprise-grade IT in the same person. I think the extent to which the OLA Summit helped to integrate people from these communities is one of its key, if intangible, outcomes. This provides a (metaphorical) platform for future action. In the meantime, the various discussion groups intend to produce a number of relatively small-scale outputs that further add to this platform, in a very bottom-up approach.

There is certainly a long way to go, and a widening of participation will be necessary, but a start has been made on developing a collaborative network from which various architectures, and conceptual and concrete assets will, I hope, emerge.

This post was first published on the Learning Analytics Community Exchange website.

Learning Analytics Interoperability – The Big Picture in Brief

Learning analytics is now moving from being a research interest to being something a wider community seeks to apply in practice. As this happens, the challenge of efficiently and reliably moving data between systems becomes of vital practical importance. System interoperability can reduce this challenge in principle, but deciding where to drill down into the details will be easier with a view of the “big picture”.

Part of my contribution to the Learning Analytics Community Exchange (LACE) project is a short briefing on the topic of learning analytics and interoperability (PDF, 890k). This introductory briefing, which is aimed at non-technical readers who are concerned with developing plans for sustainable practical learning analytics, describes some of the motivations for better interoperability and outlines the range of situations in which standards or other technical specifications can help to realise these benefits.

In the briefing, we expand on benefits such as:

  • efficiency and timeliness,
  • independence from disruption as software components change,
  • adaptability of IT architectures to evolving needs,
  • innovation and market growth,
  • durability of data and archival,
  • data aggregation, and
  • data sharing.

Whereas the focus of attention in learning analytics is often on data collected during learner activity, the briefing paper looks at the wider system landscape within which interoperability might contribute to practical learning analytics initiatives, including interoperability of models, methods, and analytical results.
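The briefing's concern with moving data between systems without loss of meaning can be made concrete with a small sketch. One candidate shape for learner-activity data is the actor/verb/object statement popularised by specifications such as the Experience API (xAPI); the identifiers and names below are purely illustrative, not taken from any real deployment:

```python
import json

# An activity record in the actor/verb/object shape used by
# specifications such as the Experience API (xAPI). All URIs and
# names here are hypothetical examples.
statement = {
    "actor": {"mbox": "mailto:learner@example.edu", "name": "A Learner"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-GB": "completed"}},
    "object": {"id": "http://example.edu/courses/analytics-101",
               "definition": {"name": {"en-GB": "Analytics 101"}}},
    "timestamp": "2014-05-01T10:15:00Z",
}

# Serialising to JSON gives a system-neutral payload that an
# activity store and a reporting tool can both consume without
# either needing to know about the other's internals.
payload = json.dumps(statement)
roundtrip = json.loads(payload)
assert roundtrip == statement
print(roundtrip["actor"]["name"])
```

The design point is that the meaning is carried by shared, resolvable identifiers (the verb and object URIs), not by any one vendor's database schema, which is what makes aggregation and tool choice feasible.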

The briefing paper is available to download (PDF, 890k).

LACE is a project funded by the European Commission to support the sharing of knowledge, and the creation of new knowledge through discourse. This post was first published on the LACE website.

Interoperability Incubation and Pre-standardisation Activity – A View on Desirable Qualities

There is an important process that should feed into the development of good standards (ones that are used in practice), and this process is currently in need of repair and reformation. The key idea is that good standards to support educational technology, to take our area of particular interest, are not created on a blank sheet of paper by an elite but emerge from practice, collaborative design, experimentation, selective appropriation of web standards, and so on. Good standards documents are underpinned by a thoughtful analysis of these aspects, such that what emerges is useful, usable, and used. The phrase “pre-standardisation and interoperability incubation forum” is an attempt to capture the character of such a process. Indeed, some industry partners may prefer to see a collaboration to incubate interoperability as the real thing, with the formal standardisation politics as an optional, and sometimes problematic, add-on. It is our belief that all except the suppliers with a dominant market share stand to benefit from better interoperability – i.e. common means to share common data – and that there is a great deal of latent value that could be unlocked by better pre-standardisation activity and interoperability incubation.

Some recent changes to the pre-standardisation landscape indicate that this process, which is not assumed to exist within a single entity, is in need of repair and reformation. Some of these changes, and the problems they present, are described in recent Cetis staff posts by Simon Grant, “Educational Technology Standardization in Europe”, and by Lorna Campbell, “CEN Learning Technologies Workshop Online Consultation Meeting”. The gist of these descriptions is that what we thought was a usefully open-access pre-standardisation forum is no more. “Repair and reformation” does not mean we should re-create what has been lost; rather, the loss has tipped the balance in favour of taking action. What emerges may, quite rationally, be rather different in form from what went before.

This post makes public a discussion of the background and some statements about what I consider the desirable qualities of a pre-standardisation and interoperability incubation forum, and draws extensively on the ideas and insights of colleagues in Cetis and in the wider interoperability community.

Download the document as: PDF, 190kB, or Open Document Format (ODT), 55kB.

Cabinet Office Consults on Open Standards for Government – URI Patterns and Document Formats

Feedback is invited on three proposals by Feb 24th (and 26th). The proposals relate to the following challenges (which apply to UK government use, not the whole of the public sector or the devolved governments):

  • URI patterns for identifiers. These will be for resolvable URIs to identify things and codes within data published by government.
  • Viewing government documents. This covers access by citizens, businesses and government officials from diverse devices.
  • Sharing or collaborating with government documents. This extends the requirements of the previous proposal to cases where the documents must be editable.
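The URI patterns challenge in the first bullet can be illustrated with a small sketch. Earlier UK public-sector guidance on URI sets used the general shape `http://{domain}/id/{concept}/{reference}` for resolvable identifiers; the domain and concept names in this example are hypothetical, chosen only to show the pattern:

```python
from urllib.parse import quote


def identifier_uri(domain: str, concept: str, reference: str) -> str:
    """Build a resolvable identifier URI of the illustrative form
    http://{domain}/id/{concept}/{reference}, percent-encoding the
    reference so codes with spaces or slashes stay well-formed."""
    return f"http://{domain}/id/{concept}/{quote(reference, safe='')}"


# Hypothetical example: an identifier for a school, by reference code.
uri = identifier_uri("education.data.gov.uk", "school", "123456")
print(uri)
# http://education.data.gov.uk/id/school/123456
```

The value of a fixed pattern like this is that different publishers mint identifiers the same way, so the same thing referenced in two datasets can be recognised as the same thing, and dereferencing the URI can yield data about it.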

The proposals will be further developed based on feedback, and with due diligence on the proposed open standards, and will be presented for approval by the Open Standards Board later in the year (the next meeting is March 25th 2014).

Comments may be added to the proposals.

Learning Analytics Interoperability – some thoughts on a “way ahead” to get results sometime soon

The “results” of the title are the situation where increased interoperability contributes to practical learning analytics (exploratory, experimental, or operational). The way ahead to get results sometime soon requires care; focussing on the need and the hunger without restraining ambition will surely mean a failure to be timely, successful, or both. On the other hand, although it would be best (in an ideal world) to spend a good deal of time characterising the scope of data and charting the range of methods and targets, I fear that this would totally block progress. Hence a middle way seems necessary, in which a little time is spent discussing the most promising and best-understood targets – i.e. looking for the low-hanging fruit. This is a middle way between the tendencies of the salesman and the academic.

I have written a short-ish (working) document to help me to explore my own thoughts on the resolution of tension between these several factors, which I see as:

  1. The difficulty in getting standardised data out of information systems in a consistent way is a barrier to conducting learning analytics. There is a need now.
  2. There is a hunger to taste the perceived benefits of using learning analytics.
  3. The scope of data relevant to learning analytics is enormous. Reaching even the minimal common ground necessary to declare “a standard” or interoperability across all of it is intractable with the available human resources, because experience shows that both analysing the breadth of actual practice and defining anything by consensus are slow.
  4. The range of methods and targets of learning analytics is diverse and emerging as experience grows. This places limits on what it is rational to attempt to standardise. In other words, we don’t really know what LA is yet and this brings the risk that any spec work may fail to define the right things.

I hope that making this work-in-progress available will stimulate thoughts in the wider community. Please feel free to comment.

The document (current version 0.2) is available to download.


Open Standards Board – First Recommendations Announced

The Open Standards Board, which exists to make recommendations to the UK (Westminster) Government Cabinet Office, met last week and completed the journey to the first milestone in the new process by which open standards are to be selected. This process is based around the idea that “challenges” are raised on the Standards Hub, proposals are worked up to meet challenges under the supervision of two panels (“data standards” and “technical standards”), and the proposals are subsequently put before the Open Standards Board.

It has taken a few months to get here since the inception of the Open Standards Board in May, but quite a lot of effort was involved in working up the proposals. In some ways, the requirements to evaluate candidate open standards and to make the case for a proposal seem quite bureaucratic – there are two quite formidable templates – but I think this is ultimately necessary to support good decisions, and also to provide evidence that due diligence was undertaken.

The first recommendations – which were immediately accepted by Liam Maxwell, HM Government’s Chief Technology Officer, and announced via the Government Digital Service blog – appear rather prosaic: the use of HTTP, URLs, Unicode, and UTF-8.

Mundane they may be, but the introduction of a standardised approach to these matters across an entire government is not to be sniffed at. These standards now fall under a “comply or explain” policy.

A third proposal concerning metadata and controlled vocabularies was not accepted in its current form and is likely to be re-cast with tighter scope and possibly split into two.

There are now a number of other challenges that have been accepted on the Standards Hub (and some more at the earliest stage of the process, possibly in need of some redefinition before an open-standards response is developed), some prosaic – such as IPv6 – and others a bit closer to the “front line” – such as multi-agency incident transfer.

There is still a long way to go before the benefits start to accrue, but there is commendable progress.

Learning Analytics Interoperability

The ease with which data can be transferred without loss of meaning from a store to an analytical tool – whether that tool is in the hands of a data scientist, a learning science researcher, a teacher, or a learner – and the ability of these users to select and apply a range of tools to data in formal and informal learning platforms are important factors in making learning analytics and educational data mining efficient and effective. I have recently written a report that summarises the findings of a survey into: a) the current state of awareness of, and research or development into, this problem of seamless data exchange between multiple software systems, and b) standards and pre-standardisation work that are candidates for use or experimentation. The coverage is, intentionally, fairly superficial, but there are abundant references. The paper is available in three formats: Open Office, PDF, and MS Word. If printing, note that the layout is “letter” rather than A4. Comments are very welcome, since I intend to release an improved version in due course.

Open Standards Board and the Cabinet Office Standards Hub

Early last week the government announced that the Open Standards Board had finally been convened, via a press release from Francis Maude, the Minister for the Cabinet Office, and a blog post from Liam Maxwell, the government’s Chief Technology Officer. This is a welcome development, but what chuffed me most was that my application to be a Board member had been successful.

I say “finally” because it has taken quite a while for the process to move from a shadow board and a consultation on policy (Cetis provided feedback), through an extension of the consultation to allay fears of bias in a stakeholder event, analysis of the comments, publication of the final policy, and deciding on the role of the Open Standards Board. The time taken has been a little frustrating but I take comfort from my conclusion that these delays are signs of a serious approach, that this is not an empty gesture.

Before going on, I should publicly recognise the contributions of others that enabled me to make a successful application. Firstly: Jisc has provided the funding for Cetis, and a series of supporters(*) of the idea of open standards within Jisc has kept the flame alive. Many years ago they had the vision and stuck with it in spite of wider scepticism, progress that has often been slow, a number of flawed standards (mistakes do happen), and the difficulty of assessing return on investment for activities that are essentially systemic in their effect. Secondly: my colleagues in Cetis, from whom I have harvested wisdom and ideas and with whom I have shared revealing (and sometimes exhausting) debate. Looking back at what we did in the early 2000s, I think we were quite naive, but so was everyone else. I believe we now have much more sophisticated ideas about the process of standards development and adoption, and about the kinds of interventions that work. I hope that is why my application was successful.

The Open Standards Board is concerned with open standards for government IT and is closely linked with actions dealing with Open Source Software and Open Data. All three of these are close to our hearts in Cetis and we hope both to contribute to their development (in government and the wider public sector) as well as helping there to be a bit more spill-over into the education system.

The public face of Cabinet Office open standards activity is the Standards Hub, which gives anyone the chance to nominate candidates to be addressed using open standards and to comment on the nominations of others. I believe this is the starting point for the business of the Board. The suggestions are a bit of a mixed bag and the process is in need of more suggestions, so – to quote the Hub website – if you know of an open standard that could be “applied consistently across the UK government to make our services better for users and to keep our costs down”, you know what to do!

The Open Standards Board has an interesting mix of members and I’m full of enthusiasm for what promises to be an exciting first meeting in early May.


* – there are too many to mention but the people Cetis had most contact with include Tish Roberts, Sarah Porter and Rachel Bruce.

Open Source and Open Standards in the Public Sector

Yesterday I attended day 1 of a conference entitled “Public Sector: Open Source” and, while Open Source Software (OSS) was the primary subject, Open Standards were very much on the agenda. I went in particular because of an interest in what the UK Government Cabinet Office is doing in this area.

I have previously been quite positive about both the information principles and the open standards consultation (blog posts here and here respectively). We provided a response to the consultation and were pleased to see the Nov 1st announcement that government bodies must comply with a set of open standards principles.

The speaker from the Cabinet Office was Tariq Rashid (IT Reform group) and we were treated to a quite candid assessment of the challenges faced by government IT, with particular reference to OSS. His assessment of the issues and how to deal with them was cogent and believable, if also a little scary.

Here are a few of the things that caught my attention.

Outsource the Brawn not the Brain

Over a period of many years, the supply of well-informed and deeply technical capability in government has been depleted, such that too many decisions are made without there being an appropriate “intelligent customer”. To quote Tariq: “we shouldn’t be spending money unless we know what the alternatives are.” The particular point being made was about OSS alternatives – and his group has produced an Open Source Procurement Toolkit to challenge myths and to guide people to alternatives – but the same line of argument extends to there being a poor understanding of the sources of technical lock-in (as opposed to commercial lock-in) and of how chains of dependency can introduce inertia through decisions that appear innocuous on a naive analysis.

By my analysis, the Cabinet Office IT reform team are the exception that proves the general point. It is also a point that universities and colleges should be wary of as their senior management tries to cut out “expensive people we don’t really need”.

The Current Procurement Approach is Pathological

There is something slightly ironic in it taking a Tory government to seriously attack an approach which sees the greatest fraction of the incredible £21 billion p.a. central government spend on IT go to a handful of big IT houses (yes, countable on two hands).

In short: the procurement approach, which typically involves a large amount of bundling-up, reduces competition and inhibits SMEs and providers of innovative solutions as well as blocking more agile approaches.

At the intersection of the procurement approach and brain-outsourcing is the critical issue that the IT that is acquired usually lacks a long-term view of architecture; architecture becomes reduced to the scope of the tendered work and built around the interests of the supplier.

Emphasis on Procurement

Most of the presentations placed most emphasis on the benefits of OSS in terms of procurement and cost, and this was a central theme of Tariq’s talk also. Having spent long enough consorting with OSS-heads, I found this rather narrow. What, for example, about the opportunities for public sector bodies to engage in acts of co-creation, either leading or significantly contributing to OSS projects? There are many examples of commercial entities making significant investments in developer salaries while taking a hands-off approach to governance of the open source product (e.g. IBM and the Eclipse platform).

For now, it seems, this kind of engagement is one step ahead of what is feasible in central government; there is a need for thinking to move on, to mature, from where it is now. I also suspect that there is plenty of low-hanging fruit – easy cases to make for cost savings in the near term – whereas co-creation is a longer term strategy. Tariq added that it might be only 2-3 years before government was ready to begin making direct contributions to LibreOffice, which is already being trialled in some departments.

Another of the speakers, representing Sambruk (one of the partners in OSEPA, the project that organised the conference), seems to be heading towards more of a consortium model, which could lead to something akin to the Sakai or Kuali model for Swedish municipal administration.


For all that the Cabinet Office has a fairly small budget, its gatekeeper role – it must approve all spending proposals over £5 million, and it has some good examples of having prompted significant savings (e.g. £12 million down to £2 million on a UK Borders procurement) – makes it a force to be reckoned with. Coupled with an attitude (as I perceive it) of wanting to understand the options and the best current thinking on topics such as open source and open standards, this makes for a potent force in changing government IT.

The challenge for universities and colleges is to effect the same kind of transformation without an equivalent to the Cabinet Office and in the face of sector fragmentation (and, at best, some fairly loose alliances of sovereign city states).