Last week (July 22nd 2014), the UK Government announced the open document formats to be used by government: PDF/A, HTML, and ODF. This is the second tranche of open standards that have been adopted following open consultation, detailed work by technical panels, and recommendation by the Open Standards Board. The first tranche, which I wrote about in October 2013, was rather prosaic in dealing with HTTP, URL, Unicode, and UTF-8, and these do not really affect people outside government, whether citizens or suppliers. Document formats – both for viewing documents and for two-way exchange – are an entirely different matter, and particularly with ODF, I have a real sense of government crossing the Rubicon of open standards.
An article appeared in the Times Higher Education online magazine recently (April 3, 2014) under the heading “More data can lead to poor student choices, Hefce [Higher Education Funding Council for England] learns”. The article was not about learning analytics, but about the data provided to prospective students with the aim of supporting their choice of Higher Education provider (HEp). The data is accessible via the Unistats web site, and includes various statistics on the cost of living, fees, student satisfaction, teaching hours, and employment prospects. In principle, this sounds like a good idea; I believe students are genuinely interested in these aspects, and the government and funding council see the provision of this information as a means of driving performance up and costs down. So, although this is not about learning analytics, there are several features in common: the stakeholders involved, the idea of choice informed by statistics, and the idea of shifting a cost-benefit balance for identified measures of interest.
Last week was a significant one for UK academics and those interested in accessing scholarship; the funding councils announced a new policy mandating open access for the post-2014 research evaluation exercises. In the same week, Cetis added its name to the list of members of the Open Policy Network (strap-line: “ensuring open access to publicly funded resources”). Thinking back only five years, this change in policy is not something I could have imagined happening by now, and I think it is a credit to the people who have pushed it through in the face of resistance from vested interests, and to the people in Jisc who have played a part in making it possible.
Learning Analytics is now moving from being a research interest to a wider community who seek to apply it in practice. As this happens, the challenge of efficiently and reliably moving data between systems becomes of vital practical importance. System interoperability can reduce this challenge in principle, but deciding where to drill down into the details will be easier with a view of the “big picture”.
Part of my contribution to the Learning Analytics Community Exchange (LACE) project is a short briefing on the topic of learning analytics and interoperability (PDF, 890k). This introductory briefing, which is aimed at non-technical readers who are concerned with developing plans for sustainable practical learning analytics, describes some of the motivations for better interoperability and outlines the range of situations in which standards or other technical specifications can help to realise these benefits.
There is an important process that should feed into the development of good standards (that are used in practice), and this process is currently in need of repair and reformation. The key idea is that good standards to support educational technology, to take our area of particular interest, are not created on a blank sheet of paper by an elite but emerge from practice, collaborative design, experimentation, selective appropriation of web standards, and so on. Good standards documents are underpinned by a thoughtful analysis of these aspects, such that what emerges is useful, usable, and used. The phrase “pre-standardisation and interoperability incubation forum” is an attempt to capture the character of such a process. Indeed, some industry partners may prefer to see a collaboration to incubate interoperability as the real thing, with the formal standardisation politics as an optional, and sometimes problematic, add-on. It is our belief that all except the suppliers with a dominant market share stand to benefit from better interoperability – i.e. common means to share common data – and that there is a great deal of latent value that could be unlocked by better pre-standardisation activity and interoperability incubation.
Feedback is invited on three proposals by Feb 24th (and 26th). The proposals relate to the following challenges (which apply to UK government use, not the whole of the public sector or the devolved governments):
- URI patterns for identifiers. These will be for resolvable URIs to identify things and codes within data published by government.
- Viewing government documents. This covers access by citizens, businesses and government officials from diverse devices.
- Sharing or collaborating with government documents. This extends the requirements of the previous proposal to cases where the documents must be editable.
George Siemens hosted an online seminar to explore issues around the systemic deployment of learning analytics in mid October 2013. This post is intended to be equivalent in message to my presentation at the seminar; I think the written word might be clearer, not least because my oratorical skills are not what they could be. The result is still a bit rambling, but I lack the time to develop a nicely-styled essay. A Blackboard Collaborate recording of the online presentation is available, as are the slides I used (pptx, 1.3M, also as pdf, 1M).
Knitr support in RStudio is nice, but the default styling of the HTML output, in particular the treatment of tables, is not to my taste. It is possible to override the default handler for markdown, as described on the RStudio site, but this doesn’t immediately work when using knitr in RStudio, as several posts to Stack Overflow and elsewhere testify (with some interesting workarounds proposed involving post-processing the output). A blog post by Neil Saunders explains how to make it work, but his approach requires sourcing a file manually.
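For readers who want to see the shape of the override, the RStudio-documented hook is the `rstudio.markdownToHTML` option; placing something like the following in `.Rprofile` (rather than a file you source by hand) is one way to have it picked up automatically. This is a sketch, not a tested recipe: the stylesheet name `custom.css` is a placeholder for your own file, and the exact options you pass to `markdownToHTML` will depend on the styling you want.

```r
# In ~/.Rprofile, so RStudio applies it without manual sourcing.
# Overrides RStudio's default markdown-to-HTML conversion for knitr output.
options(rstudio.markdownToHTML =
  function(inputFile, outputFile) {
    library(markdown)
    # stylesheet = path to your own CSS (placeholder name here);
    # this is where the default table styling can be replaced.
    markdownToHTML(inputFile, outputFile, stylesheet = "custom.css")
  }
)
```

Because `.Rprofile` is evaluated at session start, the custom handler is in place before you knit anything, which avoids the manual-sourcing step.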
The “results” of the title are the situations where increased interoperability contributes to practical learning analytics (exploratory, experimental, or operational). The way ahead to get results sometime soon requires care; focussing on the need and the hunger without restraining ambition will surely mean a failure to be timely, successful, or both. On the other hand, although it would be best (in an ideal world) to spend a good deal of time characterising the scope of data and charting the range of methods and targets, it is feared that this would totally block progress. Hence a middle way seems necessary, in which a little time is spent on discussing the most promising and the best-understood targets, i.e. looking for the low-hanging fruit. This represents a middle way between the tendencies of the salesman and the academic.
I have written a short-ish (working) document to help me explore my own thoughts on resolving the tension between several factors, which I see as: