On the Question of Validity in Learning Analytics

Whether something works is a basic question to ask when investing time and money in changing practice and adopting new technologies. For learning analytics, the moral dimension adds a certain imperative, although much of what we do in teaching and learning is done by tradition, in spite of questions about efficacy.

Reusing Open Resources: Learning in Open Networks for Work, Life and Education

Back in 2003 I contributed a chapter to Allison Littlejohn's book Reusing Online Resources: A Sustainable Approach to E-learning, and I'm delighted to say that, together with co-authors Sheila MacNeill and Martin Hawksey, I have another paper in the subsequent book in this series, Reusing Open Resources: Learning in Open Networks for Work, Life and Education, edited by Allison Littlejohn and Chris Pegler.

More Data Doesn’t Always Lead to Better Choices – Lessons for Analytics Initiatives

An article appeared in the Times Higher Education online magazine recently (April 3, 2014) under the heading "More data can lead to poor student choices, Hefce [Higher Education Funding Council for England] learns". The article was not about learning analytics but about the data provided to prospective students with the aim of supporting their choice of higher education provider (HEP). The data is accessible via the Unistats web site and includes various statistics on the cost of living, fees, student satisfaction, teaching hours, and employment prospects. In principle this sounds like a good idea; I believe students are genuinely interested in these aspects, and the government and funding council see the provision of this information as a means of driving performance up and costs down. So, although this is not about learning analytics, there are several features in common: the stakeholders involved, the idea of choice informed by statistics, and the idea of shifting a cost-benefit balance for identified measures of interest.

Learning Analytics Interoperability – The Big Picture in Brief

Learning analytics is now moving from being a research interest to being applied in practice by a wider community. As this happens, the challenge of moving data between systems efficiently and reliably becomes of vital practical importance. System interoperability can reduce this challenge in principle, but deciding where to drill down into the details will be easier with a view of the "big picture".

Part of my contribution to the Learning Analytics Community Exchange (LACE) project is a short briefing on the topic of learning analytics and interoperability (PDF, 890k). This introductory briefing, which is aimed at non-technical readers who are concerned with developing plans for sustainable practical learning analytics, describes some of the motivations for better interoperability and outlines the range of situations in which standards or other technical specifications can help to realise these benefits.
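To make the idea of interoperability concrete: one specification in this space is the Experience API (xAPI), which defines a common format for recording learning events. The briefing covers the landscape properly; what follows is only an illustrative sketch, in Python, of sending a single xAPI statement to a hypothetical Learning Record Store. The endpoint URL, credentials, and activity identifiers are placeholders, not drawn from the briefing.

```python
import requests

# A minimal xAPI "statement": who did what to which object.
# Verb and activity URIs are illustrative; the endpoint and
# credentials below are placeholders, not a real service.
statement = {
    "actor": {
        "mbox": "mailto:learner@example.org",
        "name": "Example Learner",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.org/courses/la101",
        "definition": {"name": {"en-US": "Intro to Learning Analytics"}},
    },
}

response = requests.post(
    "https://lrs.example.org/xapi/statements",  # hypothetical LRS endpoint
    json=statement,
    headers={"X-Experience-API-Version": "1.0.1"},
    auth=("username", "password"),  # placeholder credentials
)
response.raise_for_status()
# The statements endpoint returns a list of ids for stored statements.
print("Statement stored with id:", response.json()[0])
```

The point of the sketch is that sender and receiver agree on the structure of the record, so tools from different vendors can emit and consume the same data without pairwise custom integration; that is the practical benefit interoperability specifications aim to deliver.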

Policy and Strategy for Systemic Deployment of Learning Analytics – Barriers and Potential Pitfalls

George Siemens hosted an online seminar in mid-October 2013 to explore issues around the systemic deployment of learning analytics. This post is intended to be equivalent in message to my presentation at the seminar; I think the written word might be clearer, not least because my oratorical skills are not what they could be. The result is still a bit rambling, but I lack the time to develop a nicely-styled essay. A Blackboard Collaborate recording of the online presentation is available, as are the slides I used (pptx, 1.3M, also as pdf, 1M).

Learning Analytics Interoperability – some thoughts on a “way ahead” to get results sometime soon

The "results" of the title are a situation in which increased interoperability contributes to practical learning analytics (exploratory, experimental, or operational). Getting results sometime soon requires care: focussing on the need and the hunger without restraining ambition will surely mean failing to be timely, successful, or both. On the other hand, although it would be best (in an ideal world) to spend a good deal of time characterising the scope of data and charting the range of methods and targets, I fear this would block progress entirely. Hence a middle way seems necessary, in which a little time is spent discussing the most promising and best-understood targets, i.e. looking for the low-hanging fruit. This is a compromise between the tendencies of the salesman and the academic.

I have written a short-ish working document to help me explore my own thoughts on resolving the tension between these several factors, which I see as:

Small is beautiful: an antidote to big data #altc2013

Over the past year Cetis has been spending quite a bit of time exploring the context and potential of analytics within the education sector. The Cetis Analytics Series is our ongoing contribution to the debate. As part of our investigations we undertook a survey of UK institutions to try to establish a baseline of where institutions are "at" in terms of analytics (see this post for more information).