On the Question of Validity in Learning Analytics (17 October 2014)

The question of whether or not something works is a basic one to ask when investing time and money in changing practice and in using new technologies. For learning analytics, the moral dimension adds a certain imperative, although there is much that we do by tradition in teaching and learning in spite of questions about efficacy.

I believe the move to large-scale adoption of learning analytics, with the attendant rise in institution-level decisions, should motivate us to spend some time thinking about how concepts such as validity and reliability apply in this practical setting. The motivation is twofold: large-scale adoption has “upped the stakes”, and non-experts are now involved in decision-making. This article is a brief look at some of the issues with where we are now, and some of the potential pitfalls going forwards.

There is, of course, a great deal of literature on the topics of validity (and reliability), and various disciplines have their own conceptualisations, sometimes with multiple kinds of validity. The Wikipedia disambiguation page for “validity” illustrates the variety, and the disambiguation page for “validation” adds further to it.

For the purpose of this article I would like to avoid choosing one of these technical definitions, because it is important to preserve some variety; the Oxford English Dictionary definition will be assumed: “the quality of being logically or factually sound; soundness or cogency”. Before looking at some issues, it might be helpful to first clarify the distinction between “reliability” and “validity” in statistical parlance (diagram below).

Distinguishing between reliability and validity in statistics.

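To make the statistical distinction concrete, here is a minimal simulation sketch (assuming Python with NumPy; the scenario and numbers are invented for illustration). One measurement process is reliable but not valid – readings cluster tightly around the wrong value – while the other is valid but not reliable – readings centre on the true value but scatter widely.

import numpy as np

rng = np.random.default_rng(0)
true_value = 50.0

# Reliable but not valid: consistent readings, systematically biased.
reliable_not_valid = rng.normal(loc=60.0, scale=1.0, size=1000)

# Valid but not reliable: centred on the true value, widely scattered.
valid_not_reliable = rng.normal(loc=50.0, scale=10.0, size=1000)

for name, x in [("reliable, not valid", reliable_not_valid),
                ("valid, not reliable", valid_not_reliable)]:
    print(f"{name}: mean = {x.mean():.1f} (bias {x.mean() - true_value:+.1f}), "
          f"spread (sd) = {x.std():.1f}")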

Issues

Technical terminology may mislead

The distinction between reliability and validity in statistics leads us straight to the issue that terms may be used with very specific technical meanings that are not appreciated by a community of non-experts, who might be making decisions about what kind of learning analytics approach to adopt. This is particularly likely where terms with every-day meaning are used, but even when technical terms are used without everyday counterparts, non-expert users will often employ them without recognising their misunderstanding.

Getting validity (and reliability) universally on the agenda

Taking a fairly hard-edged view of “validation”, as applied to predictive models, a good start would be to see this being universally adopted, following established best practice in statistics and data mining. The educational data mining research community is very hot on this topic but the wider community of contributors to learning analytics scholarship is not always so focused. More than this, it should be on the agenda of the non-researcher to ask the question about the results and the method, and to understand whether “85% correct classification judged by stratified 10-fold cross-validation” is an appropriate answer, or not.
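For readers who want to see what lies behind such a claim, here is a minimal sketch of stratified 10-fold cross-validation (assuming Python with scikit-learn; the data are a synthetic stand-in, not a real learning analytics dataset).

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic stand-in for "student features -> completed or not" (80% completion).
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.2, 0.8],
                           random_state=42)

model = LogisticRegression(max_iter=1000)

# Stratification preserves the completion/non-completion ratio in every fold.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")

print(f"accuracy: {scores.mean():.1%} (+/- {scores.std():.1%}) over 10 folds")

Even when a figure has been obtained this way, as the next section argues, a single accuracy number is not enough.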

Predictive accuracy is not enough

When predictive models are being described, it is common to hear statements such as “our model predicted student completion with 75% accuracy.” Even assuming that this accuracy was obtained using best practice methods, it glosses over two kinds of fact that we should seek in any prediction (or, more generally, “classification”), but which are too often neglected:

  • How does that figure compare to a random selection? If 80% completed then the prediction is little better than picking coloured balls from a bag (68% predicted correctly). The “kappa statistic” gives a measure of performance that takes account of improved predictive performance compared to chance, but it doesn’t have such an intuitive feel (a worked sketch follows this list).
  • Of the incorrect predictions, how many were false positives and how many were false negatives? How much we value making each kind of mistake will depend on social values and what we do with the prediction. What is a sensible burden of proof when death is the penalty?
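
To illustrate both points, here is a hedged sketch (assuming Python with NumPy; the confusion matrix is invented to be consistent with the 75%-accuracy, 80%-completion example above) showing how the chance-corrected kappa statistic and the split between false positives and false negatives can both be read from a confusion matrix.

import numpy as np

# Invented confusion matrix for 200 students, consistent with an 80% completion
# rate and 75% overall accuracy. Rows: actual outcome; columns: prediction.
#                      predicted complete   predicted non-complete
confusion = np.array([[135, 25],    # actually completed
                      [ 25, 15]])   # actually did not complete

n = confusion.sum()
observed_accuracy = np.trace(confusion) / n                                       # 0.75

# Accuracy expected from "picking coloured balls from a bag" with the same marginals.
expected_accuracy = (confusion.sum(axis=1) * confusion.sum(axis=0)).sum() / n**2  # 0.68

kappa = (observed_accuracy - expected_accuracy) / (1 - expected_accuracy)         # ~0.22

false_negatives = confusion[0, 1]   # completers wrongly predicted not to complete
false_positives = confusion[1, 0]   # non-completers wrongly predicted to complete

print(f"accuracy {observed_accuracy:.2f}, chance level {expected_accuracy:.2f}, kappa {kappa:.2f}")
print(f"false negatives {false_negatives}, false positives {false_positives}")

How the 25 false positives and 25 false negatives should be valued is the social and practical question that the accuracy figure alone cannot answer.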

Widening the conception of validity beyond the technical

One of the issues faced in learning analytics is that the paradigm and language of statistics and data mining could dominate the conceptualisation of validity. The focus on experiment and objective research that is present in most of the technical uses of “validity” should have a counterpart in the situation of learning analytics in practice.

This counterpart has an epistemological flavour, in part, and requires us to ask whether a community of practice would recognise something as being a fact. It includes the idea that professional practice utilising learning analytics would have to recognise an analytics-based statement as being relevant. An extreme example of the significance of social context to what is considered valid (fact) is the difference in beliefs between religious and non-religious communities.

For learning analytics, it is entirely possible to have some kind of prediction that is highly statistically-significant and scores highly in all objective measures of performance but is still irrelevant to practice, or which produces predictions that it would be unethical to use (or share with the subjects), etc.

Did it really work?

So, let’s say all of the above challenges have been met. Did the adoption of a given learning analytics process or tool make a desirable difference?

This is a tough one. The difficulty in answering such questions in an educational setting is considerable, and has led to the recent development of new research methodologies such as Design-based Research, which is gaining adoption in educational technology circles (see, for example, Anderson and Shattuck in Educational Researcher vol. 41, no. 1).

It is not realistic to expect robust evidence in all cases, but in the absence of robust evidence we also need to be aware of proxies such as “based on the system used by [celebrated university X]”.

We also need to be aware of the pitfall of the quest for simplicity in determining whether something worked; an excessive focus on objective benefits neglects much of value. Benefits may be intangible, indirect, found other than where expected, or out of line with political or business rhetoric. As the well worn aphorism has it, “not everything that counts can be counted, and not everything that can be counted counts.” It does not follow, for example, that improved attainment is either a necessary or sufficient guide to whether a learning analytics system is a sound (valid) proposition.

Does it generalise? (external validity)

As we try to move from locally-contextualised research and pilots towards broadly adoptable patterns, it becomes essential to know the extent to which an idea will translate. In particular, we should be interested to know which of the attributes of the original context are significant, in order to estimate its transferability to other contexts.

This thought opens up a number of possibilities:

  • It will sometimes be useful to make separate statements about validity or fitness for purpose for a method and for the statistical models it might produce. e.g. is the predictive model transferable, or the method by which it is discovered?
  • It may be that learning analytics initiatives that are founded on some theorisation about cause and effect, and which somehow test that theorisation, are more easily judged in other contexts.
  • As the saying goes, “one swallow does not a summer make” (Aristotle, but still in use!), so we should gather evidence (assess validity and share the assessment) as an initially-successful initiative migrates to other establishments and as time passes.

Transparency is desirable but challenging

The above points have been leading us in the direction of the need to share data about the effect of learning analytics initiatives. The reality of answering questions about what is effective is non-trivial, and the conclusions are likely to be hedged with multiple provisos, open to doubt, requiring revision, etc.

To some extent, the practices of good scholarship can address this issue. How easy is this for vendors of solutions (by this I mean other than general purpose analytics tools)? It certainly implies a humility not often attributed to the sales person.

Even within the current conventions of scholarship we face the difficulty that the data used in a study of effectiveness is rarely available for others to analyse, possibly asking different questions, making different assumptions, or for meta-analysis. This is the realm of reproducible research (see, for example, reproducibleresearch.net) and subject to numerous challenges at all levels, from ethics and business sensitivities down to practicalities of knowing what someone else’s data really means and the additional effort required to make data available to others. The practice of reproducible research is generally still a challenge but these issues take on an extra dimension when we are considering “live” learning analytics initiatives in use at scale, in educational establishments competing for funds and reputation.

To address this issue will require some careful thought to imagine solutions that side-step the immovable challenges.

Conclusion… so what?

In conclusion, I suggest that we (a wide community including research, innovation, and adoption) should engage in a discourse, in the context of learning analytics, around:

  • What do we mean by validity?
  • How can we practically assess validity, particularly in ways that are comparable?
  • How should we communicate these assessments to be meaningful for, and perceived as relevant by, non-experts, and how should we develop a wider literacy to this end?

This post is a personal view, incomplete and lacking academic rigour, but my bottom line is that learning analytics undertaken without validity being accounted for would be ethically questionable, and I think we are not yet where we need to get to… what do you think?


Target image is “Reliability and validity” by Nevit Dilmen. Licensed under Creative Commons Attribution-Share Alike 3.0 via Wikimedia Commons – http://commons.wikimedia.org/wiki/File:Reliability_and_validity.svg

This post was first published on the Learning Analytics Community Exchange website, www.laceproject.eu.

Learning Analytics Watchdog – a job description of the future for effective transparency? (24 September 2014)

We should all be worried not simply when data about us is used, but when the purpose for which it is used and the methods employed are opaque. Credit ratings and car insurance are long-standing examples we have got used to, and for which the general principles are widely known. Importantly, we believe that there is sufficient market-place competition that, within the limits of the data available to the providers, the recipes used are broadly fair.

Within both educational establishments and work-place settings, an entirely different situation applies. There is not the equivalent of competition, and our expectations of what constitutes ethical (and legal) practice are different. Our expectations of what the data should be used for, and by whom, differ. The range of data that could be used, and the diversity of methods that could be harnessed, is so enormous that it is tempting not to think about the possibilities, and to hide one’s head in the sand.

One of the ideas proposed to address this situation is transparency, i.e. that we, as the subjects of analytics, can look and see how we are affected, and, as the objects of analytics, can look and see how data about us is being used. Transparency could be applied at different points, and make visible information about:

  • the data used, including data obtained from elsewhere,
  • who has access to the data, as raw data or derived to produce some kind of metric,
  • to whom the data is disclosed/transferred,
  • the statistical and data mining methods employed,
  • the results of validation tests, both at a technical level and at the level of the education/training interventions,
  • what decisions are taken that affect me.

Frankly, even speaking as someone with some technical knowledge of databases, statistics and data mining, it would make my head hurt to make sense of all that in a real-world organisation! It would also be highly inefficient for everyone to have to do this. The end result would be that little, if any, difference in analytics practice would be caused.

I believe we should consider transparency as not only being about the freedom to access information, but as including an ability to utilise it.  Maybe “transparency” is the wrong word, and I am risking an attempt at redefinition. Maybe “openness for inspection” would be better, not just open, but for inspection. The problem with stopping at making information available in principle, without also considering use, applies to some open data initiatives, for example where public sector spending data is released; the rhetoric from my own (UK) government about transparency has troubled me for quite some time.

It could be argued that the first challenge is to get any kind of openness at all, that the tendency towards black-box learning analytics should first be countered. My argument is that this could well be doomed to failure unless there is a bridge from the data and technicalities to the subjects of analytics.

I hope that the reason for the title of this article is now obvious. I should also add that the idea emerged in the Q&A following Viktor Mayer-Schönberger’s keynote at the recent i-KNOW conference.

One option would be to have Learning Analytics Watchdogs: independent people with the expertise to inspect the way learning analytics is being conducted, to champion the interests of those affected, both learners and employees, and to challenge the providers of learning analytics as necessary. In the short term, this will make it harder to roll out learning analytics, but in the long term it will, I believe, pay off:

  • Non-transparency will ultimately lead to a breakdown of trust, with the risk of public odium or being forced to take down whole systems.
  • A watchdog would force implementers to gain more evidence of validity, avoiding analytics that is damaging to learners and organisations. Bad decisions hurt everyone.
  • Attempts to avoid being savaged by the watchdog would promote more collaborative design processes, involving more stakeholders, leading to solutions that are better tuned to need.


Watchdog image is CC-BY-SA Gary Schwitzer, via Wikimedia Commons.

This post was first published on the Learning Analytics Community Exchange website, www.laceproject.eu.


Do Higher Education Institutions Need a Learning Analytics Strategy? (27 June 2014)

The LACE Workshop, “Developing a Learning Analytics Strategy for a Higher Education Institution”, took place on June 17th 2014, with over 35 participants exploring the issues and considering the question of what such a strategy would look like.

Approaching Strategy as a Business Model

The approach taken was to use an adapted version of the Business Model Canvas – see the workshop home page for more information – to attempt to frame a strategic response to Learning Analytics (LA). The use of this approach, in which the Canvas was used flexibly rather than rigidly, was predicated on the idea that most of the factors inherent in a successful business model are also important for the success of a LA initiative, and that pitching such an initiative to senior management would be more successful if these factors had been considered. Considering all of these factors at the same time, albeit in a shallow fashion, should be a good check for how realistic the approach is. The factors, which are further teased apart in the Canvas, are:

  • The stakeholders (“interested parties” might be a better term, or “the people for whom value is to be created”).
  • What value (not necessarily financial) would be reaped by these stakeholders.
  • How the stakeholders will be related to, engaged with, etc.
  • Which stakeholders are prepared to pay, and how.
  • The human, physical, and other resources required, and indicated costs.
  • The activities to be undertaken, and indicated costs.
  • Key partners and suppliers.

The Business Model Canvas approach appears, on face value, to be a sensible way of approaching the question of what a LA strategy might look like but it presumes a subset of approaches to strategy. The workshop could be viewed as a thought experiment about the nature of this subset.

A separate web-page contains the results of the group work, Canvas templates with post-it notes attached.

Different Kinds of Strategy

Three differing approaches to a LA strategy seemed to emerge in discussions:

  1. A major cross-functional programme, “LA Everywhere”.
  2. LA in the service of particular institution-level strategic objectives, “Targeted LA”.
  3. A strategy based around putting enabling factors in place, “Latent LA”.

The majority of the workshop groups ended up following the second approach when using the Canvas, although it should be noted that budget and human resource limitations were assumed.

One aspect that emerged clearly in discussion, feedback, and comment was that even targeted LA initiatives have a complex set of inter-related, sometimes conflicting, stakeholders and values. Attempts to accommodate these relationships lead to a rapidly inflating challenge and push discussion in the direction of “LA Everywhere”. LA Everywhere implies a radical re-orientation of the institution around Learning Analytics and is unlikely to be feasible in most Higher Education establishments, even assuming substantial financial commitments.

The Latent LA approach can be seen as a means of addressing this complexity, but also as a strategy driven by a need to learn more before targeting institution-level strategic objectives. Latent LA may not be recognised by some observers as a LA strategy per se but it is a strategic response to the emergence of the concept of learning analytics and the potential benefits it offers.

The enabling factors could include aspects such as:

  • data and statistical literacy;
  • amending policies and practices around information management, ownership, and governance to encompass LA-relevant systems and LA uses;
  • ethical and value-based principles;
  • fact-finding and feasibility assessment;
  • changing culture towards greater use of data as evidence (in an open-minded way, with limitations understood);
  • developing a set of policies, principles, and repeatable patterns for the identification, prioritisation, and design of LA initiatives.

Issues with the Business Model Approach

Although the template based on the Business Model Canvas provided a framework to consider Learning Analytics strategy, there were a number of practical and conceptual limitations for this application. The practical issues were principally that time was rather limited (slightly over an hour) and that the activity was not being undertaken in a particular organisational context. This limited the extent to which conclusions could be drawn, as opposed to ideas explored.

The conceptual limitations for considering LA strategy include:

  • The Canvas did not lend itself to an exploration of inter-relationships and dependencies, and the differing kinds of relationship (this point is not about linking stakeholders to value proposition, etc, which is possible using colour coding, for example).
  • One of the key questions that is more important in education than in a normal business context is: what effect will this have on teaching/educational practice? The Canvas is not designed to map out effect on practice, and how that relates to values or educational theory.
  • A business model approach is not a good fit to a strategic response that is akin to Latent LA.

Do HE Institutions Need a LA Strategy?

This is a question that will ultimately be answered by history, but it may be more appropriate to ask whether a HE institution can avoid having a LA strategy. The answer is probably that avoidance is not an option; it will become progressively harder for leaders of HE institutions to appear to be doing nothing about LA when jostling for status with their peers.

In other words, whether or not there is a need, it seems inevitable that LA strategies will be produced. The tentative conclusion I draw from the workshop is that a careful blend of Latent and Targeted LA will be the best approach to having a strategy that delivers benefit, with the balance between Latent and Targeted varying between institutions. In this model, Latent LA lays the foundations for long term success while some shorter-term “results” and something identifiable arising from Targeted LA are a political necessity, both internal to the institution, and externally.

Web links to a selection of resources related to the question of learning analytics strategy may be found on the workshop home page.


This post was first published on the Learning Analytics Community Exchange website, www.laceproject.eu.

Open Learning Analytics – progress towards the dream (17 April 2014)

In 2011, a number of prominent figures in learning analytics and educational data mining published a concept paper on the subject of Open Learning Analytics (PDF), which they described as a “proposal to design, implement and evaluate an open platform to integrate heterogeneous learning analytics techniques.” This has the feel of a funding proposal: a grand vision of an idealised future state. I was, therefore, a little wary of the possibility that the recent Open Learning Analytics Summit (“OLA Summit”) would find it hard to get any traction, given the absence of a large pot of money. The summit was, however, rather interesting.

The OLA Summit, which is described in a SoLAR press release, immediately followed the Learning Analytics and Knowledge Conference and was attended by three members of the LACE project. A particular area of shared special interest between LACE and the OLA Summit is in open standards (interoperability) and data sharing.

One of the factors that contributed to the success of the event was the combined force of SoLAR, the Society for Learning Analytics Research, with the Apereo Foundation, which is an umbrella organisation for a number of open source software projects. Apereo has recently started a Learning Analytics Initiative, which has quite open-ended aims: “accelerate the operationalization of Learning Analytics software and frameworks, support the validation of analytics pilots across institutions, and to work together so as to avoid duplication”. This kind of soft-edged approach is appropriate for the current state of learning analytics; while institutions are still finding their way, a more hard-edged aim, such as building the learning analytics platform to rule the world, would be forced to anticipate rather than select requirements.

The combination of people from the SoLAR and Apereo communities, and an eclectic group of “others”, provided a balance of perspective; it is rare to find deep knowledge about both education and enterprise-grade IT in the same person. I think the extent to which the OLA Summit helped to integrate people from these communities is one of its key, if intangible, outcomes. This provides a (metaphorical) platform for future action. In the meantime, the various discussion groups intend to produce a number of relatively small scale outputs that further add to this platform, in a very bottom-up approach.

There is certainly a long way to go, and a widening of participation will be necessary, but a start has been made on developing a collaborative network from which various architectures, and conceptual and concrete assets will, I hope, emerge.


This post was first published on the Learning Analytics Community Exchange website, www.laceproject.eu.

More Data Doesn’t Always Lead to Better Choices – Lessons for Analytics Initiatives (4 April 2014)

An article appeared in the Times Higher Education online magazine recently (April 3, 2014) under the heading “More data can lead to poor student choices, Hefce [Higher Education Funding Council for England] learns”. The article was not about learning analytics, but about the data provided to prospective students with the aim of supporting their choice of Higher Education provider (HEp). The data is accessible via the Unistats web site, and includes various statistics on the cost of living, fees, student satisfaction, teaching hours, and employment prospects. In principle, this sounds like a good idea; I believe students are genuinely interested in these aspects, and the government and funding council see the provision of this information as a means of driving performance up and costs down. So, although this is not about learning analytics, there are several features in common: the stakeholders involved, the idea of choice informed by statistics, and the idea of shifting a cost-benefit balance for identified measures of interest.

For the rest of this article, which I originally published as “More Data Can Lead to Poor Student Choices”, please visit the LACE Project website.

Learning Analytics Interoperability – The Big Picture in Brief (28 March 2014)

Learning analytics is now moving from being a research interest to being of interest to a wider community who seek to apply it in practice. As this happens, the challenge of efficiently and reliably moving data between systems becomes of vital practical importance. System interoperability can reduce this challenge in principle, but deciding where to drill down into the details will be easier with a view of the “big picture”.

Part of my contribution to the Learning Analytics Community Exchange (LACE) project is a short briefing on the topic of learning analytics and interoperability (PDF, 890k). This introductory briefing, which is aimed at non-technical readers who are concerned with developing plans for sustainable practical learning analytics, describes some of the motivations for better interoperability and outlines the range of situations in which standards or other technical specifications can help to realise these benefits.

In the briefing, we expand on benefits such as:

  • efficiency and timeliness,
  • independence from disruption as software components change,
  • adaptability of IT architectures to evolving needs,
  • innovation and market growth,
  • durability of data and archival,
  • data aggregation, and
  • data sharing.

Whereas the focus of attention in learning analytics is often on data collected during learner activity, the briefing paper looks at the wider system landscape within which interoperability might contribute to practical learning analytics initiatives, including interoperability of models, methods, and analytical results.

The briefing paper is available from: http://laceproject.eu/publications/briefing-01.pdf (PDF, 890k).

LACE is a project funded by the European Commission to support the sharing of knowledge, and the creation of new knowledge through discourse. This post was first published on the LACE website.

Policy and Strategy for Systemic Deployment of Learning Analytics – Barriers and Potential Pitfalls (31 October 2013)

George Siemens hosted an online seminar in mid October 2013 to explore issues around the systemic deployment of learning analytics. This post is intended to be equivalent in message to my presentation at the seminar; I think the written word might be clearer, not least because my oratorical skills are not what they could be. The result is still a bit rambling, but I lack the time to develop a nicely-styled essay. A Blackboard Collaborate recording of the online presentation is available, as are the slides I used (pptx, 1.3M, also as pdf, 1M).

The perspective taken is largely that of “systemic” meaning in the context of a higher education institution deploying learning analytics. There is, of course, a wider regulatory and political dimension to a systemic analysis, and this is touched upon but not explored. My understanding of the HE institutional context is based on work Cetis has done from 2006-2013 as an Innovation Support Centre for the Joint Information Systems Committee (Jisc), which aims to support both further and higher education within the UK.

According to the scope of the seminar I will be considering learning analytics. As I’ve written before I am cautious of defining terms but, when I have to, I take the stance that learning analytics is defined (and potentially redefined over time) according to the questions or decisions an educator cares about.

The position I take in what follows is also informed by the fact that Cetis is based in the Institute for Educational Cybernetics (IEC) at the University of Bolton. Cybernetics conjures up an image of robotics, but the development of cybernetics in the middle of the 20th century took a much wider view on problems of control, information flow, the relationship between structure and function of complex systems, and so on – a view which includes social and non-deterministic phenomena. IEC’s mission follows this wider conception, and is “to develop a better understanding of how information and communications technologies affect the organisation of education from individual learning to the global system.”

How Do I Feel About Systemic Learning Analytics?

I hope it doesn’t sound too peculiar to be talking about feelings…

Mihály Csíkszentmihályi’s flow model provides a convenient, if academically un-justifiable, reference point for me both to describe how I feel about learning analytics as an individual with some practical analytical ability, and to describe how I feel about how the post-compulsory education community is approaching learning analytics. Asking these questions in our discussion groups – where am I and where are we – and an exploration of why would make a useful overture to more practical discussion of what to DO about learning analytics.


Depiction of stereotypes and the relationship between skill and challenge in Mihály Csíkszentmihályi’s Flow Model. Image: public domain, by Oliver Beatson.

My self-assessment is “anxiety” about how the community at large is approaching learning analytics. By this I mean to say that my perception of the level of skill we collectively have is low and the challenge level is high. My sense from what I hear and read is that quite a few prominent voices would estimate skill as being higher and challenge as being lower. This difference only makes me more anxious. It is conceivable that the difference between these estimates and my own arises from alternative conceptions of what we are aiming at with learning analytics. Although this might betray some arrogance, I would put myself on the anxious/aroused boundary.

The rest of this post can be thought of as an exploration of skill, challenge, and aim, and why I am anxious.

Practical Issues

Some evidence on perception of obstacles to adoption of analytics, and of the capabilities that are in place (or otherwise) is available. EDUCAUSE published a report into their survey of analytics readiness in US universities in 2012 [1] and we published the findings of our own survey in 2013, which was UK-specific and of quite small scale. Both considered analytics generally and were not specific to learning analytics.

The relevant summary charts from each publication are:


What is in place? From [1]


Obstacles to adoption of analytics. From Cetis survey.

Both surveys have qualitatively similar findings in the areas where they overlap (we did not, for example, ask specifically about funding/investment), although the issue of staff acceptance is a clear difference and is one where we doubt the findings of our survey.

In contrast to many initiatives involving data and IT systems, senior leader interest is not an issue. In our survey we also asked about levels of awareness of developments in analytics outside the education sector and found that our respondents believed this group had the highest level of awareness.

Analytical expertise is clearly perceived as a key obstacle or capability that is not in place, with a number of data-related factors being in need of attention, leading [1] to recommend:

Invest in people over tools. Recognize that analytics is at least as much about people as it is about data or tools. Investment needs to begin at the level of hiring and/or training an appropriate number of analysts to begin addressing strategic questions.

Both surveys indicate scope for some direct investment, and examples can be found of organisations that have anticipated this recommendation. I quite like the way the Open University has approached the problem by building capability within and around their student survey team and engaging with academic staff. One of the presentations in the online seminar explains where OU is going with its systemic organisational analytics infrastructure.

The capability gap could be partially filled, or compensated for by various realisations of the idea of a shared service. In the US, the PAR Framework, which describes itself as being ‘a non-profit multi-institutional data mining cooperative’ benefits its members by providing individual risk models as well as aggregated anonymised data. In the UK, two organisations with a great deal of data from across the whole HE sector continue to develop their offerings, backed up by their in-house analytical expertise: the Higher Education Statistics Agency (HESA) and the Universities and Colleges Admissions Service (UCAS). Both organisations have historically focussed on benchmarking and statistical analysis for policy but are well positioned to offer diverse services direct to institutions based on their sector-wide data, potentially linked to institutionally-owned information subject to suitable protocols.

So, We Need to Hire Biddable Experts, Right?

Well… yes, and no. I think there are dangers in a response to the challenge of establishing systemic analytics that is exclusively based on a centre of expertise model.

Jeanne Harris wrote in the Harvard Business Review blog on the topic under the heading ‘Data is Useless without the Skills to Analyze it‘ in 2012:

…employees need to become:
Ready and willing to experiment: Managers and business analysts must be able to apply the principles of scientific experimentation to their business. They must know how to construct intelligent hypotheses. They also need to understand the principles of experimental testing and design, including population selection and sampling, in order to evaluate the validity of data analyses.

Adept at mathematical reasoning: How many of your managers today are really “numerate” — competent in the interpretation and use of numeric data?’

The message that I take home from this post is that, for all we perceive the business world outside higher education to be more advanced in its adoption of analytics, there has been a lack of attention to the need for a certain level of critical thought and expertise among those who receive the results of analysis. I am reminded of comments made by my colleague Phil Barker back in 2008 about the book Rethinking Expertise by Harry Collins and Robert Evans. Phil commented that:

The novel idea in the book was that it is possible to have “interactional expertise”, which is the ability to talk sensibly to domain experts about a topic (e.g. gravitational wave physics) without being able to make a contribution. It is implied that this level of expertise would be useful in the management of projects and setting of public policy that have scientific or technical elements.

Rather than follow the business world into this deficiency, we should avoid it. ‘But…’, I hear someone say, ‘universities already have people who are experts in scientific experimentation, testing, design of statistical tests, mathematics, hypothesis generation and testing, …’

Read on.

Our Experimental and Mathematical Expertise is Not Where We Need It

Yes, we do have people who are experts in scientific experimentation, testing, design of statistical tests, mathematics, hypothesis generation and testing, … but, they are busy teaching students and doing research (possibly in areas that do not easily translate to learning analytics methods).

The consequence, it seems to me – and it is an impression based on observation rather than investigation – is that we are importing bad habits from the business world and using inappropriate products peddled to us by the IT industry. The level of expertise that I think should be fairly well distributed throughout our organisations, wherever data-informed decisions are being made, is such that each of the following examples would be challenged. Until then, I fear we will have an inadequate level of data/analytical literacy for meaningful systemic [learning] analytics. I call this level of appreciation of analytics, and ability to comprehend and query validity, “soft capability”. It is a counterpart to the kind of expertise necessary to originate analytical method or deal with most practical data, for which most universities will have to hire talent.

3D Visualisations and False Perspective

3D visualisations may be useful for specialised information visualisation tasks but for presentation of common numerical data, they introduce distortion that fundamentally undermines the purpose of visualisation: to appeal to our “visual intelligence”, giving near-instantaneous apprehension of proportion, pattern, relationship, etc.


This 2008 image with Steve Jobs was rightly criticised as visual trickery to give a false impression of Apple’s market share. Image (c) Ryan Block/Engadget 2008.

The pie chart is generally considered to be a poor way to visualise relative proportions – it is much easier for most people to judge this from the length of bars – but to compound this with false perspective of a 3D representation is surely to be avoided. And yet, it is easy to find examples from people who are proficient at handling data.

In my world of adequate soft capability, reports containing 3D and perspective would be sent back to the creator.

Sparklines – is Edward Tufte in Despair?

Sparklines are ascribed to Edward Tufte, who introduced them in his book Beautiful Evidence (for more information, see Tufte’s excerpt and notes on theory and practice of sparklines). Tufte described them as ‘data-intense, design-simple, word-sized graphics’ and described how to maximise value within the minimalist context.


Sparklines showing normal range and final value, according to Tufte.


Sparkline with highly condensed scale information, according to Tufte.

Unfortunately, although the sparkline idea found its way into Excel 2010 and into many business intelligence dashboards, it has become debased to a simple squiggle that is often lacking the features Tufte devised to concisely convey normal range, scale, proportional change, and range of data.


It is almost impossible to draw meaningful information from a debased sparkline so why have it clutter up your report or dashboard?
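
For contrast, here is a rough sketch of a sparkline that keeps the features Tufte argued for – a normal-range band, marked extremes, and a labelled final value – so that a reader can judge scale as well as shape (assuming Python with matplotlib; the data and styling choices are invented).

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
values = 50 + np.cumsum(rng.normal(0, 2, size=60))     # invented weekly metric

fig, ax = plt.subplots(figsize=(2.5, 0.4))             # "word-sized" graphic
ax.axhspan(values.mean() - values.std(),
           values.mean() + values.std(), color="0.85") # normal-range band
ax.plot(values, color="black", linewidth=0.8)

# Mark the extremes and the final value, with the final value labelled.
for idx in (values.argmin(), values.argmax()):
    ax.plot(idx, values[idx], "o", color="blue", markersize=2)
ax.plot(len(values) - 1, values[-1], "o", color="red", markersize=3)
ax.annotate(f"{values[-1]:.0f}", (len(values) - 1, values[-1]),
            xytext=(3, -2), textcoords="offset points", fontsize=6, color="red")

ax.axis("off")
plt.savefig("sparkline.png", dpi=300, bbox_inches="tight")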

In my world of adequate soft capability, people would complain about dashboards and visualisations lacking enough information to judge scale of change, range of duration, suppressed zero, etc.

Significance and Sample Size

Whereas the previous two cases do not require a knowledge of statistics to spot, questions of significance and sample size are slightly more difficult concepts to apply because of the need to know some statistics. The basic non-statistical question, however, is: how does what we observe differ from what we would expect by chance alone? This question is rarely asked by the recipients of analytical reports. Worse than that, too many reports fail to indicate the significance, or otherwise, of findings relative to the hypothesis that the observed change/difference could fairly be ascribed to chance (at a certain level of confidence).

Here is a made-up example, based on a real report of a student questionnaire, and believed to be similar in its deficiency to many end-of-module survey reports: ‘Flagged for action because 72% of students felt they were well supported by their supervisor compared to 80% in similar institutions (difference >5% action threshold)’.

A threshold for action of 5%, picked out of thin air without reference to sample size or baseline value (80%), is not defensible. If the sample size were only 25 then 72% equates to 18 satisfied responses. If we assume these 25 students were randomly sampled from a population with 80% satisfaction, then there is actually around a 22% chance of seeing 18 or fewer “satisfied” responses. Worthy of action? Probably not.
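
The 22% figure can be checked in a couple of lines (a sketch assuming Python with SciPy; the numbers are those of the made-up example above): the probability of seeing 18 or fewer satisfied responses out of 25 if the underlying satisfaction rate really were 80%.

from scipy.stats import binom

n, baseline = 25, 0.80
satisfied = 18                           # 72% of a sample of 25

# Chance of 18 or fewer "satisfied" responses if the true rate matched the 80% benchmark.
p = binom.cdf(satisfied, n, baseline)
print(f"P(X <= {satisfied}) = {p:.2f}")  # roughly 0.22 - hardly grounds for action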

In my world of adequate soft capability, people would always ask: “how does what we observe differ from what we would expect by chance alone?”

Significance and Baseline

Here is a further, more subtle, example where a failure to address the difference between what you would expect by chance and what you observe leads to misleading conclusions.


A map showing geographical distribution of people who attended Cetis events. Note that this is also an example of an “outlier” indicating an error somewhere – look NE of East Anglia.

Now, as it goes, this is a reasonable qualitative indicator of Cetis’ reach in the UK (and a number of continental points have been cropped) but much of the geographical distribution reflects UK population density rather than anything more interesting about Cetis. In this case, there are too few data points to sensibly answer the question: ‘so how does that differ from what you would expect by chance?’ To do that, we might need to find out the distribution and staff-size of universities, to which most (but not all) of our community belong.

A more current example might be MOOC participant distribution. How different is the distribution of participants compared to what would be expected from the distribution of speakers of English with easy access to a computer and internet connectivity? This isn’t to say that a plain map showing the distribution of participants is valueless, but it is to say that if we are claiming to be doing analytics on the distribution of people it may be valueless or worse, misleading. (NB I don’t see this as black-and-white, a plain map would be useful to better understand statistics such as average travel distance to class, for example)
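
The kind of check being asked for can be sketched as follows (assuming Python with SciPy; the regions, counts, and population shares are entirely invented): compare observed participant counts against a population-proportional baseline, and ask how far the pattern departs from what chance alone would give.

import numpy as np
from scipy.stats import chisquare

# Invented example: event participants per region vs each region's share of the
# relevant population (e.g. English-speaking, internet-connected adults).
regions          = ["North", "Midlands", "South East", "South West"]
observed         = np.array([45, 60, 130, 25])            # participants counted
population_share = np.array([0.20, 0.25, 0.45, 0.10])     # baseline proportions

expected = observed.sum() * population_share              # chance-alone expectation

stat, p_value = chisquare(observed, f_exp=expected)
for region, o, e in zip(regions, observed, expected):
    print(f"{region:>10}: observed {o:3d}, expected {e:5.1f}, ratio {o / e:.2f}")
print(f"chi-square p-value = {p_value:.3f}")   # small p: pattern differs from the baseline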

In my world of adequate soft capability, people would ask: “how does that Google Map overlay of participants differ from population density?”

The Benefits of “Soft Capability”

The primary message of the preceding sections is that, in order for learning analytics to be sensibly acted upon, there should be an adequate supply of expertise tuned to critical engagement with the method and outputs of analytics. I call this a “soft capability” to distinguish it from the necessity to also have people who can run the IT systems and wrangle the data, which are “hard” both in the sense of being easier to identify and quantify, and of being more technically difficult. The essence of soft capability is that it exists in people who have contextual knowledge. This is essential to bridge the gap from numbers and visualisations to action. This must happen before systemic learning analytics, if we are to take “systemic” as being a meaningful transformation rather than an imposed, and doubtless often circumvented, process. As Macfadyen and Dawson [2] conclude in their interesting paper about the difficulty in bridging the gap between numbers and strategic decision-making:

Interpretation remains critical. Data capture, collation and analysis mechanisms are becoming increasingly sophisticated, drawing on a diversity of student and faculty systems. Interpretation and meaning-making, however, are contingent upon a sound understanding of the specific institutional context. As the field of learning analytics continues to evolve we must be cognizant of the necessity for ensuring that any data analysis is overlaid with informed and contextualized interpretations.

Informed decision-making is not the only benefit of establishing soft capability. It can also greatly increase the effectiveness of collaborative design. Such a process is enriched by people who have enough skill to prototype and explore, through to people who might only ask ‘so how is this different from what we would expect by chance?’ or ‘what assumptions have you made about …?’. It becomes possible to greatly increase the quantity and quality of participation in the design of systemic learning analytics.

A Side-bar: Variety, Control, Organisation, and Boundaries of Decidability

In the introduction, I referred to the mission of the department I belong to, the Institute for Educational Cybernetics, which was “to develop a better understanding of how information and communications technologies affect the organisation of education from individual learning to the global system.” One of the lenses we use to look at this problem is cybernetics, which was developed into a multi-disciplinary field from the middle of the 20th century. The ideas of “system”, the role of information, and effective forms of organising systems-within-systems have been applied to machines, management, psychoanalysis, … . The view of “system” is inclusive of social and technical components; the behaviour of a system cannot be understood without embracing the whole (socio-technical) and apprehending that systems have properties that arise from the relationships between individual components of the system, not just from the summation of the properties of individuals.

“Variety” is a building-block concept, which can be explained as the number of possible states of a controller or controlled entity (see Ashby’s Law of Requisite Variety, below). Approximate colloquial equivalents, from the point of view of a controller without requisite variety, might be “I don’t have the bandwidth for that”, or “I don’t have the head-space for that”. The appearance of “control” might seem odd, or evoke a response that education is not about control. But it is. Not total control, but the purpose of education is to develop, to steer, to bring out (the etymology of education is from “educere”, to lead out).


Ross Ashby, whose Law of Requisite Variety – “variety absorbs variety” – defines the minimum number of states necessary for a controller to control a system of a given number of states.

Turning Ashby’s Law around suggests limits on the extent of control that a controller should attempt to apply. The teacher does not, and should not attempt to, control the learner in all respects. One of the features of effective education is the degree to which the learner can progressively increase their self-regulation and their own variety as learners: good habits and metacognition. The same thought applies throughout the whole educational system; an effective and viable system adopts a recursive structure of control and inner regulation, right down from the state, through institutional management and the teacher, to the learner.

If we are thinking about learning analytics then we should ask which questions are decidable from data. In education, things are already bad enough. The outcomes we care about – developments in the learner – are only ever guessable. Even if you could lurk on their shoulders as an invisible demon observing all actions, you would still have to make inferences about what intervention in their learning process would be fitting. The data we have is one further step removed from the learner; one must use it to infer what the learner was doing before then using these inferences to infer how what they were doing relates to their learning. The information about the state of the learner that is available is necessarily incomplete; even the highest-variety controller cannot access the necessary number of states. Even if there is complete instrumentation and capture of every action of the learner, there is much that remains unknowable. Informed action is bounded.

This fundamental problem suggests that our ambition for learning analytics should be tempered with scepticism that having Big Data can reveal much just because of its scale. We should first recognise the variety gained through experience as a reflective educational practitioner or experienced educational researcher, second the attenuation of information when people are interacting (in a suitable affective state). What appears to be intuition is complex pattern matching.

There remains a great deal of data that an educator lacks the variety to deal with: information where quantity is high but each item is relatively simple. This is where analytics comes into its own, so long as we bear in mind that the system is socio-technical; it includes pedagogic practice. The most effective learning analytics is, according to my analysis, likely to be where it can reduce large volumes of data to simple, context-specific, and [pedagogic] practice-relevant facts.

The Management/Practice Accommodation

As noted in the previous section, the concept of variety can be used to explain why organisational structure is necessary; the finite variety of a controller limits what it can control, and structure within a system allows action to be taken on units that are in other ways self-regulating and self-organising. Many organisations limit the extent of self-regulation and self-organisation because of the (implicit or explicit) management theories of senior staff, or because of imposed accountability. This can lead to undesirable and unintended consequences. For example, the setting of targets for healthcare by the UK Government focussed attention on specific measures such as waiting time, and distorted management in the NHS away from managing their system as a whole. The availability of this data, which is justifiable on the grounds of public good (transparency and improvement through research on health-care delivery, etc.), enabled government intervention. They should have been wise enough to seek measures of control that do not involve reaching inside the cab and pulling levers.

To turn to education, the boundary between what has been the province of management and what has been the province of teachers has evolved over time. How this has happened in Higher Education (in the UK in particular, but not exclusively), and the likely disruptive effect of learning analytics, has been explored in one of the Cetis Analytics Series papers by Dai Griffiths [3], in which he says:

The introduction of these techniques [Learning Analytics] cannot be understood in isolation from the methods of educational management as they have grown up over the past two centuries. These methods are conditioned by the fact that educational managers are limited in their capability to monitor and act upon the range of states which are taken up by teachers and learners in their learning activities. … Over the years, an accommodation has developed between regulatory authorities, management and teaching professionals: educational managers indicate the goals which teachers and learners should work towards, provide a framework for them to act within, and ensure that the results of their activity meet some minimum standards. The rest is left up to the professional skills of teachers and the ethical integrity of both teachers and learners.

Now, it might be contended that the accommodation should be challenged, that there is benefit to be won from changing where we draw the boundaries of the sub-systems and consequently the information transacted. I see the issue not as being that change is bad – indeed, resistance to change may well be self-destructive – but that threats may arise in moves towards systemic learning analytics:

  • Changes in the management/practice relationship may be introduced without consideration of what makes for an effective and viable structure. There is a temptation to pull levers.
  • Systemic learning analytics may be attempted too quickly and either lead to the imposition of a new relationship rather than the settling of a new accommodation or to change that exceeds the pace by which evidence of effect can be gathered.

Systemic learning analytics brings with it a risk that educational systems may be made less effective if the focus is on management control and subtlety is lacking.

So You Want to Optimise Student Success?

On the face of it “do you want to optimise student success?” sounds like a classic rhetorical device. How could anyone disagree? What follows in rhetoric is a proposed response. Everyone cheers at the most charismatic orator and we all go away feeling good to be part of such a worthy project. The expression is widespread; put “optimize student success” analytics into your favourite search engine and take a look.

There are two aspects of this expression that trouble me. The first is that what constitutes success is largely taken as a common-sense concept, whereas I contend that it is: a) a contested concept, b) personal to the student, c) unknowable because it is an integral over the lifetime of the student (which is likely to include jobs we cannot yet conceive of). Too many high profile pronouncements about learning analytics fail to even acknowledge this problem. It’s not that no-one has thought about it. Many have (I particularly like Ruth Deakin Crick‘s line of research).

At least some reflection on what constitutes success would be nice. For example, here are four views (the first three are similar to the measures used by Ofsted, which has responsibility for school inspection etc in the UK):

  • Attainment: an academic standard, typically shown by test and examination results.
  • Progress: how far a learner has moved between assessment events.
  • Achievement: takes into account the standards of attainment reached by learners and the progress they have made to reach those standards.
  • Enrichment: the extent to which a learner gains professional attitudes, meta-cognition, technical or artistic creativity, delight, intellectual flexibility, …

Without drawing distinctions among the first three objective measures, or considering what lies within the fourth and how we might recognise it, can we really go forward? We have, so far, muddled along with education – which I see as a process, not a thing – without being very clear about these things, except where school inspectors made us take notice or in university Education Departments. The conservatism of the education system, and of society’s view of it, has partially protected it because habits and practices have persisted; these lead to intangible, or unmeasured, desirable consequences because they have been saved from objectivist and industrialist thinking. Optimising student success without exploring what success is risks causing damage by attending to a narrow or inarticulate view of “success”. This is not a new problem, but it is one that analytics could make a lot worse.

The Map is not the Territory is a phrase attributed to Alfred Korzybski. It is nicely captured in this image, entitled “The Treachery of Images”, by the Belgian artist René Magritte (digital image CC-BY-ND Nad Renrel). Korzybski was interested in the limitations language and neurology impose on what is knowable, and he was strongly influenced by his experience of the First World War, which led him to explore what it was about people that could allow such a war to occur. The insight captured in “the map is not the territory” is that measurements, descriptions, and other forms of representation are not the thing itself. This seems obvious when expressed, but it is common for these representations to be acted upon as if they were the real thing. This is certainly true of education: I accept the necessity of surrogates for learning, such as measures of attainment, but the associated trap is optimising attainment as if it were the thing that mattered.

There is a second issue with optimising student success: the institutional level compared to the individual level. The marketing, evangelism and high-profile statements tend to be at the level of the institution and not the individual. There is an ethical tension here (refer also back to [3] and Dai Griffiths’ quotation above); if success is judged by a completion rate, for example, interventions will focus on marginal students, with “no hopers” and top students comparatively neglected. So long as the management/practice accommodation remains, there will be some insulation against this tendency because of the ethos of “the teacher”.

The phrase “maximise the probability of [student] success” falls into this ethical trap when applied at the group/department/institution level, whether intentionally or not, because it mis-specifies the problem by turning success into a binary outcome. It invites selection of the students most likely to attain a threshold, whereas seeking to increase the aggregated progress of individual students invites selection of those most educationally-disadvantaged, for whom there is the greatest potential for progress. There are better ways to specify aims and objectives.
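To make the distinction concrete, here is a minimal sketch (Python, with entirely synthetic and hypothetical numbers; the pass mark, score scale and “expected gain” model are assumptions for illustration, not a claim about any real institution). Under a binary completion objective, a limited intervention budget gravitates towards students just below the threshold; an aggregate-progress objective points at a quite different group.

    # Illustrative sketch only: synthetic students, hypothetical numbers.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    current = rng.normal(55, 15, n)   # hypothetical predicted attainment, 0-100 scale

    # Objective A: maximise the completion rate against a pass mark of 40.
    # With a limited budget, this favours students just below the threshold,
    # because they are the cheapest to tip over it.
    pass_mark = 40
    shortfall = np.clip(pass_mark - current, 0, None)
    priority_a = np.where((shortfall > 0) & (shortfall < 5))[0]

    # Objective B: maximise aggregate progress, assuming (purely for illustration)
    # that the scope for progress is greatest for the lowest-attaining students.
    expected_gain = np.clip(70 - current, 0, None) * 0.2
    priority_b = np.argsort(expected_gain)[::-1][:len(priority_a)]

    print("Objective A mean current score:", round(current[priority_a].mean(), 1))
    print("Objective B mean current score:", round(current[priority_b].mean(), 1))
    # The two objectives select largely different students: the binary framing
    # neglects both the most disadvantaged and the already-secure.

The point of the sketch is not the numbers but the shape of the incentive: change the objective function and you change which students the analytics tells you to attend to.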

Actually, I believe “optimise success” is fundamentally under-specified, even if we replace the idea of success with a neat objective measure of attainment, because the constraints are not specified. Optimisation maximises or minimises something with respect to limited resources by choosing the most effective method, or finds a trade-off between costs and benefits (which must somehow be measured and weighted). “Optimise success” must, therefore, be qualified to be meaningful, and this qualification is not an implementation detail; it should be right up there with the headline strategic statement.

Worried?

At this point, you might be saying “yes, but everyone knows about these issues; hardly anyone is going to do systemic learning analytics like you suggest”. I hope so, but it is a faint hope, because most of what I read and hear makes me worried. The following chart from the EDUCAUSE report on their 2012 survey underlines this impression: the people likely to be making decisions about analytics in HE are not concerned about much of what I have talked about above (with the possible exception of “HE doesn’t know how to use data to make decisions”, which could be seen as touching on my claims for soft capability).

From: “Analytics in Higher Education: Benefits, Barriers, Progress, and Recommendations” [1]

Potential Misadventure and Bad Role Models

Here are four further snippets I think are worth considering when systemic learning analytics is proposed.

For What is this a Good Metaphor?

The dashboard metaphor works for the simple display of simplifiable facts. In this case, the Nissan Leaf electrically-powered car.

The business intelligence vendors latched onto the dashboard metaphor some time ago, and the idea of a visual representation of the state of a complicated system that is simple to understand and act upon is appealing. It works for cars, but look at the image and enumerate the actions. There are not many and they are simple, arising from managing fuel against planned journeys, stopping when there is an engine fault, cancelling indicators, etc. I doubt most people even use their rev counter.

A dashboard is conceivably useful for summarising prosaic aspects of teaching and learning such as task completion, level of engagement, or progress towards attainment targets, but implemented dashboards are often cluttered and lack this simplifying characteristic. Maybe it is because they are trying to be more ambitious and to deal with that slippery character “learning”, but a 777-cockpit-style visualisation of a bundle of parameters doesn’t cut it. BI dashboards often have the same problem of clutter and complexity: they indicate design without clarity about which actions they are intended to support.

Automation and Practice

In the section on variety, I touched upon two points that I will now relate to a specific example. I do not mean to imply that this example is particularly worthy of criticism compared to similar kinds of initiative, but an example will help to show the potential for misadventure.

The example is the E2Coach (Michigan Tailoring System), which was the subject of the first talk in the symposium. The essence of E2Coach is the provision of automated guidance to students, and it is clear that the development of the system at Michigan was thorough. Let us assume that it is no more or less likely than a member of staff to give guidance that turns out to be wrong. After all, we accept that humans are fallible.

We can look at this system as relieving pressure on teaching staff by automating the provision of the same advice that they would probably have given anyway. It reduces the variety required of the teacher to manage certain aspects of the students’ study, freeing this time for other activities. These might be more valuable activities but still, the question to be asked is whether there was anything else going on in the face-to-face interactions that was valuable and is now lost. What about those serendipitous conversations, or the affective benefit of interpersonal contact? How many issues are there that would only be picked up by the sensitivity of a human being?

The point here is that there are potential pitfalls in disembodying parts of what goes on without considering how these parts relate to practice. A systemic view should consider automation within a system that includes teaching practice and explore how the whole will be changed. Hypotheses about change can be explored by observation, role-play, and possibly social simulation. Most importantly, the post-hoc effect on wider practice and student experience should be explored in addition to studying the immediate effectiveness of the automation.

The Folly of Technological Solutionism

I’ve recently discovered Evgeny Morozov and his new book, “To Save Everything, Click Here: The Folly of Technological Solutionism”. While he says much that is open to dispute, or about which I am not entirely convinced, I think he identifies a very real problem and explores it with insight. In addition to mentioning education within the first ten pages, Morozov says the following about technological solutionism, which I think pertains well to the adoption of learning analytics:

Recasting all complex social situations either as neatly defined problems with definite, computable solutions or as transparent and self-evident processes that can be easily optimized—if only the right algorithms are in place!—this quest is likely to have unexpected consequences that could eventually cause more damage than the problems they seek to address.

I call the ideology that legitimizes and sanctions such aspirations “solutionism.” I borrow this unabashedly pejorative term from the world of architecture and urban planning, where it has come to refer to an unhealthy preoccupation with sexy, monumental, and narrow-minded solutions—the kind of stuff that wows audiences at TED Conferences—to problems that are extremely complex, fluid, and contentious … solutionism presumes rather than investigates the problems that it is trying to solve, reaching “for the answer before the questions have been fully asked.” How problems are composed matters every bit as much as how problems are resolved.

This reminds me of Martin Heidegger’s comments in “The Question Concerning Technology and Other Essays” (original 1954, quotation translated by William Lovitt, 1977):

Everywhere we remain unfree and chained to technology, whether we passionately affirm or deny it. But we are delivered over to it in the worst possible way when we regard it as something neutral; for this conception of it, to which today we particularly like to do homage, makes us utterly blind to the essence of technology.

We cannot escape the fact that our technologies – which I would take to include farming and the most basic tool-use – influence the way we think about the world. Heidegger was pessimistic about where technology would lead us. So far, I think he was over-pessimistic, but that does not mean we should allow ourselves to be blind.

The “What Works” Trap

The phrase “what works” has been cropping up more over the last few years, often associated with “evidence-based policy”. For example, the UK Government published a policy paper entitled “What Works: evidence centres for social policy” in March 2013, and the USA has its What Works Clearinghouse, whose stated aim is that “…by focusing on the results from high-quality research, we try to answer the question ‘What works in education?’”.

Now, I am all for the idea that evidence is important, and I hope the link to talk of analytics is clear, but there is a large trap in the language of “what works” and in decision-making paradigms that are data-driven.

The language of “what works”, and the way it is presented, is problematical because it makes a singular “best practice” out of what should be contextualised information. For example, there is ample evidence that intelligent tutoring systems “work”. The LearnLab can demonstrate unequivocal learning gains from applying some neat artificial intelligence techniques to cognitive theories of learning. The problems arise when this evidence is taken out of context and the tool – the cognitive tutor – is applied to a context it is ill-suited to, because a policy-maker lacks a nuanced view.

Politicians especially, and senior decision-makers too, like simplicity and clarity; politicians like defensible positions, and research that shows “what works” is attractive. The key question is whether the research is defensible, whether the evidence has a sound epistemological base. In the UK, and over educational questions in schools in particular, the current view of the most influential people is that randomised controlled trials (RCTs) are the gold standard. In my view, inadequate attention is given to the cases where RCTs are unreliable. The emphasis given to them – coupled with their extensive use in medicine and the status they acquire by reflecting methods in the experimental physical sciences – leads to poor experimental design and evaluation in social and educational settings through unreflective research methods. False confidence in misleading evidence is a recipe for causing harm; intuition may be a better friend.

I have strayed into policy and government here, which is not the main focus of this piece, but I hope the possible misadventure from applying learning analytics extra-institutionally is clear. Evidence-based policy is fashionable but, too often: the scope and context of applicability is significant yet marginalised; models of cause and effect are unexplored or ignored; and there is a failure to recognise that the evidence relates to groups, not individuals, in cases where action should often be specific to individuals.

This blog post is not the space to explore this issue properly, or to argue the case, but (as elsewhere) I would like to give a call-out to some people who have described the issues rather more clearly and who explain how to deal with them.

This book by Pawson and Tilley [5] describes an alternative approach to experimental design and evaluation in the social sciences, along with some examples.

An overview of Realistic Evaluation (pdf) by Nick Tilley captures the essence of the book and is the source of the following cautionary tale about randomised trials:

The most evaluated intervention in criminal justice has been mandatory arrest for domestic violence as a means of reducing rates of repeated assault. … Police officers attending calls for service where domestic violence was reported, and where there was no serious injury, were randomly allocated one of three responses. One of these was to arrest of the perpetrator though he was not necessarily charged, the others were either to provide advice or simply to send the perpetrator away.

There was a statistically significant lower rate of repeat calls for domestic violence amongst the group where arrest occurred (10% of repeat incidents within six months) … On the basis of this finding other cities were encouraged to adopt a mandatory arrest policy in relation to domestic violence as a means of reducing repeat assaults… In six follow-up studies [in US cities] … the results were mixed. In three of them those randomly allocated mandatory arrest experienced a higher rate of repeat domestic violence than those randomly allocated alternative responses… What this means is that those cities that adopted the mandatory arrest policy on the basis of the first conclusions look as if they have increased the risks of domestic assault…

Why were there these mixed findings? Sherman suggests that they can be explained by the different community, employment and family structures in the different cities. He suggests that in stable communities with high rates of employment arrest produces shame on the part of the perpetrator who is then less likely to re-offend. On the other hand, in less stable communities with high rates of unemployment arrest is liable to trigger anger on the part of the perpetrator who is liable thus to become more violent. The effect of arrest varies thus by context. The same measure is experienced differently by those in different circumstances. It thus triggers a different response, producing a different outcome. The effectiveness of the measure is thus contingent on the context in which it is introduced. What works to produce an effect in one circumstance will not produce it in another.
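A toy simulation helps to show the mechanism (the numbers below are invented purely for illustration and are not Sherman’s data): when two contexts in which the same intervention has opposite effects are pooled, and the helpful context dominates the sample, the aggregate comparison can look like an unambiguous “what works” result.

    # Toy numbers, invented purely for illustration; not Sherman's data.
    import numpy as np

    rng = np.random.default_rng(1)

    # Two contexts with opposite true effects of the same intervention on the
    # probability of a repeat incident (lower is better).
    contexts = {
        "stable, employed":     {"n": 800, "control": 0.20, "treated": 0.08},  # helps
        "unstable, unemployed": {"n": 200, "control": 0.20, "treated": 0.30},  # harms
    }

    pooled_control, pooled_treated = [], []
    for name, c in contexts.items():
        control = rng.random(c["n"]) < c["control"]
        treated = rng.random(c["n"]) < c["treated"]
        pooled_control.append(control)
        pooled_treated.append(treated)
        print(f"{name:22s} control {control.mean():.2f}  treated {treated.mean():.2f}")

    pooled_control = np.concatenate(pooled_control)
    pooled_treated = np.concatenate(pooled_treated)
    print(f"{'pooled':22s} control {pooled_control.mean():.2f}"
          f"  treated {pooled_treated.mean():.2f}")
    # Pooled, the intervention looks like a clear win because the context in which
    # it helps dominates the sample, even though the same measure does harm elsewhere.

Nothing in the arithmetic is surprising once it is written down; the trap is that the pooled figure is the one that travels into policy.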

 

Phil Winne from Simon Fraser University has an excellent online video entitled “Improving Education” in which he describes how RCTs fail to account for the variation in individuals and the ways his team uses fine-grained data and data-mining to overcome this issue.

Avoiding the trap requires that we pay a lot more attention to what is knowable, to the experimental or data-collection environment, and to the models of cause and effect we bring to interpretation. We need to consider the evidence for the evidence – “what works for evaluation” – if we are to acquire useful knowledge (things we can act on as if they were objectively true, without having to worry whether they really are).

Rays of Sunshine

You might be forgiven for placing me in the “anti” camp on the basis of the preceding text. Far from it. I think there is a great deal of potential for learning analytics to shine a light on what we do in education and to expose where convention or hearsay is a poor guide. Decisions are often made today without recourse to evidence, while data that could be used to build evidence goes unused. Learning analytics offers us a means to understand and to recognise a wider range of learning activities, broadening the toolkit of assessment and the scope of what is valued. It has the potential to make learners and teachers better informed about the efficacy of their actions.

I am “pro” but worried that in five years someone will turn to me and say “I remember you were keen on analytics in 2012; now look at the mess we’re in”, to which “that is not what I meant” would be a lame reply. The over-arching threat to efficacious systemic learning analytics is, I think, that it follows the path of business intelligence. The habits of management reporting and dashboards, and the way BI has often been implemented without really acknowledging what people do in practice, are recognised as issues by thinkers in the business world. We should avoid this path and also be very cautious about adopting the language of KPIs for learning, teaching, and the management of education. The language of KPIs and learning metrics risks narrowing focus and falsely presuming repeatability and regularity: an industrial perspective on “systemic”. Where we do use various measures of activity or performance, we need to be wary of the limitations of what is knowable from them: we must not be epistemically blind.

I should finish with some “rays of sunshine”, some positive comments and examples of how some of the issues can be overcome.

I have already mentioned Nick Tilley and Ray Pawson, and Phil Winne, and I think they give us some good pointers. Tilley and Pawson emphasise experimental design and data collection that accounts for the mechanism of change rather than considering only the effect. They make the case that without attending to mechanism, it is likely that attention will not be paid to the relevant attributes and their relation to candidate interventions. In their case, interventions to reduce crime were of particular interest, but the same approach is applicable in institutions where the interventions have educational aims. Phil Winne shows the importance of data at scale, coupled with the design of a browser plugin to capture relevant data (not incidental data).

A counter-measure is needed to reduce the risk of misadventure and to dodge the pit-falls. I think this is possible if we, the community of people talking and writing about learning analytics, take more time to articulate the role of the method, not only the outcome, of successful systemic learning analytics. In other words, the most important thing is not the improvement in completion rates, or whatever, but how the project was undertaken. The following is my list of “hows” that I have seen work to good effect (note, I don’t say they are “what works”):

  • Organic development & prototyping, starting with spreadsheets and more manual processes before embarking on IT infrastructure or systems projects.
  • The principle of parsimony (or “Occam’s Razor”) is a statistical mantra for good reason; it is easier to judge the reliability of a simple method (see the sketch after this list).
  • A realistic approach to the scale/quality of your data is important. Do not assume that apparent bigness will counteract error and bias.
  • Design and develop with the educator IN the system.
  • Optimise the environment for learning, not the outcomes of learning. This keeps attention on the fact that learning is a process and not an outcome, and that the outcomes we measure are not real.
  • Reflect on the effect LA innovation has on practice
  • Embrace an action-research (or action-learning) approach. This is an evolutionary and iterative approach rather than a buy-(or build-)and-then-operate one.
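As a small illustration of the parsimony point, here is a sketch (Python with scikit-learn, synthetic data; the predictor names are hypothetical) comparing a simple and a flexible classifier on the kind of small, noisy dataset an institution is likely to have. On data like this the flexible model typically buys little accuracy, and the simple one is far easier to audit and explain.

    # Parsimony sketch: synthetic data, hypothetical predictors; scikit-learn assumed.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    n = 300
    engagement = rng.normal(size=n)        # e.g. standardised VLE activity
    prior_attainment = rng.normal(size=n)  # e.g. standardised entry qualifications
    noise = rng.normal(scale=1.5, size=n)
    completed = (0.8 * engagement + 0.6 * prior_attainment + noise > 0).astype(int)
    X = np.column_stack([engagement, prior_attainment])

    for model in (LogisticRegression(), GradientBoostingClassifier()):
        scores = cross_val_score(model, X, completed, cv=10)
        print(type(model).__name__, round(scores.mean(), 3), "+/-", round(scores.std(), 3))
    # With two noisy predictors and a few hundred records, the flexible model adds
    # little or nothing, while the simple model's behaviour can be inspected directly.

None of this says complex models are never warranted; it says that the burden of demonstrating reliability grows with complexity, and that is a cost to be counted.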

Of all the feted early adopters of learning analytics, Course Signals is an example of all of the above points. On the whole, though, Signals has received less praise for its methods than it should. The Purdue team are unusual, for example, in having discussed the effect of Signals on practice in their publications. It remains to be seen how successful, relatively, the institutions that buy the productised version from SunGard (now Ellucian Course Signals) will be. My contention is that what Purdue gained by doing it themselves cannot all be bought off the shelf, and that the institutions most successful at employing learning analytics systemically will combine purchased LA with an environment for innovation and change that embodies many items on the list above.

References and Further Reading

Although I usually prefer linked text, there are some references where a more conventional form of citation is to my taste.

[1] Jacqueline Bichsel, “Analytics in Higher Education: Benefits, Barriers, Progress, and Recommendations”. Available from http://www.educause.edu/library/resources/2012-ecar-study-analytics-higher-education

[2] Leah P. Macfadyen and Shane Dawson, “Numbers Are Not Enough. Why e-Learning Analytics Failed to Inform an Institutional Strategic Plan”. Educational Technology and Society, 15(3). Available from www.ifets.info/journals/15_3/11.pdf

[3] Dai Griffiths, “CETIS Analytics Series: The impact of analytics in Higher Education on academic practice”. Available from http://publications.cetis.org.uk/2012/532.

[4] Evgeny Morozov, “To Save Everything, Click Here: The Folly of Technological Solutionism”. ISBN-13: 9781610391382.

[5] Ray Pawson and Nick Tilley, “Realistic Evaluation”. ISBN-10: 0761950087.

I would like to particularly acknowledge Dai Griffiths’ influence on what I have written, and ideas I have borrowed from him (but I hope not misunderstood).

Learning Analytics Interoperability – some thoughts on a “way ahead” to get results sometime soon http://blogs.cetis.org.uk/adam/2013/10/17/learning-analytics-interoperability-some-thoughts-on-a-way-ahead-to-get-results-sometime-soon/ http://blogs.cetis.org.uk/adam/2013/10/17/learning-analytics-interoperability-some-thoughts-on-a-way-ahead-to-get-results-sometime-soon/#comments Thu, 17 Oct 2013 16:18:04 +0000 http://blogs.cetis.org.uk/adam/?p=732 The “results” of the title are the situation in which increased interoperability contributes to practical learning analytics (exploratory, experimental, or operational). The way ahead to get results sometime soon requires care; focussing on the need and the hunger without restraining ambition will surely mean a failure to be timely, successful, or both. On the other hand, although it would be best (in an ideal world) to spend a good deal of time characterising the scope of data and charting the range of methods and targets, I fear this would totally block progress. Hence a middle way seems necessary, in which a little time is spent discussing the most promising and best-understood targets, i.e. looking for the low-hanging fruit. This is a middle way between the tendencies of the salesman and the academic.

I have written a short-ish (working) document to help me to explore my own thoughts on the resolution of tension between these several factors, which I see as:

  1. The difficulty in getting standardised data out of information systems in a consistent way is a barrier to conducting learning analytics. There is a need now.
  2. There is a hunger to taste the perceived benefits of using learning analytics.
  3. The scope of data relevant to learning analytics is enormous. Reaching the minimal common ground necessary to declare “a standard”, or interoperability, across all of it is intractable with the available human resource, because experience shows that both analysing the breadth of actual practice and defining anything by consensus are slow.
  4. The range of methods and targets of learning analytics is diverse and still emerging as experience grows. This places limits on what it is rational to attempt to standardise. In other words, we don’t really know what LA is yet, and this brings the risk that any specification work may fail to define the right things.

I hope that making this work-in-progress available will stimulate thoughts in the wider community. Please feel free to comment.

The document (current version 0.2) is available:

 

Report on a Survey of Analytics in Higher and Further Education (UK) http://blogs.cetis.org.uk/adam/2013/09/16/report-on-a-survey-of-analytics-in-higher-and-further-education-uk/ http://blogs.cetis.org.uk/adam/2013/09/16/report-on-a-survey-of-analytics-in-higher-and-further-education-uk/#comments Mon, 16 Sep 2013 11:16:24 +0000 http://blogs.cetis.org.uk/adam/?p=687 We have just published the results of an informal survey undertaken by Cetis to:

  • Assess the current state of analytics in UK FE/HE.
  • Identify the challenges and barriers to using analytics.

The report is available from the Cetis publications site.

For the purpose of the survey, we defined our use of “analytics” to be the process of developing actionable insights through problem definition and the application of statistical models and analysis against existing and/or simulated future data. In practical terms, it involves trying to find out things about an organisation – its products, services and operations – to help inform decisions about what to do next.

Various domains of decision-making are encompassed in this definition: the kinds of decision that are readily understood by a business person, whatever their line of business; questions of an essentially educational character; and decisions relating to the management of research. The line of questioning was inclusive of all three perspectives. The questions asked were:

1. Which education sector do you work in (or with)?
2. What is your role in your institution?
3. In your institution which department(s) are leading institutional analytics activities and services?
4. In your institution, how aware are staff about recent developments in analytics?
5. Do the following roles use the results of statistical analysis such as correlation or significance testing rather than simple reporting of data in charts or tables?
6. Which of the following sources are used to supply data for analytics activities?
7. Which of the following data collection and analytics technologies are in place in your institution?
8. Please name the supplier/product of the principle software in use (e.g. IBM Cognos, SPSS, Tableau, Excel)
9. Which of the following staff capabilities are in place in your institution?
10a. What are the drivers for taking analytics based approaches in your institution?
10b. What are the current barriers for using of analytics in your institution?

The informal nature of the survey, coupled with the small number of responses, means that the resulting data cannot be assumed to represent the true state of affairs. Hence, no “analytics” has been done using the data and the report is written as a stimulus both for discussion and for more thorough investigation into some of the areas where the survey responses hint at an issue.

If you have any reactions – surprise, agreement, contention, etc – or evidence that helps to build a better picture of the state of analytics, please comment.

[edit: I should have mentioned the EDUCAUSE/ECAR 2012 survey of US HE for comparison – http://www.educause.edu/library/resources/2012-ecar-study-analytics-higher-education]

Learning Analytics Interoperability http://blogs.cetis.org.uk/adam/2013/05/03/learning-analytics-interoperability/ http://blogs.cetis.org.uk/adam/2013/05/03/learning-analytics-interoperability/#comments Fri, 03 May 2013 15:43:10 +0000 http://blogs.cetis.org.uk/adam/?p=660 The ease with which data can be transferred without loss of meaning from a store to an analytical tool – whether this tool is in the hands of a data scientist, a learning science researcher, a teacher, or a learner – and the ability of these users to select and apply a range of tools to data in formal and informal learning platforms are important factors in making learning analytics and educational data mining efficient and effective processes. I have recently written a report that describes, in summary form, the findings of a survey into: a) the current state of awareness of, and research or development into, this problem of seamless data exchange between multiple software systems, and b) standards and pre-standardisation work that are candidates for use or experimentation. The coverage is, intentionally, fairly superficial but there are abundant references. The paper is available in three formats:  Open Office, PDF, MS Word. If printing, note that the layout is “letter” rather than A4. Comments are very welcome since I intend to release an improved version in due course.
