What does “Analytics” Mean? (or is it just another vacuous buzz word?)

“Analytics” certainly is a buzz word in the business world and almost impossible to avoid at any venue where the relationship between technology and post-compulsory education is discussed, from bums-on-seats to MOOCs. We bandy words like analytics and cloud computing around rather freely, and as is so often the case with technology-related hype words, they are used by sellers of snake oil or old rope to confuse the ignorant, and by the careless to refer vaguely to something that seems to be important.

Cloud computing is a good example. While it is an occasionally useful umbrella term for a range of technologies, techniques and IT service business models, it masks differences that matter in practice. Any useful thinking about cloud must rest on a clearer understanding of the kinds of cloud computing service delivery levels and how they match the problem to be solved. To understand the very real benefits of cloud computing, you need to understand the distinct offerings; any discussion that just refers to “cloud computing” is likely to be vacuous. These distinctions are discussed in a CETIS briefing paper on cloud computing.

But is analytics like cloud computing, or is the word itself useful? Can a clear and useful meaning, or even a definition, of analytics be determined?

I believe the answer is “yes”, and the latest paper in our Analytics Series, entitled “What is Analytics? Definition and Essential Characteristics”, explores the background and discusses previous work on defining analytics before proposing a definition. It then extends this to a consideration of what it means to be analytical as opposed to merely quantitative. I realise that the snake oil and old rope salesmen will not be interested in this distinction; it is essentially a stance against uncritical use of “analytics”.

There is another way in which I believe the umbrella terms of cloud computing and analytics differ. Whereas cloud computing becomes meaningful by breaking it down and using terms such as “software as a service”, I am not convinced that a similar approach is applicable to analytics. The explanation for this may be that cloud computing is bound to hardware and software, around which different business models become viable, whereas analytics is foremost about decisions, activity and process.

Terms for kinds of analytics, such as “learning analytics”, may be useful to identify the kind of analytics that a particular community is doing, but to define such terms is probably counter-productive (although working definitions may be very useful to allow the term to be used in written or oral communications). One of the problems with definitions is the boundaries they draw. Where would the boundary between learning analytics and business analytics lie in an educational establishment? We could probably agree that some cases of analytics were on one side or the other, but not all cases. Furthermore, analytics is a developing field that certainly has not covered all that is possible and is very immature in many industries and public sector bodies. This is likely to mean revision of definitions is necessary, which rather defeats the object.

Even the use of nouns, necessary though it may be in some circumstances, can be problematical. If we both say “learning analytics”, are we talking about the same thing? Probably not, because we are not really talking about a thing but about processes and practices. There is a danger that newcomers to something described as “learning analytics” will construct quite a narrow view of “learning analytics is ….” and later declaim that learning analytics doesn’t work or that learning analytics is no good because it cannot solve problem X or Y. Such blinkered sweeping statements are a warning sign that opportunities will be missed.

Rather than say what business analytics, learning analytics, research analytics, etc. are, I think we should focus on the applications, the questions and the people who care about these things. In other words, we should think about what analytics can and cannot help us with, what it is for, etc. This is reflected in most of the titles in the CETIS Analytics Series, for example our recently-published paper entitled “Analytics for Learning and Teaching”. The point being made about avoiding definitions of kinds of analytics is expanded upon in “What is Analytics? Definition and Essential Characteristics”.

The full set of papers in the series is available from the CETIS Publications site.

Modelling Social Networks

Social network analysis has become rather popular over the last five (or so) years; the proliferation of different manifestations of the social web has propelled it from being a relatively esoteric method in the social sciences to something that has touched many people, if only superficially. The network visualisation – not necessarily a social network, e.g. Chris Harrison’s internet map – has become a symbol of the transformation in connectivity and modes of interaction that modern hardware, software and infrastructure have brought.

This is all very well, but I want more than network visualisations and computed statistics such as network density or betweenness centrality. The alluring visualisation that is the sociogram tends to leave me rather nonplussed.

How I often feel about the sociogram

Now, don’t get me wrong: I’m not against this stuff and I’m not attacking the impressive work of people like Martin Hawksey or Tony Hirst, the usefulness of tools like SNAPP or recent work on the Open University (UK) SocialLearn data using the NAT tool. I just want more and I want an approach which opens up the possibility of model building and testing, of hypothesis testing, etc. I want to be able to do this to make more sense of the data.

Warning: this article assumes familiarity with Social Network Analysis.

Tools and Method

Several months ago, I became rather excited to find that exactly this kind of approach – social network modelling – has been a productive area of social science research and algorithm development for several years and that there is now a quite mature package called “ergm” for R. This package allows its user to propose a model for small-scale social processes and to evaluate the degree of fit to an observed social network. The mathematical formulation involves an exponential to calculate probability, hence the approach is known as “Exponential Random Graph Models” (ERGM). The word “random” captures the idea that the actual social network is only one of many possibilities that could emerge from the same social forces, processes, etc., and that this randomness is captured in the method.
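
To make this concrete, the basic workflow with the ergm package runs roughly as follows. This is a minimal sketch rather than the model actually used later; it assumes a directed network object called twitter.net, with a “team” vertex attribute, has already been built.

# Minimal sketch of the ergm workflow (illustrative only).
library(ergm)                                # also loads the 'network' package

model <- ergm(twitter.net ~ edges + mutual)  # propose small-scale effects and estimate their weightings
summary(model)                               # estimated parameters and standard errors
sims <- simulate(model, nsim = 10)           # networks the fitted model considers plausible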

I have added what I have found to be the most useful papers, and a related book, to a Mendeley group; please consult these for an outline of the historical development of the ERGM method and for articles introducing the R package.

The essential idea is quite simple, although the algorithms required to turn it into a reality are quite scary (and I don’t pretend to understand enough to do proper research using the method). The idea is to think about some arguable and real-world social phenomena at a small scale and to compute what weightings apply to each of these on the basis of a match between simulations of the overall networks that could emerge from these small-scale phenomena and a given observed network. Each of the small-scale phenomena must be expressed in a way that a statistic can be evaluated for it and this means it must be formulated as a sub-graph that can be counted.

Example sub-graphs that illustrate small-scale social process.

The diagram above illustrates three kinds of sub-graph that match three different kinds of evolutionary force on an emerging network. Imagine the arrows indicate something like “I consider them my friend”, although we can use the same formalism for less personal kinds of tie such as “I rely on” or even the relation between people and resources.

  • The idea of mutuality is captured by the reciprocal relationships between A and B. Real friendship networks should be high in mutuality whereas workplace social networks may be less mutual.
  • The idea of transitivity is captured in the C-D-E triangle. This might be expressed as “my friend’s friend is my friend”.
  • The idea of homophily is captured in the bottom pair of subgraphs, which show a preference for ties to the same colour of person. Colour represents any kind of attribute, maybe a racial label for studies of community polarisation or maybe gender, degree subject, football team… This might be captured as “birds of a feather flock together”. Each of these three ideas maps onto a countable statistic, as sketched below.
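
In the ergm package each of these small-scale phenomena corresponds to a model term whose statistic is simply a count over the observed network. A rough sketch of evaluating those counts follows; the network object net and the “colour” vertex attribute are assumptions for illustration.

# Counting sub-graph statistics for an observed directed network 'net'.
library(ergm)
summary(net ~ mutual +               # reciprocated ties (mutuality)
              triangle +             # closed triads (transitivity; directed variants such as ttriple also exist)
              nodematch("colour"))   # ties between nodes sharing the "colour" attribute (homophily)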

One of the interesting possibilities of social network modelling is that it may be able to tease apart the likely roles of different social processes that produce qualitatively similar outcomes and which we cannot test directly. For example, both homophily and transitivity favour the formation of cohesive groups. A full description of research using ERGMs to deal with this kind of question is “Birds of a Feather, or Friend of a Friend? Using Exponential Random Graph Models to Investigate Adolescent Social Networks” (Goodreau, Kitts & Morris): see the Mendeley group.

A First Experiment

In the spirit of active learning, I wanted to have a go. This meant using relatively easily-available data about a community that I knew fairly well. Twitter follower networks are fashionable and not too hard to get, although the API is a bit limiting, so I wrote some R to crawl follower/friends and create a suitable data structure for use with the ERGM package.

Several evenings later I concluded that a network defined as followers of the EC-TEL 2012 conference was unsuitable. The problem seems to be that the network is not at all homogeneous while at the same time there are essentially no useful person attributes to use; the location data is useless and the number of tweets is not a good indicator of anything. Without some quantitative or categorical attribute you are forced to use models that assume homogeneity. Hence nothing I tried was a sensible fit.

Lesson learned: knowledge of person (vertex) attributes is likely to be important.

My second attempt was to consider the Twitter network between CETIS staff and colleagues in the JISC Innovation Group. In this case, I know how to assign one attribute that might be significant: team membership.
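
The data preparation was along the following lines; the file and column names are assumptions for illustration, not the actual files used.

# Sketch of building the network object from the crawled follower data.
library(network)

edges <- as.matrix(read.csv("follows.csv"))   # two columns of integer vertex ids: follower, followed
nodes <- read.csv("people.csv")               # one row per vertex id, with a 'team' column

twitter.net <- network(edges, directed = TRUE, matrix.type = "edgelist")
twitter.net %v% "team" <- nodes$team          # attach team as a vertex attribute
                                              # (assumes node order matches the vertex ids)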

Without looking at the data, it seems reasonable to hypothesise as follows:

  1. We might expect a high density network since:
    • Following in Twitter is not an indication of a strong tie; it is a low cost action and one that may well persist due to a failure to un-follow.
    • All of the people involved work directly or indirectly (in the case of CETIS) for JISC and within the same unit.
  2. We might expect a high degree of mutuality since this is a professional peer network in a university/college setting.
  3. The setting and the nature of Twitter may lead to a network that does not follow organisational hierarchy.
  4. We might expect teams to form clusters with more in-team ties than out-of-team ties, i.e. a homophily effect.
  5. There is no reason to believe any team will be more sociable than another.
  6. Since CETIS was created primarily to support the eLearning Team, we might expect there to be a preferential mixing effect.

CETIS and JISC Innovation Group Twitter follower network. Colours indicate the team and arrows show the "follows" relationship in the direction of the arrow.

Nonplussed? What of the hypotheses?

Well… I suppose it is possible to assert that this is quite a dense network which seems to show a lot of mutuality and, assuming the Fruchterman-Reingold layout algorithm hasn’t distorted reality, some hints at team cohesiveness and a few less-connected individuals. I think JISC management should be quite happy with the implications of this picture, although it should be noted that there are some people who do not use Twitter, and that this says nothing about what Twitter mediates.

A little more attention to the visualisation can reveal a little more. The graph below was created using Gephi with nodes coloured according to team again but now sized according to the eigenvector centrality measure (area proportional to centrality), which gives an indication of the influence of that person’s communications within the given network.
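
For anyone wanting to reproduce the sizing outside Gephi, eigenvector centrality is also easy to compute in R; this sketch assumes the same edge list file as above and uses the igraph package rather than the statnet family.

# Eigenvector centrality for the follower network (illustrative only).
library(igraph)

g <- graph_from_data_frame(read.csv("follows.csv"), directed = TRUE)
centrality <- eigen_centrality(g, directed = TRUE)$vector
sort(centrality, decreasing = TRUE)   # most to least central people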

Visualising the CETIS and JISC Innovation network with centrality measures. The author is among those who do not tweet.

This does, at least, indicate who is most, least and middling in centrality. Since I know most of these people, I can confirm there are no surprises.

Trying out several candidate models in order to decide on the previously enumerated hypotheses (and some others omitted for brevity) led to the following tentative conclusions, i.e. to a model that appeared to be consistent with the observed network. “Appeared to be consistent” means that my inexperienced eye considered that there was acceptable goodness of fit between a range of statistics computed on the observed network and ensembles of networks simulated using the given model and best-fit parameters.
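
In practice this kind of check is what the ergm package’s goodness-of-fit machinery supports; a sketch of the sort of commands involved (not the exact final model, which is given later) might be:

# Judging goodness of fit "by eye" against simulated ensembles.
fit <- ergm(twitter.net ~ edges + mutual + nodematch("team"))

gof.fit <- gof(fit)        # simulate networks and tabulate degree, shared-partner and
plot(gof.fit)              # geodesic distributions against the observed network
mcmc.diagnostics(fit)      # check that the MCMC estimation itself has behaved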

Keeping the same numbering as the hypotheses:

  1. ERGM isn’t needed to judge network density but the method does show the degree to which connections can adequately be put down to pure chance.
  2. There is indeed a large positive coefficient for mutuality, i.e. that reciprocal “follows” are not just a consequence of chance in a relatively dense network.
  3. It is not possible to make conclusions about organisational hierarchy.
  4. There is statistically significantly greater density within teams, i.e. team homophily seems to be affecting the network. This seems to be strongest for the Digital Infrastructure team, then CETIS, then the eLearning team, but the standard errors are too large to claim this ordering with confidence. The two other teams were considered too small to draw a conclusion.
  5. None of CETIS, the eLearning team or the Digital Infrastructure team seem to be more sociable. The two other teams were considered too small to draw a conclusion. This is known as a “main effect”.
  6. There is no statistically significant preference for certain teams to follow each other. In the particular case of CETIS, this makes sense to an insider since we have worked closely with JISC colleagues across several teams.

One factor that was not previously mentioned but which turned out to be critical to getting the model to fit was individual effects: not everyone is the same. This is the same issue as was outlined for the EC-TEL 2012 followers: heterogeneity. In the present case, however, only a minority of people stand out sufficiently to require individual-level treatment, and so it is reasonable to say that, while these are necessary for goodness of fit, they are adjustments. To be specific, there were four people who were less likely to follow and another four who were less likely to be followed. I will not reveal the names, but suffice to say that, surprising though the result was at first, it is explainable for the people in CETIS.

A Technical Note

This is largely for anyone who might play with the R package. The Twitter rules prevent me from distributing the data but I am happy to assist anyone wishing to experiment (I can provide csv files of nodes and edges, a .RData file containing a network object suitable for use with the ERGM package or the Gephi file to match the picture above).

The final model I settled on was:

twitter.net ~ edges +
sender(base=c(-4,-21,-29,-31)) +
receiver(base=c(-14,-19,-23,-28)) +
nodematch("team", diff=TRUE, keep=c(1,3,4)) +
mutual

This means:

  • edges => the random chance that A follows B, unconditional on anything else.
  • sender => only these four vertices are given special treatment in terms of their propensity to follow.
  • receiver => special treatment for propensity to be followed.
  • nodematch => consider the team attribute for teams 1, 3 and 4 and use a different parameter for each team separately (i.e. differential homophily).
  • mutual => the propensity for a person to reciprocate being followed.

And for completeness, here are the estimated model parameters from my last run. The parameter for “edges” indicates the baseline random chance and, if the other model elements are ignored, an estimate of -1.64 indicates that there is about a 16% chance of a randomly chosen A->B tie being present (the estimate = logit(p)). The interpretation of the other parameters is non-trivial, but in general terms, a randomly chosen network containing a higher value of the statistic for a given sub-graph type will be more probable than one containing a lower value when the estimated parameter is positive, and less probable when it is negative. The parameters are estimated such that the observed network has the maximum likelihood according to the model chosen.

                         Estimate Std. Error MCMC %  p-value
edges                     -1.6436     0.1580      1  < 1e-04 ***
sender4                   -1.4609     0.4860      2 0.002721 **
sender21                  -0.7749     0.4010      0 0.053583 .
sender29                  -1.9641     0.5387      0 0.000281 ***
sender31                  -1.5191     0.4897      0 0.001982 **
receiver14                -2.9072     0.7394      9  < 1e-04 ***
receiver19                -1.3007     0.4506      0 0.003983 **
receiver23                -2.5929     0.5776      0  < 1e-04 ***
receiver28                -2.5625     0.6191      0  < 1e-04 ***
nodematch.team.CETIS       1.9119     0.3049      0  < 1e-04 ***
nodematch.team.DI          2.6977     0.9710      1 0.005577 **
nodematch.team.eLearning   1.1195     0.4271      1 0.008901 **
mutual                     3.7081     0.2966      2  < 1e-04 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
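
For the record, the 16% figure quoted above comes straight from inverting the logit of the “edges” estimate, which base R can do directly:

# Baseline probability of a randomly chosen A->B tie, ignoring the other terms.
plogis(-1.6436)   # = 1/(1 + exp(1.6436)), approximately 0.162, i.e. about a 16% chance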

Outlook

The point of this was a learning experience; so what did I learn?

  1. It does seem to work!
  2. Size is an issue. Depending on the model used, a 30 node network can take several tens of seconds to either determine the best fit parameters or to fail to converge.
  3. Checking goodness of fit is not simple; the parameters for a proposed model are only determined for the statistics that are in the model, so goodness of fit testing requires consideration of statistics beyond those in the model. This can come down to “doing it by eye” with various plots.
  4. Proper use should involve some experimental design to make sure that useful attributes are available and that the network is properly sampled if not determined a priori.
  5. There are some pathologies in the algorithms with certain kinds of model. These are documented in the literature but still require care.

The outlook, as I see it, is promising but the approach is far from being ready for “real users” in a learning analytics context. In the near term I can, however, see this being applied by organisations whose business involves social learning and as a learning science tool. In short: this is a research tool that is worthy of wider application.

This is an extended description of a lightning talk given at the inaugural SoLAR Flare UK event held on November 19th 2012. It may contain errors and omissions.

Open Source and Open Standards in the Public Sector

Yesterday I attended day 1 of a conference entitled “Public Sector: Open Source” and, while Open Source Software (OSS) was the primary subject, Open Standards were very much on the agenda. I went in particular because of an interest in what the UK Government Cabinet Office is doing in this area.

I have previously been quite positive about both the information principles and the open standards consultation (blog posts here and here respectively). We provided a response to the consultation and were pleased to see the Nov 1st announcement that government bodies must comply with a set of open standards principles.

The speaker from the Cabinet Office was Tariq Rashid (IT Reform group) and we were treated to a quite candid assessment of the challenges faced by government IT, with particular reference to OSS. His assessment of the issues and how to deal with them was cogent and believable, if also a little scary.

Here are a few of the things that caught my attention.

Outsource the Brawn not the Brain

Over a period of many years the supply of well-informed and deeply technical capability in government has been depleted, such that too many decisions are made without there being an appropriate “intelligent customer”. To quote Tariq: “we shouldn’t be spending money unless we know what the alternatives are.” The particular point being made was about OSS alternatives – and they have produced an Open Source Procurement Toolkit to challenge myths and to guide people to alternatives – but the same line of argument extends to there being a poor understanding of the sources of technical lock-in (as opposed to commercial lock-in) and how chains of dependency can introduce inertia through decisions that appear innocuous on a naive analysis.

By my analysis, the Cabinet Office IT reform team are the exception that proves the general point. It is also a point that universities and colleges should be wary of as their senior management tries to cut out “expensive people we don’t really need”.

The Current Procurement Approach is Pathological

There is something slightly ironic in the fact that it takes a Tory government to seriously attack an approach which sees the greatest fraction of the incredible £21 billion p.a. central government spend on IT go to a handful of big IT houses (yes, countable on two hands).

In short: the procurement approach, which typically involves a large amount of bundling-up, reduces competition and inhibits SMEs and providers of innovative solutions as well as blocking more agile approaches.

At the intersection between the procurement approach and brain-outsourcing is the critical issue that the IT that is usually acquired lacks a long-term view of architecture; architecture becomes reduced to the scope of the tendered work and built around the benefit of the supplier.

Emphasis on Procurement

Most of the presentations placed their emphasis on the benefits of OSS in terms of procurement and cost, and this was a central theme of Tariq’s talk also. Having spent long enough consorting with OSS-heads, I found this to be rather narrow. What, for example, about the opportunities for public sector bodies to engage in acts of co-creation, either to lead or to contribute significantly to OSS projects? There are many examples of commercial entities making significant investments in developer salaries while taking a hands-off approach to governance of the open source product (e.g. IBM and the Eclipse platform).

For now, it seems, this kind of engagement is one step ahead of what is feasible in central government; there is a need for thinking to move on, to mature, from where it is now. I also suspect that there is plenty of low-hanging fruit – easy cases to make for cost savings in the near term – whereas co-creation is a longer term strategy. Tariq added that it might be only 2-3 years before government was ready to begin making direct contributions to LibreOffice, which is already being trialled in some departments.

Another of the speakers, representing sambruk (one of the partners in OSEPA, the project that organised the conference) seems to be heading towards more of a consortium model that could lead to something akin to the Sakai or Kuali model for Swedish municipality administration.

Conclusion

For all that the Cabinet Office has a fairly small budget, its gatekeeper role – it must approve all spending proposals over £5 million and has some good examples of having prompted significant savings (e.g. £12 million down to £2 million on a UK Borders procurement) – makes it a force to be reckoned with. Coupled with an attitude (as I perceive it) of wanting to understand the options and best current thinking on topics such as open source and open standards, this makes for a potent force in changing government IT.

The challenge for universities and colleges is to effect the same kind of transformation without an equivalent to the Cabinet Office and in the face of sector fragmentation (and, at best, some fairly loose alliances of sovereign city states).

How to do Analytics Right…

There is, of course, no simple recipe, no cookie-cutter template, and perfection is unattainable… but there are some good examples.

The Signals Project at Purdue University is among the most celebrated examples of analytics in Higher Education at the moment, so I was intrigued as to what the person behind it would have to say when I met him just prior to his presentation at the recent SURF Education Day (actually “Dé Onderwijsdagen 2012”; SURF is a similar organisation to JISC but in the Netherlands). This person is John Campbell, and he is not at all the slightly exhausting (to dour Brits) kind of American IT leader, full of hyperbole and sweeping statements; his is a level-headed and grounded story. It is also a story from which I think we can draw some tips on how to do analytics right. These are my take-home thoughts.

Analytics = Actionable Intelligence

Anyone who has read my previous blog posts on analytics will know I’m rather passionate about “actionable insight” as a key point about analytics, so I was naturally pleased to hear John’s similar take on the subject. We vigorously agreed that more reports are not what we need. If you can’t use the results of analysis to act differently, it isn’t worth the effort. The corollary is that we should design systems around the people who need to take action.

Take a Multi-disciplinary Approach

Putting analytics into practice (at scale) is not “just” an IT or statistical matter but requires domain knowledge of the area to be addressed and an understanding of the operational and cultural realities of the context of use. John stressed the varied team as a means to taking this kind of rounded approach. Important actors in this kind of team are people who understand how to influence change in organisational culture: politics.

You do still need good technical knowledge to avoid false insights, of course.

Take Account of “User” Psychology

The people who use the analytics – whether driving it or intended to be influenced by it – are the engine for change. This is really pointing out aspects of a multi-disciplinary approach; think soft systems, participatory design, and a team with some direct experience as a teacher/tutor/etc.

Signals has several examples, all elementary in some respects but significant by their presence:

  • teaching staff trigger the analysis and can over-ride the results (although rarely do);
  • it is emphasised to students that Signals is NOT about grades but about engagement;
  • there are helpful suggestions given to students in addition to the traffic-light and, although these come from a repertoire, the teachers have a hand in targeting these.

Start Off Manually

OK, a process based on spreadsheets and people manually pushing and pulling data between databases and analysis software is not scalable but this can be an important stage. Is it really wise to start investing money and reputation in a big system before you have properly established what you really need, what your data quality can sustain, what works in practice?

This provides an opportunity to move from research into practice, to properly adapt (rather than blindly adopt or superficially replicate) effective practice from elsewhere, etc. A manual start-off helps to expose limitations and risks (see next point).

KISS

The old adage “keep it simple stupid” (a modern vernacular expression of Occam’s razor) is not what John actually said, but he got close. Signals uses some well established and thoroughly mainstream statistical methods. It does not use the latest fancy predictive algorithms.

Why? Because fancy treatments would be like putting F1 tyres on a Citroen 2CV: worse than pointless. The data quality and a range of systematic biases* mean that the simpler method and a traffic-light result is appropriate technology. John made it clear that quoting a percentage chance of drop-out (etc.) is simply an indefensible level of precision given the data; red, amber and green with teacher over-ride is defensible.

(* VLE data, for example, does not mean the same thing across all courses/modules or teachers.)

Be Part of a Community

OK… I liked this one because it is the kind of thing that JISC and CETIS have been promoting across all of their areas of work for many years. Making sense of what is possible, and imagining and realising new ideas, works so much better when ideas, experiences and reflections are shared.

This is why we were pleased to be part of the first SoLAR Flare UK event earlier this week and hope to be working with that community for some time.

Conclusion

Many have attempted, and will attempt, to replicate the success of Signals in addressing student retention, but not all will succeed. The points I mentioned above are indicative of an approach that worked in totality; a superficial attempt to replicate Signals will probably fail. This is about matching an appropriate level of technology with organisational culture and context. It is innovation in socio-technical practice. So: doing analytics right is about holism directed towards action.

The views above are my own and not necessarily John Campbell’s.

Snapshots on the Changing Landscape of “Open …”

A little bit of text mining on a fairly large number of blogs with an educational technology (or technology enhanced learning…) focus makes for a neat set of snapshots on “open …”.

Considering the words following “open” from January 2009 to the end of October 2012 shows the following distribution (where words with a relative frequency of <2% are ignored, as are low-value words like “and”). Hence it shows a share of the dominant themes.

Share of "Open ..." from Jan 2009 to Oct 2012

Share of "Open ..." from Jan 2009 to Oct 2012

The share for “online+course” is largely attributable to MOOCs and similar, although some of it is likely to be the use of “open online” referring to something else. This probably confirms the guesswork of followers of Ed Tech fashion, but it may be a bit more of a surprise to see that open educational/content has taken such a tumble. I wonder whether some of the “open education” share has been diverted into “open online/course”. I’m also pleased to see “open standards” gaining more of a foothold but am left with a feeling that “open data” got a bit over-hyped in 2011.

About the data: 28116 blog posts were harvested and these contained 13723 uses of “open”. The blog post harvesting was done by the Mediabase and the analysis was done by the author, both as part of the EC funded TELMap project.
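
The counting step behind the chart is straightforward to sketch in R; the harvesting itself is not shown, and the object posts (a character vector holding the text of the blog posts) is an assumption for illustration.

# Counting the words that follow "open" across the harvested posts.
matches <- regmatches(posts, gregexpr("\\bopen\\s+\\w+", posts, ignore.case = TRUE))
following <- sub("^open\\s+", "", tolower(unlist(matches)))   # keep only the word after "open"
share <- sort(table(following), decreasing = TRUE)
round(100 * share / sum(share), 1)    # percentage share of each "open ..." theme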

Innovation Networks

Realising benefits from applying ICT in post-compulsory education is something that might reasonably be described as innovation, and networks of people and organisations provide an interesting means to achieve this aim. This is something that JISC and CETIS have been involved with for many years, and collectively we have learned many lessons.

I recently set out to think about both innovation and innovation networks in a more structured way, being aware that these are complex topics with considerable existing literature, and to try to capture some of the “what it means for us” in the form of an essay. This is available in PDF and DOC formats.

Exploratory Data Analysis

It doesn’t take much to trigger me into a rant about the weaknesses of reports on data and “dashboards” purporting to be “analytics” or “business intelligence”. Lots of pie charts and line graphs with added bling are as the proverbial red rag to a bull.

Until recently my response was to demand more rigorous statistics: hypothesis testing, confidence limits, tests for reverse causality (but recognising that causality is a slippery concept in complex systems). Having recently spent some time thinking about using data analysis to gain actionable insights, particularly in the setting of an educational institution, it has become clear to me that this response is too shallow. It embeds an assumption of a linear process: ask a question, operationalise it in terms of data and statistics, and crunch some numbers. As my previous post indicates, I don’t suppose all questions are approachable in this way. Actually, thinking back to the ways I’ve done a little text and data mining in the past, it wasn’t quite like this either.

The label “exploratory data analysis” captures the antithesis to the linear process. It was popularised in statistical circles by John W Tukey in the early 1960s and he used it as the title of a highly influential book. Tukey was trying to challenge a statistical community that was very focused on hypothesis testing and other forms of “confirmatory data analysis”. He argued that statisticians should do both, approaching data with flexibility and an open frame of mind, and he saw having a well-stocked toolkit of graphical methods as essential for exploration (Tukey was responsible for inventing a number of plot types that are now widely used).
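
Two of the plot types credited to Tukey are a single function call away in base R; a trivial sketch with made-up data, just by way of illustration:

# Stem-and-leaf and box-and-whisker plots, both Tukey inventions.
x <- rnorm(200, mean = 50, sd = 10)   # illustrative data only
stem(x)                               # stem-and-leaf display in the console
boxplot(x)                            # box plot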

Tukey read a paper entitled “The Technical Tools of Statistics” at the 125th Anniversary Meeting of the American Statistical Association in 1964. It anticipated the development of computational tools (of which R and RapidMiner are latter-day examples), is well worth a read and has timeless gems like:

“Some of my friends felt that I should be very explicit in warning you of how much time and money can be wasted on computing, how much clarity and insight can be lost in great stacks of computer output. In fact, I ask you to remember only two points:

  1. The tool that is so dull that you cannot cut yourself on it is not likely to be sharp enough to be either useful or helpful.
  2. Most uses of the classical tools of statistics have been, are, and will be, made by those who know not what they do.”

There is a correspondence between the open-minded and flexible approach to exploratory data analysis that Tukey advocated and the Grounded Theory (GT) Method of the social sciences. As a non-social scientist, I find GT seems to be trying a bit too hard to be a Methodology (academic disputes and all), but the premise of using both inductive and deductive reasoning and going in to a research question free of the prejudice of a hypothesis that you intend to test (prove? how often is data analysed to find a justification for a prejudice?) is appealing.

Although GT is really focussed on qualitative research, some of the practical methods that the GT originators and practitioners have proposed might be applicable to data captured in IT systems and for practitioners of analytics. I quite like the dictum of “no talk” (see the wikipedia entry for an explanation).

My take home, then, is something like: if we are serious about analytics we need to be thinking about exploratory data analysis and confirmatory data analysis and the label “analytics” is certainly inappropriate if neither is occurring. For exploratory data analysis we need: visualisation tools, an open mind and an inquisitive nature.

A Poem for Analytics

There are many traps for the unwary in the practice of analytics, which I take to be the process of developing actionable insights through problem definition and the application of statistical models. The technical traps are most obvious but the epistemological traps are better disguised.

That these traps exist and are seemingly not recognised in the commercial and corporate rhetoric around analytics worries the more philosophically-minded; Virginia Tech’s Gardner Campbell has shared some clear and well-received thoughts on the potential for damaging reductionism in Learning Analytics. I particularly like Anne Zelenka’s blogged reaction to Gardner’s LAK12 MOOC (I believe there is a recording, but Elluminate recordings don’t seem to play on Linux) and my colleague Sheila has also blogged on the topic.

I don’t see reduction as being the issue per se, but careless reductionism, and failing to remember that our models are surrogates for what might be, does worry me. Analytics does give us power for “myth busting” and a means to reduce the degree to which anecdote, prejudice and the opinion of the powerful determine action, but let us be very wary indeed.

This all reminded me of the following poem by my favourite poet and mythographer, Robert Graves. Let us be slow.

In Broken Images

He is quick, thinking in clear images;
I am slow, thinking in broken images.

He becomes dull, trusting to his clear images;
I become sharp, mistrusting my broken images,

Trusting his images, he assumes their relevance;
Mistrusting my images, I question their relevance.

Assuming their relevance, he assumes the fact,
Questioning their relevance, I question the fact.

When the fact fails him, he questions his senses;
When the fact fails me, I approve my senses.

He continues quick and dull in his clear images;
I continue slow and sharp in my broken images.

He in a new confusion of his understanding;
I in a new understanding of my confusion.

Robert Graves

Making Sense of “Analytics”

There is currently growing interest in increasing the degree to which organisations can put data from various sources to use in order to be more effective, and a growing number of strategies for doing this. The term “analytics” is frequently applied to descriptions of these situations, but often without clarity as to what the word is intended to mean. This makes it difficult to make sense of what is happening, to decide what to appropriate from other sectors, and to make creative leaps forward in exploring how to adopt analytics.

I have just completed a public draft of a paper entitled “Making Sense of Analytics: a framework for thinking about analytics” [link removed – please visit our publications site to access the final versions] in an attempt to help anyone who is grappling with these questions in relation to post-compulsory education (as I am). It does so by:

  • considering the definition of “analytics”;
  • outlining analytics in relation to research management, teaching and learning or whole-institution strategy and operational concerns;
  • describing some of the key characteristics of analytics (the Framework).

The Framework is intended to support critical evaluation of examples of analytics, whether from commerce/industry or the research community, without resorting to definition of application or product categories. The intention behind this approach is to avoid discussion of “what it is” and to focus on “what it does” and “how it does it”.

This is a draft. Please feel free to comment via this blog or directly to me. A revised version will be published in June.

This paper is the first of a series that CETIS is producing and commissioning. These will emerge during the coming months and be collected together in a unified online resource in July/August. This is referred to briefly by Sheila MacNeill in her recent post “Learning Analytics, where do you stand?”.

UK Government Open Standards Consultation – CETIS Response

Earlier this year the UK Government Cabinet Office published what I thought was a rather good set of proposals for the role of open standards in government IT. They describe it as a “formal public consultation on the definition and mandation of open standards for software interoperability, data and document formats in government IT.” There are naturally points where we have critical comments but the direction of travel is broadly one that CETIS supports. The topic of mandation is, however, one to be approached with a great deal of caution in our view.

Our full response, which should be read alongside the consultation document (which includes the questions), is available for your information.

The consultation has now been extended to June 4th 2012 following the revelation of a conflict of interest; the chair of a public consultation meeting in April was found to be also working for Microsoft. This is the latest in a long series of concerns about Microsoft lobbying reported in Computer Weekly and elsewhere. I am actually encouraged by the Cabinet Office response both to FoI requests linked to meetings with Microsoft and to this recent revelation; they do seem to be trying to do the right thing.