Small is beautiful: an antidote to big data #altc2013

Over the past year Cetis has been spending quite a bit of time exploring the context and potential of analytics within the education sector. The Cetis analytics series is our ongoing contribution to the debate. As part of our investigations we undertook a survey of UK institutions to try to get a baseline of where institutions are “at” in terms of analytics (see this post for more information).

One of the issues around analytics at the moment is ownership and responsibility. Just who in your institution is responsible for learning analytics, for example – someone in your VLE/learning technology team, the stats team, someone in IS? We’re not sure either, so we did try to hit as many areas and mailing lists as possible to get feedback. Unfortunately we didn’t get a huge response, so we can’t draw anything conclusive from it apart from the fact that there is something happening, but it’s not quite clear what or where. However, the data provides a valuable starting point/potential baseline, which Adam Cooper has written up. Adam’s post gives more information, including links to his report and the actual data.

What does seem to be clear is that despite the hype of big data, at the institutional level small data is indeed beautiful and useful. Last week at ALT-C 2013, Stephen Powell led a workshop around this theme. During the session we used the case studies from the Cetis Analytics Series and the results of the survey to stimulate discussion around data and analytics in education. There is undoubtedly still lots of interest in analytics (particularly learning analytics) within the learning technology community, as our very busy session demonstrated; however, the discussion highlighted key concerns including:

  • An overall uncertainty about what might emerge
  • Are small-scale initiatives more achievable than large-scale institutional ones?
  • Ethics – including concerns about the purposes to which analytics might be put, particularly in the hands of managers who may use it unknowingly
  • Where is the data? Who can access it? And what can they do with it?

You can also access the slides from the workshop via slideshare.

LASI-UK: a Twitter summary

The LASI-UK event held last Friday (5 July) brought over 50 people from across the UK to Edinburgh to join in the international learning analytics-fest accompanying the face-to-face Learning Analytics Summer Institute being held at Stanford University.

I’m still trying to process all the great presentations and discussions from the day, but to give a flavour of it I’ve pulled together some of the tweets from the #lasiuk backchannel as a summary. Martin Hawksey also live-blogged the morning and afternoon sessions.

I’d also like to take this opportunity to give a public thank you to Naomi Jeffery and Hannah Jones from the OU Scotland for all their hard work in organising and ensuring the smooth running of the day.

IT departments – the institutional fall guy for MOOCs?

The Fall Guy (image from IMDb: http://www.imdb.com/media/rm782014464/tt0081859)

As Martin Weller pointed out earlier this week, there is a growing number of MOOC metaphors being created. As I’ve been following the tweets from today’s “#moocapalooza” (a hashtag I think was invented by David Kernohan), a.k.a. the Open and Online Learning: Making the Most of MOOCs and Other Models conference, I think I need to add the Fall Guy to Martin’s list, particularly after reading this tweet.

I’m going to try not to rant too much in this post, and I apologise for taking this tweet at face value and outwith its original context, but . . . isn’t this just another one of those MOOC myths that twist the reality of what happens within institutions to suit the “education is broken, we must build something else” mindset? As Martin Hawksey and Lorna Campbell both said in response to David’s tweet, it’s not the systems that are the problem.

I’m going to stick my neck out (not too far) and say that every technology you need to run a MOOC is available within every university. I’ve not seen anything in my adventures in MOOC-land that has made me think “oh wow, wish we could have one of those back in the non-MOOC world”. There are VLEs, blogs and wikis aplenty. And IT departments do a sterling job in keeping these running for all that “non-MOOC stuff” that universities do. You know, the dull and boring things you need for “traditional” teaching.

Yesterday, during a webinar on analytics and assessment and feedback, Rachel Forsyth (MMU) shared some of their learning system analytics data. Since the beginning of this year they’ve had over 8 million hits on their mobile interface, which allows students to access key information like assessment marks, timetables and reading lists. At key points in the year they have over 100,000 assignments being submitted electronically. I suspect many institutions are working at this scale. So I don’t think it’s a question of IT departments not being up to delivering MOOCs; I think it’s more that they have quite a lot to do already, and adding another potentially x000,000 users is not something that can be undertaken lightly, or without cost implications.

Investing in internal IT resources isn’t seen as a key part of MOOC development strategy. Why would it be, when Coursera et al. have been able to get money to build systems? In many ways using an external platform like FutureLearn is a very sensible option. It means that experiments with MOOCs can take place without putting additional strain on existing resources. We all know, or should do by now, that there’s no such thing as a free MOOC, and that includes the infrastructure they sit within. So let’s not let another myth develop that the HE sector doesn’t have the technology or the ability to deliver MOOCs. It does; it’s just that it’s already working at capacity delivering its day-to-day business.

Analytics in UK Further and Higher Education Survey

Over the past few months, we at Cetis have been involved in a number of analytics-related activities, most notably our Analytics Series of papers and case studies. Although we know there are pockets of really exciting developments here in the UK, we are keen to find out more about what is actually happening in our universities and colleges. In order to give us (and the community) a more accurate insight, we are launching our Analytics in UK Further and Higher Education survey. From teaching and learning to the library to registry and business intelligence, we need to hear from you!

The survey is quite short (12 questions) and has been designed to allow us to undertake a “lite” benchmark of activity in the UK sector. We’d really appreciate it if you could take 10 minutes or so to give us your feedback. The survey will stay open until June 16. Once we have all the data we will of course publish the results. We will be sharing our initial analysis of the data at a session at this year’s ALT-C.

The survey can be accessed here; please feel free to pass the link on to any relevant colleagues.

Learning Analytics for Assessment and Feedback Webinar, 15 May

**update 16 May**
Link to session recording

Later this week I’ll be chairing a (free) webinar on Learning Analytics for Assessment and Feedback, featuring work from three projects in the current Jisc Assessment and Feedback Programme. I’m really looking forward to hearing first hand about the different approaches being developed across the programme.

“The concept of learning analytics is gaining traction in education as an approach to using learner data to gain insights into different trends and patterns but also to inform timely and appropriate support interventions. This webinar will explore a number of different approaches to integrating learning analytics into the context of assessment and feedback design; from overall assessment patterns and VLE usage in an institution, to creating student facing workshops, to developing principles for dashboards.”

The presentations will feature current thinking and approaches from teams from the following projects:
  • TRAFFIC, Manchester Metropolitan University
  • EBEAM, University of Huddersfield
  • iTeam, University of Hertfordshire

The webinar takes place Wednesday 15 May at 1pm (UK time) and is free to attend. A recording will also be available after the session. You can register by following this link.

Deconstructing my (dis)engagement with MOOCs part 2

Following on from my earlier post, I’ve attempted to use the classifiers outlined in the #lak13 paper on disengagement in MOOCs in the context of my own experiences. Obviously I’ve modified things a bit, as what I’m doing is more of a self-reflection on my personal context, so I’ve made the labels past tense. I’m also doing a presentation next week at the University of Southampton on the learner perspective of MOOCs, and thought that these classifications would be a good way to talk about my experiences.

Firstly, here are the MOOCs I’ve signed up for (the “?” years are when I was aware of MOOCs but not active in them).

MOOCs I've took!

Now with the course engagement labels

My MOOC engagement with labels

And finally aligned to trajectory labels

My MOOC participation using trajectory labels

A big caveat: not completing, disengaging or dropping out does not mean I didn’t learn from the experience and context of each course.

More to come next week including the full presentation.

Deconstructing my own (dis)engagement with MOOCs

No educational technology conference at the moment is complete without a bit of MOOC-ery, and #lak13 was no exception. However, the “Deconstructing disengagement: analyzing learner sub-populations in massive open online courses” paper was a move on from the familiar territory of broad-brush big numbers towards a more nuanced view of some of the emerging patterns of learners across three Stanford-based Coursera courses.

The authors have created:

“a simple, scalable, and informative classification method that identifies a small number of longitudinal engagement trajectories in MOOCs. Learners are classified based on their patterns of interaction with video lectures and assessments, the primary features of most MOOCs to date . . .”

“. . . the classifier consistently identifies four prototypical trajectories of engagement.”

As I listened to the authors present the paper I couldn’t help but reflect on my own recent MOOC experience. Their classifier labels (auditing, completing, sampling, disengaging) made a lot of sense to me. At times I have been in all four “states” of auditing, completing, disengaging and sampling.

The study investigated typical Coursera courses, which mainly take the talking-head video, quiz, discussion forum and final assignment format. It suggested that using the framework to identify sub-populations of learners would allow more customisation of courses and (hopefully) more engagement and, I guess, ultimately completion.
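Purely as an illustration (and definitely not the authors’ code), here is a minimal Python sketch of the kind of classification the paper describes: each learner gets a simple engagement state per assessment period based on whether they watched the lecture videos and/or submitted the assessment, and the resulting sequences are then clustered into a small number of trajectories. The function names, the 0/1/2 state encoding and the use of k-means are all my assumptions; mapping the resulting clusters onto labels like “completing”, “auditing”, “disengaging” or “sampling” would still be a manual interpretation step.

```python
# Illustrative sketch only (not the LAK13 authors' code): cluster learners into
# engagement trajectories from per-period interaction with videos and assessments.
# The state encoding, function names and use of k-means are my own assumptions.
import numpy as np
from sklearn.cluster import KMeans

def period_state(watched_videos: bool, submitted_assessment: bool) -> int:
    """Encode one assessment period: 0 = no activity, 1 = videos only, 2 = submitted."""
    if submitted_assessment:
        return 2
    if watched_videos:
        return 1
    return 0

def engagement_trajectories(video_log, assessment_log, n_periods, n_clusters=4):
    """video_log / assessment_log map learner id -> set of period indices with activity.
    Returns the learner ids and a cluster label per learner."""
    learners = sorted(set(video_log) | set(assessment_log))
    X = np.array([
        [period_state(p in video_log.get(l, set()),
                      p in assessment_log.get(l, set()))
         for p in range(n_periods)]
        for l in learners
    ])
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    return learners, labels

# Toy example: learner "a" submits everything, learner "b" samples week 1 then stops.
videos = {"a": {0, 1, 2}, "b": {0}}
assessments = {"a": {0, 1, 2}, "b": set()}
print(engagement_trajectories(videos, assessments, n_periods=3, n_clusters=2))
```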

I did find it interesting that they identified that completing learners were most active on forums, something that contradicts my (limited) experience. I’ve signed up for a number of the science-y type Coursera courses and have sampled and disengaged. Compare that to the recent #edcmooc, which again was run through Coursera but didn’t use the talking-head/quiz/forum design. Although I didn’t really engage with the discussion forums (I tried, but they just “don’t do it for me”), I did feel very engaged with the content, the activities and my peers, and I completed the course.

I’ve spoken to a number of fellow MOOC-ers recently and they’re not that keen on the discussion forums either. Of course, it’s highly likely that the people I speak to are like me and probably interact more on their blogs and Twitter than in discussion forums. Maybe it’s an arts/science thing? Shorter discussions? I don’t really know, but at scale I find any discussion forum challenging, time-consuming and, to be completely honest, a bit of a waste of time.

The other finding to emerge from the study was that the completing and auditing sub-populations (the latter being those who just watch the videos and don’t necessarily contribute to forums or submit assignments) have the best experiences of the courses. Again, drawing on my own experiences, I can see why this could be the case. Despite dropping out of courses, the videos I’ve watched have all been “good” in the sense that they were of a high technical quality and the content was very clear. So I’ve watched and thought “oh, I didn’t know that / oh, so that’s what that means / oh, that’s what I need to do”. The latter is usually the point at which I disengage, as there is something far more pressing I need to do :-) But I have to say that the experience of actually completing MOOCs (I’m now at 3 for that) was far richer. Partly that was down to the interaction with my peers on each occasion, and the cMOOC ethos of each course design.

That said, I do think the auditing, completing, disengaging, sampling labels are a very useful addition to the discourse and understanding of what is actually going on within the differing populations of learners in MOOCs.

A more detailed article on the research is available here.

Learning analytics – a bridge to the middle space? #lak13

It’s not quite a chicken-and-egg situation, but there is always a tension between technology and pedagogy. A common concern is that technology is used in education “just because it can be” and not because it has a sound pedagogical impact. Abelardo Pardo’s keynote at the recent #lak13 conference described how learning analytics could potentially sit in the middle space between technology and teaching.

Learning analytics could provide additional bridges between the two communities to help make real improvements to teaching and learning. Analytical tools can provide data-driven insights into how people interact with systems, activities and each other, and how they learn; but in turn we need the expertise of teachers to help developers and data scientists frame questions, develop potential data collection points and contextualise findings. Abelardo’s personal story about his own engagement with both pedagogy and analytics was a powerful example of this. The bridge analogy really resonated with me and many of the other delegates. I’ve often described a large part of my job as being a bridge between technology and teaching, and indeed I hope it is.

On the final day of the conference there was a healthy debate around what the focus of the LAK conference and community should be. On the one hand, learning analytics is a relatively new discipline. It is trying hard to establish its research credentials, and so needs to be active in producing “serious” research papers. On the other, if it really wants to live up to its own hypothesis and gain traction with practitioners and institutions, then it needs not only to provide insights but also accessible, scalable tools and methodologies. The “science bit” of some of the LAK research papers was quite challenging to put into a real-world context, even for an enthusiastic data amateur such as myself.

However, we do need valid research to underpin the discipline and to validate any claims that are being made. Extending action research projects, an approach encompassed by a number of papers, could provide one solution to this. I’m a strong believer in action research in education: it seems a natural fit with how most teachers actually work, and it can also provide real opportunities for students to be involved in the process. (As an aside, like last year, I did get the feeling that what was being discussed was actually teaching analytics rather than learning analytics, i.e. it was still about teacher intervention, understanding and what could be done to students.)

Part of what we have been trying to do at CETIS with our Analytics Series is to provide a bridge into this whole area. The set of case studies I’ve been working on in particular is specifically aimed at illustrating applications of analytics in a variety of real-world contexts. But they are not the kind of papers that would be accepted by (or submitted to) the LAK conference. One suggestion my colleague Martin Hawksey came up with during the final day of the conference was the idea of a more “relaxed” stream/session.

Perhaps something along the lines of the lightning presentations we used at both the UK SoLAR Flare meeting and the recent CETIS conference. This could provide a bridge between the research focus of the conference and actual practice, and give an opportunity to quickly share some of the exciting work that many people are doing but, for a variety of reasons, aren’t writing research papers on. Maybe that would bring a bit more of an experimentation/what’s-actually-happening-now/fun element to the proceedings.

If you want to catch up on conference proceedings, I’d thoroughly recommend reading some of the excellent live blogs from Doug Clow, Sharon Slade and Myles Danson, which Doug has rather handily collated here. 

I’ll also be following up with a couple more posts in the next few days based on some of the really exciting work I saw presented at the conference.

Acting on Assessment Analytics – new case study

Despite the hype around it, getting started with learning analytics can be a challenge for the everyday lecturer. What can you actually do with data once you get it? As more “everyday” systems (in particular online assessment tools) are able to provide data and/or customised reports, it is getting easier to start applying analytics approaches in teaching and learning.

The next case study in our Analytics series focuses on the work of Dr Cath Ellis and colleagues at the University of Huddersfield. It illustrates how they are acting on the data from their e-submission system, not only to enhance and refine their feedback to students, but also to help improve their approaches to assessment and overall curriculum design.  
 
At the analytics session at #cetis13, Ranjit Sidhu pointed out that local data can be much more interesting and useful than big data. This certainly rings true for teaching and learning. Using very local data, Cath and her colleagues are developing a workshop approach to sharing generic assessment data with students in a controlled and emotionally secure environment. The case study also highlights issues around data handling skills and the need for more evidence of successful interventions through using analytics.

You can access the full case study here.

We are always looking for potential case studies to add to our collection, so if you are doing some learning analytics-related work and would be willing to share your experiences in this way, then please get in touch.

Avoiding getting caught in the data slick: thoughts from the Analytics and Institutional Capabilities session, #cetis13

Data, data everywhere, but what do we actually do with it? Do we need “big” data in education? What is it we are trying to find out? What is our ROI at both institutional and national levels? These are just some of the questions raised at the Analytics and Institutional Capabilities session at #cetis13 last week.

Is data our new oil? asked Martin Hawksey in his introduction to the session. And if, as many seem to think, it is, do we really have the capabilities to “refine” it properly? How can we ensure that we aren’t putting the equivalent of petrol into a diesel engine? How can we ensure that institutions (and individuals) don’t end up getting trapped in a dangerous slick of data? Are we ensuring that everyone (staff and students) is developing the data literacy skills needed to use and ultimately understand the visualisations we can produce from data?

Bird in an oil slick

Ranjit Sidhu (Statistics into Decisions) gave an equally inspiring and terrifying presentation around the hype of big data. He pointed out that in education “local data”, not “big data”, is really where we should be focusing our attention, particularly in relation to our core business of attracting students. In relation to national-level data, he also questioned whether the ROI of some “quite big” national data collection activities, such as the KIS, is worth it. Judging by the embarrassingly low traffic figures he showed us for the UniStats site, it would appear not. We may have caused a mini spike in the hits for one day in March :-)

However, there are people who are starting to ask the right questions and use their data in meaningful ways. A series of lightning talks highlighted a cross-section of approaches to using institutional data. This was followed by three inspiring talks from Jean Mutton (University of Derby), Mark Stubbs (MMU) and Simon Buckingham Shum (OU). Jean outlined the work she and her team have been doing at Derby on enhancing the student experience (more information on this is available through our new case study); Mark then gave a review of the work they have been doing around deeper exploration of NSS returns data and their VLE data. Both Jean and Mark commented that their work started without them actually realising they were “doing analytics”. Mark’s analytics cycle diagram was really useful in illustrating their approach.

Screenshot of Mark's analytics cycle diagram

Simon, on the other hand, of course very much knew that he was “doing analytics”, and gave an overview of some of the learning analytics work currently being undertaken at the OU, including a quick look at some areas where FutureLearn could potentially be heading.

Throughout all the presentations, the key motivator has been, and continues to be, framing and then developing the “right” questions to get the most out of data collection activity and analysis.

More information, including links to the slides from the presentations, is available on the CETIS website.