Deconstructing my (dis)engagement with MOOCs part 2

Following on from my earlier post, I’ve attempted to use the classifiers outlined in the #lak13 paper on disengagement in MOOCs in the context of my own experiences. Obviously I’ve modified things a bit, as what I’m doing is more of a self-reflection on my personal context, so I’ve made the labels past tense. I’m also giving a presentation next week at the University of Southampton on the learner perspective of MOOCs, and thought that these classifications would be a good way to talk about my experiences.

Firstly, here are the MOOCs I’ve signed up for over the years (the “?” years are ones when I was aware of MOOCs but not active in them).

MOOCs I’ve taken!

Now, here they are with the course engagement labels:

My MOOC engagement with labels

And finally, here they are aligned to the trajectory labels:

My MOOC participation using trajectory labels

A big caveat: not completing, disengaging or dropping out does not mean I didn’t learn from the experience and context of each course.

More to come next week including the full presentation.

Deconstructing my own (dis)engagement with MOOCs

No educational technology conference at the moment is complete without a bit of MOOC-ery, and #lak13 was no exception. However, the “Deconstructing disengagement: analyzing learner sub-populations in massive open online courses” paper was a move on from the familiar territory of broad-brush big numbers towards a more nuanced view of some of the emerging patterns of learners across three Stanford-based Coursera courses.

The authors have created:

“a simple, scalable, and informative classification method that identifies a small number of longitudinal engagement trajectories in MOOCs. Learners are classified based on their patterns of interaction with video lectures and assessments, the primary features of most MOOCs to date…”

“…the classifier consistently identifies four prototypical trajectories of engagement.”

As I listened to the authors present the paper, I couldn’t help but reflect on my own recent MOOC experience. Their classifier labels (auditing, completing, sampling, disengaging) made a lot of sense to me: at times I have been in all four of those “states”.
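To make the method a bit more concrete, here is a rough sketch of the general idea: assign each learner an engagement label for every assessment period, then cluster the label sequences to find prototypical trajectories. To be clear, this is entirely my own illustration in Python, not the authors’ code, and the simplified labels and toy data are my own assumptions.

# My own sketch, NOT the code from the #lak13 paper: label each
# learner's activity in every assessment period, then cluster the
# label sequences into prototypical engagement trajectories.
import numpy as np
from sklearn.cluster import KMeans

# Simplified per-period labels (the paper's are finer-grained).
ON_TRACK, WATCHING, OUT = 2, 1, 0

def label_period(watched_lectures, did_assessment):
    """Assign one engagement label for one assessment period."""
    if did_assessment:
        return ON_TRACK
    if watched_lectures:
        return WATCHING
    return OUT

# Toy (watched, assessed) records for a six-week course; real input
# would come from platform clickstream and gradebook logs.
learners = [
    [(True, True)] * 6,                          # completes throughout
    [(True, True)] * 6,
    [(True, False)] * 6,                         # watches videos only
    [(True, False)] * 6,
    [(True, True)] * 2 + [(False, False)] * 4,   # disengages early
    [(True, True)] * 3 + [(False, False)] * 3,
    [(True, False)] + [(False, False)] * 5,      # samples briefly
    [(False, False)] * 3 + [(True, False)] + [(False, False)] * 2,
]

X = np.array([[label_period(w, a) for w, a in weeks] for weeks in learners])

# Four clusters, matching the four trajectories the authors report.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
for sequence, cluster in zip(X, km.labels_):
    print(sequence, "-> trajectory cluster", cluster)

As I understand it, the paper itself uses finer-grained per-period labels (“on track”, “behind”, “auditing”, “out”) and k-means over the resulting label sequences, which is what consistently surfaced the four trajectories.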

The study investigated typical Coursera courses, which mainly take the talking-head video, quiz, discussion forum, final assignment format, and suggested that using the framework to identify sub-populations of learners would allow more customisation of courses and (hopefully) more engagement and, I guess, ultimately completion.

I did find it interesting that they identified completing learners as the most active on forums, something that contradicts my (limited) experience. I’ve signed up for a number of the science-y type Coursera courses and have sampled and disengaged. Compare that to the recent #edcmooc, which again was run through Coursera but didn’t use the talking head-quiz-forum design. Although I didn’t really engage with the discussion forums (I tried, but they just “don’t do it for me”), I did feel very engaged with the content, the activities and my peers, and I completed the course.

I’ve spoken to a number of fellow MOOC-ers recently and they’re not that keen on the discussion forums either. Of course, it’s highly likely that the people I speak to are like me, and probably interact more on their blogs and twitter than in discussion forums. Maybe it’s an arts/science thing? Shorter discussions? I don’t really know, but at scale I find any discussion forum challenging, time-consuming and, to be completely honest, a bit of a waste of time.

The other finding to emerge from the study was that the completing and auditing sub-populations (auditing learners being those who just watch the videos and don’t necessarily contribute to forums or submit assignments) have the best experiences of the courses. Again, drawing on my own experiences, I can see why this could be the case. Despite dropping out of courses, the videos I’ve watched have all been “good” in the sense that they were of a high technical quality and the content was very clear. So I’ve watched and thought “oh, I didn’t know that / oh, so that’s what that means / oh, that’s what I need to do”. The latter is usually the point at which I disengage, as there is something far more pressing I need to do :-) But I have to say that the experience of actually completing MOOCs (I’m now at 3 for that) was far richer. Partly that was down to the interaction with my peers on each occasion, and partly to the cMOOC ethos of each course design.

That said, I do think the auditing, completing, disengaging, sampling labels are a very useful addition to the discourse and understanding of what is actually going on within the differing populations of learners in MOOCs.

A more detailed article on the research is available here.

Learning analytics – a bridge to the middle space? #lak13

It’s not quite a chicken-and-egg situation, but there is always a tension between technology and pedagogy. A common concern is that technology is used in education “just because it can be”, and not because it has a sound pedagogical impact. Abelardo Pardo’s keynote at the recent #lak13 conference described how learning analytics could potentially sit in the middle space between technology and teaching.

Learning analytics could provide additional bridges between these communities to help make real improvements to teaching and learning. Analytical tools can provide data-driven insights into how people interact with systems, activities and each other, and how they learn; but in turn we need the expertise of teachers to help developers and data scientists frame questions, develop potential data collection points and contextualize findings. Abelardo’s personal story about his own engagement with both pedagogy and analytics was a powerful example of this. The bridge analogy really resonated with me and many of the other delegates. I’ve often described a large part of my job as being a bridge between technology and teaching, and I hope that it is.

On the final day of the conference there was a healthy debate around what the focus of the LAK conference and community should be. On the one hand, learning analytics is a relatively new discipline; it is trying hard to establish its research credentials, and so needs to be active in producing “serious” research papers. On the other, if it really wants to live up to its own hypothesis and gain traction with practitioners and institutions, then it needs not only to provide insights but also accessible, scalable tools and methodologies. The “science bit” of some of the LAK research papers was quite challenging to put into a real-world context, even for an enthusiastic data amateur such as myself.

However, we do need valid research to underpin the discipline and to validate any claims that are being made. Extending action research projects, an approach encompassed by a number of the papers, could provide one solution to this. I’m a strong believer in action research in education: it seems a natural fit with how most teachers actually work, and it can also provide real opportunities for students to be involved in the process. (As an aside, like last year, I did get the feeling that what was being discussed was actually teaching analytics rather than learning analytics, i.e. it was still about teacher intervention and understanding, and what could be done to students.)

Part of what we have been trying to do at CETIS with our Analytics Series is to provide a bridge into this whole area. The set of case studies I’ve been working on, in particular, is specifically aimed at illustrating applications of analytics in a variety of real-world contexts. But they are not the kind of papers that would be accepted by (or submitted to) the LAK conference. One suggestion my colleague Martin Hawksey came up with during the final day of the conference was the idea of a more “relaxed” stream or session.

Perhaps something along the lines of the lightning presentations we used at both the UK SoLAR Flare meeting and the recent CETIS conference would work. This could provide a bridge between the research focus of the conference and actual practice, and give an opportunity to quickly share some of the exciting work that many people are doing but, for a variety of reasons, aren’t writing research papers on. Maybe that would bring a bit more of an experimentation / what’s-actually-happening-now / fun element to the proceedings.

If you want to catch up on the conference proceedings, I’d thoroughly recommend reading some of the excellent live blogs from Doug Clow, Sharon Slade and Myles Danson, which Doug has rather handily collated here.

I’ll also be following up with a couple more posts in the next few days, based on some of the really exciting work I saw presented at the conference.