What can I do with my educational data? (#lak13)

Following on from yesterday’s post, another “thought bomb” that has been running around my brain is something far closer to the core of Audrey’s “who owns your educational data?” presentation. Audrey was advocating the need for student-owned personal data lockers (see screenshot below). This idea also chimes with the work of the Tin Can API project and, closer to home in the UK, the MiData project. The latter is concerned with more generic data around utility and mobile phone usage rather than educational data, but the data locker concept is key there too.

Screen shot of Personal Education Data Locker (Audrey Watters)

As you will know, dear reader, I have turned into something of a MOOC-aholic of late. I am becoming increasingly interested in how I can make sense of my data and network connections in and across the courses I’m participating in and, of course, how I can access and use the data I’m creating in and across these “open” courses.

I’m currently not a very active member of the LAK13 learning analytics MOOC, but the first activity for the course is, I hope, going to help me frame some of the issues I’ve been thinking about in relation to my educational data and, in turn, my personal learning analytics.

Using the framework for the first assignment/task for LAK13, this is what I am going to try and do.

1. What do you want to do/understand better/solve?

I want to compare what data about my learning activity I can access across 3 different MOOC courses and the online spaces I have interacted in on each, and see if I can identify any potentially meaningful patterns or networks which would help me reflect on and better understand my learning experiences. I also want to explore how/if learning analytics approaches could help me in terms of contributing to my personal learning environment (PLE) in relation to MOOCs, and if it is possible to illustrate the different “success” measures from each course provider in a coherent way.

2. Defining the context: what is it that you want to solve or do? Who are the people that are involved? What are social implications? Cultural?

I want to see how/if I can aggregate my data from several MOOCs in a coherent open space, and explore which learning analytics approaches can help a learner contextualise their educational experiences across a range of platforms.

This is mainly an experiment using myself and my data. I’m hoping that it might start to raise issues from the learner’s perspective which could have implications for course design, access to data, and thoughts around student-created and owned eportfolios and/or data lockers.

3. Brainstorm ideas/challenges around your problem/opportunity. How could you solve it? What are the most important variables?

I’ve already done some initial brainstorming around using SNA techniques to visualise networks and connections in the Cloudworks site which the OLDS MOOC uses. Tony Hirst has (as ever) pointed the way to some further exploration. And I’ll be following up on Martin Hawksey’s recent post about discussion group data collection.

I’m not entirely sure about the most important variables just now, but one challenge I see is actually finding myself/my data in a potentially huge data set, and finding useful ways to contextualise myself using those data sets.
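
To make this a bit more concrete, here’s a minimal sketch of the kind of SNA starting point I have in mind. It assumes a hypothetical “follower,followed” CSV export rather than the real Cloudworks data model, and the file name and username are placeholders, so treat it as a thought experiment rather than something working against an actual API.

```python
# Minimal sketch (hypothetical export, not the real Cloudworks data model):
# build a directed "who follows whom" graph and pull out my own ego network.
import csv
import networkx as nx

G = nx.DiGraph()
with open("cloudworks_follows.csv", newline="") as f:   # hypothetical CSV: follower,followed
    for follower, followed in csv.reader(f):
        G.add_edge(follower.strip(), followed.strip())

me = "sheilmcn"  # placeholder username
ego = nx.ego_graph(G.to_undirected(), me, radius=1)
print(f"{me}: {ego.number_of_nodes() - 1} direct connections")
print("most connected neighbours:",
      sorted(ego.degree, key=lambda x: x[1], reverse=True)[:5])
```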

4. Explore potential data sources. Will you have problems accessing the data? What is the shape of the data (reasonably clean? or a mess of log files that span different systems and will require time and effort to clean/integrate?) Will the data be sufficient in scope to address the problem/opportunity that you are investigating?

The main issue I see just now is going to be collecting the data, but I believe there is some data that I can access about each MOOC. The MOOCs I have in mind are primarily #edc (Coursera) and #oldsmooc (OU). One seems to be far more open in terms of potential data access points than the other.

There will be some cleaning of data required, but I’m hoping I can “stand on the shoulders of giants” and re-use some Google spreadsheet goodness from Martin.
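
As a rough illustration of the sort of aggregation I mean (and very much standing on those giants’ shoulders), the sketch below stacks my own activity from a couple of per-platform exports into a single timeline. The file names and column names are entirely hypothetical; each platform’s real export would need its own mapping.

```python
# Sketch only: merge my activity from hypothetical per-platform CSV exports
# into one timeline kept in a space of my own choosing.
import pandas as pd

sources = {
    "edcmooc (Coursera)": "edcmooc_forum_activity.csv",    # hypothetical export
    "oldsmooc (OU)": "oldsmooc_cloudworks_activity.csv",   # hypothetical export
}

frames = []
for course, path in sources.items():
    df = pd.read_csv(path, parse_dates=["timestamp"])
    df = df[df["user"] == "sheilmcn"]          # keep just my own activity
    df["course"] = course
    frames.append(df[["timestamp", "course", "activity_type"]])

my_timeline = pd.concat(frames).sort_values("timestamp")
print(my_timeline.groupby(["course", "activity_type"]).size())
```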

I’m fairly confident that there will be enough data for me to at least understand the challenges involved in letting learners try to make better sense of their data.

5. Consider the aspects of the problem/opportunity that are beyond the scope of analytics. How will your analytics model respond to these analytics blind spots?

This project is far wider than just analytics, as it will hopefully help me to make some more sense of the potential for analytics to help me, as a learner, make sense of and share my learning experiences in one place that I choose. Already I see Coursera, for example, trying to model my interactions on their courses into a space they have designed – and I don’t really like that.

I’m thinking much more about personal aggregation points/sources than the creation of an actual data locker. However it may be that some existing eportfolio systems could provide the basis for that.

As ever I’d welcome any feedback/suggestions.

#edcmooc week 3 – computer says no

It’s been a very reflective week for me in #edcmooc as we move to the “being human” element of the course. In week three we’re being specifically asked:

“what does it mean to be human within a digital culture, and what does that mean for education?”

and more specifically:

“Who or what, in your view, will define what it means to be human in the future? Who or what defines it now? These are crucial questions for those of us engaged in education in all its forms, because how we define ‘desirable humanity’ will inform at the deepest level our understanding of how and why education might be conducted and why it matters. Paying attention to online education foregrounds these issues in a new way, helping us look at them afresh.”

Fantastically chin-stroking stuff :-) As usual there is a good range of readings and videos. David Hopkins has written an excellent critique.

I’ve had quite a surprisingly emotional response to all of this and I’ve been finding it difficult to articulate my thoughts. Maybe it’s because the resources and questions are making me question my own humanity. Educational technology is central to my job and takes up a huge amount of my life, and I am a fairly optimistic wee soul, so perhaps what’s been nagging away at me is a fear that I am contributing, without thinking of the consequences, towards a horribly dystopian future where those of us who can afford it are bio-engineered up to the max, controlled by technology which allows us to think humans are still in control whilst it plots humanity’s demise.

On the other hand, my other reaction is that this is all a load of academic nonsense, which allows people to have never-ending circular discussions whilst in the ‘real world’ the rest of humanity just gets on with it. We’re all going to die anyway and our species is just a blip in the history of our planet. For some reason this phrase from Little Britain keeps running through my head; it seems to sum up the wonderful way that humans can subvert technology.

As I’ve been reflecting on my experiences with technology in an educational context, I have to say that overall it has been the human element which has been, and continues to be, the most rewarding and most innovative. I’ve seen online education offer alternative access to education at all levels, from the most under-privileged to the most privileged. Technology has allowed me to connect with a range of wonderfully intelligent people in ways I would never have imagined even less than 10 years ago. It has in many ways strengthened my sense of being human, which I think is fundamentally about communication. I still get very frustrated that there isn’t equal investment in human development every time a new system/technology is bought by a school/college/university, but I’m heartened by the fact that almost every project I know of emphasises the need for time to develop human relationships for technology to be a success and bring about change.

Ghosts in the machine? #edcmooc

Following on from last week’s post on the #edcmooc, the course itself has turned to explore the notion of MOOCs in the context of utopian/dystopian views of technology and education. The questions I raised in that post are still running through my mind; however, they were at a much more holistic than personal level.

This week, I’ve been really trying to think about things from my student (or learner) point of view. Are MOOCs really changing the way I engage with formal education systems? On the one hand yes, as they are allowing me (and thousands of others) to get a taste of courses from well-established institutions. At a very surface level, who doesn’t want to say they’ve studied at MIT/Stanford/Edinburgh? As I said last week, there’s no fee, so there’s less pressure in one sense to explore new areas, and if they don’t suit you, there’s no issue in dropping out – well, not for the student at this stage anyway. Perhaps in the future, through various analytical methods, serial drop-outs will be recognised by “the system” and not be allowed to join courses, or have to start paying to be allowed in.

But on the other hand, is what I’m actually doing really different from what I did at school, when I was an undergraduate, or as a student on “traditional” online, distance courses? Well no, not really. I’m reading selected papers and articles, watching videos, contributing to discussion forums – nothing I’ve not done before, or presented to me in a way that I’ve not seen before. The “go to class” button on the Coursera site does make me giggle tho’, as it’s just soo American and every time I see it I hear a disembodied American voice. But I digress.

The element of peer review for the final assignment for #edcmooc is something I’ve not done as a student, but it’s not a new concept to me. Despite more information on the site and from the team this week, I’m still not sure how this will actually work, and whether I’ll get my certificate of completion for just posting something online or whether there is a minimum number of reviews I need to get. Like many other fellow students, the final assessment is something we have been concerned about from day 1, which seemed to come as a surprise to some of the course team. During the end-of-week-1 Google hangout, the team did try to reassure people, but surely they must have expected that we were going to go and look at week 5 and the “final assessment” almost before anything else? Students are very pragmatic: if there’s an assessment we want to know the where, when, what, why, who and how as soon as possible. That’s how we’ve been trained (and I use that word very deliberately). Like thousands of others, my whole education career from primary school onwards centred around final grades and exams – so I want to know as much as I can about what to do to pass and get that certificate.

That overriding response to any kind of assessment can very easily over-ride the other softer (but just as worthy) reasons for participation, and the potential of social media to connect and share on an unprecedented level.

As I’ve been reading and watching more dystopian than utopian material, and observing the general MOOC debate taking another turn with the pulling of the Georgia Tech course, I’ve been thinking a lot about the whole experimental nature of MOOCs. We are all just part of a huge experiment just now, students and course teams alike. But we’re not putting very many new elements into the mix, and our pre-determined behaviours are driving our activity. We are in a sense all just ghosts in the machine. When we do try to do something different, participation can drop dramatically. I know that I, and lots of my fellow students on #oldsmooc, have struggled to actually complete project-based activities.

The community element of MOOCs can be fascinating, and the use of social network analysis can help to give some insights into activity, patterns of behaviour and connections. But with so many people on a course, is it really possible to make and sustain meaningful connections? From a selfish point of view, having my blog picked up by the #edcmooc news feed has greatly increased my readership and, more importantly, I’m getting comments, which is more meaningful to me than hits. I’ve tried to read other posts too, but in the first week it was really difficult to keep up, so I’ve fallen back to a very pragmatic, reciprocal approach. With so much going on you need to have strategies to cope, and there is quite a bit of activity around developing a MOOC survival kit which has come from fellow students.

As the course develops, the initial euphoria and social web activity may well be slowing down. Looking at the Twitter activity, it does look like it is on a downward trend.

#edcmooc Twitter activity diagram
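
For anyone wanting to check a trend like this for themselves, something as simple as counting tweets per day from an archive will do. The sketch below assumes a TAGS-style spreadsheet exported to CSV; the file name and the “created_at” column are assumptions, so adjust to whatever the archive actually contains.

```python
# Rough sketch: tweets per day from a (hypothetical) CSV export of a TAGS archive.
import pandas as pd

tweets = pd.read_csv("edcmooc_tags_archive.csv", parse_dates=["created_at"])
daily = tweets.set_index("created_at").resample("D").size()

print(daily.tail(14))   # tweets per day over the last two weeks
print("week-on-week change:",
      daily.tail(7).sum() - daily.tail(14).head(7).sum())
```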

Monitoring this level of activity is still a challenge for the course team and students alike. This morning my colleague Martin Hawksey and I were talking about this, and speculating that maybe there are valuable lessons we in the education sector can learn from the commercial sector about managing “massive” online campaigns. Martin has also done a huge amount of work aggregating data and I’d recommend looking at his blogs. This post is a good starting point.

Listening to the Google hangout session run by the #edcmooc team, they again seemed to have underestimated the time-sink reality of having 41,000 students in a course. Despite being upfront about not being everywhere, the temptation to look must be overwhelming. This was also echoed in the first couple of weeks of #oldsmooc. Interestingly, this week there are teaching assistants and students from the MSc course actively involved in the #edcmooc.

I’ve also been having a play with the data from the Facebook group. I’ve had a bit of interaction there, but not a lot. So despite it being a huge group, I don’t get the impression that, apart from posting links to blogs for the news feed, there is a lot of activity or many connections. This seems to be reflected in the graphs created from the data.

#edc Facebook group friends connections


This is a view based on friends connections. NB it was very difficult for a data novice like me to get any meaningful view of this group, but I hope that this gives the impression of the massive number of people and relative lack of connections.

There are a few more connections which can be drawn from the interactions data, and my colleague David Sherlock managed to create a view where some clusters are emerging – but with such a huge group it is difficult to read that much into the visualisation, apart from the fact that there are lots of nodes (people).

#edcmooc Facebook group interactions
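
For the curious, the sort of clustering David was surfacing can be approximated with an off-the-shelf community detection routine. The sketch below assumes a hypothetical “source,target” CSV of interactions (who commented on or liked whom) rather than whatever format the Facebook export actually uses.

```python
# Sketch: treat the (hypothetical) interactions export as an undirected graph
# and look for clusters with greedy modularity community detection.
import csv
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
with open("edcmooc_fb_interactions.csv", newline="") as f:  # hypothetical CSV: source,target
    for source, target in csv.reader(f):
        G.add_edge(source.strip(), target.strip())

communities = greedy_modularity_communities(G)
print(f"{G.number_of_nodes()} people, {len(communities)} clusters detected")
for i, c in enumerate(sorted(communities, key=len, reverse=True)[:5]):
    print(f"cluster {i}: {len(c)} members")
```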


I don’t think any of this is unique to #edcmooc. We’re all just learning how to design/run and participate at this level. Technology is allowing us to connect and share at a scale unimaginable even 10 years ago, if we have access to it. NB there was a very interesting comment on my blog about us all being digital slaves.

Despite the potential affordances of access at scale, it seems to me we are increasingly just perpetuating an existing system if we don’t take more time to understand the context and consequences of our online connections and communities. I don’t need to connect with 40,000 people, but I do want to understand more about how and why I could/do. That would be a really new element to add to any course, not just MOOCs (and not something that’s just left to a course specifically about analytics). Unless that happens, my primary driver will be that “completion certificate”. In this instance, and many others, to get that I don’t really need to make use of the course community. So I’m just perpetuating an existing system where I know how to play the game, even if its appearance is somewhat disguised.

When learning means teaching, and learner means teacher – thoughts on #learnersrights

Like many others, I was introduced to “A Bill of Rights for Learners in a Digital Age” yesterday. And like a few others, I was slightly confused by it. I think there is maybe a tendency for us in the UK to be slightly skeptical of anything claiming to be a “bill of rights”. It’s a bit too American, too explicit for those of us used to having an unwritten constitution underpinning our version of democracy.

After reading the document, lots of questions ran through my head: what can a learner do with this constitution? How does it protect their rights? Who/how/what/why signs up for it? Which brought me to thinking: is this really for learners? Or is it actually for teachers/educational institutions/governments, in terms of giving them a framework for providing the “right” context for learning to take place? Is this actually a teachers’/teaching bill of rights?

Perhaps because I’m taking part in #oldsmooc, which is about learning design, the subtleties of the distinctions between learning and teaching are higher than normal on my agenda. As has been pointed out on many occasions, learning design could actually be called teaching design, as it is in fact in many ways more about the teaching side of education than learning. Sometimes we have a tendency to use learning when we mean teaching, and teacher when we mean learner. This again was highlighted by Stephen Downes in his response to my review of the Larnaca Learning Design Declaration (which isn’t really a declaration, but let’s not get caught up in more semantic circles).

As someone involved in the drafting of “the Bill”, Audrey Watters has written a really useful post on the process and her own thoughts on the terminology used. I found this extremely useful in understanding how, and by whom, the document was written. Audrey’s article highlights another conundrum in terms of the use of “student” and “learner”, again bringing me back to my questions about who this bill is actually for.

I do think “the bill” highlights some fundamental issues, some of which (e.g. ownership and use of data) will become increasingly important. As with the other announcements from the folks at Hybrid Pedagogy, Audrey is advocating hacking this initial document and getting much wider involvement in its development.

I’m not sure I’m really adding anything constructive here, but I do think if this is to gain any traction it needs to be clear who it is for. Maybe this needs to evolve into a set of “bills/manifestos/declarations” (call them what you will) explicitly directed at students, learners, teachers and administrators, but with some common underpinning themes to ensure we are all contributing to building successful learning cultures.

Five new publications from JISC

The JISC e-Learning Programme team has just announced the release of five new publications on the themes of lifelong learning, e-portfolio implementation, innovation in further education, digital literacies, and extending the learning environment. These publications will be of interest to managers and practitioners in further and higher education and work-based learning. Three of these publications are supported by additional online resources including videos, podcasts and full-length case studies.

Effective Learning in a Digital Age: is an effective practice guide that explores ways in which institutions can respond flexibly to the needs of a broader range of learners and meet the opportunities and challenges presented by lifelong learning.

Crossing the Threshold: Moving e-portfolios into the mainstream is a short guide which summarises the key messages from two recent online resources, the e-Portfolio Implementation Toolkit, developed for JISC by the University of Nottingham, and five institutional video case studies. This guide and accompanying resources explore the processes, issues and benefits involved in implementing e-portfolios at scale.

Enhancing practice: Exploring innovation with technology in further education is a short guide that explores how ten colleges in Scotland, Wales, Northern Ireland (SWaNI) and England are using technology to continue to deliver high-quality learning and achieve efficiency gains despite increasing pressure and reduced budgets.

Developing Digital Literacies: is a briefing paper that provides a snapshot of early outcomes from the JISC Developing Digital Literacies Programme and explores a range of emergent themes including graduate employability and the engagement of students in strategies for developing digital literacies.

Extending the learning environment: is a briefing paper that looks at how institutions can review and develop their existing virtual learning environments. It offers case study examples and explores how systems might be better used to support teaching and learning, improve administrative integration or manage tools, apps and widgets.

All guides are available in PDF, ePub, MOBI and text-only Word formats. Briefing papers are available in PDF.

There are a limited number of printed copies of each guide for colleges and universities to order online.

A conversation around the Digital University – Part 5

Continuing our discussions around concepts of a Digital University, in this post we are going to explore the Learning Environments quadrant of our conceptual model.

MacNeill, Johnston Conceptual Matrix, 2012

To reiterate, the logic of our overall discussion starts with the macro concept of Digital Participation, which provides the wider societal backdrop to educational development. Information Literacy enables digital participation and, in educational institutions, is supported by Learning Environments which are themselves constantly evolving. All of this has significant implications for Curriculum and Course Design.

Learning Environment
In our model we highlighted three key components of a typical HE institutional learning environment:
*physical and digital
*pedagogical and social
*research and enquiry

1 Physical and digital
“A learning space should be able to motivate learners and promote learning as an activity, support collaborative as well as formal practice, provide a personalised and inclusive environment, and be flexible in the face of changing needs.” Designing Spaces for Effective Learning, a guide to 21st century learning space design.

One of the key starting points for this series of blog posts was the increasing use of “digital” as a prefix for a range of developments (mainly around technology infrastructure), which seemed to carry an inherent implication that the physical environment, and its development, was almost defunct. However, any successful learning environment is one where there is the appropriate balance between the physical and the digital. Even in wholly online courses the student (and teacher) will have a physical location, and there are certain requirements of that physical location which will enable (or not) participation in the digital environment, e.g. device, connectivity, power. Undoubtedly the rise of mobile, internet-enabled or smart devices is allowing for greater flexibility of physical location; but they also create extra demands on the physical campus, e.g. ubiquitous, freely available, stable, campus-wide wireless connectivity, and power sockets that aren’t all at the back of a classroom. If students and staff are using and creating more digital resources, where are they to be stored? Who provides the storage – the institution or the student? If the former, how are they managed? How long do they stay “live”? Can a student access them once they have left the institution? Technology is not free, and providing a robust infrastructure does have major cost implications for institutions. For campus-based courses, blended learning is increasingly becoming the norm. Which leads to questions around the social and pedagogical developments of our learning environments.

2. Pedagogical and Social
Vermunt has summarised a number of patterns of what he refers to as teaching-learning environments which influence effective student learning. From his analysis of these patterns and their components, he has suggested a set of key features for powerful learning environments:
*They prepare students for lifelong, self-regulated, cooperative and work-based learning;
*They foster high-quality student learning;
*The teaching methods change in response to students’ increasing metacognitive and self-regulatory skills; and
*The complexity of the problems dealt with increases gradually and systematically. (Vermunt, Student Learning and University Teaching, 2007)

Of course, to create these powerful environments requires a shift in terms of what he describes as “a gradual shift in the task division in the learning process from educational ‘agents’ (e.g. teacher, tutor, book or computer) to students”. This shift creates a culture of increasing self-regulation and thinking from students. Curricula are developed with an increasing set of challenges which foster key lifelong learning skills that become common practice for students beyond their formal education and into the workplace. Vermunt et al refer to this as “process-orientated teaching” as it is targeted at the “processes of knowledge construction and utilization”.

This style of teaching and learning requires an increasingly complex mix of skills from teachers, including those of diagnostician, challenger, monitor, evaluator and educational developer. Technology can provide a number of affordances to create learning spaces that allow more self-regulation for students, e.g. collaborative working spaces and personal reflective spaces. However, there needs to be support from all levels of the institution to continually provide the wider environment which effectively develops the skills and knowledge to allow this kind of student-as-self-regulating-researcher culture.

3 Research and Enquiry
There is a growing discourse emerging around effective research practice in the digital age, and the notion of the digital scholar is increasingly recognised. Martin Weller’s recent book “The Digital Scholar: How Technology is Transforming Scholarly Practice” explores key themes around digital practice: the increasing role of networks and connections, the disconnect and tensions between traditional and new forms of self-publication platforms and formal recognition within universities, and the role of open scholarship. This blog post summarises his top ten digital scholarship lessons.

What is crucial now is that institutions and funders not only begin to recognise and, more importantly, reward these different types of digital scholarly activity, but also ensure that staff and students have the relevant literacy skills to exploit them effectively. Information literacy has been recognised as having an impact on effective research practice, but we would argue that more research needs to be done in this area to make explicit the link between effective information literacy skills and effective research and scholarly practice.

There is a growing backlash against traditional academic publishing models, which was recently highlighted by John Naughton’s feature in The Observer, “Academic Publishing Doesn’t Add Up”. Open access and open publishing can again be seen as being key to digital scholarship.

Early findings from the JISC Developing Digital Literacies Programme are showing the impact of undertaking a digital literacy audit to enable institutions to define (and therefore develop) their expectations of and for students. There are differences between disciplines which again need to be understood and shared between staff across institutions. Digital literacies are becoming more prevalent in institutional policies, and need to be supported by relevant provision of services and shared understandings if they are to be more than token statements. We think our matrix may play a role in forming and extending strategic discussions.

In the next post we will try and pull together key points from the series so far and the comments we have received and frame these in terms of some of the wider, societal contexts. As ever we’d love to get feedback on our thoughts so far, so please do leave a comment.

*Part 1
*Part 2
*Part 3
*Part 4

Understanding, creating and using learning outcomes

How do you write learning outcomes? Do you really ensure that they are meaningful to you, to your students, to your academic board? Do you sometimes cut and paste from other courses? Are they just something that has to be done, a bit opaque but doing the job?

I suspect for most people involved in the development and teaching of courses, it’s a combination of all of the above. So, how can you ensure your learning outcomes are really engaging with all your key stakeholders?

Creating meaningful discussions around developing learning outcomes with employers was the starting point for the CogenT project (funded through the JISC Lifelong Learning and Workforce Development Programme). Last week I attended a workshop where the project demonstrated the online toolkit they have developed. The toolkit was initially designed to help foster meaningful and creative dialogue during co-circular course developments with employers, but as it has developed and others have started to use it, a range of uses and possibilities have emerged.

As well as fostering creative dialogue and common understanding, the team wanted to develop a way to evidence discussions for QA purposes which showed explicit mappings between the expert employer language and academic/pedagogic language and the eventual learning outcomes used in formal course documentation.

Early versions of the toolkit started with the inclusion of a number of relevant (and available) frameworks and vocabularies for level descriptors, from which the team extracted and contextualised key verbs into a list view.

List view of CogenT toolkit

(Ongoing development hopes to include the import of competencies frameworks and the use of XCRI CAP.)

Early feedback found that the list view was a bit off-putting so the developers created a cloud view.

Cloud view of CogenT toolkit

and a Bloom’s view (based on Bloom’s Taxonomy).

Blooms View of CogenT toolkit

By choosing verbs, the user is directed to a set of recognised learning outcomes and can start to build and customise these for their own specific purpose.

CogenT learning outcomes
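
To give a flavour of the underlying idea (this is emphatically not the CogenT toolkit itself, just an illustrative sketch with made-up verb lists and wording), the mapping from a chosen verb to a draft outcome could look something like this:

```python
# Illustrative sketch only: map level descriptors to characteristic verbs and
# turn a selected verb into a draft learning outcome for further customisation.
BLOOMS_VERBS = {
    "understand": ["describe", "explain", "summarise"],
    "analyse": ["compare", "differentiate", "examine"],
    "evaluate": ["appraise", "critique", "justify"],
}

def outcome_stub(verb: str, topic: str) -> str:
    """Build a draft learning outcome from a chosen verb (wording is indicative)."""
    for level, verbs in BLOOMS_VERBS.items():
        if verb in verbs:
            return (f"On completion, the learner will be able to {verb} "
                    f"{topic} ({level}-level outcome).")
    raise ValueError(f"'{verb}' is not in the verb framework")

print(outcome_stub("critique", "current approaches to learning analytics"))
```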

As the tool uses standard frameworks, early user feedback started to highlight the potential for other uses, such as: APEL; HEAR reporting; working with adult returners to education to help identify experience and skills; writing new learning outcomes; and an almost natural progression to creating learning designs. Another really interesting use of the toolkit has been with learners. A case study at the University of Bedfordshire has shown that students have found the toolkit very useful in helping them understand the differences and expectations of learning outcomes at different levels. For example, to paraphrase student feedback after using the tool: “I didn’t realise that evaluation at level 4 was different from evaluation at level 3”.

Unsurprisingly it was the learning design aspect that piqued my interest, and as the workshop progressed and we saw more examples of the toolkit in use, I could see it becoming another part of the curriculum design tools and workflow jigsaw.

A number of the Design projects now have revised curriculum documents, e.g. PALET and SRC, which clearly define the type of information that needs to be inputted. The design workshops the Viewpoints project is running are proving to be very successful in getting people started on the course (re)design process (and, like CogenT, use key verbs as discussion prompts).

So, for example, I can see potential for course design teams, after taking part in a Viewpoints workshop, then using the CogenT tool to progress those outputs into specific learning outcomes (validated by the frameworks in the toolkit and/or ones they wanted to add) and then completing institutional documentation. I could also see the toolkit being used in conjunction with a pedagogic planning tool such as Phoebe and the LDSE.

The Design projects could also play a useful role in helping to populate the toolkit with any competency or other recognised frameworks they are using. There could also be potential for using the toolkit as part of the development of XCRI to include more teaching and learning related information, by helping to identify common education fields through surfacing commonly used and recognised level descriptors and competencies and the potential development of identifiers for them.

Although JISC funding is now at an end, the team are continuing to refine and develop the tool and are looking for feedback. You can find out more from the project website. Paul Bailey has also written an excellent summary of the workshop.