Learning Analytics for Assessment and Feedback Webinar, 15 May

**Update, 16 May:** Link to the session recording

Later this week I’ll be chairing a (free) webinar on Learning Analytics for Assessment and Feedback, featuring work from three projects in the current Jisc Assessment and Feedback Programme. I’m really looking forward to hearing first hand about the different approaches being developed across the programme.

“The concept of learning analytics is gaining traction in education as an approach to using learner data to gain insights into different trends and patterns but also to inform timely and appropriate support interventions. This webinar will explore a number of different approaches to integrating learning analytics into the context of assessment and feedback design; from overall assessment patterns and VLE usage in an institution, to creating student facing workshops, to developing principles for dashboards.”

The presentations will feature current thinking and approaches from teams on the following projects:
* TRAFFIC, Manchester Metropolitan University
* EBEAM, University of Huddersfield
* iTeam, University of Hertfordshire

The webinar takes place on Wednesday 15 May at 1pm (UK time) and is free to attend. A recording will also be available after the session. You can register by following this link.

Acting on Assessment Analytics – new case study

Despite the hype around it, getting started with learning analytics can be a challenge for many lecturers. What can you actually do with data once you get it? As more “everyday” systems (in particular online assessment tools) become able to provide data and/or customised reports, it is getting easier to start applying analytics approaches in teaching and learning.

The next case study in our Analytics series focuses on the work of Dr Cath Ellis and colleagues at the University of Huddersfield. It illustrates how they are acting on the data from their e-submission system, not only to enhance and refine their feedback to students, but also to help improve their approaches to assessment and overall curriculum design.  
 
At the analytics session at #cetis13, Ranjit Sidhu pointed out that local data can be much more interesting and useful than big data. This certainly rings true for teaching and learning. Using very local data, Cath and her colleagues are developing a workshop approach to sharing generic assessment data with students in a controlled and emotionally secure environment. The case study also highlights issues around data-handling skills and the need for more evidence of successful interventions through using analytics.

You can access the full case study here.

We are always looking for potential case studies to add to our collection, so if you are doing some learning analytics related work and would be willing to share your experiences in this way, then please get in touch.

Bye bye #edcmooc

So #edcmooc is now over: our digital artefacts have been submitted and reviewed, and we all move on.

I thought it would be useful to reflect on the final submission and peer review process, as I had questioned in a couple of earlier posts how it would actually work. The final submission for the course was to create a digital artefact which would be peer reviewed.

The main criteria for creating the artefact were:

* it will contain a mixture of two or more of: text, image, sound, video, links.
* it will be easy to access and view online.
* it will be stable enough to be assessed for at least two weeks.

We had to submit a URL via the Coursera LMS and were each assigned three other artefacts to assess, with the option to assess more if we wished. The assessment criteria were as follows:

1. The artefact addresses one or more themes for the course
2. The artefact suggests that the author understands at least one key concept from the course
3. The artefact has something to say about digital education
4. The choice of media is appropriate for the message
5. The artefact stimulates a reaction in you, as its audience, e.g. emotion, thinking, action

You will assign a score to each digital artefact:

0 = does not achieve this, or achieves it only minimally
1 = achieves this in part
2 = achieves this fully or almost fully
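For anyone curious how those per-criterion marks might roll up into a final result, here is a minimal sketch. The course materials don’t say how Coursera actually combined the reviewers’ marks, so the per-criterion median rule (and the three-reviewer layout) below is purely an assumption for illustration:

```python
# Hypothetical sketch of how scores like "2/2 all round" might be aggregated.
# Coursera's real algorithm isn't documented in the course, so the
# median-per-criterion rule here is an assumption, not the actual implementation.
from statistics import median

CRITERIA = [
    "addresses course themes",
    "shows understanding of a key concept",
    "says something about digital education",
    "media choice suits the message",
    "stimulates a reaction",
]

def aggregate(reviews):
    """reviews: one list of scores (each in {0, 1, 2}) per reviewer."""
    per_criterion = zip(*reviews)  # regroup the scores by criterion
    return {name: median(scores) for name, scores in zip(CRITERIA, per_criterion)}

# Three reviewers, five criteria each:
reviews = [
    [2, 2, 2, 2, 2],
    [2, 2, 2, 2, 1],
    [2, 1, 2, 2, 2],
]
print(aggregate(reviews))
```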

This was the first time I’d done peer review and it was a very interesting process. In terms of the electronic process, the system made things very straightforward, and there was time to review draft submissions before submitting. I’m presuming that artefacts were allocated on a random basis too. On reflection the peer review process was maybe on the “lite” side, but given the scope and scale of this course I think that is entirely appropriate.

My three allocated artefacts were really diverse in style, content and substance. Whilst reviewing I did indeed reflect back on what I had done, and wished I had the imagination and time of some of my peers. I could have spent hours going through more, but I had to stop myself. Overall I am still satisfied with my submission, which you can explore below or by following this link.

2/2 all round for me and some very positive comments from my peers, so thank you – although, as one of my reviewers did point out, I maybe pushed the time limits a bit far:

“The choice of the media is also apt but I guess the only little drawback is that the artifact far exceeds the guidelines on how big the artifact should be (actually it’s a gist of the entire course and not a little five-minute artifact!). “

Overall I really enjoyed #edcmooc. It made me think about things from different perspectives as well as confirming some of my personal stances on technology in education. It was well paced, and I liked that it used openly available content where possible. Now that I’m a bit more experienced at MOOC-ing, it didn’t take up too much of my time. The course team made some subtle adjustments to the content and instruction over the duration, which again was entirely appropriate and showed they were listening, if not talking, to everyone. I didn’t feel a lack of tutor contact, but then again I didn’t interact in the discussion spaces as much as I could have, and this is also a topic area where I was relatively comfortable exploring at my own pace.

It’s also been quite a counterbalance to the #oldsmooc course I’m also doing (which started before #edcmooc and finishes next week), but I’ll share more about that in another post.

Also feel free to assess my artefact and share your comments here too using the criteria above.

**Update:** I’ve just received an email from the course team. Apparently the process didn’t work as smoothly for some as it did for me. They are investigating, and encouraging people who couldn’t share their artefacts to use the course forums. Hopefully this will get sorted soon.

eAssessment Scotland – focus on feedback

Professor David Boud got this year’s eAssessment Scotland Conference off to a great start with his “new conceptions of feedback and how they might be put into practice” keynote presentation, asking the fundamental question “what is feedback?”

David’s talk centred on what he referred to as the “three generations of feedback”, and was a persuasive call to arms for educators to move from the “single loop” or “control system” industrial model of feedback to a more open, adaptive system where learners play a central and active role.

In this model, the role of feedback changes from being passive to one which helps students develop their own judgement, standards and criteria – capabilities which are key to success outside formal education too. The next stage is to create feedback loops which are pedagogically driven and considered from the start of any course design process. Feedback becomes part of the whole learning experience and not just something vaguely related to assessment.

In terms of technology, David gave a familiar warning: we shouldn’t let digital systems allow us to do “bad feedback more efficiently”. There is a growing body of research around developing the types of feedback loops David was referring to. Indeed, the current JISC Assessment and Feedback Programme is looking at exactly the issues brought up in the keynote, and is based on the outcomes of previously funded projects such as REAP and PEER. The presentation from the interACT project, which I went to immediately after the keynote, gave an excellent overview of how JISC funding is allowing the Centre for Medical Education in Dundee to re-engineer its assessment and feedback systems to “improve self, peer and tutor dialogic feedback”.

During the presentation the team illustrated the changes to their assessment/curriculum design using an assessment timeline model developed as part of another JISC funded project, ESCAPE, by Mark Russell and colleagues at the University of Hertfordshire.

Lisa Gray, programme manager for the Assessment and Feedback programme, then gave an overview of the programme, including a summary of the baseline synthesis report, which gives a really useful summary of the issues the projects (and the rest of the sector) are facing in terms of changing attitudes, policy and practice in relation to assessment and feedback. These include:
* formal strategy/policy documents lag behind current development
* educational principles are rarely enshrined in strategy/policy
* learners are not often actively engaged in developing practice
* assessment and feedback practice doesn’t reflect the reality of working life
* admin staff are often left out of the dialogue
* traditional forms of assessment still dominate
* timeliness of feedback is still an issue.

More information on the programme and JISC’s work in the assessment domain is available here.

During the lunch break I was press-ganged/invited to take part in the live edutalk radio show being broadcast during the conference. I was fortunate to be part of a conversation with Colin Maxwell (@camaxwell), lecturer at Carnegie College, where we discussed MOOCs (see Colin’s conference presentation) and feedback. As the discussion progressed we talked about the different levels of feedback in MOOCs. Given the “massive” element of MOOCs, how and where does effective feedback and engagement take place? What are the affordances of formal and informal feedback? As I found during my recent experience with the #moocmooc course, social networks (and in particular Twitter) can be equally heartening and disheartening.

I’ve also been thinking more about the subsequent Twitter analysis Martin has done of the #moocmooc Twitter archive. On the one hand, I think these network maps of Twitter conversations are fascinating, and allow the surfacing of conversations, potential feedback opportunities and so on. But, on the other, they only surface the loudest participants – who are probably the most engaged and self-directed. What about the quiet participants, the lost souls, the ones most likely to drop out? In a massive course, does anyone really care?
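To make the “loudest participants” point concrete, here is a minimal, generic sketch of the kind of mention-network analysis such maps rest on. This is not Martin’s actual method or code: the archive format, usernames and field names are invented, and networkx is just one convenient library for building the graph.

```python
# Illustrative sketch: build a mention graph from a (made-up) tweet archive.
# Quiet participants who neither mention nor get mentioned never enter the
# graph at all, which is exactly why these maps privilege the loudest voices.
import re
from collections import Counter

import networkx as nx

tweets = [  # stand-in for a #moocmooc archive export
    {"user": "alice", "text": "Loving this week @bob @carol #moocmooc"},
    {"user": "bob",   "text": "@alice totally agree #moocmooc"},
    {"user": "alice", "text": "@carol what did you think? #moocmooc"},
    {"user": "dave",  "text": "lurking quietly #moocmooc"},  # invisible in the map
]

G = nx.DiGraph()
for t in tweets:
    for mention in re.findall(r"@(\w+)", t["text"]):
        G.add_edge(t["user"], mention)  # edge: author -> mentioned user

# "Loudest" participants, by tweet volume and by connections in the map:
print(Counter(t["user"] for t in tweets).most_common())
print(sorted(G.degree, key=lambda kv: kv[1], reverse=True))
```

Note that “dave” never appears in the graph output: the map shows conversation, not participation, which is what makes the drop-out-prone lurkers so hard to see.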

Recent reports of plagiarism, and failed attempts at peer assessment in some MOOCs, have added to the debate about the effectiveness of MOOCs. But going back to David Boud’s keynote, isn’t this because some courses are taking his feedback mark 1, industrial model, and trying to pass it off as feedback mark 2, without actually explaining and engaging with students from the start of the course, and really thinking through the actual implications of thousands of globally distributed students marking each other’s work?

All in all it was a very thought-provoking day, with two other excellent keynotes: Russell Stannard sharing his experiences of using screen capture to provide feedback, and Cristina Costa on her experiences of network feedback and feeding forward. You can catch up on all the presentations and join in the online conference, which is running for the rest of this week, at the conference website.

Making assessment count, e-Reflect SUM released

Gunter Saunders and his team on the Making Assessment Count project (part of the current JISC Curriculum Delivery programme) have just released a SUM (service usage model) describing the process they have introduced to engage students (and staff) in the assessment process.

“The SUM presents a three stage framework for feedback to students on coursework. The SUM can act to guide both students and staff in the feedback process, potentially helping to ensure that both groups of stakeholders view feedback and its use as a structured process centred around reflection and discussion and leading to action and development.”

You can access the e-Reflect SUM here.

Assessment technologies in use in the Curriculum Delivery Programme

Developing practice around assessment is central to a number of the Curriculum Delivery projects. There has been an emphasis on improving feedback methods and processes, with a mixture of dedicated formal assessment tools (such as Turnitin) and more generic tools (such as Excel, Google Forms and adapted Moodle modules) being used. The latter often proved a simple and effective way to trial new pedagogic methodologies, without the need for investment in dedicated software.

Excel
* eBioLabs (Excel macros embedded into Moodle for marking)
* ESCAPE (WATS – weekly assessment tutorial sheets, again used for submission; also generates a weekly league table)

EVS
* ESCAPE

Turnitin
* Making the new diploma a success
* Integrative Technologies Project

Moodle
* Cascade (submission extension)

ARS
* Integrative Technologies Project

Google Forms
* Making Assessment Count

IMS QTI
None of the projects actually implemented IMS QTI. The ESCAPE project did highlight it in their project plan, but didn’t need to use the specification for the work they undertook.

More information on the projects can be found by following the specific links in the text. More detailed information about the technological approaches is also available from our PROD database. Specific assessment resources (including case studies) are also being made available through the Design Studio.