An emerging trend from the JISC Curriculum Design programme is the use of video, particularly for capturing evidence of, and reflection on, processes and systems. Three of the projects (T-Sparc, SRC, OULDI) took part in an online session yesterday to share their experiences to date.
T-Sparc at Birmingham City University have been using video extensively with both staff and students as part of their baselining activities around the curriculum design process. As part of their evaluation processes, the SRC project at MMU have been using video (flip-cams) to get student feedback on their experiences of using e-portfolios to help develop competencies. And the OULDI project at the OU have been using video in a number of ways to get feedback from their user community about their experiences of course design and the tools being developed as part of the project.
There were a number of commonalities identified across the projects. On the plus side, the immediacy and authenticity of video was seen as a strength, allowing the SRC team, for example, to integrate student feedback much earlier. The students themselves also liked the ease of use of video for providing feedback. Andrew Charlton-Perez (a lecturer who is participating in one of the OULDI pilots) has been keeping a reflective diary of his experiences. This is not only a really useful, shareable resource in its own right, but Andrew himself pointed out that he has found it valuable as a self-reflective tool and in helping him to re-engage with the project after periods of non-involvement. The T-Sparc team have been particularly creative in using video clips as part of their reporting process, both internally and with JISC. Hearing things straight from the horse's mouth, so to speak, is very powerful and engaging. Speaking as someone who has to read quite a few reports, this type of multimedia reporting makes for a refreshing change from text-based reports.
Although hosting video is becoming relatively straightforward and commonplace through services such as YouTube and Vimeo, the projects have faced some perhaps unforeseen challenges around finding file formats that work both on external hosting sites and internally. For example, the version of Windows streaming used institutionally at BCU doesn't support the native MP4 file format from the flip-cams the team were using. The team are currently working on getting a codec update and have also invested in additional storage capacity. At the OU the team are working with a number of pilot institutions who are supplying video and audio feedback in a range of formats, from AVI to MP3 and almost everything in between, some of which need considerable time to encode into the systems the OU team are using for evaluation. So the teams have found that there are some additional, unforeseen resource implications (both human and hardware) when using video.
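To give a flavour of the kind of batch work this implies, here is a minimal sketch (not taken from any of the project teams' own tool chains) that drives ffmpeg from Python to normalise a folder of mixed-format clips to H.264/AAC MP4, a combination most hosting and streaming services handle without extra codecs. The folder names and encoding settings are illustrative assumptions only.

```python
import subprocess
from pathlib import Path

# Hypothetical folder of feedback clips in mixed formats (AVI, MOV, WMV, ...)
SOURCE_DIR = Path("feedback_clips")
OUTPUT_DIR = Path("encoded")
OUTPUT_DIR.mkdir(exist_ok=True)

# Re-encode everything to H.264 video and AAC audio in an MP4 container.
for clip in SOURCE_DIR.iterdir():
    if clip.suffix.lower() not in {".avi", ".mov", ".mp4", ".wmv", ".3gp"}:
        continue
    target = OUTPUT_DIR / (clip.stem + ".mp4")
    subprocess.run(
        [
            "ffmpeg",
            "-i", str(clip),     # input file
            "-c:v", "libx264",   # re-encode video to H.264
            "-preset", "medium",
            "-crf", "23",        # reasonable quality/size trade-off
            "-c:a", "aac",       # re-encode audio to AAC
            "-b:a", "128k",
            str(target),
        ],
        check=True,
    )
    print(f"Encoded {clip.name} -> {target.name}")
```

Even a simple script like this assumes someone has the time (and the machine) to run it, which is exactly the sort of hidden resource cost the projects have been reporting.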
Another common issue to come through from the presentations and discussion was around data storage. The teams are generating considerable amounts of data, much of which they want to store permanently – particularly if it is being incorporated into project reports etc. How long should a project be expected to keep evaluative video evidence?
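As a rough illustration of why storage mounts up, the back-of-the-envelope calculation below assumes a recording bitrate of around 9 Mbit/s (typical of pocket camcorders of this kind, not a figure reported by the projects) and 20 hours of raw footage; both numbers are assumptions for the sake of the example.

```python
# Back-of-the-envelope sizing only; the bitrate and hours are assumptions,
# not figures from the projects.
ASSUMED_BITRATE_MBIT_S = 9   # assumed flip-cam recording bitrate
HOURS_OF_FOOTAGE = 20        # assumed total interview/feedback footage

seconds = HOURS_OF_FOOTAGE * 3600
size_gb = ASSUMED_BITRATE_MBIT_S * seconds / 8 / 1000  # Mbit -> MB -> GB
print(f"~{size_gb:.0f} GB for {HOURS_OF_FOOTAGE} hours of raw footage")
# prints: ~81 GB for 20 hours of raw footage
```

On those assumptions even a modest amount of raw footage runs to tens of gigabytes before any transcoded copies are added, which is why the retention question matters.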
However, despite these issues there seemed to be a general consensus that the strengths of using video do make up for some of the difficulties it brings with it. The teams are also developing experience and knowledge in using software such as Xtranormal and Overstream for creating anonymous content and subtitles. They are also creating a range of documentation around permissions for the use of video, which will be shared with the wider community.
A recording of the session is available from The Design Studio.
I think this was a timely meeting and that it is important to explore the value video can offer – especially the impact and record such a visually rich medium may achieve. However, we also need to maintain a critical perspective on how much we use and ‘trust’ video, especially in research and analysis. The raw, visual appeal may be seductive and appear ‘authentic’, but this may not be the case (for example: http://goliath.ecnext.com/coms2/gi_0199-5959080/Using-digital-video-as-a.html). Likewise, the questions of what data to extract and what meaning to make/take are not dissimilar to using any other data source (e.g. Chapter 3: http://drdc.uchicago.edu/what/video-research-guidelines.pdf#page=16&view=fitH,354). So here I think we need to distinguish the purpose for using video: for dissemination, as anecdotal evidence, as rough-and-ready analysis, or in well-planned research with appropriate methodological considerations.
Hi Simon
Good points – and yes like all data collection we need to be clear about the why and the how. Thanks for the comments and the references.
Sheila