Foregone conclusion?

The third Economist debate launched yesterday, addressing the proposition that ‘social networking technologies will bring large [positive] changes to educational methods, in and out of the classroom’.  Opening arguments from Ewan McIntosh (Learning and Teaching Scotland) speaking in favour of the motion, and Michael Bugeja (Iowa State University of Science and Technology) in opposition have already been posted, as have an impressively large number of comments from the virtual floor.  As Bugeja wryly observes, his chances of winning an online debate (held under a version of the Oxford Union rules that The Economist rather quaintly refers to as Oxford 2.0) on this topic are slim, and voting so far is as one-sided as might be expected. 

Rebuttals will be posted on Friday 18th, followed by closing arguments on Wednesday 23rd; the debate itself closes with the final count of votes on Friday 25th.  There’s still plenty of time to get involved, but are the books already closed on the outcome?

Update:  owing to a gloriously ironic technical fault with the website, the dates above have all been moved forward by a day.  As moderator Robert Cottrell observes, ‘you might say that this hiccup has lent support to Dr Bugeja’s argument that applied technology is dangerously fallible.’  Could Web 2.0 be its own worst enemy?

2020 vision

The third Future of the Internet survey sponsored by Pew Internet will be available online for the next few weeks and is well worth participating in.  Participants are encouraged to express their views on topics such as digital inclusion, DRM, privacy and digital identity, and virtual and mirror worlds in the year 2020, and can remain anonymous or identify themselves as they wish.  It’s a stimulating and thought-provoking exercise, as well as an opportunity to contribute to a significant study on perceptions of our online futures.

You can also check out the results from previous surveys and a range of other internet-related resources; particularly fun are the predictions from the early 90s.  One that stood out for me was Eric Hughes’s 1992 comment, ‘In the world of the future, people will use low-cost Radio Shack equipment to spy on themselves to find out who they are’: in the world of FOAF and Facebook fakes, we need to spy on ourselves to find out who we’ve been constructed as.  No comments about tinfoil hats, thank you very much.

Assessment SIG meeting, 26 September 2007

Academics and developers met in Glasgow recently to participate in the most recent Assessment SIG meeting. The very full agenda covered a range of topics, both technical and pedagogic, and presentations led to some lively discussions.

Myles Danson of JISC opened the day by presenting JISC’s views and priorities for eassessment, as well as pointing to some future work they will be undertaking in the domain.

Yongwu Miao of the Open University of the Netherlands discussed work undertaken by the TENCompetence Project, with a particular focus on the relationship between IMS QTI and IMS Learning Design and the work they have done in this area. Dick Bacon of the University of Surrey and the HEA discussed the relationship between different varieties or ‘dialects’ of QTI, exploring some of the implementation and interpretation issues that hinder or break interoperability between systems nominally implementing the same version of the specification. CAL Consultant Graham Smith pleased the audience with news that a new Java version of his QTI demonstrator will be available shortly with updated support for QTI 2.0 items, which should help in the identification and resolution of implementation problems.
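
Dick Bacon’s ‘dialects’ problem is easy to illustrate: two tools can both emit nominally valid QTI 2.x items yet behave differently because each fills in unstated details its own way. The sketch below uses element names from the QTI 2.x schema, but the item content and the particular ‘dialect’ checks are invented for illustration, not drawn from any of the tools discussed:

```python
import xml.etree.ElementTree as ET

# A minimal QTI 2.x multiple-choice item (element names per the IMS schema;
# the question itself is invented for illustration).
ITEM = """\
<assessmentItem xmlns="http://www.imsglobal.org/xsd/imsqti_v2p1"
                identifier="q1" title="Capital cities" adaptive="false"
                timeDependent="false">
  <responseDeclaration identifier="RESPONSE" cardinality="single"
                       baseType="identifier">
    <correctResponse><value>B</value></correctResponse>
  </responseDeclaration>
  <itemBody>
    <choiceInteraction responseIdentifier="RESPONSE" maxChoices="1">
      <prompt>Which city is the capital of Scotland?</prompt>
      <simpleChoice identifier="A">Glasgow</simpleChoice>
      <simpleChoice identifier="B">Edinburgh</simpleChoice>
    </choiceInteraction>
  </itemBody>
</assessmentItem>
"""

NS = {"qti": "http://www.imsglobal.org/xsd/imsqti_v2p1"}

def dialect_report(xml_text):
    """Flag underspecified features that tools tend to interpret differently."""
    root = ET.fromstring(xml_text)
    issues = []
    if root.find("qti:responseProcessing", NS) is None:
        # Some tools assume a standard response-processing template here;
        # others will not score the item at all.
        issues.append("no responseProcessing: scoring behaviour is tool-defined")
    for interaction in root.iter("{%s}choiceInteraction" % NS["qti"]):
        if "shuffle" not in interaction.attrib:
            issues.append("shuffle attribute omitted: behaviour may vary by tool")
    return issues

for issue in dialect_report(ITEM):
    print("-", issue)
```

Both items above would pass naive validation, which is exactly why nominally compliant systems can still fail to interoperate.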

Martin Hawksey of the University of Strathclyde presented the work of the Re-Engineering Assessment Practices project. With a focus on real world assessment experiences, including an impressive collection of case studies exploring the impact of transformation within assessment practices, the REAP project was of particular interest to participants. Also of great interest, and perhaps unsurprisingly sparking the greatest amount of debate, was the exploration of ‘Assessment 2.0’ presented by Bobby Elliott of the Scottish Qualifications Authority. Bobby looked at ways in which Web 2.0 technologies can be used to enhance and modernise assessment in ways which can engage and appeal to increasingly digitally literate learners.

The day also featured several demonstrations of tools under development. Niall Barr of NB Software demonstrated his current work, an assessment tool which utilises the IMS QTI, Content Packaging and Common Cartridge specifications, while Steve Bennett of the University of Hertfordshire demonstrated MCQFM, a JISC-funded tool which provides a simple text-based format for converting and editing items between formats. Two more JISC projects closed the day. AQuRate, presented by Alicia Campos and David Livingstone of Kingston University, is an elegant item authoring tool, while ASDEL, presented by Jon Hare of the University of Southampton, is an assessment delivery tool which builds on the R2Q2 project to provide a fuller test tool. A third project, Minibix (University of Cambridge) on item banking, is working closely with AQuRate and ASDEL.
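
The appeal of a tool like MCQFM is that a plain-text question format is far easier for authors to edit than raw XML, and can then be converted mechanically into QTI or other formats. As a rough sketch of the idea (the syntax below is invented for illustration and is not MCQFM’s actual grammar):

```python
# Parse a simple plain-text multiple-choice format into a neutral structure
# that could then be serialised to QTI or another target format.

def parse_item(text):
    """One question per block: stem on the first line, then one choice
    per line, with '*' marking the correct answer."""
    lines = [l.strip() for l in text.strip().splitlines() if l.strip()]
    stem, choices, correct = lines[0], [], None
    for line in lines[1:]:
        if line.startswith("*"):
            correct = len(choices)      # index of the keyed choice
            line = line[1:].strip()
        choices.append(line)
    return {"stem": stem, "choices": choices, "correct": correct}

item = parse_item("""
    Which specification defines question interoperability?
    SCORM
    *IMS QTI
    Dublin Core
""")
print(item["choices"][item["correct"]])   # prints "IMS QTI"
```

A converter would then walk this neutral structure and emit whichever output format the target system expects.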

Links to presentations (via slideshare), project websites and other information can all be found on our wiki: http://wiki.cetis.org.uk/JISC_CETIS_Assessment_SIG_meeting%2C_26_September_2007.

Assessment for Learner Responsibility

On Monday, I attended a Learning Enhancement Network event here at Strathclyde on Assessment for Learner Responsibility.  Strathclyde is in the process of revising its assessment policy, and this event brought together staff from across the university, together with a number of student representatives and some colleagues from other universities. 

The University of Edinburgh has recently completed a similar process, and Nigel Seaton from Edinburgh’s College of Science and Engineering presented the outcomes.  It was particularly interesting to see that all assessment in the College is now formative, in that students receive feedback on all the work they do – including formal examinations.  I really like this: as both a student and a tutor I always found it hugely frustrating not to be able to get or give feedback on the most important assessments beyond a bald grade or classification.  This is particularly important for students who perform significantly worse in these assessments than in earlier coursework, and who are often bewildered, demoralised and demotivated by the lack of information provided.  The Data Protection and Freedom of Information Acts make the disclosure of examiners’ comments a legal obligation, but it’s nice to see this spun positively and used as a real learning opportunity rather than just a warning to markers not to make rude comments on exam scripts.

The College will also be introducing an eportfolio system, not for PDP but for use as a subject-specific learning and reflection aid for students, another very appealing idea. 

Jim Baxter from Strathclyde’s Department of Psychology gave a very entertaining presentation on collaborative WebCT-based activities introduced to the first year course in collaboration with the REAP project.  This involved a lot of collaborative work, something which Nigel had already raised and which was returned to in the afternoon’s breakout discussions.  I’ve never been a fan of group assessment, peer assessment, and other activities which force people into social models regardless of whether that is right for them, and I was pleased to see that I wasn’t the only one who felt some unease about compelling students to take part in such activities.  There was genuine agreement that there’s a need to respect different learning styles and that there may be a tension between fashionable approaches and what’s actually best for an individual.  Staff reported that the majority of students say they dislike group work, although it’s regarded far more positively in post-graduation surveys – which is rather interesting in itself; perhaps extroverts are more likely to complete surveys?

One issue which concerned me is that a student can be prevented from sitting the final examination for a course if they haven’t participated in group activities.  The justification is that these activities are detailed in the course materials, so students knew what they were signing up for – but surely students should be studying a course because they’re interested in the subject, not because they can cope with the teaching style, and they really shouldn’t be prevented from studying their chosen subject because they don’t share a particular learning style.  The principle that students should have a choice in the methods and timing of assessment is therefore very welcome.

David Nicol presented the eleven principles of good assessment practice which were the initial outcomes from the university’s working group examining the assessment policy and from David’s long interest in this area.  These covered engagement to stimulate learning: clarifying what constitutes good practice, encouraging ‘time and effort’ on educationally purposeful tasks, delivering high quality feedback, providing opportunities to close the feedback loop, encouraging positive motivational beliefs and self-esteem, and encouraging dialogue about learning between all stakeholders.  Other principles focused on empowerment: sharing responsibility for learning with students by facilitating self-assessment and reflection, giving learners choice in the nature, methods, criteria and timing of assessment, involving students in policy and practice decisions, and supporting the development of learner communities and social integration. 

A particular strength of the day was the involvement of Strathclyde students and the opportunity the day gave for dialogue between staff and learners.  The greatest concerns which emerged from the afternoon breakout discussions were time, feedback and over-examining.

Deadline coordination is perhaps a particular issue at Strathclyde which has a very broad first year curriculum: in the Faculty of Law, Arts and Social Sciences, for example, students have to study five potentially quite disparate subjects, all of which tend to set the same deadlines for work.  There were occasional examples of the principle of ‘giving learners choice in the timing of assessment tasks’, for example the lecturer who negotiates deadlines with his classes.  Our group spent some time lusting after a hypothetical ‘assessment booking system’ which course coordinators could use to pick ‘slots’ for assessment deadlines – perhaps something the Enterprise SIG could look at :-)
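
For what it’s worth, the core of that hypothetical booking system would be quite simple: a register of deadline ‘slots’ per cohort, with a cap on how many deadlines any one cohort can accumulate in a given week. Everything in this toy sketch (the two-per-week capacity, week-numbered slots, the names) is my own invented assumption, not anything discussed on the day:

```python
from collections import defaultdict

class DeadlineBook:
    """Toy model of a deadline booking system: coordinators request a
    week-slot for a cohort, and the system refuses slots where that
    cohort already has its quota of deadlines."""

    def __init__(self, max_per_week=2):
        self.max_per_week = max_per_week
        self.bookings = defaultdict(list)   # (cohort, week) -> [course, ...]

    def book(self, cohort, week, course):
        slot = self.bookings[(cohort, week)]
        if len(slot) >= self.max_per_week:
            return False    # cohort already saturated that week
        slot.append(course)
        return True

    def free_weeks(self, cohort, weeks):
        """Weeks in which this cohort can still take another deadline."""
        return [w for w in weeks
                if len(self.bookings[(cohort, w)]) < self.max_per_week]

book = DeadlineBook()
book.book("LAS-year1", 7, "Politics essay")
book.book("LAS-year1", 7, "Sociology report")
print(book.book("LAS-year1", 7, "History essay"))   # False: week 7 is full
print(book.free_weeks("LAS-year1", range(6, 10)))   # [6, 8, 9]
```

The hard part, of course, isn’t the code but getting five departments to agree to use it.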

Feedback was another major issue for the students.  They appreciate detailed constructive feedback, particularly where it explains marks in relation to published assessment criteria and offers suggestions for improving weak areas as well as highlighting strengths.  There were a few examples of bad practice – a particular low being ‘feedback’ which consisted of a smiley face and a mark – but not many examples of truly detailed feedback.  The timeliness of feedback was also a concern: as one student observed, ‘how can I learn from my feedback if I don’t get it until after I’ve submitted my next assignment?’

Students and staff were both concerned that students are being over-examined.  Virtually every piece of work a student submits contributes towards the final mark for the course, meaning that students don’t have a space to fail.  The students in our group actually wanted more formative assessment – practice exercises, particularly when undertaking a type of task they hadn’t encountered before – and the opportunity to experiment and learn before being summatively assessed.

Other issues that emerged were the desire to offer staff incentives and rewards for innovative and engaging projects, training for staff in how to write content that will actually engage learners, and a real desire to share innovation throughout the university.

Assessment SIG meeting, 22 February 2007

I’ve finally managed to get the notes and reports from our most recent assessment SIG meeting at the University of Southampton up on the wiki.

It was a hugely enjoyable meeting which brought together people with a wide range of interests in the CAA area.  JISC activities were well represented, with three new one year Capital Programme projects presenting a brief introduction to their work: AQuRate, for assessment authoring, Minibix for item banking and ASDEL for assessment delivery.

Other JISC-funded work presented included the FREMA semantic wiki which includes a huge amount of information on the eassessment domain, PeerPigeon for services to support peer review (which would have won had there been a prize for best project logo), MCQFM which is developing a web-based question generator, and XMarks, which is working on web services for exchanging assessment-related information including the development of an XML information model for handling this data.

Gillian Palmer of ElementE looked at some of the issues around assessment delivery in the wider European context, including the difficulty of getting different nations to adopt or support standards which are identified with particular nations, while our own Clive Church (who we share with Edexcel) looked at the challenges facing accreditation and assessment bodies in England, Wales and Northern Ireland with the introduction of the 14-19 specialised diploma. In both contexts, the use of agreed standards is crucial to the success of the assessment and accreditation process.

The importance of standards and differences between implementations were explored by Dick Bacon of the University of Surrey, who looked at some of the issues he encountered when developing a physical sciences question bank using content generated in a range of assessment tools.

Overall, it was a lively and stimulating day, and very encouraging to see how much activity and enthusiasm there is in the field.  Many thanks to all who took part, and to our hosts at Southampton for an excellent day.

Back in the blogosphere

Well, I’m back blogging again after my long silence.  The blog was getting overrun with comment spam and was becoming pretty unusable and pretty unpleasant.  Many thanks to Sam for installing Akismet antispam and solving the problem.

Serendipity

Endemol, creators of some of the most successful television formats of recent years, announced yesterday that they would be launching the first Virtual Big Brother within Second Life.  It’s a fascinating concept, and it will be intriguing to see how it’s received both by the Second Life community and by those who so far haven’t engaged with it.  Will the intrusive and often prurient appeal of ‘real’ Big Brother with ‘real’ people really transfer to avatars within a virtual world?  Whether it succeeds or fails, it should tell us a lot about how we negotiate our own and others’ identities within real and virtual communities.

And hopefully something to look forward to in January 2007: Boris Johnson, inimitable Shadow Minister for Higher Education, is widely rumoured to have been approached to appear in the next series of Celebrity Big Brother.  I will of course be watching carefully in case he has anything to say on eassessment and elearning in any potential future Tory government.

Now all I need is someone to pay me to play WoW and my life would be complete…

Digital Literacy, Podcasting and eLearning – trainerspod webinar

Yesterday afternoon saw my second webinar in two days, this time a session on digital literacy, podcasting and elearning led by Graham Attwell of Pontydysgu.  Around 40 people took part from many countries.  Because of the way in which the session was run (which I’ll discuss in yet another blog post), the following is just a few impressions from the event rather than a proper report.  The archived session will be available online soon for those who would like to learn more about this topic.

The session was split into three sections: digital literacies and new pedagogic approaches; what is a podcast and how should it be used in education; and how to make a podcast.

Graham pointed out that traditional LMSs use a traditional, didactic, ‘push’ approach to learning.  In the new era of ‘elearning 2.0’, this should change to a more constructivist approach; however, there are many activities around at the moment that are constructivist in name only – as always, there is a need to examine what we’re actually doing instead of just optimistically applying labels like plasters and hoping they stick.

One quote, from Henry Jenkins, which particularly struck me was: ‘We need to shift the focus of the conversation about the digital divide from questions of technological access to those of opportunities to participate and to develop the cultural competencies and social skills needed for full involvement’.  Graham cited the example of George W Bush discussing what he’s ‘used on the Google‘ to help illustrate this point, and Senator Ted Stevens’ own personal internet is also always worth remembering (no, it’s not funny).

Clarence Fisher’s ‘eleven skills for participation’ were mentioned, and are worth repeating: play, performance, simulation, appropriation, multitasking, distributed cognition, collective intelligence, judgement, transmedia navigation, networking and negotiation. 

It was interesting to learn that I’m not the only person who dislikes the term ‘podcasting’ because of its close relationship to Apple and the iPod.  A few alternatives were suggested, with ‘audio report’ being the most popular, but – as with the whole Web 2.0 business – it’s not got that snappy, cliquey, in-the-know connotation that will probably leave us with podcasting for some time to come.

It’s also worth mentioning that enhanced podcasts (enhanced audio reports?), which allow users to embed still images such as the ubiquitous PowerPoint slides, can be created with tools such as GarageBand, Audacity and Divicast; we made presentations from our joint Assessment and MDR SIGs meeting available as Breeze presentations, integrating MP3 recordings and PowerPoint presentations, and received a generally favourable reaction.

A key part of the new models of education and elearning is sharing, yet I felt that Graham made one of the most important points of all when he said that it’s about ‘learning to share, learning how to share, and learning how to have the right not to share’.  That’s something that will be very relevant to the TrainersPod webinar he’ll be leading on eportfolios in the new year: I’ll certainly be there.

Integrated Assessment – IMS Webinar

On Monday night I attended IMS’s webinar on ‘Integrated assessment products and strategies: gauging student achievement and institutional performance’.  This was the first IMS webinar I’d attended, and I found it a useful session.  Over 80 people participated on Horizon Wimba for the session.

Rob Abel, CEO of IMS, introduced the session by describing integrated assessment as assessment which is designed into and throughout the learning experience.  He discussed the outcomes of a recent survey on satisfaction with elearning tools which showed that tools for quizzing and assessment had the highest satisfaction ratings amongst users; of the 88 products surveyed, Respondus came top in terms of user satisfaction.  This is a consequence both of the maturity of this category and of the availability and quality of the tools.

Rob also suggested that ‘standards are a platform for distributed innovation’, which is a nice phrase, although one of the criticisms often made of QTI is that it isn’t innovative.  It’s hard to see, however, how true innovation could be standardised.

Neil Allison (Blackboard Director of Product Marketing), Sarah Bradford (eCollege Vice President of Product Management) and Dave Smetters (Respondus President) all spoke briefly about how their tools could be used for integrated assessment. 

Neil illustrated how Blackboard can ‘make assessment easier and more systematic’ by integration with other elements of the VLE such as enterprise surveys, portfolios and repositories.  One comment I found particularly interesting was an outcome from a December 2005 Blackboard Survey of Priorities in Higher Education Assessment, which found that portfolios are used in 86% of public institutions but only 43% of private, while interviews and focus groups are used by 78% of private institutions and only 48% of public.  I’ve tried to find this online without success; it’s referenced in slide 20 of the presentation.

Sarah noted that eCollege users are using Respondus and Questionmark’s secure browser to assess their learners.  Her talk focused on the eCollege outcome repository or database, which is linked to their content manager, stressing the importance of a good tagging system.  The eCollege Learning Outcome Manager addresses some of the problems of usage data management for quality assurance, an important issue given the current interest in item banking.

Dave’s talk was the most wide-ranging, looking not only at the highly popular Respondus assessment authoring and management tool but also at some of the wider issues around integrated eassessment.  He referenced research which found that only between 13% and 20% of courses with an online presence include one or more online assessments – yet market research consistently shows that online assessment capabilities are among the most appealing elements in drawing users to esystems.  As he said, once the system is in place, ‘reality kicks in’: online assessment takes work, effort and time, raises difficulties in converting or creating content, and raises fears of the potential for cheating.  He argued that only a very small number of students have the desire to cheat, yet the impact can affect an entire class.  Students themselves like a secure assessment environment that minimises the possibilities for cheating.  Locked browsers are a big issue for Respondus at the moment; security of online assessments is also addressed by BS7988, which is currently being adopted by ISO.

Colin Smythe, IMS’s Chief Specification Strategist, provided a brief survey of the standards context for integrated assessment.  He noted that all specifications have some relevance for assessment, citing Tools Interoperability, Common Cartridge, ePortfolio, Content Packaging, LIP, Enterprise and Accessibility.  He also posted a useful timeline (slide 69) which shows that QTI v2.1 is scheduled for final release in the second quarter of 2007, to be synchronised with the latest version of Content Packaging.

He also said that Common Cartridge provides ‘for the first time content integration with assessment’; how much this will be adopted remains to be seen but IMS are marketing it quite forcefully.

There was time for a short question and answer session at the end.  I asked about the commitment of the vendors to QTI 2.1 and the use of QTI 2.1 in Common Cartridge.  The Common Cartridge specification uses an earlier version of QTI partly because there were some migration issues with 2.0 which have been resolved through transforms in 2.1, and also because IMS ‘didn’t want to force the marketplace to adopt a new specification’.  As Rob says, interoperability requires the ‘commitment of the marketplace’, and it would be useful to know what commitment these vendors have to the newer version. 

The session concluded with a reminder about the Learning Impact 2007 conference being held in Vancouver on 16 – 19 April 2007, which should be of interest to many.