Using Video to Provide Feedback on Students’ Work

Russell Stannard, a lecturer at the University of Westminster, has just been given the JISC/Times Higher Outstanding ICT Initiative of the Year award for using video to provide training in multimedia and Web 2.0 applications.  As well as producing online training videos, he has also been using video to provide feedback on students’ work.  His website on multimedia videos includes an example of using video to mark a student’s work.

The THES (Times Higher Education Supplement) wrote about Stannard’s use of video for providing feedback on students’ work in 2006 and described the process involved.  Using video (or rather screen recording software with an audio track) to provide feedback means that a tutor can explain both verbally and visually any corrections that a student needs to make.  Instead of handwritten notes in the margins or a page of comments attached to a student’s work, video feedback can be used to give lengthier, more detailed comments.

Of course, this approach means that both the tutor and the student have to go through the whole video sequence each time they want to review the feedback, rather than quickly glancing through a static set of pages.  However, it might be of value to some students with disabilities.  We often tend to concentrate on making online resources accessible, but perhaps we do not always think about how the feedback itself can be made accessible or value-added, particularly for students with learning disabilities or particular learning styles.  The video feedback approach will not be appropriate for all students, tutors or assignments, but it is an alternative way of presenting information, which some students may find beneficial.

First Three Parts of ISO Multipart Accessibility in e-Learning Standard Published

The first three parts of the ISO (International Organization for Standardization) “Individualized Adaptability and Accessibility in E-learning, Education and Training” Standard have just been published (16th September 2008).

This standard integrates the IMS ACCLIP (Accessibility for Learner Information Package) and IMS ACCMD (AccessForAll Meta-data) specifications into a single multi-part standard.

The first three parts are now available (cost is around £65 each) and consist of:

* ISO/IEC 24751-1:2008 Individualized Adaptability and Accessibility in E-learning, Education and Training Part 1: Framework and Reference Model.
Part 1 of the multi-part standard. It lays out the scope and defines the reference model for Parts 2 and 3 below.

* ISO/IEC 24751-2:2008 Individualized Adaptability and Accessibility in E-learning, Education and Training Part 2: “Access For All” Personal Needs and Preferences for Digital Delivery.
Part 2 of the multi-part standard. It covers the IMS ACCLIP Specification and defines accessibility needs and preferences, which can then be matched to resources (as defined in Part 3 below).

* ISO/IEC 24751-3:2008, Individualized Adaptability and Accessibility in E-learning, Education and Training Part 3: “Access For All” Digital Resource Description.
Part 3 of the multi-part standard. It covers the IMS ACCMD Specification and defines the accessibility meta-data that expresses a resource’s ability to match the needs and preferences of a user (as defined in Part 2 above).
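The matching described in Parts 2 and 3 – a learner’s stated needs and preferences matched against the accessibility metadata of resources – can be sketched in a few lines of code. This is only an illustration of the idea: the property names (“captions”, “keyboard_navigable”) and data shapes here are hypothetical and are not the actual ISO/IEC 24751 vocabulary.

```python
def matching_resources(preferences, resources):
    """Return the resources whose metadata satisfies every stated preference.

    `preferences` is a set of needs from the learner's profile (Part 2);
    each resource is a dict of accessibility metadata (Part 3).
    """
    return [r for r in resources
            if all(r.get(need, False) for need in preferences)]

# Hypothetical learner profile and resource catalogue.
learner_prefs = {"captions", "keyboard_navigable"}

catalogue = [
    {"title": "Intro video", "captions": True, "keyboard_navigable": True},
    {"title": "Flash quiz", "captions": False, "keyboard_navigable": False},
]

print([r["title"] for r in matching_resources(learner_prefs, catalogue)])
# → ['Intro video']
```

In a real AccessForAll implementation the matching would of course be richer than a boolean test – preferences can be required or merely preferred, and adaptations can be substituted or supplemented – but the principle of pairing a needs profile with resource metadata is the same.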

A further four parts have been given “New Project” status and will cover non-digital learning resources and physical spaces.  They have a target publication date of December 2010.

Part 8 of the multipart standard will describe how language and learning preferences will be referenced and is expected to be published by the end of 2009.

Making Wikipedia Fully Accessible

I’ve just been sent a link to an article on the ePractice blog by Per Busch about Making Wikipedia Fully Accessible for All.  Per is looking for funding to try and remove some of the barriers to accessibility for screen reader and visually impaired users of MediaWiki.  Some of the areas which are still considered to be problematic include CAPTCHAs and screen reader issues (see Accessibility and Wikis for a longer list).

Comments and suggestions (or offers of funding!) can be left at Per’s post – Making Wikipedia Fully Accessible for All.

There’s no Algorithm for Common Sense!

Virtual Hosting.com has drawn up a list of 25 Free Website Checkers, with a brief description of what each one does.  The checkers are split into handy sections – General, Disability, and Usability – but automated checkers will only check the easy bits – e.g. colour contrast, HTML (HyperText Markup Language) and CSS (Cascading Style Sheets) code, etc – i.e. the bits for which an algorithm can be written.

However, whilst a website checker can check that alt text, for example, is used with an image and will tell you if it’s missing, it can’t actually tell you whether that alt text makes sense.  For example, alt text of “an image” or “asdfg” is not going to be very useful to someone who doesn’t download images or to someone who uses tooltips to find out the relevance of the image (particularly where a description or title hasn’t been provided).  So developers and content authors need a hefty dose of common sense to make sure that the aspects of a website that can’t automatically be checked by a computer are actually usable and accessible.
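The limits of this kind of automated checking are easy to see in code. The sketch below, using only Python’s standard library, flags images with missing alt text and alt text that matches a (hypothetical) list of known placeholder values – which is about as far as an algorithm can go; whether genuine alt text actually makes sense in context still needs a human.

```python
from html.parser import HTMLParser

# Placeholder values an automated check can reasonably flag; this list is
# illustrative, not taken from any real checker.
SUSPICIOUS_ALT = {"", "image", "an image", "photo", "picture", "asdfg"}

class AltTextChecker(HTMLParser):
    """Collects <img> tags whose alt text is missing or looks like filler."""

    def __init__(self):
        super().__init__()
        self.problems = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        if "alt" not in attrs:
            self.problems.append(("missing alt", attrs.get("src")))
        elif attrs["alt"].strip().lower() in SUSPICIOUS_ALT:
            self.problems.append(("suspicious alt", attrs.get("src")))

checker = AltTextChecker()
checker.feed('<p><img src="churchill.jpg" alt="asdfg">'
             '<img src="logo.png"></p>')
print(checker.problems)
# → [('suspicious alt', 'churchill.jpg'), ('missing alt', 'logo.png')]
```

Note what the checker cannot do: alt text of “Photo of Winston Churchill” would sail through, whether or not it says what the photo is actually there to illustrate.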

It’s often quoted (but I can’t remember by whom) that one could implement the whole of WCAG (Web Content Accessibility Guidelines) and still end up with an inaccessible site. Whilst an automated checker might find that the site is accessible based on a simple checklist, a human may find it unusable.  Human involvement in checking accessibility is still necessary and as well as common sense, an understanding of accessibility issues and context is also required.  For example, whilst a photo of Winston Churchill might have the alt text of “Photo of Winston Churchill”, if the photo is illustrating a particular point, it could be more relevant to say “Photo of Winston Churchill smoking a cigar” or “Photo of Winston Churchill in London in 1949”, depending on context.

So whilst automated web accessibility checkers have their uses, it’s important to remember that they generally don’t include an algorithm for common sense!

Technology and Control: the Designer v. the User

“Chapter Two: Framing Conversations about Technology” of “Information Ecologies: Using Technology with Heart” by Bonnie Nardi and Vicki O’Day looks at the differing views of technology, from the dystopic to the utopic.  The authors make some interesting comparisons between the technology we have now and the technology of the recent past, as well as some thought-provoking observations.

Nardi and O’Day have noticed that although the advance of technology is seen as inevitable, people do not critically evaluate the technologies they use, even though those technologies have been designed and chosen by people.  In other words, we accept the technology that is placed before us but we forget that we have a choice as to the type of technology we actually use and the way in which we use it.

The authors compare the differing views of Nicholas Negroponte (technophile and director of the MIT Media Lab) and Clifford Stoll, author of “Silicon Snake Oil”, programmer and astronomer.  Interestingly, although their views are remarkably different (one utopic, the other dystopic), they both agree that “the way technology is designed and used is beyond the control of the people who are not technology experts” (Nardi & O’Day).

Nevertheless, people often use technology in ways that are completely different from the way in which the designer intended. For example, Johnny Chung Lee has developed some interesting and unusual uses for the Nintendo Wii controller.  Thinking out of the box can bring control back to the user and it’s probably fair to say that we all (from expert users to newbies) use the technology we have in ways which weren’t even considered by designers, even if it’s just using a CD as a coaster for a coffee mug.

So although technology (hardware and software) designers may only have a limited perspective on the way in which they expect their technology to be used, once it is out in the public domain, alternative uses or ways of working will often be developed and exploited. 

BBC Podcast: Accessibility in a Web 2.0 World?

I’ve just listened to the BBC’s Podcast Accessibility in a Web 2.0 World (around 43 minutes long, available in MP3 and Ogg Vorbis formats).  The podcast takes the form of a facilitated discussion between a number of experts talking about what Web 2.0 applications mean for accessibility and includes representatives from the BBC, commercial web design companies, and the AbilityNet charity.

There were some interesting comments and if you don’t get the chance to listen to the whole thing, here’s a brief run-down of some of the ideas and issues that I thought were particularly salient.

* Social networking sites can take the place of face-to-face networking, particularly where the user has motor or visual disabilities. However, many sites often require the user to respond initially to a CAPTCHA request, which can be impossible for people with visual or cognitive disabilities.  Some sites do allow people with voice-enabled mobiles to get around the CAPTCHA issue, but not everyone has such technology. Once the user has got past such validation, they then have to navigate the content which, being user generated, is unlikely to be accessible.

* One of the panellists felt that people with disabilities did not complain enough about inaccessible websites and that a greater level of user input would help web based content be more accessible.

* Jonathan Chetwynd, who has spoken to the CETIS Accessibility SIG in the past (see Putting the User at the Heart of the W3C Process) stated that users were not involved in the specification and standards process, because it was led by large corporate companies.  He also felt that users with low levels of literacy or technical ability were being overlooked in this process.

* There was some interesting discussion about the W3C (World Wide Web Consortium) and the way in which its accessibility guidelines are developed.  Anyone can be involved in the W3C process but as a fee is charged for membership, it is mostly companies, universities, and some not-for-profit organisations who take part.  As some companies don’t want their software to appear inaccessible, it may be that their motive in joining the W3C is less than altruistic.  It was stated that it was actually easier to “fight battles” within the W3C working groups than to take them outside and reach a consensus of opinion. As a result, there is not enough engagement outside the W3C working groups, which has led to a lot of dissatisfaction with the way in which the W3C works.

* We are now in a post-guideline era, so we need to move away from the guideline and specification approach to an approach which considers the process.  This means taking into account the audience and their needs, assistive technology, etc.  Accessibility is not just about ticking boxes.  The BSI PAS 78 Guide to Good Practice in Commissioning Accessible Websites, for example, gives guidance on such a process and on ensuring that people with disabilities are involved at every stage of development.  However, developers often want guidelines and specifications to take to people who don’t understand the issues regarding accessibility.

* It is important that everyone is given equivalence of experience, so there is a need to separate what is being said from how it needs to be said for the relevant audience.  The web is moving from a page-based to an application-based approach.  One panellist likened Web 2.0 applications to new toys with which developers were playing and experimenting and he felt that this initial sandpit approach would settle down and that accessibility would start to be considered.

* Assistive technology is trying hard to keep up with the changing nature of the web but is not succeeding.  Although many Web 2.0 applications are not made to current developer standards (not the paper kind!), many of the issues are not really developer issues.  For example, multimodal content may have captions embedded as part of the file or as standalone text, which both browsers and assistive technologies need to know how to access.

* People with disabilities are often expected to be experts in web technology and in their assistive technology but this is often not the case.

After the discussion, the panel members were asked what they felt would advance the cause of web accessibility.  My favourite reply was the one where we all need to consider ourselves as TAB (Temporarily Able Bodied) and then design accordingly.  The rationale behind this was that we will all need some sort of accessibility features at some stage.  So the sooner we start to build them in and become familiar with them, the better it should be for everyone else!

Using Virtual Worlds to Improve Social Interaction

Rowin (the Assessment SIG Co-ordinator) has just sent me an article – Virtual Worlds Turn Therapeutic for Autistic Disorders – which describes how virtual worlds are being used for behavioural and social learning.  The article describes how a virtual environment in Second Life, for example, can be used to teach someone how to interact or behave in social situations without fear of consequences.

This safe environment is ideal for people to learn from their mistakes without having to worry about embarrassment, failure, social faux pas, etc.  Virtual worlds can also be a non-threatening means of interacting with other people, particularly for people who are worried about discrimination or who are not able to take part in real world social activities.

Perhaps if we all used virtual worlds to learn how to deal with and face up to our own issues without fear of hurting ourselves or others, the real world would be a better place.

Comment on the Stick (Standards Enforcement) Approach to Accessibility

Headstar’s eAccess Bulletin has the scoop on Accessibility Ultimatum Proposed for UK Government Websites.  Sources claim that government websites will be penalised by being stripped of their “gov.uk” domain names if they don’t meet the WCAG (Web Content Accessibility Guidelines) AA rating.  At the moment, this is still only a draft proposal but if ratified, would mean that all existing government sites would need to have the AA rating by December 2008.

Whilst the government’s intentions are no doubt admirable, WCAG (and I’m assuming they’re talking about WCAG 1.0 here) is useful but still needs a lot of common sense in its implementation.  I’m also assuming that government websites include national government, local government, public libraries, police, fire services, museums, art galleries, etc.  But what about universities, schools, educational bodies, etc?  Is this just the start of a British equivalent of America’s Section 508?  And what happens when WCAG 2.0 is finally ratified?

Whilst the standards approach is useful and can provide a lot of guidance, actual enforcement may mean that alternative approaches to accessibility are not pursued and common sense is not taken into account.  For example, an image with an alt attribute will easily pass a Bobby check, but what if that alt text is completely meaningless – <img alt=”gobbledygook”>? Innovative ways of tackling accessibility problems may not be thought about or explored – and in any case, it’s quite possible to be completely WCAG compliant and still be inaccessible.

So here are my somewhat Utopian ideas for an approach to accessibility:

1. Education, education, education – all design, web design and IT courses should automatically include a compulsory accessibility and usability module.

2. Standards and guidelines – for “guiding” developers, not hitting them over the head with a big stick. However, they should remain as guidelines and recommendations and should not be forced on people, unless it has been proven without a doubt that a particular guideline is useful and can be successfully applied in all situations.  This is not always the case.  For example, anyone can add an alt tag to an image but does everyone know the best type of text to put in it?

3. Common sense and innovation – this is perhaps more wishful thinking on my part but we should all use our common sense and our understanding of the barriers in conjunction with guidelines and see if there are alternative, better ways of doing things, particularly as new technologies and approaches come to the fore.

4. User testing – by all types of users, from the “silver surfer” to the person with learning disabilities, as well as the average person in the street.  We all know what we should do, but time and resource constraints often mean that only lip-service is paid to it.

Whilst standards are great for things that can be set in stone, such as nuts and bolts, sizes of credit cards etc, they are not so successful for “fuzzy” applications (like users), who have many different needs and preferences depending on different contexts, situations and how they’re feeling at the time.  Fuzzy applications (users) need fuzzy blended, complementary approaches so taking the big stick of standards enforcement to developers could be a bit of a backward step in the support and encouragement of accessibility.

News: Disability Rights Commission Moves Home

As of 1st October 2007, the DRC (Disability Rights Commission) has been brought under the wing of the Equality and Human Rights Commission, which now takes on the responsibilities and powers of the CRE (Commission for Racial Equality), DRC (Disability Rights Commission) and the EOC (Equal Opportunities Commission).

The archived site of the DRC is still available.

Could on-screen narration be discriminatory?

Cathy Moore has an interesting blog post entitled “Should we narrate on-screen text?”, where she suggests that automatic narration of on-screen text can actually be detrimental to learners. She states that learners generally read more quickly than the narration is read – screen reader users can “read” text very quickly – and that learners are then forced to move at the pace of the narration.

Although some learners – such as young children, learners who are learning the language of the learning resource, and learners with low literacy skills or cognitive difficulties – may find on-screen narration useful, it should not be included just to try to fulfil obligations towards students with disabilities. Automatically including on-screen narration just to fulfil SENDA (Special Educational Needs and Disability Act) obligations could actually discriminate against students who do not need it.

Therefore, the provision of on-screen narration should be considered very carefully and, if it is considered necessary, offered as an alternative or option, but the original resource (or an alternative) should still be accessible to screen reader and other technology users.