It’s Official: WCAG 2.0 has been Finalised

After much deliberation, pulling of hair, and no doubt many sleepless nights, the W3C (World Wide Web Consortium) has finally published WCAG (Web Content Accessibility Guidelines) 2.0 as an official Recommendation.

Yesterday’s press release from W3C states that trial implementations of the new standard have shown that most web sites which already “conformed to WCAG 1.0 did not need significant changes to meet WCAG 2.0”, so many developers may be breathing a sigh of relief. But it is also likely that there will be pressure for developers to ensure that their web content conforms to the new standard. Does this mean that what was “accessible” yesterday is not “accessible” today?

WCAG 2.0 differs from WCAG 1.0 in many respects, so for a while there may be a two-tier level of conformance (although the A, AA, and AAA conformance levels are still in place). Some of the new aspects covered include:

* captchas;
* semantic markup using ARIA (Accessible Rich Internet Applications) – once this specification has reached “recommendation” status;
* recommendation that an alternative is provided for any text that requires a reading ability more advanced than the lower secondary education level (how will online academic papers be dealt with?);
* etc.

However, WCAG 2.0 comes with several other resources to help with its implementation:

* WCAG 2.0 at a Glance;
* WCAG 2.0 Documents;
* How to Meet WCAG 2.0: A Customizable Quick Reference;
* Understanding WCAG 2.0;
* Techniques for WCAG 2.0;
* How to Update Your Web Site to WCAG 2.0.

The WAI (Web Accessibility Initiative) have tried hard to give developers as much information as possible to help with the implementation of WCAG 2.0.  They have gone beyond simply defining what one can and cannot do, and have included additional information on conformance, failure testing, conformance policies, and so on. Perhaps this level of assistance with implementation should be considered by other standards bodies.

In any case, WCAG 2.0 is finally here.  Whether developers and users will see it as a welcome Christmas present or something they’d rather take back to the shops in January remains to be seen.  Let’s hope it helps rather than hinders.

Draft BSI Standard on Web Accessibility Now Available for Public Comment

BSI (British Standards Institution) has just released the draft of the first Web Accessibility Code of Practice for public comment.

Its aim is to give “recommendations for building and maintaining web experiences that are accessible to, usable by and enjoyable for disabled people”. It includes sections on:

* use of W3C WAI (World Wide Web Consortium Web Accessibility Initiative) accessibility specifications and guidelines;
* accessibility policies and statements;
* involving people with disabilities in the design, planning and testing of websites;
* allocation of responsibilities within an organisation for accessibility;
* suggestions on how to measure user success.

“BS 8878:2009 Web Accessibility. Building Accessible Experiences for Disabled People. Code of Practice” will be available for public comment until 31st January 2009. You can access the (free) draft in HTML. However, you will need to set up a user account in order to access it. Once you’ve logged in, you can then make comments online. If you find the HTML version somewhat inaccessible, it can be downloaded in either PDF or Word format (at the time of writing, a log-in is not required).

Technological Literacy: Kit-Kats Strapped to the Back of iPods

As I write, the online JISC Innovating e-Learning 2008 Conference “Learning in a Digital Age – Are We Prepared?” is in full swing.  I’ve been tracking the discussions in the “Listening to Learners” theme, which involved two presentations – one by E.A. Draffan on the issues arising from the LexDIS project and one by Malcolm Ryan giving selected findings from the SEEL (Student Experience of e-Learning Laboratory) project.

The presentations arrived at the following conclusions:
* Not all students are digital natives (age is not necessarily a barrier, often it is the technology itself or the learning curve/time required);
* Using technology for its own sake (or because it’s “cool”) does not necessarily enhance the learning experience;
* Not all students want their learning to take place online – face-to-face interaction may be more suitable for some students and/or learning situations, and traditional (i.e. not electronic) resources are still preferred by many students;
* Students generally expect their tutors to be competent technology users and may have a negative experience if this is not the case;
* Not all tutors are motivated or able to use the technology (even if students expect them to be experts in this area);
* Technology used in the classroom, online, and socially is growing so quickly that it is often difficult for staff (and students) to keep up;
* Whilst some disabled students are more technologically adept and willing to experiment to get the technology to work in the way they need, there is often a time or financial cost, which can produce barriers.

The discussions which followed on from these presentations confirmed many of these findings, and my favourite quote of the day came from E.A. Draffan, when she talked about the difficulties in cascading technology information to teaching staff: “Kit-Kats strapped to the back of iPods just don’t do it with staff sometimes”. E.A. was referring to the difficulty in getting staff to attend CPD (Continuing Professional Development) workshops on using technology.  Many staff just can’t afford the time to attend such workshops or may not even be technologically engaged.  Like students, teaching staff need to know what technologies are available to them, how they can be used (officially and unofficially), and have the time and motivation to explore those technologies. One counter-argument which came out of the discussions was that tutors should concentrate on helping students to understand their particular subject area, be it art or zoology, rather than having to be learning technologists as well.  However, if educational institutions generally expect their students (and staff) to be literate (i.e. be able to read and write), perhaps it is not unreasonable to expect them to be technologically literate as well?

The unpopularity of VLEs (Virtual Learning Environments) was also discussed, and one delegate (a postgraduate student) suggested that if VLEs were designed by students, they might look more like PLEs (Personal Learning Environments). The importance of personalisation of the learning experience and flexibility in course design and delivery looks likely to become even greater as students (or “customers”) demand more value as fees increase. E-learning is not the be-all and end-all, and in any case, not all students want, or are even able, to engage with the technology.

So, although the discussions in this strand did not really throw up anything new, perhaps the fact that the same old issues and barriers to e-learning still exist is rather worrying. Online and learning technology is moving at a much faster rate than most of us can keep up with. For many students (not just those with disabilities) and even staff, this can be a real barrier to effective learning (and teaching).  Is there a solution? We can’t slow down the rate of technological innovation, and there are only so many hours in a day. Perhaps all we can do is muddle through as best we can, being more tolerant of those staff and students who have difficulties with using technology, and continuing to help each other find innovative solutions to problems. Talking about the same old issues acknowledges that they are still there, but it also gives people the chance to discuss and disseminate the many different workarounds they have found. Whilst these issues are frustrating and challenging, perhaps they also make us more inventive.

Latest News from W3C WAI

There’s a lot going on over at the W3C WAI (World Wide Web Consortium Web Accessibility Initiative), with current guidelines being updated and new ones being developed. So here’s a brief overview of what’s happening.

* ATAG (Authoring Tool Accessibility Guidelines) 2.0 – These guidelines are currently at Working Draft level. ATAG 1.0 is still the stable version which should be used.

* EARL (Evaluation and Report Language) 1.0 – The public comment period for the “Representing Content in RDF” and “HTTP Vocabulary in RDF” companion documents has recently finished (29th September 2008). Once the comments have been addressed, these documents will be published as Notes rather than Recommendations. (EARL 1.0 currently has Working Draft status.)

* Shared Web Experiences: Mobile and Accessibility Barriers – This draft document gives examples of how people with disabilities using computers and people without disabilities using mobile devices experience similar barriers when using the Web. Comments on this document closed on 20th August.

* UAAG (User Agent Accessibility Guidelines) 2.0 – This version is currently at Public Working Draft status and is at this stage for information only.

* WAI-AGE Addressing Accessibility Needs Due to Ageing – This project is currently at the literature review stage and aims to find out whether any new work is required to improve web accessibility for older people.

* WAI-ARIA (Web Accessibility Initiative Accessible Rich Internet Applications) – The Working Draft has recently been updated and comments on this update closed (3rd September).

* WCAG (Web Content Accessibility Guidelines) 2.0 – After a lot of to-ing and fro-ing, WCAG 2.0 finally looks as though it’s going to be finalised for public use by the end of the year. Data from the implementation of trial WCAG 2.0 websites has been gathered and whilst the status is still “Candidate Recommendation”, this status is likely to be updated in November.

Using Video to Provide Feedback on Students’ Work

Russell Stannard, a lecturer at the University of Westminster, has just been given the JISC/Times Higher Outstanding ICT Initiative of the Year award for using video to provide training in multimedia and Web 2.0 applications.  As well as using video to produce online training videos, he has also been using video to provide feedback on students’ work.  His website on multimedia videos includes an example of using video to mark a student’s work.

The THES (Times Higher Education Supplement) wrote about Stannard’s use of video for providing feedback on students’ work in 2006 and described the process involved.  Using video (or rather screen recording software with an audio track) to provide feedback means that a tutor can explain both verbally and visually any corrections that a student needs to make.  Instead of handwritten notes in margins or a page of comments attached to a student’s work, the video feedback approach can be used to give more lengthy feedback. 

Of course, this approach means that both the tutor and the student have to go through the whole video sequence each time they want to review the feedback rather than quickly glancing through a static set of pages.  However, this approach might be of value to some students with disabilities.  We often tend to concentrate on making online resources accessible, but perhaps we do not always think about how the feedback itself can be made accessible, or how it can add value, particularly for those students with learning disabilities or particular learning styles.  The video feedback approach will not be appropriate for all students, tutors, or assignments; however, it is an alternative way of presenting information, which some students may find beneficial.

First Three Parts of ISO Multipart Accessibility in e-Learning Standard Published

The first three parts of the ISO (International Organization for Standardization) “Individualized Adaptability and Accessibility in E-learning, Education and Training” Standard have just been published (16th September 2008).

This standard integrates the IMS ACCLIP (Accessibility for Learner Information Package) and IMS ACCMD (AccessForAll Meta-data Specifications) into a single multi-part standard.

The first three parts are now available (cost is around £65 each) and consist of:

* ISO/IEC 24751-1:2008 Individualized Adaptability and Accessibility in E-learning, Education and Training Part 1: Framework and Reference Model.
Part 1 of the multi-part standard. It lays out the scope and defines the reference model for Parts 2 and 3 below.

* ISO/IEC 24751-2:2008 Individualized Adaptability and Accessibility in E-learning, Education and Training Part 2: “Access For All” Personal Needs and Preferences for Digital Delivery.
Part 2 of the multi-part standard. It covers the IMS ACCLIP Specification and defines accessibility needs and preferences, which can then be matched to resources (as defined in Part 3 below).

* ISO/IEC 24751-3:2008, Individualized Adaptability and Accessibility in E-learning, Education and Training Part 3: “Access For All” Digital Resource Description.
Part 3 of the multi-part standard. It covers the IMS ACCMD Specification and defines the accessibility meta-data that expresses a resource’s ability to match the needs and preferences of a user (as defined in Part 2 above).
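The “match needs to resources” idea behind Parts 2 and 3 can be illustrated with a small sketch. Note that all the property names and data structures below are invented for illustration only; they are not the standard’s actual vocabulary or format.

```python
# Illustrative sketch of the AccessForAll idea behind Parts 2 and 3:
# a learner's stated needs (Part 2 territory) are matched against the
# accessibility metadata describing resources (Part 3 territory).
# Property names here are made up and not taken from ISO/IEC 24751.

# A learner's needs and preferences (a feature is required when True)
user_prefs = {"captions": True, "audio_description": False}

# Accessibility metadata for two hypothetical resources
resources = [
    {"id": "video-1", "captions": True, "audio_description": True},
    {"id": "video-2", "captions": False, "audio_description": False},
]

def matches(prefs, resource):
    """A resource matches if it provides every feature the user requires."""
    return all(resource.get(feature, False)
               for feature, required in prefs.items() if required)

suitable = [r["id"] for r in resources if matches(user_prefs, r)]
print(suitable)  # → ['video-1']
```

In this sketch, a delivery system would serve “video-1” to this learner because it provides the captions they require, and skip “video-2”, which does not.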

A further four parts have been given “New Project” status and will cover non-digital learning resources and physical spaces.  They have a target publication date of December 2010.

Part 8 of the multipart standard will describe how language and learning preferences will be referenced and is expected to be published by the end of 2009.

Making Wikipedia Fully Accessible

I’ve just been sent a link to an article on the ePractice blog by Per Busch about Making Wikipedia Fully Accessible for All.  Per is looking for funding to try and remove some of the barriers to accessibility for screen reader and visually impaired users of MediaWiki.  Some of the areas which are still considered to be problematic include CAPTCHAs and screen reader issues (see Accessibility and Wikis for a longer list).

Comments and suggestions (or offers of funding!) can be left at Per’s post – Making Wikipedia Fully Accessible for All.

There’s no Algorithm for Common Sense!

Virtual Hosting.com has drawn up a list of 25 Free Website Checkers, with a brief description of what each one does.  The checkers are split into handy sections – General, Disability, and Usability – but automated checkers will only check the easy bits – e.g. colour contrast, HTML (HyperText Markup Language) and CSS (Cascading Style Sheets) code, etc – i.e. the bits for which an algorithm can be written.

However, whilst a website checker can check that alt text, for example, is used with an image and will tell you if it’s missing, it can’t tell you whether that alt text actually makes sense.  For example, alt text of “an image” or “asdfg” is not going to be very useful to someone who doesn’t download images or to someone who uses tooltips to find out the relevance of the image (particularly where a description or title hasn’t been provided).  So developers and content authors need a hefty dose of common sense to make sure that the aspects of a website that can’t automatically be checked by a computer are actually usable and accessible.
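To make the distinction concrete, here is a minimal sketch (not any of the listed checkers, and far simpler than a real tool) of the kind of alt-text test that can be automated: flagging images whose alt attribute is missing or looks like a placeholder. Judging whether a present, plausible-looking alt text is actually meaningful in context is exactly what it cannot do.

```python
# A toy alt-text checker: it can detect a missing alt attribute or an
# obvious placeholder value, but it cannot judge whether meaningful alt
# text really describes the image - that still needs a human.
from html.parser import HTMLParser

PLACEHOLDERS = {"", "image", "an image", "picture", "asdfg"}

class AltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.problems = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        if "alt" not in attrs:
            self.problems.append(("missing alt", attrs.get("src")))
        elif (attrs["alt"] or "").strip().lower() in PLACEHOLDERS:
            self.problems.append(("placeholder alt", attrs.get("src")))

checker = AltChecker()
checker.feed('<p><img src="churchill.jpg" alt="an image">'
             '<img src="logo.png"></p>')
print(checker.problems)
# → [('placeholder alt', 'churchill.jpg'), ('missing alt', 'logo.png')]
```

An image with alt="Photo of Winston Churchill" would sail straight past this check, even on a page where the context demands something more specific.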

It’s often quoted (but I can’t remember by whom) that one could implement the whole of WCAG (Web Content Accessibility Guidelines) and still end up with an inaccessible site. Whilst an automated checker might find that the site is accessible based on a simple checklist, a human may find it unusable.  Human involvement in checking accessibility is still necessary, and as well as common sense, an understanding of accessibility issues and context is also required.  For example, whilst a photo of Winston Churchill might have the alt text of “Photo of Winston Churchill”, if the photo is illustrating a particular point, it could be more relevant to say “Photo of Winston Churchill smoking a cigar” or “Photo of Winston Churchill in London in 1949”, depending on context.

So whilst automated web accessibility checkers have their uses, it’s important to remember that they generally don’t include an algorithm for common sense!

Technology and Control: the Designer v. the User

“Chapter Two: Framing Conversations about Technology” of “Information Ecologies: Using Technology with Heart” by Bonnie Nardi and Vicki O’Day looks at the differing views of technology, from the dystopic to the utopic.  The authors make some interesting comparisons between the technology we have now and the technology of the recent past, as well as some very interesting comments.

Nardi and O’Day have noticed that although the advance of technology is seen as inevitable, people do not critically evaluate the technologies they use, even though they have been designed and chosen by people.  In other words, we accept the technology that is placed before us but we forget that we have a choice as to the type of technology we actually use and the way in which we use it.

The authors compare the differing views of Nicholas Negroponte (technophile and director of the MIT Media Lab) and Clifford Stoll, author of “Silicon Snake Oil”, programmer and astronomer.  Interestingly, although their views are remarkably different (one utopic, the other dystopic), they both agree that “the way technology is designed and used is beyond the control of the people who are not technology experts” (Nardi & O’Day).

Nevertheless, people often use technology in ways that are completely different from the way in which the designer intended. For example, Johnny Chung Lee has developed some interesting and unusual uses for the Nintendo Wii controller.  Thinking out of the box can bring control back to the user and it’s probably fair to say that we all (from expert users to newbies) use the technology we have in ways which weren’t even considered by designers, even if it’s just using a CD as a coaster for a coffee mug.

So although technology (hardware and software) designers may only have a limited perspective on the way in which they expect their technology to be used, once it is out in the public domain, alternative uses or ways of working will often be developed and exploited. 

BBC Podcast: Accessibility in a Web 2.0 World?

I’ve just listened to the BBC’s podcast Accessibility in a Web 2.0 World (around 43 minutes long, available in MP3 and Ogg Vorbis formats).  The podcast takes the form of a facilitated discussion between a number of experts talking about what Web 2.0 applications mean for accessibility, and includes representatives from the BBC, commercial web design companies, and the AbilityNet charity.

There were some interesting comments, and if you don’t get a chance to listen to the whole thing, here’s a brief run-down of some of the ideas and issues which I thought were particularly salient.

* Social networking sites can take the place of face-to-face networking, particularly where the user has motor or visual disabilities. However, many sites often require the user to respond initially to a CAPTCHA request, which can be impossible for people with visual or cognitive disabilities.  Some sites do allow people with voice-enabled mobiles to get around the CAPTCHA issue, but not everyone has such technology. Once the user has got past such validation, they then have to navigate the content which, being user generated, is unlikely to be accessible.

* One of the panellists felt that people with disabilities did not complain enough about inaccessible websites and that a greater level of user input would help web based content be more accessible.

* Jonathan Chetwynd, who has spoken to the CETIS Accessibility SIG in the past (see Putting the User at the Heart of the W3C Process) stated that users were not involved in the specification and standards process, because it was led by large corporate companies.  He also felt that users with low levels of literacy or technical ability were being overlooked in this process.

* There was some interesting discussion about W3C (World Wide Web Consortium) and the way in which their accessibility guidelines are developed.  Anyone can be involved in the W3C process but as a fee is charged for membership, it is mostly companies, universities, and some not-for-profit organisations who take part.  As some companies don’t want their software to appear as inaccessible, it may be that their motive in joining the W3C is less altruistic.  It was stated that it was actually easier to “fight battles” within the W3C working groups than to take them outside and get a consensus of opinion. As a result, there is not enough engagement outside the W3C working groups which has resulted in a lot of dissatisfaction with the way in which it works. 

* We are now in a post-guideline era, so we need to move away from the guideline and specification approach to an approach which considers the process.  This means taking into account the audience and their needs, assistive technology, etc.  Accessibility is not just about ticking boxes.  The BSI PAS 78 Guide to Good Practice in Commissioning Websites, for example, gives guidance on such a process and on ensuring that people with disabilities are involved at every stage of development.  However, developers often want guidelines and specifications to take to people who don’t understand the issues regarding accessibility.

* It is important that everyone is given equivalence of experience so there is a need to separate what is being said and how it needs to be said for the relevant audience.  The web is moving from a page-based to an application-based approach.  One panellist likened Web 2.0 applications to new toys with which developers were playing and experimenting and he felt that this initial sandpit approach would settle down and that accessibility would start to be considered.

* Assistive technology is trying hard to keep up with the changing nature of the web but is not succeeding.  Although many Web 2.0 applications are not made to current developer standards (not the paper kind!), many of the issues are not really developer issues.  For example, multimodal content may have captions embedded as part of the file or as standalone text, which both browsers and assistive technologies need to know how to access.

* People with disabilities are often expected to be experts in web technology and in their assistive technology but this is often not the case.

After the discussion, the panel members were asked what they felt would advance the cause of web accessibility.  My favourite reply was the one suggesting that we all need to consider ourselves as TAB (Temporarily Able Bodied) and then design accordingly.  The rationale behind this was that we will all need some sort of accessibility features at some stage.  So the sooner we start to build them in and become familiar with them, the better it should be for everyone!