One Person’s Strategy is Another’s Barrier

Accessibility is very personal – what works for one person may not work for another. The “one size fits all” approach has been tried and, although admirable in its intentions, has often proved difficult to implement.

The W3C (World Wide Web Consortium) WCAG (Web Content Accessibility Guidelines) v1.0 took a technical approach to accessibility by setting out a number of accessibility guidelines, which could be automatically tested by online validators such as Bobby. Whilst this automatic validation can check the HTML code, many of the guidelines require human input and common sense. For example, whilst an automated accessibility validator can check that an image has alt text, it cannot check that the text actually makes sense. People who don’t use images, such as those using mobile technologies or visually impaired people, still need to know whether an image is important to the content or not. An image with alt text of “image01.jpg” gives no information to website users, whilst alt text of “Photograph of Winston Churchill” would not only aid navigation through a web page, but would also tell the user that the image adds nothing beyond the surrounding text, so they know they are not missing out on anything. As well as this, users of some browsers could hover a mouse over the image to see the alt text, which could be useful where an image doesn’t have a text caption underneath.
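To make this concrete, here is a minimal HTML sketch (the file names are invented for the example):

```html
<!-- Unhelpful: the alt text tells the user nothing about the image -->
<img src="image01.jpg" alt="image01.jpg">

<!-- Better: the alt text says what the image shows -->
<img src="churchill.jpg" alt="Photograph of Winston Churchill">

<!-- Purely decorative images (e.g. spacers) should be given empty
     alt text so that screen readers skip over them entirely -->
<img src="spacer.gif" alt="">
```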

Difficulties in adhering to such guidelines and standards can themselves create barriers, because content developers may try to produce content for the lowest common denominator, i.e. text only. Although text can be easily accessed by people using screen readers, it can be difficult for people with dyslexia to read and is visually unappealing. So in this case, whilst the content is accessible to people using screen readers, it is less accessible to people with dyslexia.

Despite the drawbacks, this standardisation (“one size fits all”) approach is important.  Without a set of guidelines, developers may not know where to begin with accessibility and may not approach the basics in the same way, thereby reducing interoperability with assistive and other technologies.

One way to complement the standards approach is to produce alternative but equivalent versions of content. For example, transcripts can be provided for podcasts, and text-heavy content can be offered with animations or images, or in simple language for people with learning disabilities or language learners. This holistic approach has been proposed by Kelly, Phipps, and Howell in Implementing a Holistic Approach to e-Learning Accessibility. It takes student learning styles and pedagogy into account, as well as technical and usability issues.

Standards and guidelines are important, but they need to be used with common sense and in combination with other approaches. Standards and guidelines can help with the physical presentation of the content, whilst holistic and other approaches can help the user to interact with and use that content in the format best suited to their needs.

Personalisation – Many Things to Many People?

I finally got around to reading Designing for Learning: The Proceedings of Theme 1 of the JISC Online Conference – Innovating e-Learning 2006 (PDF Format, 788Kb) after several aborted attempts.  The paper I found most interesting was Diana Laurillard’s keynote, which got me thinking about personalisation of e-learning systems and resources. 

Laurillard talks about several different levels of personalisation:

* “…a pre-test to determine the level at which a learner might begin a learning design, or the chance to select the vocabulary set with which a language learner would like to work, or the opportunity to choose the order in which topics are confronted…”

* “…a negotiated learning contract that specifies the content topics, the prior learning and intended achievement levels…”

* “…[an] adaptive system vision, in which opportunities are personalised for the learner, based on a diagnosis of their [learners’] needs.” (From Laurillard, D. Keynote: Learning Design Futures – What are our Ambitions? in Minshull, G. & Mole, J. (eds) (2007), Designing for Learning, The Proceedings of Theme 1 of the JISC Online Conference: Innovating e-Learning 2006, p10. JISC. Accessed 12/09/07).

There are no doubt other levels, but what struck me was the idea that personalisation can mean different things to different people, depending on their requirements and viewpoint. So, when two people talk about the personalisation of e-learning resources and systems, they may be envisaging completely different processes and interactions.

To me, personalisation means accessible e-learning systems and resources that adapt themselves to the learner’s learning needs and preferences. So, for example, a visually impaired learner is offered alternative learning resources that have little or no visual element. Or the e-learning system automatically changes the font colour and background colour based on the preferences already set up by a dyslexic learner. However, these preferences and requirements for alternative resources are not just beneficial for learners with disabilities. They are beneficial for all learners who may have learning, technology, or environmental requirements which differ from the norm (if indeed such a thing exists). For example, a learner without access to a mobile audio device, such as an MP3 player, may prefer to print off the transcript of a podcast in order to read it on the bus.
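As a purely hypothetical sketch of that second example – none of this is from a real system, and the function and property names are invented – an e-learning system might apply a learner’s saved display preferences with a few lines of JavaScript:

```html
<script type="text/javascript">
// Hypothetical sketch: apply display preferences previously saved
// against a learner's profile (e.g. by a dyslexic learner).
function applyDisplayPreferences(prefs) {
  var body = document.getElementsByTagName("body")[0];
  if (prefs.fontColour)       { body.style.color = prefs.fontColour; }
  if (prefs.backgroundColour) { body.style.backgroundColor = prefs.backgroundColour; }
  if (prefs.fontSize)         { body.style.fontSize = prefs.fontSize; }
}

// Example: preferences retrieved from the learner's stored profile
applyDisplayPreferences({
  fontColour: "#00305a",        // dark blue text
  backgroundColour: "#fdf6e3",  // cream background
  fontSize: "120%"
});
</script>
```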

The course designer may well have a completely different view of personalisation, whereby the e-learning system automatically presents the appropriate starting point on a course based on the learner’s level of competency and prior knowledge. This could be established by online (or offline) pre-tests, tutor-entered proof of competency, such as certificated evidence of experience or skills, or other means of verification. A learner who exceeds the initial competency requirements could be started at a higher level of the course, with options to view and/or take part in the lower levels.

Both of these views of personalisation relate to Laurillard’s “adaptive system” approach, whereby the e-learning system “pushes” out resources or automatically places the learner at a particular level of a course, based on the learner’s needs and preferences.

However, there is another, less formal, approach to personalisation, which I’ve termed the “active approach” (although I’m sure there must be an official term for this out there), whereby the learner chooses the tools they want to use and/or the level at which they want to start the course. In this approach, information and learning are “pulled” from the content management system using the tools and approach that the learner prefers. For example, a student may prefer to input all her assignment dates into a mobile device which she carries with her at all times, rather than enter them into the institution’s approved calendaring system, which she only accesses when she is on site.

Although there does not seem to be much difference between the “adaptive” and “active” approaches, the distinctions are quite subtle. For example, in an adaptive approach, an institution’s system could offer a specific text-to-speech reader for all students to choose, should they wish, which is supported by the institution’s IT (Information Technology) department. In the active approach, however, the learner uses the text-to-speech reader they prefer. The active approach also allows a learner to choose where they want to start learning based on their interests or prior learning. For example, a biology student with an interest in or prior knowledge of plants may want to start with a module on plant biology before moving to animal biology, in order to orientate himself and gain confidence. Laurillard’s idea of a “negotiated learning contract that specifies the content topics, the prior learning and intended achievement levels” would help the learner to identify where they want to start learning.

Although there is a need for an adaptive approach to personalisation, there is also a complementary need for an active approach, which can empower learners, help hone their learning, and help them to gain confidence by consolidating any prior knowledge.

Personalisation does mean different things to different people – from a system which adapts itself to present content in the way the learner requires, to learners actively choosing where they want to begin their learning. Perhaps personalisation is all these things at the same time, and it’s only the viewpoint that makes the difference.

Scribd: A multi-format document repository

I’ve just come across Scribd – a web-based repository that allows anyone to upload their documents for free. I suppose it’s a bit like YouTube for documents, but the great thing about it is that you get several formats for the price of one.

All it needs is a document in Word, PDF, text, PowerPoint, Excel, PostScript, or LIT format, which it will then display in a web browser using Scribd’s custom Flash player (it seems to use Macromedia’s FlashPaper format). The clever part is that Scribd will then automatically convert your document into PDF, Word, and text files – and even MP3 format! The application obviously uses some sort of text-to-speech software, but the great thing about Scribd is that it automatically provides so many formats from just one upload.

The only drawback I’ve found so far is that I can’t seem to tab through to the links for the alternative versions, but the site was only launched last month, so perhaps they’ll fix that soon. Oh – and the inline viewer only seems to work in IE. The site is available via Firefox, where, conversely, you can tab through to the download links but not actually see the result inline. Typical! However, that aside, it’s a great idea – now all we need is something that automatically provides transcripts for podcasts, and something which will caption and describe videos. I’m not even going to mention actually producing accessible content – that’s for another day!

Adding Value: Providing Transcripts for Podcasts

I’ve just listened to a podcast by EASI (Equal Access to Software and Information) on RSS/Podcast Basics.  As well as covering the basics of RSS (Really Simple Syndication) feeds and podcasting, it also briefly touched on podcast accessibility.

Podcasts can consist of either audio alone or audio and video. Both Section 508 and the W3C (World Wide Web Consortium) WCAG (Web Content Accessibility Guidelines) recommend that alternatives are made available for auditory and visual content – e.g. a transcript for an audio podcast; captions and/or a transcript (and possibly a description of the video, depending on its format) for a video podcast (which could take the form of a presentation with voice-over or an actual video).
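In markup terms, offering the alternative can be as simple as publishing the transcript alongside the audio file (a hypothetical sketch; the file names are invented):

```html
<h3>Lecture 5: RSS and Podcast Basics</h3>
<ul>
  <li><a href="lecture05.mp3">Listen to the podcast (MP3, 12 MB)</a></li>
  <li><a href="lecture05-transcript.html">Read the transcript (HTML)</a></li>
</ul>
```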

Podcasts can be of particular value to people with visual or physical disabilities, and to people who want to learn on the run, such as when travelling into work on the bus, exercising, or waiting for a train. However, including a transcript of a podcast, particularly in the educational environment, will not just benefit people with hearing impairments – it can benefit all students.

One example given in the EASI podcast was of lecturers making their lectures available as podcasts for students to download. The presenter suggested that students would probably only want to listen to a podcasted lecture once (or twice, if they had a high boredom threshold) and that, as in a normal lecture room situation, the student would make notes as they went along. However, if a transcript of the podcast were provided, the student could print it off (as well as listen to the podcast) and make notes in the margin. The transcript and annotations could easily be carried around and used as a revision resource at exam time.

Another problem is that it’s not always easy to find one’s way around a podcast – there are no headings or marker points (although maybe these will come in time) as there are in a large document – so the listener is forced to listen to the podcast all the way through in order to find the relevant bits. Providing a transcript alongside the podcast allows for easy navigation; it can also be printed on a Braille embosser and annotated by the student.

Although there are obvious costs involved in writing a transcript, providing an electronic resource in two different formats can greatly increase the value of that resource and benefit a greater number of students.

BBC Jam – Last Chance to See…

BBC Jam is having to suspend its free online learning resources – see BBC Suspends Net Learning Project. These resources are aimed at children in the 5-14 age range and one of the main objectives was to try and make the resources as accessible as possible.

The developers have worked hard. The learning objects are engaging and there is a suite of accessibility preferences, which can be saved by the user and re-presented at log-in without having to set them up again. Preferences include:

* text-to-speech;
* facility to change text size and colour, and background colour;
* a set of standard text and colour combinations;
* subtitles/captions;
* and choice of language (English, Gaelic, Welsh).

Many of the learning resources have been developed in Flash, so there are a couple of sticky bits, but on the whole the concept seems to work quite well. One of the things I particularly liked was that the text-to-speech facility also worked in Welsh (however, I couldn’t seem to get the text to be read out in Gaelic – but that could just have been me!).

There were also interesting plans to make certain core subjects available this year:

* for people with severe learning disabilities;
* for people with visual impairments (audio-described video);
* for people with hearing impairments;
* in BSL (British Sign Language);
* and as symbol-supported resources.

So you’d better hurry if you want to take a look at one way in which accessibility preferences can be included in a learning environment, as the BBC Jam website is only available until 20th March 2007. I do hope this work is re-instated as it proves that learning resources can be engaging, interactive, and accessible.

Jumping Through Hoops – Reasonable Adjustments for Exams

The DRC (Disability Rights Commission) are currently supporting a legal case brought by a student who claims that she was discriminated against when taking an online exam for a professional qualification. The complaints include one about unreasonable demands for evidence of disability, and one about the handling of her requests for reasonable adjustments.

Hoop 1: Prove Your Disability

The SENDA (Special Educational Needs and Disability Act) states that educational institutions must make reasonable adjustments for disabled students in order to avoid substantial disadvantage.  However, in order for those reasonable adjustments to be made in the exam room, many institutions need actual evidence of disability. For those students who are known to their educational institution, evidence may not necessarily mean a medical certificate, as a tutor’s or other adviser’s statement may be enough. 

However, in the legal case mentioned above, the professional body running the assessment had never actually met the student, as most of her studies were completed electronically. So in this case, medical proof was its only recourse for evidence of disability before any special accommodations could be made. Including a suite of preferences as part of the test software could have helped with some of the accommodations the student required, and may even have removed the need for her to provide evidence of her disability.

Hoop 2: Take the Test the Hard Way

In the legal case mentioned above, the student was not allowed to take her own laptop into the exam room nor was she allowed to use a screen reader to access the test, because the professional body felt that installation of software from outside their test suite could put the security of their test at risk.

In this case, both parties’ requests could be considered reasonable – the student’s request for additional software in order to take the test, and the testing body’s refusal on the grounds of security. Security concerns around assessment are common. If an exam body wants to keep the quality of its qualifications high, then security will be paramount. But where does this leave the student who needs additional software in order to access the test? In this case, the student was offered the services of a reader and extra time – not an ideal solution for her, but one with which the testing body was happy. This is probably not an isolated incident – there are no doubt many conflicts between what the student really needs to take an online test and what the testing body feels comfortable allowing the student to use. Compromises are made, but perhaps it is the student who always ends up with the worst deal.

Levelling the Playing Field

So would it be worthwhile for test centres (and maybe other online test providers) to provide generic accommodations?  Offering a text reader or screen magnification software as part of the test software suite could remove the need for some students to provide evidence of their disability and could reduce the worry for exam bodies about compromising security. 

Screen readers, for example, are a common type of assistive technology and come in various shapes and sizes. Although it would be impossible to provide screen readers to suit everyone’s needs, it might be possible to provide one cut-down or standard version simply to allow students to access online assessments, as long as students were allowed to practise using the technology well in advance. This could keep costs down and possibly improve students’ interaction with the test software.

Availability of such technology would also depend on the type of test being undertaken but for online exams, where reading the questions aloud did not defeat the actual purpose of the question, providing access to even one type of assistive technology could go a long way to including rather than excluding people.  Of course, the questions would also need to be screen reader-friendly and alternatives to questions containing graphs or images may need to be offered, but providing accessible and/or alternative questions may benefit all students.

I’m not saying that one size should fit all.  Many disabled students will still need to use their own particular type of technology but including some common types of assistive technology in a test software suite could help level the playing field.  In an ideal world, students would be able to set their own preferences, use their own software and even have assessments based on their particular learning styles. 

Above all, it is imperative that exam bodies and educational institutions who provide online tests are clear about what is actually being tested (is it the student’s ability to interact with the test software or their knowledge and understanding of a particular subject?) and to ensure that any assessment clearly reflects that goal in a user-friendly and supportive manner.  After all, assessment in any form is usually stressful enough without having to jump through extra hoops.

Should alt text be used to paint a thousand words?

We’ve all been told that alt text is an essential part of web accessibility, but how much detail do we actually need to include and who should do it?

There’s been some discussion over on the DC-Accessibility JISCMail Discussion List (February 2007, “Not Accessible or Adaptable”) about a number of issues, including whether alt text should always be added to an image. One contributor to the discussion gave a link to a slideshow of dance photographs, where:

“the author refused to label the images with text… his argument being that the photographer’s images capture and demonstrate an emotional experience, and that whilst text can perform the same expression, he’s not the person to annotate them.”

The photographs in question are various stills from dance rehearsals and performances.  There is no accompanying text of any kind, but most people would probably recognise that the people in the photos were involved in some sort of dance medium from the clothes being worn, the environment, and from the positions of the bodies.  However, unless one knows the language of dance or the context in which the dance is being performed, the photos may have no further meaning – and could therefore be inaccessible to some people.

This actually brings up several issues:

1. How can one describe an image that expresses emotion or abstract concepts?

2.  If such concepts can be described, who should be responsible (and have the capability) for doing so?

3.  Where does alt text fit into all this?

1. Describing Emotion and Abstract Concepts

So is it possible to extract emotional and abstract meanings and describe them for people who do not have a concept or understanding of such areas?  The Dayton Art Institute Access Art Website has attempted to do so.  For each artwork on the Access Art website, there is an image, a section on the artwork in context, comments by the Art Director (including an audio commentary) and a description of the artwork. Each section is no more than a couple of paragraphs. For example, the description of Frishmuth’s “Joy of the Waters” has attempted to put across abstract concepts such as the mood of the statue:

“The girl’s springing, energetic step, joyful expression, and animated hair create an exuberant mood and suggest that she may be a water sprite.” (Marianne Richter, Dayton Art Institute)

This helps make the artwork more accessible to visually impaired people and to people who do not know the language of art.

2. Responsibility for Describing Images

The people best qualified to describe a visual resource are probably the people who have decided it should be included in the first place.  For example, someone with archaeological experience is probably best placed to describe an image of a stone tool, whilst a geography tutor may be the most suitable person to describe a meteorological image from a satellite put onto the university’s VLE (Virtual Learning Environment).

The descriptions used will also differ depending on the image’s intended audience.  A museum generally has a wide public audience with many different levels of understanding and access requirements, whilst a Geography department may only have a small number of students at a fairly high level of understanding. 

So, unless the photographer in the quote above is also versed in the language of dance, he is unlikely to be able to describe the dance photos he has taken. Even if he were, he would also need to be aware of the level at which the descriptions needed to be pitched in terms of language and audience.

3. Use of alt Text

So where does alt text fit into all this?  The W3C (World Wide Web Consortium) WCAG (Web Content Accessibility Guidelines) recommends providing:

“…a text equivalent for every non-text element (e.g., via “alt”, “longdesc”,  or in element content)… For complex content (e.g., a chart) where the “alt” text does not provide a complete text equivalent, provide an additional description using, for example, “longdesc” with IMG or FRAME, a link inside an OBJECT element, or a description link.”

Therefore, alt text should be used for every image (even empty alt text should be used for spacers and decorative images), but it should only provide a brief text description of the image – the Guidelines on ALT Texts in IMG Elements recommend no more than 50 characters. Longer descriptions of an image, such as those describing complex images, emotions, or abstract concepts, should not be included as alt text, but should either be attached as a separate link (perhaps using the longdesc attribute or a D-link) or added next to the image.
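For example (a sketch; the chart and file names are invented), a complex image might combine short alt text with a longer description on a separate page:

```html
<!-- Short alt text on the image itself; the full prose description
     lives on a separate page referenced by longdesc -->
<img src="exam-results-chart.png"
     alt="Bar chart of exam results, 2003 to 2007"
     longdesc="exam-results-description.html">

<!-- A visible "D" link to the same description, as a fallback,
     since browser and screen reader support for longdesc is patchy -->
<a href="exam-results-description.html">D</a>
```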

Alt text can also be different for different audiences and purposes (see WebAIM’s Communicating the Purpose of the Graphic) and does not necessarily need to be completed by experts.  However, although the photos of the dancers should have had alt text, they may well have needed someone with a knowledge of dance to add it.  Basic alt text, such as “photo of dance students” could have been added by anyone but would there be any benefit to seeing roughly the same alt text added to over 80 images?  A choreographer would be capable of adding more informative alt text, such as stating the dance step or intention, e.g. “photo of a dancer in fifth position”, particularly where the intended audience was other dancers or dance students. 

Alt text is a requirement under the WCAG guidelines, but it shouldn’t be used to describe an image in a thousand words – these have to be written elsewhere.

Can You Really Have an Accessible Google Map?

Well, yes – according to Greg Kraus of LecShare Inc, in a presentation entitled “Creating Accessible Google Maps”, part of EASI’s (Equal Access to Software and Information) webinar series. Kraus has developed some JavaScript code that calls the Google Maps API (Application Programming Interface), which he has generously made freely available to the accessibility community.

This allows a Google map to be made accessible in two ways:

1. Navigation – By creating form buttons and tying some JavaScript commands to them, a Google map’s navigation can be made keyboard (and screen reader) accessible. It allows the user to zoom in/out, switch between Normal/Satellite/Hybrid views, and pan north, south, east, and west. Details on how to accomplish this are available from Making Google Maps Accessible (Part 1 – Controls). (However, the actual maps themselves will not be screen reader accessible – only the controls.)

2. Providing Accessible Data – Data, such as name, website, weather, etc., can be entered into an accessible form, which is stored in a database. The data is then retrieved from the database via the scripting mechanism described in Making Google Maps Accessible (Part 2 – Accessible Data), and displayed both on the map on custom-made pushpins and as an ordered list, which is accessible to screen readers. Because the pushpins are custom-made, the font associated with them can also be made larger. (A minimal sketch of both techniques follows below.)
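To give a flavour of the approach, here is a minimal sketch. This is not Kraus’s actual code: it uses method names from the current Maps JavaScript API (setZoom, panBy, setMapTypeId) rather than the 2007-era API his code was written against, and the coordinates and place name are invented for the example.

```html
<div id="map" style="width: 500px; height: 400px;"></div>

<!-- 1. Navigation: plain form buttons are keyboard- and screen
     reader-accessible, unlike the map's own mouse-driven controls -->
<p>
  <button type="button" onclick="map.setZoom(map.getZoom() + 1)">Zoom in</button>
  <button type="button" onclick="map.setZoom(map.getZoom() - 1)">Zoom out</button>
  <button type="button" onclick="map.panBy(0, -100)">Pan north</button>
  <button type="button" onclick="map.panBy(0, 100)">Pan south</button>
  <button type="button" onclick="map.setMapTypeId('satellite')">Satellite view</button>
</p>

<!-- 2. Accessible data: the information shown on the map's pushpins
     is also presented as an ordinary list for screen reader users -->
<ol>
  <li>Main Library – open 9am to 9pm</li>
</ol>

<script type="text/javascript">
// Assumes the Maps JavaScript API has already been loaded on the page.
var map = new google.maps.Map(document.getElementById("map"), {
  center: { lat: 51.5014, lng: -0.1419 },
  zoom: 14
});
new google.maps.Marker({
  position: { lat: 51.5014, lng: -0.1419 },
  map: map,
  title: "Main Library"
});
</script>
```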

Although Google Maps only understands latitude and longitude co-ordinates, rather than actual addresses, Google does provide a publicly available API which will do the translation for you. However, it should be noted that any information that shows terrain or streets will be inaccessible to screen reader users. Nevertheless, descriptions could be added to the pushpins to describe their relationship to other features.
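Again as a hedged sketch (this uses the current API’s geocoding service rather than whatever was available in 2007, and the address is invented), the translation step looks something like this:

```html
<script type="text/javascript">
// Translate a street address into the latitude/longitude the map
// needs, then re-centre the map on the result.
var geocoder = new google.maps.Geocoder();
geocoder.geocode({ address: "10 Downing Street, London" }, function (results, status) {
  if (status === "OK" && results.length > 0) {
    map.setCenter(results[0].geometry.location);
  }
});
</script>
```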

It was quite exciting to see attempts to make something as inaccessible as a map accessible, and it’s great to see that Greg Kraus has made his work freely available to us all.