Models of disability and their relation to accessibility

This week I have been writing an introductory section for a paper on models of disability and accessibility.  This has led me to think again about the relationship between the two.

The Medical Model of Disability

Disabilities have traditionally been described with reference to the medical conditions from which they were seen to arise.  This is known as the medical model of disability and was encapsulated in the World Health Organization's (WHO) 1980 International Classification of Impairments, Disabilities, and Handicaps [1], which included the following definitions:

  • Impairment = a loss or abnormality of physical bodily structure or function, whether of psychological, physiological or anatomical origin
  • Disability = any limitation or loss of function, deriving from impairment, that prevents the performance of an activity in the manner considered normal for a human being
  • Handicap = the disadvantaged condition, deriving from an impairment or disability, that limits a person in performing a role considered normal in respect of their age, sex, and social and cultural factors

The Social Model of Disability

The main alternative to the medical model of disability is the social model.  This has been highly influential over the last 30 years in shaping policy, practice and attitudes towards disabled people.  The social model stemmed from the publication of Fundamental Principles of Disability in 1976 [2].  This revolutionised the understanding of disability, arguing that it was not mainly caused by impairments but by the way society was organised and responded to disabled people.

In the social model, disability is caused by society and is not the ‘fault’ of an individual disabled person, nor an inevitable consequence of their limitations. Disability is the product of the physical, organisational and attitudinal barriers present within society.  The social model considers disabled people as part of society in its economic, environmental and cultural dimensions.

The WHO revised its definitions of disability, in part as a response to this social model and in part from the realisation that the medical model was of very limited use in defining effective responses to the needs of disabled people.  In 2001 the WHO published the International Classification of Functioning, Disability and Health (ICF) [3].  In the ICF, disability is described as:

… the outcome or result of a complex relationship between an individual’s health condition and personal factors, and of the external factors that represent the circumstances in which the individual lives.

Building on the social model, the IMS Global Learning Consortium, in introducing its work on developing technical standards for accessibility in e-learning, offered a more education-specific definition of both disability and accessibility [4]:

… the term disability has been re-defined as a mismatch between the needs of the learner and the education offered. It is therefore not a personal trait but an artifact of the relationship between the learner and the learning environment or education delivery. Accessibility, given this re-definition, is the ability of the learning environment to adjust to the needs of all learners. Accessibility is determined by the flexibility of the education environment (with respect to presentation, control methods, access modality, and learner supports) and the availability of adequate alternative-but-equivalent content and activities. The needs and preferences of a user may arise from the context or environment the user is in, the tools available (e.g., mobile devices, assistive technologies such as Braille devices, voice recognition systems, or alternative keyboards, etc.), their background, or a disability in the traditional sense. Accessible systems adjust the user interface of the learning environment, locate needed resources and adjust the properties of the resources to match the needs and preferences of the user.

Post Social Models of Disability

The social model of disability has been criticised, and various moves instigated to go beyond it; see, for example, Tom Shakespeare and Nicholas Watson (2001) [5].  They argue instead for an embodied ontology: “we are our body, with all of its imperfections and impairments”.  Further, they assert that “there is no qualitative difference between disabled and non-disabled people because we are all impaired in some form, some more than others”. They consider the idea of a normal/perfect person to be mythical. However, this discussion has taken place largely within the academic world of disability studies and, I would contend, has yet to have widespread impact beyond it, in particular, and most relevant to this post, on accessibility.

That said, one direct relation to accessibility is that accessibility accommodations have benefits for many who do not consider themselves disabled. An example is the feature, present in most modern browsers, that enlarges the display of a web page in response to a shortcut key, usually Ctrl and +.  This was originally introduced for those with a visual impairment but is at times useful to all.  A major piece of research undertaken by Forrester for Microsoft in 2003 [6] supports this case for the wider benefit of accessibility accommodations.  It found that 57% of working-age computer users are likely to benefit from accessible technology (where accessible technology is understood as technical responses to promote access to computer hardware and software for disabled people).

Functional Models of Disability

The term accessibility is widely used in the context of web design. The W3C describes web accessibility thus:

Web accessibility means that people with disabilities can perceive, understand, navigate, and interact with the Web, and that they can contribute to the Web. [7]

This is in essence based on a functional model of disability.  Generally, in Human-Computer Interaction (HCI), a functional approach is most useful. What is important in the design of web-based applications or content is how a diversity of users access the computer. A design can be said to be accessible if it facilitates full interaction by all users, irrespective of the assistive technologies or access approaches that some may adopt.

The AccessForAll 3.0 Personal Needs and Preferences (PNP) specification enables comprehensive profiles of individuals’ access approaches and assistive technologies to be stored, based on a functional model. The specification is being developed within the IMS Global Learning Consortium and went to public draft in September 2012 [8]. These functional profiles could be generated by disabled people themselves, possibly with the help of advisors, by entering their specific access approaches and requirements into a web form. Such profiles have great potential in personalisation approaches to accessibility, and in analytics-based approaches to identifying accessibility issues, as discussed elsewhere in this blog.
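
To make the idea concrete, a functional profile and a simple matching step might look like the following sketch. The field names here are hypothetical illustrations of my own; they do not follow the actual AfA 3.0 PNP vocabulary.

```python
# Hypothetical illustration of a functional needs-and-preferences profile.
# Field names are invented for this sketch; they do NOT follow the real
# AccessForAll 3.0 PNP information model.
def unmet_needs(profile, resource_features):
    """Return the declared access needs that a resource does not satisfy."""
    return [need for need in profile["access_needs"]
            if need not in resource_features]

profile = {
    "user": "student-001",
    "access_needs": ["captions", "keyboard-only", "high-contrast"],
    "assistive_technology": ["screen-magnifier"],
}

# A resource declaring which access features it provides
resource_features = {"captions", "high-contrast"}

unmet = unmet_needs(profile, resource_features)  # ["keyboard-only"]
```

A personalisation system could use such a mismatch list to select an alternative-but-equivalent resource, in line with the IMS re-definition of accessibility quoted above.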

A Note on WCAG and Models of Disability

The Web Content Accessibility Guidelines (WCAG) 2.0 [9], a formal recommendation of the web standards body the W3C, are the de facto international standard on web accessibility.  They are targeted at web developers and cover what is normally referred to as technical accessibility. They are organised under four top-level principles of web accessibility: web pages should be perceivable, operable, understandable and robust. WCAG focuses on the properties of a web page, and in so doing might be considered to be based on a functional model.  However, the user is deliberately subsumed in its formulation; its concern is the functional properties of the web page, not the person accessing it.  This ignores a key consequence of the social model of disability: the importance of context and the relational nature of accessibility.
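
As an illustration of what a “technical accessibility” check looks like, here is a minimal sketch (my own, not W3C code) of one WCAG-style test: flagging images without a text alternative. Note how it inspects only properties of the page; the user and their context are nowhere in view.

```python
from html.parser import HTMLParser

class AltTextCheck(HTMLParser):
    """Minimal illustration of a technical accessibility check:
    count <img> elements lacking an alt attribute (in the spirit of
    WCAG 2.0's text-alternatives requirement)."""
    def __init__(self):
        super().__init__()
        self.missing_alt = 0

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the element
        if tag == "img" and "alt" not in dict(attrs):
            self.missing_alt += 1

checker = AltTextCheck()
checker.feed('<p><img src="a.png" alt="chart"><img src="b.png"></p>')
checker.missing_alt  # 1 image lacks a text alternative
```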

The development of web assets or applications is a process. Accessibility considerations need to be built into everyday practices across the web product life-cycle, from conception and specification, through development, to delivery and maintenance. Recognising this, the British Standards Institution developed BS 8878:2010 Web Accessibility Code of Practice [10]. This facilitates a pragmatic application of WCAG 2.0 within a process-based approach and reasserts a user focus.


Our models of disability are important: they shape our attitudes and affect how effectively the needs and preferences of disabled people are met in design. The medical model is now widely seen as outmoded and a perpetuator of discriminatory attitudes. The social model has had widespread influence. It is important in accessibility considerations because it recognises the importance of the users’ context and supports the view of accessibility as a property of a relationship; in the case of web accessibility, the relationship between the diversity of users and the web resource or application. Functional models have been asserted to be the most useful in design and development, and their potential for personalisation and analytics has been highlighted.


(All web-links checked 10 October 2012)

[1] World Health Organization (1980) International Classification of Impairments, Disabilities, and Handicaps: A manual of classification relating to the consequences of disease. Geneva: WHO.

[2] UPIAS (1976) Fundamental Principles of Disability. London: Union of the Physically Impaired Against Segregation. Available online at:

[3] World Health Organization (2001) International Classification of Functioning, Disability and Health. Geneva: WHO. Searchable online versions available at:

[4] IMS Global Learning Consortium (2004) IMS AccessForAll Meta-data Overview.

[5] Tom Shakespeare and Nicholas Watson (2001) ‘The social model of disability: an outdated ideology?’, in Sharon N. Barnartt and Barbara M. Altman (eds) Exploring Theories and Expanding Methodologies: Where we are and where we need to go (Research in Social Science and Disability, Volume 2), Emerald Group Publishing Limited, pp. 9-28. Available online at:

[6] Microsoft (2004) The Wide Range of Abilities and Its Impact on Computer Technology. Available online at:

[7] World Wide Web Consortium (2005) Introduction to Web Accessibility. Available at:

[8] IMS Global Learning Consortium (2012) Access for All (AfA), Version 3.0 Specification, Public Draft 1.0. Primer and specification documents available from:

[9] W3C (2008) Web Content Accessibility Guidelines 2.0 (WCAG 2.0). Available at:

[10] British Standards Institution (2010) BS 8878:2010 Web Accessibility – Code of Practice. Charged-for publication, available by subscription through BSOL.

UK Government cancels Code of Practice for Higher Education on Equality Act 2010

Today I have been writing a section on Disability and Accessibility for a paper for LAK13 entitled “What Can Learning Analytics Contribute to Disabled Students’ Learning and to Accessibility in e-Learning Systems?”.  In doing so I had cause to check the status of the long-promised Code of Practice for Higher Education covering the UK’s Equality Act 2010.  I discovered this on the Equality and Human Rights Commission’s website:

Other Codes of Practice

We were intending to produce further statutory codes of practice on the Public Sector Equality Duty (PSED), which came into force on 5 April 2011, and codes for the Further and Higher Education (FEHE) sector and schools.

Unfortunately, we are no longer able to proceed with these plans. The Government is keen to reduce bureaucracy around the Equality Act 2010, and feels that further statutory guidance may place too much of a burden on public bodies. Although the Commission has powers to issue codes, it cannot do so without the approval of the Secretary of State, as we are reliant upon government to lay codes before parliament, in order for them to be statutory.

It is the Commission’s view that, rather than creating a regulatory burden, statutory codes have a valuable role to play in making clearer to everyone what is and is not needed in order to comply with the Equality Act. However, as this is no longer an option, we feel the best solution is to issue our draft codes as non statutory codes instead. These non statutory codes will still give a formal, authoritative, and comprehensive legal interpretation of the PSED and education sections of the Act and will make it clear to everyone what the requirements of the legislation are.


These now non-statutory codes do not yet appear to have been published, and given this further discouragement from Government, who knows when they will be.  I, and many others, had been eagerly hoping that, among other things, the statutory codes would provide clear legal guidance on “reasonable adjustments” generally and web accessibility specifically.  It was hoped that they would include reference to the key external accessibility standards: WCAG 2.0 and BS 8878.

In my view this is a very retrograde step.  The old CoP relating to the previous legislation, the Disability Discrimination Act (1995, as amended 2005), is still available, but it now has no statutory basis and is outdated in terms of educational practice, web accessibility standards, technology and the law.  Available at:

I am now chairing the newly formed Open University Web Accessibility Standards Working Group, defining a common web accessibility standard for the OU and developing associated support documentation for managers and developers.  This is part of an overall Web Governance Review.  This work needs a secure legal underpinning, which I had hoped would come from the CoP. It would be helpful if we could authoritatively point to a statutory statement of what is considered an appropriate level of web accessibility under the term “reasonable adjustment”.  That said, it is probably optimistic to think the CoP would have given us that.

As commented elsewhere in this blog, defining levels of accessibility is problematic. This follows from the fact that accessibility is a property of the relationship between the user and the web resource; it depends on the circumstances in which, and the technology with which, they access it. More generally, it is a summation of these relationships across the full diversity of potential users. Web accessibility is not, as is usually inferred from WCAG 2.0 and in most work on accessibility metrics, a property of the resource alone. However, organisations in education, commerce and the public sector are longing for a way of authoritatively asserting that they have sufficiently addressed accessibility in terms of their legal obligations.

Web accessibility metrics – “What are they for then?”


Yesterday I participated in the W3C Web Accessibility Initiative’s (WAI) Website Accessibility Metrics Online Symposium. Details of the symposium, with access to its papers and presentations, are available at:

This blog post is not an attempt to give a comprehensive report of the symposium, but to air some of my thinking about it and how it relates to ongoing work I am involved in at the Open University, where I am employed as a Senior Research Fellow with an internal consultancy role on accessibility.

Personal basis for interest in web metrics

I have been working on technology for people with disabilities since 1991. Since 1998, when I joined the Open University, that work has focused on technology that enables teaching and learning. My academic background is in cybernetics and I usually describe myself as a systems engineer, so my main interests are in access to systems and in systems behaviours that can be enabling. Most systems today have web-based interfaces, so web accessibility is an important issue.

In the interdisciplinary teams I have led or been part of, and in the accessibility work of the Institute of Educational Technology for the rest of the university, our evaluation of accessibility has put the highest value on user (disabled student) evaluations. These are normally based on observational studies with participants interacting with functioning prototypes, followed up by structured interviews. For pragmatic reasons, extensive expert evaluations supplement these end-user evaluations (early in development, and for procurement assessments, they are often the best method). However, these expert evaluations are not based on the automated or semi-automated evaluation tools, often associated with the metrics reported in this symposium, that evaluate against the web accessibility standards. Rather, they are based on heuristic methods: interacting with the prototypes using a range of assistive technologies (ATs) and access techniques to, in effect, emulate users with different disabilities. This is to answer two key research questions for a range of different users:

  • Can the disabled user undertake the actions intended by the design?
  • What will the end-user experience be (compared with a user not deploying AT or access approaches)?

So the web accessibility guidelines were, and still are, not core to our evaluation work, although the access principles are the same in both. (Note that we have been involved in more conventional accessibility evaluation against the standards too, but not in methodological development there.) Where the standards have been important is in communicating to developers what needs to be done and in supporting their QA practices.

I became aware of the work on web accessibility metrics sometime around 2002, especially the work that was subsequently sponsored by the European Commission. I was part of the accessibility community (and there was a sizeable number of us) who were quite sceptical. I could not see the value (for my work) of a single score for the accessibility of a set of web pages. What I wanted to know was: who could access the site/interface with few problems, who would have significant problems, where were the deficits, and what could be done about them? I needed fine-grained information, not an overall metric. I have only just envisaged a possible use for metrics in my work, in facilitating a systems behaviour in e-learning, which I point to at the end of this blog post. This was my motivation for taking a detailed look at the state of the art of accessibility metrics at this stage, and hence my participation in the symposium.

I am currently undertaking an internal standards review project, due to complete before Christmas. I am reviewing all internal web accessibility policy statements and standards (we have historically had a silo situation, which we are seeking to rectify) against WCAG 2.0 and the British Standards Institution’s BS 8878 Web Accessibility Code of Practice. So my attention is currently on the standards at some level of detail.

I have given this rather long preamble so you can judge the perspective behind my comments below.

A definition of Web Accessibility Metrics

Web metrics in general quantify the results of assessments of properties of web pages and their use; they might include:

  • Web usage and patterns
  • User supplied data
  • Transactions
  • Site performance
  • Usability
  • Financial analysis (ROI)

Web accessibility metrics try to give an assessment of the level of accessibility against a given standard, e.g. WCAG 2.0.

What are they for?

Three basic questions can be asked about any metric:

  • What should you measure?
  • How do you measure it?
  • What do you do with the data once you have it?

Most people in the field would argue that web accessibility metrics measure the degree of accessibility of a web page or collection of web pages. The main school of thought defines accessibility in terms of conformance to web accessibility guidelines such as WCAG 1.0 or WCAG 2.0. A lot of the research in the field has been concerned with defining the form of the measure that makes up a particular metric, implementing tools to automate its application, and then researching the validity and reliability of the metric. However, from my perspective on the field, the third question, what you actually do with the metric, is much neglected by the web accessibility metrics research community.

What do you do with relative rankings of the accessibility of web sites?

  • Large-scale comparative studies: It seems to me that the most obvious use case, and the one where such metrics have had most impact to date, is the large-scale comparative study of websites in a particular domain, with the possibility of repeating such studies over time. If a credible, stable metric of web accessibility were established (at the moment we have many, with differing properties), it would enable investigations of the form: What is the overall level of web accessibility in UK public-sector websites? Is accessibility in online shopping sites an improving situation? Such studies can be important in informing high-level policy and legislation.
  • Litigation: [I will confine myself to the UK legal situation here.] In the UK we have anti-discrimination legislation, not accessibility legislation. This is now based on the Equality Act 2010, which builds on the Disability Discrimination Act (DDA), last amended in 2005.
It is unlawful for any provider of services to the public, or educational establishment (in my case), etc. to discriminate in that provision against a person with a disability on the basis of their disability. What is more, they are required to make “reasonable adjustments” to meet the needs of disabled people and to be anticipatory in so doing.
Now, websites are not specifically mentioned in this UK legislation, but they are in the codes of practice that accompany it. I always argue that, if you think about it from the outset, dealing with web accessibility is reasonable. However, this is yet to be tested in a court of law. (There was a case some years ago when the RNIB began court proceedings against a major supermarket chain because of the inaccessibility of its online shopping site. The case was settled out of court; the RNIB worked with the company concerned to improve its site and everyone won, except that the legal position on web accessibility was not clarified.)
If a case of web accessibility did ever go to the UK courts, it is most likely that expert witnesses would be called for both sides to establish, firstly, whether the person(s) concerned were substantially disadvantaged and, if so, whether this was because the site in question was inaccessible (see note below); and then whether it would have been reasonable for the provider of the website to have made it accessible. I could see a role here for web accessibility metrics and large-scale studies of numerous sites. With an evaluation of the site in question using the same metric, a “score” could be given for its level of accessibility and comparisons made with other sites. However, would any of the current metrics, and the body of research around them, stand up in a court of law? (I would never appear for the defence in such a case, but feel that if I did I could knock some holes in the existing metrics to try to discredit them; if I could, others would be able to too.)
[Note: I had a recent exchange on LinkedIn with the accessibility expert who appeared for the prosecution in the famous case (for those of us in the field then) when the website for the 2000 Summer Olympics in Sydney was taken to court under Australian law for poor accessibility. He made the point that he had to tell the court whether the site was accessible or not, i.e. give a binary assessment. My reaction was “if that’s the law, then the law is an ass” [Charles Dickens, Oliver Twist]. Metrics can have a role here in educating the law, and the wider world, that accessibility is not a binary property. Indeed, it is a property that will be different for different users, though I fear metrics are less helpful there.]
  • Remedial action: It seems to me that web accessibility metrics are poor tools for identifying where remedial action is required. However, in the final section I allude to a future scenario where they may have a role.
  • Others? … Please feel free to suggest some in comments to this blog post.

Further Questions:

I will leave a few other questions undiscussed, but they are informing my thinking about web accessibility metrics:

  • What are web accessibility guidelines for?
  • What does a metric try to give a measure of (how does it relate to the guidelines)?
  • Who are they for: who are the users of the tools that produce the metrics, and who are the consumers of the resulting metrics?
  • What are they for (in addition to the points raised above)?

Specific examples of schemes of web metrics

I just list here the specific schemes of web metrics mentioned in the papers of the symposium. I try to give a defining characteristic for some, but make no attempt at a comparative study.

  • WAB Score [Paper 1]: The Web Accessibility Barrier (WAB) score metric was proposed by Parmanto and Zeng (2005). It enables the identification and quantification of accessibility trends across Web 1.0 and Web 2.0 websites. The WAB score formula tests 25 WCAG 1.0 criteria that can be evaluated automatically.
  • Failure rate [Paper 1], [Paper 6]: The failure-rate metric computes the ratio between the number of accessibility violations and the number of potential failure points. First proposed by Sullivan and Matson (in 2000), it is possibly the start of web accessibility metrics.

  • UWEM [Paper 11]: Part of the Unified Web Evaluation Methodology developed in three linked EU projects. Based on WCAG 1.0; a migration strategy to WCAG 2.0 has been published but not yet executed (see Paper 11). The UWEM score function is used for presenting large-scale web accessibility monitoring results. The calculation yields a continuous ratio with a minimum of 0, in the case where no barriers are found; if all tests fail each time they are applied, the score reaches its maximum value of 1.
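
The failure-rate calculation that underlies such scores can be sketched as follows (the numbers are purely illustrative, not from any real evaluation):

```python
def failure_rate(violations, failure_points):
    """Ratio of actual accessibility violations to potential failure points.

    0.0 means no barriers were found; 1.0 means every applicable test
    failed each time it was applied.
    """
    if failure_points == 0:
        return 0.0  # nothing testable on the page, so no barriers found
    return violations / failure_points

# e.g. 12 violations across 80 points where a test could have failed
rate = failure_rate(12, 80)  # 0.15
```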

  • Barriers Impact Factor, BIF [Paper 2]

BIF reports, for each error detected in evaluating against WCAG 2.0, the list of assistive technologies/disabilities affected by that error. The scores are then calculated as follows:

BIF(i) = Σ_error #error(i) × weight(i); the total BIF is tBIF = Σ_i BIF(i); and the average BIF is aBIF = tBIF / #pages, where:

  • i represents an assistive technology/disability affected by detected errors;
  • BIF(i) is the Barrier Impact Factor for the i-th assistive technology/disability;
  • #error(i) is the number of detected errors which affect the i-th assistive technology/disability;
  • weight(i) is the weight assigned to the i-th assistive technology/disability.
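
Reading those definitions literally, the BIF calculation can be sketched as follows. The groups, error counts and weights here are invented for illustration; real weights come from the metric's authors.

```python
# Hypothetical error counts and weights per assistive-technology/disability
# group (i); not taken from any published BIF evaluation.
errors  = {"screen-reader": 14, "keyboard-only": 6, "low-vision": 9}
weights = {"screen-reader": 3,  "keyboard-only": 2, "low-vision": 1}

# BIF(i) = #error(i) x weight(i) for each group i
bif = {i: errors[i] * weights[i] for i in errors}

tbif = sum(bif.values())  # total BIF: 14*3 + 6*2 + 9*1 = 63
abif = tbif / 10          # average BIF over a 10-page sample: 6.3
```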

  • WAQM: A fully automatic metric designed to measure conformance (currently to WCAG 1.0) in percentage terms. The ratio between potential failure points and actual violations, that is, the failure rate, is computed for all checkpoints that can be tested automatically. The severity of checkpoint violations is taken into account (via WCAG 1.0 priorities), and each failure rate is weighted by this severity. (An interesting, but to my mind inconclusive, comparison between evaluations undertaken by WAQM (based on WCAG 1.0) and expert(?) evaluations against WCAG 2.0 is pointed to in Paper 6.)
  • SAMBA [Paper 4]: A Semi-Automatic Method for measuring Barriers of Accessibility. SAMBA integrates manual and automatic evaluations, drawing on the harshness of barriers and the error rates of tools.

  • BITV-Test: A semi-automated, web-based accessibility evaluation tool employing a rating approach. It undertakes page-level rating and aggregates the page-level ratings into an overall test score. BITV-Test’s 50 checkpoints map to WCAG level AA. Each checkpoint has a weight of 1, 2 or 3 points, depending on criticality.

When testing a page against a checkpoint, evaluators assess the total pattern, or the set of instances, and apply a graded Likert-type scale with five rating levels:

1. pass (100%)
2. marginally acceptable (75%)
3. partly acceptable (50%)
4. marginally unacceptable (25%)
5. fail (0%)

Ratings reflect both the frequency and the criticality of flaws. For ratings other than a full “pass”, a percentage of the weight is recorded. Page-level rating values are aggregated over the entire page sample. At a total score of 90 points or more, the site is considered accessible.

The final BITV-Test (there are also self-assessment and design-support versions) is a tandem test; in other words, two qualified evaluators test independently of each other and harmonise their results only once they have finished their respective test runs.
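
The page-level part of this rating scheme can be sketched as follows. The checkpoint weights and ratings here are invented; the real BITV-Test rates 50 checkpoints and applies the 90-point threshold to the aggregated sample.

```python
# Percentage of a checkpoint's weight earned at each rating level,
# mirroring the five-level scale described above.
RATING = {
    "pass": 1.00,
    "marginally acceptable": 0.75,
    "partly acceptable": 0.50,
    "marginally unacceptable": 0.25,
    "fail": 0.00,
}

def page_score(checkpoints):
    """checkpoints: list of (weight, rating) pairs for one page.
    Weights of 1, 2 or 3 points reflect checkpoint criticality.
    Returns the points earned on that page."""
    return sum(weight * RATING[rating] for weight, rating in checkpoints)

# Invented ratings for three checkpoints on one page
sample = [(3, "pass"), (2, "marginally acceptable"), (1, "fail")]
page_score(sample)  # 3*1.0 + 2*0.75 + 1*0.0 = 4.5
```

Page scores would then be aggregated over the whole page sample and, in the tandem test, harmonised between the two evaluators.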

  • eChecker [Paper 8]: Not a metric but an automated web page accessibility evaluation tool that evaluates according to UWEM; it was used in Paper 8 in a comparative study with eXaminator.

  • eXaminator [Paper 8]: eXaminator has its roots in manual evaluations made by experts (since 2000). Unlike metrics such as WAQM, which seeks a failure rate for each page, or UWEM, which seeks a failure rate for each checkpoint, eXaminator assigns a score to a specific occurrence in a page. The metric, the authors argue, is faithful to WCAG’s definition of conformance and its unit of conformity: the page.

  • Logic Scoring of Preference (LSP) method [Paper 9]

LSP is an aggregation model (based on neural networks) that computes a global score from intermediate scores (Dujmović, 1996). These intermediate scores consist of failure rates or the absolute number of accessibility problems. (Paper 9 reports using this approach in both device-tailored and user-tailored metrics.)

  • eGovMon Project [Paper 11]: The paper reported on the issues uncovered by this Norwegian project in trying to update UWEM to a new metric based on WCAG 2.0 (a non-trivial task, as discussed in the paper).

A critique of Web Accessibility Metrics (Martyn’s views)

How much do they help developers find and fix accessibility deficits? My thinking to date is that, in my context, very little. However, I am open to being persuaded otherwise by others’ experience (so please add a comment). A possible role for them in system-wide accessibility review in an e-learning context is envisaged in the final section of this blog.

One good thing, recognised in almost all web accessibility metrics approaches, is that accessibility is not a binary issue. Websites are not simply accessible or inaccessible but have degrees of accessibility. In fact, they have degrees of accessibility for different users, and this is not recognised in any of the approaches known to me (though I am happy to be corrected). So few, if any, of the approaches enable statements like “this site, while reasonably accessible to screen-reader users, would be problematic for those with a hearing impairment or who are colour-blind” to be directly and correctly deduced.

None of the web accessibility metrics considered in [Paper 3] directly addresses the developer effort needed to correct the accessibility problems. That paper went on to consider which accessibility deficits were due to deficits in the templates used in authoring the sets of web pages under review. However, this raises a more general question from the perspective of the manager or web developer: what does the accessibility metric tell me about the cost (in terms of time and effort) of improving that metric to a given level, for a given set of web resources? I would argue that none of the existing metrics facilitates this, although the data collected in calculating the metric will also be helpful in estimating the cost of remedial action. Is this a feature of the automated and semi-automated tools created to calculate web metrics, i.e. do the tools make the useful underlying data available? Estimates of the cost of remedial action are thus mostly facilitated by the automated/semi-automated evaluation techniques, not by the metric itself. The one thing the metric may give is a scale on which to ask: how much will it cost to improve by so much, and then by a degree further? However, I have never heard managers of web resources frame the question this way. It is usually: what is it going to cost to address the deficits and meet WCAG 2.0 Level AA (for example)? I am not sure metrics help here.

Where are the users? I find this the most disturbing aspect of accessibility metrics (and of web accessibility standards too). I have yet to encounter any work (and I would be delighted to have it pointed out to me) that attempts to verify whether the metrics correlate with the access experience of disabled people. I know that such a study would be difficult and costly, because it would have to be done at scale and involve a large diversity of users to be meaningful. But until such work is done we are in a self-referential circle, convincing ourselves we have something of real worth. This follows from the fact that the correlations that have been done are between expert evaluations and the metrics generated by various tools, both working to the same standards, which, as far as I am aware, have not undergone large-scale assessment against the experience of diverse users of web sites where they have been rigorously applied. [I am not questioning the validity of WCAG 2.0 here – I might elsewhere 😉 – just asserting the importance of user evaluation in ensuring validity.]
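The missing validation step is, at its core, a correlation study. As a sketch (with entirely made-up data), one could rank sites by a tool-generated metric and by mean user-reported difficulty and compute a Spearman rank correlation, implemented here from scratch with the standard library:

```python
# Sketch of the validation step argued for above: correlate a tool-generated
# metric with user-reported difficulty, site by site. Data is made up.

def rank(values):
    """Average 1-based ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

tool_scores = [0.9, 0.7, 0.4, 0.8, 0.2]       # metric per site (higher = better)
user_difficulty = [1.2, 2.5, 4.1, 2.0, 4.8]   # mean reported difficulty per site

# A strongly negative correlation would suggest the metric tracks real user
# experience; anything near zero would support the concern raised above.
print(spearman(tool_scores, user_difficulty))
```

The statistics are trivial; the hard and expensive part, as noted above, is gathering `user_difficulty` at scale from a genuinely diverse user population.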

The other users to consider here are the consumers of the metrics. Are the metrics meeting their needs? Are they well understood by those who use them?

The importance of context

Context is very important to the evaluation of user experiences. This is a long-established principle in evaluations undertaken by my Institute (established long before I was there). The web accessibility metrics reviewed here, for the most part, remove context. This issue was raised and discussed in the paper by Markel Vigo, of the University of Manchester, entitled “Context-Tailored Web Accessibility Metrics” [Paper 9].

Accessibility as process

BS 8878 provides a framework that allows definition – and measurement – of the process undertaken by organisations to procure an optimally accessible web site, but is at present a copyrighted work and not freely available. In comparison to a purely technical WCAG conformance report, the nature of the data being gathered for measurement means that inevitably the measurement process is longer; but it also provides a richer set of data giving context – and therefore justification – to current levels of accessibility.

[David Sloan, Brian Kelly Paper 10]

This paper, entitled “Web Accessibility Metrics For A Post Digital World“, was less a presentation of previous results than a position paper offering a perspective on possible future directions for metrics, and it stood out as distinct from the other papers. It was closely aligned to my own views, but that is perhaps not surprising as I am a regular follower of Brian’s blog. (I know David and Brian quite well and respect them both.)

I commend Brian Kelly’s blog, which covers broader issues than accessibility; he has beaten me to getting up a post relating to this metrics workshop:

One theme of the paper is that measuring accessibility should not be restricted to web pages: it should evaluate to what extent, interpreted in the OU’s context, disabled students can achieve the same learning goals as other students. This may be achieved through alternative learning activities, alternative online resources, or resources in alternative formats. This has been a major theme in my work for the last 10 years in the development of the AccessForAll metadata-based approach for managing alternatives, and in implementations of it in EU4ALL. There has always been a tension, in evaluating for accessibility, between those who assume a universal accessibility approach (one size fits all) and those who seek to facilitate flexibility and adaptability via alternatives and personalisation. It is always easier to measure something tightly defined and unchanging, but that may not be the best access solution.
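The alternatives-and-personalisation idea can be sketched in a few lines. This is a much-simplified illustration of AccessForAll-style matching, not the actual metadata schema: each resource declares the access modes it requires, and a user is served the original or the first declared alternative whose modes they can all use:

```python
# Simplified sketch of AccessForAll-style matching. Field names and the
# matching rule are illustrative, not the real AccessForAll schema.

RESOURCES = {
    "lecture-video": {
        "access_modes": {"visual", "auditory"},
        "alternatives": [
            {"id": "lecture-captioned", "access_modes": {"visual", "textual"}},
            {"id": "lecture-transcript", "access_modes": {"textual"}},
        ],
    },
}

def select_resource(resource_id, usable_modes):
    """Return the original id if the user can use all its access modes,
    otherwise the first alternative whose modes are all usable, else None."""
    res = RESOURCES[resource_id]
    if res["access_modes"] <= usable_modes:
        return resource_id
    for alt in res["alternatives"]:
        if alt["access_modes"] <= usable_modes:
            return alt["id"]
    return None

print(select_resource("lecture-video", {"visual", "auditory"}))   # lecture-video
print(select_resource("lecture-video", {"visual", "textual"}))    # lecture-captioned
print(select_resource("lecture-video", {"textual"}))              # lecture-transcript
```

Notice the measurement tension the paragraph describes: what should a metric score here – the original video in isolation, or the experience delivered after matching, which differs per user?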

One of the strengths of BS 8878 is its perspective of embedding accessibility considerations in a company or organisation. (Note the link is to the BSI shop to order a paid copy. UK universities may be able to obtain a copy without further charge if their libraries subscribe to BSI online.) BS 8878 has a 16-step model of web product development, from pre-start to post-launch of the web product. It is noteworthy that only 4 of these steps reference WCAG 2.0.

What I understand David Sloan and Brian Kelly to be suggesting is that there could be a role for metrics across such a process: BS 8878 provides a framework against which “measurement” could be made. Currently reflecting on how BS 8878 might be applied across the university, and on this proposal, I am left with the questions:

  • What would be the nature of measurements against BS 8878’s 16-step model?
  • Would there be any value in a metric that somehow aggregated these measurements?
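As an exploratory stab at the second question, here is one way per-step measurements might be aggregated. Everything here is invented for illustration: BS 8878 defines no such scores or weights, and the "report the minimum alongside the mean" design is simply my guard against one failed step vanishing into an average:

```python
# Exploratory sketch: aggregating per-step measurements against a 16-step
# process model. Scores and weights are invented; BS 8878 defines no metric.

def aggregate(step_scores, weights=None):
    """step_scores: {step_number: score in [0, 1]} for the 16 steps.
    Returns (weighted_mean, weakest_step). Reporting the minimum alongside
    the mean stops one badly failed step disappearing into an average."""
    weights = weights or {s: 1.0 for s in step_scores}
    total_w = sum(weights[s] for s in step_scores)
    mean = sum(step_scores[s] * weights[s] for s in step_scores) / total_w
    weakest = min(step_scores, key=step_scores.get)
    return mean, weakest

scores = {s: 1.0 for s in range(1, 17)}
scores[6] = 0.2          # suppose one step was done badly
mean, weakest = aggregate(scores)
print(round(mean, 2), weakest)   # 0.95 6
```

The example also hints at an answer to the value question: the aggregate (0.95) looks reassuring while the weakest-step report points straight at the problem, so a single aggregated number may be the least useful output.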

Under “Major Difficulties” the paper raises the following point:

The obvious difficulties in defining and implementing an accessibility metric that incorporates quality of user experience and the quality of the process undertaken to provide that experience are the complexity of the environment to be measured – i.e. not just a collection of resources that enable an experience, but also evidence of organisational activity taken to enhance inclusion.
[David Sloan, Brian Kelly Paper 10]
They cite the TechDis Accessibility Passport as one possible way forward. Within the Open University, a programme called Securing Greater Accessibility (SeGA) is embedding accessibility considerations across our processes (cf. BS 8878) and providing the mechanisms to record what steps have been taken to “enhance inclusion” at both the Module level and the web asset level.

The link between web standards, web metrics and Learner Analytics within a University

Some ideas are just beginning to emerge in my mind that might suggest a role for accessibility metrics within the OU’s eLearning context. This was triggered by a presentation last week on another internal project, on Learner Analytics. This might be the only bit of this 4,500+ word blog post that is original to me; if that is not the case and anyone knows of a similar idea, please flag it. If colleagues give me confidence that the idea is worth exploring, I will write it up as a briefing paper in the New Year.

The Open University has about 13,000 disabled students. It uses a Virtual Learning Environment (VLE), based on Moodle, that manages the timely presentation of online resources to students as they undertake their studies. (It does more besides, and there are other systems integrated with it and alongside it, but that description will suffice for this discussion.) The Learner Analytics project is exploring what data about students’ experience of their studies can be readily extracted from the VLE and other systems, and what could be meaningfully deduced from it. I have raised the possibility that whatever can be analysed could potentially be factored across disability types or even, my preference but more challenging, functional abilities. (There are some technical and data protection issues here yet to be explored.)

For example, when comparing student completion rates across different modules (across all modules if you wish), it would be possible to detect whether there were any different patterns for students with disabilities, and then whether the pattern differed for students with a particular disability or, ideally, a particular access requirement.
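On made-up data, the analysis described above amounts to little more than a group-by. The record format, module codes and disability categories here are all hypothetical:

```python
# Hypothetical sketch of the completion-rate comparison, on invented data.
# Record shape: (module, declared_disability_or_None, completed).

records = [
    ("M101", None,       True),
    ("M101", None,       True),
    ("M101", None,       False),
    ("M101", "dyslexia", True),
    ("M101", "blind",    False),
    ("M101", "blind",    False),
]

def completion_rates(records, module):
    """Completion rate per declared-disability group for one module."""
    by_group = {}
    for mod, group, completed in records:
        if mod != module:
            continue
        key = group or "no declared disability"
        done, total = by_group.get(key, (0, 0))
        by_group[key] = (done + (1 if completed else 0), total + 1)
    return {key: done / total for key, (done, total) in by_group.items()}

print(completion_rates(records, "M101"))
# A pattern like blind students completing at 0.0 against roughly 0.67 for
# students with no declared disability would flag M101 for closer scrutiny.
```

The interesting (and hard) parts are not this arithmetic but the data protection issues already noted, and having group sizes large enough for the differences to be meaningful.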

Drop-out rates are a challenge for any university: funding is often linked to them, and even where it is not, they are a key measure of the university’s success in its teaching and learning. Disabled students have traditionally had higher drop-out rates than students who have not declared a disability, so reducing drop-out rates among disabled students is a highly desirable goal. In the above example it would be possible, from the Learner Analytics, to identify which Modules are apparently presenting significant barriers to students with disabilities (though there could be other explanations).

Identifying the Module only gets us so far. A Module may be made up of hundreds of assets, and the barriers to learning could be diverse: at the teaching and learning level or the technical level, or they could be population selection effects, etc. However, it seems to me reasonable to want to undertake an accessibility audit of the assets of such a module, and to be able to do so in an easy, automated way, at least for a first pass, seems highly desirable. This is where there is a possible role for accessibility metrics. An accessibility metric, based on an agreed standard like WCAG 2.0 AA, could be assessed for all assets at their production and could travel with them in their metadata or be stored in a database. This could be part of the “passport” approach. Even if this were not the case, once a set of assets to be investigated has been identified as suggested, automated testing of just those assets could be undertaken. If the metrics indicated that core elements of the course presented major access challenges for the students who were dropping out, then an intervention point has been identified and some information about its nature collected. Thus data for possible future Learner Analytics is generated. Ideally this accessibility perspective on drop-out could be checked against other data the university collects on reasons for drop-out, possibly supplemented with interviews of a sample of the students concerned.
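The audit flow sketched above might look something like the following. The asset identifiers, score scale and threshold are all illustrative assumptions; in practice the scores would come from an automated evaluation tool at asset production time:

```python
# Sketch of the suggested flow: a metric recorded per asset at production
# time, then queried for just the assets of a module flagged by analytics.
# Identifiers, scores and the threshold are illustrative.

ASSET_METRICS = {        # asset id -> metric recorded when the asset was made
    "m101/week1/video": 0.40,
    "m101/week1/notes": 0.95,
    "m101/week2/quiz":  0.55,
    "m202/week1/notes": 0.90,
}

MODULE_ASSETS = {
    "M101": ["m101/week1/video", "m101/week1/notes", "m101/week2/quiz"],
}

def audit_module(module, threshold=0.6):
    """Return the flagged module's assets scoring below the threshold,
    worst first -- candidates for manual review and remediation."""
    failing = [(ASSET_METRICS[a], a) for a in MODULE_ASSETS[module]
               if ASSET_METRICS[a] < threshold]
    return [a for _, a in sorted(failing)]

print(audit_module("M101"))   # ['m101/week1/video', 'm101/week2/quiz']
```

Because the scores were stored when each asset was produced, the first-pass audit of a flagged module is a cheap lookup rather than a fresh crawl, which is what makes it feasible to run routinely alongside the analytics.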

It must be stated that we have very little understanding as yet of the experience of OU students (and students in general) when studying online. Another internal OU project will be looking at that to some degree in the New Year. So, for example, we have no sense of the balance between the possible reasons for drop-out among disabled students, and therefore of the correlation between access issues in Module assets and drop-out, nor of how this issue compares in significance with others such as health issues, time demands, family issues, etc. However, we can say that as more of the university’s teaching and learning goes online, accessibility will become increasingly important to meeting the learning goals of our disabled students, and managing it efficiently will be vital for the university. The approach outlined here addresses, in part, both of those drivers.

References (not linked to above)

Dujmovic, J. J. (1996). A method for evaluation and selection of complex hardware and software systems. International Computer Measurement Group Conference, 368-378.

Parmanto, B., & Zeng, X. M. (2005). Metric for web accessibility evaluation. Journal of the American Society for Information Science and Technology, 56(13), 1394-1404.