What does accessibility supported mean?

With the recent news that Microsoft Edge now has 100% accessibility support for HTML5, this post looks at what “accessibility supported” means, and where it fits into the bigger accessibility picture.

Accessibility comes in many forms and all of them are important. For the purposes of this post however, the term “accessibility” is used to mean the ability of an assistive technology like a speech recognition tool or screen magnifier to access content in the browser.

For a feature of HTML (or any other web technology) to be considered accessible, three things have to happen:

1. The browser must support the feature

This means that the browser recognises the feature and provides the expected visual rendering and/or behaviour. When the W3C releases a new version of HTML, it makes sure that each element, attribute and API is supported in at least two browsers. This gives developers a reasonable degree of confidence that the features of W3C HTML will work in the wild.
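
For example, a quick way to see whether a given browser recognises a feature is to test for it directly. This is a minimal feature-detection sketch (the features chosen here are illustrative examples only):

```typescript
// Minimal feature-detection sketch: does this browser recognise the feature at all?

// A recognised <details> element exposes its "open" IDL attribute.
const detailsSupported = "open" in document.createElement("details");

// An unrecognised input type silently falls back to type="text".
const dateInput = document.createElement("input");
dateInput.setAttribute("type", "date");
const dateInputSupported = dateInput.type === "date";

console.log({ detailsSupported, dateInputSupported });
```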

2. The browser must expose the feature to the platform accessibility API

This is what is meant by “browser accessibility support”. In addition to supporting a feature as described above, the browser must also expose information about the feature’s role, name, properties and states to the platform accessibility API.
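
As a rough illustration of the information involved (the exact property names differ between platform APIs such as UIA, IAccessible2, AT-SPI and the macOS AX API), consider a simple button:

```typescript
// Sketch of the information a browser exposes for a simple control.
// The DOM node is what the author writes; the role/name/state below is what
// the browser reports to the platform accessibility API for that node.

const saveButton = document.createElement("button");
saveButton.textContent = "Save draft";
saveButton.disabled = true;
document.body.appendChild(saveButton);

// Exposed via the accessibility tree (approximate, API-neutral terms):
//   role:   button
//   name:   "Save draft"   (computed from the element's contents)
//   state:  disabled / unavailable
// An assistive technology never reads the DOM directly; it asks the platform
// accessibility API for this role, name and state information instead.
```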

The level of HTML5 accessibility support in popular browsers is tracked on html5accessibility.com. These are publicly available tests and results that are updated as new browser versions are released.

3. The assistive technology must obtain and use information about the feature using the platform accessibility API

When a feature is accessibility supported by the browser, the assistive technology must use the information exposed to the platform accessibility API. This information is used by assistive technologies in different ways – by a screen reader to tell someone what kind of feature they are using, or by a speech recognition tool to let someone target a particular object for example.
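
To make that concrete, here is a deliberately simplified, hypothetical sketch of the assistive technology side – real screen readers and speech recognition tools consume platform APIs such as UIA or IAccessible2 rather than a structure like this:

```typescript
// Hypothetical, simplified model of what an AT does with the exposed
// information: a screen reader turns role/name/state into an announcement,
// and a speech recognition tool uses the accessible name to find a target.

interface AccessibleNode {
  role: string;
  name: string;
  states: string[];
}

function screenReaderAnnouncement(node: AccessibleNode): string {
  const state = node.states.includes("disabled") ? ", unavailable" : "";
  return `${node.name}, ${node.role}${state}`;
}

function findTargetByVoiceCommand(
  tree: AccessibleNode[],
  spokenName: string
): AccessibleNode | undefined {
  return tree.find((n) => n.name.toLowerCase() === spokenName.toLowerCase());
}

const node: AccessibleNode = { role: "button", name: "Save draft", states: ["disabled"] };
console.log(screenReaderAnnouncement(node)); // "Save draft, button, unavailable"
```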

In other words, the browser is responsible for supporting a feature and exposing it to the platform accessibility API, and the assistive technology is responsible for utilising that information to help people access content in the browser. If either the browser or the assistive technology does not fulfil its responsibilities, then accessibility support is not complete.

In practice this means that different combinations of browsers and assistive technologies offer different levels of accessibility support. For example, Edge now has 100% accessibility support for HTML5, and Narrator (the integrated Windows screen reader) takes full advantage of this – meaning that Edge is extremely usable with the Narrator screen reader. In contrast, other screen readers have yet to take advantage of the accessibility information exposed by Edge, and so for now that browser remains largely unusable with those products.

According to html5accessibility.com, Chrome, Firefox and Safari all expose less information about HTML5 than Edge. However, most screen readers make good use of that information, so all three browsers are usable with screen readers on the relevant platform.

The goal is for all browsers to hit the 100% benchmark set by Edge, and for all assistive technologies to make full use of that information. When complete accessibility support becomes a given, people are then free to choose their browser and/or assistive technology based on features and capability instead.

Categories: Development

Comments

Steve Lee says:

Given the platform a11y APIs are fairly standard, one could expect that ATs should give at least as good an experience on Edge as on Firefox. What are the reasons this is not so? I can think of a few possibilities.

* There are multiple APIs on Windows, and the ATs support the old MSAA, but Edge drives the newer UIA.
* There are gaps in the API specs. For example, I once found that much behaviour in Firefox chrome's use of AT-SPI was undefined and unpredictable, such as when a pop-up appeared.
* ATs are making assumptions based on minimal use of the APIs. Historically, the only way they could function was with heuristics, aka guessing.

Or is there something else at play?

Codepo8 says:

This argumentation is kind of backwards. ECMAScript 6 is ratified, and so is HTML5. That Netscape 3 doesn't support either doesn't make that any less of a fact. Windows 10 doesn't give exclusive access to Narrator, but exposes the APIs to everyone. So, non-support in other screen readers does mean that they need to get with the times and upgrade. It is a very common and IMHO dangerous assumption that accessibility means catering to outdated environments. People with disabilities also need security, and old software is very likely to be a problem in that space.

Personally, I don't argue that the 100% isn't debatable (or whether comparisons like those make sense), but non-support by third-party software that could get the same access is an odd argument to use to question it. Users of IE6 don't have much enjoyment on the web, and neither do users of outdated screen readers. To me, the operating system should provide access with a built-in screen reader. That also makes it impossible for developers to claim they can't test accessibility because screen readers are "so expensive".

Steve Lee says:

@Codepo8 I agree with the built-in AT point; that's why iOS was so appealing once they provided a range of built-in ATs (not just a screen reader). The old argument that third parties should provide ATs no longer applies, due to demographics and situational disabilities. It's thus great that Narrator is now no longer the toy it has been for a while.

Steve Faulkner says:

There are multiple APIs on Windows, and the ATs support the old MSAA, but Edge drives the newer UIA.

This is pretty much the reason, I believe.

Jason Kiss says:

Hi Léonie, I notice that you don’t mention the WCAG definition of accessibility supported [1], which is integral to assessing WCAG conformance. I think you implicitly address it in your point #3, but the WCAG definition explicitly includes an additional criterion for establishing accessibility support, namely the availability to users of accessibility supported user agents. As I understand it, this additional criterion is critical for determining accessibility support from a WCAG perspective for a specific technology or feature, depending on the users’ environment.

For example, and limiting ourselves to the Windows environment, let’s say that an HTML feature X does not work accessibly with JAWS in any browser, but is accessibility supported by NVDA in Firefox. In a closed environment that restricts users to JAWS with IE, that HTML feature X would be considered not accessibility supported. But in an open environment with the general public as audience, the same HTML feature could (would?) be considered accessibility supported because it works with NVDA in Firefox, both of which are accessibility supported, do not cost a person with a disability any more than a person without a disability, and are as easy to find and obtain for a person with a disability as they are for a person without disabilities.

Add multiple platforms, including mobile, browsers, and ATs to the mix, as you find in the general public user space, and accessibility support gets a little trickier to establish, at least from a WCAG perspective.

[1] https://www.w3.org/TR/WCAG20/#accessibility-supporteddef

Patrick H. Lauke says:

Jason, to your very valid point I would add, however, that there’s also an onus on whoever is providing the environment (e.g. an employer, university, etc) to make sure that the environment is suitable for the users (employees, students, staff). WCAG can’t hope to address all situations where an environment has been unnecessarily limited, nor can it address situations in which a user may be willingly using a device/OS/browser/AT which lacks the necessary features and hooks available in the mainstream (the classic “but it doesn’t work in Lynx on my OS/2 machine” argument). At that point, we’re perhaps not talking about a failure on the part of the web content author, but rather a failure on the part of the user or the user’s IT provider?

Jason Kiss says:

Patrick, really good points. What is or isn’t accessibility supported will depend on context, and the responsibility for that context will vary. Users and their IT providers, along with browser vendors, AT vendors, and web authors all have a role in the contract to deliver/consume accessible web content. I absolutely agree that where some technology or feature is not “accessibility supported”, the cause might very well be the user herself or the technology restrictions established by her environment’s provider.

Stomme poes says:

I read this comment by Serotek https://www.serotek.com/blog_seroteks_position_on_microsoft_edge and the resulting discussion (starting here https://twitter.com/megarush1024/status/763813301990133760 but unfortunately Twitter is terrible at threading) between one of the NVDA devs and some other developers about what AT should and should not do… and a lot of that debate was centered on whether the AT should be the one to make up for author failures, or whether the browser or the API should do that.

In a Postel's Law web universe, someone is probably going to need to do things that, in a perfect world, shouldn't need to be done. The browser, the AT, and the API can all be perfect. The websites will never be perfect. That does affect real people, so should that also be some part of the equation?