My colleague Adrian Roselli and I recently joined the W3C Web of Things (WoT) Interest Group, so I thought I should provide a summary of what we mean when we talk about the “web of things” and the implications, as well as opportunities, for accessibility.
The concept of the Internet of Things (IoT) has been the subject of discussion in industry, academia, and the lay press for a number of years. IoT refers to the inter-networking of unique “things” (which can be real/physical or digital/virtual) within the existing internet infrastructure. The aim is to allow the sharing of data between entities without necessarily requiring human-to-human or human-to-computer intervention. The WoT expands this concept by concentrating not only on how things are connected, but also on how existing web standards, specifications, and platform-independent application programming interfaces (APIs) can be applied to ensure interoperability and shared semantics across different platforms. Consequently, rather than spending time building a proprietary networking solution, developers can concentrate their efforts on the actual “things”, increasing the likelihood of bringing them to market quickly.
As the web expands beyond the browser and into everyday objects and environments, a host of new challenges and opportunities are introduced. Many related issues, such as technological, regulatory, security, and privacy concerns, have been discussed at length and are still evolving. We see our role as examining the implications of the WoT for accessibility, and by extension for accessibility practitioners and researchers.
We believe the WoT opens up many benefits for accessibility. For example, consider a set of household appliances. Operating such appliances may be difficult or even impossible for a visually impaired person, or a person with limited mobility or another physical impairment. However, let’s assume that each appliance is connected via Wi-Fi, shares a platform-independent API, and is “paired” with a single host application on a smartphone or tablet. The person, who may already be familiar and comfortable with their own device, can then operate each appliance through this application, using the device’s built-in assistive technologies where necessary.
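To make the pairing scenario concrete, here is a sketch of the kind of machine-readable description the WoT envisions, loosely based on the W3C Thing Description format. All names, values, and URLs here are illustrative, not taken from any real appliance:

```json
{
  "@context": "https://www.w3.org/2019/wot/td/v1",
  "title": "Kitchen Oven",
  "securityDefinitions": { "basic_sc": { "scheme": "basic" } },
  "security": "basic_sc",
  "properties": {
    "temperature": {
      "title": "Oven temperature",
      "type": "number",
      "unit": "degreeCelsius",
      "readOnly": true,
      "forms": [{ "href": "https://oven.example/properties/temperature" }]
    }
  },
  "actions": {
    "preheat": {
      "title": "Preheat oven",
      "input": { "type": "number", "minimum": 50, "maximum": 250 },
      "forms": [{ "href": "https://oven.example/actions/preheat" }]
    }
  }
}
```

Because a description like this is plain JSON served over standard web protocols, a host application on any platform could fetch it, render the properties and actions as native controls, and let the operating system’s built-in assistive technologies take over from there.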
The WoT is not limited to physical devices, or even to connected physical devices. For example, a WoT-enabled device could assist with wayfinding and navigation by warning a person who is blind or has a visual impairment of nearby hazards, or by helping them find their destination. A WoT-enabled device may also enable this same person to take part in an activity or event that would otherwise be unavailable to them; for example, playing a sport such as soccer with the aid of multimodal information about where the goal is and which opposition players or teammates are nearby. A person with a hearing impairment watching a movie at the cinema, or at home with the family, may be able to access the movie’s captions on a suitable device without the captions needing to be presented on the screen itself. In short, the WoT can be used to promote independence where a person has previously had to depend on another human being or on an expensive proprietary device.
There are, of course, many challenges to overcome before such solutions can be realized. From a technical perspective, the metadata describing a thing’s data and interaction models must be exposed in an accessible manner, taking into account the many different ways a person may interact with the device. Can the metadata be interpreted correctly, and subsequently conveyed, by assistive technologies? Are the APIs flexible enough to accommodate all the different methods of interaction, and can they cope with new assistive technologies as they come to market?
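As one illustration of the first question, human-readable fields in a thing’s metadata are exactly what an assistive technology would need to convey. In this hypothetical fragment (the property name and URL are invented), the `title` and `description` values could be announced by a screen reader, while the typed schema tells the host application what kind of control to render:

```json
{
  "properties": {
    "doorState": {
      "title": "Door state",
      "description": "Whether the appliance door is open or closed.",
      "type": "string",
      "enum": ["open", "closed"],
      "forms": [{ "href": "https://appliance.example/properties/doorState" }]
    }
  }
}
```

Whether such fields are consistently provided by manufacturers, and faithfully surfaced by the platforms consuming them, is precisely the kind of gap accessibility practitioners will need to watch.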
Despite these challenges, we believe the WoT offers significant potential for disabled people, and we are excited by the opportunities.