Discovering Accessible Learning Resources with Benetech Labs—Part 2

By Benetech

We are grateful to the Bill & Melinda Gates Foundation for their support of our work on the discoverability of accessible educational resources, and for their recognition of the importance of accessibility in furthering inclusive education.

The digital revolution and ongoing advances in technology have made it possible to get more content, in more ways, to more people. The increasing availability of accessible digital content in alternative formats that people with print disabilities can use is a welcome trend, but readers still face a challenge: finding those accessible digital resources on the internet. Why can’t you easily find, say, an accessible version of your favorite book or YouTube video via a simple Google, Yahoo, Bing, or Yandex search?

At Benetech, we believe that as accessible digital content and applications increasingly become more available, they must also be easily discoverable. That’s exactly what we set out to do through the Accessibility Metadata Project, one of our Benetech Labs projects.

As I explained in the first installment of this two-part blog series, the reason accessible digital content and applications have been difficult to discover is the lack of information about their accessibility features, such as image descriptions, tactile images, video captioning, support for screen readers, and the like. If accessible digital content and applications are to be easily discoverable, they must first be tagged with this information, known as accessibility metadata.
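To make this concrete, accessibility metadata can be expressed as machine-readable properties attached to a resource’s description. The Python sketch below builds a hypothetical JSON-LD record for a captioned video; the property names `accessibilityFeature` and `accessibilityHazard` are real schema.org accessibility properties, while the title and URL are invented for illustration.

```python
import json

# A hypothetical JSON-LD description of an online video, using the
# schema.org accessibility properties accessibilityFeature and
# accessibilityHazard. The title and URL are invented examples.
resource = {
    "@context": "https://schema.org",
    "@type": "VideoObject",
    "name": "Introduction to Fractions",            # invented example title
    "url": "https://example.org/videos/fractions",  # placeholder URL
    # Accessibility features this resource provides:
    "accessibilityFeature": ["captions", "transcript"],
    # Declared hazards ("noFlashingHazard" states the absence of one):
    "accessibilityHazard": ["noFlashingHazard"],
}

print(json.dumps(resource, indent=2))
```

A search engine that indexes these properties can then let users refine a query to, say, only videos that declare `captions`.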

Our goals for the Accessibility Metadata Project were therefore to develop standards for accessibility metadata and to submit these standards to Schema.org, the organization that maintains a list of agreed-upon tags that all search engines can use in common so that users of those search engines can refine their searches to find exactly what they are looking for. We also undertook to develop a reference implementation showing how the accessibility standards could be implemented. We have recently completed the project, and I’m delighted to say that we accomplished all these goals and more!

As I shared in Part 1 of this blog series, Schema.org accepted our proposed set of accessibility metadata tags. Now that the standard vocabulary for tagging online educational resources includes accessibility metadata, these properties are being picked up by the likes of the Internet Archive’s Open Library initiative, the HathiTrust Digital Library, and the Learning Registry, a leading platform for aggregating metadata about online learning resources.

As for implementation, we added accessibility metadata tags both to Bookshare and to a payload of metadata submitted to the Learning Registry. Bookshare now automatically submits accessibility metadata for Bookshare titles to the registry. Because of our reference implementation, Bookshare’s accessible content is more easily discoverable via online search, and others can better see how to include accessibility metadata in their own content.

Beyond our grant commitment, we developed additional reference implementations and tools:

  • Searching for videos with closed captions: Before this project, it was not possible to search for captioned videos beyond the YouTube domain. By collaborating with the creator of the “WP YouTube Lyte” plug-in, which allows WordPress site administrators to automatically add accessibility properties to videos that have closed captions, we contributed code that enables search based on accessibility. Now people who need captioned videos can easily find them on any WordPress site running the plug-in, using tools such as Google’s Custom Search Engine.
  • Described video tagging: The Smith-Kettlewell Eye Research Institute has developed a web-based video description product called YouDescribe that enables anyone to describe YouTube videos. To help people with visual impairments easily discover videos described with the YouDescribe platform, Smith-Kettlewell is automatically tagging its videos with accessibility properties. Now search engines such as Google’s Custom Search Engine can index those properties.
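To illustrate the kind of refinement these tags enable, here is a toy Python sketch (not actual search-engine code): given a set of resource records carrying schema.org-style `accessibilityFeature` values, it filters to those that declare captions, which is conceptually what a custom search engine does when it indexes these properties. The record titles and feature lists are invented examples.

```python
# Toy records mimicking schema.org-style descriptions; titles and
# feature lists are invented for illustration.
records = [
    {"name": "Algebra Basics", "accessibilityFeature": ["captions", "transcript"]},
    {"name": "Chemistry Lab Demo", "accessibilityFeature": []},
    {"name": "History of Flight", "accessibilityFeature": ["audioDescription"]},
]

def with_feature(records, feature):
    """Return only the records that declare the given accessibility feature."""
    return [r for r in records if feature in r.get("accessibilityFeature", [])]

captioned = with_feature(records, "captions")
print([r["name"] for r in captioned])  # only resources declaring captions
```

The same filter applied with `"audioDescription"` would surface described videos such as those tagged by YouDescribe.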

You can read more about these implementation examples on the Accessibility Metadata Project website.

The adoption of our project’s proposed set of accessibility metadata tags and the implementation successes I just listed mark a tremendous milestone in the collaborative journey towards our vision of a “Born Accessible” world: a world in which all content born digital is made accessible—and discoverable—from the outset. Our implementations demonstrate that broad adoption of accessibility metadata is possible.

What’s next? We must now encourage content management systems, publishers, and sites like Wikimedia to start using accessibility metadata on their sites, so that one day everyone will be able to find the great accessible content that is out there.

It’s just a matter of time before everyone will be able to discover accessible resources on the web. Let’s make that time as short as possible!