Game accessibility and the Web

Computer games are a big deal: they are part of our culture, they can provide and promote social inclusion, they can educate (from encouraging and supporting player creativity through making modifications and new game levels, to being used as teaching aids) and they are a hugely popular means of recreation.

Just as with other walks of life, accessibility in computer games — and, importantly, their surrounding communities of online play, modifications and level design — is something for which we should strive, and many developers are doing just that. Historically, games have been regarded as a very hard accessibility problem to solve. It’s true there are some compelling challenges, but huge strides have been made, with the potential for game accessibility to become the norm.

In the UK, the games industry is significantly bigger than the film and music industries combined. In the US, the games industry is as big as Hollywood, but it doesn’t necessarily take huge resources to provide accessibility, and a significant number of indie developers and studios have been working on various accessibility features, from remappable controls, to variable font size and subtitles, with great success.

What are accessible games?

Some of the earliest computer games were fairly accessible by nature: interactive fiction takes advantage of the greatest rendering hardware known — the human brain — to create immersive and compelling worlds in which the player can explore, vanquish monsters, maybe nick a bit of treasure, and generally save the day. However, it’s important to consider that they do require good reading skills, and that their players be able to type!

Text-based games reached their peak of popularity from the 1970s to the 1980s, though online variants, such as multi-user dungeons (MUDs), remained popular into the 1990s. By this time, however, graphics pervaded every genre, including adventures. Most games had graphical user interfaces (GUIs), and it was increasingly common for them to require precise timing and deft cognitive and motor skills from their players. Graphical games had been popular since the late 1970s, but these features and expectations were now the only mainstream option. This created barriers to entry for would-be gamers, whether due to a disability or a situational impairment. Games became challenging, even in ways their designers didn’t intend.

But, as game accessibility consultant Ian Hamilton points out, games have to be challenging in order to be rewarding experiences for their players. So, does this mean accessible games are a pipe dream? Of course not! There are many reasons why people may be unable to play games, but there are also many ways to present and interact with game worlds.

From the early days of computer gaming, there have been efforts to make games accessible, including specialist controller hardware such as sip/puff devices (some made by mainstream games companies) and assistive hardware and software features such as speech synthesis to assist blind people. You can find some excellent examples documented on the Accessible Gaming History Exhibit page at OneSwitch (specifically, check out the Accessible Gaming Displays PDF).

Some of the early specifically-made accessible games fall under the broader category of “Audiogames” (as opposed to video games) — these are games specifically designed with sound as the main means of expressing their world to the player. Some very imaginative, immersive and well-respected games were created by small studios, even one-person companies, many specifically for gamers who are blind (e.g. Monkey Business, Grizzly Gulch, and Chillingham). Others, such as Papa Sangre and The Nightjar, were designed for all, with the high-tech audio engines and lack of video adding to the atmosphere of the games, and achieving significant mainstream attention at the time. These are great games, and fill a vital role. In parallel, however, the goal of accessibility in the majority of, if not all, games is an alluring one.

In recent years we’ve seen the benefits of much hard awareness-raising work by organisations such as the IGDA Game Accessibility Special Interest Group and many others, as well as corresponding effort from developers to make their games more accessible.

An increasing number of games, from indie titles to triple-A blockbusters, are being released with at least some accessibility features, such as making important game areas easier to distinguish (as in FTL: Faster Than Light), a variable font size for their user interfaces, or controller button remapping (as in Uncharted 4).

But accessibility features don’t only help people who experience a permanent disability; some accessibility features are more commonly used than you might think. In Assassin’s Creed Origins, 60% of players turned subtitles on, so in the following game Ubisoft enabled them by default, and 95% of Assassin’s Creed Odyssey players left subtitles on.

So-called situational disabilities may be particularly prevalent for mobile/casual gaming, where players may be in a bright, busy or noisy environment, or not be able to use both hands to interact with the game.

Content accessibility

A great deal can be achieved with careful attention to content design. Games, more than web sites or apps, are all about their content. Simple but fundamental techniques, like ensuring that information conveyed using colour is also conveyed by other means, such as shape, can have a profound impact for many people, even those who don’t regard themselves as having a disability. (Some great examples of using more than colour can be found on the Game Accessibility Guidelines site.)

For example, the following two symbols differ in both shape and colour, thus providing two ways to tell them apart. In a puzzle game, this can empower and include significantly more players than if colour alone had been used as the differentiator.

The key thing here is that this is accessible without the user having to turn on an accessibility setting — thus promoting inclusion out-of-the-box.

Spatial audio can provide surprisingly rich information to the player about the environment they’re in (a giant echoing chasm, or tight quarters on a spaceship), and where they should explore. Attentive audio design really can afford accessibility: SightlessKombat is a Killer Instinct player who rose to the top tier of players, despite not being able to see. However, access to the games’ user interfaces can still pose problems — most games are unable to interface with assistive technologies such as screen-readers. In many cases, blind gamers have to learn the buttons to press in order to navigate through the user interface to get into the game.

Of course, using spatial audio is an enabler for some people, but others (whether they be in a noisy environment, or perhaps have trouble hearing the game) may struggle to get the most out of it. Visual cues can also be used to convey information that is also provided through sound. Examples of this can be found in Half-Life, which uses visual effects to indicate the direction from which the player is taking damage, and “Everybody’s gone to the rapture”, which can visually highlight objects emitting relevant audio.

Everybody’s gone to the rapture can use visual patterns (in this case concentric circles) to highlight objects in the scene
(Example from the Game Accessibility Guidelines)

There are several sets of guidelines to which game developers can refer for help and advice on content design decisions that can afford accessibility to various different groups of people — check out the references section at the end for more info.

What’s missing?

Sometimes it’s not possible to provide content to cater for all situations. (In fact this is partly why closed captions were introduced to the Source engine: to allow the games to be marketed in areas where the developers didn’t have the resources to provide full localised character voice recordings, as recounted in the GamesCC Interview with Marc Laidlaw from Valve.)

If we wanted to support every possible choice users might make for reading the game’s user interface and using their preferred input devices, we’d have to provide the following…

  • Audio for every UI element…
  • …in several languages
  • …at several speeds
  • Make sure it’s navigable with a keyboard…
  • …and a mouse
  • …and works with a controller
  • …and with a single switch

There’s also the fact that sometimes, content comes not from the game’s developers directly, but from other players. This could include communications from other players (or maybe even procedurally-generated content from the game, for which pre-recording isn’t possible). It’s vital that people with disabilities are able to take part in such communication, and it is also now a legal requirement in the U.S. that communications functions in games (including the user interfaces necessary to reach them) are accessible (Ian Hamilton’s ’blog has more info on the 21st Century Communications and Video Accessibility Act (CVAA)).

Whilst content is essential to the game experience, and to ensuring it’s compelling and enjoyable, it can only take us so far. The player still needs to be able to navigate that content, particularly the game’s user interface, and understand and interact with it in a way that works for them. With websites and apps, users have access to various tools that can provide them with such access…

Assistive technologies and user interface accessibility

Screen-readers, screen-magnifiers, alternative modes of visual and auditory presentation and the ability to use different input devices are common in the mainstream world of desktop and mobile websites and apps. These assistive technologies are able to interpret information coded into websites and apps and expose it to their users, e.g. by speaking that content, expressing it as Braille, or presenting it in larger text.

When native apps use standard widgets/controls provided by the Operating System (OS), they’re accessible because those controls automatically come with the required accessibility information. The screen-reader (for example) sits between the OS and apps, and can query for this information.

Drawn diagram: the accessibility layer sits between the app and the operating system, and assistive technologies query it for information about the app.

But games, in order to present a distinct and self-contained identity, and to promote immersion and entertainment, almost always use custom user interface elements that are entirely graphical in nature (even words on the screen are ultimately rendered as pixels, and the underlying text is not exposed to the Operating System).

Some consoles and gaming platforms are beginning to provide accessibility APIs such as text-to-speech (e.g. Xbox Narrator), which is excellent, though it’s still early days for such features, and they’re not available on all platforms.

The Unreal and Unity engines are those on which the majority of new games are based, and Unreal has recently started providing preliminary screen-reader support directly, with support from Unity expected to follow. This is excellent news for the industry, and is the path most games will likely take towards improved accessibility in future. In parallel, I have been wondering if all the existing infrastructure we have in browsers might help us bridge the user interface gap mentioned above, and support accessibility when games are delivered via the web, whichever engine they use…

The web and user interface accessibility

Assistive technologies work in a similar way with web sites and web apps as with native apps. The browser provides an accessibility tree that exposes various properties of the elements in the page’s Document Object Model (DOM) tree (such as the types of controls they represent, or their text content), mirroring its structure. The accessibility tree is then picked up by assistive technologies.

Drawn diagram: the accessibility tree sits between the web site/app and the browser, and is picked up by assistive technologies.

The best (simplest) way to provide accessibility on the web is to use the standard HTML elements for the job. Using the standard HTML elements automatically brings the needed accessibility information (the purpose of the element; its content; its label, state and value, if it’s a form control). The “native” HTML elements also provide accessible keyboard handling by default, such as Space to activate a button and Enter to follow a link.

The following code shows a standard button, a link, and a “landmark region” demarcating the primary content on the page (which makes it easy for screen-reader users to find).

<button>OK</button>

<a href="...">Sazerac recipe</a>

<main>
    This is important.
</main>

However, if we’d used elements with no inherent meaning, there would be no accessibility information to convey. This sometimes happens when web developers make custom controls instead of using the native elements. In that case, we could add the semantics using ARIA attributes. This fills in the gaps in the accessibility tree for assistive technologies. (Though that’s all it does, so keyboard handling code would need to be added manually to provide what the native elements above give us for free.)

The following code is semantically equivalent to the native HTML elements given above.

<div role="button" tabindex="0">OK</div>

<div role="link" data-href="...">Sazerac recipe</div>

<div role="main">Important</div>
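
To give a flavour of what that manual keyboard handling involves, here’s a minimal sketch in JavaScript; the activate() function is a made-up placeholder, and a full implementation would need to cover more (for example, the link’s Enter handling and visible focus styling).

// Minimal sketch of the keyboard handling a native <button> provides
// for free. The activate() function is an illustrative placeholder.
const fakeButton = document.querySelector('[role="button"]');

fakeButton.addEventListener('keydown', (event) => {
  // Native buttons are activated by Space and Enter.
  if (event.key === ' ' || event.key === 'Enter') {
    event.preventDefault();   // stop Space from scrolling the page
    activate(fakeButton);
  }
});

// Pointer users still expect clicks to work, too.
fakeButton.addEventListener('click', () => activate(fakeButton));

function activate(element) {
  console.log(`${element.textContent.trim()} activated`);
}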

Assistive technologies (e.g. screen-readers) can pick up on these cues, but how is this relevant to games?

Web game user interface accessibility proof-of-concept

Somewhere in the code behind the game, the intent of various user interface controls, and the text displayed, is present in a form that could be made accessible. The challenge is how to bridge from this information to players’ assistive technologies.

Many games these days, especially educational games, are developed for the web, or web-like platforms (Wikipedia article on Web Games). In addition, the technology exists to compile native code into a format that can be run efficiently in the browser: WebAssembly. This technology can be used to achieve near-native speeds — in fact, one of the early prototypes was running the then-latest Unreal game engine in the browser! (Unreal Engine 3 in-browser; Unreal Engine 4 in-browser).

Instead of compiling a game to run natively on the computer’s hardware…

Drawn diagram of .C code passing through a compiler to a chip
Source code, e.g. in C, is compiled directly to a form that will run on the computer’s hardware

Instead, the source code can be compiled to the WebAssembly binary format (a “.wasm” file) and run in a browser alongside existing JavaScript code…

Drawn diagram of code passing through multiple steps to a browser with javascript
Source code, e.g. in C or Rust, is compiled to WebAssembly and then run in a browser alongside the general accessibility library (JavaScript code that creates and manages the proxy UI elements)
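
As a rough, self-contained sketch of how the pieces could connect (the file name “game.wasm” and the import names are made up, and the string handling that real toolchains such as Emscripten generate is glossed over), the page’s JavaScript instantiates the compiled module and supplies the functions the game’s accessibility code will call:

// Rough sketch: loading a game compiled to WebAssembly and giving it
// JavaScript functions to call. The names below are illustrative only.
const imports = {
  env: {
    // Called by the game's accessibility code as UI items are shown
    // and highlighted; a fuller sketch of what these might do appears
    // after the list below.
    show_menu_item: (itemIndex) => console.log('show item', itemIndex),
    focus_menu_item: (itemIndex) => console.log('focus item', itemIndex)
  }
};

WebAssembly.instantiateStreaming(fetch('game.wasm'), imports)
  .then(({ instance }) => instance.exports.main());  // start the game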

The browser gives us a ready-made opportunity to expose the accessibility information. It works as follows…

  • A library of JavaScript code sits in the browser and provides a simple API to create HTML elements that match the visual-but-not-semantic user interface controls in the game (a rough sketch of such a layer follows this list).
  • We add a small amount of accessibility code to the existing native game’s source code, which is included only when compiling to WebAssembly. This provides the information to the JavaScript in the browser to create the proxy UI elements.
  • The JavaScript accessibility layer code moves focus around the proxy objects, in sync with the focus management that the game is doing visually in response to the player’s inputs. This causes the player’s screen-reader to announce the proxy widgets at the same time the in-game widgets are displayed on-screen.
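
To make this more concrete, here’s a highly simplified sketch of what the JavaScript side of such a layer might look like. It is illustrative only: the names (accessibilityLayer, addProxyButton, focusProxy) are made up, not the actual library’s API, and a real implementation would visually hide the container (without removing it from the accessibility tree) and support links, form controls and grouping elements as well as buttons.

// Highly simplified sketch of the in-browser accessibility layer.
// All names here are illustrative; the real library's API may differ.
const accessibilityLayer = (() => {
  // Container for the proxy UI elements (visually hidden in practice).
  const container = document.createElement('div');
  document.body.appendChild(container);

  return {
    // Called (via the game's WebAssembly/JavaScript glue) when the game
    // displays a menu item, so that an equivalent HTML element exists
    // for assistive technologies to find.
    addProxyButton(id, label) {
      const button = document.createElement('button');
      button.id = id;
      button.textContent = label;
      container.appendChild(button);
    },

    // Called when the game visually highlights an item, so that the
    // player's screen-reader announces the corresponding proxy element.
    focusProxy(id) {
      const proxy = document.getElementById(id);
      if (proxy) {
        proxy.focus();
      }
    }
  };
})();

// Example: register the main menu when it appears...
accessibilityLayer.addProxyButton('new-game', 'New Game');
accessibilityLayer.addProxyButton('options', 'Options');
// ...and keep the focused proxy in sync as the player moves around.
accessibilityLayer.focusProxy('new-game');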

Thus, it seems like the game’s user interface has been made accessible, as it is conveyed by the user’s screen-reader. Here are some screengrabs of the system in action, with the rendered game on the left and the proxy UI elements (which would normally be visually hidden) on the right…

Screenshot of a game menu next to an unstyled HTML rendering of the same menu. "New Game" is highlighted.
Example game main menu, featuring links to “New Game”, “Options” and “Help” menus, and an “Exit” button, with the first item in both the rendered game menu and the proxy UI area focused. All of the options are grouped in a fieldset element with a legend of “Main Menu”.

When the user presses the down arrow, the next menu item is highlighted in-game, and the next proxy button element is focused behind the scenes, causing the player’s screen-reader to announce the change.

A screenshot of the same game menu and unstyled HTML fields
The same image as above, with the second item, “Options”, focused

In theory, issues such as focus handling and keyboard interaction should be fairly easy to solve, even if the game isn’t accessible out-of-the-box, as the game’s UI has to be operable as-is, and usually this is supported by the keyboard (or a game controller, which could be emulated by a keyboard within the host OS). The main goal of the in-browser accessibility layer is to create the proxy objects for the UI that the user’s assistive technologies can understand.

The figures above show the use of links in the proxy UI area to represent sub-menus and a button to represent an immediate action. Input controls are required for a fully interactive UI too. The following figures demonstrate custom rendered controls that map to textboxes and sliders, with their labels appropriately associated (a rough sketch of such a proxy follows the figures).

Screenshot of another styled game menu next to the same controls in unstyled HTML
Player options screen showing textboxes for specifying the player’s name and team name
Screenshot of a third styled game menu next to the same controls in unstyled HTML
Sound options screen showing slider/range controls proxying the custom volume controls used in-game
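
As a rough illustration of this kind of proxy (the control name, label text and values are made up, not taken from the demo), a volume slider and its label could be created and kept in sync along these lines:

// Rough sketch of a proxy slider for an in-game volume control.
// The id, label text and values are illustrative only.
function addProxySlider(id, labelText, initialValue) {
  const label = document.createElement('label');
  label.htmlFor = id;               // associates the label with the control
  label.textContent = labelText;

  const slider = document.createElement('input');
  slider.type = 'range';
  slider.id = id;
  slider.min = 0;
  slider.max = 100;
  slider.value = initialValue;

  document.body.append(label, slider);
  return slider;
}

// The game reports the current value when the options screen is shown...
const musicVolume = addProxySlider('music-volume', 'Music volume', 80);
// ...and updates the proxy as the player adjusts the in-game control, so
// the screen-reader announces the change.
musicVolume.value = 85;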

Potential efficiency improvements

With this approach, real DOM elements must be created on the host web page. Whilst they can be hidden, we know they’re there. Also, the code used to create them is JavaScript, which is slower than the WebAssembly code and the browser’s internal APIs that are called to actually create and expose these proxy elements to assistive technologies.

There’s an upcoming standard called the Accessibility Object Model (AOM) that provides for much more fine-grained control over the accessibility tree exposed by the browser. In fact, it removes the need for there to be real elements in the DOM, so creating them could be bypassed. What’s more, the Accessibility Object Model APIs are implemented in the browser’s (native) code, and are thus more efficient. On top of all of this, there’s a new method for WebAssembly code to directly call into the APIs provided by the browser, bypassing the need for JavaScript completely. This could make the whole UI proxying process vastly more efficient.

There is a consequence of this, however: the code to manage the Accessibility Object Model would have to be moved into the WebAssembly sphere, meaning that it would have to be provided by the game — or, most likely, the engine/middleware being used. This is not really a problem, as it makes sense for an engine to provide this code in a real-world scenario. Effectively decreasing the distance between the game developer and the accessibility code means there’d likely be opportunities to make the authoring tools more supportive of creating accessible user interfaces. For example, a lot could potentially be automated.

Next steps

These explorations demonstrate that, for games compiled to the web, it’s possible to make use of people’s existing assistive technologies such as screen-readers to expose any accessibility information that the game might provide.

Native games should use the accessibility support forming in the main game engines, but in the cases where games are delivered via the web, or are based on different technology stacks, this approach may be of help. It certainly demonstrates just how far browsers and assistive technologies have come in terms of performance and capability in recent years. There are two areas I’m continuing to explore…

  • Adapting existing in-game GUIs and games to use this approach. By doing this, a standard and minimal “accessibility layer” could be created that could be adopted to convey UI semantics to the browser, with minimal intervention from the developer.
  • Investigating how use of the upcoming Accessibility Object Model (AOM) standard might make things more efficient, and any other possibilities it might open up.

I’ll be talking about the latest developments in this work at the CSUN Accessibility conference in March 2020 — if you’re going to be in town, pop in and say hello, and if you can’t make it, then check out The Paciello Group ’blog, where we’ll post the slides afterwards.

Acknowledgements

Thanks to The Paciello Group for supporting my W3C membership and attendance of the W3C Workshop on Web Games. Thanks also to the Active Game Accessibility research project and to the W3C Accessible Platform Architectures Working Group for contributing to the position paper we submitted at that workshop, and thanks to the workshop attendees for their interest and advice.

Reference information

Demos and further reading around the article

Community sites

Guidelines


About Matthew Atkinson

Matthew is a senior accessibility engineer at The Paciello Group, and enjoys exploring and learning about web and app accessibility through clients and colleagues. He also maintains open-source projects in areas of web and game accessibility. Through TPG, Matthew is a member of the W3C's Accessible Platform Architectures group, and thoroughly enjoys the community involvement and further opportunities for learning that this brings. His background is in academia, having worked on accessibility research projects as well as teaching and presenting at conferences. He loves helping others learn about accessibility. When not tapping on a keyboard, he is often attending gigs, or having fun attempting to make music via the magic of karaoke.