Operating System Accessibility

An operating system, here, is made up of the low-level programs, toolkits, utilities, and libraries that a user may not interact with directly, but which facilitate interaction with a desktop environment.

Prime Directive

The prime directive is to communicate with users. Users generally know what they would like, and what wouldn’t help, or would even hinder, their experience. Some users may have different ideas of what is “accessible” than other users. In cases like this, you’ll either need to choose between the two ideas, or, preferably, give the users a choice between the two. When it comes to accessibility, choice is never a bad thing. Your users will thank you for giving them something that makes their own experience enjoyable, productive, or both.

Totally Blind Users

This section concerns users who either have only light perception or cannot see anything at all. People who have limited vision will be addressed in another section.

During Install

A screen reader must be available during install. Otherwise, blind users who would have to switch to your operating system from another will never use it. All messages about installation progress must be spoken, and widgets must be labeled.

If the user must enable the screen reader during install or beforehand, the steps to do so must be simple, possible to perform entirely with the keyboard, and documented in an easily accessible location.

After Install

The out-of-box experience, which consists of the setup screens and all “getting started” steps after the system is installed, must be accessible. Creating an account, adding online accounts, and setting user information must all be accessible in order to facilitate a good user experience. If they are not, the user will likely start looking for something else.

Input and Output

A blind person sends input to the operating system through a keyboard. Input with a touchpad or touchscreen is possible, but requires that the screen reader, accessibility APIs, and touchpad or touchscreen driver communicate with one another. For now, only Apple Macs support using the screen reader with a touchpad. Windows and ChromeOS, however, support using the screen reader with a touchscreen.

Since a blind person cannot see the screen, output is provided by a screen reader, which speaks using a software or hardware speech synthesizer. Sound effects also help the blind user by giving cues that are often much faster, and more distinct, than speech. These sound effects, called audio icons or earcons, can be used to signify a change in application state, an error, an animation, or invalid input. More general sound effects, like “low battery” notifications, startup sounds, and shutdown sounds, are also very valuable.

Speech Synthesis

Speech synthesis can be used for more than just a screen reader. It can also, for example, help those who cannot speak to communicate, or help dyslexic people read. An operating system should come with at least one speech synthesis engine.
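
As a concrete illustration, here is a minimal sketch of speaking a message through Speech Dispatcher, one common free-software speech service on Linux, using its Python bindings (the speechd module). The client name “demo” and the spoken text are arbitrary placeholders.

#+begin_src python
import speechd

# Connect to the Speech Dispatcher daemon; the name identifies this client.
client = speechd.SSIPClient("demo")
client.speak("Welcome to the installer.")
client.close()
#+end_src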

The quality of the voice provided by the speech synthesis engine, the frontend, matters. A boring voice will reduce productivity or enjoyment, just as a flat, unchanging, gray font will make content seem boring and lifeless. The voice should have good pronunciation, or allow the user to correct its pronunciation.

The speech engine, the backend, should be as responsive as possible. The engine must provide the ability to be stopped and restarted at any point, so that the screen reader can move from one item, line, or character to another at the user’s request. The engine must also provide a speech queue, so that messages, such as lines of terminal output, can be read sequentially.
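
Both requirements can be sketched against Speech Dispatcher’s Python bindings, assuming the same speechd module as above: messages sent at the MESSAGE priority queue up and are read in order, and cancel() stops speech immediately.

#+begin_src python
import speechd

client = speechd.SSIPClient("queue-demo")

# At MESSAGE priority, Speech Dispatcher queues messages and reads
# them one after another instead of interrupting the current one.
client.set_priority(speechd.Priority.MESSAGE)
for line in ("make: Entering directory", "cc -c main.c", "Build finished."):
    client.speak(line)

# Stop all current and queued speech at once, e.g. when the user
# presses the interrupt key.
client.cancel()
client.close()
#+end_src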

For good flexibility, the speech engine must allow the controlling application, like a screen reader, to change its pitch, volume, and speaking rate. For even better flexibility, and to open up possibilities like spoken syntax highlighting, the speech engine should allow modification of intonation, head size, roughness, and other speech characteristics.
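
Speech Dispatcher’s client API, for example, exposes the basic three parameters as integers from -100 to 100; richer characteristics such as head size or roughness are engine-specific and beyond this sketch.

#+begin_src python
import speechd

client = speechd.SSIPClient("tuning-demo")

# Rate, pitch, and volume each take an integer from -100 to 100,
# where 0 is the engine default.
client.set_rate(40)     # noticeably faster than the default
client.set_pitch(-20)   # a slightly deeper voice
client.set_volume(80)

client.speak("Reading faster, and a little lower.")
client.close()
#+end_src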

Speech Input

It can be tempting to believe that speech input is perfect for blind people, either because it may be hard for some to find keys on the keyboard, or because of a misunderstanding that conflates speech input with speech output. Speech input (speech recognition) is a great way to quickly enter a small amount of text. Unsurprisingly, mobile phone users have grown to prefer it over typing on a glass screen. Blind people overwhelmingly use Siri and Dictation to get things done.

On a computer, however, this isn’t necessary for most. Many blind people are taught touch typing, so they know where the keys are. Speech input should still be provided for users who are just beginning to learn to type, or who prefer dictation.

Sounds and Latency

Sound support must provide at least two channels of output at 16-bit quality, with a latency of 40 milliseconds or lower. Sounds must also be able to play simultaneously, so that speech and sound effects can play at the same time. At best, surround sound and virtual surround sound, through HRTF, provide a great experience, and latency should be as low as possible, to provide a fast, snappy-feeling experience.
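
The baseline can be made concrete with a short sketch: opening a 16-bit, two-channel output stream with roughly 40 milliseconds of requested latency and playing a brief earcon through it. The sounddevice (PortAudio) module is an assumed example backend, not a requirement.

#+begin_src python
import numpy as np
import sounddevice as sd

samplerate = 44100

# Request the baseline: stereo, 16-bit samples, ~40 ms latency.
stream = sd.OutputStream(samplerate=samplerate, channels=2,
                         dtype="int16", latency=0.04)

# A 100 ms, 880 Hz tone as a stand-in earcon; synthesized speech
# could be mixed into the same buffers to play simultaneously.
t = np.linspace(0, 0.1, int(samplerate * 0.1), endpoint=False)
tone = (0.3 * 32767 * np.sin(2 * np.pi * 880 * t)).astype(np.int16)
frames = np.column_stack((tone, tone))  # duplicate mono into both channels

with stream:
    stream.write(frames)
#+end_src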

Labels, Roles, and States

Screen readers use several attributes of an item, hereafter called a widget, to describe what’s on the screen. These attributes, queried in the sketch after the list, are:

Label
The item’s name. For example, “OK”, “Name”, and “Choose one”.
Role
What an item is, or its purpose. For example, “button”, “edit field”, and “check box”.
State
Relevant, dynamic observations about the widget. For example, “checked”, “unchecked”, “pressed”, “has text” (for edit fields), “unavailable”, and “dimmed”.
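
On a Linux desktop, these three attributes can be read through AT-SPI. A minimal sketch using the pyatspi bindings, assuming an AT-SPI enabled session, walks the running applications and prints each top-level window’s label, role, and one state:

#+begin_src python
import pyatspi

desktop = pyatspi.Registry.getDesktop(0)
for app in desktop:
    if app is None:  # dead entries can appear in the desktop list
        continue
    for window in app:
        print("label:  ", window.name)
        print("role:   ", window.getRoleName())
        states = window.getState()
        print("focused:", states.contains(pyatspi.STATE_FOCUSED))
#+end_src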

Toolkits and Text

User interface toolkits must give programmers a way to attach accessible text, and other accessibility information, to their widgets. The toolkit should also expose state and role information.
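
As one example, GTK 3 exposes this through ATK: an icon-only widget can be given a spoken label and description. This sketch assumes PyGObject is installed; other toolkits offer equivalents.

#+begin_src python
import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk

# An icon-only button: visually clear, but unlabeled for a screen reader.
button = Gtk.Button.new_from_icon_name("document-save", Gtk.IconSize.BUTTON)

# Attach the label and a usage hint through the widget's ATK object.
accessible = button.get_accessible()
accessible.set_name("Save")
accessible.set_description("Save the current document")
#+end_src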

The Accessibility Stack

The accessibility of the operating system depends on what the system exposes to the screen reader, and it does this through the accessibility stack. The stack must be robust enough to allow programs to give information about custom widgets, and rich enough to allow screen readers to get information about text formatting, image descriptions (if the application provides them), or usage hints in an unfamiliar application.
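
On the AT-SPI stack, for instance, a screen reader is a client that registers for events and queries widgets as they change. A minimal sketch, again assuming the pyatspi bindings, listens for check box state changes system-wide:

#+begin_src python
import pyatspi

def on_state_changed(event):
    # event.source is the widget; detail1 is 1 when the state is
    # gained and 0 when it is lost.
    print(event.source.name, event.type, event.detail1)

pyatspi.Registry.registerEventListener(on_state_changed,
                                       "object:state-changed:checked")
pyatspi.Registry.start()  # blocks, dispatching accessibility events
#+end_src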

Resources