The update round-up: what’s new in accessibility when the updates are released?



It’s that time of year again when we all look forward to the regular updates of iOS, Android, and Windows, and wonder what changes lie ahead. What can we expect from assistive technology, though, and in particular, what improvements are the big players planning for their built-in software?

The latest updates from Apple

iOS 11 comes with many exciting features; for accessibility, the headline improvement is the one-handed keyboard, another addition to an already feature-rich OS. Other offerings include automatic image scanning, where VoiceOver (the built-in screen reader on iOS) will attempt to scan an image for text and read it to the user. Combined with the same scanning for unlabelled buttons, this makes for interesting developments. For low-vision users, a new invert colours option and additional integration with third-party apps mean better contrast across more applications.

macOS users who experience difficulty using a physical keyboard will benefit from an on-screen keyboard in the September update of macOS. The keyboard will allow users to customise it to their requirements, although, as with other updates, we will need to wait and see what the final result will be. Many of us talk to Siri, but have you ever just wanted to type a message to Siri instead? Now you can: Siri will still provide audio feedback, so just type what you want if you can’t speak to Siri. Improved PDF support for tables and forms with VoiceOver is another feature in the new macOS, one which I am sure will be much welcomed by VoiceOver users attempting to quickly access PDFs and other documentation. Similarly to iOS, VoiceOver on the Mac will describe an image using a simple keyboard command, which may make it possible to interpret your photos; I guess time will tell. Better navigation of websites which use HTML5 is also included in the update, meaning that VoiceOver will support the new standard and provide better navigation when, for example, tables are used in messages.

Apple Watch is also benefiting from a software update, including the ability to change the click speed of the button on the side of the watch. This means that users who have difficulty double-clicking, for example, can customise the click speed when they need to use Apple Pay or other such services. Apple TV will now support the use of braille displays. A braille display is a device which translates on-screen material into braille, connecting via Bluetooth or USB, allowing users to navigate and read content such as programme guides.


The latest updates from Microsoft

Improvements to Windows Narrator, the built-in screen reader on Windows devices, include a device learning mode, which announces what command is performed when using another input device such as a keyboard. Narrator users will also experience a clearer and more unified user interface (UI), as improvements across all apps and devices will make Narrator easier to learn and use. Scan mode, used to quickly navigate a screen or web page, will be set to on by default, and its setting will be remembered across multiple apps to further improve the user experience. Narrator will also include a service which attempts to recognise images that lack alt (alternative) text, using Optical Character Recognition (OCR) to identify any text in the image.

The Magnifier will follow Narrator’s focus, making it easier for users who use both Narrator and magnification simultaneously. The desktop magnifier will include smoother fonts and images, as well as additional settings and the ability to zoom in or out using the mouse. Also included for low-vision users are new colour filters, which make it easier for people who have colour blindness or light sensitivity to use a Windows device.


The latest updates from Google’s Android O

A new accessibility shortcut will be available for users running Android O. The shortcut toggles TalkBack on and off by default; however, it can be configured to control another accessibility service after set-up, such as magnification or switch access. It is performed by pressing the volume up and down buttons together on any compatible device, meaning it will be easier than ever to reach your required access option on Android O.

When using Android O with TalkBack, a separate TalkBack volume has been introduced, enabling users to change the speech output volume independently of the media volume; so if you are listening to any media, it is now possible to easily hear what TalkBack is announcing. For low-vision users, a new slider at the top of the screen offers the same control when media is playing. For devices running Android O with a fingerprint scanner, TalkBack users can make use of customisable gestures performed on the fingerprint scanner. Multi-language support is another feature being developed for Android O, using Google’s text-to-speech software to detect and speak the language in focus.

When running a compatible Android O device with an accessibility service active, such as magnification, users can use the new Accessibility button to apply that service on demand. This means that, using the example of magnification, a user would be able to tap the Accessibility button and then use a specific gesture to change the screen magnification. To return to the previous (or default) setting, all users need to do is press the Accessibility button again.

For low-vision users who may not require the features of TalkBack, or for users who have dyslexia, Select to Speak will be a useful feature. Select to Speak is a service which announces a selection of elements or text, and includes options to read by page, change the speed, and move to the previous or next sentence. As mentioned earlier, we will need to wait until the final updates are released in a couple of months, but the future of built-in assistive technology is very interesting.


To learn more about the latest updates, see: the latest accessibility updates in iOS 11 from AppleVis (external link); the Microsoft Accessibility Blog (external link); and the latest accessibility news about Android O (external link containing a YouTube video).

How do we deal with CAPTCHA? Making authentication accessible for everyone.



CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is used to distinguish genuine users from those with less honourable intentions. The process of authenticating a person online need not rely on CAPTCHA, though, as other methods of verification are available. The problem with CAPTCHA is that it causes difficulties for users of assistive technology, and in its most inaccessible versions can prevent them from completing the verification process altogether. What follows is an example of the barriers faced by users of assistive technology when they encounter a CAPTCHA, and some alternatives to consider when implementing security on a website.

The need for authentication and the need for accessibility

Authenticating a user, and having secure channels when submitting a form, is crucial when browsing the web: not only for contact forms, where real users must be identified among spam, but also for secure online transactions and account creation. For users of assistive technology, though, an added problem occurs: the accessibility of the CAPTCHA itself. There are many different CAPTCHA methods from different organisations, and assistive technology can be affected differently depending on the type being used. It’s also important to point out that a CAPTCHA can behave differently depending on the operating system (OS), such as Windows versus Mac or iOS.

If completing an audio CAPTCHA on Windows, for example, the ‘play’ button for the audio should behave as expected, assuming all is working as it should. On iOS, however, the audio CAPTCHA prompts users to download an MP3 file, meaning that they have to remember the content of the audio and then switch back to the form to input it and pass verification. Even where the audio itself is accessible, a problem occurs if the files are heavily processed: it is difficult to pick out the correct letters or numbers when the audio is heavily distorted. While this is done to prevent bots from interpreting the information, it creates an additional barrier when human users cannot interpret the content clearly either.

Image CAPTCHAs which require users to select specific images and not others may work for users who have good vision, but will prevent users who have little or no vision from completing the verification process. A CAPTCHA which requires users to perform a maths calculation, or select the correct response to a question, will work for some users but may cause problems for those who have a learning difficulty.

Implementing an accessible alternative will not only maintain security, but will also ensure that users of assistive technology are not excluded from the verification process. One option is simply ticking a box to indicate that a human, and not a robot, is completing the form. Another alternative is a honeypot: a hidden form field which, if filled in, stops the submission. As long as the field is clearly labelled to warn screen reader users that it should not be filled in, this is a suitable alternative. While other methods such as biometric authentication are being explored, one of the best current methods is two-factor authentication, where the user enters an email address or mobile number and receives a code to enter into the form to verify their information. Each method has good and bad points; for example, the two-factor method requires the user to have immediate access to their email account or a good phone signal.
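As a rough sketch of how the honeypot approach works on the server side (the field name `website` and the dictionary-based form data are illustrative, not taken from any particular framework):

```python
# Illustrative honeypot check. The hidden field name "website" is an
# assumption for this example; real users never see the field (hidden
# via CSS and aria-hidden), so any value in it suggests a bot.
def is_probably_bot(form_data: dict) -> bool:
    """Return True if the hidden honeypot field was filled in."""
    return bool(form_data.get("website", "").strip())

# A genuine user leaves the hidden field empty:
print(is_probably_bot({"name": "Ada", "website": ""}))  # False
# A naive bot fills in every field it finds:
print(is_probably_bot({"name": "Bot", "website": "http://spam.example"}))  # True
```

Remember the point from the paragraph above: the hidden field still needs a clear label so that screen reader users who do encounter it know to leave it blank.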

Further information

For more information about good CAPTCHA and some alternatives, check out: Some CAPTCHA alternatives (external link).

An accessibility wish list: Getting ready for a smarter future.


Before smart devices became accessible, anyone with a disability would need to purchase a device capable of running third-party assistive technology, or a specific device which met their requirements by performing a particular function, such as a hearing-aid compatible phone. Now that many devices include access features built in, and wearable technology has become part of our daily routine, smart homes are being built to make use of such technology.

While smart home technology started with an alarm to alert a carer or the authorities if a person needed assistance, other applications and devices have since been developed to make it easier to control various items in a smart home. From lighting to doorbells, and security to heating, there is often an app which can be used via a smartphone or tablet to control the various items in the home. While some apps may be accessible, all apps need to be coded with accessibility in mind to ensure that every user can control their home from their preferred device. I admit that, at the time of writing, I have not used anything like a Wi-Fi enabled heating, lighting, or security system; however, I hope my list of ideas for a fully accessible option will be realised in the near future.

As a blind user of iOS, I am familiar with the specific gestures which can be used to control an iPhone using VoiceOver, the built-in screen reader for Apple products. Similar gestures can also be used to access Android devices. If a smart home is going to be truly accessible, apps across multiple platforms will need clearly labelled items, and will need to respond to the various touch gestures made available through assistive technology. Of course, apps should include as many access options for as many users as possible, including different font and contrast options for users who have some useful vision, and AssistiveTouch and switch access for users who have limited mobility or a learning difficulty, along with many other access requirements not covered in this post but equally important. To make things easy to access and use, keeping touch screen gestures the same across devices, and making adaptive keypad functions available for people who are unable to use a touch screen, would enable all users to take advantage of a smart home.

To ensure the best experience possible, the following links will help: developing Android apps for accessibility (external link), and developing accessible iOS apps (external link).

Become accessible using some simple solutions

07/04/2017 written by Mike Taylor

Accessibility is often thought of as a costly process, and while some features can be complex, we have some top tips to get the process moving at minimum cost to you.

Consider accessibility as early as possible during a build or an update

It’s important to include accessibility as early as possible when building or updating an app, website, or any online content. The earlier the build is evaluated and any problem areas are identified, the earlier a fix can be implemented, and in many cases an easier resolution found which works for everyone. It’s something we say a lot, but it’s very true: it’s much easier to include accessibility from the start than to retro-fit it later. That said, it’s never too late to include accessibility, and it’s certainly better to do it late than not at all.

Include a clear tab order and focus highlighting when tabbing through the page

For users who can only use a keyboard, a clear and logical tab order is required for consistent and predictable navigation, and focus highlighting should be equally consistent while tabbing through the page. Implementing both will enable users to move through the page easily, with visible focus on each element (such as a link). Without this functionality, users can be confused if focus disappears completely, or skips past items such as a form or a frame.
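One common cause of a confusing tab order is a positive `tabindex` value, which forces an element out of the natural document order. As an illustrative sketch only (the class name and sample markup are invented for this example), a few lines using Python’s standard-library HTML parser can flag such values:

```python
from html.parser import HTMLParser

class TabIndexAudit(HTMLParser):
    """Collect elements whose tabindex is greater than 0 -- these
    override the natural tab order and often cause the focus jumps
    described above. tabindex="0" and "-1" are left alone."""
    def __init__(self):
        super().__init__()
        self.flagged = []
    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name == "tabindex" and value and value.lstrip("-").isdigit():
                if int(value) > 0:
                    self.flagged.append((tag, int(value)))

audit = TabIndexAudit()
audit.feed('<a href="/" tabindex="3">Home</a><button tabindex="0">OK</button>')
print(audit.flagged)  # [('a', 3)]
```

Keeping elements in their natural source order, rather than forcing positions with positive `tabindex` values, is usually the simpler fix.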

Include a clear and logical headings structure

Including a clear and logical heading structure will make it easier for all users to navigate and understand the page content. The main content heading should be marked up as a level 1 heading (h1), sub-sections as level 2 (h2), and smaller sections within those as level 3 (h3). Not including headings makes it difficult for blind computer users, who rely on screen reading software, to navigate the content section by section. Headings enable users to move around the page more easily, particularly when using screen reading software.
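As a hedged illustration of the level 1/2/3 rule above (the class name and sample markup are made up for this sketch), a short script using Python’s standard-library HTML parser can flag headings that skip a level:

```python
from html.parser import HTMLParser

class HeadingAudit(HTMLParser):
    """Record heading levels in document order and flag any jump that
    skips a level, e.g. an h3 following an h1 with no h2 between."""
    def __init__(self):
        super().__init__()
        self.levels = []
        self.skips = []
    def handle_starttag(self, tag, attrs):
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            level = int(tag[1])
            if self.levels and level > self.levels[-1] + 1:
                self.skips.append((self.levels[-1], level))
            self.levels.append(level)

audit = HeadingAudit()
audit.feed("<h1>Title</h1><h3>Oops</h3><h2>Section</h2>")
print(audit.skips)  # [(1, 3)] -- an h3 directly after an h1
```

Moving back up to a higher level (h3 followed by h2) is fine; only downward jumps that skip a level are flagged.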

Avoid the use of fast blinking images, and items which move around the screen

If an item blinks at high speed and lasts for more than five seconds, it is important to provide an easy way to stop it. Such content can be distracting for users who have learning difficulties, and worse for those with photosensitive epilepsy. Avoiding this kind of content will not only make navigation easier, but may also reduce the risk of triggering a seizure.

Include clear form labelling and clear error handling

When users are required to select an option or type information into a form, clear labelling for every item will make the form both usable and accessible. Including clear error handling for forms which are not completed correctly, as well as a prompt for every compulsory field, will make the content easier for all user groups to access.
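To show what “clear labelling for every item” means in markup terms, here is an illustrative sketch (class name and sample form invented for this example) that cross-checks `label for="…"` attributes against form field `id`s:

```python
from html.parser import HTMLParser

class LabelAudit(HTMLParser):
    """Collect label targets and form-field ids; any field id without a
    matching <label for="..."> is reported as unlabelled."""
    def __init__(self):
        super().__init__()
        self.label_targets = set()
        self.input_ids = set()
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "label" and a.get("for"):
            self.label_targets.add(a["for"])
        elif tag in ("input", "select", "textarea") and a.get("id"):
            self.input_ids.add(a["id"])

audit = LabelAudit()
audit.feed('<label for="email">Email</label><input id="email"><input id="phone">')
print(sorted(audit.input_ids - audit.label_targets))  # ['phone']
```

The unlabelled `phone` field is exactly the kind of item a screen reader would announce only as “edit text”, with no hint of what to type.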

Include a clear alternative text for graphics which convey specific information

Include clear alternative text for graphics which convey specific information, such as special offers, or which direct users to specific content. If a graphic is purely decorative, hiding it from screen readers with a null (empty) alt attribute avoids confusion for blind users while maintaining the visual effect for everyone else.
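The distinction above is between an image with no alt attribute at all (a problem) and one with a deliberately empty `alt=""` (decorative, fine). As an illustrative sketch (class name and image names invented), a standard-library parser can separate the two cases:

```python
from html.parser import HTMLParser

class AltAudit(HTMLParser):
    """Split images into those missing an alt attribute entirely
    (a problem) and those with an intentional empty alt="" (decorative)."""
    def __init__(self):
        super().__init__()
        self.missing = []
        self.decorative = []
    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        a = dict(attrs)
        src = a.get("src", "")
        if "alt" not in a:
            self.missing.append(src)
        elif a["alt"] == "":
            self.decorative.append(src)

audit = AltAudit()
audit.feed('<img src="offer.png" alt="20% off all titles">'
           '<img src="divider.png" alt="">'
           '<img src="logo.png">')
print(audit.missing)     # ['logo.png']
print(audit.decorative)  # ['divider.png']
```

A screen reader skips the divider silently, describes the offer image properly, and may fall back to reading the file name of the logo, which is exactly the confusion the tip aims to prevent.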

Include clear link text and a skip link

Including clear link text which indicates what will happen when a link is selected, such as ‘read our top accessibility tips’ rather than ‘click here’, makes all the difference when navigating content. A skip link allows keyboard-only users to jump straight to the main content of the page. For users of desktop or laptop devices, a skip link reduces the amount of tabbing required to reach the main content; on mobile, it reduces scrolling, which particularly helps users who have limited mobility or those using screen reading software.
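A working skip link is simply an in-page anchor placed before everything else. As a final illustrative sketch (class name and sample markup invented for this example), this checks that the first link on a page is an in-page link whose target anchor actually exists:

```python
from html.parser import HTMLParser

class SkipLinkAudit(HTMLParser):
    """Record the first link's href and all element ids, then check the
    first link is an in-page '#...' link pointing at a real target."""
    def __init__(self):
        super().__init__()
        self.first_href = None
        self.ids = set()
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "a" and self.first_href is None and a.get("href"):
            self.first_href = a["href"]
        if a.get("id"):
            self.ids.add(a["id"])
    def has_working_skip_link(self):
        href = self.first_href or ""
        return href.startswith("#") and href[1:] in self.ids

audit = SkipLinkAudit()
audit.feed('<a href="#main">Skip to main content</a>'
           '<nav>...</nav><main id="main">...</main>')
print(audit.has_working_skip_link())  # True
```

A skip link that points at a missing anchor is a common mistake: it looks correct but silently does nothing for keyboard users.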

Introducing WCAG 2.1.

22/03/2017 written by Mike Taylor


Anyone with an interest in the Web Content Accessibility Guidelines (WCAG) will no doubt be aware of WCAG 2.1. Version 2.1 is open for feedback and suggestions until 31/03/2017, and has been published to take into consideration the increase in mobile device use, and the accessibility of such devices when using assistive technology.

Most of us now use a smartphone, or have a portable device with internet capability, and more devices are thankfully being shipped with accessible solutions built in to their respective operating systems. As a result, it’s vital that online content and user interfaces, among other things, are as accessible as possible to as many user groups as possible. While still in the early stages, it’s great to know that WCAG is moving to take mobile and tablet devices into account. To read more about WCAG 2.1, and to add comments and suggestions before 31st March, visit: the Web Content Accessibility Guidelines 2.1 pages.