The future of Artificial Intelligence: A future for all

07/08/2017


We now live in a world where artificial intelligence and assistive technology are more accessible than ever before. In my previous post, ‘The update round up’, I highlighted some of the new updates from Apple, Windows and Android, and how each offering will improve access to content on mobile and desktop devices for various user groups. What about the day-to-day usage of artificial intelligence though? It’s actually closer to hand than we think.


Artificial intelligence (AI) is fast becoming the norm in our daily lives. The first thing to identify is that it doesn’t just help people who have additional access requirements; all users, whether or not they use assistive hardware or software, benefit from AI. If you have ever asked a virtual assistant such as Siri, Google, Alexa or Cortana to do something, you have used AI. The technology is also developing to learn what we use most and adapt to our digital habits. So if you frequently use Cortana to open apps or set reminders, it will become familiar with those tasks, and any others you perform regularly.


The use of AI can be incorporated into apps, something which is on the increase with updates to the various desktop and mobile operating systems. This means that any third-party app installed on a device will be able to take advantage of AI, as long as the developer has included this functionality when producing the app. One app for iOS which is aimed at supporting blind or low-vision users is Seeing AI. The app has various features including document scanning, a barcode reader, and the ability to share information via the iOS share sheet. This means that the app can identify items from the camera roll, allowing users to attach names to the people in a picture, such as relatives for example. So the use of AI is increasing as the updates and overall development of technology continue.


Additional Resources

To learn more about AI, including the Seeing AI app, visit the following pages. Note: the Seeing AI app is not available in the UK App Store at the time of writing; when it is, I will be giving it a good run through.

  - The Seeing AI app for iOS (external link).
  - The Cortana website (external link).
  - All about Siri (external link).
  - All about the Google Assistant (external link).

The icing on the cake: The difference between AA and AAA compliance

31/07/2017


Introduction

Achieving a level of compliance for your app or website means that, as far as the Web Content Accessibility Guidelines (WCAG) are concerned, your offering is accessible to as many user groups as possible who require assistive technology to get online. The terms assistive technology, and even accessibility, can mean different things to different people, and here at the Digital Accessibility Centre (DAC) we offer level AA and AAA accreditation for our clients depending on their requirements.

What do the different levels mean?

  1. Single A is viewed as the minimum level of conformance, which all websites, apps, and electronic content such as documents should meet.
  2. Double A is viewed as the acceptable level of accessibility for many online services, and should work with most assistive technology now widely available on desktop and mobile devices, or available as a third-party installation.
  3. Triple A is viewed as the gold standard level of accessibility, providing everything needed for a completely accessible offering, including all the bells and whistles which make the difference between a very good experience and an excellent one.

In his post Why do we need WCAG Level AAA? (external link), Luke McGrath points out that attempting to reach AAA can introduce problems which cause failures against some AA criteria. Meeting AAA will mean that your website is the best it can be; however, the additional implementation may not be possible if budget is a concern, and working through a particular problem may push back a go-live date. A good example is found below, which highlights how the difference between AA and AAA affects end users.


One key difference between AA and AAA affects screen reader users navigating a page. If a screen reader user is moving through a list of links and hears their software announce ‘click here’ or ‘read more’, the page will still pass at AA provided each link is associated with surrounding text in a paragraph or list. In other words, the link would be wrapped in text like ‘to read the DAC blog click here’, with ‘click here’ being the link. While it is possible to understand the link using another method of navigation, such as reading the entire paragraph rather than just the links, the link text alone is ambiguous when moving through all the links on a page to find the required content. So adding the icing (clear link text in this instance) makes the link easier to understand no matter what method of navigation is being used.
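To make the difference concrete, here is a minimal sketch (browser-based TypeScript, with a hypothetical and far from exhaustive list of ambiguous phrases) which flags links whose text alone would not meet the AAA expectation, even though the surrounding sentence might let them pass at AA:

```typescript
// Phrases which rely entirely on surrounding text for their meaning.
const ambiguousPhrases = ["click here", "read more", "here", "more"];

// Collect every link whose visible text, taken on its own,
// does not describe its purpose.
function findAmbiguousLinks(root: Document): HTMLAnchorElement[] {
  return Array.from(root.querySelectorAll<HTMLAnchorElement>("a[href]")).filter(
    (link) =>
      ambiguousPhrases.includes((link.textContent ?? "").trim().toLowerCase())
  );
}

// AA-style markup (purpose clear only in context):
//   <p>To read the DAC blog <a href="/blog">click here</a>.</p>
// AAA-style markup (purpose clear from the link text alone):
//   <p><a href="/blog">Read the DAC blog</a>.</p>
findAmbiguousLinks(document).forEach((link) => {
  console.warn("Ambiguous link text:", link.textContent, "->", link.href);
});
```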


As shown above, moving to AAA if at all possible will create the best experience for all users; however, AA is accepted as a very good commitment to accessibility. For more information feel free to get in touch, or check the Web Content Accessibility Guidelines 2.0 (WCAG2, external link).

The update round up: what’s new in accessibility when the updates are released?

17/07/2017


Introduction

It’s that time of year again when we all look forward to the regular updates to iOS, Android and Windows, and wonder what changes are ahead when they are introduced. What can we expect from the assistive technology though, and in particular, what improvements are the big players planning for their built-in software?


The latest updates from Apple

iOS 11 comes with many exciting features. One accessibility improvement is the one-handed keyboard, adding another option to an already feature-rich OS. Other offerings include automatic image scanning, where VoiceOver (the built-in screen reader on iOS) will attempt to scan an image for text and read it to the user. This, combined with the same scanning for unlabelled buttons, makes for interesting developments. For low-vision users, a new invert colours option and additional integration with third-party apps mean better contrast across more applications.


macOS users who experience difficulty using a physical keyboard will benefit from an on-screen keyboard in the September update. The keyboard will allow users to customise it to their requirements, although as with other updates we will need to wait and see what the final result will be. Many of us talk to Siri, but have you ever wanted to type a message to Siri instead? Now you can: Siri will still provide audio feedback, so you can type what you want if you can’t speak to it. Improved PDF support for tables and forms with VoiceOver is another feature of the new macOS, one which I am sure will be much welcomed by VoiceOver users attempting to quickly access PDFs and other documentation. Similarly to iOS, VoiceOver on the Mac will describe an image in response to a simple keyboard command, perhaps making it possible to interpret your photos; time will tell. Better navigation of websites which use HTML5 is also included in the update, meaning that VoiceOver will support the new standard and provide better navigation when, for example, tables are used in messages.


Apple Watch is also benefiting from a software update, including the ability to change the click speed of the button on the side of the watch. This means that users who have difficulty double-clicking, for example, can customise the click speed for Apple Pay and other such services. Apple TV will now support braille displays. A braille display is a device which translates the on-screen text into braille via Bluetooth or USB, allowing users to navigate and read content such as programme guides.

Windows

Improvements to Windows Narrator, the built-in screen reader on Windows devices, include a device learning mode, which announces what command is performed when a key or button is pressed on a connected device such as a keyboard. Narrator users will also experience a clearer and more unified user interface (UI), as improvements across all apps and devices will make Narrator easier to learn and use. Scan mode, used to quickly navigate a screen or web page, will be on by default, and its setting will be remembered across multiple apps to further improve the user experience. Narrator will also include a service which attempts to recognise images which lack alt (alternative) text, by using Optical Character Recognition (OCR) to identify the image.
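Narrator’s OCR is only a fallback; the reliable fix is still for content authors to supply alt text. As a minimal sketch (browser-based TypeScript, illustrative only), the following finds the images on a page which would force Narrator, or any screen reader, to guess:

```typescript
// Find images with no alt attribute at all. These are the images a
// screen reader (or Narrator's OCR fallback) has to guess at.
function findUnlabelledImages(root: Document): HTMLImageElement[] {
  return Array.from(root.querySelectorAll("img")).filter(
    (img) => !img.hasAttribute("alt")
  );
}

// Informative image:  <img src="team.jpg" alt="The DAC test team at work">
// Decorative image:   <img src="divider.png" alt="">  (an empty alt tells
// screen readers to skip it, so it is not flagged here)
findUnlabelledImages(document).forEach((img) => {
  console.warn("Image with no alt text:", img.src);
});
```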


The Magnifier will follow Narrator’s focus, making it easier for users who use both Narrator and magnification simultaneously. The desktop magnifier will include smoother fonts and images, as well as additional settings and the ability to zoom in or out using the mouse. Also included for low-vision users are new colour filters, which make it easier for people who have colour blindness or light sensitivity to use a Windows device.

Android

A new accessibility shortcut will be available for users running Android O. The feature is set to toggle TalkBack on and off by default; however, it can be configured to control another accessibility service after setup, such as magnification or switch access. The shortcut is performed by pressing the volume up and down buttons together on any compatible device, meaning that it will be easier than ever to reach your required access option on Android O. When using Android O with TalkBack, a separate TalkBack volume has been introduced, enabling users to change the speech output volume independently of the media volume, and a new slider at the top of the screen performs the same action when media is playing. So if listening to any media, it is now possible to easily hear what TalkBack is announcing. For devices running Android O with a fingerprint scanner, TalkBack users can make use of customisable gestures performed on the scanner. To support additional languages, multi-language support is another feature being developed for Android O, with Google’s text-to-speech software detecting and speaking the language in focus.


When running a compatible Android O device with an accessibility service active, such as magnification, users can invoke that service whenever the Accessibility button is available. This means that, using the example of magnification, a user would be able to tap the Accessibility button, then use a specific gesture to change the screen magnification. To return to the previous (or default) setting, all a user needs to do is press the Accessibility button again.


For low-vision users who may not require the full features of TalkBack, or for users who have dyslexia, Select to Speak will be a useful feature. Select to Speak is a service which announces a selection of elements or text, and includes options to read by page, adjust the speed, and move to the previous or next sentence. As mentioned earlier, we will need to wait until the final updates are released in a couple of months, but the future is very interesting for built-in assistive technology.


Resources

To learn more about the latest updates, go to:

  - The latest accessibility updates in iOS 11 from AppleVis (external link).
  - The Microsoft Accessibility Blog (external link).
  - The latest accessibility news about Android O (opens external link which contains a YouTube video).

How do we deal with a CAPTCHA: Making authentication accessible for everyone.

23/05/2017


Introduction

CAPTCHA (completely automated public Turing test to tell computers and humans apart) is used to distinguish genuine users from others who have less honourable intentions. The process of authenticating a person online need not rely on CAPTCHA though, as other methods can be used to prove yourself online. The problem with CAPTCHA is that it causes difficulties for users of assistive technology, and in its most inaccessible versions can prevent users from completing the verification process altogether. What follows is an example of the barriers faced by users of assistive technology when they encounter a CAPTCHA, and some alternatives to consider when implementing security on a website.


The need for authentication and the need for accessibility

Authenticating a user, and having secure channels when submitting a form, is crucial when browsing the web; not only for contact forms, where real users must be identified from spam, but also for secure online transactions and account creation. For users of assistive technology though, an added problem occurs: accessibility of the CAPTCHA itself. There are many different methods of CAPTCHA from different organisations, and assistive technology can be affected depending on the type being used. It’s also important to point out that a CAPTCHA can be presented differently depending on the operating system (OS) being used, such as Windows versus Mac or iOS.


When completing an audio CAPTCHA on Windows, for example, the ‘play’ button for the audio will behave as expected, assuming all is working as it should. On iOS, however, the audio CAPTCHA prompts users to download an MP3 file, meaning that users have to remember the content of the audio and switch back to the form to enter it and pass verification. And while some audio is accessible, a problem occurs if the files are heavily processed: it is difficult to pick out the correct letters or numbers when the audio is badly distorted. While this is done to prevent bots from interpreting the information, it creates an additional barrier if users are not able to interpret the content clearly.


Image CAPTCHAs which require users to select specific images and not others may work for users who have good vision, but will prevent users who have little or no vision from completing the verification process. A CAPTCHA which requires users to perform a maths calculation, or select the correct response to a question, will work for some users but may cause problems for users who have a learning difficulty.


Implementing an accessible alternative will not only maintain security, but will also ensure that users of assistive technology are not excluded from the verification process. One good alternative is ticking a box to indicate that it is a human and not a robot completing the form. Another is to implement a honeypot: a hidden form field which, if filled in, stops the submission. As long as the field is clearly labelled to warn screen reader users that it should not be filled in, this is a suitable alternative. While other methods such as biometric authentication are being explored, one of the best options is 2-factor authentication, where the user enters an email address or mobile number and receives a code to enter into the form to verify their information. Each method has good and bad points; for example, the 2-factor method requires the user to have immediate access to their email account or a good phone signal.
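As an illustration of the honeypot approach, here is a minimal sketch in TypeScript. The field name ‘website’ and the form shape are hypothetical; the essential points are that the field is labelled for screen reader users and that a filled-in honeypot quietly stops the submission:

```typescript
// Shape of a hypothetical contact form submission.
interface ContactForm {
  name: string;
  email: string;
  message: string;
  website?: string; // the honeypot: visually hidden, never seen by real visitors
}

// Matching markup (moved off-screen with CSS but still labelled, so
// screen reader users who do encounter it know to leave it empty):
//   <div class="visually-hidden">
//     <label for="website">Leave this field blank</label>
//     <input id="website" name="website" autocomplete="off" tabindex="-1">
//   </div>

function isLikelyBot(form: ContactForm): boolean {
  // Humans never see the honeypot, so any value in it suggests a bot.
  return Boolean(form.website && form.website.trim().length > 0);
}

function handleSubmission(form: ContactForm): void {
  if (isLikelyBot(form)) {
    // Discard silently; telling the bot why it failed only helps it adapt.
    return;
  }
  // ...pass the genuine submission on for processing here...
  console.log("Accepted message from", form.name);
}

handleSubmission({ name: "Ada", email: "ada@example.com", message: "Hello" });
```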


Further information

For more information about good CAPTCHA and some alternatives, check out: Some CAPTCHA alternatives (external link).

An accessibility wish list: Getting ready for a smarter future.

13/04/2017


Before smart devices became accessible, anyone with a disability would need to purchase a device capable of running third-party assistive technology, or a specific device which met their requirements by performing one function, such as a hearing-aid compatible phone. Now that many devices include access features built in, and wearable technology is becoming part of our daily routine, smart homes are being built to make use of such technology.


While smart home technology started with an alarm to alert a carer or the authorities if a person needed assistance, other applications and devices have been developed to make it easier to control various items in a smart home. From lighting to doorbells, and security to heating, there is often an app which can be used on a smartphone or tablet to control the various items in the home. While some apps may be accessible, all apps need to be coded to ensure that all users can control their home from their chosen device. I admit that, at the time of writing, I have not used anything like a Wi-Fi enabled heating, lighting or security system; however, I hope my list of ideas for a fully accessible option will be realised in the near future.


As a blind user of iOS, I am familiar with the specific gestures which can be used to control an iPhone using VoiceOver, the built-in screen reader for Apple products. Similar gestures can also be used to access Android devices. If a smart home is going to be truly accessible, apps across multiple platforms will need clearly labelled items, and will need to respond to the various touch gestures supported by assistive technology. Of course, apps should include as many access implementations for as many users as possible: different font and contrast options for users who have some useful vision, assistive touch and switch access for users who have limited mobility or a learning difficulty, and many other access requirements which are not covered in this post but are equally important. To make things easy to access and use, keeping touch screen gestures consistent across devices, and offering adaptive keypad functions for people who are unable to use a touch screen, would enable all users to take advantage of a smart home.
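To make ‘clearly labelled items’ concrete, here is a minimal sketch of one smart-home control, assuming a web-based companion app (TypeScript; every name, including the hypothetical hub call, is illustrative). Because it is a native button with a proper role and state, it works with screen readers, switch access, and keyboards alike:

```typescript
// Build one labelled, stateful control for a smart-home light.
function createLightToggle(deviceName: string, initiallyOn: boolean): HTMLButtonElement {
  const button = document.createElement("button");
  button.textContent = deviceName;
  // role="switch" plus aria-checked means a screen reader announces
  // something like "Kitchen light, switch, off" instead of just "button".
  button.setAttribute("role", "switch");
  button.setAttribute("aria-checked", String(initiallyOn));

  button.addEventListener("click", () => {
    const isOn = button.getAttribute("aria-checked") === "true";
    button.setAttribute("aria-checked", String(!isOn));
    // ...send the new state to the (hypothetical) smart-home hub here...
  });
  return button;
}

// A native <button> responds to taps, VoiceOver/TalkBack double-taps,
// switch access, and the keyboard without any extra gesture handling.
document.body.appendChild(createLightToggle("Kitchen light", false));
```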


To ensure the best experience possible, the following links will help:

  - Developing Android apps for accessibility (external link).
  - Developing accessible iOS apps (external link).