AI, Accessibility, and Trust: What Our Users Told Us About ChatGPT
Navigating menus with limited dexterity takes time. Creating a simple table in Microsoft Word would usually take Siân around 45 minutes, but using ChatGPT, she can do the same task with a single spoken instruction.
Stories like Siân’s are why AI chatbots such as ChatGPT, Google Gemini, and Microsoft Copilot are increasingly being discussed in the context of accessibility. For many people, these tools feel transformative. For others, they raise serious concerns around trust, safety, and responsibility.
At the Digital Accessibility Centre, we’re often asked how AI can support accessibility. To better understand this, we spoke directly to our users about how they are using these tools today — the benefits and the shortfalls.
What are AI chatbots?
AI chatbots are a form of generative AI. In simple terms, this means they generate content. Most commonly, they generate natural-language responses, but they can also create images, code, summaries, and draft documents.
These tools are built on large language models (LLMs), which learn patterns in language. This allows them to respond to prompts in a way that feels conversational. As a result, they can summarise large amounts of text, reword content, assist with research, and support writing tasks.
However, AI chatbots can also make mistakes, fabricate information, and invent sources — a phenomenon often referred to as 'hallucination'. Wider concerns also include privacy, intellectual property, academic integrity, and the displacement of human work. More serious ethical issues have also been widely reported, including emotional dependency, psychological harm, and even links to suicide.
Many of these concerns were strongly echoed in our conversations with our users. Issues of trust were frequently the first point raised when the topic was introduced. As these technologies are still in early stages, their limitations and harms are still being uncovered. Considerations of safety and privacy must remain central as AI continues to be integrated into everyday tools and services. It is important to consider any ways these issues may uniquely impact users with disabilities.
We were interested in the ways in which our users were utilising these tools to support them with accessibility. Here is what they said:
Pros and Cons of AI as a disabled user
Mobility impairments: Siân
It's like having a personal assistant
Siân has spastic quadriplegic cerebral palsy and is a full-time wheelchair user. She has low dexterity and coordination in her hands, as well as spasms, which mean that small, intricate tasks can take a significant amount of time. When using a computer, she primarily relies on voice activation software (technology which enables computers, devices, and software to interpret and understand human speech).
Siân says that using ChatGPT has changed her life. She says it is like having her own personal assistant and jokingly refers to it as her "bestie".
For tasks that would normally require navigating long sequences of menus, natural-language prompting is a major time saver. Creating a table in Word, which might otherwise take Siân around 45 minutes, can be done from a single spoken instruction.
She also finds it useful for searching for information. In one example, ChatGPT helped her navigate an automated phone menu by explaining what each option would lead to. "I kept pressing the wrong number and getting lost," she explained, "and it told me where all the different menu options would take me."
ChatGPT has also helped with practical, everyday tasks, such as explaining how to undo the clasp of a necklace when she couldn’t remove it herself. It has rewritten knitting patterns into simpler, clearer steps that are easier for her to follow.
When she woke during the night feeling congested, it suggested simple practical steps, such as sitting up in bed. She also asks for exercise ideas, and because the tool understands that she uses a wheelchair, it responds with relevant suggestions. "I find it reassuring and calming," she says. "It tells me not to push myself too far."
She has also used ChatGPT to practise a talk she was giving at church, using voice chat to check timing and receive feedback on sections she struggled with. It has helped her write clear, effective emails to her social worker and doctor.
Cognitive, language, and learning disabilities: Sophie
Sophie has dyslexia. She explained how AI tools support her with written communication.
"I find AI really helpful when I'm writing a formal letter," she said. "I don't tend to use long wording, and AI can make it sound more formal and flow better. I'll type something as I would say it, and then ChatGPT makes it sound much more professional."
She also uses AI to generate images. "I find it impossible to envision things unless I can see them," she explained. "If I'm planning something in my house, I can describe how I want the room to look and it creates an image to show me."
Sophie also uses ChatGPT for numerical problem-solving. When building a cabinet, she provided measurements and received clear instructions on where internal shelves should be placed.
Blindness: Chris
Chris has been blind since birth and has used screen-reading software since childhood. While he uses AI tools regularly, his perspective highlights some of the more serious concerns around AI and accessibility.
As AI becomes embedded into more complex tasks, Chris explains that users are increasingly being asked to trust it implicitly. AI chatbots are sometimes wrong, and it is important to apply a strong sense of critical judgement whenever you use them.
When AI is applied to visual tasks, however, sighted users can verify the results for themselves, while blind users often cannot. "I worry about context," Chris explained. "Human judgement has a level of subjectivity that the AI lacks. Two people might look at the same image and describe it differently — so why should an AI's description be trusted as authoritative?"
Although Chris makes regular use of AI applications such as Microsoft's Seeing AI, he points out that it is easy to imagine scenarios where integrating AI into accessibility solutions would require an unacceptably high level of trust. "I would never trust AI to tell me when to cross the road, for example," he says. This helps explain his scepticism toward AI as a general solution to accessibility.
Automating accessibility through AI could also reduce accessibility expertise in the human workforce. There is a risk that accessibility becomes an afterthought, dismissed as something that can simply be "left to the AI".
Chris also highlighted practical barriers. He explained that Microsoft Copilot responses are not fully readable with a screen reader. When such tools are integrated into workplace practices, they risk creating new accessibility barriers rather than removing existing ones.
That said, Chris shared positive examples too. He often asks ChatGPT for instructions “from a blindness perspective”. When learning how to make an omelette, the guidance focused on touch, sound, and smell, and also offered practical tips on how to make sure the egg was fully cooked.
ChatGPT also helped him troubleshoot a plumbing issue by describing the inside of a cistern in terms of shape, texture, and spatial layout. It has supported him with coding projects, explaining how to set up an accessible development environment and suggesting terminal-based project ideas. When writing code, "vibe coding" saves him time: describing what he wants in natural language, rather than typing exact syntax that depends on perfect formatting, spares him the kind of formatting errors that have previously cost him hours.
Despite this, Chris remains cautious.
He never logs into AI services and uses additional privacy tools. "It's a technology, not a friend," he said. "It doesn't understand you, but it makes you think it does."
What organisations should take from this
AI chatbots are not specifically assistive technologies, but they can offer some useful support in certain contexts. However, they are not neutral, not infallible, and not accessible by default.
Any organisation adopting AI should have a clear policy governing its use. Based on our user feedback, several principles stand out:
- Audit the application interfaces
- Not all AI interfaces are accessible, and introducing them can create new barriers.
- Get user feedback
- The best way to find out how these tools may be beneficial to users is to ask them!
- Do not expect users to rely on AI for tasks they can't verify
- Rewriting and rewording can be helpful, but meaning can be lost in complex rewrites for users who already struggle with technical language. If AI is integrated into visual tasks, provide an easy-to-use mechanism for blind users to verify the result.
- Invest in AI literacy
- Training in prompt engineering may help users set boundaries, understand outputs, and spot errors or bias.
- Scope AI tasks to maintain control
- Clearly scoping how AI will be used and separating tasks such as research, drafting, and editing helps users retain ownership of their work.
- Be explicit about limits
- Companies must take responsibility for clearly framing AI tools so that users understand their limits and potential for bias and hallucinations.
- Keep accessibility a human responsibility
- AI must not replace accessibility expertise. Accessibility must remain human-led and human-accountable.
Our users' experiences show that AI can be genuinely empowering in the right contexts. It can save time, reduce friction, and support independence. But it can also introduce new challenges and accessibility barriers for some users, and issues of trust, verification, and safety remain serious concerns.
Ultimately, AI is a tool, but it is not a substitute for accessible design, human judgement, or lived experience.
Watch out for future articles comparing accessibility in generative AI applications!
Article by: Sarah Parry
Senior Technical Auditor