From Side Panel to Centre Stage

Photosymbols at the United Nations

Photosymbols were invited to present our work at a side event during the 18th Session of the United Nations Conference of States Parties to the CRPD (COSP18), a global gathering focused on upholding the rights of disabled people.

Our panel explored legal access, plain language, Easy Read, and the role of AI. Speakers from the USA, Sweden, Brazil, Italy and the UK shared bold ideas for making information more accessible. I was there representing Photosymbols and shared how our unique approach to co-production, technology, and values is already helping shape what inclusive AI can look like.

Why Were We There?

We believe people with learning disabilities should shape the AI systems that are going to affect their lives. AI is already transforming how we work, learn, and communicate. But without thoughtful design, it risks leaving people behind. That’s why we set up our Newton Project, where I've been travelling up and down the country meeting people with learning disabilities and asking three simple but vital questions:

  • What do you know about AI and how do you feel about it?

  • What makes good Easy Read?

  • How do you personally like to get information?

The insights people have shared with us have been invaluable. I soon realised this work had to feed into the heart of what we do next. So we have started building our own AI model from the ground up. It's an AI model shaped by these conversations. We’ve called it EveryVoice AI.

EveryVoice AI is the foundation of all our AI work now and in the future. It’s our attempt to build truly inclusive technology shaped by real voices. It is designed to be respectful and kind because that’s what people told us matters most. But this isn’t just a model that simplifies information. It’s a model that represents the people it is built for.

We’ve laid the groundwork and this is only the beginning. Every AI solution we create, from EasyMaker to tools we haven’t imagined yet, will have EveryVoice AI at its core.

What I Shared at the UN

My talk focused on three core principles that continue to guide everything we’re doing:

1. Meet people where they’re at and leave no one behind

We usually start by explaining what AI is, what it can do, and why it matters. I've noticed that everyone can be involved in a meaningful way, but everyone is at a different point on the journey. Some people feel unsure about AI, perhaps because they haven't had the chance to learn about it. Others are curious and even using it already. These conversations have really shaped how we work.

2. Access to technology can’t be assumed

One of the most important things we’ve learned is that many people don’t have access to the right device to receive accessible information. Some had smartphones but lost them or had them stolen and couldn’t afford to replace them, so now use basic phones for calls and texts only. Others liked using tablets, laptops or computers to go online. These differences really matter. Any serious information strategy needs to take them into account if we’re going to truly reach people.

3. Representation in AI-generated images matters

AI image generation can easily reinforce stereotypes when it comes to disability. That’s why we’re doing groundbreaking work to shape how we use this technology. Through the Newton Project and regular conversations with our Expert Advisers, we’ve been exploring how AI can be used to represent people and ideas more thoughtfully.

We’ve found that many people with learning disabilities actually enjoy the creative potential of AI-generated images, particularly when it comes to abstract or niche topics that are hard to photograph. But it also raises big questions. Should we let AI generate people? How do we avoid harmful or lazy portrayals of disability? We’ve had powerful discussions with Expert Advisers and groups about these issues, and they’ve not only helped us shape our policies, they’ve helped everyone involved understand AI better. Me included.

This isn’t a finished piece of work. It’s an evolving process, and we’re proud to be having these conversations openly, with the people whose voices matter most. This feedback and innovative thinking have given us a unique perspective, and we are continually working to improve how EveryVoice AI generates images. It’s an ongoing project that adapts with emerging technologies and continues to be shaped by the lived experience of disabled people.

A Little Gift: The Easy Read News Feed

To wrap things up I gave a quick demo of our latest tool built on EveryVoice AI. It can take any news feed or blog and automatically convert it into Easy Read articles. We built a version that follows the COSP18 feed in real time. Each story becomes a heading with three short paragraphs, and includes pictures automatically generated with AI.
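For readers curious about the mechanics, the demo's feed-to-Easy-Read flow can be sketched roughly like this. This is a purely illustrative Python sketch, not our actual code: the sample feed content is made up, and the `simplify` function is a placeholder standing in for the EveryVoice AI rewriting step (the image-generation step is omitted entirely).

```python
import xml.etree.ElementTree as ET

# A tiny sample feed standing in for the live COSP18 feed (hypothetical content).
SAMPLE_RSS = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <item>
    <title>Delegates discuss inclusive technology</title>
    <description>Speakers talked about designing new technology so that
    disabled people are included from the start. They also said plain
    language matters for everyone. The session ended with questions.</description>
  </item>
</channel></rss>"""

def simplify(text, max_paragraphs=3):
    """Placeholder for the real AI step.

    In the actual tool, this is where EveryVoice AI would rewrite the story
    in Easy Read language; here we just split it into up to three short
    sentences and treat each one as a paragraph.
    """
    sentences = [s.strip() for s in text.replace("\n", " ").split(".") if s.strip()]
    return [s + "." for s in sentences[:max_paragraphs]]

def feed_to_easy_read(rss_text):
    """Turn each feed item into a heading plus up to three short paragraphs."""
    root = ET.fromstring(rss_text)
    articles = []
    for item in root.iter("item"):
        articles.append({
            "heading": item.findtext("title", default="").strip(),
            "paragraphs": simplify(item.findtext("description", default="")),
        })
    return articles

for article in feed_to_easy_read(SAMPLE_RSS):
    print(article["heading"])
    for para in article["paragraphs"]:
        print(" -", para)
```

In the real tool, each resulting heading and paragraph would also be paired with an AI-generated picture; the sketch above only covers the text side of the pipeline.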

Click here to view an Easy Read version of this blog made with EasyMaker
Click here to download an Easy Read PDF of this blog made with EasyMaker

#COSP18 #COSP #GlobalGoals #EveryoneIncluded