Two people sit together in a workshop session, discussing an Easy Read document, with one person holding a pen and pointing at the page while the other speaks.

Research That Includes Everyone

The Easy Read Standard is built on research with over 100 people with learning disabilities. But how do you actually do that research well? How do you make sure you're hearing from everyone, not just the people who find it easiest to take part?

This is something we thought about a lot when designing the Newton Project. And I think the approach we took is worth sharing - not because it's the only way, but because it's different from how research in this area often gets done.

The problem with standard approaches

Most research methods were designed for people who are comfortable in unfamiliar settings, confident speaking up in groups, and used to filling in forms and answering questions. When you apply those methods to people with learning disabilities without adapting them, something predictable happens: you end up with convenience sampling - hearing mainly from the people who fit the mould.

The participants with the strongest verbal skills. The ones with clear opinions they can articulate quickly. The people who are used to being consulted and know how these things work.

Those voices matter. But they're not the only voices. And if your research only captures them, you're not getting the full picture - you're getting a skewed sample that just happens to be easy to reach.

Some researchers call these groups "hard to reach." We prefer "seldom heard" - because the problem isn't that people are hard to find. It's that standard methods aren't designed to hear them.

Going to where people are

For the Newton Project, I travelled across the country to meet people in naturalistic settings where they already felt comfortable - self-advocacy groups, People First organisations, day services. Over the course of a year, I visited more than 20 groups from Cornwall to York, from Camden to Dorset.

But it wasn't only group settings. I also worked with individuals - in coffee shops, over lunch, sitting outside at street cafes. Wherever people felt comfortable talking.

Sessions were planned in advance with group leaders. I arrived with accessible survey materials - Easy Read throughout, with pictures, following the same principles we were researching. People could respond on tablets or on paper, whichever they preferred. Groups were paid for their time - not in vouchers, but in money.

These are reasonable adjustments to standard research methods. They take more time and cost more money. But without them, you're only hearing from people who can adapt to your process - not the other way around.

Taking the time to explain

When you're asking people about fonts, layouts, picture styles, and page designs, you can't assume everyone immediately understands what you're asking or why it matters. So a big part of each session was simply explaining - in plain language, with examples, at whatever pace the group needed.

Sometimes that meant sessions ran longer than planned. Sometimes people needed to see the same comparison several times before they felt ready to give a view. Sometimes the discussion went in unexpected directions that didn't fit neatly into survey questions.

None of that is a problem. It's just what participatory research looks like when you're doing it properly.

Mixed methods: capturing what surveys miss

Not everything people shared could be captured in tick-boxes. So we took a mixed methods approach - combining quantitative survey data with qualitative evidence from conversations and observations.

We video recorded most sessions - with consent - and transcribed them afterwards. This let us go back and catch the nuances: the comments made in passing, the conversations between participants, the moments where someone's view became clear through context rather than a direct answer.

We used AI transcription to help process this material, then reviewed it carefully to understand how individual contributions fitted into the broader picture. For people who didn't express a clear preference during the session, this analysis often revealed where they stood - just expressed in other ways than a simple "I prefer A or B."

Using multiple data sources like this - sometimes called triangulation - gives you more confidence in your findings than any single method alone.

Iterative design: updating as we went

The survey itself evolved through the project. Early sessions taught us things about how questions landed, which comparisons made sense, where people got stuck. We updated the materials based on that feedback - a form of co-production built into the research process itself.

This iterative approach meant the survey was sharper and clearer by the end than when we started. That's not a flaw in the methodology. That's the methodology working as it should.

Hearing the quieter voices

Anyone who has spent time in groups of people with learning disabilities will recognise a pattern: certain people tend to step forward. They're confident, comfortable speaking up, often well-known in their group. Their views get heard.

Others hang back. Older members. Women - whose voices, in my experience, are often outnumbered by men's among those that get captured. People who use communication aids. People who need more time to process questions or formulate responses.

If you're not actively working to include those quieter voices, you'll miss them. Your research will skew towards the people who are easiest to hear, and you'll end up thinking that's representative when it isn't.

I tried to counter this throughout the Newton Project. That meant making space for people who needed longer. It meant not just accepting the first confident answer as the group's view. It meant going back to video recordings to catch contributions that got overlooked in the flow of a session.

Not a ram-raid

There's a version of research that parachutes in, gathers data, and moves on to the next project. The participants never hear from you again. Their contribution disappears into a report somewhere.

That's not how we work. Photosymbols has been providing images to these communities for over 20 years. We have relationships with some of these groups going back even longer. This isn't a one-off extraction exercise - it's part of an ongoing conversation.

We're already planning to return to the groups from this first wave of research, as well as working with new groups. This longitudinal approach means the Newton Project continues. The evidence base will keep growing. And the people who contributed will see their input shaping tools and standards they actually use - a form of respondent validation built into how we operate.

What inclusive research actually means

There's a version of consultation that ticks boxes without really listening. You invite a few articulate self-advocates to a meeting, record their views, and call it co-production. It's neat and efficient and produces clean data.

But it's not inclusive. It excludes everyone who doesn't fit that format.

The Newton Project took longer, cost more, and produced messier data than a quick consultation would have. I think that's a feature, not a bug. The evidence behind the Easy Read Standard comes from genuine participatory research with the full range of people who use Easy Read - not just the ones who are easiest to reach.

That's what we mean when we say the standard is evidence-based. Not just that we collected data, but that we worked hard to make sure the data reflected everyone.