
Web Access Club podcast

Applying knowledge via audits

Season 1, episode 4

Once we’ve learnt about accessibility and assistive technologies, how do we apply that knowledge at work? Where do we start? How do we know what needs fixing or extra attention? One of the ways is through an audit. In this episode, I’ll be chronicling my adventures in applying the newfound accessibility knowledge through accessibility audits.

Note: I'm trialling my podcast host's iframe media player.

If this player is not accessible to you, please feel free to find the podcast in your preferred podcast app, or download the episode.

Transcripts are always available in a section below.

Show notes

Once we’ve learnt about accessibility and assistive technologies, how do we apply that knowledge at work? Where do we start? How do we know what needs fixing or extra attention?

One of the ways is through an audit.

In this episode, I’ll be chronicling my adventures in applying the newfound accessibility knowledge through accessibility audits.

References:

Accessibility and audit tools:

Transcript

Sawasdee-ka, kia ora and hello.

Welcome to Web Access Club, a podcast about accessibility for web creators. I'm Prae, a New Zealand based UX engineer, who is learning to make accessible web products.

In this episode, I'll be retelling my adventures in applying my newfound accessibility knowledge through accessibility audits.

Armed with the notion that we should stop disabling people, and with increased knowledge of how to use assistive devices, I... well, actually, where do I go from here?

Throwing ourselves into the accessibility project

As I mentioned in episode one, my team was thrust into an accessibility project, so I had a reluctant but willing audience to receive my enthusiasm for accessibility. Though perhaps receive is not an accurate term. More like tolerate with kind, understanding smiles.

Regardless, the accessibility project was the big reason why I wasn't too negatively received by my team. After all, accessibility had become a priority, and the next step for the team was to learn how to achieve it. So we were keen to learn whatever we could from wherever we could.

One of the first things my team did was to get our products audited. 'Cause after all, we needed to understand how much work it would take for us to become accessible according to WCAG 2.1 AA standards.

At the time, my company didn't have official accessibility support, or anyone who could do an internal audit for us. So my team had to do the research on our own with the little knowledge that we had.

Luckily, one of the senior engineers was experienced with accessibility projects and recommended consultants we could reach out to.

We asked the consultants to audit all of our core templates, but we knew that auditing the templates with placeholder content just wouldn't give us the whole picture. 'Cause after all, you can't achieve accessibility through code alone. You also need design and content. So we looked at our product's analytics and asked the consultants to audit the most visited or most heavily used page made from each template.

At the same time, the consultants arranged to have people with different abilities come into our office for a couple of hours. They all used different assistive technologies, and they were there to show us how they used their personal assistive tools.

For most of my teammates, this made accessibility click. Seeing someone navigate our site differently with a screen reader, braille reader, 500% zoom, or a blink switch is eye-opening; especially when you've never thought of navigating the site using different tools before.

It felt like my team became more proactive after that. It's as if we now had a slightly better appreciation for what the accessibility project was actually for, and the target audience we were working to serve.

Learning from internal audits

The official audit would take a few months to complete, and we couldn't wait that long; we had to start somewhere. So our team decided to try our hand at doing an internal audit ourselves.

We hoped that this would help us understand and interpret results from the official audit. We also hoped that it would accelerate our team's learning or understanding of the WCAG criteria.

Since this was seen as a front-end led project, most front-end developers in our team, including our intern, were looped into the internal audit.

In hindsight, perhaps we should have looped in the other roles, like quality engineers, too, as they later became instrumental in ensuring that we kept up the accessibility standards.

To track the internal audit, we found a few publicly available spreadsheets that walked us through each of the WCAG 2.1 AA criteria.

These also generated a report with passes and failures afterwards, which was super handy for getting an idea of what fixes might be needed.

The front-end developers decided to split the audit work based on the site's templates. Each of us took on a couple of sample pages and completed the audit spreadsheet for each of those pages.

The first thing we learned was that the WCAG guidelines were incredibly boring to read, and also hard to understand. They're especially difficult if you don't understand why a certain rule exists.

For example: 1.4.1, Use of Color. Color is not used as the only visual means of conveying information, indicating an action, prompting a response, or distinguishing a visual element.

When we first read that, we wondered... why? What's wrong with using only color to indicate something? We use it all the time for errors. It's only when we dug deeper into different disabilities that we realized this criterion was designed to aid people with different kinds of color blindness. If you could only see in black and white, or had red-green color confusion, how would you know that that text is an error message?
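To make that concrete, here's a minimal sketch of the kind of markup this criterion is getting at; the field, wording, and styling are invented for illustration, not taken from our actual product.

```html
<!-- Color alone: the red border is the only clue that this field has an error -->
<input type="email" style="border: 2px solid red;">

<!-- Color plus text: the error is also stated explicitly, so it still works in black and white -->
<input type="email" style="border: 2px solid red;" aria-describedby="email-error">
<p id="email-error">Error: please enter a valid email address.</p>
```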

Now that we had the understanding and context for that rule, it read much more easily and we understood it better. If I had to compare, it felt like we were trying to understand law.

We read the law about something, and now we have to make a judgment on whether or not what we are doing is legal.

It's hard. That's why we have lawyers to do the interpretation for us. After all, they study the context and the history of why that law came about.

Interpreting the WCAG is indeed a skill. I can see why there are professional accessibility auditors out there. Doing the audit myself made me respect those professionals so much more.

The second thing that we learned was that we needed extra tools to aid the audit, especially when we were new to accessibility. The more visual the aid, the better it was for us.

The audit spreadsheets were great for tracking and grouping issues. But discovering those issues was another matter entirely.

We found out that some browsers have built-in developer tools for inspecting accessibility information: for example, Chrome's accessibility pane and Firefox's accessibility inspector.

These tools gave us an overview of the DOM tree information that is exposed to assistive technologies. If you want to quickly find the accessible name or ARIA role of something, you can often use these built-in tools to find out without turning on a screen reader.

Firefox's accessibility inspector even goes as far as giving you a visual view of the tab order on the page; super, super handy.

Both of these built-in browser tools now provide color contrast checks as well. It's clear that the in-browser tools will only get better from here. And the best thing is, there are no extra installations and no extra cost involved. They come pre-shipped with the browsers.
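As a rough illustration of what those panes surface, here's the kind of name and role information you'd see for a few common bits of markup; these snippets are invented, not taken from our site.

```html
<!-- role: button, accessible name: "Search" (taken from the visible text) -->
<button>Search</button>

<!-- role: button, accessible name: "Close dialog" (aria-label wins over the × character) -->
<button aria-label="Close dialog">×</button>

<!-- role: textbox, accessible name: "Email address" (taken from the associated label) -->
<label for="email">Email address</label>
<input id="email" type="email">
```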

There are other tools like WAVE, which focuses on providing extremely visual evaluations. It's very useful for people who prefer less technical checks. I think the non-engineering roles I work with tend to prefer it over the other tools.

Looking back, it made me so happy to know that people who care about web accessibility are also happy to share resources and learning. Creating tools like an audit template just for our project didn't feel like an efficient use of my team's time. Plus, we were all beginners. It was good to be able to learn using tools that were made by experienced accessibility practitioners for accessibility practitioners.

After the internal audit, I learned that there are even better audit tools, like Microsoft's Accessibility Insights, which has many visual aids built in. This is now my go-to tool when doing proper page assessments. But for quick checks during development, I use Deque's axe DevTools browser plugin, as it automatically highlights low-hanging fruit and references the related WCAG criteria.

The lovely thing about doing the audit together was that the engineers doing the audit were constantly sharing useful tips and tricks that we found. We also helped each other decode each of the guidelines when we got stuck. This made the audit a lot less tedious and lonely than doing it alone.

The whole process did take us a couple of weeks to complete, but we noticed that we got faster towards the end. I was so traumatized by some of the criteria that I memorized them by heart.

First attempt at splitting up remediation work

So we did the internal audits as well as we could, erring on the harsh side and marking criteria as failed whenever we were unsure whether they passed or failed. Of course, we didn't meet the WCAG 2.1 AA standard, or even the older 2.0. For most of my teammates who weren't involved in the audit, this must have felt like an impossible amount of remediation.

But when the front-enders regrouped and read through our audit results, we felt quite positive about the work. On the surface, it seemed like we had a lot of failures, but most of them seemed to be caused by the same things. The critical fails were mostly low-hanging fruit, like useless page titles, incorrect or absent headings, hidden focus states, low color contrast, and links used instead of buttons and vice versa.

Those are not difficult to fix, because our site's pages are made from reusable templates. We use Sass: a pre-processor which allows us to author CSS in a programmatic way, like declaring variables and creating reusable functions. We also have a system of naming CSS classes that makes it easier to globally update things like colors, to improve the color contrast. So the codebase was at a moderately maintainable level already.

Again, it does take someone with front-end development or accessibility experience to know instinctively that these are not difficult to fix. Unfortunately, the majority of my team had neither, so they didn't even know where to start. So the product owners and a few front-enders in our team sat down together and started writing up some tasks.

At first, we were writing up tasks based on page templates and WCAG failure criteria. We used Jira, a task tracking tool, to raise different accessibility issues. We even named each of the Jira tickets after the failed criterion. We then worked out whether each was considered a front-end or back-end issue, then distributed the tasks accordingly.

The way we did this quickly backfired in four ways.

1. The same issue can cause multiple WCAG failures

First, splitting work based on the WCAG criteria just doesn't work, since many of the failing criteria are very interrelated. The same issue can cause multiple criteria to fail.

For example: one of our email input fields used a placeholder reading "email address" as its sole label.

I know, I know. Scream at me later.

Basically, it assumed that the placeholder was enough. This failed in a few ways.

  1. It failed the 2.5.3 Label in Name criterion, which states that for user interface components with labels that include text or images of text, the name contains the text that is presented visually.

    We realized that the placeholder text disappears as soon as the user starts adding characters to the field. Both non-sighted and sighted users can lose track of what the field was asking for as soon as they type something in, since the name of that field is no longer visible, which makes it fail this criterion.

  2. The same issue also caused us to fail 3.3.2 Labels or Instructions, which requires labels or instructions when content requires user input.

    For this, we found that placeholder text isn't consistently read by screen readers, so some blind screen reader users might not know what this email field was even for.

  3. Another failure was 4.1.2 Name, Role, Value, which states that for all user interface components (including but not limited to form elements, links, and components generated by scripts), the name and role can be programmatically determined, and states, properties, and values that can be set by the user can be programmatically set.

    Our email field is a user interface component. Since it doesn't have an accessible name provided by an HTML label or ARIA label, it also failed this criterion.

With that one problem alone, the template failed at least three WCAG criteria. When we logged Jira issues based on the failed criteria, we would've done enough admin work to log three different issues.

As soon as we provided a proper HTML label to fix one issue, it would affect all of them. The other Jiras would've been closed or marked as duplicates, making it a waste of time to set them up, and unreasonably inflating the scope of the accessibility remediation work.
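Here's a rough before-and-after sketch of that email field. The id and placeholder text are made up for illustration, but the shape of the fix, a single visible and programmatically associated label, is the point.

```html
<!-- Before: the placeholder is doing all the work as the "label" -->
<input type="email" placeholder="Email address">

<!-- After: one visible, associated label addresses 2.5.3, 3.3.2 and 4.1.2 in one go -->
<label for="email">Email address</label>
<input id="email" type="email">
```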

2. A single WCAG failure could stem from many issues

Another way this approach backfired was when multiple issues on the page caused the same WCAG criterion to fail.

For example, we failed 1.3.1 Info and Relationships, which states that information, structure, and relationships conveyed through presentation can be programmatically determined or are available in text, due to multiple issues.

Now, this page alone failed in three ways.

  1. We didn't properly use headings on the page template.

    There was no heading level one; we had styled an <h3> to look like an <h1>. So sighted users can see the biggest heading on the page and understand that this is what the page is about. But non-sighted screen reader users are only exposed to an <h3> on the page, which makes them uncertain whether that is really the main topic of the page or just a subtopic.

  2. On this page, we do have a visible label on an input field, but we didn't relate the label and the input field using the for and id HTML attributes. So screen reader users will not know the relationship between that label and the input field itself. This means that when a user encounters the input field, the screen reader will not announce its name. It is as good as not having a label at all.

  3. We had two forms on this page, which served very different purposes. We didn't separate or name each of these forms, so the relationship between fields was a bit ambiguous. Imagine two sets of fields asking the same questions, but one set is for the sender's address and one is for the receiver's address. How confusing for screen reader users.

So in order to fix 1.3.1, we had to resolve all three issues on this page. Now this flies in the face of best software development practice, which is to make small changes and release often to reduce errors. This Jira would look way more complex than it should, take longer to fix, and take longer to test due to the multiple underlying issues.
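For illustration, here's a simplified sketch of what fixing all three issues might look like in markup; the heading text, field names, and form labels are invented, not from our actual page.

```html
<!-- 1. Use a real <h1> for the main topic and style it, rather than styling an <h3> to look like one -->
<h1 class="page-title">Send a parcel</h1>

<!-- 2. Tie the visible label to its field with for/id so screen readers announce it -->
<label for="sender-street">Street address</label>
<input id="sender-street" type="text">

<!-- 3. Name each form so two sets of similar fields can be told apart -->
<form aria-label="Sender's address">…</form>
<form aria-label="Receiver's address">…</form>
```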

And that was just the second way the approach of creating Jiras based on the failed criteria backfired.

3. Prioritisation: which issue to address first?

The third way was that we had no idea what to work on first. Do we fix the easiest problems first? But what if those don't have high value for the user? If we spend weeks fixing a larger issue, will that be worth it? We ended up with a prioritization problem.

4. Tasks become unintelligible for people new to WCAG

The last way this backfired was that the Jiras became almost unintelligible to team members who weren't involved in the audit. Testers ended up relying solely on developers just to understand what they were testing. Now this was dangerous, because the testers' approach would be dictated by the people who wrote the code, not by the testers' unique understanding of the issue, which is usually how we ensure software quality. We would accidentally remove the diversity of skills within the team.

It didn't take my team long to realize that splitting Jiras this way would not work at all, and we changed tack.

Adjusting our approach based on pro advice

By this time, the consultants had sent through their audit report. To our amazement, our internal audit results were actually not too far off from the report. We'd identified most of the critical issues, so none of what the consultants found was a major surprise. We did something right!

After sending the report, the consultants organized a call to walk our team through it and answer any questions we might have. They'd organized the failures into three categories: critical, moderate, and minor.

Critical issues were the ones which actually stop users from completing their tasks on that page.

Having no programmatic labels on input fields will actually stop low-vision or no-vision screen reader users in their tracks. These are usually the WCAG A criteria.

Moderate issues would cause quite a bit of difficulty for users, but it would not be impossible for them to complete the task on the page.

An example would be ensuring that the focus ring is visible, because keyboard users often rely on the focus ring to know where they are on the page. If it's hidden or gone, they could still use the forms, but it's significantly harder, since they have to start typing to test where they are on the form. These are usually WCAG A to AA criteria.
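As a small, hypothetical sketch of that moderate issue (the selectors and colours here are made up): the focus ring usually goes missing when the default outline is removed without a visible replacement, and the fix is to provide one that keyboard users can actually see.

```html
<style>
  /* Instead of hiding the outline (e.g. outline: none), give focused elements
     a clear, high-contrast ring that is easy to spot on the page */
  a:focus,
  button:focus,
  input:focus {
    outline: 3px solid #19527a;
    outline-offset: 2px;
  }
</style>
```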

Minor issues don't usually cause difficulty for users, but could cause great annoyance and slow them down.

Having no headings on the page doesn't really stop screen reader users from understanding the page content. But it does make it harder for them to quickly understand what the page is about or to look for the information they need. They have to read the whole page until they find what they're looking for. These are often the WCAG AA to AAA criteria.

It was great for the team to be able to read the report and hear how the auditors think about different issues.

Afterwards, we logged Jiras based on the issues the auditors found. A Jira named "add a label to the email input field" is a whole lot clearer than "2.5.3 Label in Name". When my teammates picked up a Jira, most of them could understand what it was about without relying on the people who did the audit.

Sorting issues into critical, moderate, and minor also made it much easier for our product owner to prioritize the tasks. We focused on critical issues first, slowly worked our way through those, and then moved on to moderate issues.

This was also the time when our team members started to learn to use screen readers. We have a mix of Mac and PC people in our team, so we could test using both VoiceOver and NVDA. We didn't get approval for JAWS, so these two would just have to do for the time being. We also have both iOS and Android users, so we had fun testing on personal devices or borrowing devices from the mobile team's device lab.

After a while, we learned that, just like browsers' rendering of CSS, screen readers on different browsers and different devices work slightly differently. So we realized that it is important to test multiple combinations occasionally to ensure that the experience hasn't degraded over time.

Reflections

That all sounds positive and rosy, but it took almost two years for us to complete the remediation work. By that point, we had also introduced other new features onto our site, but this time we'd built in accessibility from the very start.

What worried us and slowed us down the most, though, were the third-party tools we were using in our product.

There are a few third-party interfaces we've embedded in our experience where we have very little control over the markup. They're unfortunately inaccessible, and we had to negotiate with management to either remove those from our scope or delay the remediation work.

What this has taught us is that accessibility must be a criterion when procuring those third-party tools. When third-party tools are not accessible, we're left with three options.

One: accept that we would never be able to create a fully accessible experience for our customers while using this particular third-party tool. The cost is that we cannot meet compliance, and we lose the trust of customers who use assistive tools.

The second option is to do a lot of work to customize that third-party tool. The cost is an increase in scope and in the amount of work we need to do to make the third-party tool work for us.

Lastly, make something in-house from scratch. The cost for that is that the team will have to own and maintain more products, spreading our resources thinner and thinner.

I do not envy our management for having to make the call.

We ended up with a combination of making things in-house, implementing integrations with heavy customizations, and moving to entirely different tools altogether.

So that was my experience in a team that started their accessibility journey through the WCAG compliance lens. We had a mandate, or buy-in, from management, which led us to prioritize accessibility.

But how do we start if we don't actually have buy-in at the beginning? That's a story for next time.

Outro

You can follow Web Access Club on Twitter, Facebook, and Instagram.

Show notes, resources and transcripts are available on webaccessclub.com.

If you like this episode, please tell a friend, leave a review, and subscribe wherever you get your podcasts.