Delivering a vision

Vision is hard. Often it can feel fluffy and woolly. The vision for our group was “Fulfilling customer needs, through innovative trusted solutions, that we take pride in” but it wasn’t tangible. So I established some quality pillars/strands for it: one around discovering behaviours, with a sort of BDD vibe to it, and a second around quality attributes (NFRs).

Sharing the vision

When I shared this vision, I did have a picture of how I thought we could deliver with quality and talked about what this might look like. I also highlighted the various changes, improvements and pathways for us to get there. I tried to ground it in practicalities.

I also acknowledged that it would take years and, in truth, we might never truly achieve it. I was equally upfront that things can and probably will change. It was a vision, not an expectation.

Delivering for the vision

To make the vision relevant, I’d refer back to these pillars with each initiative I pushed. This meant that when pitching an idea or sharing progress in sprint reviews, I could highlight how it tied in with our vision. This helped share the reasoning and get buy-in.

Some of my initiatives hadn’t had time to fully settle in and show their value, so I couldn’t declare them massively successful, but the ones tied to my vision never flopped. They at least made *some* improvement and stepped us slightly forward.

Having a vision is nice but you need to back it up. I’ve rarely felt like the visions that I’ve been sold meant anything, so I take great pride in having been involved in setting a quality vision and then delivering tangible, real changes that could help us achieve it.

A slight reality check

It is worth calling out that I called this “Delivering a vision” and yes, in a year I had made a number of improvements, but we were still a long way off. I also failed to get people to recall the vision. Heck, our quality vision was meant as part of our group’s overall vision and I was probably the only one from our group who could remember it.

I would balance that by pointing out that I understood and acted upon our vision. For most it was word flotsam but for me, it was a destination to steer towards.


User journeys in refinement

Thinking about user journeys in testing isn’t a particularly new topic (although it is probably truly conducted a lot less than we would like to admit). I suspect even rarer than that is considering the user journey in design and planning, at least once engineering teams are involved. This is something that I’ve had little chance to explore in practice but it really interests me.

I’ve explored a few techniques in my own time from feature mapping to story mapping. I like the structure of story mapping (and how it ties to example mapping), letting us consider the workflow, MVP and priorities.

Once, the PO from our group took a really interesting approach that I quite liked.

We started off by discussing the parts of our larger feature up for consideration. We talked through personas. We drafted a few workflows. Then we prioritised what we thought was most important and dug into those workflows some more. What was the most important one?

This gave the team more enthusiasm and ownership over what we were working on. However, six months on, real grumblings and discontent set in. We’d abandoned trying to identify user journeys because the business had already decided the workflows and priorities around two years earlier. We were lagging well behind the design of the system: we picked up a solution to implement, then tried to inject the “user” into our stories a little artificially.

What made this frustrating was how open the awesome UX guy was to collaborating. But that lag between a workflow being agreed, mock-ups being demoed to customers and then, later, us getting involved really broke our engagement with trying to solve customer issues. Yes, I pushed using personas and we included user benefits in our demos, but it was “tacked on”.

My takeaway from this experience was that when the development teams got involved in considering the user early, it made a real difference. Who is this for? What are they trying to achieve? However, this only works if you get development, test, UX/design and product in the room together.

Get these key stakeholders involved in mapping out the problem and desired solution.


User Journeys & Testing

I wanted to share a really cool activity that I did with a couple of developer teams a couple of months ago.

Two teams had (roughly) a sprint of testing with a push to be user focused. I was brought in to lead this effort (with no notice!). The end result was a number of new bugs, insights into users, better bug reports and developers running a demo on what they’d learnt. It was pretty awesome!

As developers freed up, we put them in pairs and then the three of us would have a 60-90 minute workshop to design some tests.

We started by talking about the feature the pair were looking at. What is it? Why is it used? Who uses it? How does it fit into a daily workflow? On hand were some personas that I’d previously created to help guide us on who would do different actions and how they would be doing things in parallel.

Once we had a rough idea of what someone is trying to achieve when using our feature, we plotted out a journey using sticky notes. When we hit decision points, we noted them as future journeys (e.g. is this a first-time or a returning user?). When we realised that we had different people involved, we mixed up the colours.

Eventually we had our journeys, or tours. The teams then optionally wrote them up… or added annotations to our board. They then set up accounts for each persona, and we used a shared, customer-like environment with no debuggers or sims in sight (sort of – we had a way to inject ourselves to look into bugs without polluting the environment).

To execute the tests, the devs would pair: one person drove whilst acting as Alex, then they’d huddle around the other person’s screen as Sam did their tasks, and back again. All the while they took notes on the experience, what they learnt, and specific actions and timings. Not everyone was perfect at this (it’s a skill), but the group embraced it well.

I bounced around to help, and also picked up one task myself, live-streaming my testing so that people could watch how I’d work.

The feedback from the group afterwards was great. Not only did the exercise find new issues and show a new way to test, but people enjoyed it. Developers enjoyed hands-on testing. Whilst obviously there are things I would have done better, especially given the timeline, it was definitely a success.

Final parting thought. I’d never been able to get this type of testing on the agenda. I wasn’t convinced that I ever would. However that didn’t mean that I’d forget about it. You never know when you’ll have your chance to shine so always have something in your back pocket.


Reflection: Key highlights as a Quality Coach

I recently reflected upon my failings as a Quality Coach but that wasn’t to say that I failed. Far from it. I am very proud of my work as a Quality Coach and it is a role that I hope to pick up again.

Initiatives

Analytics

As a group we had never really looked at analytics before. It was deemed “too much effort” for our previous application suite given it was often deployed in air-gapped networks (i.e. there was no way for the data to be sent back to us). However, we did bemoan our lack of understanding of our customers.

When picking up a cloud-based web application, I latched on to this. First I spoke to people about their challenges and noted how often understanding our customers came up. I experimented with Google Analytics and learnt how to use it. I then pitched us adopting it and organised some brainstorming, leading to user stories on the backlog… which were then implemented.

I then looked at reporting that data. When we weren’t sure about the importance of fixing a bug, I could point to usage data. When wondering about the impact of a new release, I could point to changes in user behaviour. It was useful. We were also using it in our planning to understand when we could try and push customers to switch from an older view to a newer one.
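To give a flavour of what the instrumentation involved, here’s a minimal sketch assuming GA4’s gtag.js. The event and view names are made up for illustration rather than taken from our real backlog:

```typescript
// Hypothetical event: a user switching from the legacy view to the new one.
// gtag.js is assumed to already be loaded on the page.
declare function gtag(
  command: 'event',
  eventName: string,
  params?: Record<string, unknown>
): void;

function onViewSwitched(fromView: string, toView: string): void {
  // Fire-and-forget: the event turns up in the GA4 reports, letting us see
  // how many users still rely on the older view before we push a switch.
  gtag('event', 'view_switched', {
    from_view: fromView, // e.g. 'classic_dashboard'
    to_view: toView,     // e.g. 'new_dashboard'
  });
}
```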

Whilst the developers who did the hard work got the credit (and rightly so), I know full well that I made a difference here.

Refinement & ATDD

Our refinement process had been “grand” for probably around 7-8 years (at least with the teams I worked in). We did lack a little bit of test input at times but generally it was not too bad. However, when working in a new product area, we had made a few mistakes. We’d misunderstood behaviours, requirements and what was involved. This is why it became a key area for me throughout my time as a Quality Coach.

As it is something I was quite pleased with, I’ll elaborate more in a blog post another time (I have a long “to do” list!), but what I wanted to highlight was that I got testing needs considered before work started. More than that, we also started thinking about the impact of our changes and about opportunities for improving quality (including tech debt).

I helped encourage teams to think of edge cases as well, partly by bringing my tester hat and leading by example in asking awkward questions, but also by leveraging my interest in examples, a concept that I’ve explored in manual testing (my first post on here!). Whilst full example mapping sessions felt optimistic, I started getting us to define examples of behaviour, and this helped people think of different cases, ask new questions and find that what seemed like a simple change had far more variables. We even managed to take it further in a smaller “standalone” project and implement a process similar to ATDD. I’ll elaborate on this in my separate post but, for me, it was a cracking win.
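To illustrate the examples-first idea (a hedged sketch rather than our actual process or code – the rule, the figures and the function are all hypothetical), each example agreed in refinement can become an executable check, which is the essence of an ATDD-like loop:

```typescript
import test from 'node:test';
import assert from 'node:assert/strict';

// Hypothetical rule from refinement: "a 10% discount applies only to orders
// of £100 or more". The implementation under test is a stand-in.
function discountFor(orderTotal: number): number {
  return orderTotal >= 100 ? orderTotal * 0.1 : 0;
}

// Each agreed example becomes one test, so the refinement conversation's
// output is executable and the edge cases stay visible.
const examples: Array<[total: number, expected: number]> = [
  [99.99, 0], // just under the threshold: no discount
  [100, 10],  // exactly on the threshold: discount applies
  [250, 25],  // comfortably over: 10% of the total
];

for (const [total, expected] of examples) {
  test(`an order of £${total} earns a £${expected} discount`, () => {
    assert.equal(discountFor(total), expected);
  });
}
```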

Others

There are a number of other initiatives that I’m pleased with, which I won’t elaborate on beyond this list:

  • Kept unit testing an active area, even in difficult projects where there was temptation to abandon it.
  • Started a TDD initiative, including running a workshop with more to come, pairing with an architect (sadly never got a chance to see it through).
  • Meaningful RCAs
  • Collaborative Test Strategy
  • Assessing our levels with different quality attributes (NFRs)
  • We started caring about accessibility (even if we were unfortunately told to stop caring)

Coaching & Leadership

Vision

As I started writing about this, I found I had plenty to say, which I’ll follow up in a separate post but one of my personal highlights was setting and sharing a quality vision. The initiatives that I described in more depth above? They are directly tied to part of the vision. We made tangible progress towards a vision. For me it wasn’t a load of fluffy words but integral to what we tried to achieve.

Coaching

Throughout my time as a Quality Coach I tried to develop my coaching skills. I would use retrospectives and RCAs to try and coach developers. Originally I started by trying to coach at a team level (albeit with resistance to taking up the teams’ time) but eventually I got to start having 1:1s with the developers.

Not every session was gold dust, but I came to understand people’s challenges better, and the developers themselves got plenty of benefit. I was able to explore topics and challenges that all too often went unsaid. As I developed my ability with coaching and asking questions, I was also able to help people solve challenges unrelated to my area of speciality. I got a real kick out of learning that I’d helped someone solve their problem, and it felt telling that people took value from the sessions and were reluctant to decrease their frequency when I was under pressure to reduce the time I spent with the devs in this manner.

Side note: I struggled to sell the value of this coaching.

"When you do things right, people won't be sure you've done anything at all"

Summary

Whilst not without its challenges, I am proud of my work as a Quality Coach. Of course I could have done better in certain areas (this is always the case!) but the teams that I worked with are definitely better off for my initiatives, leadership and coaching. It makes me excited to think about what I could achieve in my next role.


Reflection: Key mistakes as a Quality Coach

I enjoyed my time as a Quality Coach, working with between one and three teams at a time. I think it was a relatively successful stint. However I want to acknowledge some of the areas where I struggled.

Role Definition

As a role, quality coaching is definitely very different to anything that I’ve done before, and it was also very new within our group. Unfortunately I never really established myself in what I think would be the ideal Quality Coach role & responsibilities. It was only when I was on gardening leave, reflecting and taking on board a host of great resources, that I felt confident in my own understanding of the role. If you don’t fully understand your place, let alone others understanding it, you’re not set up for success.

Looking back, I just didn’t spend enough time getting people to talk through the challenges and how they can solve them. I always succumbed to pressure and would reduce my involvement, even if it was glaringly obvious to me that we were only going to suffer later.

Ownership

Coaching is largely about helping people make the right decisions and grow. However in my role I tended to own all of our quality initiatives. I owned our test environment & kit. Test strategies not working? I took responsibility to change that. RCAs only happened if I organised them. Talking about testing early, putting together test strategies etc, tended to only happen when I led it.

The knock-on effect here is that whilst the team did view quality and testing as a shared responsibility, and collaborated in the journey, they didn’t really lead it enough. I didn’t create enough opportunities to empower people. However, “in my defence”, I would argue that a bunch of developers were never going to be super enthusiastic about leading change on test strategy.

I’m curious how they’ll get on without me. Have I led by example and left enough of a framework in place for people to carry it on?

Pairing

I’m a shy and socially anxious person. As no one has ever really paired with me, I wasn’t sure how to engage people. Consequently I did nowhere near enough to push pairing with people, in particular pairing with myself. I did try some mob approaches, which seemed easier from a social point of view, and had some successful sessions in those final few weeks and months, but going forward this is arguably my biggest area for growth.

Automated Tests

I hate this topic. And this is a problem.

When it comes to quality and testing, most of the talk is around using these tools, and I don’t have a lot of experience here. Don’t get me wrong, I am grand with every C# testing framework that I’ve used, I explored TDD as a developer and I have a good grasp of the theory & concepts. I’ve run a TDD workshop (using C#) and introduced automated acceptance tests (in a C# Windows app), so I’m definitely not clueless. However, I don’t know the syntax or best practices of tools like Selenium, Playwright or Cypress.

Consequently, as a coach I can really support people in understanding what to test and how to layer tests, and I can explain TDD & BDD, but when a Playwright test isn’t behaving, I’m useless.
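For anyone who hasn’t met these tools, here’s roughly the kind of test I mean – a minimal Playwright sketch with a hypothetical URL and page. The tool-specific knowledge (locators, auto-waiting, fixtures like `page`) is a skill of its own, separate from knowing what to test:

```typescript
import { test, expect } from '@playwright/test';

test('user can sign in', async ({ page }) => {
  await page.goto('https://example.com/login');
  await page.getByLabel('Email').fill('alex@example.com');
  await page.getByLabel('Password').fill('s3cret!');
  await page.getByRole('button', { name: 'Sign in' }).click();
  // Playwright auto-waits for the heading, so no manual sleeps are needed.
  await expect(page.getByRole('heading', { name: 'Dashboard' })).toBeVisible();
});
```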

Metrics

Another topic that I’ve never quite gotten along with. It is odd, as playing with spreadsheets makes me happy, but I just couldn’t really see the value in metrics – at least the ones that I’ve seen. I was interested in using QPAM or QCTG but they had some subjectivity and it felt like an expensive meeting to discuss them.

However, my biggest lightbulb moment came during a lean coffee session at a DORA Community Discussion on metrics. I’d been approaching metrics as exploratory testing: looking at data and seeing what I could discover. Instead I needed to be more of a politician: have a story, then gather data to prove it.

Summary

I believe that I was on the right path but needed to be bolder. I should have been establishing metrics to prove that we had challenges with our quality, then using that to justify initiatives that I could coach people through.

I need to have more confidence in (and/or less fear of) pairing, and unfortunately I will have to learn some of the modern automated testing frameworks to help me succeed in the future.


Collaborative Test Strategy

I’ve always disliked writing test strategies and plans. Reviewing them was even worse. Just tedious, long documents that tell me very little, usually almost a copy-paste, as projects tend to be pretty similar. I did play with the one-pager but still, it felt like a pointless exercise. We had ways of working that incorporated testing.

In fact, inspired by Robbie Falck, I did our test strategy as a ways of working. That was well received by the teams, but there was a push from the business to have documented test strategies per epic.

Not the WoW I used, but we did map the various stages of a story and the activities performed.

I ended up taking inspiration from the one-pager and organised a meeting with the team where we filled it in. I then carried this on and mixed it up. Eventually I started seeing the value. It wasn’t the document – that was still as pointless as ever. The value was in the conversations we had, the risks identified and the outcomes of the discussions about what we’d need to do.

I liked doing this in phases. In our first session we typically started from a diagram of the system. What are we changing? What is impacted? What is the technology? (Although later in my time I started by asking… what is the problem we’re solving?) I’d also try and get a feel for what we knew and didn’t know. We’d ask about API changes – do we need to do threat modelling? Finally, if there were barriers to testing the feature (kit, environment etc.), we’d highlight those early.

Deliberately blurry – but there’s a model of the system, discussion points and then some notes on key questions we want to ask.

I could then catch up with the team, or a couple of folk, again and ask: what new things have we learnt? What new risks have appeared, and what progress have we made on the potential risks from our first chat? This is again run in a collaborative way. By now we should know even more about the architecture, so I can tap into performance & load testing as well.

Whilst I evolved my templates for facilitating, I did explore different methods, depending on the feature and our knowledge. I loved a diagram, but it varied: sometimes a series of prompts to ask questions, sometimes a mind map of SFDIPOT. This was to try and get us asking some slightly different questions and keep things fresh. The point is the discussion, not filling in a form, which leads to copy-paste strategies.

A mix of approaches

In terms of planning *how* to test everything, we focused on that per story. If we identified dedicated testing activities, they became their own story. We shouldn’t need a document saying that we’ll do performance tests and unit tests. They are part of the definition of done or acceptance criteria.

So I’m happy to do away with test strategy documents. They are still worthless in my view. What isn’t worthless is facilitating discussions involving the various team members to identify the risks and challenges we’ll face, then documenting the testing needs through the usual tickets.

At the end of the day, if we’re trying to shift left then why have distinct documents about testing? Instead, let’s talk about the testing and intertwine it with what is required to close a story.


Experimenting for quick wins

Experimentation is important, if not essential, if we want successful, high-performing teams. It allows us to try things without needing months to review and phase in a new way of working. Instead, we encourage trying something different, reflecting, and trying again.

In theory, I really like it.

In practice, it was difficult within our group.

I’d like to talk about my attempts to learn the drums. I was tempted to try drumming as I still believe there’s something musical that I can be good at. I started with a small experiment.

I bought a cheap electronic drum kit. I did some reading and I started trying to follow along to a few YouTube videos. I even got a little bit of success doing a not completely terrible job of playing some songs. It was kind of fun.

However, I was pretty useless. My experiment didn’t transform me into a musician, so I gave up. The kit now joins my bass & guitar sitting idle. Occasionally I’ll have a quick play with one, maybe get a bit of fun out of it, but it isn’t yielding great results so I rarely bother.

And this was my frustration with experiments at my former workplace. We were very open to trying something, but seeing things through is a challenge. Straight away we’re asking “is this giving us the results we want?”, potentially spending more effort on analysing success or failure than on the experiment itself, and subsequently drifting or stepping off that path. Maybe the odd strum of the guitar so we can say that we play instruments, but not really.

If I want to become a musician, I know that it takes time and practice. The same can be said for becoming high performing teams.

Sometimes our experiments & initiatives can be about trying new things for quick wins but failure to succeed doesn’t necessarily mean the idea is bad. It may mean that it isn’t truly effective yet.

It is essential that we don’t just look for how quickly we can get end results. Some things will require building up skill and practice. If you never give yourself the opportunity to become good at something, you’ll never reach those standards.


Meaningful RCAs: Documenting the results

So far I’ve written a few blog posts around conducting RCAs where I’ve focused on the people and questions. However what I’ve yet to touch upon is the documentation side.

Similar to the idea that the activity of coming up with a test plan matters more than the document itself, I have the same view of the RCA. With this in mind, the most detailed document that I’d have is the collaboration board that I used to facilitate the discussion. It captures our thoughts, discussion and key points.

[Image: a screenshot of a board created in Mural, containing several sections loosely related to the SDLC and a number of different sticky notes.]
Example of an RCA, although this is all lorem ipsum text as I obviously can’t share a real one!

After the session I will then (as soon as possible) write up the overview. This captures the key findings from the RCA, explaining the nature of the problem, what we’ve learnt, any actions and so forth. It is shared with the team(s) on Slack for a first look before I share it more widely.

I did like keeping a spreadsheet of my RCA findings. It would include the summary, a link to the board & tickets, and an overly simplified “category” (missed requirement, domain knowledge, coding error etc.).

This category feeds metrics that help us understand patterns, which was valuable when I was pushing to drive new initiatives because I could say “if we’d been using examples in refinement, we wouldn’t have had these massively complicated bugs”. If I’d had more time with my former employer, I’d have loved to explore a means of saving RCA summaries where I could tag the RCAs with different attributes to help demonstrate patterns.
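As a sketch of what that could look like (hypothetical field names – this mirrors the shape of my spreadsheet rather than anything we actually built):

```typescript
// One row per RCA: summary, links, a single coarse category and the
// free-form tags I'd have liked to add for pattern spotting.
interface RcaRecord {
  summary: string;
  boardUrl: string;    // link to the collaboration board
  ticketIds: string[]; // the defect and the original story
  category: 'missed requirement' | 'domain knowledge' | 'coding error';
  tags?: string[];     // e.g. ['refinement', 'examples']
}

// Counting categories gives the simple pattern metrics described above.
function countByCategory(rcas: RcaRecord[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const rca of rcas) {
    counts.set(rca.category, (counts.get(rca.category) ?? 0) + 1);
  }
  return counts;
}
```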

I had also dabbled with feeding this data to an AI agent (one where we’d got the legal protections that it wouldn’t feed back into the main models). This was quite neat… but a topic for another day…

One final note: I am aware that most people would still prefer a more formal & structured documentation approach than mine. I get that. Some of the things recorded could, I guess, be useful. However, I’ve yet to experience any time where a two-page document was useful. I have found these RCA discussions really useful, and subsequently my documentation approach is similar to my retro approach: it is collaborating on & capturing a conversation.

If you’d like to read more on RCAs, check out the collection of my posts on the Meaningful RCAs page!


Meaningful RCAs: Structuring questions

I’ve already talked about how we need to unleash our inner toddler by asking “why”. But what questions do we ask?

Background

Before getting into the guts of the RCA, I like to go through the background. This partly acts as a refresher for everyone, as it may have been a few weeks since the work, but it also helps guide me in my questioning.

This usually means sharing:

  • Links to the defect we’re RCAing & the original ticket
  • Links to the PRs that fixed the issue and, where possible, the original (“offending”) PR.

Then asking:

  • Can you describe the problematic behaviour? (i.e. what was actually wrong from a user’s point of view)
  • Can you describe the nature of the code fix?
  • What do you remember from working on the story?
    • How long did it take?
    • How many people were involved?

The Fix

Before learning more about why the issue came to be, let’s make sure that we’re confident in the fix. I like to ask two questions here:

  1. How resilient is the fix?
  2. Will we know if the behaviour regresses again? (i.e. did you add automated tests)

Quality Engineering Throughout The SDLC

Now we get into the really important questions. This is where we go through the software development life cycle and think about what we did and whether there were opportunities to (realistically) catch the issue at each stage.

First of all, if this was an escape, let’s ask whether we could have caught it in production (e.g. monitoring), release testing or epic close-off testing. I wouldn’t advocate just asking “could we have caught it here?” but rather asking what the process is, what testing was performed and whether this falls within the scope of what we’d usually test.

We then move on to the story within the sprint, starting with the testing of the original story/bug. We’re trying to understand whether this was a brain fart (it happens) or something that we wouldn’t usually consider testing. If the latter, why not?

Then we get into more technical. We’re looking at the PR, starting with code review. I’ll be asking about the nature of the bug and is that something that we’d look for? I’d want to understand whether SMEs were involved & if not, why not? Did they check the testing notes & automated tests in the code review? Code reviews aren’t ever going to catch everything but it is good to discuss this process. It is a nice chance for people to get to talk about the value and role of a code review too.

I then concentrate on the developer’s testing. What had they covered through automated and hands-on tests? How much was iterative? As a former dev, I know all too well how even a well-intentioned developer who tests their work can let things slip through here (see dev BLISS).

Then we’re back to technical discussions on the code. This is where I hope the architect can ask a few questions, although other team members regularly chip in. This discussion is a great way for the team to learn from each other.

You might think that now we’ve talked about the types of testing and the development challenges we may stop there, but no we don’t!

The teams will have had planning and refinement when breaking down the story. We do test strategies and planning at epic and sometimes user-story level. We think about the complexity of the code work with architectural studies before starting an epic. Let’s continue diving into these.

Again we’re asking what was done, whether this is a scenario that could have been caught, either behaviour-wise or in the code, and tapping into what more we could have done. This helps us shift left.

A Parting Question

Near the start I asked about our confidence in catching this issue again. Unless we’re running out of time (unfortunately often), I like to ask a similar but slightly wider question. How confident are we that we won’t see a repeat of the issue? Not necessarily the same issue but a similar one.

Summary Section

Finally I’ll have a summary section with actions, learnings and a summary of the RCA. Often written up afterwards because unsurprisingly the hour I book for RCAs isn’t always enough to cover everything in this post! I’ll explain a little more on this in a separate post.

So in short…

We start off by discussing the background of the story to refresh ourselves and help us get an idea on what threads are best to pull on as we go into things. We’ll also check we’re confident in the fix.

We then take our time going through the SDLC. We’re not just asking “could we have caught it?” or “why didn’t we catch it?” but looking at the actions, steps and processes to understand the answer to this.

I switched the ordering from starting with the first stages of the story to starting in prod, after advice from a great chap called Stu Ashman. I found this got us much more engagement in the testing and activities around post-release. You’ll also see how, through the different stages, we ask slightly different questions to consider more than “why didn’t we catch it?”.

We’re using every stage as a learning opportunity.

… and that makes for a meaningful RCA!


Meaningful RCAs: Involving the right people

I love collaboration and making exercises something that people can engage with. It is usually the discussion that matters more than what gets written on paper. For this to be successful, you need to have the right people in the (virtual) room.

As we’ve touched upon already, the RCA should cover all areas of the lifecycle of the defect’s source. Consequently I’d invite:

  • At least one person involved in refinement
  • The developer for the original story/defect
  • The code reviewer for the original story/defect
  • The tester for the original story/defect
  • The developer who fixed the defect that we’re doing the RCA for
  • An architect, even if they’ve no involvement before (arguably better). Failing that, a team lead.
  • Optionally any other team members.

I would have liked to invite a PO to some but I never got quite that bold.

There are two things to highlight here.

First is that we’re focusing on who was involved when the defect was introduced. We have insight from the person who understands the fix but it is the processes, decisions and challenges in that original issue that we want to understand.

Secondly, with the architect and myself we have a cracking blend of insight: there’s someone who can analyse the code, design and technical side and ask meaningful questions, whilst I can look at testing and process and examine ways of working.

For this to be successful, you need all participants bought into the idea that this is a safe space and that no blame will be placed. I’ve written about this previously.