Why your team is missing bugs

May 28, 2022

Bugs found in production can damage reputation, erode confidence in the product, and hurt its overall adoption. In this blog, we will dig into why teams so often miss bugs.

What counts as a bug

Typically, although everyone in a team understands what is desired and undesired behavior for the product, you will find that different roles have different definitions of a bug.

What product roles call a bug

Product roles in the team (Product Manager, Product Owner, Business Analyst) typically have a blueprint for the product and envision its features behaving in a certain manner. When they see a feature fall short of that envisioned requirement, they call the undesired behavior a bug.

What delivery roles call a bug

Delivery roles in the team (Developer, QA) develop and test product features according to their own understanding of how each feature should behave. They call something a bug if it doesn't work according to that very specific understanding; otherwise, they call it an enhancement.

What matters

Although different roles have different understandings of what is and is not a bug, what really matters is how the deviation in behavior impacts the end user. I have always found it helpful to avoid arguments about whether to call something a bug or an enhancement and to focus the discussion instead on how the deviation impacts end users; that is what reveals the severity of the situation and the risk of leaving the undesired behavior untreated.

Typical causes of missed bugs

Looking back at the many occasions on which I or my team missed bugs, I find a set of recurring reasons.

Putting untested change in a live environment

No one wants things to break, and no one deliberately turns a blind eye to a change, yet this happens when we put untested changes into a live environment. By change, I mean any change, such as:

  1. A change in deployment topology
  2. A change in code
  3. A change in configuration

There can be many reasons for putting an untested change into a live environment; some of them are listed below:

  1. The individuals making the change believe it is so small that it shouldn't cause much harm
  2. There is not enough automated test coverage around the area of the change, and there are not enough people on the team to test it manually
  3. The change is urgent and must be deployed so quickly that there is little room for any testing around it

Simple ways to mitigate this are:

  1. Involving a person with a deep understanding of the area, not just technically but in terms of behavior, early on when the change is being discussed (this person could be anyone, but is typically QA)
  2. Having enough automated test coverage around at least the critical parts of the product, and doing a quick dry run of the change in a test environment, as shown in the sketch below
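
As a minimal sketch of what such a dry run could look like, here is a hypothetical pytest smoke suite that exercises a few critical paths in the test environment before a change goes live; the base URL and the endpoints are invented for illustration:

    import pytest
    import requests

    # Hypothetical base URL of a test environment; in a real setup this
    # would come from configuration or an environment variable.
    BASE_URL = "https://test-env.example.com"

    # A handful of critical paths every change should exercise before
    # it is promoted to the live environment.
    CRITICAL_ENDPOINTS = ["/health", "/login", "/search"]

    @pytest.mark.parametrize("endpoint", CRITICAL_ENDPOINTS)
    def test_critical_endpoint_responds(endpoint):
        # A quick dry run: each critical endpoint should still answer
        # after the change is deployed to the test environment.
        response = requests.get(f"{BASE_URL}{endpoint}", timeout=5)
        assert response.status_code == 200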

Narrowly focused testing approach

When you break a complex product into the smallest possible independently deployable stories and ship them incrementally, it is natural to lose the big picture by focusing too narrowly on each story. This is a mistake testers often make, for various reasons:

  1. The nature of the information present in the story
  2. Horizontally sliced stories
  3. Time crunch
  4. Unorganized testing
  5. Horizontal testing

Simple ways to mitigate this are:

  1. Having enough links in the story to the other stories it relates to
  2. Slicing stories vertically (this may not always be feasible, in which case a cover-up story or task that describes the vertical slice helps)
  3. Time crunch and unorganized testing go hand in hand. Testers always feel a time crunch if they don't plan their testing well in advance; having at least the different scenarios ready as one-liners makes a huge difference compared with scrambling to do everything after development is done.
  4. When you deliver a story and test only its stated requirements, you are doing nothing but horizontal testing. To mitigate this, think of possible test scenarios as a vertical slice of the test pyramid, so that they include all sorts of testing in appropriate proportions: unit, component, integration, and system (see the sketch after this list).
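
To make the vertical slice concrete, here is a hypothetical pytest sketch for an invented discount feature that covers several layers of the pyramid in one go; the discount rule, the Cart class, and the system-level scenario are all assumptions made for illustration:

    import pytest

    # --- Unit level: the business rule in isolation (invented rule) ---
    def calculate_discount(total):
        # Invented rule: 10% off orders of 100 or more.
        return total * 0.10 if total >= 100 else 0.0

    def test_unit_discount_rule():
        assert calculate_discount(100) == 10.0
        assert calculate_discount(99) == 0.0

    # --- Component level: the rule wired into a small cart component ---
    class Cart:
        def __init__(self, items):
            self.items = items

        def total_payable(self):
            total = sum(self.items)
            return total - calculate_discount(total)

    def test_component_cart_applies_discount():
        cart = Cart([60, 60])  # total 120, so the 10% discount applies
        assert cart.total_payable() == 108.0

    # --- System level: would drive a deployed environment end to end ---
    def test_system_checkout_shows_discount():
        # In a real suite this would drive the UI or API of a test
        # environment; skipped here because this sketch has none.
        pytest.skip("requires a deployed test environment")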

Lack of exploratory testing and too much focus on scripted testing

People have written much about exploratory testing and its importance, yet many organizations still try to adopt 100% scripted testing. By scripted testing I mean testing where all scenarios, test cases, and test steps are figured out well in advance (the test script); I do not mean automated testing. These organizations want their testers to determine almost all of their test cases up front. But with experience I have learned that testing is also a process of learning how the software behaves, and many scenarios can only be thought of once we know more about it. It is still good to note down these newly learned scenarios and convert them into test scripts later, yet no one can guarantee that you have identified every test script needed to cover a feature. For that reason, you should always spare some timeboxed sessions for exploratory testing too.
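
As a hypothetical illustration of feeding exploratory findings back into the scripted suite, suppose a session revealed that search mishandled leading whitespace; the scenario can be pinned down as a scripted test afterwards (the search helper and its behavior are invented for this sketch):

    # Hypothetical scenario learned during an exploratory session:
    # searching with leading whitespace returned no results even though
    # the trimmed query matched existing items. Once learned, the
    # scenario is captured as a permanent scripted test.

    def search(catalog, query):
        # Invented search helper that normalizes the query before matching.
        query = query.strip().lower()
        return [item for item in catalog if query in item.lower()]

    def test_search_ignores_leading_whitespace():
        catalog = ["Blue Shirt", "Red Shoes"]
        assert search(catalog, "  blue shirt") == ["Blue Shirt"]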

Lack of different types of testing

I have seen that cross-functional testing, configuration testing, security testing, and cross-device testing are often not thought about or done unless someone explicitly asks for them. They are typically missed for the following reasons:

  1. The user story doesn't explicitly mention these requirements, because requirements of this kind are implicit by nature
  2. Lack of clarity on such requirements

A simple way to mitigate this is to ask more questions whenever there is no clarity about the requirements.
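
One hedged way to make configuration testing routine rather than an afterthought is to parameterize the same scenario over every configuration the product claims to support; the locales and the format_price helper below are invented for illustration:

    import pytest

    # Invented set of supported locales for this sketch.
    SUPPORTED_LOCALES = ["en_US", "de_DE", "ja_JP"]

    def format_price(amount, locale):
        # Invented formatter: the currency symbol varies by locale.
        symbols = {"en_US": "$", "de_DE": "€", "ja_JP": "¥"}
        return f"{symbols[locale]}{amount:.2f}"

    @pytest.mark.parametrize("locale", SUPPORTED_LOCALES)
    def test_price_formats_in_every_supported_locale(locale):
        # Running the same scenario across every configuration catches
        # the implicit requirements a user story rarely spells out.
        assert format_price(10, locale).endswith("10.00")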

Missing regressions

Often, when teams deliver a change, they miss testing the existing, working areas affected by that change. There can be several reasons for missing the affected area:

  1. The impacted area is too large, and there isn't enough time to test it
  2. Narrowly focused testing, as mentioned above
  3. The tester doesn't know the impacted area, either because the context was lost somewhere in the past or because they don't have enough knowledge of the product

Simple ways to mitigate this are:

  1. Having enough automated coverage (see the sketch below)
  2. Doing enough homework on the user story and collecting its context well before the actual testing starts
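
One way to keep regression runs feasible under time pressure is to tag automated tests with the product area they cover, so the slice for an impacted area can be selected quickly; the payments marker and the refund rule below are assumptions made for this sketch:

    import pytest

    # Hypothetical sketch: tag each regression test with the product
    # area it covers (markers are registered in pytest.ini in a real
    # project). When a change touches payments, run just that slice:
    #
    #     pytest -m payments

    def apply_refund(balance, refund, charge):
        # Invented rule: a refund may never exceed the original charge.
        if refund > charge:
            raise ValueError("refund exceeds charge")
        return balance + refund

    @pytest.mark.payments
    def test_refund_restores_balance():
        assert apply_refund(90, 10, charge=10) == 100

    @pytest.mark.payments
    def test_refund_cannot_exceed_original_charge():
        with pytest.raises(ValueError):
            apply_refund(90, 20, charge=10)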

Insider effect

When you know a piece of functionality inside and out, to the point that you understand which code paths a particular behavior takes, your mind slips into narrowly scoped testing. You can't think far outside the box when your mind is buried in the implementation rather than the outcome. I say this largely from personal experience, since I take a lot of interest in understanding the code behind things that work. A tester needs to be an insider and an outsider of the system in equal measure: knowing the implementation helps, but don't let it get in the way of thinking outside the box.

Simple ways to mitigate this are:

  1. Giving your mind enough time to fully shift from thinking about how this works to thinking about how this may not work (see the sketch below)
  2. Taking the insider advantage to uncover more scenarios
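
One way to force that shift from "how this works" to "how this may not work" is property-based testing, which generates inputs an implementation-biased mind would not pick; this sketch uses the hypothesis library against an invented slugify helper:

    from hypothesis import given, strategies as st

    def slugify(text):
        # Invented helper: trim, lowercase, and turn spaces into hyphens.
        return text.strip().lower().replace(" ", "-")

    @given(st.text())
    def test_slug_never_contains_spaces(text):
        # Rather than hand-picking inputs we already know work, generated
        # inputs probe how the function may not work.
        assert " " not in slugify(text)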

Final thoughts

From the stakeholders' perspective, regularly missing many bugs, with all the losses mentioned earlier, is frustrating. From the team's perspective, missing bugs can be downright embarrassing. The list here is common but far from exhaustive; when your team is missing many bugs, retrospect regularly, find the root causes, and take the necessary corrective action.