What if I told you that, despite your best efforts, your brain could be sabotaging your testing approach? And that while you’ve spent your entire software testing career trying to find the biggest, juiciest bugs, your subconscious has had other goals altogether? Believe it or not, cognitive biases could have been messing with your testing processes since the start.
Most testing professionals I know try their best to be objective. We prioritise tests by business risk and frequency of execution, build our automated test packs based on equally sensible criteria, raise defects as we find them, and put equal effort into every test case that we execute—or at least we think we do.
You see, our minds are funny things, subject to all sorts of whims and caprices that we’re not fully aware of.
Cognitive biases affect how you conduct software testing, influencing your approach to work and potentially impacting the quality of the final product. Understanding these biases is crucial for improving software testing practices and delivering more reliable software.
Speaking From Experience
I was prompted to write this insight after bumping into a friend and former colleague at a conference. We were reminiscing about the good old days and reminding each other of long-forgotten stories from our early careers.
One particular story involved one of our team, who shall remain anonymous, sending not one, not two, but twelve real credit cards to a real man at a real house from a supposed test environment at the financial institution where we all worked.
He got hauled over the coals for this, and we’re still talking about it to this day, but what’s always stuck in my mind is that he managed to send so many! Somewhere down the line, someone should have been able to see that something wasn’t right.
The cards needed to be printed, placed in the envelopes, and sent out… but even before that, the test and live systems didn’t look the same. When I asked my friend why we hadn’t noticed, he said he’d been focusing on an issue with how the applicant details were displayed on the screen.
He kept failing the test and reopening the defect, the developer kept insisting that it worked on his end (which it did in the test system), and my friend kept retesting.
Back to the present-day conference, my old friend commented, ‘Isn’t it funny how your mind works? Sometimes we’re so blinkered.’
The more I thought about that statement, the more I wanted to know… why?
The Impact of Cognitive Biases on Software Testing
Unfortunately, the millions of years of evolution that preceded us did not have software testing as their end goal. It turns out the cognitive biases that helped our species take over the world are sometimes less than ideal for QA.
ScienceDirect has this to say about cognitive biases:
“Cognitive bias refers to a systematic deviation from objective reality that arises due to the evolution of human cognition. It involves more than 200 types of biases, such as confirmation bias and cultural bias, which can impact biomedical research and data reporting.”
Essentially, our brains have developed hundreds of quirks that help us live our best lives but mess with our objectivity and affect our judgement. For example, we favour information that supports our beliefs (confirmation bias), and our background and experiences impact our viewpoint (cultural bias).
They can lead to systematic errors in judgement and decision-making, which, as we all know, are critical aspects of software testing processes.
These inherent patterns of thinking can affect everything from test case design to defect reporting to test management and our overall approach to risk and quality.
Confirmation Bias
Confirmation bias is the tendency to seek information that confirms pre-existing beliefs while ignoring contradictory evidence. In software testing, this can significantly impact the quality and thoroughness of the testing process:
- Focusing on expected functionality: Testers may unconsciously design test cases that align with their expectations of how the software should work, potentially missing edge cases or unexpected behaviours.
- Overlooking contradictory evidence: When encountering a bug that doesn’t fit their mental model of the software, testers might dismiss it as a fluke or user error rather than investigating further.
- Biased interpretation of results: Ambiguous test results may be interpreted in a way that confirms the tester’s initial assumptions about the software’s functionality.
Let’s say a tester is working on a new e-commerce platform. Based on their experience with similar systems, they assume the checkout process will function in a specific way. They create test cases that align with this assumption, focusing on common scenarios like successful purchases with valid credit cards.
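To make this concrete, here’s a minimal pytest sketch. The `checkout` function is an invented stand-in for the platform, deliberately given a weak validation bug. The first suite is what a confirmation-biased tester tends to write; the second asks the questions the first never does.

```python
import pytest

def checkout(card_number: str, amount: float) -> str:
    # Deliberately naive stand-in for the system under test: it accepts
    # any 16-digit number with a positive amount and never runs a Luhn check.
    if len(card_number) == 16 and card_number.isdigit() and amount > 0:
        return "SUCCESS"
    return "DECLINED"

# Confirmation-biased suite: every case confirms the happy path the
# tester already expects to work.
@pytest.mark.parametrize("card, amount", [
    ("4111111111111111", 19.99),
    ("5500005555555559", 250.00),
])
def test_successful_purchase(card, amount):
    assert checkout(card, amount) == "SUCCESS"

# Counter-evidence suite: actively look for reasons a purchase
# should *not* succeed.
@pytest.mark.parametrize("card, amount", [
    ("4111111111111111", 0.00),   # zero-value order
    ("4111111111111111", -5.00),  # negative amount
    ("1234", 19.99),              # malformed card number
    ("4111111111111112", 19.99),  # 16 digits, but fails the Luhn check
])
def test_purchase_is_rejected(card, amount):
    assert checkout(card, amount) == "DECLINED"
```

Run it and the happy-path suite passes cleanly while the Luhn counter-example fails: exactly the kind of defect a suite built to confirm expectations never surfaces.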
Mitigation strategies for confirmation bias:
- Active hypothesis testing: Encourage testers to actively seek evidence that contradicts their assumptions about the software’s behaviour.
- Diverse testing teams: Involve multiple testers with different backgrounds and perspectives to challenge individual biases.
- Structured test design techniques: Utilise methods like boundary value analysis and equivalence partitioning to ensure comprehensive test coverage beyond expected scenarios, as sketched below.
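For instance, here’s what boundary value analysis and equivalence partitioning might look like in practice. It’s a minimal sketch assuming a hypothetical order-quantity field that accepts whole numbers from 1 to 99:

```python
import pytest

MIN_QTY, MAX_QTY = 1, 99

def is_valid_quantity(qty: int) -> bool:
    # Stand-in for the validation logic under test.
    return MIN_QTY <= qty <= MAX_QTY

# Boundary value analysis: each edge of the valid range plus its neighbours.
@pytest.mark.parametrize("qty, expected", [
    (MIN_QTY - 1, False),  # just below the lower boundary
    (MIN_QTY, True),       # lower boundary
    (MIN_QTY + 1, True),   # just above the lower boundary
    (MAX_QTY - 1, True),   # just below the upper boundary
    (MAX_QTY, True),       # upper boundary
    (MAX_QTY + 1, False),  # just above the upper boundary
])
def test_quantity_boundaries(qty, expected):
    assert is_valid_quantity(qty) == expected

# Equivalence partitioning: one representative value per class.
@pytest.mark.parametrize("qty, expected", [
    (-10, False),  # invalid class: below the range
    (50, True),    # valid class: mid-range
    (500, False),  # invalid class: above the range
])
def test_quantity_partitions(qty, expected):
    assert is_valid_quantity(qty) == expected
```

The inputs here are derived mechanically from the specification’s partitions, not from the tester’s hunches about what is likely to work, which is precisely what keeps confirmation bias out of the loop.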
Availability Bias
Availability bias occurs when individuals overestimate the importance or likelihood of events based on how easily they can recall related examples. In software testing, this can lead to skewed priorities and incomplete test coverage:
- Overemphasis on recent issues: Testers may focus disproportionately on bugs or scenarios they’ve encountered recently, even if they’re not the most critical or common.
- Neglecting less memorable scenarios: Important but less dramatic or frequent use cases might be overlooked in favour of more memorable ones.
- Biased risk assessment: The perceived likelihood of certain bugs or failures may be inflated based on vivid past experiences, leading to misallocation of testing resources.
Consider a mobile app development team that recently experienced a major production issue where the app crashed for users with older devices. This incident was highly stressful and memorable for the team. In subsequent releases, testers become hyper-focused on compatibility with older devices, dedicating a disproportionate amount of time to this aspect.
This recent, impactful memory leads to an imbalanced testing approach that may miss other important issues.
Mitigation strategies for availability bias:
- Comprehensive test planning: Develop and maintain a thorough test plan that covers all aspects of the software, not just recent problem areas.
- Data-driven prioritisation: Use metrics and historical data to inform test case prioritisation, rather than relying solely on recent experiences (see the sketch after this list).
- Regular review of test coverage: Periodically assess the distribution of testing efforts to ensure balanced coverage across all important aspects of the software.
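As a rough illustration of the data-driven point, here’s a sketch that ranks modules by an objective risk score rather than by how vivid the last incident was. The module names, figures, and scoring heuristic are all invented; in practice the inputs would come from your defect tracker and usage analytics:

```python
from dataclasses import dataclass

@dataclass
class ModuleStats:
    name: str
    defects_last_quarter: int  # pulled from the defect tracker
    usage_share: float         # fraction of user sessions touching the module

def risk_score(m: ModuleStats) -> float:
    # Simple heuristic: historical defect count weighted by real usage.
    # Swap in whatever metric your team agrees on.
    return m.defects_last_quarter * m.usage_share

modules = [
    ModuleStats("checkout", defects_last_quarter=12, usage_share=0.30),
    ModuleStats("search", defects_last_quarter=4, usage_share=0.55),
    ModuleStats("legacy_devices", defects_last_quarter=2, usage_share=0.05),
]

# Rank modules by the data, not by which failure is most memorable.
for m in sorted(modules, key=risk_score, reverse=True):
    print(f"{m.name:15s} risk={risk_score(m):.2f}")
```

In this made-up data, the memorable legacy-device module comes out last: the numbers, not the anecdote, should decide where the testing hours go.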
Anchoring Bias
Anchoring bias involves relying heavily on the first piece of information encountered when making decisions. In software testing, this can lead to narrow focus and missed opportunities for thorough testing:
- Fixation on initial expectations: Testers may become anchored to their first impressions or assumptions about how a feature should work, limiting their exploration of alternative scenarios.
- Over-reliance on requirements: Strict adherence to initial specifications may prevent testers from considering how users might interact with the software in unexpected ways.
- Limited scope in exploratory testing: The initial direction taken in exploratory testing sessions may unduly influence the entire session, potentially missing other important areas.
Imagine a software tester has flagged a defect in a system. The developer states that the issue will be in a specific module due to a recent change they made. The tester might use their limited time to focus on that specific module, checking for bugs or issues based on the developer’s input, while neglecting other parts of the system.
It turns out that the root cause of the bug was actually an integration issue between other modules, which sent incorrect data down the pipe to the supposedly problematic area. Unfortunately, the tester failed to find it promptly because, rather than taking a systematic approach to their testing, they were influenced by both their own and the developer’s anchoring bias.
Mitigation strategies for anchoring bias:
- Structured exploratory testing: Use charters or time-boxed sessions to encourage broader exploration beyond initial assumptions.
- Multiple perspectives: Involve different testers or stakeholders to provide fresh viewpoints and challenge anchored thinking.
- Scenario-based testing: Develop diverse user scenarios that go beyond the basic requirements to uncover potential issues in real-world usage.
Strategies for Overcoming Cognitive Biases in Testing
OK, now you know about some cognitive biases and how they affect testing, but how do you overcome them?
Unfortunately, I can’t give you a silver bullet to undo your human nature and millennia of evolution. Your own cognitive biases are here to stay, and you will never escape them entirely, but now that you know they’re there, you can try a few of the following strategies to minimise their impact:
- Awareness and Education: This article has touched on three examples, but you should take time to learn about other cognitive biases and their potential impact on testing processes.
- Diverse Testing Teams: Teams with varied backgrounds and perspectives will help challenge individual biases.
- Structured Testing Approaches: You do this anyway, but using methodologies that promote systematic and comprehensive testing will help limit the impact of personal biases.
- Peer Reviews: Get input and opinions on the assets and approaches you’re using.
- Data-Driven Decision Making: Use metrics and objective data to inform testing decisions and priorities; don’t just rely on gut feeling or historical significance.
Always Keep In Mind: Your Brain Plays Tricks on You
Cognitive biases are natural, and as far as I can tell, they affect all of us. Sometimes, you can catch yourself, but sometimes, you have no idea they’re controlling your thoughts and actions.
They will mess with your priorities, your expectations, and your assumptions… but you can put things in place to mitigate them.
Building robust, data-driven testing processes, canvassing multiple opinions, and putting yourself in the users’ mindset are just a few of the approaches you can take to limit the risk.
As testers, we’re under so much pressure to deliver, but sometimes, just taking a step back and clearing your mind can help. In fact, why not subscribe to my mailing list? That way, you have something to do when you’re taking a few minutes of downtime—and all the content is testing-focused, so you don’t even need to feel guilty about it!