17 September, 2024

Software Testers: How Your Brain Plays Tricks on You

The impact of cognitive biases on software testing

What if I told you that, despite your best efforts, your brain could be sabotaging your testing approach? And that while you’ve spent your entire software testing career trying to find the biggest, juiciest bugs, your subconscious has had other goals altogether? Believe it or not, cognitive biases could have been messing with your testing processes since the start.

Most testing professionals I know try their best to be objective. We prioritise tests by business risk and frequency of execution, build our automated packs based on equally sensible criteria, raise defects as we find them, and put equal effort into every test case that we execute—or at least we think we do.

You see, our minds are funny things, subject to all sorts of whims and caprices that we’re not fully aware of.

Cognitive biases affect how you conduct software testing, influencing your approach to work and potentially impacting the quality of the final product. Understanding these biases is crucial for improving software testing practices and delivering more reliable software.

Speaking From Experience

I was prompted to write this insight after bumping into a friend and former colleague at a conference. We were reminiscing about the good old days and reminding each other of long-forgotten stories from our early careers.

One particular story involved one of our team, who shall remain anonymous, sending not one, not two, but twelve real credit cards to a real man at a real house from a supposed test environment at the financial institution where we all worked.

He got hauled over the coals for this, and we’re still talking about it to this day, but what’s always stuck in my mind is that he managed to send so many! Somewhere down the line, someone should have been able to see that something wasn’t right.

The cards needed to be printed, placed in the envelopes, and sent out… but even before that, the test and live systems didn’t look the same. When I asked my friend why we hadn’t noticed, he said he’d been focusing on an issue with how the applicant details were displayed on the screen.

He kept failing the test and reopening the defect, the developer kept insisting that it worked on his end (which it did in the test system), and my friend kept retesting.

Back to the present-day conference, my old friend commented, ‘Isn’t it funny how your mind works? Sometimes we’re so blinkered.’

The more I thought about that statement, the more I wanted to know… why?

The Impact of Cognitive Biases on Software Testing

Unfortunately, the millions of years of evolution that preceded us did not have software testing as their end goal. It turns out the cognitive biases that helped our species take over the world are sometimes less than ideal for QA.

ScienceDirect has this to say on cognitive biases:

“Cognitive bias refers to a systematic deviation from objective reality that arises due to the evolution of human cognition. It involves more than 200 types of biases, such as confirmation bias and cultural bias, which can impact biomedical research and data reporting.”

Essentially, our brains have developed hundreds of quirks that help us live our best lives but mess with our objectivity and affect our judgement. For example, we favour information that supports our beliefs (confirmation bias), and our background and experiences impact our viewpoint (cultural bias).

They can lead to systematic errors in judgment and decision-making—which, as we all know, are critical aspects of software testing processes.

These inherent patterns of thinking can affect everything from test case design to defect reporting to test management and our overall approach to risk and quality.

Confirmation Bias

Confirmation bias is the tendency to seek information that confirms pre-existing beliefs while ignoring contradictory evidence. In software testing, this can significantly impact the quality and thoroughness of the testing process:

  • Focusing on expected functionality: Testers may unconsciously design test cases that align with their expectations of how the software should work, potentially missing edge cases or unexpected behaviours.
  • Overlooking contradictory evidence: When encountering a bug that doesn’t fit their mental model of the software, testers might dismiss it as a fluke or user error rather than investigating further.
  • Biased interpretation of results: Ambiguous test results may be interpreted in a way that confirms the tester’s initial assumptions about the software’s functionality.

Let’s say a tester is working on a new e-commerce platform. Based on their experience with similar systems, they assume the checkout process will function in a specific way. They create test cases that align with this assumption, focusing on common scenarios like successful purchases with valid credit cards. As a result, they may never exercise the less comfortable paths, such as declined payments or interrupted sessions, where the real defects are hiding.

Mitigation strategies for confirmation bias:

  • Active hypothesis testing: Encourage testers to actively seek evidence that contradicts their assumptions about the software’s behaviour.
  • Diverse testing teams: Involve multiple testers with different backgrounds and perspectives to challenge individual biases.
  • Structured test design techniques: Utilise methods like boundary value analysis and equivalence partitioning to ensure comprehensive test coverage beyond expected scenarios.
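To illustrate the last point, here is a minimal sketch (not from the original article; the quantity field and its 1–99 range are invented) of how boundary value analysis forces you to test the edges of an input range rather than only the "expected" values your assumptions point you towards:

```python
# Hypothetical sketch: deriving boundary-value test inputs for a numeric
# field instead of only testing values we expect to work.

def boundary_values(lo: int, hi: int) -> list[int]:
    """Classic boundary-value-analysis inputs for a valid range [lo, hi]:
    just below, on, and just above each boundary."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def is_valid_quantity(qty: int) -> bool:
    """Stand-in for the system under test: an order quantity must be 1-99."""
    return 1 <= qty <= 99

# Instead of only checking a comfortable value like 5, exercise the edges,
# where off-by-one defects tend to cluster.
for qty in boundary_values(1, 99):
    expected = 1 <= qty <= 99
    assert is_valid_quantity(qty) == expected, f"failed at qty={qty}"

print("boundary cases checked:", boundary_values(1, 99))
```

The point of the technique is that the test inputs are derived mechanically from the specification, not from the tester's beliefs about what will work.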

Availability Bias

Availability bias occurs when individuals overestimate the importance or likelihood of events based on how easily they can recall related examples. In software testing, this can lead to skewed priorities and incomplete test coverage:

  1. Overemphasis on recent issues: Testers may focus disproportionately on bugs or scenarios they’ve encountered recently, even if they’re not the most critical or common.
  2. Neglecting less memorable scenarios: Important but less dramatic or frequent use cases might be overlooked in favour of more memorable ones.
  3. Biased risk assessment: The perceived likelihood of certain bugs or failures may be inflated based on vivid past experiences, leading to misallocation of testing resources.

Consider a mobile app development team that recently experienced a major production issue where the app crashed for users with older devices. This incident was highly stressful and memorable for the team. In subsequent releases, testers become hyper-focused on compatibility with older devices, dedicating a disproportionate amount of time to this aspect.

This recent, impactful memory leads to an imbalanced testing approach that may miss other important issues.

Mitigation strategies for availability bias:

  1. Comprehensive test planning: Develop and maintain a thorough test plan that covers all aspects of the software, not just recent problem areas.
  2. Data-driven prioritisation: Use metrics and historical data to inform test case prioritisation, rather than relying solely on recent experiences.
  3. Regular review of test coverage: Periodically assess the distribution of testing efforts to ensure balanced coverage across all important aspects of the software.
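As a sketch of the data-driven prioritisation idea (the test names, counts, and impact weights below are invented for illustration), you can rank test cases by a simple risk score built from historical failure rates instead of from whatever incident is freshest in your memory:

```python
# Hypothetical sketch: prioritising tests from historical data rather than
# recent memory. All names and numbers are invented for illustration.

test_history = [
    # (test case, past runs, past failures, business impact 1-5)
    ("checkout_payment",  120, 18, 5),
    ("old_device_layout",  40,  1, 2),  # the recent, memorable incident area
    ("search_filters",     90, 12, 3),
    ("password_reset",     60,  9, 4),
]

def risk_score(runs: int, failures: int, impact: int) -> float:
    """Simple risk proxy: historical failure rate weighted by impact."""
    return (failures / runs) * impact

ranked = sorted(
    test_history,
    key=lambda t: risk_score(t[1], t[2], t[3]),
    reverse=True,
)

for name, runs, failures, impact in ranked:
    print(f"{name}: score={risk_score(runs, failures, impact):.2f}")
```

In this made-up data set, the vividly remembered old-device incident actually scores lowest, which is exactly the kind of correction that objective data provides over availability bias.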

Anchoring Bias

Anchoring bias involves relying heavily on the first piece of information encountered when making decisions. In software testing, this can lead to narrow focus and missed opportunities for thorough testing:

  1. Fixation on initial expectations: Testers may become anchored to their first impressions or assumptions about how a feature should work, limiting their exploration of alternative scenarios.
  2. Over-reliance on requirements: Strict adherence to initial specifications may prevent testers from considering how users might interact with the software in unexpected ways.
  3. Limited scope in exploratory testing: The initial direction taken in exploratory testing sessions may unduly influence the entire session, potentially missing other important areas.

Imagine a software tester has flagged a defect in a system. The developer states that the issue is likely in a specific module due to a recent change they made. The tester might use their limited time to focus on that specific module, checking for bugs based on the developer’s input, while neglecting other parts of the system.

It turns out that the root cause of the bug was actually an integration issue between other modules, which sent incorrect data down the pipe to the supposedly problematic area. Unfortunately, the tester failed to find it promptly because, rather than taking a systematic approach to their testing, they were influenced by their own, and the developer’s, anchoring bias.

Mitigation strategies for anchoring bias:

  1. Structured exploratory testing: Use charters or time-boxed sessions to encourage broader exploration beyond initial assumptions.
  2. Multiple perspectives: Involve different testers or stakeholders to provide fresh viewpoints and challenge anchored thinking.
  3. Scenario-based testing: Develop diverse user scenarios that go beyond the basic requirements to uncover potential issues in real-world usage.

Strategies for Overcoming Cognitive Biases in Testing

OK, now you know some cognitive biases and how they affect testing, but how do you overcome them?

Unfortunately, I can’t give you the magic bullet to undo your human nature and millennia of evolution. Your own cognitive biases are here to stay, and you will never escape them, but now that you know they’re there, you can try a few of the following strategies to minimise their impact:

  1. Awareness and Education: This article has touched on three examples, but you should take time to learn about other cognitive biases and their potential impact on testing processes.
  2. Diverse Testing Teams: Teams with varied backgrounds and perspectives will help challenge individual biases.
  3. Structured Testing Approaches: You do this anyway, but using methodologies that promote systematic and comprehensive testing will help limit the impact of personal biases.
  4. Peer Reviews: Get input and opinions on the assets and approaches you’re using.
  5. Data-Driven Decision Making: Use metrics and objective data to inform testing decisions and priorities; don’t just rely on gut feeling or historical significance.

Always Keep In Mind: Your Brain Plays Tricks on You

Cognitive biases are natural, and as far as I can tell, they affect all of us. Sometimes, you can catch yourself, but sometimes, you have no idea they’re controlling your thoughts and actions.

They will mess with your priorities, your expectations, and your assumptions… but you can put things in place to mitigate them.

Building robust, data-driven testing processes, canvassing multiple opinions, and putting yourself in the users’ mindset are just a few of the approaches you can take to limit the risk.

As testers, we’re under so much pressure to deliver, but sometimes, just taking a step back and clearing your mind can help. In fact, why not subscribe to my mailing list? That way, you have something to do when you’re taking a few minutes of downtime—and all the content is testing-focused, so you don’t even need to feel guilty about it!

by Stephen Davis

Stephen Davis is the founder of Calleo Software, an OpenText (formerly Micro Focus) Gold Partner. His passion is to help test professionals improve the efficiency and effectiveness of software testing.

To view Stephen's LinkedIn profile and connect: Stephen Davis LinkedIn profile



To get other software testing insights like this direct to your inbox, join the Calleo mailing list. You can, of course, unsubscribe at any time!

By signing up you consent to receiving regular emails from Calleo with updates, tips and ideas on software testing along with the occasional promotion for software testing products. You can, of course, unsubscribe at any time. Click here for the privacy policy.
