Software testing life cycle: When is testing complete?

As testers, we often get our testing end date imposed upon us. As one of the last stages in the software development lifecycle, the testing phase can get squeezed into a smaller-than-needed time frame because the organisation has committed to a release date. Sound familiar?

This blog will help you work out how and when to finish a testing life cycle, equipping you with the justification you need to push back on premature launch dates, and to make the most of the time you do have where that isn’t possible.

We’ll also cover elements within the software testing life cycle that will help you prioritise – that way, if you have to cut your testing short, you can be confident that you’ve covered the right things off.

What is the software testing life cycle?

The software testing life cycle is every task and action you do to verify and validate the software prior to release. It’s an umbrella term covering the whole start-to-finish process of software testing. Read on for a breakdown of each stage and how to mitigate issues and improve your methods.

What are the software testing life cycle phases?

When I first trained in the software testing life cycle phases, we were taught a (now very dated) acronym: Posh Spice Eats Raw Carrots. Which stands for: Planning, Specification, Execution, Recording and Closure. While the specifics and techniques for each of these phases have developed since I was first learning the processes, the basis of this is still sound.

So, first things first, the Planning phase. This can be split into three parts: requirements analysis, risk assessment and test planning.

Requirements analysis

The first thing you need to do is understand what you’re testing against – so you’ll need to define your criteria for a verified and validated product.

  • Verification – this is making sure the software has no functional bugs. It’s answering the question: does the product fulfil the requirements the end users set out at the start of the project? If we were testing a calculator, it would be making sure that when we plug in numbers, we get the answers that we were expecting.
  • Validation – this is a bit more nuanced. Validation is checking that the product we’re developing is what the end user actually needs from the product. So we’re trying to answer the question: are the requirements we’ve set out suitable for meeting the users’ needs? Using the same example, if we’ve set out a requirement that 2+2 = 5, and our calculator does that, it would be verified. But when we checked this with user acceptance testing, we’d find that wasn’t valid for the user’s needs.

It’s worth saying here that you need to define your validation criteria based on what the user actually wants, and what’s specified by the wider project. Regardless of what you think 2+2 should equal, if the project requirements specify a calculator should say 2+2 = 5, you need to be testing for that instead.
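
To make the verification side concrete, here’s a minimal sketch of what a requirement-driven check might look like, written pytest-style (the Calculator class and the requirement value are made up for illustration):

```python
# Minimal sketch: verification checks the software against the written
# requirement, whatever that requirement says. Hypothetical Calculator
# class, for illustration only.

class Calculator:
    def add(self, a, b):
        return a + b

def test_addition_meets_requirement():
    # Verification: the requirement says add(2, 2) must return 4.
    # If the requirement (wrongly) said 5, this test would encode 5 instead -
    # verification only proves we built what was specified.
    assert Calculator().add(2, 2) == 4
```

Validation can’t be automated in the same way, because it questions the requirement itself – that’s what user acceptance testing is for.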

At this stage you should thoroughly analyse the requirements to make sure you are verifying the right conditions. Check the paperwork and ensure what’s written is fit for purpose and not ambiguous.

A common example of where this goes wrong comes up in performance testing. The requirements document will often list criteria like “the system is quick”. What does quick mean? Quick compared to what – a snail, or the speed of light?

Make sure all the system requirements you’re going to write tests for are specific, measurable and relevant to users’ needs, and that they’re understood in the same way by every team throughout the software testing lifecycle.
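
A vague requirement only becomes testable once it’s pinned to a number. As a rough sketch, assuming a hypothetical search() function and a made-up threshold of 500 ms (the figure itself would come from the requirement, not from us):

```python
import time

# Hypothetical operation under test - stands in for a real system call.
def search(term: str) -> list:
    return [term]

def test_search_responds_within_500ms():
    # "Quick" rewritten as a measurable requirement:
    # "a single search completes in under 500 ms on the test environment".
    start = time.perf_counter()
    search("calculator")
    elapsed_ms = (time.perf_counter() - start) * 1000
    assert elapsed_ms < 500, f"search took {elapsed_ms:.0f} ms, requirement is < 500 ms"
```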

(This is a key reason why it’s important to have testers involved in the early requirements documentation process as encouraged by shift left testing. Read more about it here.)

Test Planning

Before you start planning your tests, you obviously need to assess your risks. Check out this blog for detailed advice on how to do it right.

Next you’re going to plan your tests. Before you crack on, you need to write a test plan and get it signed off by business executives or stakeholders, covering what, how and when you’re going to test. If you don’t, you leave the door open for misunderstanding and blame later down the line.

What your test plan looks like will depend on the kind of project you’re working on. If you’re doing a more traditional waterfall or V-model project, you might have a test plan template set out in a Word document, which includes things like:

  • What’s in scope
  • What’s out of scope
  • What teams are involved
  • Who the stakeholders are
  • How you’re going to carry out the tests

Test Planning in Agile

If you’re working on an agile project, the test planning should be part of the sprint planning. You’ll still need to consider the questions above, but it happens more regularly, and the intent and format of the planning might be a little different. For instance, it will probably be more tool-driven – set out on kanban boards within your project management software, instead of in a hefty Word document!

While your test planning document will be different depending on what your project is, these are the bare bones you need to have on every test plan (pulled together in a short code sketch just after this list):

Test levels

  • Differentiate your tests across the different levels of the project: unit tests (small parts of the system in isolation), system tests (the whole system) and system integration tests (how it integrates with other systems).

Non-functional testing

  • This should include elements like performance testing, load testing, security testing, operational acceptance testing and so on.

Entry and exit criteria

  • We’ll cover these further on, but you’ll need to plan ahead and write these criteria so you know when you’re ready to start and complete testing.
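
Here’s that bare-bones structure captured as a simple data sketch – the field names and criteria are purely illustrative, not a standard format:

```python
# Illustrative only: the bare bones of a test plan captured as data.
# Real plans live in documents or tooling; it's the fields that matter.
test_plan = {
    "test_levels": ["unit", "system", "system integration"],
    "non_functional": ["performance", "load", "security", "operational acceptance"],
    "entry_criteria": [
        "Test environment built and smoke-tested",
        "Code deployed and version confirmed",
        "Test data loaded",
    ],
    "exit_criteria": [
        "100% of priority 1 tests executed and passed",
        "No open critical or high-severity defects",
    ],
}
```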

Once you’ve done your test plan, you must get sign off from stakeholders, so everyone is aware of what’s going to happen.

That said, your test plan needs to be a living document. As circumstances and risk profiles change, you’ll need to revisit it and adapt the plan to the changing project. It’s good practice not to follow it dogmatically when things change – but it’s also key to update the document accordingly when they do.

Test case design and development

Next is the specification stage. The best way to think about and do this testing phase is to split it out into test conditions and test cases. This will ensure your test designs actually fit the specifications you need to meet.

Design and confirm the test conditions

Start with writing your test conditions. If we go back to the calculator example, a test condition might be, ‘addition works’. When designing test conditions, we obviously can’t test everything. You can’t add every number combination possible together to confirm that all addition works. Instead, you create a subset of testing that’s representative.

There are tonnes of different test design techniques to account for this, but two of the most common ones are below (with a short code sketch after the list):

  • Boundary value analysis – This is where you focus your efforts around boundaries, because in software development that’s often where issues happen. For example, if you’re testing a system that computes interest rates, and a user qualifies for a higher interest rate when they’ve got more than £1000, you’d design tests that look at the £999, £1000 and £1001 cases.
  • Equivalence partitioning – This is splitting your inputs into groups (partitions) that the system should treat in the same way, then testing one representative from each group, on the basis that if X works, Y logically should also work. For instance, if 1+1 = 2 works, we can reasonably presume 2+2 = 4 will also work.
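
Here’s a minimal sketch of both techniques as parametrised pytest cases, using the interest-rate boundary and the calculator partition from above (the qualifies_for_higher_rate function and the £1000 rule are hypothetical):

```python
import pytest

# Hypothetical rule: balances over £1000 qualify for the higher rate.
def qualifies_for_higher_rate(balance: int) -> bool:
    return balance > 1000

# Boundary value analysis: test just below, on, and just above the boundary.
@pytest.mark.parametrize("balance, expected", [
    (999, False),
    (1000, False),
    (1001, True),
])
def test_interest_rate_boundary(balance, expected):
    assert qualifies_for_higher_rate(balance) == expected

# Equivalence partitioning: one representative per partition - here,
# "valid additions of small positive integers" - stands in for the rest.
@pytest.mark.parametrize("a, b, expected", [
    (1, 1, 2),
])
def test_addition_representative(a, b, expected):
    assert a + b == expected
```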

Create your test cases

Depending on who or what will execute the test cases, this is likely to be a very detailed step by step guide, detailing exactly how each test should be delivered. At this stage you’ll decide who will be executing the tests: will you automate them or get a third party team to follow the scripts? If you’re giving the test scripts to a computer, or someone who isn’t very adept with the system, then you’ll need to be very specific on what you want done!

The most important thing here is that you have the expected result: what output do you need to show that the test case has passed or failed? Always focus on that objective when writing your test cases.

It’s also worthwhile to prioritise your test cases. Often these will inherit priority from the risk or requirement associated with them, but dependencies can change the picture: sometimes a case won’t itself be testing a high-priority risk, but a high-priority test case will depend on it, so that prerequisite becomes high priority too. Considering those dependencies at this stage will help you complete testing to a better standard, because it will help you get the most important tests done first.
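
As a rough illustration of that dependency rule, this sketch promotes a prerequisite’s priority to match the highest-priority case that depends on it (1 = highest priority; the case IDs are made up):

```python
# Illustrative sketch: a prerequisite inherits the priority of the
# highest-priority test case that depends on it (1 = highest priority).
cases = {
    "TC-01 login works": {"priority": 3, "depends_on": []},
    "TC-02 transfer funds": {"priority": 1, "depends_on": ["TC-01 login works"]},
}

def promote_prerequisites(cases: dict) -> dict:
    # Single pass; longer dependency chains would need repeating until stable.
    for case in cases.values():
        for prereq in case["depends_on"]:
            cases[prereq]["priority"] = min(cases[prereq]["priority"],
                                            case["priority"])
    return cases

promote_prerequisites(cases)
print(cases["TC-01 login works"]["priority"])  # now 1: it blocks a priority 1 test
```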

Test environment setup

Test environments can be a real thorn in the side when trying to keep a software testing lifecycle on track.

This phase focuses on your test environment and ensuring that it’s appropriate for the tests you are required to execute. If it’s wrong, your tests will be invalid, and you’ll have to repeat work – so it’s worth spending the extra time to check it!

Here’s what you need to consider when setting up and managing your test environment:

  • To avoid test environment issues, you need to be really specific about when and what you’re testing and ensure that the test environment stays the same each time you’re testing. The easiest way to do this is to do a smoke test to check you’ve got all the latest code versions and data sets etc for testing.
  • Test environment access management is essential. Communicate with other teams so everyone is aware of the known state that the test environment needs to be reset to.
  • Back up and store data correctly in your test environment, for extra protection against invalidation of tests if the environment is shared across teams.
  • Be very careful of the privacy requirements of your test data. Back in the old days people would copy chunks of live data, but now it’s vital to take account of privacy. There are loads of data obfuscation tools that replace personal information with dummy data, so use one of those if you’ve not got dummy data already (the sketch after this list shows the basic idea).
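
As a hand-rolled illustration of what those obfuscation tools do (real tools handle far more – addresses, free text, referential integrity across tables), here’s a sketch that masks personal fields before data goes anywhere near a test environment:

```python
import hashlib

# Illustrative only - not any particular obfuscation tool.
def mask_record(record: dict) -> dict:
    masked = dict(record)
    masked["name"] = "Test User"
    # Keep a stable, non-reversible token for the email so related records
    # can still be matched up in tests.
    token = hashlib.sha256(record["email"].encode()).hexdigest()[:12]
    masked["email"] = f"{token}@example.test"
    return masked

live_row = {"name": "Jane Smith", "email": "jane.smith@example.com", "balance": 1042}
print(mask_record(live_row))  # name and email masked, non-personal fields untouched
```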

Finally, once you complete testing, it’s best practice to decommission your test environment if it’s not going to be reused. You’d be amazed how many times, when consulting, we’ve found servers still running legacy test environments and costing teams money. Increase the efficiency of your usage by managing access and reallocating resources from a decommissioned test environment.

Read more about how we can help you with personalised advice on your Test Environment Management here.

Test execution

This stage is the simplest to explain of all the software testing life cycle phases – run the tests! Think through whether you’re going to go manual or automated and plan accordingly.

In this stage, best practice is to report defects and incidents. Once you’ve run a test, compare the actual outcome with the expected outcome (set out in the test condition and test case), and if there’s a discrepancy, raise a defect.

If you’re working in an agile environment with a blended team, it can feel tempting to just report the defect straight to the developer, have them fix the problem, and then run the test again. But be careful with this method: while it’s quicker, you don’t know whether the fix they made for that test condition has broken something else you’ve already tested. If you don’t have a collated record of defects, you can’t identify this in a later phase and fix it, so make sure all incidents are recorded.
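
A collated record doesn’t have to be heavyweight. This sketch shows the bare minimum of capturing a defect whenever actual and expected results diverge (the fields are illustrative, not any defect tool’s schema):

```python
from dataclasses import dataclass

# Illustrative fields only - not a particular defect tracker's schema.
@dataclass
class Defect:
    test_case: str
    expected: str
    actual: str
    severity: str = "medium"

defect_log: list[Defect] = []

def check_result(test_case: str, expected, actual, severity: str = "medium"):
    """Compare outcomes and collate a defect rather than fixing it informally."""
    if expected != actual:
        defect_log.append(Defect(test_case, str(expected), str(actual), severity))

check_result("TC-07 addition", expected=4, actual=5, severity="high")
print(len(defect_log))  # 1 - recorded, so it can be tracked and retested later
```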

Recording

Even in agile projects, recording your outcomes thoroughly is very important. Mark each test as passed or failed (or if you’re running automated tests, it should do that for you) and measure your progress against time and quality. Create a regular report that goes to stakeholders which demonstrates your progress – though if you’re using agile tools, you will probably get real-time progress reporting.

When recording and reporting your test progress, you need to look at both progress and quality attributes, and think about what gives the most accurate picture. For instance, if I’ve got 10 tests and I’ve done 9, it’s easy to say I’m 90% through my testing. But if those 9 are low priority and the last one is the most important and longest to execute, that’s not an accurate report. So consider using the risk assessment outcomes from the test planning phase, as well as quantitative data, to give a true picture of your progress and outcomes.
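
Here’s a small sketch of the difference between raw counting and a risk-weighted view, using made-up weights (higher weight = higher priority/risk):

```python
# Made-up example: 10 tests, 9 executed - but the one left is the riskiest.
tests = [{"executed": True, "weight": 1}] * 9 + [{"executed": False, "weight": 10}]

raw_progress = sum(t["executed"] for t in tests) / len(tests)
weighted_progress = (sum(t["weight"] for t in tests if t["executed"])
                     / sum(t["weight"] for t in tests))

print(f"Raw:      {raw_progress:.0%}")       # 90% - looks nearly done
print(f"Weighted: {weighted_progress:.0%}")  # 47% - a truer picture of risk covered
```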

Test cycle closure

This is the last phase of the software testing lifecycle, and centres around the report. Once you’re at the point where you think you’re ready to complete testing – or you’re told your testing period is finished – the report summarises:

  • What tests you carried out
  • Any deviations from the test plan
  • A summary of defects you found, especially those still outstanding
  • A recommendation on how fit for purpose the product is

As much as it can be frustrating, it’s not up to us testers to make the judgement call around whether or not the project goes live. So your report needs to be designed to give stakeholders all the information they need to make an informed decision. Focus on what you’re trying to communicate, and don’t put in unnecessary data just because you can!

When we work with teams that have sophisticated reporting tools like QlikView which create tons of data, they’ll sometimes send out a 400-page report… but no one reads it. So take a smart, qualitative and quantitative approach that clearly focuses on the data that business executives need to make decisions, and ditch the irrelevant data.

What are entry and exit criteria?

Entry criteria are the requirements you must meet before you’re ready to start the testing process, for instance, ‘Is the test environment ready?’. Exit criteria are the requirements you must meet to signify that you’re ready to complete testing. You can use entry and exit criteria at all the different levels of the project – from system integration testing through to user acceptance testing. Sometimes, if you’ve got test dependencies, your entry criteria will reference the exit criteria of the level before.

Creating entry and exit criteria in software testing

When it comes to writing exit criteria, there’s a mistake we see often: just as with quantitative reporting, testers will sometimes base their exit criteria on purely statistical requirements, such as ‘95% of planned tests have been run’. But this fails to take into account what those 5% are! So it’s better to take a more priority- and risk-focused approach, for instance, ‘100% of priority 1 tests have been run’.
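
A priority-focused exit criterion like that is also straightforward to check automatically. A sketch, with illustrative statuses and priorities (1 = highest):

```python
# Illustrative statuses and priorities; 1 = highest priority.
results = [
    {"id": "TC-01", "priority": 1, "status": "passed"},
    {"id": "TC-02", "priority": 1, "status": "passed"},
    {"id": "TC-03", "priority": 3, "status": "not run"},
]

def exit_criteria_met(results: list[dict]) -> bool:
    """'100% of priority 1 tests have been run and passed' - not a raw percentage."""
    priority_1 = [r for r in results if r["priority"] == 1]
    return bool(priority_1) and all(r["status"] == "passed" for r in priority_1)

print(exit_criteria_met(results))  # True, even though only 2 of 3 tests overall have run
```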

As you probably know, the most important thing when it comes to exit and entry criteria in software testing is to not go ahead if the exit criteria have not been met. If you do that, you’re building on shaky foundations – it will invalidate the next tests you do, and just mean you’re wasting your time.

When is software testing complete?

From a practical perspective, you can complete testing when all of your exit criteria have been met, which is why it’s essential to get them right. Many testers will have a set of boilerplate exit criteria in their template test plan and will just copy that. But you need to actually think about the specifics of the project and what is and isn’t relevant to your desired outcomes. Take risk into account and take the time to write bespoke entry and exit criteria, because that will often save you time later on.

While having to finish testing before you’re ready is a common issue, sometimes testers forget that you can complete testing early if you’ve met the exit criteria! If you’re confident you’ve mitigated all the risks identified, don’t be afraid to stand up and say the testing phase is finished. Obviously use data to back up your assertion, but I’ve seen projects where resources could have been reallocated earlier if testers felt able to report that they were finished.

And that’s the end of your guide to completing testing and the software testing lifecycle. If you want to continue expanding your testing expertise, check out our guide to shift left testing here.

Interested in getting some advice on conducting your software testing effectively and efficiently? Get in touch with us here.

FAQs

Software testing life cycle: When is testing complete?

Exit criteria are defined at the test planning stage and can include the following items, which can signify the end of testing activities: sufficient coverage of requirements by test cases is achieved; all test cases planned for the phase have been executed; all critical and high priority bugs are fixed and verified.

When to say testing is completed?

The final step to determine when testing is complete is to review your test results and evaluate your testing performance. You should analyze your test data, such as test cases, test execution, test coverage, defect reports, and defect resolution, to assess the quality and completeness of your testing process.

How do you know when you have done enough testing?

Data-driven techniques to measure how much testing is enough:
  • When the build breakage is trivial
  • When all the involved parties sign off the stories
  • When the code freeze is effective
  • When all the blockers/bugs are addressed
  • When the test coverage is high

How much testing is enough testing?

There are no predefined rules for how much testing is enough. You have to weigh up the project situation, readiness and timelines to decide for yourself. In practice, the goal is to strike a balance between thorough testing and project constraints.

Is it sufficient to test a software product only at the end of its life cycle?

Testing should begin early in the software development life cycle (SDLC) and continue throughout the entire process.

When should testing be completed?

Stop the testing when the testing budget comes to its end. Stop the testing when the code coverage and functionality requirements come to the desired level. Stop the testing when the bug rate drops below a prescribed level. Stop the testing when the number of high severity Open Bugs is very low.

What are the criteria to be considered for test completion?

The two most common criteria are these: stop when the scheduled time for testing expires, or stop when all the test cases execute without detecting errors – that is, when the test cases are unsuccessful (none of them finds a defect).

How do you decide when you have tested enough?

How do you know when it's time to stop testing? You can stop testing when you can't find bugs or defects after executing the different types of test cases – functional, non-functional and so on.

Why is exhaustive testing not possible?

Complete Exhaustive Testing is not possible because it is not possible to cover all the test scenarios but still testers try to cover as many possible scenarios for software and the faults which remain in the software are very minor and can be ignored as they do not drastically affect the functionality of the ...

How do you confirm if the test cases you have written are complete?

  1. Design the test cases for the given requirement, and check that the test cases written cover all of the requirements.
  2. Check whether all positive as well as negative scenarios mentioned in the requirements are covered.
  3. Apply testing principles to the given requirements, such as boundary value analysis and equivalence classes.

How do you know when the product is tested well enough?

Regardless of whether it is a simple or a complex product, Test Case Coverage Analysis must start with well-documented and detailed requirements. These serve as the reference point for determining how many tests are necessary to ensure that the product will satisfy its intended purpose.

How do you figure out how much testing is needed for a software project?

The most effective way to determine the amount of testing needed for a software project is to conduct a risk-based analysis. Identify critical areas, potential vulnerabilities, and prioritize testing efforts based on factors like complexity, impact on users, and business-critical functionalities.

Is complete testing possible?

We can't do complete testing, so what can we do? Since complete testing is impossible, choosing the tests to perform is essentially a sampling problem. Adopting approaches such as risk-based testing are important in making good sampling decisions. The focus should be on doing "good enough testing".

What are the 7 phases of STLC?

Let us dive into the 7 phases of the software testing life cycle (STLC) and their importance in ensuring top-notch software quality:
  • Phase 1 – Requirement Analysis
  • Phase 2 – Test Planning
  • Phase 3 – Test Design
  • Phase 4 – Test Environment
  • Phase 5 – Test Execution
  • Phase 6 – Defect Tracking
  • Phase 7 – Test Reporting

Can you explain the software testing life cycle?

The Software Testing Life Cycle (STLC) is a sequence of specific actions performed during the testing process to ensure that the software quality objectives are met. The STLC includes both verification and validation. Contrary to popular belief, software testing is not just a separate activity.

What is complete testing?

Complete testing, aka exhaustive testing, implies that the software has been tested for all possible scenarios and it is 100% defect-free at the time of deployment.

What happens after the testing phase is completed?

The purpose of the Test Phase is to guarantee that the system built and tested in the Development Phase meets all requirements and design parameters. After being tested and accepted, the system moves to the Implementation Phase.

Which is the best definition of complete testing?

Think about what complete testing might mean: 1) Completed the discovery of every bug in the product. 2) Completely examined every aspect of the product. 3) Completed the testing that you believe is useful and cost-effective to perform at this time.
