Test efforts estimation methods in a nutshell

Estimation is one of the most common activities in IT, and it is not as hard as it seems if you know how to do it. I would like to share my understanding of estimation methods.

Before you continue reading, I would like to emphasize that I have not read any smart books, guides, or articles on the subject, so I will not refer to or rely on any of them.
I use three main methods of estimation in my work: Mathematical, Comparative, and Expert Evaluation. Those names are probably not the ones you'll find in professional literature; it's just what I call them. So, a bit of detail on each.

Mathematical

This one is the easiest and, in my opinion, the most accurate. But this method requires a good level of project documentation, or at least good knowledge of the object to be estimated, so it's not applicable in dynamic agile projects. Let's consider the general process of estimating a mythical 'average feature' using this approach.

Mythical ‘average feature’ description:

  • Our feature is some kind of report that displays up to 15 unique data sets.
  • Feature complexity is 40-50 test cases.

Estimation flow:

  • Test documentation. Feature complexity is 40-50 test cases and the average speed of test case creation is 10 TCs per hour. As a result, we have 40-50/10 = 4-5 hours for creating test documentation.
  • Test data. The same approach works here: the number of data sets divided by the average creation speed (sets per hour) gives the required effort.
  • Functional testing. The formula looks as follows: the number of test cases divided by the average test case execution speed. 40-50/12 ≈ 3.5-4 hours.
  • Bug verification. This is the most complex part, which requires knowledge of the development practices planned for the feature. For example, if there are no Unit Tests or no Code Review, that is a trigger for us to expect additional bugs; I would say it's around +10% to the bug rate for each missing practice.
    Usually we start with a default bug rate of 10%, which means we expect every tenth test case to fail. So if there is no peer code review, the bug rate increases to 20%, and so on.
    This is very simplified logic; defining the bug rate properly deserves a separate article.
  • As a final step of the feature estimation, we sum up all the numbers, add risks, and the estimate for the feature is good to go.
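
The flow above can be sketched in a few lines of Python. The constants (10 TCs/hour authored, 12 TCs/hour executed, 10% base bug rate, +10% per missing practice) are the article's example values; the 10% risk buffer and the assumption that re-testing a failed case costs one execution slot are mine, for illustration only:

```python
# Sketch of the Mathematical estimation flow described above.
# Speed constants and the risk buffer are example values, not universal ones.

TC_CREATION_PER_HOUR = 10   # test cases authored per hour
TC_EXECUTION_PER_HOUR = 12  # test cases executed per hour
BASE_BUG_RATE = 0.10        # we expect every tenth test case to fail

def estimate_feature(test_cases: int, missing_practices: int = 0,
                     risk_buffer: float = 0.10) -> float:
    """Return the estimated effort in hours for one feature."""
    doc_hours = test_cases / TC_CREATION_PER_HOUR
    exec_hours = test_cases / TC_EXECUTION_PER_HOUR
    # +10% to the bug rate for each missing practice (no unit tests, no review...)
    bug_rate = BASE_BUG_RATE + 0.10 * missing_practices
    # assume re-testing a failed case costs about one execution slot
    verification_hours = (test_cases * bug_rate) / TC_EXECUTION_PER_HOUR
    subtotal = doc_hours + exec_hours + verification_hours
    return round(subtotal * (1 + risk_buffer), 1)

# A 50-test-case feature with no code review (one missing practice):
print(estimate_feature(50, missing_practices=1))  # → 11.0
```

Each term maps directly to one bullet above, which is exactly why every figure in a Mathematical estimate is easy to explain.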

The advantages of this approach are the following:

  • Each figure can be easily explained.
  • The Formulas used during estimation are supportable and adaptable.
  • This approach makes you think of each process/activity in the scope of every feature you estimate.

The disadvantages of this approach are:

  • This approach requires the most investments in terms of efforts.
  • As it was already mentioned, Mathematical approach requires deep knowledge of estimation object and overall test strategy.
  • The speed constants can cause problems, as they are calibrated to an average test specialist, and individual speed may differ.

Comparative

This is a commonly used approach which allows us to estimate the set of features divided by complexity instead of each feature directly.

For example, we have a set of features named A, B, C and D.
At first, we need to define the complexity sizes for our features; usually T-shirt sizes are used. Then we match our features to those sizes, e.g. features A and D are of size L, while B and C are of size M. Thereafter we estimate the sizes instead of each feature directly, using whichever approach you prefer.
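
A minimal sketch of this matching step, where the hours assigned to each T-shirt size are made-up illustrative values:

```python
# T-shirt sizing: estimate each complexity size once,
# then map every feature to a size instead of estimating it directly.
# The hour values per size are illustrative assumptions.
size_estimates = {"S": 4, "M": 8, "L": 16}  # hours per complexity size

feature_sizes = {"A": "L", "B": "M", "C": "M", "D": "L"}

total = sum(size_estimates[size] for size in feature_sizes.values())
print(total)  # 16 + 8 + 8 + 16 → 48
```

Adding a new feature to the scope then costs only one decision: which size it belongs to.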

The Pros of this method are:

  • Requires less time than the previous one.
  • Easy to support and react to any kind of changes in scope. For example, if new features are added, you should just define their complexity size, and that’s it.

The Cons of this method are:

  • All features are unique, and averaging them can become too risky in terms of overestimating or underestimating.
  • Doesn’t really work for small project estimation.
  • Personally, I prefer to work with actual numbers at the feature level rather than at the complexity-size level.

Expert evaluation

The easiest, the fastest, and the most mystical process. With this approach, a person who is known to be an expert states an effort estimate, and it is considered more or less accurate.

One tip if you have to follow this approach: as a self-check, decompose features/activities and evaluate the subcomponents over and over again until you reach the desired level of confidence.
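
The self-check amounts to summing a decomposition and comparing it against the gut-feeling figure. A toy sketch, where all the numbers are hypothetical:

```python
# Compare an expert's gut estimate with the sum of decomposed sub-tasks.
# All figures here are hypothetical, for illustration only.
gut_estimate_hours = 20

breakdown = {
    "test documentation": 5,
    "test data": 2,
    "functional testing": 8,
    "bug verification": 4,
}

decomposed_total = sum(breakdown.values())
deviation = abs(gut_estimate_hours - decomposed_total) / decomposed_total
# If the deviation is too large, decompose further and re-check.
print(decomposed_total, f"{deviation:.0%}")  # → 19 5%
```

Here the gut figure is within a few percent of the decomposed total, so the expert can stop; a large gap would mean splitting the sub-tasks further.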

The Pros of this method are:

  • The one and only pro is that an estimate can be produced in the blink of an eye. I've heard of experts who can estimate an activity even before they have read the full description of the project.

The Cons of this method are:

  • Such estimates are almost impossible to explain adequately.
  • Another person working with an estimate of this kind would not understand it.
  • The probability of a mistake is high and depends entirely on the expert's skill.

As a conclusion

I would like to say that there are tasks, activities, and situations where each of the methods described above is applicable. So don't get stuck on one method; use them all, and use them wisely.

Personally, I prefer to use them all simultaneously if I have enough time. This allows me to check each figure in my estimate several times and gives me more confidence in it.

Test Engineer at Sigma Software. Has around 4 years of experience in testing. Interested in: all testing types, process analysis/improvement, estimating, smart documentation
