In agile environments we usually assign story points to all our development tasks, understanding “development” as all the work needed to deliver a feature (not only code).

In some teams, the automation of the tests related to a feature is included in the Definition of Done, and in those cases we need to estimate the test automation effort as part of the development effort; in other teams, not all tests can be automated within the sprint, and a backlog of automation tasks is needed.

Most teams don’t estimate test automation tasks, but why? Don’t they require any effort to complete? Don’t they add value to clients/users or to the business/product? I don’t have that answer, so I won’t dive deep into it.

What we are going to dive deep into is how to estimate test automation tasks. Keep in mind that this guide is based on the following assumptions:

  • This guide is for UI, API, and user acceptance tests. Other variables and risks/complexity are probably involved when estimating, for example, performance automation tasks.
  • It applies only to teams that keep a backlog of test automation tasks (i.e., effort is not estimated as part of the development of the feature).
  • These values/references are not set in stone; I wrote this guide for my team, but every team defines its own standard according to the seniority of its team members, the risks/complexity of the business/product, its workflow, its resources, and so forth.

 

Story points guide for test automation tasks

 

0.5 – easy, no-risk/no-complexity fix or minor feature

Should not take longer than 1 hour to implement and verify.

Examples:

  • Changing a selector in an already existing test.
  • Updating a verification/assertion in an already existing test (both shown in the sketch below).
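For instance, the sketch below shows what such a 0.5-point change might look like, assuming a Playwright/TypeScript suite (the route, test id, and heading text are hypothetical): only the locator and the expected text change in an existing test.

```typescript
import { test, expect } from '@playwright/test';

test('user can open the settings panel', async ({ page }) => {
  await page.goto('/dashboard');

  // Changed selector: the old class-based locator ('.settings-btn') broke
  // after a UI change, so the test now targets a stable test id instead.
  await page.getByTestId('settings-button').click();

  // Updated assertion: the panel heading was renamed from "Settings" to
  // "Preferences", so only the expected text changes.
  await expect(page.getByRole('heading', { name: 'Preferences' })).toBeVisible();
});
```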

 

1 – easy and minimal complexity; takes a few hours of effort to implement

A straightforward task that takes minor effort to implement and verify.

Automating an easy test case with a few verification steps.

Doesn’t require adding new data (or mock data) or new libraries.

Examples:

  • Modifying existing automation scripts to accommodate minor changes in the user interface.
  • Writing new automated tests for a small, isolated feature (see the sketch below).
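As a reference, a 1-point task could be a brand-new test like this sketch, again assuming Playwright/TypeScript (the page, label, and confirmation message are hypothetical): a few steps, one verification, and no new data, mocks, or libraries.

```typescript
import { test, expect } from '@playwright/test';

test('newsletter signup shows a confirmation message', async ({ page }) => {
  await page.goto('/newsletter');

  // A few straightforward steps against a small, isolated feature.
  await page.getByLabel('Email').fill('qa@example.com');
  await page.getByRole('button', { name: 'Subscribe' }).click();

  // A single verification step; no test data setup or new libraries needed.
  await expect(page.getByText('Thanks for subscribing!')).toBeVisible();
});
```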

 

2 – still relatively straightforward, but more complex and higher in volume than a “1”

Could take 1-2 days to implement and verify.

May require mock data or setting up new data.

Can be flaky and may require mechanisms to avoid/reduce flakiness.

Examples:

  • Same as a “1” but adding mock data or network interception (see the sketch below).
  • Enhancing existing automation scripts to handle additional functionalities or error scenarios.
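The sketch below illustrates the interception part of a typical 2-point task, assuming Playwright/TypeScript (the endpoint and payload are hypothetical): the test itself stays simple, but the mock removes the dependency on a live backend.

```typescript
import { test, expect } from '@playwright/test';

test('orders page renders mocked orders', async ({ page }) => {
  // Intercept the backend call and serve mock data, so the test does not
  // depend on a live API or on pre-existing records.
  await page.route('**/api/orders', (route) =>
    route.fulfill({
      status: 200,
      contentType: 'application/json',
      body: JSON.stringify([{ id: 1, status: 'shipped' }]),
    })
  );

  await page.goto('/orders');
  await expect(page.getByText('shipped')).toBeVisible();
});
```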

 

3 – moderate complexity or size

An average piece of work that takes some effort but does not introduce risks or technical debt.

May require mock data or new libraries.

May require connection to external resources.

Examples:

  • Automating a moderately complex test case involving multiple steps and verifications.
  • Writing automated tests for a medium-sized feature with various use cases.
  • Same as a “2” but adding mock data or interception that requires extra mechanisms to avoid/reduce flakiness (see the sketch below).
  • Maintaining/updating mock data or mock services.
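As an illustration of that last kind of 3-point work, the sketch below combines mock data with explicit synchronization on the network response, one common mechanism to reduce flakiness (Playwright/TypeScript assumed; the endpoint and selectors are hypothetical):

```typescript
import { test, expect } from '@playwright/test';

test('search results load deterministically', async ({ page }) => {
  // Serve mock results for the search endpoint.
  await page.route('**/api/search*', (route) =>
    route.fulfill({
      status: 200,
      contentType: 'application/json',
      body: JSON.stringify({ results: [{ title: 'First hit' }] }),
    })
  );

  await page.goto('/search');

  // Register the wait BEFORE triggering the request, then synchronize on the
  // actual response instead of using a fixed sleep; this keeps the test stable.
  const responsePromise = page.waitForResponse('**/api/search*');
  await page.getByPlaceholder('Search').fill('flaky tests');
  await page.keyboard.press('Enter');
  await responsePromise;

  await expect(page.getByText('First hit')).toBeVisible();
});
```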

 

5 – moderate to high complexity/size, some risks

Could take a week to implement and verify; a more complex and higher-volume task that requires some analysis and a thorough implementation.

May require a connection to external resources.

Examples:

  • Implementing a GraphQL connection (see the sketch below).
  • Automating a complex workflow with multiple interactions across different modules.
  • Writing automated tests for a large and intricate feature requiring extensive validation scenarios.
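For the GraphQL bullet, here is a minimal sketch of an API-level test using Playwright’s request fixture in TypeScript (the endpoint, query, and response shape are hypothetical); the 5-point effort usually lies in authentication, environments, and error cases around this core:

```typescript
import { test, expect } from '@playwright/test';

test('GraphQL API returns the current user', async ({ request }) => {
  // POST the query to the GraphQL endpoint; `data` is serialized as JSON.
  const response = await request.post('https://example.com/graphql', {
    data: { query: 'query { currentUser { id name } }' },
  });

  expect(response.ok()).toBeTruthy();

  // GraphQL wraps results in a top-level `data` field.
  const body = await response.json();
  expect(body.data.currentUser).toHaveProperty('name');
});
```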

 

8 – high complexity/size, moderate risks/uncertainty

This is the maximum size of a story that is acceptable to include in a sprint; it should be broken down into smaller stories if possible.

Examples:

  • Refactoring and optimizing existing automation frameworks or test suites (see the sketch below).
  • Automating a highly complex scenario involving integrations with external systems, complex data setups, and numerous dependencies.
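To give a flavor of the refactoring example, the sketch below extracts a repeated flow into a page object so tests stop duplicating selectors (Playwright/TypeScript assumed; all names are hypothetical). The 8-point size comes from migrating an entire suite to this kind of structure, not from the class itself.

```typescript
import { expect, type Page } from '@playwright/test';

// Page object extracted during the refactor: tests call these methods
// instead of repeating raw selectors and navigation steps everywhere.
export class CheckoutPage {
  constructor(private readonly page: Page) {}

  async addItemAndCheckout(sku: string) {
    await this.page.goto(`/products/${sku}`);
    await this.page.getByRole('button', { name: 'Add to cart' }).click();
    await this.page.getByRole('link', { name: 'Checkout' }).click();
  }

  async expectOrderConfirmed() {
    await expect(this.page.getByText('Order confirmed')).toBeVisible();
  }
}
```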

 

13 – high complexity/volume, high risk/uncertainty

Should be broken down into smaller and more detailed stories instead.

Examples:

  • Creating a new automation framework or significantly redesigning an existing one.
  • Writing automated tests for an extensive, multifaceted feature with a wide range of use cases and interactions.

 

Is it worth estimating?

For me, YES!

When we estimate test automation tasks, we probably do not seek to measure the velocity of our SDETs, but rather to measure the effort involved in creating and maintaining our automated test suites.

One of the advantages of estimating is keeping boundaries and limiting the WIP (work in progress) of a team or a team member at any given moment, regardless of whether you work with sprints or not.

Also, estimating means that we don’t start an automation task unless its scope and what is needed to complete it (cases/scenarios, resources, data, external services, and so forth) are clearly defined, which makes the task achievable and reduces uncertainty.

 

This post is inspired by Story Points Estimation: From Theory to Practice