Testability is a concept born in the ASIC (semiconductor) industry: a process that incorporates rules and techniques into the design of a product to facilitate its testing.

Beginnings of the Semiconductor Industry

In the early days of the semiconductor industry, companies developed the semiconductor and delegated testing to other companies or customers, who found flaws in the products, generating a high rate of returns, failures, and complaints.

On the other hand, in the technology industry time to market is very valuable (whoever delivers quality products in a short development time gains a competitive advantage), and designing and manufacturing the product and then sending it to a third party for testing meant a six-month delay in market launch, as we can see in this document.

Special and complex hardware (ATE, automatic test equipment) was also required for each type of board or device; there was no standard, and the testing conditions were difficult to master in their two main aspects: observation and control.

 

Observation and Control

To test any system it is necessary to bring the system to a known state, provide input data/conditions (test data) and observe the system to verify that it works (behaves) as designed and built.

If we can’t control and observe, then we can’t be sure that the system works as it should.
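The control/observe cycle above can be sketched as a minimal test, using a toy `Counter` class invented here purely for illustration:

```python
# A minimal sketch of "control and observe" in a test. The Counter
# class is a hypothetical system under test, defined for illustration.

class Counter:
    """Toy system under test: holds a count we can set and read."""
    def __init__(self, start=0):
        self.value = start          # controllable initial state

    def increment(self, step=1):
        self.value += step          # the behavior under test

# 1. Control: bring the system to a known state.
counter = Counter(start=10)

# 2. Provide input data/conditions (test data).
counter.increment(step=5)

# 3. Observe: verify the system behaves as designed.
assert counter.value == 15
```

If either step 1 (control) or step 3 (observation) were impossible, the assertion would tell us nothing about the system's behavior.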

 

Develop Testability Strategy

The semiconductor industry decided to adopt “testability” principles early in the design process to ensure maximum testability with minimum effort. This meant changes to the entire semiconductor development process.

The strategy they followed was based on these main aspects:

  1. Technology Selection. When selecting a technology or vendor, they would ensure that there was enough performance and gate-count margin to allow for testability insertion.
  2. Commit to Test Design Practices. Commit to testability design practices: use them, review and improve them periodically before starting the design of new products, and audit them (as part of the review of product development processes).
  3. Establish a “fault grade” requirement. This means establishing the degree of fault they were willing to allow for each device, which should also be defined at the design stage since it drives many of the decisions in the test strategy. The fault grade requirement is considered an index of the cost of the device; failing to define it impacted profits over the life of the device.
  4. Establish testing patterns.
  5. Choose/build structured tools. By establishing test patterns, they also achieved standard test hardware. In this case, “scan testing” was preferred for testing logical sequences.
  6. Establish a set of patterns to speed up debugging.

In addition, different levels of testing were established.

If you want to learn more about each of these levels of semiconductor testing, read the IEEE Std 1149.1 (JTAG) testability standard.

 

More considerations about Testability

As a consequence of implementing a testability strategy, the hardware of the devices changed: extra logic was added for testing, which added more parts and materials to each device.

The fault simulation and test-application criteria were also established.

 

Testing Methodology

Design for Testability was established in IEEE Std 1149.1 as a methodology to simplify the problems associated with product development at all levels of testing. The design of any new product must be planned so that it can be tested in every phase of the product life cycle.

 

Testability in Software

When we bring testability to software we are talking about design and architectural decisions that will allow us to easily and effectively test our systems.

We can go further and define it as a process that incorporates rules and techniques into the design of a (software) product to facilitate its testing, creating rapid feedback loops in the development process to find and eliminate flaws in the code or, even better, avoid them.

It is about facilitating (reducing uncertainty and complexity) testing in each and every one of the stages/phases of the development process, from design until we receive feedback from our users/customers.
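One such design decision is dependency injection, which reduces uncertainty by letting tests control what the code depends on. A minimal sketch (the `greeting` function and the injected `clock` parameter are illustrative assumptions, not from the original text):

```python
# Sketch: a design decision (dependency injection) that makes code
# testable. greeting() and its clock parameter are illustrative.

import datetime

def greeting(clock=datetime.datetime.now):
    """Return a greeting based on the current hour.

    Injecting `clock` lets tests control "now" instead of
    depending on the real, uncontrollable system time.
    """
    hour = clock().hour
    return "Good morning" if hour < 12 else "Good afternoon"

# In production, greeting() uses the real clock.
# In a test, we control the state by injecting a fixed time.
fixed_morning = lambda: datetime.datetime(2024, 1, 1, 9, 0)
assert greeting(clock=fixed_morning) == "Good morning"

fixed_afternoon = lambda: datetime.datetime(2024, 1, 1, 15, 0)
assert greeting(clock=fixed_afternoon) == "Good afternoon"
```

Without the injected dependency, testing the "morning" branch would require running the test suite before noon.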

 

Analogies with semiconductor testing

Observation and Control

In software we also need observation and control.

To test any system (an information system, in this case) it is necessary to bring the system to a known state, provide input data/conditions (test data, environment variables) and observe the system to verify that it behaves as we designed and built it (better known as acceptance criteria).

As with semiconductors, if we can’t control and observe, then we can’t be sure that the system works as it should, we’re not really testing.
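The environment variables mentioned above are one such control point. A small sketch, assuming a hypothetical `FEATURE_FLAG` variable and `feature_enabled()` helper:

```python
# Sketch: controlling a system through an environment variable and
# observing its behavior. FEATURE_FLAG and feature_enabled() are
# hypothetical names used for illustration.

import os

def feature_enabled():
    """Read configuration from the environment (a control point)."""
    return os.environ.get("FEATURE_FLAG", "off") == "on"

# Control: put the system in a known state via the environment...
os.environ["FEATURE_FLAG"] = "on"
assert feature_enabled() is True       # ...and observe.

os.environ["FEATURE_FLAG"] = "off"
assert feature_enabled() is False      # Observe the other state.
```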

Software Changes

Just as semiconductors added new logic to make testing easier, our modern development frameworks now incorporate unit test, API, and UI test libraries that make automation easier.
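Python's standard library, for example, ships a unit-test framework (`unittest`) out of the box; the `parse_price` function under test here is an invented example:

```python
# Sketch: the unit-test support built into Python's standard library
# (unittest), analogous to the test logic semiconductors added on-chip.

import unittest

def parse_price(text):
    """Tiny function under test (illustrative)."""
    return round(float(text.strip().lstrip("$")), 2)

class ParsePriceTest(unittest.TestCase):
    def test_strips_currency_symbol(self):
        self.assertEqual(parse_price("$19.99"), 19.99)

    def test_strips_whitespace(self):
        self.assertEqual(parse_price("  3.5 "), 3.5)

# Run programmatically so the sketch is self-contained; normally you
# would simply run `python -m unittest`.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ParsePriceTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```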

But there are still plenty of things we can do to improve the testability of our software.

Testing Levels

Semiconductor test levels look like our testing pyramid, don’t they?

Requirements

Aren’t the requirements similar to our definition of functional and non-functional requirements?

 

Testing in our current software development cycles

Just as testing from the design stage allowed the semiconductor industry to improve its time to market while delivering high-quality products, implementing testing from software design itself allows us to achieve highly efficient development cycles, with constant feedback and the confidence that we are delivering the highest value to our customers/users with the highest possible quality.

From the moment we plan functionality, we must start thinking about testing (cases, scenarios, strategies, data and, above all, establishing good acceptance criteria); in development, we add unit and integration tests; in testing, we refine and execute cases and scenarios and automate tests; at release, and afterwards, we constantly monitor/observe what is happening in production in order to make the right decisions.
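An acceptance criterion defined during planning can later become an automated test almost verbatim. A sketch in Given/When/Then form, where `apply_discount` and the `SAVE10` rule are invented for illustration:

```python
# Sketch: an acceptance criterion expressed as an automated test in
# Given/When/Then form. apply_discount and SAVE10 are illustrative.

def apply_discount(total, code):
    """Hypothetical rule: 'SAVE10' gives 10% off orders over 50."""
    if code == "SAVE10" and total > 50:
        return round(total * 0.9, 2)
    return total

# Given a cart total over 50
total = 80.0
# When the customer applies the code SAVE10
discounted = apply_discount(total, "SAVE10")
# Then the total is reduced by 10%
assert discounted == 72.0
# And smaller orders are not discounted
assert apply_discount(40.0, "SAVE10") == 40.0
```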

 

Whenever we design a new product, we think about every architectural detail: whether it is monolithic or microservices, which database best suits our needs (relational or non-relational), which framework best suits what we want to build, which tools will give us the best support when we need to scale, which third parties help us improve our development and delivery times. But how many times do we stop to think about:

  • What kind of test do our products and businesses require?
  • How can we automate the tests of our product? UI, API?
  • Do we evaluate testability when we are evaluating third parties that will interact with our software? How are we going to test them? How can we automate them? How are we going to monitor them?
  • What testing tools are best suited to our needs?
  • What tools facilitate our testing activities?
  • How are we going to monitor the behavior of our environments?
  • What tools allow us to constantly observe the status of our systems?
  • Are our environments/processes prepared to simulate failures?
  • Are we providing the best possible support to testers?
  • Are we being efficient (maximum testability with minimum effort)?
  • Do we even think about disaster management? What is our worst scenario? Murphy’s Law is always present.
  • Do we consider testing aspects as part of our software architecture (what, when and how are we going to test)?
  • Do we have fault tolerance in our equipment and software?
  • Do we review our testing processes to change/improve them?
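On the question of simulating failures, fault injection can be sketched in a few lines; the `fetch_profile` function, its fallback behavior, and the broken backend are all hypothetical:

```python
# Sketch: simulating a failure (fault injection) to check fault
# tolerance. fetch_profile and its fallback are illustrative.

class UpstreamError(Exception):
    pass

def fetch_profile(user_id, backend):
    """Return a profile, degrading gracefully if a dependency fails."""
    try:
        return backend(user_id)
    except UpstreamError:
        return {"id": user_id, "name": "unknown"}   # safe fallback

def broken_backend(user_id):
    raise UpstreamError("injected fault")           # simulated failure

# Fault injection: verify behavior under the worst scenario.
assert fetch_profile(42, broken_backend) == {"id": 42, "name": "unknown"}
```

Running the same check against a healthy backend confirms the fallback only activates on failure.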

 

Conclusions

  • Let’s take inspiration from the semiconductor industry and set our testability standards.
  • Software testability is design and architecture decisions that will allow us to easily and effectively test our systems. It is about facilitating (reducing uncertainty and complexity) testing in each and every one of the stages/phases of the development process, from design until we receive feedback from our users/customers.
  • When testing activities are of low priority in our development process, the result is applications/systems that are difficult to test. With the correct design and the correct tools/techniques, we can achieve efficiency (maximum testability with minimum effort).
  • Testing is not a phase or an isolated activity; it is a whole set of activities throughout the development process. Therefore, the entire team must be involved, and we are all responsible for the quality of the software.
  • The earlier we start thinking about testing, the easier, cheaper, and more beneficial it will be for teams, projects, and products.
  • We can create environments in which testing is not only easy and efficient, but we can also automate everything we can automate.
  • Let’s make constant reviews of our processes to bring (experience) tools and techniques that make our tests more efficient and valuable within the entire development process.