
Aspects of a Thorough Automated Testing Strategy


Automated testing is one of, if not the, most critical pieces of a reliable CI/CD process, which is defined below.

  • Continuous Integration (CI)
    • The practice of pushing working code (code that meets a properly defined definition of done, or DoD) back to the repository as frequently as possible so that it can be integrated with the rest of the codebase.
  • Continuous Delivery (CD)
    • The practice where teams build software in small iterations with a high level of confidence that it can be deployed at any time, following a flow that automates the building, testing, and deployment of the software so it ships faster and more often.

Notice that the definition of CI references working code. What "working code" means often differs from one organization or team to another, which is where the definition of done (DoD) plays a vital role.

DoD should contain some form of the following requirements:

  • Code that follows a high-level standard or set of guidelines to ensure consistency across the application.
  • Automated tests that verify the correctness of the code, along with metrics (such as test coverage) that indicate how well the code has been tested.
  • A manual review by your peer(s).
  • And more… (I will dive into this in a future article).

The second bullet references automated tests, so I will now outline some of the core aspects of an automated testing process. A proper automated testing strategy should include as many of the following as possible:

  1. UI
    1. Verifies that the user interface is easy to use for its target audience
    2. Performs proper client-side validation
    3. Validates inputs and outputs
    4. Works in multiple client types (browser, phone, tablet, etc.)
    5. Verifies that the feature delivers what it was supposed to
  2. Manual
    1. Uses defined test cases / use cases / behaviors
    2. Verifies and validates logic and behavior where automation offers low assurance, i.e., where automated tests are likely to generate false negatives (tests that fail when they should pass)
    3. Does not test things that should be automated
  3. Contract
    1. API calls follow a pre-agreed-upon "contract" for each API endpoint
    2. Defines the inputs and outputs, error messages, return codes, etc.
    3. Clear and concise documentation for end consumers
    4. Kept up-to-date and accurate
    5. Sample tests (a minimal sketch appears after this list)
  4. Unit
    1. Fast and stateless; tests what the programmer intended the code to do
    2. Focuses on a single method, class, or function in isolation
    3. Uses static results to improve speed and to test only the code under test
    4. Uses mocks or stubs (see the sketch after this list)
  5. Acceptance
    1. Tests the application as a whole and what the customer wants the feature to do
    2. Provides assurance that higher-level functionality operates as designed, for example by testing against user stories
    3. Uses simulated service endpoints
    4. Once the build passes these tests, it is passed to manual testing (exploratory and UI)
  6. Integration
    1. Ensures the application interacts correctly with other systems and services
    2. Does not use stubbed services
    3. Tests new system versions until they all work together
    4. Brittle and slow, so minimize the number of these tests by ensuring reliable unit and acceptance tests are in place
    5. Much harder to debug and fix
  7. Non-functional requirements
    1. Tests that validate and verify that the NFRs are implemented correctly based on any defined SLAs and SLOs. Areas of focus can include:
      1. Performance
      2. Data
      3. Security
      4. Availability
      5. Etc…
  8. Exploratory testing
    1. Think outside the box to determine any additional tests that can be run
    2. Allows the team to learn, experiment, and discover
    3. Any test that adds value to the testing process but is not part of the defined testing process
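
To make the contract item above more concrete, here is a minimal sketch of a consumer-side contract check using pytest and the requests library. The /users/{id} endpoint, the base URL, and the expected fields are hypothetical and stand in for whatever contract your teams have actually agreed on.

```python
# Minimal consumer-side contract check (hypothetical endpoint and fields).
# Assumes pytest and requests are installed and a test instance of the API
# is reachable at BASE_URL.
import requests

BASE_URL = "https://api.example.test"  # hypothetical test environment


def test_get_user_honors_contract():
    response = requests.get(f"{BASE_URL}/users/42", timeout=5)

    # Agreed return code for an existing user
    assert response.status_code == 200

    body = response.json()
    # Agreed output fields and their types
    assert isinstance(body["id"], int)
    assert isinstance(body["email"], str)


def test_missing_user_returns_agreed_error():
    response = requests.get(f"{BASE_URL}/users/0", timeout=5)

    # Agreed return code and error shape for a missing user
    assert response.status_code == 404
    assert "error" in response.json()
```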
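
Similarly, here is a minimal sketch of a fast, stateless unit test that uses a mock in place of a real dependency. OrderService, PriceClient-style lookups, and total() are hypothetical names used only to illustrate the technique of isolating the code under test.

```python
# Minimal unit test with a mocked dependency (hypothetical classes).
# The real price lookup is replaced so the test is fast, stateless,
# and exercises only the code under test.
from unittest.mock import Mock


class OrderService:
    """Hypothetical class under test."""

    def __init__(self, price_client):
        self.price_client = price_client

    def total(self, items):
        return sum(self.price_client.price_of(item) for item in items)


def test_total_sums_item_prices():
    price_client = Mock()
    price_client.price_of.return_value = 10  # static result, no network call

    service = OrderService(price_client)

    assert service.total(["a", "b", "c"]) == 30
    assert price_client.price_of.call_count == 3
```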

A few other words of advice that I have learned over the years:

  1. Automated testing should not be difficult or feared. If it is, that more than likely means your system is over-architected or has critical design flaws.
  2. The entire team should work together when possible to get the tests passing as soon as possible, minimizing disruption to the consumers of the application.
  3. Unit and acceptance tests are typically easier to write and faster to run, so more time should be invested in them. Slower and more complicated tests, such as integration and NFR tests, should be run less frequently, but often enough to ensure there is always a production-like build ready to be pushed as needed. These will take longer to run, and that's OK.
  4. Developer (local) environments and the rest of the environments should emulate production as closely as possible and be self-service.
  5. Tests should be run in parallel when possible to speed up the testing process; for example, NFR and integration tests can be run concurrently (see the sketch after this list).
  6. When tests fail downstream, they should be evaluated not only to learn from them, but also to determine what tests can be added upstream to minimize downstream defects, which are usually more difficult to debug and fix.
  7. The state of the build should be visible, on a dashboard or some other way, so the entire team is always aware of the status.
  8. Metrics such as complete vs. accurate, number of tests, and test coverage should be captured and compared over time.
  9. Your testing strategy should also be a form of continuous learning, making sure the immediate team is learning from its mistakes and sharing with the rest of the product development teams at large.
  10. And… (many more could be listed, but these are some important ones).
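
As a sketch of the parallel-run advice above, the snippet below launches two independent test suites concurrently using Python's standard library. The suite paths and the pytest command are assumptions; adapt them to your own project layout and test runner.

```python
# Run two independent test suites concurrently (assumed paths and runner).
# Each suite must not share state with the other for this to be safe.
import subprocess
from concurrent.futures import ThreadPoolExecutor

SUITES = ["tests/nfr", "tests/integration"]  # hypothetical suite locations


def run_suite(path):
    # Invokes pytest on one suite; swap in your own test runner as needed.
    return subprocess.run(["pytest", path], capture_output=True, text=True)


def main():
    with ThreadPoolExecutor(max_workers=len(SUITES)) as pool:
        results = list(pool.map(run_suite, SUITES))

    # Report each suite and fail the overall run if any suite failed.
    for path, result in zip(SUITES, results):
        print(f"{path}: exit code {result.returncode}")
    if any(r.returncode != 0 for r in results):
        raise SystemExit(1)


if __name__ == "__main__":
    main()
```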

Ultimately, senior stakeholders and leadership should embrace a "Green Build" culture, meaning there should always be a current build ready for deployment, and a "Pull Cord (Andon)" culture, prioritizing quality over features.