3-Layer Automated Testing
Birth of Quality Engineering
Quality Assurance practice was relatively simple when we built monolithic systems with traditional waterfall development models. Quality Assurance (QA) teams would wait months for product development to finish before starting validation at the GUI layer. To enhance the testing process, we had to spend a lot of effort and money (on commercial tools) automating the GUI with tools like Micro Focus UFT, Selenium, TestComplete, Coded UI, Ranorex, etc., and these tests were often complex to maintain and scale. Thus, most QA teams had to restrict their automated tests to smoke and partial regression suites, resulting in inadequate test coverage.
With modern technology, new-era tech companies, including Varo, have widely adopted microservices-based architecture combined with the Agile/DevOps development model. This opens up a lot of opportunities for the Quality Assurance practice, and in my opinion, this was the origin of the transformation from Quality Assurance to “Quality Engineering.”
The Common Pitfall
While automated testing gives us a massive benefit with the three R’s (Repeatable → run any number of times; Reliable → run with confidence; Reusable → develop once, share widely), it also comes with maintenance costs. I like to quote Grady Booch: “A fool with a tool is still a fool.” Targeting inappropriate areas would not give us the desired benefit. We should consider several factors when choosing the right candidates for automation, to name a few: the lifespan of the product, the volatility of the requirements, the complexity of the tests, business criticality, and technical feasibility.
It’s well known that the cost of a bug increases toward the right of the software lifecycle. So it is necessary to build several walls of defense to catch these bugs as early as possible (the Shift-Left testing paradigm). By adopting an agile development model with a fail-fast mindset, we have taken care of our first wall of defense. But to move faster in this shorter development cycle, we must build robust automated test suites that cover already-released features and make room for testing the new ones.
The 3-Layer Architecture
The Varo architecture comprises three essential layers.
- The Frontend layer (Web, iOS, Mobile apps) — User experience
- The Orchestration layer (GraphQL) — Makes multiple microservice calls and returns decisions and data to the frontend apps
- The Microservice layer (gRPC, Kafka, Postgres) — Core business layer
While mapping the microservice architecture to a testing strategy, several questions came up.
- Which layer to test?
- What to test in these layers?
- Does testing the frontend automatically validate the downstream services?
- Does testing multiple layers introduce redundancies?
We will try to answer these by analyzing the table below, which provides an overview of what these layers mean for quality engineering.
Drawing on the table, we have loosely adopted the Testing Pyramid pattern to invest in automated testing as follows:
- Full feature/functional validations on the Microservices layer
- Business process validations on the Orchestration layer
- E2E validations on the Frontend layer
The diagram below best represents our test strategy for each layer.
Note: Though we also have automated white-box tests such as unit tests and integration tests, we exclude those from this discussion.
Use Case
Let’s take the example below to illustrate how this pyramid works in practice.
The user is presented with a form to submit. The form accepts three inputs: Field A takes the user identifier, Field B takes a drop-down value, and Field C accepts an integer value (within a defined range).
Once the user clicks the Submit button, the GraphQL API calls Microservice A to determine the customer type. It then calls Microservice B to validate the acceptable range of values for Field C (which depends on the values of Fields A and B).
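To make this flow concrete, here is a minimal sketch of how that form submission might look as a GraphQL operation. The operation name, fields, and endpoint are hypothetical assumptions for illustration, not Varo’s actual API.

```typescript
// Hypothetical GraphQL mutation for the form described above. The operation
// name, fields (userId, dropDownValue, rangeValue), and endpoint are all
// illustrative assumptions.
const SUBMIT_FORM = `
  mutation SubmitForm($userId: ID!, $dropDownValue: String!, $rangeValue: Int!) {
    submitForm(userId: $userId, dropDownValue: $dropDownValue, rangeValue: $rangeValue) {
      accepted
      errors
    }
  }
`;

export async function submitForm(userId: string, dropDownValue: string, rangeValue: number) {
  // The orchestration layer resolves this mutation by calling Microservice A
  // (customer type) and then Microservice B (range validation for Field C).
  const response = await fetch("https://api.example.com/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: SUBMIT_FORM, variables: { userId, dropDownValue, rangeValue } }),
  });
  return response.json();
}
```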
Validations:
1. Feature Validations
✓ Positive behavior (Smoke, Functional, System Integration)
- Validating behavior with a valid set of data combinations
- Validating database
- Validating integration — impact on upstream/downstream systems
✓ Negative behavior
- Validating with invalid data (for example: Invalid authorization, Disqualified data)
2. Fluent Validations
✓ Evaluating field definitions — such as
- Mandatory field (not empty/not null)
- Invalid data types (for example: Int → negative value, String → junk values with special characters, or UUID → invalid UUID formats)
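These field-definition checks map naturally to data-driven tests. Below is a minimal sketch using Jest’s test.each; the validator and its rules (an integer within a 1–100 range) are assumptions for illustration.

```typescript
// Sketch of fluent validations as data-driven Jest tests. validateFieldC and
// its rules (integer within an assumed 1..100 range) are hypothetical.
function validateFieldC(raw: string): boolean {
  if (raw.trim() === "") return false; // mandatory field: not empty/not null
  const value = Number(raw);
  return Number.isInteger(value) && value >= 1 && value <= 100;
}

describe("Field C definition checks", () => {
  test.each([
    ["", false],        // mandatory field violated
    ["-5", false],      // Int → negative value
    ["abc!@#", false],  // String → junk value with special characters
    ["42", true],       // valid in-range integer
  ])("input %p accepted: %p", (input, expected) => {
    expect(validateFieldC(input)).toBe(expected);
  });
});
```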
Let’s look at how the “feature validations” can be written for the above use case by applying one of the test-case authoring techniques: Boundary Value Analysis.
Testing the scenario above would require 54 different combinations of feature validations; below is the rationale for picking the right candidates for each layer.
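How might a number like 54 arise? One plausible decomposition (purely an assumption for illustration) is three customer types for Field A, three drop-down options for Field B, and six boundary values for Field C’s range, since 3 × 3 × 6 = 54. The sketch below enumerates such combinations.

```typescript
// Boundary Value Analysis sketch. The equivalence classes here are assumed:
// three customer types, three drop-down options, and the six classic boundary
// values of a closed integer range [min, max].
const customerTypes = ["typeA", "typeB", "typeC"];
const dropDownOptions = ["option1", "option2", "option3"];

function boundaryValues(min: number, max: number): number[] {
  // min-1 and max+1 are the invalid boundaries; the rest are valid.
  return [min - 1, min, min + 1, max - 1, max, max + 1];
}

const combinations = customerTypes.flatMap((type) =>
  dropDownOptions.flatMap((option) =>
    boundaryValues(1, 100).map((value) => ({ type, option, value }))
  )
);

console.log(combinations.length); // 3 × 3 × 6 = 54
```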
Microservice Layer: This layer is delivered first, enabling us to invest in automated testing as early as possible (shift-left). The scope of our automation here is 100% of the scenarios above.
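At this layer, tests hit the service’s API directly. The sketch below assumes a thin, hypothetical wrapper around Microservice B’s gRPC endpoint (generated client code omitted); the names and response shape are illustrative.

```typescript
// Microservice-layer test sketch. validateRange is assumed to wrap the
// generated gRPC client for Microservice B; all names are illustrative.
import { validateRange } from "./clients/microserviceB"; // hypothetical wrapper

describe("Microservice B: Field C range validation", () => {
  // Positive behavior: a valid data combination is accepted.
  test("accepts an in-range value for a known customer type", async () => {
    const result = await validateRange({ customerType: "typeA", dropDown: "option1", value: 50 });
    expect(result.accepted).toBe(true);
  });

  // Negative behavior: disqualified data is rejected.
  test("rejects an out-of-range value", async () => {
    const result = await validateRange({ customerType: "typeA", dropDown: "option1", value: 101 });
    expect(result.accepted).toBe(false);
  });
});
```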
Orchestration Layer: This layer translates information between the microservice and frontend layers. We select at least two tests (one positive and one negative) for each scenario; the whole objective is to ensure the integration works as expected.
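A sketch of what such a positive/negative pair might look like against the GraphQL layer, reusing the hypothetical submitForm helper from the earlier sketch:

```typescript
// Orchestration-layer test sketch: one positive and one negative case,
// verifying the GraphQL integration rather than re-testing business rules.
import { submitForm } from "./helpers/graphql"; // the hypothetical helper above

describe("GraphQL orchestration: form submission", () => {
  test("positive: valid inputs flow through both microservices", async () => {
    const { data } = await submitForm("user-123", "option1", 50);
    expect(data.submitForm.accepted).toBe(true);
  });

  test("negative: an invalid value is rejected with errors", async () => {
    const { data } = await submitForm("user-123", "option1", -5);
    expect(data.submitForm.accepted).toBe(false);
    expect(data.submitForm.errors.length).toBeGreaterThan(0);
  });
});
```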
Frontend Layer: In this layer, we focus on E2E validations, which are part of the complete user journey. We ensure that at least one or more positive and negative scenarios are embedded in those E2E tests. Business priority (the data most frequently used by real users) helps us select the best scenarios for E2E validation.
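An E2E sketch for the happy path, written with Playwright; the URL, field labels, and confirmation text are assumptions for illustration.

```typescript
// Frontend E2E sketch (Playwright). One business-priority positive scenario
// embedded in the user journey; selectors and messages are hypothetical.
import { test, expect } from "@playwright/test";

test("user submits the form with the most common data combination", async ({ page }) => {
  await page.goto("https://app.example.com/form"); // hypothetical URL
  await page.getByLabel("Field A").fill("user-123");
  await page.getByLabel("Field B").selectOption("option1");
  await page.getByLabel("Field C").fill("50");
  await page.getByRole("button", { name: "Submit" }).click();
  await expect(page.getByText("Submission accepted")).toBeVisible(); // assumed confirmation
});
```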
Conclusion
There will always be some redundant tests across these layers, but that is the trade-off we accept to ensure correct quality gates at each layer. The benefit of this approach is that we achieve safe, faster deployments to Production through quicker testing cycles, better test coverage, and lower-risk decisions. In addition, spreading these functional test suites across the layers helps us isolate failures to the respective layer, saving time when troubleshooting an issue.
However, one size does not fit all. The decision has to be made based on an understanding of how the software architecture is built and the supporting infrastructure available to facilitate the testing effort. One of the critical success factors for this implementation is building a good quality engineering team with the right skills and proper tools. But that is another story, coming soon in “Quality Engineering: Redefined.”