Why do companies fail at test automation?
51Testing | 17 Years As A Software Testing Service Provider
If we ask a question right now: how does your company do test automation?

A test architect from the QA department of a large company may introduce all kinds of features and automation capabilities, while a test engineer in the same company might answer: "Automation? No, it's all manual."

Very interesting, isn't it? Two totally different answers to the same question, and this situation exists in most companies. Today we will talk about why companies fail at test automation.
When a company fails at test automation, it really means it has failed to build a test automation system. How do we define a "successful" test automation system? In my opinion, there are four basic criteria:
- Have a unified testing framework and platform.
- Have a basic CI/CD process, with the ability to trigger execution automatically and on a schedule.
- Have high test coverage, which effectively helps testers do regression and improves test efficiency.
- Have automated test results that are recognized and accepted by both testers and developers.
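The second criterion, regularly triggered execution, can be sketched in a few lines. This is a toy illustration using only the Python standard library; a real pipeline would use Jenkins, GitLab CI, or cron instead, and the stand-in command here replaces an actual test suite invocation:

```python
import sched
import subprocess
import sys
import time

def run_test_suite() -> int:
    """Invoke the test suite in a child process, the way a CI trigger would.
    A trivial stand-in command is used here; a real pipeline would run
    something like `python -m unittest discover` instead."""
    result = subprocess.run(
        [sys.executable, "-c", "print('suite passed')"],
        capture_output=True, text=True,
    )
    return result.returncode  # 0 means the suite passed

outcomes = []
scheduler = sched.scheduler(time.time, time.sleep)
# One near-immediate run; a real scheduler would re-enter itself on an interval.
scheduler.enter(0.1, 1, lambda: outcomes.append(run_test_suite()))
scheduler.run()
```

The point is not the scheduler itself but the shape of the loop: the trigger, the suite, and the recorded result are the minimum moving parts of criterion two.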
These four criteria may not seem difficult to achieve; however, talk to your colleagues and you will find they are rarely all met.
I will try to analyze the reasons why companies fail to build a successful test automation system in the following part.
1. Coding is a scarce skill
In the past, test engineers did purely manual testing, which required no coding skills. Then testers appeared who could use QTP or Rational Robot to do automated testing; at that time, these people were almost at the top of the testing professional food chain.

Since Web 2.0 became popular, a large number of projects have created huge demand for automated testing with Selenium and Watir. Testers began to realize the importance of coding: some test engineers started learning to code in order to do automated testing, and some developers switched to full-time test automation.

However, even now, coding is still a scarce skill among test engineers. Most of them only do functional testing, even if they want to learn a programming language.
Nowadays, most technology stacks for test automation, whether for interfaces, web, or mobile, are built on open source projects. They no longer offer QTP-style record and playback, so without coding skills, automated testing cannot be implemented.
2. Don’t know where to start
After a test engineer has mastered coding, another question comes up: where to start? Most test engineers are confused because they don't know which testing framework and tools to choose, or whether to start with interface, web, or mobile testing. In short, they cannot find the entry point.

Instead of learning a programming language first, many testers learn coding through frameworks such as Selenium and Appium, so their coding skills are limited to those framework APIs. Consequently, most testers can only start from the framework they are familiar with, and cannot fully weigh the advantages and disadvantages of the various frameworks.
- Web automation: Selenium
- Interface: Postman, JMeter
After the framework is chosen, we need to decide which layer to start from. This is a relatively big topic: unit, integration, interface, and UI tests are all candidates, and the test pyramid model is the classic answer.
In my opinion, though, the pyramid model is too idealistic in practice: unit testing is not easy to retrofit onto an existing codebase. Under a microservice architecture, I prefer the rugby (olive-shaped) layered model, which means interface testing should be done first. Of course, if you have sufficient capability, I certainly recommend sinking automated testing down to the lower layers to achieve a higher ROI.
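An interface-first test can be demonstrated end to end with nothing but the standard library. In this sketch a throwaway in-process HTTP service stands in for a real microservice; the endpoint path and payload are made up for illustration:

```python
import json
import threading
import unittest
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class UserHandler(BaseHTTPRequestHandler):
    """Toy service standing in for a real microservice endpoint."""
    def do_GET(self):
        if self.path == "/api/users/1":
            body = json.dumps({"id": 1, "name": "alice"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep test output quiet
        pass

class UserApiTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.server = HTTPServer(("127.0.0.1", 0), UserHandler)  # port 0 = any free port
        cls.port = cls.server.server_address[1]
        threading.Thread(target=cls.server.serve_forever, daemon=True).start()

    @classmethod
    def tearDownClass(cls):
        cls.server.shutdown()

    def test_get_user_returns_expected_payload(self):
        # The interface test checks status code and response body, not the UI.
        with urlopen(f"http://127.0.0.1:{self.port}/api/users/1") as resp:
            self.assertEqual(resp.status, 200)
            self.assertEqual(json.loads(resp.read()), {"id": 1, "name": "alice"})
```

Notice that the test knows nothing about pages or browsers: it asserts on the contract of the service, which is exactly why interface tests are cheaper to write and more stable than UI tests.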
3. Attach importance to framework and platform development but lose sight of the test cases
If you browse testing forums, you will find many posts discussing test frameworks and platforms, but very few about automated test case design.

Yet the core of an automated test system is definitely the test cases, not the frameworks and platforms.
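One concrete way to keep the focus on the cases rather than the plumbing is to separate test data from test logic, so adding a scenario means adding a row, not writing new code. The discount function below is a made-up example of a system under test:

```python
import unittest

def discounted_price(price: float, vip: bool) -> float:
    """Hypothetical business rule: VIP users get 20% off."""
    return round(price * (0.8 if vip else 1.0), 2)

# Test data lives in a table, readable by non-coders on the team.
CASES = [
    # (description,    price,  vip,   expected)
    ("regular user",   100.0,  False, 100.0),
    ("vip user",       100.0,  True,  80.0),
    ("vip rounding",   19.99,  True,  15.99),
]

class DiscountTest(unittest.TestCase):
    def test_discount_table(self):
        for desc, price, vip, expected in CASES:
            # subTest reports each row separately instead of stopping at
            # the first failure.
            with self.subTest(desc):
                self.assertEqual(discounted_price(price, vip), expected)
```

The framework here (plain `unittest`) is deliberately boring; the design effort goes into the `CASES` table, which is where the testing value actually lives.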
4. More bugs in the test case code than in the product code

The root cause of this situation is a lack of solid coding skills. Test engineers need to continuously strengthen their coding skills and start learning data structures, design patterns, databases, middleware, and so on. And when writing test cases, strictly follow the FIRST principles: Fast, Independent, Repeatable, Self-validating, Timely.
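The Independent and Repeatable properties in particular are where buggy test code usually comes from: tests that share mutable state pass or fail depending on execution order. A minimal sketch of the fix, using a made-up `ShoppingCart` as the system under test:

```python
import unittest

class ShoppingCart:
    """Toy system under test."""
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

class CartTest(unittest.TestCase):
    def setUp(self):
        # Independent and Repeatable: each test gets a fresh cart, so tests
        # can run in any order without leaking state into each other.
        self.cart = ShoppingCart()

    def test_add_one_item(self):
        self.cart.add("book")
        # Self-validating: the test asserts its own pass/fail outcome.
        self.assertEqual(self.cart.items, ["book"])

    def test_empty_cart(self):
        # Would fail if it ran against the cart left over from another test.
        self.assertEqual(self.cart.items, [])
```

Had the cart been a module-level shared object instead of being rebuilt in `setUp`, `test_empty_cart` would break whenever it ran second, a classic "bug in the test, not the product."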
5. Lack of effective maintenance of automated test cases
After the early accumulation of automated test cases and their integration into a CI system, the harvest season arrives: you start to enjoy the payoff of test automation. However, don't assume that test automation has successfully landed at this point. This is not the end of test automation but its beginning. With each version, the product and its functions change, and the expected behavior of the test cases changes with them. If these changes are not followed up in a timely and effective manner, stale test cases pile up and the suite becomes unmaintainable.
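One well-known way to keep UI test cases maintainable through product changes is the Page Object pattern: all locators for a screen live in one class, so when the page changes, only that class is edited, not every test that uses it. The sketch below uses a fake driver object so it runs without a browser; with real Selenium, `LoginPage` would wrap a `WebDriver` instead:

```python
class FakeDriver:
    """Stand-in for a real browser driver, so the sketch is self-contained."""
    def __init__(self):
        self.fields = {}
        self.clicked = []

    def type_into(self, locator, text):
        self.fields[locator] = text

    def click(self, locator):
        self.clicked.append(locator)

class LoginPage:
    # Locators change often as the product evolves; keeping them here means
    # a UI change touches one class instead of dozens of test cases.
    USERNAME = "#username"
    PASSWORD = "#password"
    SUBMIT = "button[type=submit]"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.type_into(self.USERNAME, user)
        self.driver.type_into(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)

driver = FakeDriver()
LoginPage(driver).login("alice", "s3cret")
```

Patterns like this don't remove the maintenance work described above, but they concentrate it: each product change maps to one small, obvious edit.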
Research and development system
1. Don’t have a clear priority for your team

If we rank all QA tasks on the Urgent-Important matrix, building the test automation system will most likely land in the Important-but-Not-Urgent quadrant. The reason is simple: we all agree on the benefits of test automation, but manual testing already works well enough to discover problems and ensure the efficiency and quality of delivery. What's more, implementing test automation increases short-term costs.
2. Don’t have a clear test automation system construction roadmap
Few companies manage to promote test automation at the company level, let alone do it successfully, so in the early stages it is hard to learn from others. The person in charge of the test automation system is usually a developer with strong engineering skills but little testing experience, who may not be able to figure out the path or how to drive the project forward. At the operational level, the engineers may have the testing background and coding skills that logically make up for that shortcoming, but they work at a different level: the lack of a global perspective generally keeps them focused only on their own areas of interest.
3. Too many interference factors
As mentioned above, automation requires continuous iteration, which makes it a long-term effort running through the entire product life cycle. The challenges encountered in the development process will therefore also appear in the construction of a test automation system.

A shortage of test engineers, tight project schedules, frequently changing requirements: all kinds of challenges will push you to stop automated testing and switch back to manual testing to speed up the release and launch.

In Internet companies with fast iterations and agile development, these imperfect processes and insufficient resources are normal. We have to make compromises; however, don't question the benefits of test automation, and don't give up.
To sum up, truly implementing automated testing means taking into account factors such as personnel capability, cost, project schedule, and organizational structure. None of these are "free"; in fact, they are expensive, so every step should be taken carefully and steadily. The improvement of R&D efficiency, the reduction of testing cost, and a closed-loop CI/CD process will ultimately validate the results of the test automation system.