SW QA Automation Recommendations Assessment Report
Status legend:
· In place and enabled - Accepted, implemented and running
· In place but disabled - Obsolete, out of maintenance
· In progress - Accepted, being implemented
· Manual - Accepted, implemented manually
· N/A - Not adopted
TAF Implementation · PPT
Each entry below lists a RECOMMENDATION, followed by its STATUS and COMMENTS fields.
Recommendation: Unit tests and gated check-in (see sketch below)
· Peer code review
· Automatically build against commit(s) / pull request (PR)
· Refuse to merge code if unit tests (UT) fail
Status:
Comments:
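As a minimal illustration of the tests a gated check-in would run on every commit or PR, here is a hedged sketch in pytest style; the apply_discount function and its rules are hypothetical examples, not part of this report.

    # Hypothetical unit tests executed by the PR build; a red test blocks the merge.
    import pytest

    def apply_discount(price: float, percent: float) -> float:
        """Function under test (illustrative only)."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    def test_apply_discount_happy_path():
        assert apply_discount(200.0, 25) == 150.0

    def test_apply_discount_rejects_invalid_percent():
        with pytest.raises(ValueError):
            apply_discount(100.0, 120)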
Recommendation: Regularly build on CI platform
· Nightly/daily build through CI pipeline
Status:
Comments:
Recommendation: Static code analysis (see sketch below)
· Local scan before pushing code
· Nightly/daily scan through CI pipeline
· Reports are monitored; issues are analyzed and resolved
Status:
Comments:
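A sketch of a local pre-push gate, assuming flake8 and pylint as the analyzers and "src"/"tests" as the target paths (substitute the team's standard tools and layout); the same script can also run as the nightly CI scan.

    # Pre-push static analysis gate (illustrative): block the push if any analyzer fails.
    import subprocess
    import sys

    # Tool choice and paths are assumptions; use whatever the team has standardized on.
    CHECKS = [
        ["flake8", "src", "tests"],
        ["pylint", "--fail-under=9.0", "src"],
    ]

    def main() -> int:
        for cmd in CHECKS:
            print("Running:", " ".join(cmd))
            if subprocess.run(cmd).returncode != 0:
                print("Static analysis failed:", " ".join(cmd), file=sys.stderr)
                return 1  # non-zero exit makes a git pre-push hook reject the push
        print("Static analysis passed.")
        return 0

    if __name__ == "__main__":
        sys.exit(main())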
Recommendation: Code coverage analysis (see sketch below)
· Data are collected through the nightly/daily build and used to improve test coverage
Status:
Comments:
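A sketch of a coverage gate for the nightly/daily build, assuming pytest with the pytest-cov plugin; the package path "src" and the 80% threshold are assumptions.

    # Nightly coverage gate (illustrative): fail the build if coverage drops below the threshold.
    import subprocess
    import sys

    MIN_COVERAGE = 80  # assumed threshold
    cmd = [
        "pytest",
        "--cov=src",                         # package under measurement (placeholder)
        f"--cov-fail-under={MIN_COVERAGE}",  # enforce the minimum
        "--cov-report=term-missing",         # list uncovered lines to guide new tests
        "--cov-report=xml",                  # machine-readable report for dashboards
    ]
    sys.exit(subprocess.run(cmd).returncode)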
Recommendation: Environments can be auto-deployed / rolled back (see sketch below)
· Automatically deploy environments if the build was successful
· Automated tests (e.g., BVT) are executed through the CI pipeline, and the environment(s) can be rolled back if they fail
Status:
Comments:
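A deploy-verify-rollback sketch: run a build verification test (BVT) against the freshly deployed environment and roll back automatically if it fails. The deploy/rollback commands and the endpoint list are placeholders for the team's real tooling.

    # Deploy, run a BVT, roll back on failure (illustrative; commands and URLs are placeholders).
    import subprocess
    import sys
    import requests

    DEPLOY_CMD = ["./deploy.sh", "staging"]
    ROLLBACK_CMD = ["./rollback.sh", "staging"]
    BVT_ENDPOINTS = [
        "https://staging.example.com/health",
        "https://staging.example.com/api/v1/version",
    ]

    def bvt_passes() -> bool:
        """Minimal BVT: every critical endpoint must answer 200 within 10 seconds."""
        for url in BVT_ENDPOINTS:
            try:
                if requests.get(url, timeout=10).status_code != 200:
                    return False
            except requests.RequestException:
                return False
        return True

    def main() -> int:
        if subprocess.run(DEPLOY_CMD).returncode != 0:
            print("Deployment failed; nothing to verify.", file=sys.stderr)
            return 1
        if bvt_passes():
            print("BVT passed; deployment kept.")
            return 0
        print("BVT failed; rolling back.", file=sys.stderr)
        subprocess.run(ROLLBACK_CMD)
        return 1

    if __name__ == "__main__":
        sys.exit(main())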
Recommendation: Automated API testing through CI pipeline (see sketch below)
· API collections are created for specific workflows
· Test suites are created for different feature sets
· Performance and security testing are also considered
Status:
Comments:
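An API workflow test sketch using pytest and Python requests (both named in this report); the base URL, endpoints, and payload are illustrative placeholders.

    # API collection for one workflow: create an order, read it back, delete it (illustrative).
    import pytest
    import requests

    BASE_URL = "https://staging.example.com/api/v1"  # placeholder

    @pytest.fixture
    def session():
        s = requests.Session()
        s.headers.update({"Accept": "application/json"})
        yield s
        s.close()

    def test_order_workflow(session):
        created = session.post(f"{BASE_URL}/orders", json={"item": "widget", "qty": 2}, timeout=10)
        assert created.status_code == 201
        order_id = created.json()["id"]

        fetched = session.get(f"{BASE_URL}/orders/{order_id}", timeout=10)
        assert fetched.status_code == 200
        assert fetched.json()["qty"] == 2

        deleted = session.delete(f"{BASE_URL}/orders/{order_id}", timeout=10)
        assert deleted.status_code in (200, 204)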
Recommendation: Automated functional testing (regression) (see sketch below)
· Test suites are created for different feature sets
· Automated tests are integrated into the CI pipeline
Status:
Comments:
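A regression UI test sketch with Selenium WebDriver (named later in this report), written in pytest style; the URL and element locators are placeholders.

    # Automated functional (regression) test with Selenium, runnable headless on a CI agent.
    import pytest
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    @pytest.fixture
    def driver():
        options = webdriver.ChromeOptions()
        options.add_argument("--headless=new")
        drv = webdriver.Chrome(options=options)
        yield drv
        drv.quit()

    def test_login_smoke(driver):
        driver.get("https://staging.example.com/login")  # placeholder URL
        driver.find_element(By.ID, "username").send_keys("qa_user")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "submit").click()
        # The dashboard header is the signal that login succeeded.
        header = WebDriverWait(driver, 10).until(
            EC.visibility_of_element_located((By.CSS_SELECTOR, "h1.dashboard-title"))
        )
        assert "Dashboard" in header.text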
Recommendation: Automated integration testing (E2E)
· Automated tests are integrated into the CI pipeline
Status:
Comments:
Recommendation: Performance testing / Security testing (see sketch below)
· Non-functional requirements are in place
· Tests are executed regularly before each release
· Detected issues are handled
Status:
Comments:
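The report names JMeter for load testing; purely as a lightweight stand-in to illustrate checking a latency non-functional requirement, here is a hedged Python sketch (the endpoint, request counts, and p95 budget are assumptions).

    # Minimal load probe: fire concurrent GETs and check the p95 latency budget (illustrative).
    import statistics
    import time
    from concurrent.futures import ThreadPoolExecutor
    import requests

    URL = "https://staging.example.com/api/v1/health"  # placeholder endpoint
    TOTAL_REQUESTS = 200
    CONCURRENCY = 20
    P95_BUDGET_SECONDS = 0.5  # assumed non-functional requirement

    def timed_get(_):
        start = time.perf_counter()
        requests.get(URL, timeout=10)
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        latencies = list(pool.map(timed_get, range(TOTAL_REQUESTS)))

    p95 = statistics.quantiles(latencies, n=20)[18]  # 95th percentile
    print(f"p95 latency: {p95:.3f}s (budget {P95_BUDGET_SECONDS}s)")
    assert p95 <= P95_BUDGET_SECONDS, "Latency non-functional requirement violated"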
Recommendation: Test results visualization (see sketch below)
· Test execution trends are visible to stakeholders
· Test issues are tracked and handled
· ALM tools are integrated with one another so that test status and overall project status are visualized together
Status:
Comments:
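A result-aggregation sketch: parse the JUnit XML reports that most test runners and CI pipelines can emit and print a summary a dashboard or chat webhook could consume; the report path is a placeholder.

    # Aggregate JUnit XML reports into a pass/fail summary for trend dashboards (illustrative).
    import glob
    import json
    import xml.etree.ElementTree as ET

    summary = {"tests": 0, "failures": 0, "errors": 0, "skipped": 0}

    for report in glob.glob("reports/junit-*.xml"):  # placeholder path
        root = ET.parse(report).getroot()
        # Handles both a <testsuites> wrapper and a bare <testsuite> root element.
        for suite in root.iter("testsuite"):
            for key in summary:
                summary[key] += int(suite.get(key, 0))

    summary["passed"] = summary["tests"] - summary["failures"] - summary["errors"] - summary["skipped"]
    print(json.dumps(summary, indent=2))
    # A follow-up step could POST this JSON to a dashboard or team chat webhook.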
Recommendation: UAT on staging/live deliverables
· Automated UAT is in place
Status:
Comments:
Recommendation: Staging / Live environment auto-deployment (CD) (see sketch below)
· Automated sanity check is in place
Status:
Comments:
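A post-deployment sanity-check sketch for the CD stage, assuming the application exposes a version endpoint and that the pipeline injects the expected release version through environment variables (the URLs, paths, and variable names are all placeholders).

    # CD sanity check: verify the deployed version and that key pages respond (illustrative).
    import os
    import sys
    import requests

    BASE_URL = os.environ.get("TARGET_URL", "https://staging.example.com")  # placeholder
    EXPECTED_VERSION = os.environ.get("RELEASE_VERSION", "0.0.0")           # injected by the pipeline

    def main() -> int:
        info = requests.get(f"{BASE_URL}/api/v1/version", timeout=10)
        if info.status_code != 200 or info.json().get("version") != EXPECTED_VERSION:
            print(f"Sanity check failed: expected version {EXPECTED_VERSION}", file=sys.stderr)
            return 1
        for path in ("/", "/login"):
            if requests.get(BASE_URL + path, timeout=10).status_code != 200:
                print(f"Sanity check failed: {path} not reachable", file=sys.stderr)
                return 1
        print("Sanity check passed.")
        return 0

    if __name__ == "__main__":
        sys.exit(main())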
Recommendation: Everyone can code (see sketch below)
· OOP (e.g., Python)
· Selenium / Appium / Python requests, etc.
· JMeter, etc.
Status:
Comments:
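As an illustration of the OOP and Selenium skills listed above, a minimal Page Object class in Python; the URL and locators are placeholders.

    # Page Object pattern: encapsulate a screen so tests read as intent rather than raw locators.
    from selenium.webdriver.common.by import By
    from selenium.webdriver.remote.webdriver import WebDriver

    class LoginPage:
        URL = "https://staging.example.com/login"  # placeholder
        USERNAME = (By.ID, "username")
        PASSWORD = (By.ID, "password")
        SUBMIT = (By.ID, "submit")

        def __init__(self, driver: WebDriver):
            self.driver = driver

        def open(self) -> "LoginPage":
            self.driver.get(self.URL)
            return self

        def login(self, user: str, password: str) -> None:
            self.driver.find_element(*self.USERNAME).send_keys(user)
            self.driver.find_element(*self.PASSWORD).send_keys(password)
            self.driver.find_element(*self.SUBMIT).click()

    # Usage in a test:  LoginPage(driver).open().login("qa_user", "secret")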
Recommendation: Test automation code quality control
· Dedicated automation engineers / team
· Regular code review and code refactoring
· Knowledge sharing
Status:
Comments:
Recommendation: Test automation metrics (see sketch below)
· Mean time to diagnosis (MTD)
· Bugs found by automation
· Flaky rate
· Automated-to-manual ratio
Status:
Comments:
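A metric-calculation sketch for the flaky rate and the automated-to-manual ratio, assuming test-run history can be exported as simple per-case records; the record format, the sample data, and the "passed on retry" flakiness signal are assumptions for illustration only.

    # Derive example automation metrics from per-case run history (illustrative data model).
    from dataclasses import dataclass

    @dataclass
    class CaseHistory:
        name: str
        automated: bool
        runs: int             # executions in the reporting period
        failures: int         # failed executions
        passed_on_retry: int  # failures that passed on immediate re-run (flakiness signal)

    cases = [
        CaseHistory("login_smoke", True, 30, 3, 3),
        CaseHistory("checkout_e2e", True, 30, 5, 0),
        CaseHistory("invoice_layout_review", False, 4, 0, 0),
    ]

    automated = [c for c in cases if c.automated]
    flaky = [c for c in automated if c.passed_on_retry > 0]

    flaky_rate = len(flaky) / len(automated)
    genuine_failures = sum(c.failures - c.passed_on_retry for c in automated)  # rough proxy for bugs found by automation

    print(f"Flaky rate:          {flaky_rate:.0%}")
    print(f"Automated-to-manual: {len(automated)}:{len(cases) - len(automated)}")
    print(f"Genuine failures surfaced by automation: {genuine_failures}")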