In this final episode, Lee rants about how many automation implementations deliver very little value from a testing perspective. We think he saved his best rant for last!
Interested in knowing your organization’s level of readiness for implementing test automation? Click here to take Utopia’s Test Automation Assessment. This free online assessment gives each participant a personalized scorecard to gauge their test automation readiness and the health of their existing efforts.
Take our user-friendly functional test automation assessment to determine your automation potential. Whether you score at the top or have plenty of room to grow, Utopia Solutions will help you optimize the return on your investment in test automation.
-Check out more of Lee’s Rants-
Part 2 – Managing Expectations
Part 3 – Consultants and the Perfect Framework
Part 4 – It’s not about the % of Test Cases Automated
In my opinion, you can get as much ROI out of automation as you can out of manual testing. Measuring quality in any context is extremely difficult and not something that gets talked about much. We measure what people output, but that can be gamed, and it is really hard to measure the true quality of a product. Some try to measure product quality in terms of defects found, but that is just the tip of the iceberg: there are probably orders of magnitude more defects that go unfound.
While we do measure quality by defects found to some extent, I think a good measure is how long it has been since your last hotfix deploy. It’s like the “X days since the last accident” sign at a construction site. Nothing is a good measure of quality other than a comparison against low quality, where low quality means shipping defects that impact the customer.
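To make the idea concrete, here is a minimal sketch of computing that “days since last hotfix” number from a deploy history. The log format and function name are hypothetical, purely for illustration:

```python
from datetime import date

def days_since_last_hotfix(deploys, today):
    """Return days elapsed since the most recent hotfix deploy,
    or None if no hotfix has ever shipped.

    `deploys` is a hypothetical log of (deploy_date, was_hotfix) pairs.
    """
    hotfix_dates = [d for d, was_hotfix in deploys if was_hotfix]
    if not hotfix_dates:
        return None
    return (today - max(hotfix_dates)).days

# Example deploy log: one emergency hotfix on Jan 20.
deploys = [
    (date(2024, 1, 5), False),
    (date(2024, 1, 20), True),   # hotfix
    (date(2024, 2, 3), False),
]

print(days_since_last_hotfix(deploys, date(2024, 3, 1)))  # 41
```

A rising number is weak evidence of quality; a number that keeps resetting to zero is strong evidence of its absence, which is exactly the asymmetry described above.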
Personally, I identify quality the way the Supreme Court defined pornography: “I cannot define it, but I know it when I see it.”
Great rant. You can’t show ROI without tracking metrics. The most commonly misused metric is “Number of Defects Detected”. Automation is strong at regression testing: showing where the application broke (or has not broken, depending on your perspective) since the last build. It is weak at detecting more subjective defects; that’s where manual testers come in. Showing that automated testing freed manual testers to focus more time on problem areas is a great way to demonstrate ROI on both sides of the equation.