Dec 08

Test Automation Rant Part 5 (video)

In the final episode, Lee rants about how many automation implementations deliver very little value from a testing perspective. We think he saved his best rant for last!

Interested in knowing how ready your organization is to implement test automation? Click here to take Utopia’s Test Automation Assessment. Our free online automation assessment gives each participant a personalized scorecard to gauge their test automation readiness and the health of their existing automation.

Take our user-friendly functional test automation assessment to determine your automation potential. Whether you get top scores or have plenty of room to grow, Utopia Solutions will help you optimize the return from your investment in test automation.

-Check out more of Lee’s Rants-

Part 1 – Test Automation

Part 2 – Managing Expectations

Part 3 – Consultants and the Perfect Framework

Part 4 – It’s not about the % of Test Cases Automated


About The Author

Lee Barnes has over 20 years of experience in the software quality assurance and testing field. He has successfully implemented test automation and performance testing solutions in hundreds of environments across a wide array of industries. He is a recognized thought leader in his field and speaks regularly on related topics. As Founder and CTO of Utopia Solutions, Lee is responsible for the firm’s delivery of software quality solutions which include process improvement, performance management, test automation, and mobile quality. Lee holds a Bachelor’s Degree in Aeronautical and Astronautical Engineering from the University of Illinois.
  • Dave says:

    You can get as much ROI out of automation as you can out of manual testing, IMO. Trying to measure quality in any context is extremely difficult and not something that gets talked about much. We measure what people output, but that can be gamed. It is really hard to measure the true quality of a product. Some try to measure product quality in terms of defects found, but that is just the tip of the iceberg, as there are probably orders of magnitude more defects left unfound.

    While we do measure quality by defects found to some extent, I think a good measure is how long it has been since your last hotfix deploy. This is like the “X days since the last accident” sign at a construction site. There is no good measure of quality other than comparing against low quality, and low quality would be having defects that impact the customer.

    Personally, I think about identifying quality the way the Supreme Court defined pornography: “I cannot define it, but I know it when I see it”.
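
As a rough sketch of the “days since your last hotfix deploy” idea in the comment above: the metric is easy to track from a deployment log. The log entries, dates, and field names below are hypothetical, purely for illustration.

```python
from datetime import date

# Hypothetical deployment log; in practice this would come from release or CI/CD records.
# A "hotfix" entry marks an unplanned, defect-driven deploy.
deployments = [
    {"date": date(2015, 10, 2), "type": "release"},
    {"date": date(2015, 10, 20), "type": "hotfix"},
    {"date": date(2015, 11, 15), "type": "release"},
]

def days_since_last_hotfix(deployments, today):
    """Days elapsed since the most recent hotfix deploy, or None if there has been none."""
    hotfix_dates = [d["date"] for d in deployments if d["type"] == "hotfix"]
    if not hotfix_dates:
        return None
    return (today - max(hotfix_dates)).days

print(days_since_last_hotfix(deployments, today=date(2015, 12, 8)))  # -> 49
```

The higher that number climbs, the longer customer-impacting defects have stayed out of production.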

  • Great rant. You can’t show ROI without tracking metrics. The most commonly misused metric is “Number of Defects Detected”. Automation is strong at regression testing, showing where the application broke (or has not broken, depending on your perspective) since the last build. It is weaker at detecting more subjective defects; that’s where manual testers come in. Showing that automated testing allowed manual testers to focus more time on problem areas is a great way to demonstrate ROI on both sides of the equation.
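
As a back-of-the-envelope illustration of that last point, the ROI case can be framed as hours returned to manual testers versus the effort spent building and maintaining the automation. All figures below are hypothetical placeholders, not numbers from the post or the video.

```python
# Hypothetical ROI sketch: hours freed for manual/exploratory testing once part of
# the regression suite is automated, weighed against automation build and upkeep.
manual_regression_hours_per_cycle = 80   # manual regression effort before automation
automated_coverage = 0.6                 # share of that effort now covered by scripts
cycles_per_year = 12

build_cost_hours = 300                   # one-time scripting effort
maintenance_hours_per_cycle = 8          # keeping scripts in step with the application

hours_freed_per_cycle = manual_regression_hours_per_cycle * automated_coverage
yearly_hours_freed = hours_freed_per_cycle * cycles_per_year
yearly_automation_cost = build_cost_hours + maintenance_hours_per_cycle * cycles_per_year

print(f"Hours freed for manual testing per year: {yearly_hours_freed:.0f}")      # 576
print(f"Automation effort in year one:          {yearly_automation_cost:.0f}")   # 396
print(f"Net hours returned to testers:          {yearly_hours_freed - yearly_automation_cost:.0f}")  # 180
```

The value shows up less as “defects detected by automation” and more as manual testing time redirected to problem areas, which is the point the comment makes.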
