
The Role of Performance Testing in Agile Development

A common topic of discussion with my colleagues in the performance testing arena is the role of performance testing (and the performance tester) in an agile development lifecycle. The impetus for these conversations is almost always a feeling that these roles are trending toward irrelevance as organizations become more agile-focused. Are performance testers really going the way of the dinosaurs? I don’t think so, but they, along with their counterparts on agile development teams, must evolve so that they (and the systems they build) can continue to perform optimally.

Before we take a look at how they must evolve, let’s examine why performance testing appears to be irrelevant and how it got that way in the first place.

The symptoms are relatively consistent. Performance teams that were once the final gate before release are now finding themselves off the critical path, and sometimes off the project plan altogether. How can this be? Is there something magical about agile teams and their methods that makes the systems they build immune to performance issues? I don’t think so. Instead, the reasons given by agile teams are actually very practical. The way performance test teams do their work doesn’t support the fast-paced, frequent-feedback requirements of agile. Trying to fit a large-scale, system-level performance test into a two-week sprint doesn’t work. And getting feedback on system performance for the first time right before release to the customer flies in the face of why you’re agile in the first place. Additionally, in many cases, the agile teams weren’t heartbroken that they couldn’t use the services of the performance test team (more on that below).

To understand how “traditional” performance test teams (myself included) got to this point, we need look no further than when the dedicated performance test team (dare I say Center of Excellence?) came into its own – in the heyday of waterfall development. Performance testing traditionally takes place at the end of a development cycle, on functionally tested and stable code, in a production-like environment. Many performance test teams developed rigid processes and requirements to ensure the results they provided were meaningful. The process typically looked something like this:

• Project team requests a performance test a minimum of eight weeks before results are required.
• Performance test team gathers the appropriate discovery information.
• Project team provides a functionally tested, stable code base and a production-like test environment.
• Performance test team plans, develops, and executes the test.
• Performance test team delivers formal test analysis to project team.

Obviously, this just doesn’t fit within the methods or spirit of agile development. To make matters worse, performance test teams often built a wall between themselves and the project team. Their “throw it over the wall” process was often accompanied by thinly veiled threats of budget and schedule overruns if the development team dared to update code while performance testing activities were underway. This gave agile teams plenty of motivation to rid themselves of having to dance to the performance test team’s tune.

While it may look bleak for the performance tester, all is not lost. There are many examples of development teams and performance testers that have successfully evolved and now produce high-performance systems in an agile environment. They were able to step back and look at the ultimate goal – a shift from the late-cycle performance test to a state of ongoing performance analysis. The key was their ability to determine what was missing and address it.

In many organizations, the “what’s missing” from agile development teams is fairly common. First and foremost, performance objectives aren’t part of user stories. As the teams proceed through their sprints, the functionality matures, but performance is completely missing from the process. It doesn’t help that performance testing/analysis knowledge and tools are likely locked up on the other side of the wall with the performance test team.

To close the gap, agile organizations must begin to build a performance culture. This starts with addressing the most important issue – including performance in the user stories. Performance testers have a good deal of experience stating performance requirements in clear and measurable terms, and their insight is invaluable to this process. How this is accomplished, though, is another matter. Many organizations employ the same user story template that is used for functional requirements, which does an especially good job of making performance requirements visible and understandable to stakeholders. Another common approach is to include performance requirements in the acceptance criteria of the relevant user stories.
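By way of illustration only, here is a minimal sketch (in Python, using pytest-style conventions) of how a measurable acceptance criterion – say, “95% of catalog searches complete in under 500 ms” – might be turned into an automated check. The search function, threshold, and sample count are hypothetical placeholders, not recommendations from this article.

```python
import random
import statistics
import time


def search_products(query):
    """Stand-in for the real search service; replace with the call under test."""
    time.sleep(random.uniform(0.05, 0.2))  # simulate variable back-end work
    return [query]


def test_search_meets_performance_criterion():
    """Acceptance criterion (hypothetical): 95% of catalog searches finish in under 500 ms."""
    samples_ms = []
    for _ in range(50):
        start = time.perf_counter()
        search_products("running shoes")
        samples_ms.append((time.perf_counter() - start) * 1000)

    p95 = statistics.quantiles(samples_ms, n=100)[94]  # 95th percentile
    assert p95 < 500, f"95th percentile was {p95:.0f} ms; criterion is < 500 ms"
```

Stated this way, the criterion is visible in the story, testable in the sprint, and unambiguous to stakeholders.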

Another key aspect of shifting performance left in the development cycle is unit/component-level performance testing. First, performance testers and developers must work together to map the appropriate user stories onto system components to create a focus for unit/component performance testing activity. Next, performance testers assist developers with “decorating” existing unit tests to make them timed or multi-user performance tests, as appropriate. Similarly, traditional performance test tools can be used to execute tests against system components (e.g., web services) that are complete, and service virtualization solutions can be used to stub out components that are still in development or external. An important caveat is that unit- and component-level performance tests will likely not be executed in a production-sized environment, since one probably doesn’t exist yet. To adapt, performance testers need to shift focus away from absolute measurements, which don’t mean much on a developer’s desktop. Rather, it’s important to look at trends and anomalies as the tests are run against each new build.
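As a rough sketch of the “decorating” idea, the example below wraps an existing Python unit test in a timing decorator and appends each run’s measurement to a simple trend log, so the focus stays on build-to-build trends rather than absolute numbers. The BUILD_NUMBER variable, log file path, and application code are assumptions made for the example, not a prescribed tool or process.

```python
import functools
import json
import os
import time


def timed(iterations=10, log_path="perf_trend.jsonl"):
    """Decorate an existing unit test: run it repeatedly, then record the median
    duration against the current build so trends can be compared build to build."""
    def decorator(test_func):
        @functools.wraps(test_func)
        def wrapper(*args, **kwargs):
            durations = []
            for _ in range(iterations):
                start = time.perf_counter()
                result = test_func(*args, **kwargs)
                durations.append(time.perf_counter() - start)
            durations.sort()
            median_ms = durations[len(durations) // 2] * 1000
            record = {
                "test": test_func.__name__,
                "build": os.environ.get("BUILD_NUMBER", "local"),  # assumed CI variable
                "median_ms": round(median_ms, 2),
            }
            with open(log_path, "a") as log:
                log.write(json.dumps(record) + "\n")
            return result
        return wrapper
    return decorator


def calculate_order_total(items):
    """Stand-in for real application code under test."""
    return round(sum(qty * price for _, qty, price in items), 2)


# An existing functional unit test, now also emitting a timing data point per build.
@timed(iterations=20)
def test_calculate_order_total():
    assert calculate_order_total([("widget", 2, 9.99)]) == 19.98
```

Each build’s line in the log can then be charted or diffed in the build pipeline, so a sudden jump in the median stands out as a trend break even though no absolute target is enforced on the developer’s machine.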

Each of the activities discussed above helps to shift performance left in the development cycle and make performance part of everyone’s job. The result is a culture that promotes a constant focus on performance.

Finally, a traditional end-of-cycle, system-level performance test still has its place prior to release to the customer. This activity, however, becomes a confirmation of system performance rather than an initial investigation.

About The Author

Lee Barnes has over 20 years of experience in the software quality assurance and testing field. He has successfully implemented test automation and performance testing solutions in hundreds of environments across a wide array of industries. He is a recognized thought leader in his field and speaks regularly on related topics. As Founder and CTO of Utopia Solutions, Lee is responsible for the firm’s delivery of software quality solutions which include process improvement, performance management, test automation, and mobile quality. Lee holds a Bachelor’s Degree in Aeronautical and Astronautical Engineering from the University of Illinois.
  • Rex Black says:

    You’re absolutely right! There is nothing magical about Agile methodology as it relates to system performance. I have been called in after projects lost 8 figures/hr in revenue due to a performance issue, as they failed to migrate to Agile “gracefully”. Agile requires a more comprehensive approach to performance than before (though many of the same processes can/should be applied to Waterfall). A final integrated round of performance testing, just prior to launch, is still an important aspect of a performance methodology. But it should not be the only one. As you say, earlier testing of units, components, and modules, in-line with or parallel to Agile sprints, is important. Also, there are those non-testing aspects of ensuring performance, such as code profiling, performance modeling, etc…
