After overwhelming demand for healthcare coverage brought down the Obamacare website (www.healthcare.gov) at launch, it is hard to dispute that performance testing plays a critical role in the software development life cycle (SDLC). Companies cannot afford to push performance testing activities to the end of their development pipeline. Automated regression testing has steadily moved to the head of the class thanks to agile development best practices. Why does performance testing still hide in the back?
The feature being developed is not ready for performance testing
Even if the feature has not been fully developed, a lot of work goes into planning and defining the right tests. If development is tracked on a Kanban board, get into the practice of adding a performance testing story that can be worked on every sprint. With continuous integration becoming standard practice, try to design these stories so that the resulting tests become part of your CI process.
The infrastructure is not ready to do load testing
Many cloud services now offer performance testing solutions, and they can be used to run small load tests against the application being developed. In some cases, these services may actually be cheaper than hosting internal infrastructure for load testing.
There isn't a budget for performance testing tools
The open source community has developed mature performance testing tools, and even enterprise vendors offer free or community editions of their software. At this point, the only cost is the time an engineer spends learning the tools and applying them to everyday work. For these same free and open source tools, there is a wealth of videos online showing how to use them.
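To make the zero-budget point concrete, here is a sketch of the kind of smoke-level load test an engineer could write with nothing but the Python standard library. The handler, user count, and request count are illustrative stand-ins; a real project would point a dedicated open source tool such as JMeter or Gatling at the actual application.

```python
import statistics
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Minimal handler standing in for the application under test (illustrative).
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep the server quiet during the run

def run_load_test(url, users=5, requests_per_user=10):
    """Fire concurrent GET requests and collect per-request latencies."""
    latencies = []
    lock = threading.Lock()

    def user():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            urlopen(url).read()
            with lock:
                latencies.append(time.perf_counter() - start)

    threads = [threading.Thread(target=user) for _ in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return latencies

if __name__ == "__main__":
    # Spin up a throwaway local server so the sketch is self-contained.
    server = HTTPServer(("127.0.0.1", 0), Handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = "http://127.0.0.1:%d/" % server.server_port
    latencies = run_load_test(url)
    print("requests:", len(latencies))
    print("mean latency (ms): %.2f" % (statistics.mean(latencies) * 1000))
    server.shutdown()
```

Even a small script like this can run in CI on every build and flag latency regressions long before a formal load testing phase.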
So what is stopping organizations from moving to a more agile approach to performance testing? The industry has made advances in bringing functional testing "up the waterfall" with automation. The same can be done for performance testing.
According to the JMeter documentation, "timers are processed before each sampler in the scope in which they are found". Timers in the same scope are cumulative, so if a 10 second constant timer were added at the same level as the 5 second constant timer, each sampler would wait 15 seconds before processing. The corrected code has been checked into GitHub: http://bit.ly/10s9xh2
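The scoping rule can be sketched in a few lines (this is an illustration of the cumulative behavior, not JMeter code; the function name is hypothetical):

```python
# Sketch of JMeter's timer scoping rule: every timer in a sampler's
# scope contributes to the pause applied before that sampler runs.

def delay_before_sampler(timers_in_scope):
    """Total pause in seconds before each sampler: the sum of all
    timers found in the sampler's scope."""
    return sum(timers_in_scope)

# A single 5 s constant timer:
print(delay_before_sampler([5]))       # 5 seconds before each sampler

# A 10 s constant timer added at the same level:
print(delay_before_sampler([5, 10]))   # 15 seconds before each sampler
```

This is why adding a second timer "at the same level" does not replace the first one but stacks on top of it.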