Performance tests (4): Executing a test scenario

Now that we have prepared the test scenarios, we have to execute them. Up to this point we have only done preparation work (but I cannot stress enough that you really need to do this step!); now you need to get your hands dirty. This means:

  • Decide which system you use to execute the test scenario. A developer's laptop is usually not sufficient anymore to simulate a production-like scenario. This information should already be contained in the test scenario description.
  • Implement the test scenario so it is in a machine-executable format. Usually you record the specified user activities with a tool, modify them, and then replay the recorded behavior with that tool (see the sketch after this list).
  • Get test data. Always having the same test data makes the analysis much easier.
  • Inform all parties that you are running performance tests and that the results are sensitive to any other activity on these systems.
  • Run the tests.
  • Do the analysis.
  • React accordingly.

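To give you an idea of what such a machine-executable scenario can look like, here is a minimal sketch in plain Python. The URLs, the number of concurrent users and the think times are assumptions for illustration only; a dedicated tool (JMeter, Gatling, …) will record the user activities for you and give you much better reporting and ramp-up control.

    # Minimal sketch of replaying a recorded user activity against a test system.
    # BASE_URL, the request paths and the load parameters are hypothetical.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    BASE_URL = "http://perf-test.example.com"
    CONCURRENT_USERS = 25
    ITERATIONS_PER_USER = 10

    # the "recording": a sequence of (request path, think time in seconds)
    SCENARIO = [
        ("/content/home.html", 2.0),
        ("/bin/search?q=product", 1.5),
        ("/content/products/details.html", 3.0),
    ]

    def run_user(user_id):
        """Replay the scenario and collect the response time of every request."""
        timings = []
        for _ in range(ITERATIONS_PER_USER):
            for path, think_time in SCENARIO:
                start = time.monotonic()
                with urllib.request.urlopen(BASE_URL + path) as response:
                    response.read()
                timings.append((path, time.monotonic() - start))
                time.sleep(think_time)  # simulate the user's think time
        return timings

    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
            results = [t for user in pool.map(run_user, range(CONCURRENT_USERS)) for t in user]
        for path, _ in SCENARIO:
            samples = [d for p, d in results if p == path]
            print("%s: %d requests, avg %.3fs" % (path, len(samples), sum(samples) / len(samples)))
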
Let me point out a few important aspects of this execution. First, the choice of the systems to run the performance tests on. I recommend having them as similar to production as possible: the same hardware platform, the same number of systems, the same SAN, the same content. This makes it much easier when someone asks the important question: “That’s all nice, but can we apply the results of this test also to our production environment?”. If you have identical hardware and an identical setup, it does not take many arguments to get to an answer everyone can agree on: “If we ran this scenario on production, we would get the same results”. And that’s a very important step later on when you need to decide about follow-up activities.

Secondly, the test data. It is crucial that, if you run a test multiple times, you always run it on the same test data. This is not only required for the functional aspects (your most critical function regarding performance should not fail because of missing test data), but also because the test data itself might impact performance: it makes a difference whether you test your application on a 20 gigabyte repository or on a 200 gigabyte repository. I usually recommend creating a copy of the instances before you start the first execution run of a performance scenario; before the execution of the second run, this copy is restored, so we start from the same point. This is important so you can actually compare the results of multiple test runs.
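
As a rough sketch of this restore-before-each-run discipline (the paths, the service name and the commands are assumptions about your environment, not prescriptions), a small wrapper around each test execution is often enough:

    # Sketch: restore the same data snapshot before every run, so each run
    # starts from an identical repository state and results stay comparable.
    # All paths, the service name and the scenario command are placeholders.
    import subprocess

    SNAPSHOT = "/backups/perf-baseline"       # copy taken before the very first run
    INSTANCE_DATA = "/opt/app/repository"     # data directory of the test instance

    def restore_snapshot():
        subprocess.run(["systemctl", "stop", "myapp"], check=True)
        subprocess.run(["rsync", "-a", "--delete",
                        SNAPSHOT + "/", INSTANCE_DATA + "/"], check=True)
        subprocess.run(["systemctl", "start", "myapp"], check=True)

    def run_scenario(run_id):
        # invoke your load-test tool here, e.g. the scenario script sketched above
        subprocess.run(["python3", "scenario.py", "--run-id", str(run_id)], check=True)

    if __name__ == "__main__":
        for run_id in range(1, 4):            # three runs on identical data
            restore_snapshot()
            run_scenario(run_id)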