
Creating and Running LoadTests


Using LoadUI Pro for load testing

SoapUI Pro offers basic load testing capabilities. If you want to get the most out of your load testing, LoadUI Pro has more functionality and capabilities for the professional load tester. You can use your SoapUI tests with LoadUI Pro and get more visibility into server performance and how your application behaves under stress.

You can try a free trial here


1. Using soapUI for load testing

Basics

A LoadTest in soapUI runs an existing functional TestCase repeatedly for a desired duration with a desired number of threads (which is the same as "Virtual Users"), allowing you to assert that your target services perform as required under load. LoadTests are shown in the Navigator as children of this TestCase;

loadtests-in-navigator

(here you can see that the "Search and Buy TestCase" TestCase has four LoadTests defined).

You can create any number of LoadTests for your TestCase either from the TestCase right-click menu or the TestCase Toolbar with the New LoadTest option. Open the LoadTest window by double-clicking it in the Navigator (a newly created LoadTest will be opened automatically).

1.1. LoadTest Execution

SoapUI allows you to run your LoadTest with as many Threads (= "Virtual Users") as your hardware can manage, depending mainly on memory, CPU, target service response-time, etc. Set the desired value and start the LoadTest with the Run button at the top left in the LoadTest window. The underlying TestCase is cloned internally for each configured Thread and started in its own context; scripts, property-transfers, etc., will access a unique "copy" of the TestCase and its TestSteps, which avoids threading issues at the TestStep and TestCase level (but not higher up in the hierarchy; if your TestCase modifies TestSuite or Project properties, this will be done on the common shared ancestor objects). The "Thread Startup Delay" setting in the LoadTest Options dialog can be used to introduce a delay between the start of each thread, allowing the target services (or soapUI) some "breathing space" for each thread.
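To make the execution model concrete, here is a minimal Python sketch (not soapUI's actual implementation; all names are illustrative) of virtual users each getting an isolated per-thread context, with a startup delay staggering the thread starts:

```python
import threading
import time

def run_test_case(context):
    # Each worker operates on its own cloned context, mimicking how a
    # per-thread TestCase copy avoids shared state at the TestCase level.
    context["executions"] = context.get("executions", 0) + 1

def start_load_test(thread_count, startup_delay_ms):
    """Start one worker per virtual user, staggered by a startup delay."""
    threads, contexts = [], []
    for i in range(thread_count):
        ctx = {"thread_id": i}  # isolated per-thread "copy"
        t = threading.Thread(target=run_test_case, args=(ctx,))
        contexts.append(ctx)
        threads.append(t)
        t.start()
        time.sleep(startup_delay_ms / 1000.0)  # "Thread Startup Delay"
    for t in threads:
        t.join()
    return contexts

contexts = start_load_test(thread_count=5, startup_delay_ms=10)
print(len(contexts))  # 5 isolated contexts, one per virtual user
```

Note that anything outside the per-thread context (like a module-level variable here, or TestSuite/Project properties in soapUI) would still be shared between all workers.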

Depending on which limit and strategy has been selected, the LoadTest will run as configured until it terminates due to one of the following:

  • It has reached its configured limit
  • It has been canceled by the user with the Cancel button on the LoadTest toolbar
  • It has been canceled by a LoadTest Assertion when the maximum number of allowed errors for that assertion has been exceeded

If the limit is time-based, the Cancel Running option in the LoadTest Options dialog allows you to control whether running threads should be allowed to finish or should be canceled. In the same manner, the "Cancel Excessive" option controls whether excessive threads should be canceled when the thread-count decreases (for example when using the Burst Strategy).

Multiple LoadTests can be executed in parallel to test more advanced scenarios; just open several windows at once and run them side-by-side. A sample scenario for this could be a recovery test consisting of a simple LoadTest generating baseline traffic and another LoadTest using the Burst Strategy, which creates high traffic in bursts; after each burst the baseline LoadTest can assert that the system handles the load and recovers as required.

1.2. Statistics Collection

As the LoadTest runs, the LoadTest statistics table is continuously updated with collected data each time an executed TestCase finishes, allowing you to interactively monitor the performance of your target service(s) while the LoadTest executes. If your TestCase contains a loop or long-running TestSteps it may take some time for the statistics to update (since they are collected when the TestCase finishes), in this case select the “TestStep Statistics” option in the LoadTest Options dialog to get updated statistics on the TestStep level instead of TestCase level (this requires a bit more processing internally which is why it is turned off by default).

Collection and calculation of statistic data is performed asynchronously (i.e. independently from the actual TestCase executions), so it will not directly affect the actual LoadTest execution. Further, the Statistics Interval setting in the LoadTest Options dialog controls how often the Statistics table is updated from the underlying statistics model, change this value if you require more or less frequent updates.

Quick tip:

Several Strategies also allow you to change the number of threads during execution, which enables you to interactively change the load and monitor the results as the LoadTest progresses. If you want to reset the calculated statistics when the number of threads changes (so numbers like avg and tps are not skewed by previous results), make sure Reset Statistics in the LoadTest Options dialog is selected.

1.3. TPS / BPS Calculation

Calculation of the different values in the Statistics table is straightforward for all columns except TPS (transactions per second) and BPS (bytes per second), which can be calculated in two different ways (controlled by the "Calculate TPS/BPS" setting in the LoadTest Options dialog);

  • Based on actual time passed (default):
    • TPS : CNT / Seconds passed, i.e. a TestCase that has run for 10 seconds and handled 100 requests will get a TPS of 10
    • BPS : Bytes / Time passed, i.e. a TestCase that has run for 10 seconds and handled 100000 bytes will get a BPS of 10000.
  • Based on average execution time:
    • TPS : (1000/avg)*threadcount, for example avg = 100 ms with ten threads will give a TPS of 100
    • BPS : (bytes/cnt) * TPS, i.e. the average number of bytes per request * TPS. For example, a total of 1000000 received bytes for 10 requests with a TPS of 100 would give (1000000/10) * 100 = 10000000 BPS
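The two calculation modes above can be sketched as two small functions (an illustrative Python sketch, not soapUI code), using the example numbers from the bullets:

```python
def tps_bps_time_based(count, total_bytes, seconds):
    """Default mode: based on actual time passed."""
    # TPS = CNT / seconds passed; BPS = bytes / seconds passed
    return count / seconds, total_bytes / seconds

def tps_bps_average_based(avg_ms, thread_count, total_bytes, count):
    """Alternative mode: based on average execution time."""
    # TPS = (1000 / avg) * threadcount
    tps = (1000.0 / avg_ms) * thread_count
    # BPS = (bytes / cnt) * TPS
    return tps, (total_bytes / count) * tps

# 100 requests, 100000 bytes, over 10 seconds:
print(tps_bps_time_based(100, 100000, 10))          # (10.0, 10000.0)
# avg 100 ms, 10 threads, 1000000 bytes over 10 requests:
print(tps_bps_average_based(100, 10, 1000000, 10))  # (100.0, 10000000.0)
```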

To better understand the difference between these two, let's create a small example: a TestCase with two Groovy scripts, the first sleeping for 900ms, the second for 100ms. We'll run this with 10 threads for 10 seconds, which theoretically should result in 100 executions of our TestCase;

sample-tps-based-on-time-passed

With TPS being calculated on time passed, the value is the same for both TestSteps since they were executed the same number of times during our 10 seconds. Their individual execution speeds (907ms vs 111ms average) do not affect this value. Now let's change the way TPS is calculated to be based on average (in the LoadTest Options dialog):

change-tps-calculation-option

When we now run the test we get the following:

sample-tps-based-on-average

Here the hypothetical TPS for the first TestStep is calculated to be 11, since with 10 parallel runs the average time was 909ms. The second TestStep gets almost 90 TPS, once again calculated with 10 parallel runs which took 112ms on average (if this was sustained performance we could theoretically have "squeezed in" 90 requests each second).

Which of these two to use is up to you; with single step TestCases you should get roughly the same result from both, but when your TestCases contain many steps, both ways have their pros and cons (as illustrated above).
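You can verify the numbers from the example above with the average-based formula directly (a quick Python check of the arithmetic, using the averages shown in the screenshots):

```python
def average_based_tps(avg_ms, thread_count):
    # TPS = (1000 / avg) * threadcount
    return (1000.0 / avg_ms) * thread_count

# First TestStep: 909 ms average with 10 threads
print(round(average_based_tps(909, 10)))  # 11
# Second TestStep: 112 ms average with 10 threads
print(round(average_based_tps(112, 10)))  # 89
```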

Statistics Graphs

There are two types of graphs available from the LoadTest Toolbar during execution: statistics and statistics history. The main purpose of these is to visualize selected statistics over time so you can detect sudden and unexpected changes. Display of statistics is relative (not absolute), so the graphs are not very useful for analyzing exact data. Both graphs have a Resolution setting which controls how often the graph is updated; setting this to "data" will update the graph at the same interval as the Statistics Table (which is the underlying data for the graph, hence the name). Alternatively, select one of the fixed resolutions.

The Statistics Graph shows all relevant statistics for a selected step or the entire TestCase over time, allowing you to see how values change when you, for example, increase the number of threads. In the example below we increased the number of threads from 20 to 40 halfway through the test and the other values changed accordingly;

statistics-graph

As you can see, the green line (threads) jumps up halfway through the test, which also causes an expected jump in average and a minor change in transactions per second, the latter meaning that although we increase the number of threads we don't get a corresponding increase in throughput (since the average response time increases).

The Statistics History Graph shows a selected statistic value for all steps, allowing you to compare them and see if the distribution of any value between TestSteps changes over time. For the same test we can compare the avg over time:

statistics-history-graph

Here the graph contains one line for each TestStep in our TestCase shown with the same color as the TestStep in our Statistics table (yellow and pink);

graph-stats

As we can see the average changes similarly for both TestSteps when the number of Threads increases.

1.4. TestStep-Specific Things to Keep in Mind

The multi-threaded execution of LoadTests has some TestStep-specific implications that you should be aware of when designing and running your LoadTest:

  • Run TestCase : If your TestCase contains “Run TestCase” steps, their target TestCase will not be cloned for each Thread in the LoadTest; all threads will execute the same instance of the target TestCase. The "Run Mode" option in the Run TestCase Options Dialog is available for controlling behaviour in this situation; set it to either "Create isolated copy..." (which will do just that) or "Run primary TestCase (wait for running to finish.." which will synchronize access to the target TestCase. The "Create isolated copy" option will give better performance, but any changes made to the internal state of the target TestCase will be lost for each execution.
  • DataSource : DataSources can be shared between threads in a LoadTest, allowing you to "divide" the data you're using to drive your tests between the threads. This can come in handy but does have some configuration implications that you should understand.
  • DataSink : Just like DataSources, DataSinks can also be shared, resulting in all threads writing to the same DataSink. Remember that a shared DataSink will be used for all Threads during the entire run of the LoadTest, which could amount to quite a lot of data.
  • DataGen : DataGen properties can be set to be shared between threads, which can be useful with DataGen Number Properties that are used for generating unique IDs (if the property is not set to be shared, each thread will get the same sequence of numbers).
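The shared-versus-unshared counter distinction in the DataGen bullet can be illustrated with a small Python sketch (illustrative only, not soapUI internals): a shared counter hands out globally unique IDs across all threads, while per-thread counters each repeat the same sequence.

```python
import itertools
import threading

# Shared counter: one sequence across all threads -> globally unique IDs.
shared_counter = itertools.count(1)
lock = threading.Lock()

def take_shared_ids(n, out):
    # Lock so each thread draws from the single common sequence safely.
    with lock:
        out.extend(next(shared_counter) for _ in range(n))

def per_thread_ids(n):
    # Unshared: each thread builds its own counter from scratch,
    # so every thread produces the same 1, 2, 3, ... sequence.
    local = itertools.count(1)
    return [next(local) for _ in range(n)]

shared = []
threads = [threading.Thread(target=take_shared_ids, args=(3, shared))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(shared))                         # [1, 2, 3, 4, 5, 6] - unique
print(per_thread_ids(3), per_thread_ids(3))   # [1, 2, 3] [1, 2, 3] - repeated
```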

Ok! Hope this gets you going. Next up is how to use the different Load Strategies for simulating different types of load, and then we'll move on to assertions, reporting and scripting. What fun!