What is a Load Testing Strategy?
If you are thinking of load testing your API, the first thing you should do is establish a load testing strategy.
To create a load testing strategy, the first step is to discover and understand the load requirements, scenarios, and use cases for your APIs. For example, if you provide a retail API, you need to understand the seasonality of your traffic and ensure your APIs can perform during peak times – like the holidays and Cyber Monday – when your website will see a massive influx of traffic.
During these times, people are performing a lot of transactions on your website and, hence, on your APIs.
You might decide to build baseline support for 500 users per second. You will also want to know the capacity of your hardware to determine the number of servers you will need for your peak loads.
You should also know the peak traffic times at different points in the day; a good way to determine traffic by time of day is through Google Analytics.
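As a rough illustration, the capacity question above reduces to dividing peak load by per-server throughput, plus some headroom. This is a back-of-the-envelope sketch – the per-server throughput figure and the headroom percentage are invented assumptions, not anything your load testing tool will give you for free:

```python
import math

def servers_needed(peak_users_per_second: int,
                   per_server_capacity: int,
                   headroom: float = 0.2) -> int:
    """Estimate the server count for a peak load, with a safety headroom."""
    required = peak_users_per_second * (1 + headroom)
    return math.ceil(required / per_server_capacity)

# Baseline of 500 users/second, assuming each server handles ~150 users/second
print(servers_needed(500, 150))  # → 4 servers with 20% headroom
```

In practice you would measure `per_server_capacity` with a baseline test against a single server rather than guess it.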
Load Testing Profiles
Once you have your requirements established, the next step is to convert the traffic patterns you observed into load profile(s) that you will configure and build in your load testing tool. LoadUI provides several different load testing profiles, including: fixed, burst, ramp, random, variance, and custom. Custom means that you can design your own profile from scratch.
LoadUI Pro will enable you to create detailed load profiles for your APIs from scratch. But if you don’t have time to do that, you can use one of the baseline, peak, stress, spike, or smoke strategy templates that LoadUI comes pre-configured with.
What this means is that, once you convert your functional test to a load test, you can configure these strategies for your APIs with just a click. Then you can run them, see the response and error metrics, perform statistical analysis, download reports, and fix any issues you find.
1. Baseline Profile
The baseline profile is good for a couple of things. With it, you can quickly check that what works from a functional perspective, where a single user interacts with your APIs, also works from a performance perspective.
This lets you ensure that your APIs work for a handful of concurrent users, not just one. In many cases, it will also help you establish your SLA.
If you’re an API provider and someone wants to interact with your API, they might ask: “What SLA are you willing to commit to?” Just coming up with a number in your head obviously isn’t the right approach here, because that number might be wrong. It might cause a breach of contract and ruin a great relationship.
So, you can’t just say, “Under three seconds, I think.” With the built-in baseline scenario in LoadUI, you’ll be able to calculate this exactly. You’ll see real numbers and standard deviations, and you can have 99% confidence in them.
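As a minimal sketch of the idea, here is how raw latency samples from a baseline run could be turned into SLA candidates. The sample data is invented; in a real workflow you would export these numbers from your load testing tool’s report:

```python
import math
import statistics

def sla_candidates(latencies_ms, percentile=99):
    """Summarize baseline latency samples: mean, standard deviation,
    and a nearest-rank percentile to anchor an SLA on."""
    data = sorted(latencies_ms)
    idx = math.ceil(percentile / 100 * len(data)) - 1
    return {
        "mean": statistics.mean(data),
        "stdev": statistics.stdev(data),
        f"p{percentile}": data[idx],
    }

# Hypothetical baseline run: 100 response times in milliseconds
samples = list(range(120, 320, 2))  # 120, 122, ..., 318
summary = sla_candidates(samples)
print(summary["mean"], summary["p99"])
```

Committing to the 99th percentile rather than the mean is the safer contract, since the mean hides the slow tail that your clients will actually notice.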
2. Peak Profile
This scenario often causes confusion among testers. They say, “I’m going to have a situation like the rush on Black Friday with 10,000 users, so I’m going to generate a baseline scenario with 10,000 users.” But that’s not what happens – even on Black Friday.
People do not get on shopping sites all at once! In the morning, they gradually wake up and then go shopping. Yes, there’s a heavier load, but it involves a buildup. In LoadUI, the peak scenario starts the way real traffic does at midnight: quiet.
Then users get on the system more and more, until they’re at peak throughput (10,000 users in this example). The load grows gradually to that level. This is fundamentally different from the baseline scenario where, in the first second, there’s no one – and then suddenly a stampede of 10,000 requests hammers down on your system.
The peak scenario is much more lifelike, and it can show that your APIs are able to grow to service that many transactions. If you ran the same load as a baseline test, you might see the system crash immediately, even though it actually has the capacity to withstand 10,000 users.
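The ramp-and-hold shape described above can be sketched as a simple generator. This is illustrative only – LoadUI builds this profile for you – and the second-by-second granularity is an assumption for the example:

```python
def ramp_profile(peak_users: int, ramp_seconds: int, hold_seconds: int):
    """Yield a target concurrent-user count for each second of the test:
    a linear ramp from near zero up to peak, then a hold at peak."""
    for t in range(ramp_seconds):
        yield round(peak_users * (t + 1) / ramp_seconds)
    for _ in range(hold_seconds):
        yield peak_users

# Ramp to 10,000 users over 5 intervals, then hold for 3 intervals
profile = list(ramp_profile(peak_users=10_000, ramp_seconds=5, hold_seconds=3))
print(profile)  # → [2000, 4000, 6000, 8000, 10000, 10000, 10000, 10000]
```

Contrast this with the baseline shape, which would be the flat list `[10000] * 8` from the very first second.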
3. Stress Profile
One question a lot of people ask when we’re talking about peak load tests is: “Can I configure a load test that adds load to the point where the API breaks?” This type of scenario is called a stress test.
Say you have an SLA of 300 milliseconds. In that case, you can add an assertion to your test to enforce that SLA. The idea is to look at performance under an ever-growing number of users and check that responses stay under the 300-millisecond SLA. At that point, it’s important to define what counts as a failure.
Then, you ramp up the profile from no users toward some very high ceiling – maybe 15,000 users. As the user count climbs toward 15,000, the point where response time exceeds 300 milliseconds is where you stop the test.
From there, you go into the logs and find out how high your concurrent user count was before the SLA was breached.
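The stress search described above amounts to stepping up the load until the SLA assertion fails and recording the last level that passed. In this sketch, `measure_latency_ms` is a hypothetical stand-in for actually running a load step and reading back the observed latency:

```python
def find_breaking_point(measure_latency_ms, max_users=15_000,
                        step=500, sla_ms=300):
    """Increase load step by step; return the highest user count
    whose measured latency still met the SLA."""
    last_good = 0
    for users in range(step, max_users + step, step):
        if measure_latency_ms(users) > sla_ms:
            return last_good  # SLA breached: previous step was the limit
        last_good = users
    return last_good

# Toy latency model (invented): latency grows linearly with load
simulated = lambda users: 50 + users * 0.03
print(find_breaking_point(simulated))  # → 8000
```

A real stress run would replace the toy model with actual load steps, and you might bisect between the last passing and first failing step for a tighter bound.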
4. Soak Profile
The last common profile is the soak load test. Everything might work and be configured well for a burst of heavy traffic, but is your system set up right for the long haul?
Testing your system under such conditions is important because there might be a memory issue that only gets uncovered when the system is hit by traffic for a long duration. A soak load test involves a low number of users but runs for a long period – say, 12 to 24 hours.
After running the load test for a few hours to a day, you can check whether there has been any increase in memory consumption.
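One simple way to judge that check is to fit a trend line to the memory samples collected during the soak run: a persistently positive slope suggests a leak. This is a sketch with invented sample data, not a LoadUI feature:

```python
def memory_growth_rate(samples_mb):
    """Least-squares slope of memory samples, in MB per sample interval.
    A steadily positive slope over a long soak run suggests a leak."""
    n = len(samples_mb)
    mean_x = (n - 1) / 2
    mean_y = sum(samples_mb) / n
    num = sum((x - mean_x) * (y - mean_y)
              for x, y in enumerate(samples_mb))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

# Flat memory: no leak; steadily growing memory: positive slope
print(memory_growth_rate([512, 512, 512, 512]))       # → 0.0
print(memory_growth_rate([512, 530, 548, 566, 584]))  # → 18.0
```

Memory that climbs under load but returns to baseline afterwards is usually caching, not a leak, so it is worth sampling past the end of the load as well.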
These are the most common load tests: baseline for establishing an SLA, peak for managing heavy volumes, stress for finding the upper boundary of users that can be serviced within your defined SLA, and soak for identifying memory leaks.
Alternatively, if your use case requires a very particular strategy, you can always draw a custom graph with different peaks and valleys and different measures at different times.