Sure, it’s pretty easy to break your application, website, or API under an excessive load. But figuring out why and how it broke isn’t so simple. In this section, we highlight the load testing metrics you should watch to get to know your API, its limitations, and your users better.
Before we break down the areas you should be considering (it’s not just your API and its metrics that measure performance), think about the following terms, as each ultimately affects your user experience:
- Average response time - the mean time from request to response, measured to the first byte or the last byte
- Peak response time - the longest response time in the run; reveals your slowest cycle
- Error rate - the percentage of requests that result in errors, out of all requests made
- Concurrent users - how many virtual users are active at any given time
- Throughput - often measured as bandwidth consumed but, in general, the maximum rate at which requests can be processed
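As a rough illustration, these core metrics can be computed directly from a batch of request results. The sample data, field layout, and test duration below are hypothetical, not taken from any particular tool:

```python
# Sketch: computing the core load-test metrics from raw request results.
# The sample data and timings are hypothetical stand-ins.
results = [
    # (response_time_ms, succeeded)
    (120, True), (95, True), (310, False), (150, True), (880, True),
]
test_duration_s = 2.0  # wall-clock length of the test window

avg_response = sum(t for t, _ in results) / len(results)        # average response time
peak_response = max(t for t, _ in results)                      # peak response time
error_rate = sum(1 for _, ok in results if not ok) / len(results) * 100
throughput = len(results) / test_duration_s                     # requests per second

print(f"avg={avg_response:.0f}ms peak={peak_response}ms "
      f"errors={error_rate:.1f}% throughput={throughput:.1f} req/s")
```

Real tools report these same numbers for you; the point is that each metric is a simple aggregate over individual request outcomes.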
Virtual User Calculation
- Virtual Users (VUs) - A certain number of users simultaneously accessing your system or a certain number of users accessing from different browsers
- Session length - A group of interactions that took place on your website at a certain time, like how long did someone spend on your app or website, including jumping around to different pages
- Peak-hour pageviews - Your baseline can come from your Web analytics tool (like Google Analytics). Also find your peak traffic time (perhaps the holiday rush?), then increase it by your expected traffic growth percentage
- Concurrent user - Runs through a transaction from start to finish then repeats
- Single user - Only one transaction completed
- How many actual users do you predict will access your system at once?
- How many VUs do you need?
- How many rows of data do you need?
- How much bandwidth do you need?
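One common back-of-the-envelope calculation, shown here as a sketch with assumed numbers rather than a formula from this article, derives concurrent VUs from peak-hour sessions and average session length:

```python
# Sketch: estimating concurrent virtual users from web-analytics numbers.
# The inputs and the 25% growth factor are illustrative assumptions.
peak_hourly_sessions = 12_000   # sessions in your busiest hour (e.g. from Google Analytics)
avg_session_length_s = 180      # average time a user spends on the site, in seconds
expected_growth = 1.25          # 25% anticipated traffic growth

# Concurrent users ~= sessions per hour * session length / seconds per hour
concurrent_vus = peak_hourly_sessions * avg_session_length_s / 3600
target_vus = concurrent_vus * expected_growth
print(f"baseline ~{concurrent_vus:.0f} VUs, target ~{target_vus:.0f} VUs")
```

With these example inputs, 12,000 peak-hour sessions averaging 3 minutes each work out to about 600 concurrent users, or 750 after growth.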
Now that you know these key load testing terms, let’s break down how they will play out in your load testing results and which parts of your stack each set of metrics exercises.
Load Testing Metric #1: Web Server Metrics
Web server metrics help you find errors in your API deployment, so you can scale and augment as needed:
- Busy and idle threads - Do you need more Web servers? More worker threads? Are application performance hotspots slowing you down?
- Throughput - How many transactions per minute can your API handle? When is it time to scale to more Web servers?
- Bandwidth requirements - Is your network your bottleneck? Or is there content that is pulling it down that you can offload?
Load Testing Metric #2: App Server Metrics
Whether your application server is Java, PHP, .NET or something else, here’s where you can try to find deployment or config concerns:
- Load distribution - How many transactions are handled by each engine? Is the load balanced, or do you need more application servers?
- CPU usage hotspots - How much CPU does each load require? Can you optimize the code to lower CPU usage, or do you simply need more capacity?
- Memory problems - Is there a memory leak?
- Worker threads - Are they correctly configured? Are there Web server modules that block these threads?
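To make the worker-thread question concrete, here is a minimal Python sketch (pool size and task timings are hypothetical) showing how an undersized pool forces requests to queue behind busy workers:

```python
# Sketch: an undersized worker-thread pool queuing requests.
# 8 tasks on 2 workers run in ~4 serial batches instead of at once.
from concurrent.futures import ThreadPoolExecutor
import time

def handle_request(i):
    time.sleep(0.05)  # stand-in for real request processing
    return i

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=2) as pool:  # only 2 worker threads
    results = list(pool.map(handle_request, range(8)))
elapsed = time.perf_counter() - start
print(f"{len(results)} requests in {elapsed:.2f}s")
```

The same queuing effect shows up in production as rising response times under load even though CPU looks idle, which is why worker-thread configuration deserves its own check.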
Load Testing Metric #3: Host Health Metrics
Sometimes it’s not your API’s fault at all. These Web and application servers run on hosts. Try these host tests:
- CPU, memory, disk, input/output - Problems with network interfaces? Are we exhausting resources?
- Key processes - Which processes are running on our host? Should we be taking some resources off or should we be redistributing them to other virtual or physical machines?
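At its simplest, a host-health check compares sampled resource percentages against thresholds. The values and threshold numbers below are hypothetical stand-ins for what an agent or monitoring stack would actually report:

```python
# Sketch: flagging host resource exhaustion from sampled metrics.
# Sample values and thresholds are hypothetical; in practice they would
# come from an agent (psutil, sar) or your monitoring stack.
host_sample = {"cpu_pct": 91.0, "mem_pct": 72.5, "disk_pct": 88.0, "io_wait_pct": 2.1}
thresholds = {"cpu_pct": 85.0, "mem_pct": 90.0, "disk_pct": 80.0, "io_wait_pct": 10.0}

# Any metric over its threshold becomes an alert worth investigating.
alerts = [name for name, value in host_sample.items() if value > thresholds[name]]
print("resource alerts:", alerts)
```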
Load Testing Metric #4: App Metrics
If you have created an application, or if your APIs connect to one, you’d be remiss not to investigate how each part of your app handles load and scales:
- Time spent in logic layer - Which layer slows down with an increased load? Which layers scale or don’t scale well?
- Number of calls in logic layer - How often are you calling internal Web services? How often are you calling into your critical APIs? Are they your own APIs or others’?
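One lightweight way to answer both questions is to instrument each logic layer with a timing decorator. This Python sketch uses hypothetical layer and function names to show the idea:

```python
# Sketch: tracking time spent and call counts per logic layer.
# Layer names and the fetch_user function are hypothetical examples.
import time
from collections import defaultdict

layer_time = defaultdict(float)   # seconds spent per layer
layer_calls = defaultdict(int)    # number of calls per layer

def track(layer):
    def decorator(fn):
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                layer_time[layer] += time.perf_counter() - start
                layer_calls[layer] += 1
        return wrapper
    return decorator

@track("data-access")
def fetch_user(user_id):
    time.sleep(0.01)  # stand-in for a database or internal API call
    return {"id": user_id}

fetch_user(42)
print(layer_calls["data-access"], f"{layer_time['data-access']:.3f}s")
```

Comparing these per-layer totals between low-load and high-load runs reveals which layer stops scaling first.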
Load Testing Metric #5: API Metrics
Your API’s performance affects mobile and Web apps, which means increasingly impatient users will quickly uninstall your app or Google your competitor. As we say at SmartBear, your service level agreement (SLA) is a promise that you cannot afford to break. API load testing metrics get into specific kinds of throughput:
- Transactions per second (TPS) - of all the transactions submitted, how many complete each second and how many have to queue?
- Bits per second (BPS) - the number of bits transferred divided by the elapsed time
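A short sketch showing how TPS and BPS fall out of a run’s raw counts (all numbers here are illustrative, not measured from any real system):

```python
# Sketch: deriving TPS and BPS from a completed load-test run.
completed_transactions = 18_000      # transactions that finished in the run
bytes_transferred = 45_000_000      # total payload moved during the run
elapsed_s = 120.0                    # run length in seconds

tps = completed_transactions / elapsed_s   # transactions per second
bps = bytes_transferred * 8 / elapsed_s    # bits per second (8 bits per byte)
print(f"TPS={tps:.0f}, BPS={bps:,.0f}")
```

Note the factor of 8: throughput tools often report bytes, while BPS is conventionally quoted in bits.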
Measure These Metrics Easily With the Right Tools
As you get started with your load testing, SmartBear has the tools you need to ensure that your APIs perform flawlessly under various traffic conditions. LoadUI Pro is the industry’s leading API load testing tool that is great for beginners, because it’s scriptless and allows for easy reuse of your functional API tests from SoapUI Pro.
LoadUI Pro allows you to quickly get started and:
- Save time & resources by building load tests from pre-configured templates in just a few clicks
- Create real-life traffic patterns from on-premises servers or in the cloud
- Understand server performance by visualizing the effects of load on your servers with real-time monitoring
- Quickly analyze results by collecting advanced performance metrics for your load test runs and benchmarking them against past tests
- Reuse your existing functional test cases from SoapUI Pro for increased efficiency