5 Best Practices for Data-Driven API Testing

Once you’ve decided to take the plunge and begin applying data-driven testing procedures to your API quality assurance efforts, you’ll quickly start reaping the rewards of this highly flexible strategy. The advantages of a data-driven testing approach include:

  • Comprehensive application logic validation
  • Accurate performance metrics
  • Efficient technology stack utilization
  • Synchronization with Agile software delivery techniques
  • Automation-ready tests

Follow these five simple best practices in your data-driven API tests, and you’re sure to see worthwhile results!


1. Use Realistic Data

This point may seem intuitive, but the more closely your test data reflects the conditions the API will encounter in production, the more comprehensive and accurate your testing process will be.

The best way to ensure that your test data is realistic is to start at the source – the business procedures that your API was designed to support.

It’s incumbent upon you to be mindful of the gap between business users and API testers: make it a priority to understand the rationale behind the API, along with the information being sent to it, both in design and in practice.

It’s also important to consider that there may be numerous, non-obvious interrelationships among data: certain input values may be contingent on other information being transmitted to the API, and the same conditions may apply to returned data.

With the proper set of tools, you should be able to accurately represent these relationships in your test data.
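As a tool-agnostic illustration of those interrelationships, here is a minimal Python sketch that derives dependent fields from the inputs instead of hard-coding them. The endpoint is not involved yet; the field names and the country-to-currency rule are illustrative assumptions, not a real business spec.

```python
# Sketch: building interrelated test data for a hypothetical order API.
# Field names and the currency rule are illustrative assumptions.

def build_order_record(line_items, country):
    """Derive dependent fields from the inputs instead of hard-coding them."""
    subtotal = sum(item["price"] * item["qty"] for item in line_items)
    # The currency is contingent on the shipping country, mirroring a
    # real business rule rather than an arbitrary standalone value.
    currency = {"US": "USD", "GB": "GBP", "DE": "EUR"}.get(country, "USD")
    return {
        "lineItems": line_items,
        "country": country,
        "currency": currency,
        "subtotal": round(subtotal, 2),
    }

order = build_order_record(
    line_items=[{"sku": "A-100", "price": 19.99, "qty": 2}],
    country="GB",
)
assert order["currency"] == "GBP"   # dependent field stays consistent
assert order["subtotal"] == 39.98   # derived, not hand-entered
```

Generating records this way keeps the dependent fields consistent no matter how many rows of test data you add.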


2. Test Positive and Negative Outcomes

Most people think only of monitoring positive responses from their APIs: transmitting valid data should result in a server-side operation being successfully completed and a reply to that effect returned to the API’s invoker.

But this is just the start - it’s equally important to confirm that sending incorrect or otherwise invalid parameters to the API triggers a negative outcome, which is commonly an error message or other indication of a problem.

Furthermore, functional tests can be configured to gracefully cope with error conditions that would normally halt the test.

This method of API evaluation frees the tester from having to wade through the full set of results to home in on a point of failure.

Instead, only those cases where an anticipated positive outcome came back negative - or vice versa - need to be investigated.
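To make this concrete, here is a minimal sketch using pytest and requests in which valid and invalid inputs sit side by side in the same data set, each paired with its expected status code. The URL, payloads, and status codes are illustrative assumptions, not a real service.

```python
# Sketch: one parameterized test covering positive and negative outcomes.
# The endpoint and payloads are hypothetical.
import pytest
import requests

BASE_URL = "https://api.example.com/orders"  # hypothetical endpoint

CASES = [
    # (payload, expected_status) -- valid and invalid inputs together
    ({"sku": "A-100", "qty": 1}, 201),   # valid order -> created
    ({"sku": "A-100", "qty": -5}, 400),  # negative quantity -> rejected
    ({"qty": 1}, 400),                   # missing sku -> rejected
]

@pytest.mark.parametrize("payload, expected_status", CASES)
def test_order_outcomes(payload, expected_status):
    resp = requests.post(BASE_URL, json=payload, timeout=10)
    # A 400 response is a *pass* when the data row expects rejection,
    # so error conditions don't halt the run -- they're just more cases.
    assert resp.status_code == expected_status
```

Only the rows where the actual outcome disagrees with the expected one show up as failures, which is exactly the triage-friendly behavior described above.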


3. Use Data to Drive Dynamic Assertions

Assertions are the rules that express the expected response to any given API request.

They’re used to determine whether the API is behaving according to its specification, and are thus the primary metrics for quality. Many testers make the mistake of hard-coding these guidelines, which introduces unnecessary maintenance overhead and brittleness into the API evaluation process.

On the other hand, dynamic assertions are flexible and can vary from one API request to another.

For example, an e-commerce shipping calculator API probe may transmit one row of test data for a sale of $50, which should result in free shipping (i.e. a reply stating that the shipping charge will be $0).

The next row of test data may be for an order valued at $49.99, which should result in a shipping charge of $5.99. A dynamic assertion lets the tester store the expected response of $0 alongside the $50 input order, and $5.99 alongside the $49.99 order.

New test scenarios can then easily be added to the set of input data without requiring any changes to the functional test itself. And if the shipping policy changes, only the test’s data and assertions need to change – everything else will remain the same.
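Here is a minimal Python sketch of that shipping example, with the expected charge traveling alongside the input in each data row. The endpoint, parameter names, and response field are illustrative assumptions; in practice the rows would live in an external CSV file rather than inline.

```python
# Sketch: dynamic assertions driven by the data itself.
# Endpoint, parameter, and response field names are hypothetical.
import csv
import io

import requests

# Stand-in for an external shipping_cases.csv file.
SHIPPING_CASES = io.StringIO(
    "order_total,expected_shipping\n"
    "50.00,0.00\n"
    "49.99,5.99\n"
)

def test_shipping_calculator():
    for row in csv.DictReader(SHIPPING_CASES):
        resp = requests.get(
            "https://api.example.com/shipping",   # hypothetical endpoint
            params={"orderTotal": row["order_total"]},
            timeout=10,
        )
        # The dynamic assertion: the expected value comes from the row,
        # so the test code never changes when scenarios are added.
        assert resp.json()["charge"] == float(row["expected_shipping"])
```

Adding a new scenario is just a new CSV row, and a shipping-policy change only touches the data file, never the test logic.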


4. Track API Responses

Many testers fixate on the success or failure of each API invocation and discard the set of responses after they’ve finished running their functional tests.

That’s a shame, because replies from an API are very useful artifacts. Without recording these test results, important history is lost.

If an API undergoes multiple changes and a new error is uncovered during the regression testing process, it can be a monumental task to determine precisely which modification caused the flaw. Consulting a library of stored API requests and responses makes identifying the moment that the new problem occurred – and correcting it – much less of a hassle.
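One lightweight way to keep that history is to append every request/response pair to a timestamped log as the tests run. This is a minimal sketch assuming a JSON Lines file and a requests-style response object; the file layout and field names are illustrative choices, not a prescribed format.

```python
# Sketch: persisting each request/response exchange for later forensics.
# The JSON Lines layout and field names are illustrative assumptions.
import json
from datetime import datetime, timezone

def record_exchange(log_path, request_payload, response):
    """Append one request/response pair to a JSON Lines history file."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "request": request_payload,
        "status": response.status_code,
        "response": response.text,
    }
    with open(log_path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry) + "\n")
```

With a log like this per build, pinpointing which change first produced a bad response becomes a matter of diffing files rather than archaeology.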


5. Repurpose Data-Driven Functional Tests for Performance and Security

It's worth noting that many organizations rely on unrealistic, narrowly focused performance and security tests that are further hamstrung by small sets of hard-coded test data.

Since it takes significant time and effort to set up a properly configured, adaptable data-driven functional test, once you’ve made that investment, why not use it for more than one objective?

Reusing a data-driven functional test introduces a healthy dose of reality to the performance and security evaluation processes, and great tools, like ReadyAPI, make this transition easy.
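As a rough illustration of the idea, the sketch below replays the same functional data set concurrently as a crude load probe. Dedicated tools like ReadyAPI handle this far more rigorously; the endpoint and data rows here are the same hypothetical ones used earlier.

```python
# Sketch: reusing functional test data for a naive concurrency probe.
# A crude stand-in for a real load-testing tool; endpoint is hypothetical.
from concurrent.futures import ThreadPoolExecutor

import requests

# Reuse the functional rows, repeated to generate sustained traffic.
CASES = [{"orderTotal": "50.00"}, {"orderTotal": "49.99"}] * 50

def hit_api(params):
    resp = requests.get("https://api.example.com/shipping",
                        params=params, timeout=10)
    return resp.elapsed.total_seconds()

with ThreadPoolExecutor(max_workers=20) as pool:
    latencies = list(pool.map(hit_api, CASES))

print(f"max latency: {max(latencies):.3f}s over {len(latencies)} calls")
```

Because the traffic comes from realistic, interrelated functional data rather than a single canned payload, the performance picture it paints is correspondingly more believable.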


To get started with data-driven testing, download ReadyAPI. To learn more about how ReadyAPI can support your data-driven testing strategy, watch the video on our data-driven testing documentation page.


Learn more:

Common Data Driven Testing Obstacles

The Gap Between Testing Goals and Reality