Common Data-Driven API Strategy Hurdles

Over the past several years, the role of the API in the modern enterprise has grown exponentially. Long gone are the days when an API was treated solely as an afterthought – a mere set of tacked-on calls to an existing application, with little additional value to anyone other than a narrow developer community.

Today, businesses - regardless of whether or not they operate in the technology industry - place increased emphasis on delivering APIs. In many cases, the API is now the primary mechanism by which partners and consumers interact with an enterprise’s business systems.

The API has now evolved into an absolute fact of life in nearly every organization, from mid-size firms to global giants. Whether they’re employed internally, externally, or both, APIs are assets that connect systems, streamline workflows, and make every type of integration possible. In fact, beyond improving operational efficiency and enabling cross-system communication, APIs now serve as competitive differentiators for many organizations. It’s no exaggeration to state that technology-driven businesses such as Uber, Airbnb, or eBay live and die on the strength of their APIs, and this degree of reliance is spreading across every industry.

With all this in mind, it’s more important than ever to treat the API with the same attention and care as any other mission-critical enterprise asset. This means that API quality and performance are absolutely non-negotiable.

But how do you ensure that your API is ready to handle the seemingly infinite possible conditions that it will encounter in the real world?

One powerful, yet surprisingly simple technique is to feed your tests with vast amounts of representative data to help ensure that every conceivable use case has been simulated in your quality assurance process.

This solution is known as data-driven API testing, and it’s a well-proven quality assurance strategy that separates the data that powers the functional test from the underlying test logic.

This separation of data from logic permits a tester to quickly design a series of scenarios that execute the same steps, in the same order, every time, but with a variety of test data supplied at run time from sources such as spreadsheets, flat files, or relational databases.

This approach stands in direct contrast to the common practice where testers hard-code – or manually enter – input parameters and then eyeball the responses to ensure they’re correct. Testers can also store predicted responses alongside the messages that will be sent to their API, which makes it feasible to configure automated assertions that compare the expected results with the actual responses.
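
To make the pattern concrete, here’s a minimal sketch in Python using the requests library. The endpoint URL and the customers.csv columns are hypothetical, invented for illustration; the point is the separation of test data from test logic.

```python
import csv

import requests  # third-party HTTP client: pip install requests

# Hypothetical endpoint, invented for this sketch.
API_URL = "https://api.example.com/customers"

def run_data_driven_tests(csv_path):
    """Run the same test steps once per data row.

    Each row supplies both the inputs sent to the API and the expected
    response, so the test logic stays fixed while the data varies.
    """
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            # Build the request from the data row, not from hard-coded values.
            payload = {"name": row["name"], "country": row["country"]}
            response = requests.post(API_URL, json=payload, timeout=10)

            # Automated assertion: compare the expected result (stored
            # alongside the inputs) with the actual response.
            assert response.status_code == int(row["expected_status"]), (
                f"{row['name']!r}: expected {row['expected_status']}, "
                f"got {response.status_code}"
            )

# customers.csv might look like:
#   name,country,expected_status
#   Ada Lovelace,GB,201
#   ,GB,400              <- a missing name should be rejected
if __name__ == "__main__":
    run_data_driven_tests("customers.csv")
```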


Obstacles to Employing Data in API Testing

At first glance, it’s tempting to simply resurrect the same methodologies and best practices that have been so successful in evaluating graphical user interface (GUI) applications and apply them to SOAP and REST API testing. This is a mistake: APIs are very different types of assets, with their own distinct set of design patterns, access methods, usage characteristics, and risk exposures.

For example, when testing an application through its standard user interface, the array of potential inputs is greatly constrained by what’s physically possible in a GUI - after all, it’s not easy to hand-type a 2 GB video file into a text box in a browser or Windows application. On the other hand, it takes no extra effort to attempt to transmit that video file to a SOAP service or REST API.

The people or software that interact with the API can submit any kind of request; it’s the specification and application logic of the API itself that will determine whether or not the request content is permissible, and what type of response will be returned.
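
A short sketch illustrates the point. The endpoint here is hypothetical, and which rejection codes count as acceptable would depend on the API’s own contract; the idea is simply that a tester can transmit input no GUI would ever allow, and assert that the API rejects it deliberately.

```python
import requests  # pip install requests

API_URL = "https://api.example.com/profile/avatar"  # hypothetical endpoint

# 50 MB of raw bytes: trivial to send to an API, nearly impossible via a GUI.
oversized_payload = b"\x00" * (50 * 1024 * 1024)

response = requests.post(
    API_URL,
    data=oversized_payload,
    headers={"Content-Type": "application/octet-stream"},
    timeout=30,
)

# The test passes only if the API rejects the request in a controlled way
# (e.g., 400 Bad Request or 413 Payload Too Large), not by failing opaquely.
assert response.status_code in (400, 413), (
    f"Expected a controlled rejection, got {response.status_code}"
)
```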

The allure of APIs is that they’re open and encourage interoperability, but these same attractions multiply testing complexity. Even the simplest API must be tested against a colossal number of permutations, far beyond what a person or even a large team can manage by hand. Simply stated, effective API testing demands that the enterprise apply a specialized collection of quality assurance practices that recognize the inherent complexity of distributed computing.
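
A small sketch shows how quickly the combinations pile up. The parameter names and candidate values below are illustrative, imagined for a hypothetical order-placement API; even three parameters with a few values each already yield dozens of cases.

```python
from itertools import product

# Each list deliberately mixes valid, boundary, and invalid values.
currencies = ["USD", "EUR", "JPY", ""]
quantities = [0, 1, 999_999, -5]
coupon_codes = ["SAVE10", "EXPIRED", None]

# itertools.product enumerates every combination mechanically.
test_cases = list(product(currencies, quantities, coupon_codes))
print(f"{len(test_cases)} combinations from just three parameters")  # 48

payloads = [
    {"currency": c, "quantity": q, "coupon": k} for c, q, k in test_cases
]
# Each payload would then be posted to the API under test and its response
# asserted, exactly as in a file-driven data-driven test.
```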

Data-driven testing is a major part of this methodology, but many enterprises encounter difficulty trying to properly incorporate it into their day-to-day operations. Let’s look at a few factors that contribute to this shortfall.


Time Pressures

Diverse influences such as agile delivery practices, competitive business pressures, outdated testing methodologies, and unyielding schedules team up to place enormous pressure on software developers and quality assurance professionals. The relentless drive to continually release new API functionality means that there’s rarely sufficient time to properly architect reusable tests capable of effectively incorporating data.

Instead, testers tend to reuse the same handful of static, sample records. This tactic is easily replicable (and after all, consistency is important), but it falls far short of accurately representing real-world usage or the full set of API capabilities. For example, cutting corners by using test records from a collection of five customers misses out on the nearly infinite combinations of data that will arise in production.
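
By way of contrast, here’s a brief sketch of generating varied records with nothing more than Python’s standard library. The field names and value ranges are invented for illustration; the point is that hundreds of deliberately varied records cost little more effort than five hand-picked ones.

```python
import random
import string

def random_customer():
    """Build one synthetic customer record with deliberately varied shapes."""
    name_len = random.randint(1, 64)  # exercise short and long names
    name = "".join(random.choices(string.ascii_letters + " '-", k=name_len))
    return {
        "name": name.strip() or "X",                      # never fully blank
        "age": random.choice([0, 17, 18, 42, 120, -1]),   # boundary values
        "email": random.choice(
            ["user@example.com", "no-at-sign", "", "a@b.co"]
        ),
    }

# Hundreds of varied records, fed through the same test logic that a static
# five-record file would use.
customers = [random_customer() for _ in range(500)]
print(customers[0])
```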


Communication Barriers

Many organizations segregate the business analysts and users who specify the functionality to be supplied by an API from the QA team that is tasked with testing it. By the time the testing phase begins, the specification has gotten lost in translation, and the testers – who may even be employees of a third-party outsourcing firm working on the other side of the globe – aren’t quite clear on the exact purpose of the API itself, much less the nuances of its inbound and outbound parameters.


This lack of awareness makes it impossible to design effective, far-reaching data-driven tests. Without precise knowledge of the API’s business goals, it’s natural for the QA team to design test cases around a relatively limited set of hard-coded data.


Inadequate Tooling

Even with the best of intentions, it’s impossible to quickly and easily create data-driven API tests using much of the software tooling on the market.

With the exception of technologies such as SmartBear’s Ready! API, most API evaluation tools have been produced for software developers. These utilities frequently require extensive scripting to conduct even the most rudimentary API interactions, much less a full set of automated data-driven probes.

Faced with tight schedules, functionally poor testing tools, and the job of writing copious scripts, it’s no wonder that many testers take the easier path of hand-typing sample API input data and glancing at the results to detect anomalies.


Adopting a data-driven API testing methodology is an essential prerequisite for ensuring that your APIs are of production quality. Beyond validating functionality, these tactics also serve as a foundation for confirming that your APIs will deliver on their performance and security obligations.


Download ReadyAPI today to get started with data-driven testing.


Learn more:

5 Best Practices for Data Driven API Testing

The Gap Between Your API Testing Goals and Reality