Automation will play an increasingly important role in IoT testing

There’s no doubt that a heightened level of human involvement is necessary in IoT testing. But having more to test means that you need to prioritize automating whatever you can.

In the future of the Internet, there will never be enough testers because there will be numerous layers and devices to test for functionality, interoperability, security, and more. This means the demand for better testing tools and much more sophisticated testing models will only grow in the connected world. Modern QA will become increasingly automated; manual testing will still happen, but its importance and necessity will diminish, although that day is still a long way out.

To get started with Internet of Things test automation, Gupta points out that “if someone is an IoT developer and they are already using a test automation tool, they can build their particular tool on top of that protocol.”

The general consensus is that testing of the API that connects our connected world should be automated as much as possible, because when the API breaks down, you’re left with a dumb phone and the same remote-controlled planes we’ve had for decades. Without a fully functioning API, there’s simply no more Internet in those things.

“There is a part of testing that can be completely automated: checking,” Bolton said. “By our definition, checking is the process of applying algorithmic decision rules in order to observe and evaluate some function in the product. You can run a program that inputs a two, then calls a plus function, then inputs another two, and returns a four. That part can actually be automated. You can automate a series of checks and activities.”
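
To make Bolton’s distinction concrete, here is a minimal sketch of an automated check using Python’s built-in unittest framework; the plus() function is just a stand-in for whatever product function is under test:

```python
# A minimal illustration of "checking": algorithmic decision rules that can
# run unattended. plus() stands in for any product function under test.

import unittest


def plus(a, b):
    """The product function being checked."""
    return a + b


class PlusChecks(unittest.TestCase):
    def test_two_plus_two_returns_four(self):
        # The decision rule is algorithmic: the observed output either
        # equals the expected value or the check fails.
        self.assertEqual(plus(2, 2), 4)

    def test_a_series_of_checks(self):
        # A whole series of checks can be automated the same way.
        for a, b, expected in [(0, 0, 0), (1, 2, 3), (-2, 2, 0)]:
            self.assertEqual(plus(a, b), expected)


if __name__ == "__main__":
    unittest.main()
```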

Bolton reminds us that we need to judge our automation tools continuously to answer these questions, among others:

  • Is the programmer able to use it easily?
  • Does the function have its own level of error checking that programmers can rely on?

Knopf offered more parts of IoT testing that can be automated, including:

  • Discovering URLs
  • Doing port scanning (see the sketch after this list)
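
As a rough illustration of the port-scanning item above, the sketch below uses only Python’s standard socket library; the host and port range are placeholders, and it should only ever be pointed at devices you are authorized to probe:

```python
# Bare-bones TCP port scan: try to connect to each port and record the
# ones that answer. Dedicated scanners are far more capable; this only
# shows the shape of the automation.

import socket


def scan_ports(host, ports, timeout=0.5):
    """Return the ports on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising.
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports


if __name__ == "__main__":
    print(scan_ports("192.168.1.50", range(1, 1025)))  # placeholder host
```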

He goes on to say, “I’m a big believer in writing test code and having more test code than actual production code.” However, Knopf strongly warns, “For me, automation is critical, but, from the security standpoint, you can’t automate anything,” though “from both the functional and performance [standpoints], all those things can be automated.” He said that if you do black-box testing, you run through these scenarios and verify the data. If you’ve got an API, a mobile application, and a web interface, how do you know where they break down?

  • Stub out the applications
  • Automate each part of the testing

Knopf says you have to keep asking, “How do I know my API is working right? I want to understand if I can get anything improper out of the API. I can stub the mobile application, because everything I’m testing is the API, and send every range of inputs into it.”
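
A sketch of that approach might look like the following, using the third-party requests library; the endpoint URL and payload fields are hypothetical, invented only to show the mobile app being stubbed out while a range of good and improper inputs is sent straight at the API:

```python
# Stub out the mobile app entirely and drive the API directly with normal,
# boundary, and improper inputs, verifying the responses each time.

import requests

API_URL = "https://api.example.com/v1/readings"  # hypothetical endpoint

test_inputs = [
    {"device_id": "abc123", "value": 42},       # normal input
    {"device_id": "abc123", "value": -1},       # boundary value
    {"device_id": "abc123", "value": "forty"},  # wrong type
    {"device_id": "", "value": 42},             # missing identity
    {"unexpected": "field"},                    # malformed payload
]

for payload in test_inputs:
    resp = requests.post(API_URL, json=payload, timeout=5)
    # The API should never hand back anything improper: bad inputs deserve
    # a clean 4xx, never a 5xx or a stack trace in the response body.
    assert resp.status_code < 500, f"{payload} -> {resp.status_code}"
    assert "Traceback" not in resp.text, f"{payload} leaked internals"
```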

In another IoT automation use case, Altitude Angel performs static testing that covers code before it ships, along with:

  • Run-time testing for behavior based on dynamic inputs, simulating realistic drone paths
  • Security A/B testing and fuzz testing: the API expects certain inputs; what happens when they are different? (see the sketch after this list)
  • A testing simulator followed by human verification
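
As a rough sketch of the fuzz-testing item above, the code below takes an input the API expects, mutates it, and watches what happens when the input differs from the contract; the endpoint and payload shape are invented for the example:

```python
# Mutation-based fuzzing sketch: corrupt one field of a valid payload at a
# time and flag any server error. A 5xx on garbage input is a finding.

import copy
import random
import string

import requests

API_URL = "https://api.example.com/v1/flightplan"  # hypothetical endpoint

valid_plan = {"drone_id": "D-42", "altitude_m": 120, "waypoints": [[51.5, -0.1]]}


def mutate(plan):
    """Randomly corrupt one field of an otherwise valid payload."""
    fuzzed = copy.deepcopy(plan)
    key = random.choice(list(fuzzed))
    fuzzed[key] = random.choice([
        None,
        -999999,
        "".join(random.choices(string.printable, k=64)),
        {"nested": "garbage"},
    ])
    return fuzzed


for _ in range(100):
    resp = requests.post(API_URL, json=mutate(valid_plan), timeout=5)
    if resp.status_code >= 500:
        print("Server error on fuzzed input:", resp.status_code)
```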

Altitude Angel also runs penetration testing, which acts as a sort of insurance. They outsource it before releases because the outside testers run scripts, but “they’re still not as good as my team who are building my software and know how to attack it.” They use pen testing to find things that are open, as well as internal load-testing automation. Pen testing helps them find issues to hone their A/B testing. Gupta recommends that penetration testing be performed between the device and the mobile app, and between the device and the server it speaks to, referencing Fitbit’s very public security vulnerabilities, in which data wasn’t being sent securely to the server.

“We perform extensive penetration testing of the entire IoT device. We assure that at that particular time, it doesn’t have any other security issues. We give a certification.”

He admits that in this rapidly changing world, “maybe the next week a new security vulnerability can come out. It’s not secure for the lifetime. You have to do the security testing over a period of time, depending on how often the device updates code, making sure you do security testing every month or every two months.”

Gupta recommends that you “start testing the mobile app and web app, for which they probably have a tool,” but he warns that for “the rest of the problem, the hardware device, there aren’t a lot of testing tools for how the device actually works,” offering up that maybe you can “plug your device into the debug ports to test that.” He points out that, for many of his clients, “the overall architecture of IoT is something a bit different from the architecture [they know]. They might not be familiar with testing the mobile device and [it takes] a lot of manual effort. Their functional testing and security testing were mostly manual steps around the test cases.”

Test design must evolve in the endlessly scaling Internet of Things

Redesigning our lives online necessitates redesigning testing as well. Trifa says that you can automate “the really nasty low-level engineering bits, yes, [but] the complexity of data is going to be hard. The more data you put in, the more exponential the mixing of data, [and the] harder to automate.” Gerrard observes that “what we haven’t got are good enough tools to create test design at scale. If I’m testing a traffic management system for a medium-sized town, you can’t just say ‘Let’s create some random locations and destinations for cars.’ They can’t just be randomly placed,” when you have countless variables like one-way streets, holes in the road, traffic lights, and not being able to drive through walls.

In the Internet of Things, test design shifts from hand-crafting a few tests to “designing tests by patterns and then randomizing within that legitimate pattern which becomes a test model.”
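
A toy sketch of that idea: encode the legitimate pattern as a model, here an invented one-way street graph, and randomize within it so that every generated trip is physically possible rather than randomly placed:

```python
# Test design by pattern: the model (a directed street graph) constrains
# the randomness, so no generated trip drives through walls or the wrong
# way down a one-way street.

import random

# Directed edges model one-way streets: travel A -> B only if B is listed.
STREETS = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["B", "D"],
    "D": ["A"],
}


def random_trip(max_hops=6):
    """Generate one random but legal trip through the street model."""
    node = random.choice(list(STREETS))
    trip = [node]
    for _ in range(max_hops):
        node = random.choice(STREETS[node])
        trip.append(node)
    return trip


if __name__ == "__main__":
    # A model like this can emit thousands of valid scenarios on demand.
    for _ in range(5):
        print(" -> ".join(random_trip()))
```

The model, not any individual test, becomes the maintained artifact: the tests fall out of it on demand.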

Speaking of the current test automation tools, Gerrard continued: “The models we have are viable for actuaries, but we need to do that for cars, trash cans, pharmacists, fire and rescue, and police. Historically we only tend to model for the purpose of requirements and then throw the models away. We now need to create trusted models for both developing and testing. The testers’ contribution is to challenge those models and refine them.”

“We don’t have millions of tests. It’ll shift from testers running their tests manually to using tools to automate it.” He offered the example of driverless cars: how could Google or any other company ever test them in a bustling city like Shanghai? You have to do simulations. Offering up the example of air traffic control, he says we already have sophisticated simulation technology, but that it’s “ferociously expensive.”

Gupta contends that to design tests for the Internet of Things, “you have to write the test cases for all the possible things that any user could do with the device.” If the device has four buttons, you can’t just test the workflow of one, two, three, then four: “Not just test for the normal working flow but also check for all the probabilities and see how the device acts for them.”
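
As a small sketch of that combinatorial mindset, the following enumerates every press order of a four-button device instead of only the happy path; press_sequence() is a hypothetical hook into a real test harness:

```python
# Enumerate all orderings of four button presses so each permutation can be
# fed to the device under test, not just the normal 1, 2, 3, 4 workflow.

from itertools import permutations

BUTTONS = [1, 2, 3, 4]


def press_sequence(sequence):
    """Hypothetical harness call: drive the device and record its state."""
    print("pressing", sequence)  # replace with real device I/O


for order in permutations(BUTTONS):
    press_sequence(order)  # 4! = 24 orderings in total
```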

“Again, with the heightened level of unpredictability, out-of-the-box testing becomes the primary form of testing in the Internet of Things, even as you try to automate it.”

Menon echoed Thurai’s earlier point that many of “the players that are coming out aren’t the big players. They can’t be tied down to an expensive product.” That means test automation has to make testing easy and inexpensive enough to be as thorough as possible.

Menon says you can start by asking, “How can you leverage existing, probably open-source tool sets?” There also need to be protocol generators that allow you to test your own systems as well as simulate multiple devices. Gerrard says we “need a performance-testing tool which can generate a lot of traffic, and we need to feed those agents meaningful tests that have value. We are very good at crafting tests one-to-one, you and the machine, but we need to do that by the million, and that’s a different game.”
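
As a rough sketch of what simulating multiple devices might look like, the following uses asyncio with the third-party aiohttp library to stand up a thousand simulated devices, each posting a plausible reading to a placeholder endpoint:

```python
# Load-generation sketch: many concurrent simulated devices sending
# meaningful (model-driven) readings rather than pure noise.

import asyncio
import random

import aiohttp

API_URL = "https://api.example.com/v1/telemetry"  # placeholder endpoint
NUM_DEVICES = 1000


async def simulate_device(session, device_id):
    """One simulated device sending a plausible temperature reading."""
    payload = {
        "device_id": f"sim-{device_id}",
        # Gaussian around room temperature keeps the data meaningful.
        "temperature_c": round(random.gauss(21.0, 3.0), 1),
    }
    async with session.post(API_URL, json=payload) as resp:
        return resp.status


async def main():
    async with aiohttp.ClientSession() as session:
        statuses = await asyncio.gather(
            *(simulate_device(session, i) for i in range(NUM_DEVICES))
        )
        print("non-2xx responses:", sum(s >= 300 for s in statuses))


if __name__ == "__main__":
    asyncio.run(main())
```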

The next step is finding the existing test tooling that can help with it all.