Data-driven tests are like the early promise of Java: Write once, run many. With some side effects. Just like Java.
Here’s a very VERY simple example. This is an API for authentication:
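(A minimal sketch; the `Authenticator` name and the exact signature are assumptions for illustration.)

```java
// A hypothetical authentication API: returns true if the credentials are accepted.
public interface Authenticator {
    boolean authenticate(String username, String password);
}
```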
Without looking at the code behind it, you probably think it’s a good idea to test this API. So how many cases do you think it will take to test this thoroughly?
Here’s one:
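(A sketch in JUnit; the test class, the captains, and the passwords are all made up for illustration.)

```java
import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

public class AuthenticationTest {
    // PasswordAuthenticator is a hypothetical implementation of the API above.
    private final Authenticator auth = new PasswordAuthenticator();

    @Test
    public void authenticateKirk() {
        // A known user with the right password should get in.
        assertTrue(auth.authenticate("Kirk", "Enterprise1"));
    }
}
```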
And another:
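(Same class, another hard-coded case.)

```java
@Test
public void authenticatePicard() {
    // Another known-good pair, written out as its own test.
    assertTrue(auth.authenticate("Picard", "EarlGreyHot"));
}
```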
And one more:
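(And a rejection case, in the same class.)

```java
@Test
public void authenticateJanewayWithWrongPassword() {
    // A bad password should be rejected.
    assertFalse(auth.authenticate("Janeway", "coffee"));
}
```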
What do these tests have in common? Besides Star Trek captainship?
First, the structure is the same: sending the information, then checking the response for the resulting value.
Then, look at the names of the tests. Let’s assume that the authentication algorithm doesn’t do any “good” or “bad” pattern identification. That is, the algorithm is agnostic to the values we send.
If there were some pattern matching there, hackers would probably be having a party in there right now. So let’s assume it’s agnostic. All we need to do is send different values and check whether they get authenticated or not.
(By the way, this is a real scenario I bumped into years ago. A client collected all kinds of actually-used values and wanted to run all of them as a regression test against their system. And they kept adding values all the time.)
Without any classification of values, and as in the original story, the list of cases gets larger every day. We end up with hundreds of test cases, all written the same way. If one of them fails, we’d know which one, but there’s no clue in the name about the case, because the only difference between this test and all the others is the input value.
Data-Driven Tests To The Rescue
That’s exactly the case for using Data-Driven tests. Our data-driven test looks like this:
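(A sketch using JUnit 5’s @ParameterizedTest and @CsvSource; the class name and the values are illustrative.)

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

public class AuthenticationDataDrivenTest {
    // PasswordAuthenticator is a hypothetical implementation of the API above.
    private final Authenticator auth = new PasswordAuthenticator();

    @ParameterizedTest
    @CsvSource({
        "Kirk,    Enterprise1, true",
        "Picard,  EarlGreyHot, true",
        "Janeway, coffee,      false"
    })
    public void authenticate(String username, String password, boolean expected) {
        // One test body, many cases: each CSV row is a separate run.
        assertEquals(expected, auth.authenticate(username, password));
    }
}
```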
The test is written once and runs all the cases. By the way, data-driven tests are also called parameterized tests. Why?
That’s because, historically, automated tests would not have parameters. You see, tests are run by a test framework. The framework doesn’t know, or care, what it runs. But in order to run all the tests in the world, they all need to look the same.
(If you’re into SOLID principles, that’s an implementation of the OCP. The Open-Closed Principle tells us we can extend the system without needing to change it. Our “system” here is the framework + tests. We want to add as many tests as we can, without recompiling the framework. The trick is keeping the test signature the same. Principles of software, in real life. Amazing!)
Anyway, all tests need to have the same signature. So to make things simple, they would all have no parameters. Ever since JUnit, the first of its name. And version.
Back to our case. In order to run the test multiple times with different values, we want to inject the values, and the usual way is to send them as parameters. Hence the “parameterized tests” nickname. For each case, we need to pass both the inputs (the username and password) and the expected result. So for every case, we’ll be sending 3 parameters.
Write this test once and run it many times.
There are two types of data we can send: input data (in our case, name and password) and reference data (the expected result). Where does the data come from? It can be hard-coded, a (long) CSV file, a database, an external server – depending on the tools you use, you can decide how to feed the test.
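For example, with JUnit 5 the same test can read its cases from a CSV file on the classpath (the file name here is made up):

```java
@ParameterizedTest
@CsvFileSource(resources = "/authentication-cases.csv", numLinesToSkip = 1)
public void authenticate(String username, String password, boolean expected) {
    // Same test body; the cases now live in src/test/resources/authentication-cases.csv.
    assertEquals(expected, auth.authenticate(username, password));
}
```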
Most test frameworks support data-driven tests in some way. Postman, which is not a test framework, also supports data-driven tests. And it looks like an awesome solution for someone who doesn’t like writing too much test code.
Some Terms And Conditions
Of course there’s small print. First, all tests of the same kind must have exactly the same structure. Let’s say you usually pass the 3 parameters, but some cases also depend on the date, so they need a 4th parameter. A different set of inputs and reference data requires a different test.
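Something like this, say (the date parameter and the authenticateAt method are hypothetical; JUnit 5 converts ISO-8601 strings to java.time.LocalDate for us):

```java
@ParameterizedTest
@CsvSource({
    "Kirk, Enterprise1, 2024-01-01, true",
    "Kirk, Enterprise1, 2030-01-01, false"
})
public void authenticateOnDate(String username, String password,
                               LocalDate date, boolean expected) {
    // 4 parameters instead of 3, so it's a separate test with its own cases.
    assertEquals(expected, auth.authenticateAt(username, password, date)); // hypothetical overload
}
```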
But that’s not the real problem.
The main issue is the general-purpose reporting. Since we’re executing the same algorithm with many different inputs and outputs, we give away our chance to name (or classify) specific cases, which sometimes gives us better insight into the responsible code. If there’s a failure, we don’t have any information beyond “this case is different”.
If there is some meaningful way to group values, we should use it. For example, if our passwords must have at least one capital letter, we can name one test authenticateWithCapital, another authenticateWithNoCapital, and another authenticateWithMoreThanOneCapital.
These are still data-driven tests, but each runs a different group of values. If one of the sets fails, the name can tell us a bit more.
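(A sketch of two such groups; the values are made up.)

```java
@ParameterizedTest
@CsvSource({ "Kirk, Enterprise1", "Picard, EarlGreyHot" })
public void authenticateWithCapital(String username, String password) {
    // Passwords with a capital letter should pass the rule.
    assertTrue(auth.authenticate(username, password));
}

@ParameterizedTest
@CsvSource({ "Kirk, enterprise1", "Picard, earlgreyhot" })
public void authenticateWithNoCapital(String username, String password) {
    // Passwords without a capital letter should be rejected.
    assertFalse(auth.authenticate(username, password));
}
```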
Data-driven tests are great for algorithms and workflows – but because they are not specific, they mean more work later, when things go bust.
Did I mention Postman? Check out my “API Testing with Postman” workshop. It delivers.