Sometimes we mix up our test planning and test design. So, let’s talk about that.

You have a system, or a feature. You need to come up with the test cases. So you book a room, invite some friends, put some snacks on the table, and in you go.

Five hours later you come out, battered and bruised, but on the other hand, carrying a list of test cases.
Even better – your teammates are battered and bruised too! That’s always a plus.

The list of cases covers the “what”: what the system (or feature) does in different contexts and scenarios. Different inputs, different ordering, different timing, whatever.

That’s your test plan right there. A whole lot of test cases. Too many, in fact. You need to prioritize. Some of the cases are more important to check than others. And we don’t have all the time in the world, so…

Back into the room. Some more snacks, some more brawling, and we’ve got a prioritized test plan.

We’re done with the test planning part. Are we ready to test?

Not yet.

We could just jump in and do it, but let’s think about the “how” of these cases first. The outcome of testing is added confidence. We can get this confidence in different ways, and some of them are cheaper than others.

Which of these tests can be replaced by unit tests? It saves time if the developers supply proof that their parts work. And if something breaks, we’ll discover it early and fix it quickly.
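For example, here’s a minimal sketch of a scenario proven at the unit level instead of through the full system. The apply_discount rule and its numbers are made up for illustration:

import unittest

def apply_discount(total, customer_years):
    # Hypothetical business rule: 5% off for customers of 3+ years.
    return total * 0.95 if customer_years >= 3 else total

class DiscountTests(unittest.TestCase):
    def test_loyal_customer_gets_discount(self):
        self.assertAlmostEqual(apply_discount(100.0, 3), 95.0)

    def test_new_customer_pays_full_price(self):
        self.assertAlmostEqual(apply_discount(100.0, 1), 100.0)

if __name__ == "__main__":
    unittest.main()

Seconds to run, no environment to set up, and the scenario is checked on every build.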

Should we test every scenario through the UI? Sure, we’ll prove the system works, but it would be cheaper to drive some of them through APIs. Same confidence in less time.
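Something like this sketch, which drives an order scenario through an assumed HTTP API rather than the UI. The endpoint, payload, and response field are all hypothetical, and it assumes a system running locally:

import json
import unittest
import urllib.request

BASE_URL = "http://localhost:8080"  # assumed local test deployment

class OrderApiTests(unittest.TestCase):
    def test_create_order_returns_an_id(self):
        payload = json.dumps({"item": "widget", "quantity": 2}).encode()
        request = urllib.request.Request(
            BASE_URL + "/orders", data=payload,
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(request) as response:
            body = json.load(response)
        self.assertIn("orderId", body)  # hypothetical response field

if __name__ == "__main__":
    unittest.main()

No browser, no clicking through screens, and far less flakiness.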

Do we really need a fully populated database for every test case? Maybe some calls to the database can be mocked. Or maybe, instead of a fully deployed, highly configured database, we can run the test with an in-memory database. Tests will run quickly, no configuration necessary, and even better – no footprint. Nothing to clean up.
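Here’s a sketch of the in-memory option, using Python’s built-in sqlite3. The table and query are invented; the point is that the database is created, used, and gone within the test:

import sqlite3
import unittest

class InMemoryDbTests(unittest.TestCase):
    def setUp(self):
        # ":memory:" creates a throwaway database - no server, no files.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
        self.conn.execute("INSERT INTO users VALUES (1, 'Ada')")

    def tearDown(self):
        self.conn.close()  # nothing else to clean up

    def test_lookup_by_id(self):
        row = self.conn.execute(
            "SELECT name FROM users WHERE id = ?", (1,)).fetchone()
        self.assertEqual(row[0], "Ada")

if __name__ == "__main__":
    unittest.main()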

In my early years as a developer, we were testing software that controlled hardware. But do we need the actual hardware? Does it even exist? In my case, when I started testing my software, it was three years before we laid eyes on the prototype. I tested the software with simulators.
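Here’s what that looks like in miniature. The sensor interface and the control logic are hypothetical; the simulator just scripts the readings the real hardware would have produced:

import unittest

class SimulatedSensor:
    # Stands in for hardware that may not exist yet.
    def __init__(self, reading):
        self.reading = reading

    def read(self):
        return self.reading

def is_overheating(sensor, limit=100):
    # Hypothetical control logic under test.
    return sensor.read() > limit

class ControllerTests(unittest.TestCase):
    def test_flags_overheating(self):
        self.assertTrue(is_overheating(SimulatedSensor(120)))

    def test_accepts_normal_temperature(self):
        self.assertFalse(is_overheating(SimulatedSensor(80)))

if __name__ == "__main__":
    unittest.main()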

And that’s test design. Once we’ve decided “what” to test, we decide “how” to test it. We first decide what to focus on, and then check the price tag. If it fits, we’ll do it.

Talking about this, I didn’t even mention the “who”. Different people on the team have different knowledge and skills, and the “who” can also go into these decisions.

Now, a lot of the time we don’t do that separation, mostly because we “already know” how we’re going to test everything. But separation helps. We can make better decisions if we first tackle importance, and only then, after filtering out the non-essential things, decide on the “how”.

And that’s the difference between test planning and test design. We need both to be effective, and efficient, in testing.

Speaking of test planning and design, we discuss both, with hands-on exercises, at the “API Automation Testing” workshop. You should plan to attend.
