A lot of people use the term “Edge Case”. I find it offensive.
Ok, not really, but there’s a misuse of the term. While we’re at it, let’s throw in negative cases, happy and unhappy paths, etc.
We tie “cases”, which are behaviors, to code paths. It’s a mapping mechanism. When we’re “testing an edge case”, we’re saying there’s a test going through that “other” code path.
What is an edge case, really?
Maybe it’s the case the developer didn’t think through. Which is to say, there’s a risk here, and therefore we need to test more, better, and deeper. When we’re doing black-box testing, we’re mapping risk to behavior, and we assign high risk based on our perception of the developers, our history with them, collective trauma, and prejudice. We can also define the “edge” in probability terms: most of the time one piece of code will execute, but sometimes another will.
But then, if we look at the code, we’ll probably see the issues and behaviors. The edges are no longer there, because we’ve mapped the risks to code. And code doesn’t have edges.
Let’s take a calculator, for example (of course). Adding two numbers is the “regular” case, the “happy path”. Adding two very big numbers (that may get us an overflow error) is considered an edge case. What makes the two cases different? Hmm. An overflow exception is an “exception”. And therefore… hmm, what then?
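A minimal sketch of that calculator, assuming a hypothetical 32-bit integer limit (the function and its range are illustrative, not any real calculator’s spec). Once the overflow path is written down, the “exception” is just as defined a behavior as a successful addition:

```python
# Hypothetical bounded calculator: the overflow path is specified
# behavior, written and checked just like the "regular" path.
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def add(a: int, b: int) -> int:
    """Add two numbers within a 32-bit signed range."""
    result = a + b
    if not INT32_MIN <= result <= INT32_MAX:
        # The error is a defined result, not an "edge".
        raise OverflowError("result outside 32-bit signed range")
    return result
```

`add(2, 3)` returns `5`; `add(INT32_MAX, 1)` raises `OverflowError`. Both outcomes are written into the code, so neither is more of an “edge” than the other.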
Expect the unexpected
When we’re talking behavior, there are only two types: expected and unexpected. For expected behavior, we have requirements, and we address them in code. If we expect an error, our code can throw an exception, but that’s still expected behavior. We can decide that throwing the exception is not the behavior we expect, and change the code to fit the defined behavior.
For unexpected behavior, there is no requirement. We can explore it, and if there’s a bug, we might suspect “unhandled” behavior. We can map the behavior to the code, and when looking at it, call it an “edge”. But now it’s become existing behavior. If we don’t like it, we’ll change it.
Code doesn’t have edges
It doesn’t matter – if there’s code for that, there’s no “edge”. We’ve already done the analysis. Nor is a case “positive” or “negative”, even if we perceive that a “positive” behavior will happen far more often than a “negative” one. Both are expected and supported, and should be tested with the same rigor.
By assigning different names, we send a signal: This code should be tested more (or less) than the other one. But all code paths are equal. We expect and demand the same quality all around.
We can prioritize effort across expected behaviors. There are important behaviors we want to make sure work, less important behaviors that still need to work, and then there’s untested code. We can decide where to put more effort and where less.
But using terms like “edge”, “negative”, and the others is not helpful for this prioritization. They can even be confusing.
So, stop using these terms. Talk about defined behaviors and expected results. Ask if there’s code supporting those behaviors. Talk about what is more important to test more thoroughly.
And treat all code as positive code. Glass half-full kinda stuff.
Want to know more how I treat code? Check out the Clean Code workshop.
2 Comments
David V. Corbin · July 26, 2022 at 11:53 am
Basically a good article, but…
1) If I write code, I have expectations, and I have done analysis. I then ship a finished product. If someone else does something that I never expected, something not covered in my analysis, clearly there is no way for me to code for that.
2) The scope of possible behaviors is immense. What happens if other programs are running, what happens if L1 cache is flushed between instructions, what happens if processor affinity changes? I can probably produce 1000 distinct execution cases for a 3-5 line program in C (or Java or…). Clearly we are not going to test the millions that exist for even a moderate-size class.
3) Since we can only test a subset, there can be value in selecting the subset. Those that achieve the desired goal and produce an output of some value are typically highly valuable [they are happy paths]. Those that fail to produce the desired output (e.g. can’t complete a transaction) are unhappy experiences for the customer/user, and it may not matter significantly exactly how or why it failed; the simple fact it failed differentiates it from success.
Gil Zilberfeld · July 28, 2022 at 11:15 am
David,
I agree with 1 & 2. We can’t prepare for everything, and therefore, from the subset we have thought of, we do prioritization (either implicit or explicit) and decide what we want to test.
As for 3: a transaction failing may be unhappy from the user’s perspective. But when it happens, we want it to work as well as the “happy” path does. From a quality perspective, both should work and be tested well.