Spying on call chains with Jasmine

I'm more familiar with the mocking capabilities of rspec than I am with Jasmine's. I've used other mock frameworks in the past: NMock, NMock2, moq and OCMock to name a few.

One thing that I'm used to doing is creating a mock that expects a call on a method which returns another mock that also expects a call on a method. Let's break that apart. I basically want to be able to set up mocks for call chains like this:

something.first().second().third()


I do this because the value that's returned by the call to third is used somewhere, and I want to make sure that it's being used correctly.

With rspec-mocks, I do something like this:

two = mock('two')
two.should_receive(:third).and_return(42)

one = mock('one')
one.should_receive(:second).and_return(two)

something = mock('something')
something.should_receive(:first).and_return(one)

I've created similar constructions with other mock object frameworks, but when I went to do this using Jasmine's spies, I was completely lost. After a bunch of hacking around, I came up with this.

var something, one, two;

something = jasmine.createSpy('something');
one = jasmine.createSpy('one');
two = jasmine.createSpy('two');

something.first = jasmine.createSpy('firstFn').andReturn(one);
one.second = jasmine.createSpy('secondFn').andReturn(two);
two.third = jasmine.createSpy('thirdFn').andReturn(42);

But then I realized that the spies for something, one and two aren't really doing anything. They're just containers for the spy functions... that's when it dawned on me: Jasmine spies create functions... it's a functional approach to what I was assuming to be an Object-Oriented problem.

So now, I'm able to rewrite the code like this:

var something, one, two;

something = {};
one = {};
two = {};

something.first = jasmine.createSpy('firstFn').andReturn(one);
one.second = jasmine.createSpy('secondFn').andReturn(two);
two.third = jasmine.createSpy('thirdFn').andReturn(42);
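To see why plain objects holding spy functions are enough, here's a self-contained sketch that mimics Jasmine 1.x's createSpy(...).andReturn(...) with a hypothetical spyReturning helper (no Jasmine required; the helper is mine, not part of Jasmine's API):

```javascript
// Minimal stand-in for jasmine.createSpy(name).andReturn(value):
// a function that records its calls and returns a canned value.
function spyReturning(value) {
  function spy() {
    spy.calls.push(Array.prototype.slice.call(arguments));
    return value;
  }
  spy.calls = [];
  return spy;
}

var two = { third: spyReturning(42) };
var one = { second: spyReturning(two) };
var something = { first: spyReturning(one) };

// The whole chain resolves, and each spy remembers being called.
var result = something.first().second().third();
```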

And everything still works the way I expect. I hope this helps someone else that's new to Jasmine but has a brain that's programmed to solve problems with Object-Oriented approaches.

Tip: Finding Unused Cucumber Steps

I've been giving a Cucumber test suite some love over the last couple of days. Cleaning up the code. Refactoring steps to make them clearer, that sort of thing.

After playing in the code a little while, I got the suspicion that there were some steps that were not being used. I have zero love for dead code, so I followed this procedure to hunt them out.

  1. Run all the features and make sure everything is passing
  2. Comment out all of the steps
  3. Run all the features again
  4. Uncomment only the steps that are reported as missing
  5. Repeat from step 3 until everything is passing again
  6. Delete all of the steps that are still commented out

My suspicions were correct. Now there's a lot less code to maintain.

Misleading Metrics

Metrics are great. They provide valuable insight so that we can make good decisions and intelligent changes, while avoiding pitfalls and regressions.

However, there’s danger in optimizing the value of a single metric. We need to recognize that some metrics don’t directly measure the information that’s valuable to us. Think about school. We use grades as a metric, but are they the best reflection of the knowledge someone has accumulated, or of how well they can cheat on a test? The jury is still out on that one, but I digress.

Types of Misleading Metrics

Here are some examples closer to our world of software testing.

Code Coverage

Code coverage is a useful metric for finding areas of our production code that aren’t executed by our tests. But it’s best to think of it as a measure of quantity, not quality. For instance, 100% coverage can be achieved with 0 assertions.
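As a quick illustration (the function and "test" below are hypothetical, not from any real suite), here's how every line can be executed while nothing is actually verified:

```javascript
// A hypothetical function under test.
function classify(n) {
  return n < 0 ? "negative" : "non-negative";
}

// This "test" executes every line of classify, so a line-coverage
// tool reports 100% -- but with zero assertions, a bug in classify
// would never make it fail.
function testClassify() {
  classify(-1);
  classify(1);
}

testClassify();
```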

Another thing to keep in mind is that most code coverage tools measure lines, not code paths. So while a particular method may have 100% code coverage, it’s still possible that there are untested routes through the method’s logic.
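For example (hypothetical function and values), two tests can touch every line of a method while leaving a whole path through its logic unexercised:

```javascript
function applyDiscount(price, isMember, hasCoupon) {
  var total = price;
  if (isMember) total -= 10; // member discount
  if (hasCoupon) total -= 5; // coupon discount
  return total;
}

// These two cases execute every line, so a line-based tool
// reports 100% coverage for the method...
var memberTotal = applyDiscount(100, true, false); // 90
var couponTotal = applyDiscount(100, false, true); // 95

// ...but the path where BOTH branches fire is never run, so a bug
// in how the discounts combine would slip through undetected.
```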

Execution time

It’s important to keep an eye on how long it takes to run your test suite. A slow test suite is less likely to be run by your team members. But a fast test suite does not guarantee a sensible design. For a good discussion of this point, check out Don’t Make Your Code More Testable.

Another thing to consider is that it’s possible for a very slow test suite to provide a lot of value. A safety critical project may have an extensive test suite that takes days to run, but every one of those tests is vital to ensure the safety of the product.

Cyclomatic complexity

Cyclomatic complexity is a great way to keep tabs on how complex our logic is at a micro level. But it does a poor job of helping us determine how complex things are at the macro level.

Ironically, driving down each method’s cyclomatic complexity score often means splitting logic across more methods, which raises fan-out. Each piece looks simple in isolation, but it can actually be harder to understand everything that’s going on.
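A sketch of that trade-off, using hypothetical shipping rules: splitting one branchy method improves each method's cyclomatic complexity score, but the macro picture gets harder to follow.

```javascript
// All the branching in one place: cyclomatic complexity of 4.
function shippingCost(order) {
  if (order.weight > 50) return 20;
  if (order.international) return 15;
  if (order.express) return 10;
  return 5;
}

// The same logic split so every method scores 1 or 2. The micro
// metric improves, but a reader must now chase three helpers
// (higher fan-out) to understand the same behavior.
function heavyCost(o) { return o.weight > 50 ? 20 : null; }
function internationalCost(o) { return o.international ? 15 : null; }
function expressCost(o) { return o.express ? 10 : null; }

function shippingCostSplit(o) {
  return heavyCost(o) || internationalCost(o) || expressCost(o) || 5;
}
```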


What are some metrics that you’ve found to be misleading? What are some other good ways to measure our test suites?

Testing Tips #1: Avoid Copy and Paste

How often do we use copy and paste when writing our test suites? It’s just so easy. There’s a subtle change that needs to be made to one test to get it to cover something else, so we copy, paste and make a small change. We continue through the file in this manner, and by the end we have a passing test suite and clean production code. We’re ready to check in our work and move on to the next task.

There is danger in leaving the code in this state, however. Writing code via the clipboard leads to the need for shotgun surgery later, when the test suite inevitably needs to change.

So how do we avoid this issue? Don’t Repeat Yourself (DRY) is something we’re pretty good at following when we write production code, so what’s the issue with the test code?

Whether our preferred style is test first, test last, or some mix of both, our focus while testing is on the production code. In the case of testing first, we’re writing tests to justify the code we’re about to write. In the case of testing last, we’re verifying the assumptions of what we just wrote. In both cases, we’re interested in having a safety net to help us refactor later.

Refactoring the production code is something we’re pretty good at now, because we’re used to having that safety net underneath us. We pay attention to duplication, employ refactorings like Extract Method, and avoid needing comments to explain blocks of code.

When we’re writing test code, there is no such safety net. So we fall back to the same bad habits that we followed when there was no safety net for our production code. We copy code that we know works, we paste it somewhere else, and we make a small modification to make it fit our needs elsewhere. We don’t introduce a new method, because it’s risky and it’s hard to tell the effect that’ll have.

So what’s the solution? There are two: we can either get comfortable working without a net (to beat the metaphor to death), or we can build a temporary one to help us out.

What does working without a net look like? In the case of test code, it means that there’s no guarantee that our test still does what it used to. The solution to that is to stick to safe, low-risk refactorings. The best two for this are Extract Method and Introduce Explaining Variable. These refactorings are simple enough that, when applied one at a time, there’s little danger of introducing a bug.
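Here's what those two refactorings can look like applied to test code, using a hypothetical displayLabel function and user fixture (the names are mine, for illustration only):

```javascript
// Production code under test (hypothetical).
function displayLabel(user) {
  return user.name + " (" + user.role + ")";
}

// Extract Method: the user literal that used to be copy-pasted
// into every test now lives in one helper.
function buildUser(overrides) {
  var user = { name: "Ada", role: "admin" };
  for (var key in overrides) { user[key] = overrides[key]; }
  return user;
}

// Introduce Explaining Variable: name the value being checked
// instead of burying the expression inside the assertion.
var editorLabel = displayLabel(buildUser({ role: "editor" }));
```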

So what does a temporary net look like? We intentionally introduce a failure into the production code, and make sure that the test fails in a way we expect. I first heard this described as “Refactoring Against the Red Bar”. It’s a great technique for verifying that our tests are “working”. A test’s job is to fail when the production code is broken. So when we break it in a known way, the test should fail in a way we expect. If we break the code first, then the entire time that we’re refactoring our test, it should fail. If it ever passes, then we’ve broken the test.
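A tiny sketch of refactoring against the red bar (the add function and its test are hypothetical): break the production code in a known way, and confirm the test actually goes red before refactoring it.

```javascript
// Production code and a deliberately simple test for it.
function add(a, b) { return a + b; }

function testAdd(addFn) {
  return addFn(2, 2) === 4; // true means "test passes"
}

// Break production code in a known way...
function brokenAdd(a, b) { return a - b; }

// ...and check that the test really fails against it. While we
// refactor the test itself, it should keep failing against the
// broken code; if it ever passes, our refactoring broke the test.
var passesWhenCorrect = testAdd(add);      // true
var failsWhenBroken = !testAdd(brokenAdd); // true
```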

Let’s keep these two approaches in mind the next time we reach for copy and paste as we author our test suites.

Coping with testing tools

Some testing tools require complicated setup when they are included in a project. This creates an unfortunate barrier to testing, and the barrier exists both for brand new projects whose authors intend to write tests and for existing projects that don’t have any tests yet.

This is an issue that was recently brought to my attention by a colleague. He was working on an extension to an e-commerce engine, and he wanted to write it test first, but he was frustrated by all of the setup he had to do to get things going. He mentioned that this was not the first time he’d tried to write such an extension test first; his other efforts had run into roadblocks as well, and the roadblocks were different each time. He also mentioned that there were at least six months between attempts. Not because he wasn’t interested, but because starting a new extension from scratch is not something he does all that often. And each time he does, he’s been frustrated to find that so many things have changed that even if he did remember how he set things up the previous time, he’d still be lost.

After listening to this story, I was struck with empathy. This sounds incredibly frustrating. I think it’s awesome that he’s been persistent enough to keep trying. But his story got me thinking about ways to combat this barrier.

I’ve come up with a few ideas, but I’m curious if anyone else has more.

Find a testing mentor

The way my colleague got past this barrier was to ask me for help. After a short screen sharing session, he was up and testing away.

I think this illustrates the importance of having someone that you can reach out to when you get stuck.

Hopefully, you know someone that’s more familiar with testing than you are. Take the time to let this person know you’d like to establish a mentoring relationship, because if you just start sending along frequent, random questions, you might not get the help you need.

If you don’t know anyone, that’s not a problem. Establishing a mentor relationship with a stranger is your best bet. This requires you to know of someone that’s more proficient at testing than you are. Reach out to this person and start the conversation. If that person declines, ask for help finding someone else. I’m confident that you’ll be able to find someone that’s willing to help you along on your journey to becoming a better tester.

If for some reason that avenue fails completely, or it just does not sound appealing to you at all, then I suggest you take a peek at some question and answer sites with communities that are familiar with testing. The following list is not exhaustive, but it should be a good start.

Encourage project maintainers to simplify setup work

Getting your project set up for testing should be easy. If you run into stumbling blocks while trying to get things going, then I encourage you to reach out to the maintainers of the tools you are using. It might be something they are unaware of, and you mentioning the issue might be the exact catalyst that’s needed for them to improve things.

If the project maintainers don’t agree to address your problem, then I suggest implementing your own solution. This solution could come in many forms, and the best form will likely be determined by the specifics of your issue.

Plan for mistakes and be persistent

You are bound to encounter snags while you are learning a new tool or framework, and they are going to slow you down and frustrate you.

The best way to cope with the slowness is to plan for it. It’s unwise to assume that you’ll be able to complete a task using a new tool just as fast as you did with the old tool. Give yourself some wiggle room with your expectations. You are likely your own worst enemy in this regard. The time that you invest in learning a testing tool should pay off significant dividends later on. Rushing through it will lessen the return on your investment, however.

Frustration is a natural byproduct of encountering a roadblock. It takes a strong sense of will to continue on with the task after you get frustrated. Some problems are only fixed after you try to tackle them repeatedly, so don’t give up.

Share what you’ve learned

If you are having a problem and your search engine fails to find you a solution, then when you do figure out what was wrong, please post that knowledge somewhere. There are many places that you could do this. A blog, a wiki, a question and answer site. The goal is to post the knowledge somewhere where it can be found by a search.

There are two reasons for this. One of them is selfish and the other is altruistic.

The selfish reason is to make sure that you can find a solution if you encounter the problem again. I’ve been doing this for years, and there are many times where I just end up searching my own blog to look up how to get around an issue.

The altruistic reason is to help others that may encounter the same problem. If you like helping people, then this can be a great motivator for sharing what you’ve learned.

Any more?

There are likely more ways to cope with frustrating tools, but these are the best ones that I can think of. Please leave a comment if you have any suggestions.

Also, this content is going to be used in my upcoming book Hooked on Testing. Please sign up to be notified when the first chunk is available for purchase.