True unit tests and cross-domain calls

In our company we recently started marking automated tests as “Unit” and “Integration” so that unit tests can be run frequently (and on every check-in). Such a separation obviously requires a clear definition of what a unit test is. Michael Feathers, in his excellent book “Working Effectively with Legacy Code”, writes the following:

“Unit tests run fast. If they don’t run fast, they aren’t unit tests. A test is not a unit test if:

  • It talks to a database
  • It communicates across the network
  • It touches the file system
  • It depends on the configuration environment”

This list is convincing; however, since it defines qualities of non-unit tests, negating them won’t necessarily give us a definition of what a unit test is. Still, Feathers’ main message is clear: a unit test should only use volatile memory. No databases or network calls, not even reading from configuration files.

But what about the use of multiple application domains? What if the code under test (CUT) creates new AppDomain instances? Should an automated test that executes such code be classified as a unit test or an integration test?
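As an illustration, here is a minimal sketch of what such code might look like on the classic .NET Framework. The class and method names are hypothetical, invented for this example:

```csharp
using System;

// Hypothetical class whose logic we actually want to verify. It derives from
// MarshalByRefObject so that calls can cross the AppDomain boundary.
public class PriceCalculator : MarshalByRefObject
{
    public decimal ApplyDiscount(decimal price, decimal percent)
    {
        return price - price * percent / 100m;
    }
}

// Code under test: it spins up a second AppDomain and runs the calculation there.
public static class IsolatedRunner
{
    public static decimal RunInNewDomain(decimal price, decimal percent)
    {
        AppDomain domain = AppDomain.CreateDomain("IsolatedCalculation");
        try
        {
            var calculator = (PriceCalculator)domain.CreateInstanceAndUnwrap(
                typeof(PriceCalculator).Assembly.FullName,
                typeof(PriceCalculator).FullName);
            return calculator.ApplyDiscount(price, percent);
        }
        finally
        {
            AppDomain.Unload(domain);
        }
    }
}
```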

I tend to think that such tests should be considered integration tests. Let’s recall the primary purpose of writing a test: exposing a bug. Now imagine we wrote an automated test for code that uses cross-domain calls, and it failed. The failure either has something to do with the use of multiple domains, or it was caused by an error that has nothing to do with cross-domain communication. If the failure is domain-related, it is an integration issue: the use of multiple domains involves assembly probing and loading, so any failure in this area is essentially an integration failure. But if a test that involves cross-domain calls exposes an error that has nothing to do with domain setup, such a test can and should be rewritten to expose the same error within a single domain. Moreover, classifying such a test as a unit test wastes developers’ precious time: when running tests marked as “Unit”, they will also execute tests that burn a massive amount of CPU cycles on completely unnecessary operations.
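Continuing the sketch above: a test that goes through RunInNewDomain exercises assembly loading and remoting plumbing as well as the arithmetic, so a failure could come from either. If the arithmetic is what we actually want to verify, the same check can be expressed directly against the class, inside a single domain (NUnit-style syntax assumed, but any test framework works the same way):

```csharp
using NUnit.Framework;

[TestFixture]
public class PriceCalculatorTests
{
    // A true unit test: it exercises the same logic as the cross-domain path,
    // but stays entirely inside the current AppDomain and finishes in microseconds.
    [Test]
    public void ApplyDiscount_TakesPercentageOffThePrice()
    {
        var calculator = new PriceCalculator();
        Assert.AreEqual(90m, calculator.ApplyDiscount(100m, 10m));
    }
}
```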

While the performance impact of cross-domain calls is usually negligible for business applications, it is huge for a CUT that is not supposed to touch non-volatile memory at all. And if developers want to build a habit of running unit tests often, the speed of unit test execution becomes crucial. If a team has 5,000 tests that need 10 minutes to run, the chances that they will be run regularly are pretty small. Fit them in one minute – and they will be run often. And running the CUT in a different domain can make execution more than 10 times slower.
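To see the overhead for yourself, a rough timing comparison along these lines can be run against the sketch above (the class is hypothetical, and absolute numbers will vary by machine; the point is only that the cross-domain path is dramatically slower than the direct call):

```csharp
using System;
using System.Diagnostics;

public static class CrossDomainTiming
{
    public static void Main()
    {
        const int iterations = 100;
        var calculator = new PriceCalculator();

        // Direct, in-domain calls.
        var direct = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            calculator.ApplyDiscount(100m, 10m);
        direct.Stop();

        // Each call creates, uses and unloads a separate AppDomain.
        var crossDomain = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            IsolatedRunner.RunInNewDomain(100m, 10m);
        crossDomain.Stop();

        Console.WriteLine("direct:       {0} ms", direct.ElapsedMilliseconds);
        Console.WriteLine("cross-domain: {0} ms", crossDomain.ElapsedMilliseconds);
    }
}
```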

So what should be done when an integration test exposes a failure? I believe that once the error is identified and located, and it turns out to be an error in application logic, the developer should try to write a true unit test that exposes the same problem. The integration test can still be kept unless everything it validates is now fully covered by the new test, and the new test will enrich the set of fast (and therefore frequently run) unit tests.

This reasoning is not AppDomain-specific. In general, if it is possible to write a faster unit test, the test should be rewritten so it runs faster. To strengthen Michael Feathers’ point: a good unit test validates a given unit of functionality in the shortest possible time. A test is not a unit test if the same validation can be performed significantly faster.
