Friday, January 18, 2013

Good practices for unit testing

When writing unit tests (no matter which language; the same applies, for example, to JavaScript unit tests) make sure your tests fulfill some quality criteria.

Unit tests should...

... be deterministic: Assert.Equals(Random.Next(), myResultingNumber) is probably a bad idea ;-)

... be repeatable: you should be able to run them 1, 100 or 1000 times in a row; the result should always be the same.

... be order independent: running TestB before TestA shouldn't have any influence.

... be isolated: strive to avoid external systems like databases or services; use a mocking framework instead. Reason: not doing so makes it hard to fulfill some of the other principles listed here, e.g. "fast", "easy to set up", "deterministic" (think of a temporary network problem when connecting to a test database).

... run fast: slow tests decrease your productivity, and they will be run less often because no one likes waiting.

... be included in the continuous integration process: don't rely on developers manually triggering the tests; they should be run automatically (as often as possible).

... be easy to set up: the danger with hard-to-set-up tests is that they simply don't get written.

... be either atomic or integration tests: atomic tests (i.e. tests that cover a very specific, small piece of functionality) are a must; integration tests (covering the collaboration of multiple modules) are not always necessary but sometimes useful. The disadvantage of integration tests is that when they fail, the problem is harder to locate, whereas a failing atomic unit test often does not even have to be debugged to find the problem. Do not mix both types; make a clear separation (e.g. by introducing naming conventions).

... have one logical assert per test: this does not mean you should never have multiple asserts in a test case, but if you do, make sure the asserts are tightly logically connected to each other.

... concentrate on the public "API" of your SUT (which normally covers the private methods indirectly; note that the need to test private methods is often an indicator of an SRP violation within the class).

... read like documentation for your system: benefit from your test suite as additional documentation for your software. Actually, a system without unit tests cannot be considered "valid": it might be free from obvious bugs (such as users getting error messages), but that does not always mean it works as it should (and other documentation, if available at all, is rarely as precise as unit tests in describing desired behavior).

... have the same code quality as production code: there is NO reason to neglect unit test code. It will grow like production code grows, and you will run into the same problems as with your production code if you do not apply the same patterns and practices.

... also cover the "sad" path, not only the "happy" path: also test unexpected values and behavior, including tests for exceptions.

... be written each time a bug occurs in development, testing or on your live system. This way you make sure that this bug is banished forever.

Sunday, January 6, 2013

TFS build process templates vs MSBuild

The introduction of build process templates (implemented with Windows Workflow Foundation (WF) and XAML) in Team Foundation Server 2010 did not mean the end of MSBuild scripts. After all, every .csproj or .vbproj project file Visual Studio generates when creating a new project is an MSBuild script.
WF build process templates provide a higher-level orchestration layer on top of the core build engine MSBuild and offer some more sophisticated possibilities that come with WF, e.g. distributing a process across multiple machines and tying the process into other workflow-based processes.
But still, a lot of the steps you want in your project-specific build (e.g. StyleCop analysis, NDepend static code analysis, script and style bundling and minification) can be realized both ways. So the question arises which way to go: WF or MSBuild.

I found a guideline from Jim Lamb (a TFS program manager at Microsoft) on how to handle this:
MSBuild is the tool of choice in the following scenarios:

1) the task requires knowledge of specific build inputs or outputs
2) the task is something that needs to happen when you build in Visual Studio (so, for example, you have to decide whether you want a StyleCop check for every local build or only after check-in)

Jim's recommendation is to use WF in all other cases.

In my opinion the WF approach also has its downsides:
1. While it is quite simple to run an MSBuild script on a developer's machine (e.g. for debugging a build problem), this isn't so simple with the WF solution (you would have to install the TFS build service locally).
2. The WF approach cannot be reused when your organization switches from TFS to another ALM platform (e.g. Subversion and TeamCity).
3. You not only have to know how MSBuild works but also need at least a basic understanding of WF.

When leveraging MSBuild, keep in mind that from a maintenance and reuse perspective it is better to put additional tasks and targets into separate MSBuild files (which can be referenced via Import elements) rather than writing them directly into the project files (they already contain enough stuff).
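
Such a reference could look like this inside a .csproj file (the file name and path are placeholders for your own shared targets file):

```xml
<!-- MyProject.csproj: keep project-specific build steps in a separate, reusable file -->
<Import Project="..\build\CustomBuildSteps.targets" />
```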

What happens when you click "Build Solution" in Visual Studio?

You probably know that msbuild.exe is somehow involved when you click "Build Solution" from the "Build" menu within Visual Studio.
But msbuild.exe is not called directly; instead, Visual Studio does the same as if you called "devenv.exe /build" from the command prompt. The executable has to be passed the name of the solution together with the desired solution configuration.
devenv.exe is more or less a wrapper that calls msbuild.exe with a set of properties that are Visual Studio specific.
Note that devenv.exe only comes with an installed Visual Studio, whereas msbuild.exe is more readily available as part of the .NET Framework installation.
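
Such a command-prompt build could look like this (solution name and configuration are placeholders):

```
devenv.exe MySolution.sln /build "Release|Any CPU"
```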