Code Aesthetics

Programming, agility, technical stuff ...

Steps to achieve higher code quality

This post focuses on how to achieve high code quality in software projects. I want to discuss a few methods that improve the quality of software and help create an infrastructure that supports the development of high-quality code.

Overview:

  • Coding guidelines
  • Version Control
  • Continuous Integration
  • Test-Driven Development
  • Behavior-Driven Development
  • Unit-Testing and Component-Testing
  • UI-Testing and User Acceptance Tests
  • Regression Testing

Coding guidelines

Different programmers have different ideas of what clean code is. I personally like to refer to Clean Code by Robert C. Martin. The most basic method to achieve higher code quality is therefore to agree on and define coding standards which all team members have to respect. Very often coding guidelines are already defined by the company. These guidelines should be collected in a document available to all team members. The document should describe how to name namespaces, classes, constants, variables, members and interfaces, how long methods and classes may become, and so on. It makes sense to base these conventions on the common conventions of the programming platform. Tools such as StyleCop can be used to enforce the rules as part of continuous integration, and the IDE settings should be shared by all team members as well.
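
To make this concrete, here is an illustrative sketch of the kind of naming conventions a .NET team might agree on; the concrete names and rules are examples, not an official standard:

    // Illustrative naming conventions for a .NET code base (examples, not a standard).
    namespace CompanyName.ProductName.Billing          // namespaces: PascalCase, Company.Product.Area
    {
        public interface IInvoiceRepository { }        // interfaces: "I" prefix + PascalCase

        public class InvoiceService                    // classes: PascalCase nouns
        {
            private const int MaxRetryCount = 3;       // constants: PascalCase
            private readonly IInvoiceRepository _repository;   // private members: _camelCase

            public InvoiceService(IInvoiceRepository repository)
            {
                _repository = repository;
            }

            // Methods: PascalCase verbs, short enough to read without scrolling.
            public bool TrySendInvoice(int invoiceId)
            {
                int retriesLeft = MaxRetryCount;       // local variables: camelCase
                return _repository != null && retriesLeft > 0 && invoiceId > 0;
            }
        }
    }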

Version Control

In software projects in which more than one developer is involved, it is important to support the work of the team with a version control system. A version control system allows several developers to work on the code base simultaneously; it automatically merges differences and reports conflicts. There are different kinds of version control systems: centralized systems and distributed systems.

Continuous Integration

In conjunction with a version control system it makes sense to introduce a continuous integration server. A continuous integration daemon running on a separate machine automatically checks out the current code base from the version control system. It monitors the repository for changes, tries to build the source code and reports any errors. This ensures that there is always a running/compilable version in the repository which the developers can work with.

Additionally, such a daemon can run automated tests, produce statistics (e.g. apply code coverage metrics) and generate developer documentation from the sources. It is also possible to let the server produce release setup packages for manual testing on a regular basis (e.g. a daily release).

Test-Driven Development (TDD)

In test-driven development, automated unit tests are used to define the expected behavior before the actual implementation code is created. The unit test is written against an empty class skeleton and the class is then implemented step by step. Apart from the fact that the programmer does not forget to create unit tests for his/her code, this approach also encourages thinking about the design and architecture of the classes in order to produce testable code. Refactoring later on is not a problem, because the defined unit tests prove that the code still works in the same way.
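
As a rough sketch of this cycle (the PriceCalculator class and its discount rule are made up for illustration), the NUnit tests below would be written first against an empty class skeleton, and the implementation underneath is the smallest code that makes them pass:

    using NUnit.Framework;

    // Step 1: the tests are written first and fail while PriceCalculator is still empty.
    [TestFixture]
    public class PriceCalculatorTests
    {
        [Test]
        public void NetPriceOver100_Gets10PercentDiscount()
        {
            var calculator = new PriceCalculator();

            decimal total = calculator.CalculateTotal(200m);

            Assert.AreEqual(180m, total);
        }

        [Test]
        public void NetPriceBelow100_IsNotDiscounted()
        {
            var calculator = new PriceCalculator();

            Assert.AreEqual(80m, calculator.CalculateTotal(80m));
        }
    }

    // Step 2: implement just enough to make the tests pass, then refactor safely.
    public class PriceCalculator
    {
        public decimal CalculateTotal(decimal netPrice)
        {
            return netPrice > 100m ? netPrice * 0.9m : netPrice;
        }
    }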

Behavior-Driven Development (BDD)

What TDD is for the developer, BDD can be for the business users. This method comes from agile development and focuses on collaboration between developers, quality assurance and business users. For me it is mainly a shift in the scope of testing: developers create unit tests to test their classes and small subsystems, whereas in BDD the focus lies on the system as a whole (e.g. user acceptance tests). The available frameworks try to involve quality assurance and business users by using domain-specific languages (e.g. a user story notation like "Given, When, Then"), which makes it easier for business users to create test cases.
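
As an illustration, assuming a Gherkin-based framework such as SpecFlow is used on .NET, a scenario written in the business-facing notation could be bound to step definitions like the following; the Account class and the scenario itself are made-up examples:

    // Scenario (written in the business-facing Gherkin notation):
    //   Given an account with a balance of 100
    //   When the user withdraws 30
    //   Then the remaining balance should be 70
    using NUnit.Framework;
    using TechTalk.SpecFlow;

    public class Account
    {
        public decimal Balance { get; private set; }
        public Account(decimal balance) { Balance = balance; }
        public void Withdraw(decimal amount) { Balance -= amount; }
    }

    [Binding]
    public class WithdrawalSteps
    {
        private Account _account;

        [Given(@"an account with a balance of (.*)")]
        public void GivenAnAccountWithABalanceOf(decimal balance)
        {
            _account = new Account(balance);
        }

        [When(@"the user withdraws (.*)")]
        public void WhenTheUserWithdraws(decimal amount)
        {
            _account.Withdraw(amount);
        }

        [Then(@"the remaining balance should be (.*)")]
        public void ThenTheRemainingBalanceShouldBe(decimal expected)
        {
            Assert.AreEqual(expected, _account.Balance);
        }
    }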

Unit-Testing and Component-Testing

When should unit tests be written? Generally the following tests are classified as unit tests:
  • Specification of the expected behavior of a class or contract, sometimes even before the concrete implementation is available (test-driven development). The test case should consist of small tests which, for example, test single methods of the class or contract. The focus lies on the state of the object before and after the execution of a method, or on the return value of a method; collaborators can be replaced by so-called mock objects (see the sketch after this list). If there are too many classes which define the core of the application, or no meaningful tests can be defined (because most of the classes are only value objects), there could be a flaw in the design of the application, or the responsibilities of different objects are not clearly separated (separation of concerns).
  • Reproducing a bug and ensuring that it does not occur again. If a defect is found in the software, it makes sense to define a test case which reproduces it and to add this test to the suite of all test cases. This shortens the turnaround time while the bug is being fixed and ensures that the bug is not reintroduced into the code later on (especially when continuous integration is used).
  • The aforementioned approach can also be used for components or subsystems. Components usually contain more than one class, and test cases can be defined which test the component as a whole or the interaction of a set of classes. Such tests are usually called integration or subsystem (component) tests and are not executed as often as the class-based tests.
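As a small sketch of the first point, the test below checks the state of an (invented) OrderService after a method call and uses a hand-written mock object for its collaborator; mocking frameworks such as Moq or NSubstitute can generate such test doubles automatically:

    using System.Collections.Generic;
    using NUnit.Framework;

    public interface IEmailSender
    {
        void Send(string recipient, string message);
    }

    // Hand-written mock object: records the calls so the test can verify them.
    public class MockEmailSender : IEmailSender
    {
        private readonly List<string> _recipients = new List<string>();
        public IList<string> Recipients { get { return _recipients; } }
        public void Send(string recipient, string message) { _recipients.Add(recipient); }
    }

    public class OrderService
    {
        private readonly IEmailSender _sender;
        public bool Confirmed { get; private set; }

        public OrderService(IEmailSender sender) { _sender = sender; }

        public void Confirm(string customerEmail)
        {
            Confirmed = true;
            _sender.Send(customerEmail, "Your order has been confirmed.");
        }
    }

    [TestFixture]
    public class OrderServiceTests
    {
        [Test]
        public void Confirm_SetsStateAndNotifiesCustomer()
        {
            var sender = new MockEmailSender();      // replaces the real mail gateway
            var service = new OrderService(sender);

            service.Confirm("customer@example.com");

            // State of the object after the execution of the method ...
            Assert.IsTrue(service.Confirmed);
            // ... and the interaction with the collaborator.
            Assert.AreEqual(new[] { "customer@example.com" }, sender.Recipients);
        }
    }
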
What is the subject of testing? Recommended approach:
  • Do not test 3rd-party components. In general it should be assumed that such components work correctly. This does not mean that those components should not live in a separate application domain (with restricted permissions) in case the vendor is not trustworthy. It also makes sense to create mock objects in order to make tests independent of 3rd-party components; this speeds up the execution of the tests and makes them more reliable. For example, it makes sense to decouple the tests from a database.
  • Do not test classes which are already covered by another test. If you do, it is more difficult to find the reason for a failure.
  • Write at least one test case per class of the domain model. Use a coverage tool (e.g. NCover) to calculate the test coverage of your application with different metrics.
  • Minimize the amount of code that is executed by a single test case. Keep it simple! Many small tests are better than a few big ones, because it is easier to find the cause of a failure.
  • Separate tests for classes from tests for interfaces (see the sketch after this list):
    • A unit test for an interface ensures that all implementations of the interface work. For example, an interface "IDataProvider" should return data. Where the data comes from is not important for this test; what matters is that the aspect of returning data works.
    • A unit test for a class (as the concrete implementation of an interface) checks that the implementation works, with the focus on how the interface is implemented. For example, an "XmlDataProvider" should be tested in a way that it reads XML data correctly from a file.
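The following sketch shows this split for the IDataProvider/XmlDataProvider example from above; the interface, the implementation and the XML format are illustrative assumptions:

    using System.Collections.Generic;
    using System.IO;
    using System.Linq;
    using System.Xml.Linq;
    using NUnit.Framework;

    public interface IDataProvider
    {
        IList<string> GetData();
    }

    public class XmlDataProvider : IDataProvider
    {
        private readonly string _path;
        public XmlDataProvider(string path) { _path = path; }

        public IList<string> GetData()
        {
            // Reads the text of all <item> elements from the XML file.
            return XDocument.Load(_path).Root.Elements("item").Select(e => e.Value).ToList();
        }
    }

    // Interface test: every IDataProvider implementation must return data,
    // no matter where the data comes from.
    public abstract class DataProviderContractTests
    {
        protected abstract IDataProvider CreateProvider();

        [Test]
        public void GetData_ReturnsAtLeastOneItem()
        {
            Assert.IsNotEmpty(CreateProvider().GetData());
        }
    }

    // Class test: focuses on how XmlDataProvider reads its file,
    // and inherits the contract test above.
    [TestFixture]
    public class XmlDataProviderTests : DataProviderContractTests
    {
        private string _file;

        [SetUp]
        public void WriteSampleFile()
        {
            _file = Path.GetTempFileName();
            File.WriteAllText(_file, "<items><item>alpha</item><item>beta</item></items>");
        }

        [TearDown]
        public void DeleteSampleFile()
        {
            File.Delete(_file);
        }

        protected override IDataProvider CreateProvider()
        {
            return new XmlDataProvider(_file);
        }

        [Test]
        public void GetData_ReadsItemsFromTheXmlFile()
        {
            Assert.AreEqual(new[] { "alpha", "beta" }, CreateProvider().GetData());
        }
    }
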
When should unit tests be executed? Recommended approach:
  • Always execute all unit tests (apart from the component/subsystem tests)
    • before any modification of the source code
    • after a modification of the source code
    • before the modifications are committed to the version control system
  • Continuous integration. Apart from compiling the source code, other processes can be executed with build scripts. The tests should be executed automatically on a regular basis (for example daily). Additionally, statistics, code documentation and installation packages can be created. Advantages:
    • Integration problems are recognized early and can be solved early. No last minute bug fixing.
    • Early warning in case of incompatible or defective source code.
    • Direct execution of tests if a modification is detected
    • There is always a runnable version available for testing, presentation or release.
    • The developers get faster feedback on whether their source code works with the rest of the application.
How are tests run? Recommended approach:
  • Tests should always be defined in such a way that they can be executed with minimal configuration effort and a minimal set of environment dependencies. For example, it should be avoided that tests need a special network connection or connection string to run correctly.
  • Tests should run as fast as possible. If tests run fast, they will be executed more often by the developers. Therefore do not use "Sleep" or similar constructs in the testing code (see the sketch below).
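One common way to avoid "Sleep" is to inject the time source, so the test can advance a fake clock instantly; the IClock, FakeClock and SessionToken types below are invented for illustration:

    using System;
    using NUnit.Framework;

    public interface IClock
    {
        DateTime UtcNow { get; }
    }

    // Fake clock that the test can move forward instantly.
    public class FakeClock : IClock
    {
        private DateTime _utcNow = new DateTime(2020, 1, 1);
        public DateTime UtcNow { get { return _utcNow; } }
        public void Advance(TimeSpan delta) { _utcNow = _utcNow + delta; }
    }

    public class SessionToken
    {
        private readonly IClock _clock;
        private readonly DateTime _expiresAt;

        public SessionToken(IClock clock, TimeSpan lifetime)
        {
            _clock = clock;
            _expiresAt = clock.UtcNow + lifetime;
        }

        public bool IsExpired { get { return _clock.UtcNow >= _expiresAt; } }
    }

    [TestFixture]
    public class SessionTokenTests
    {
        [Test]
        public void Token_ExpiresAfterItsLifetime()
        {
            var clock = new FakeClock();
            var token = new SessionToken(clock, TimeSpan.FromMinutes(20));

            clock.Advance(TimeSpan.FromMinutes(21));   // no Thread.Sleep needed

            Assert.IsTrue(token.IsExpired);
        }
    }
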
Who is testing? Recommended approach:
  • Every developer. In general, every developer who writes a component should also provide unit tests for that component.
  • As an alternative, cross-testing can be used. This means developers pair up: one developer develops a component and the other defines and writes the tests. This can help overcome the issue that tests are too specific to the implementation.

UI Testing and User Acceptance Tests

When should the user interface be tested? Always. From a user's point of view, the user interface is the application. Anything that is not presented on the user interface (or in any other visible form, such as an e-mail or a log entry in a file) does not exist from the user's perspective. Therefore the user interface tests are the main criterion for user acceptance (user acceptance tests). In order to do proper user acceptance tests it is important that a group of business experts or end users of the application is available for testing. If an iterative development process is used, it is important that this group is available for testing at the end of each iteration.
What are UI tests? UI tests are basically component tests in which the component is the whole application. This does not mean that UI tests must be automated. In most cases it can be enough that QA staff creates a test plan which is executed by domain experts on a regular basis. It is also possible to use software to automate those UI tests in order to shorten the time needed for a test run.
When and how should UI tests be created?
  • In general two types of tests can be differentiated:
    • Manual tests: Executed by QA staff. Predefined actions are executed and the results are compared with the desired results. These actions should be defined in a (user) test plan. The tests should be run in a dedicated environment so that they can be repeated without side effects.
      • Advantages: Simple to implement. Test plans can be re-used. Problems can be discovered easily.
      • Disadvantages: Requires additional resources (staff and time); defects can be overlooked.
    • Automated tests: Executed by test runner software or a "software robot". A test script is created which contains the actions to be executed (see the sketch after this list):
      • Advantages: Fast; Can be repeated more often than the manual tests. No additional resources are needed.
      • Disadvantages: Difficult to set up. Actions and conditions must be described in great detail. If the layout of the UI changes, the scripts must be recreated.
  • UI tests should be based on the user stories / use cases which are to be performed with the software. This proves that the software can do what it is intended for.
  • Additionally there should be tests which are not based on the requirements specification (e.g. exploratory testing and smoke testing). Such a test could be to enter random data or to click randomly on different input forms. These kinds of tests check the robustness and stability of the application.
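As a sketch of such an automated UI test, assuming Selenium WebDriver with a Chrome driver is used (the URL, element IDs and expected title are made up, and a chromedriver must be available on the test machine):

    using NUnit.Framework;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;

    [TestFixture]
    public class LoginUiTests
    {
        private IWebDriver _driver;

        [SetUp]
        public void StartBrowser()
        {
            _driver = new ChromeDriver();
        }

        [TearDown]
        public void StopBrowser()
        {
            _driver.Quit();
        }

        [Test]
        public void RegisteredUser_CanLogIn()
        {
            // Based on the user story "As a registered user I want to log in".
            _driver.Navigate().GoToUrl("http://localhost:8080/login");

            _driver.FindElement(By.Id("username")).SendKeys("demo.user");
            _driver.FindElement(By.Id("password")).SendKeys("secret");
            _driver.FindElement(By.Id("login-button")).Click();

            StringAssert.Contains("Dashboard", _driver.Title);
        }
    }
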
Who is testing?
  • Basic functional tests should be done by the UI developer: make sure that the application does not crash within the first 15 seconds and that the UI matches the requirements (e.g. user stories, use cases).
  • If UI testing is automated, the scripts should be created by a developer or tester with the help of a domain expert, again based on the user stories and use cases.
  • User acceptance tests should be performed by a group of the end users. These tests should be done when the development of a feature or module is finished, i.e. after a milestone has been reached or after a specific number of iterations (e.g. Scrum sprints).

Regression Testing

What does regression testing mean? The term regression testing describes the execution of all tests that have been executed on the previous version of an application in order to ensure that no defects have been reintroduced. For this reason there cannot be regression testing on the initial version of the software.
When does regression testing take place?
  • Before the software is released or after a new module has been finished
  • After a bug fix or a code modification on the existing code base
How should it be done?
  • Manually: every developer who changes a component must run all tests of that component.
  • Automatically, by the continuous integration server: there should be a special build script for a release which runs all tests again, including the integration tests.
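One way to support this, assuming NUnit is used, is to mark the slower integration tests with a category so the release build script can run them in addition to the fast unit tests that run on every check-in; the Invoice class and test names below are illustrative:

    using NUnit.Framework;

    [TestFixture]
    public class InvoiceTests
    {
        [Test]
        public void Vat_IsAddedToTheNetAmount()
        {
            // Fast unit test: part of every check-in build.
            var invoice = new Invoice(100m);
            Assert.AreEqual(119m, invoice.GrossAmount(0.19m));
        }

        [Test]
        [Category("Integration")]
        public void Invoice_CanBePersistedAndReloaded()
        {
            // Slower integration test: selected via its category in the release build.
            Assert.Pass("Placeholder for a test that talks to the real database.");
        }
    }

    public class Invoice
    {
        private readonly decimal _netAmount;
        public Invoice(decimal netAmount) { _netAmount = netAmount; }
        public decimal GrossAmount(decimal vatRate) { return _netAmount * (1m + vatRate); }
    }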

Johannes Täuber

I am a software architect and agility advocate. I am interested in all technologies out there and like to discuss options. My platform of choice is the .NET framework.