The practices:
- Introducing a continuous delivery pipeline – All systems have gated builds (we use TFS here), so every check-in is built, the tests are executed and MSIs are generated. If the build fails for any reason, e.g. a compiler error or a failing test, then the check-in is rejected.
- Enforcing Unit Testing Discipline – All developers now routinely write unit tests for their code. This was an uphill battle, but we are very close to winning it. We monitor build-over-time reports in TFS, which give a color-coded indication of test coverage. We encourage developers to aim for around 70-75% coverage on new code and on code they are maintaining (there is a short example of the kind of test we expect after this list).
- Use of standard Visual Studio Code Metrics – We encourage developers to keep an eye on certain metrics, such as Cyclomatic Complexity and the Maintainability Index. These give a good high-level indicator of any code smells brewing, and are aimed at helping developers keep their code readable by reducing complexity (a before-and-after sketch follows this list).
- Static Code Analysis – All new code, and a lot of legacy systems, have static code analysis rules enforced on local and TFS server builds. For new projects we have a custom rule set that is a subset of the ‘Microsoft All Rules’ set. This caused a lot of heated debate in the teams when we started enforcing it, but once people got used to the rules, working within them became second nature. For old legacy systems we start off applying the ‘Microsoft Minimum Recommended Rules’ set and then work our way up from there (a small example of a typical rule fix also appears after this list).
- Code Productivity Tools – We make all our developers use a code productivity tool. We settled on CodeRush as it has a lot of extra features for guiding less experienced developers, but tools like ReSharper and Telerik JustCode are just as good. What I like about these tools is the visual on-screen feedback they give you: you can enable a colored bar at the side of the code window that flags any code issues. These issues are driven from a rule set, so if you get the team into the mindset of clearing the colored blips while they work (the tool even does most of the work for you), then you are on the road to better code. Generally the refactoring helpers provided by these tools are better than those built into Visual Studio too.
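To give a concrete feel for the kind of test we expect with every check-in, here is a minimal MSTest sketch. The `InvoiceCalculator` class and its VAT rule are invented purely for illustration, and the production class is included inline only so the example compiles on its own.

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace Billing.Tests
{
    // Hypothetical production class, included here only so the example compiles.
    public class InvoiceCalculator
    {
        private readonly decimal _vatRate;

        public InvoiceCalculator(decimal vatRate)
        {
            _vatRate = vatRate;
        }

        public decimal Total(decimal netAmount)
        {
            return netAmount * (1 + _vatRate);
        }
    }

    [TestClass]
    public class InvoiceCalculatorTests
    {
        [TestMethod]
        public void Total_AppliesVatToNetAmount()
        {
            // Arrange
            var calculator = new InvoiceCalculator(vatRate: 0.20m);

            // Act
            decimal total = calculator.Total(netAmount: 100m);

            // Assert
            Assert.AreEqual(120m, total);
        }
    }
}
```

In a gated build, tests like this run on the server before the check-in is accepted, so a red test blocks the commit just as surely as a compiler error does.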
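To make the metrics point more concrete, here is a hypothetical before-and-after sketch. The nested version has a higher cyclomatic complexity because every nested branch adds another path through the method; the flattened version expresses the same pricing rules with a guard clause and a lookup, which the Visual Studio metrics report as lower complexity and a better maintainability index. The class and the pricing rules are invented for illustration.

```csharp
using System;
using System.Collections.Generic;

namespace Shipping
{
    public static class DeliveryPricing
    {
        // Before: every nested branch adds another path through the method,
        // so the cyclomatic complexity climbs quickly.
        public static decimal QuoteNested(string region, decimal weightKg)
        {
            if (weightKg > 0)
            {
                if (region == "UK")
                {
                    if (weightKg <= 2) { return 2.99m; }
                    return 4.99m;
                }
                if (region == "EU")
                {
                    if (weightKg <= 2) { return 6.99m; }
                    return 9.99m;
                }
                return 14.99m; // rest of world
            }
            throw new ArgumentOutOfRangeException("weightKg");
        }

        // After: a guard clause plus a lookup table keeps each path flat,
        // which shows up as lower complexity and better maintainability.
        private static readonly Dictionary<string, decimal[]> PriceBands =
            new Dictionary<string, decimal[]>
            {
                { "UK", new[] { 2.99m, 4.99m } },
                { "EU", new[] { 6.99m, 9.99m } }
            };

        public static decimal QuoteFlat(string region, decimal weightKg)
        {
            if (weightKg <= 0) { throw new ArgumentOutOfRangeException("weightKg"); }

            decimal[] band;
            if (!PriceBands.TryGetValue(region, out band)) { return 14.99m; }

            return weightKg <= 2 ? band[0] : band[1];
        }
    }
}
```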
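And here is a small sketch of the sort of change the static analysis rules push developers towards. The class is hypothetical, but the two rules shown are genuine members of the Microsoft rule sets: CA1062 (validate arguments of public methods) and CA2000 (dispose objects before losing scope).

```csharp
using System;
using System.IO;

namespace ImportTools
{
    public class ReportImporter
    {
        // Before: 'path' is dereferenced without a null check (CA1062)
        // and the StreamReader is never disposed (CA2000).
        public string ReadFirstLineUnchecked(string path)
        {
            var fullPath = path.Trim();
            var reader = new StreamReader(fullPath);
            return reader.ReadLine();
        }

        // After: the argument is validated and the reader is disposed
        // via 'using', so both warnings are satisfied.
        public string ReadFirstLine(string path)
        {
            if (path == null)
            {
                throw new ArgumentNullException("path");
            }

            using (var reader = new StreamReader(path.Trim()))
            {
                return reader.ReadLine();
            }
        }
    }
}
```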
I won’t pretend that we now churn out systems so beautiful that angels will weep tears of joy, but by enforcing these points we are driving up code quality standards, and the difference has been very noticeable.
I have also started using these tools to guide code reviews. Code reviews used to be just a bunch of developers sitting around a projector picking holes in the code, and they were not very effective. Instead, I propose the following process for running a code review:
- Get the code out of source control fresh.
- Does it build? If yes, continue; if no, stop the code review.
- Run the unit tests.
- Do they all run and pass? If yes, continue; if no, stop the code review.
- Check the unit test code coverage.
- Is the coverage above roughly 60%? If yes, continue; if no, stop the code review unless there is a good reason for the shortfall that the review team are happy with.
- Check the code metrics (Cyclomatic Complexity and Maintainability Index).
- Are the metrics within the agreed boundaries? If yes, continue; if no, stop the code review.
- Run the static code analysis against the agreed rule set.
- Are there any warnings or errors? If yes, stop the code review; if no, continue.
- Once you get to this point, the development practices have been followed and you can proceed to review the actual code.