Continuous Integration is the flywheel of Agile development methods. It maintains the team's cadence, and it drives behaviors in quality, configuration management, and project planning that orient the team toward results and desirable outcomes.
Although it is critical, it is not the build tool itself that makes Continuous Integration possible. As with everything we do in application delivery and operation, it is about the processes and practices we use. But most of all, it is about how we integrate all of that through automation.
At a minimum, the process of building the code must be automated, and there must be only one path to the build. Private scripts and personal sandboxes are all well and good, but they rarely reflect the true nature of the build environment, and they introduce error and risk that result in rework. Developers must be able to integrate their code into the build in the easiest way possible, and the easiest way is to make it a natural byproduct of the development process: when the developer is done and checks in code, the build should launch.
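As a minimal sketch of "check-in launches the build", a version-control post-commit hook can simply invoke the team's one official build entry point. The `build.sh` script here is a hypothetical placeholder; substitute whatever single, shared build command your team has standardized on.

```python
#!/usr/bin/env python3
"""Post-commit hook sketch: launch the build as soon as code is checked in.

Assumes a hypothetical build entry point (build.sh) at the repository root.
"""
import subprocess
import sys


def run_build(build_cmd=("sh", "build.sh")):
    """Run the one official build command and report pass/fail."""
    result = subprocess.run(build_cmd, capture_output=True, text=True)
    if result.returncode != 0:
        # Surface the failure immediately so the developer sees it at check-in.
        print("BUILD FAILED:\n" + result.stderr, file=sys.stderr)
        return False
    print("Build succeeded.")
    return True


if __name__ == "__main__":
    sys.exit(0 if run_build() else 1)
```

Dropping a script like this into the repository's hook directory makes the build a byproduct of checking in, rather than a separate chore the developer has to remember.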
Better still, once the build is complete, the automated test scripts should run automatically as well.
In an ideal world, if the build fails or any of the automated tests fail, the process should send new “tickets” to the developer's inbox, mark the code in the repository as “incomplete”, and notify the team. Breaking the build is a serious matter that affects the productivity of the whole team, and it should be treated accordingly.
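The build-then-test-then-flag flow can be sketched as a short pipeline. The `notify_team` function here is a stand-in, not a real API; in practice it would be wired to your ticketing system, email, or chat tool, and the "incomplete" marking would happen in the repository itself.

```python
import subprocess


def notify_team(message):
    """Placeholder notifier: wire this to email, chat, or a ticketing system."""
    print(f"[TEAM ALERT] {message}")


def ci_pipeline(build_cmd, test_cmd):
    """Build, then test; on any failure, flag the check-in and alert the team."""
    for stage, cmd in (("build", build_cmd), ("test", test_cmd)):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            # In a real pipeline this is where a ticket would be opened and
            # the revision marked "incomplete" in the repository.
            notify_team(f"{stage} failed; revision marked incomplete")
            return "incomplete"
    return "passed"
```

The point of automating this step is that the consequences of a broken build arrive immediately and impersonally, rather than depending on someone noticing.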
From a management standpoint, it is helpful to build metrics around the team's performance: the number of broken builds, the number of errors detected by automation, the number of turnovers to QA for a particular release, and so on. Collecting these metrics through process automation means dashboards can show them to the team in real time, along with trends.
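A sketch of how such metrics might be rolled up, assuming build events are logged as simple records by the automation (the record fields here are illustrative):

```python
from collections import Counter
from datetime import date

# Hypothetical build-event records, as process automation might log them.
events = [
    {"day": date(2024, 5, 1), "broken_build": True,  "automation_errors": 3},
    {"day": date(2024, 5, 1), "broken_build": False, "automation_errors": 0},
    {"day": date(2024, 5, 2), "broken_build": True,  "automation_errors": 1},
]


def summarize(events):
    """Roll build events up into the totals a team dashboard would chart."""
    summary = Counter()
    for e in events:
        summary["builds"] += 1
        summary["broken_builds"] += int(e["broken_build"])
        summary["automation_errors"] += e["automation_errors"]
    return dict(summary)
```

Grouping the same records by `day` instead of totaling them is what turns these counts into the trend lines mentioned above.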
Selection of a build tool is often a decision of technical merit, sometimes of fashion, and always of cost. The tools all provide similar capabilities, performance, and reliability; indeed, they have become quite commoditized. The most important feature they need, beyond their core build functionality, is that they can be integrated into the fabric of your development processes through automation.
Historically, I've noticed that most developers write unit tests to satisfy their own assumptions. While test-driven development was popular in the early 2000s, it stretched delivery times, so a halfway house was often sought.
Once the stakeholder has requested the story or business case and the elements are scheduled to be coded, I see three levels of testing:
i) Developer testing – does the code work without breaking or spewing out lines of exceptions?
ii) QA testing – test harnesses run against the existing product. Does the QA release break the current live iteration? If so, it is red-flagged back to the developer who wrote the code, who reworks it and checks it in again.
iii) Release manager testing – once code has passed QA, the release manager should do a final sanity check and then schedule a live release. Once the live release is done, the code base can be tagged as production and the next iteration begins.
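The three gates above can be sketched as an ordered promotion pipeline. The gate names and check functions here are purely illustrative stand-ins for real test harnesses, not any particular tool's API.

```python
def promote(change, gates):
    """Run a change through ordered test gates; stop at the first failure.

    `gates` is a list of (name, check) pairs; each check returns True/False.
    Returns "tagged production" if every gate passes, otherwise names the
    gate that red-flagged the change back to the developer.
    """
    for name, check in gates:
        if not check(change):
            return f"red-flagged at {name}"
    return "tagged production"


# Illustrative gate checks standing in for real test harnesses.
gates = [
    ("developer testing", lambda c: c.get("unit_tests_pass", False)),
    ("QA testing",        lambda c: c.get("qa_harness_pass", False)),
    ("release sanity",    lambda c: c.get("sanity_check_pass", False)),
]
```

The ordering matters: a change never reaches the release manager's sanity check until QA has passed it, which is what keeps the final gate a quick check rather than a full retest.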
Notice that the developer isn't involved with versioning or regression testing. They know all the tricks to make their own tests pass, tricks that lead to customer pain later.
Iteration length can vary from days to weeks. I'd rather not go any further than a two-week cycle; otherwise the story estimates become a pain to manage.