To this day, CI infrastructure that makes developers more productive is not a priority in many organisations. There are three main reasons for this:
- Top management fails to see the return on investment
- Security and operational constraints make it too difficult to set up
- Operations teams are just too busy managing production environments
The most common CI solutions are either a Jenkins server with its build agents, or a cloud or on-premises vendor subscription. The former requires managing in-house servers; the latter gets very expensive once a large team needs enough build concurrency.
None of this should be a developer's concern. Developers need enough build capacity to have fast builds and zero queue time. They need CI capacity that scales automatically when the team grows. One that doesn't require asking their manager, operations team, or finance department to adjust capacity so they can do their day-to-day job better. The larger the organisation, the more complex the approval structure for anything becomes.
All in all, we want two things:
1. Each developer in a team should have their own dedicated CI server so builds never get queued. That means less context switching for each developer. That means more productivity. You, almighty developer, should be able to close more PRs in a day than Harvey Specter closes deals in a whole season of "Suits".
2. It should scale easily when the team size changes. New people in the team? They get their dedicated CI server out of the box from day one. Nobody needs to bother finance or operations to set anything up. The team is fully autonomous.
Ok, so where do we go from here? By the way, I'm writing 'we' but it's actually just 'me'. My name is Jean-Paul. Look at the very end for more info if interested.
The thought process
I started from a blank sheet, drew a bunch of developers, and drew a server next to each person. And it came to me!
Nowadays developers work on powerful laptops. 4 CPU cores and 8 GB of RAM is the bare minimum; usually it is more. If you could run CI builds on the developers' laptops, you wouldn't need any servers. And it would scale by design. New developer. New laptop. New build server!
Hmm. A developer's laptop is not really a stable environment to run builds on. Or is it?
You can isolate the build environment using Docker containers. Can you run unit tests, integration tests, or whatever kind of tests your CI process triggers in Docker? Sure you can. Use a Dockerfile to run a sequence of commands, or a docker-compose.yml file if you need to mimic infrastructure for your tests.
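For example, a minimal docker-compose.yml that mimics infrastructure for integration tests could look like this sketch (the service names, image tags, and test command are my assumptions, not anything Fire CI requires):

```yaml
# Illustrative only: run a test suite against a throwaway Postgres instance.
services:
  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_PASSWORD: test
  tests:
    build: .                  # built from the repository's own Dockerfile
    command: npm test         # hypothetical test command
    depends_on:
      - db
    environment:
      DATABASE_URL: postgres://postgres:test@db:5432/postgres
```

The database lives and dies with the build, so every run starts from a clean slate.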
Second thing that comes to mind: will running builds in the background disturb developers in their current work? The answer is no. Each developer can configure how much CPU and RAM Docker may use. Even if you let Docker take half of your laptop's resources when needed, you are still good to write code, build locally, and do everything else. Let's face it: the computing power of your laptop is heavily underused 99% of the time.
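As a sketch of how that fencing can look at the container level (the numbers here are arbitrary), the Compose file format lets you cap a service's resources:

```yaml
# Illustrative: cap the build container at 2 CPUs and 4 GB of RAM
services:
  build:
    build: .
    cpus: 2
    mem_limit: 4g
```

Plain `docker run` accepts the equivalent `--cpus` and `--memory` flags, and Docker Desktop additionally exposes a global resource cap in its settings.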
When talking to developers about the idea, one objection came up often: builds on a laptop would be less stable than builds run on a server, because a laptop can stop at any time. The battery might die, you might need to go to a meeting, or the network might be down. I bought into that objection at first, but in fact it makes little sense. Your laptop is physically on the table in front of you. You control every aspect of it: power supply, network, when you put it to sleep. You have much more control over it than over any hardware in a data center probably thousands of miles away. Power supplies can fail there too. Networking can fail there too. And the whole hardware and software machinery built to run your build can definitely fail.
After talking to people, making polls and gathering feedback, I've decided to build the thing.
Even better than expected
When building Fire CI and using Docker-based local builds, I realized that it opened up a world of opportunities I hadn't thought of.
First, running builds locally provides a great developer experience. Everything is local and therefore truly real-time. You get desktop notifications for passed or failed builds, and tailing logs is a breeze. You can also cancel, restart, or reorder builds at will.
Second, you can use the Dockerfile and docker-compose.yml standards to define the builds. And that's just awesome. There is no third-party "pipeline definition" format to learn and maintain. The .fire.yml file you add to your repository to work with Fire CI is a one-liner: you point to the Dockerfile or docker-compose.yml file to use. Everything else is standard Docker. Extremely powerful, and with rock-solid documentation.
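For illustration, such a one-liner might look like the sketch below. The exact key name is my assumption, so check the Fire CI documentation for the real syntax:

```yaml
# Hypothetical sketch of a .fire.yml; the real key name may differ.
# It simply points at the Dockerfile (or docker-compose.yml) to build with.
dockerfile: ./Dockerfile
```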
Third, Docker multi-stage builds and layer caching speed up builds more than I anticipated. The first step of a CI build is almost always installing third-party libraries and dependencies, and these do not change often. Docker caches that step out of the box when the dependencies are unchanged. I have seen 15-minute builds shrink to 4 minutes.
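As a sketch of why this works (a hypothetical Node.js project, but any stack with a lockfile behaves the same way), the dependency stage below is only rebuilt when the lockfile changes:

```dockerfile
# Illustrative multi-stage build; project layout and commands are assumptions.
# Stage 1: install dependencies. This layer stays cached across builds
# as long as package.json and package-lock.json are unchanged.
FROM node:18-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci

# Stage 2: add the sources on top of the cached dependency layer and run tests.
FROM deps AS test
COPY . .
RUN npm test
```

When only source files change, Docker reuses the cached `deps` layer and skips the `npm ci` step entirely, which is where most of the saved minutes come from.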
Needless to say, I am pretty happy with the solution. I hope you will try it and be happy with it too :)
Oh and that's me: @jpdelima
I've worked for startups, software houses, enterprise corporations and consulting companies (currently at Netcompany in Warsaw).
I've seen the same pain in most places and decided to come up with a solution for myself and hopefully for you.
That's right, it's only me. How can one guy run a mission-critical system like a CI platform? It's very possible. Fire CI is an agent-based platform. The agent code is good. The backend autoscales on AWS. And my own CI workflow is rock solid :)