Some time ago, a new product demanded a lot of attention and manpower. Many of my fellow team members were taken off our project and diverted to this new one. That left us pretty shorthanded. In my primary role as developer, things were looking to get quite interesting. I didn’t just have to worry about getting my changes ready; I also had to make sure those changes made it to production. I didn’t have any DevOps knowledge at the time (it didn’t even exist then, in 2008 or so), and I also thought that the project would end quickly. Silly me. My way of promoting our team’s changes was a basic flow: build, copy, and paste. And as I’m sure you know, many things can go wrong when we add any human intervention to the process.
We Learned the Wrong Way
There’s a funny anecdote from the days of this project, and those of us who were involved always joke about it. Here’s the story. We only had one operations guy when the company started. The number of applications kept increasing, and he couldn’t give our team’s application the attention it deserved. We decided to help him out by documenting our manual deployment process (yes, it was still manual), and the document worked very well for the dev team. But we discovered that was only because we knew better than anyone how the application worked.
A new operations guy joined, and we told him we already had documentation for our deployments; he just needed to follow the recipe. One day there was a need for an emergency deployment, and only the Ops team was still in the office (how convenient). Deployment started. Everything was going well...until they started receiving emails, SMSs, and calls reporting that the system was down. The Ops team, following the protocol for problems like this, called the lead developer for help.
When the lead developer arrived, he started reading the error messages and found the problem. He fixed it right away, and the errors stopped. What went wrong? The ops guy had skipped a very important step in the recipe. Even though we had the most elaborate document explaining how to do deployments, we failed because we had human intervention in the process. But we knew there was something better: we had heard about Jenkins.
Jenkins is an open-source automation server whose history goes back to 2004 (it began life as the Hudson project), and its development is backed mainly by CloudBees. (The GitHub repo is located here.) It’s most commonly used for builds, but it can also automate a wide variety of tasks. It supports all common version control tools like Git, and its functionality can be extended with plugins. You can configure a job to be triggered automatically by a Git push or schedule it via a cron-like mechanism.
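To make the triggering options concrete, here’s a minimal sketch of a declarative Jenkinsfile. The build command is hypothetical; the `triggers` block shows the two mechanisms mentioned above, polling Git for new commits and a cron-like schedule:

```groovy
pipeline {
    agent any
    triggers {
        // Check source control for new commits roughly every 5 minutes.
        pollSCM('H/5 * * * *')
        // Alternatively, run on a cron-like schedule (e.g., nightly):
        // cron('H 2 * * *')
    }
    stages {
        stage('Build') {
            steps {
                sh 'make build'   // hypothetical build command
            }
        }
    }
}
```

(A webhook from your Git server is usually preferable to polling, but polling is the simplest way to start.)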
It’s the Easiest Way to Start
We already had some automation scripts for deployments. The part we were missing was something to run them in the proper order. With Jenkins, it doesn’t matter what OS or language you’re using: it runs on Linux and Windows, and you can plug in any tool you need. Jenkins is written in Java, so the only prerequisite is to have Java installed. (You can download the Jenkins installer here.)
You can start with a very simple job, which is a single task or step in your process. One job is enough to begin with, but the recommendation is to have multiple jobs (e.g., one for each script) so you can identify problems easily and avoid restarting the whole process every time one step fails.
The very first thing you need to do to start taking full advantage of Jenkins is to document the set of steps you perform every time you deploy something. List the servers you have for each environment, your source code repositories, and all the dependencies your application needs to work. By doing this, you’ll identify which tools will be needed on the Jenkins server, what access or permissions your jobs will require, and so on.
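That inventory maps naturally onto a pipeline definition. In this sketch, the servers, repository URL, and runtime dependency are hypothetical placeholders standing in for whatever your own document lists:

```groovy
pipeline {
    agent any
    environment {
        // Hypothetical values taken from the deployment document:
        // target server, source repository, and runtime dependency.
        DEPLOY_HOST = 'staging.example.com'
        REPO_URL    = 'https://example.com/team/app.git'
        JAVA_HOME   = '/opt/java8'
    }
    stages {
        stage('Checkout') {
            steps {
                // Requires the Git plugin on the Jenkins server --
                // exactly the kind of dependency the inventory surfaces.
                git url: env.REPO_URL
            }
        }
    }
}
```

Writing the inventory down as an `environment` block also makes it obvious which values change per environment and which credentials the jobs will need.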
You Can Do Almost Everything
After installing Jenkins on your dedicated server, you can start installing everything you need. The Jenkins installation can be architected to have a master that can be used to authenticate users, manage security and permissions, and create the folder structure that you’ll need for your different application jobs. Because you might have different types of technologies, you’re better off having one Jenkins agent connected to the master for every line of business or every different stack (e.g., Windows for .NET or Linux for Java).
To set up a Jenkins agent, you need Java on the server plus the Jenkins agent program, and then you register the machine with the master. The agent must have all the dependencies needed to execute the jobs you’ll assign to it (e.g., Git, Java, etc.). With this type of configuration, you enable your team to automate applications that run on different versions of a stack (e.g., Java 7 and 8). Once it’s ready, you can choose where each Jenkins job will run.
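Choosing where a job runs is done with agent labels. A hedged sketch, assuming you labeled an agent `linux` and `java8` when registering it (the label names and build command are made up for illustration):

```groovy
pipeline {
    // Run this job only on agents that carry both labels,
    // e.g. a Linux machine with Java 8 installed.
    agent { label 'linux && java8' }
    stages {
        stage('Build') {
            steps {
                sh './gradlew build'   // hypothetical build step
            }
        }
    }
}
```

A sibling job for your .NET line of business could use `agent { label 'windows && dotnet' }` instead, and the master routes each job to a matching agent.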
Be aware that, as of this writing, there are 1,403 plugins that you can install on your Jenkins server. The Jenkins installation comes only with the essentials: a minimum set of plugins. But you’ll need to install extra plugins if you want to do things like notify the team via email or Slack about the status of any job, build .NET applications, use your cloud provider, and much more. (You can find all plugins you’ll need here.) There are so many at your disposal that, when I need to automate anything, I’ll joke that “there must already be a Jenkins plugin for this.”
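If you run Jenkins from its official Docker image, one convenient way to keep your plugin set reproducible is a `plugins.txt` file that the image’s plugin installer consumes. A sketch, matching the examples above (the selection is illustrative, not a recommendation):

```
# plugins.txt -- example plugin IDs, installed at image build time
git                    # Git integration
workflow-aggregator    # the Pipeline suite
email-ext              # richer email notifications
slack                  # Slack notifications
msbuild                # building .NET projects
```

Keeping this file in source control means a rebuilt Jenkins server comes up with the same plugins every time, instead of someone clicking through the update center from memory.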
Improve Time to Deploy to Any Environment
It’s every developer’s dream to push code and get it to production without any suffering. Deploying to any environment, especially production, should be a boring activity. It takes time to get there, but it’s possible. Once you understand that your main goal as a developer is to provide value to the company, you’ll start focusing on what adds that value: the code. We’ve all lived through the pain of integrating everyone’s changes and pushing them to production. It can be avoided.
We live in a world of constant change where, if you don’t adapt, you die. Improving the speed and frequency of pushing new changes to your users is a major determinant of success. Say you just pushed something that’s breaking things for your users. How fast can you react? Do you have an easy, fast, and reliable way of rolling back or pushing a fix? Jenkins can help you answer yes to these questions.
Of course, Jenkins by itself won’t solve all your problems. For example, you’ll always need a set of automated tests—like unit tests, integration tests, and smoke tests—to ensure that you don’t expose your users to bugs.
Consistency and Repetition
One of the best ways to avoid downtime in production is to avoid pushing new changes. Many things can go wrong with servers: CPU spikes, memory leaks, exhausted storage, all sorts of things. But nothing compares to the risk of putting new features or fixes live. When you have a consistent and repeatable way of doing things, you minimize that risk.
With Jenkins, you can script your deployment pipelines using Jenkins Pipeline. Jenkins Pipeline is a set of plugins that lets you define your delivery pipeline as code. It keeps everything scripted and easy to review when something new is needed. It also gives you confidence: it doesn’t matter if someone changes a Jenkins job by hand, because you’ll always get the latest version from your source control. Almost the same way you treat code, right? This is the first step toward practicing continuous delivery.
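Pipeline as code usually means a Jenkinsfile checked into the application’s repository. Here’s a hedged end-to-end sketch; the shell scripts and email address are hypothetical stand-ins for your own steps:

```groovy
// Jenkinsfile -- lives in the application's Git repository,
// so every change to the pipeline itself is reviewed like code.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './build.sh'          // hypothetical build script
            }
        }
        stage('Test') {
            steps {
                sh './run-tests.sh'      // automated tests gate the deploy
            }
        }
        stage('Deploy') {
            steps {
                sh './deploy.sh staging' // hypothetical deployment script
            }
        }
    }
    post {
        failure {
            // Uses the built-in mail step; address is illustrative.
            mail to: 'team@example.com',
                 subject: "Deployment failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                 body: "See ${env.BUILD_URL} for details."
        }
    }
}
```

Because the whole flow is in one reviewed file, “someone changed the job yesterday” stops being a mystery: the change is a commit, with an author and a diff.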
There’s No Excuse Not to Automate Deployments
Once you have an automated, consistent, repeatable, and reliable way of shipping changes to your code, you and your team can focus on what’s really important: providing value. Jenkins is just a tool, but it gives you a graphical representation of your deployment workflow, so you can visualize how your team pushes updates to end users. Check out Jenkins and start coding your delivery pipeline, too. You don’t want to keep banging your head against the wall when something goes wrong in production. When we humans know we’re causing damage, we tend to freak out and lose common sense. Machines, on the other hand, will keep doing exactly what you told them to do. That’s where Jenkins is helpful.