During my career as a Software Engineer, I have seen a variety of approaches to ‘deployments’. This is one aspect of the software development life cycle that wasn’t touched on at all during my four-year Computer Science degree, and it seems to be a point of contention for a lot of software companies.
In my experience, a lot of companies seem to almost fear deployments, leading to unmanageable 6-week deployment cycles, middle-of-the-night deployments that take hours and inevitably result in unplanned outages, a lot of error-prone manual configuration, unnecessary friction between developers and sys admins, no ability to roll back, no version history, and so on.
Deployments shouldn’t be that hard
Really, they shouldn’t. If you have a well-planned, stable and automated deployment process, you should be able to deploy early and often and hopefully without any glitches. Your customers will be happier and you won’t end up looking like the poor cat in the picture above.
Below is a list of what I consider to be the most important qualities of a good deployment strategy (not necessarily in order of priority):
- Simple: deployments really shouldn’t be as complicated and scary as a lot of companies make them out to be. Deploying should be a one-button action, and it shouldn’t require the whole sys admin team taking the system offline for hours in the middle of the night. The number of steps involved may vary depending on your system and how major the deployment is, but they should be easy to follow.
- Flexible: it should be fairly easy to deploy any of your projects – from static files to binaries, web applications that need IIS to be recycled, and even Windows services that need to be uninstalled, reinstalled and restarted.
- Visible: developers, sys admins and anyone else in the company who wants to know should be able to quickly and easily see a history of all deployments, what the last version deployed was, what the changes were, who worked on them and when exactly they went live.
- Reversible: for those odd occasions when a production release doesn’t go quite as expected, it should be relatively easy to roll back to a previous version. I say relatively easy because it really depends on how major the deployment was and how complicated your software is.
My deployment system
In one of my previous roles, I was asked to come up with a process for deploying the various components that made up the software we were building. The components that needed to be deployed (sometimes together, sometimes independently) were .NET web services, ASP.NET MVC web apps, Windows services and static files. What I ended up building was a deployment process that involved TeamCity, MSBuild scripts, custom MSBuild tasks, a .NET ‘deployment web service’ and, last but not least, a page on our intranet to show version history and comments.
Since building this process, I’ve been asked to describe it several times so I thought to myself, why not blog about it?!
I built this system for one company, with their requirements in mind, so it may not suit everyone’s deployment needs. For example, my deployment system did not deploy SQL scripts, as this was not a requirement; any scripts or tables that needed to be created or altered, and any migration scripts, would be run manually before deploying the code. This became part of the greater deployment process, but it was not handled automatically. Finally, I only built the code aspect of this system and wired it all up through TeamCity – our sys admins set up all of the required VPN tunnels, FTP servers and so on.
For every code solution, we had an MSBuild script that TeamCity would use to build and publish the binaries and static files on dedicated build agents. Apart from giving TeamCity instructions on how to build the solution, these MSBuild scripts would also zip up the binaries and static files, using the version number as the filename, and then copy the zip files to a build server. The sys admins set up an FTP server pointing at all of these zip files so that the servers we deployed to had access to them.
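The packaging step can be sketched roughly like this. The real work was done by MSBuild scripts, not Python; this is just an illustrative sketch, and the paths and version number are invented:

```python
import os
import shutil

def package_build(output_dir, version, drop_dir):
    """Zip a solution's build output, named after the version number,
    and place the archive in the build server's drop folder (sketch --
    the real system did this from an MSBuild script)."""
    archive_base = os.path.join(drop_dir, version)  # e.g. "1.4.2.317"
    # shutil.make_archive appends ".zip" and returns the full archive path
    return shutil.make_archive(archive_base, "zip", root_dir=output_dir)
```

Naming the archive after the version number is what later makes version history and rollbacks trivial: every past build stays available on the FTP server under an unambiguous name.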
The next step was to set up new TeamCity build tasks that used different MSBuild scripts to actually deploy the zipped-up binaries and static files. We had one MSBuild task per solution per environment. These MSBuild scripts used custom MSBuild tasks to call the deployment web service on a particular server and environment. Where a solution was load balanced, we would pass the host names of each server hosting the solution to the custom MSBuild task, which would then spin up a bunch of threads and call the deployment web service on each of those servers.
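The fan-out across load-balanced servers looked conceptually like this. The real implementation was a custom MSBuild task in .NET; this Python sketch only shows the one-thread-per-host idea, and the function names are made up:

```python
from concurrent.futures import ThreadPoolExecutor

def deploy_to_servers(hosts, deploy_one):
    """Call the deployment web service on every server hosting a
    load-balanced solution, one thread per host. `deploy_one` stands
    in for whatever makes the actual web service call; `hosts` must
    be non-empty."""
    with ThreadPoolExecutor(max_workers=len(hosts)) as pool:
        results = list(pool.map(deploy_one, hosts))
    return dict(zip(hosts, results))
```

Deploying to all hosts in parallel keeps the window during which different servers run different versions as short as possible.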
The Deployment Web Service – the brains of the system
The ‘deployment web service’ was the brains of the system – the rest of the stuff I just described simply wired everything up. The first step in setting up a new server was always to deploy and configure the deployment web service. Once it was installed and configured properly, this is more or less what the deployment web service would do when it was called:
- Download the appropriate zip file containing the built solution from the FTP server onto the server you are deploying to.
- Unzip the zip file into a temporary folder.
- Modify the config files – depending on the environment being deployed to, the deployment web service would pick the right config file, delete all the others and rename the remaining one to Web.config (or App.config…).
- If the solution being deployed was a Windows service, the deployment service would now stop and uninstall it if it was already installed and running.
- Copy all the files from the temporary directory over the top of the actual binaries using robocopy.
- If the solution being deployed was a website or a web service, the deployment service would now recycle the app pool that the site was running under.
- If the solution being deployed was a Windows service, the deployment service would now install and start it.
- Delete all temporary files.
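The steps above can be sketched as one function. This is a hedged Python approximation of logic that actually lived in a .NET web service: the hook callables stand in for the IIS and Windows-service operations, and all names here are invented for illustration:

```python
import os
import shutil
import zipfile

def deploy(zip_path, temp_dir, target_dir, environment, app_type,
           stop_service=None, start_service=None, recycle_app_pool=None):
    """Sketch of the deployment web service's steps (not the real code)."""
    # 1-2. The zip has already been downloaded from the FTP server;
    #      unzip it into a temporary folder.
    with zipfile.ZipFile(zip_path) as z:
        z.extractall(temp_dir)
    # 3. Pick the config file for this environment, delete all the
    #    others and rename the survivor to Web.config.
    keep = f"Web.{environment}.config"
    for name in os.listdir(temp_dir):
        if name.endswith(".config") and name != keep:
            os.remove(os.path.join(temp_dir, name))
    os.rename(os.path.join(temp_dir, keep),
              os.path.join(temp_dir, "Web.config"))
    # 4. Windows services get stopped and uninstalled before the copy.
    if app_type == "windows_service" and stop_service:
        stop_service()
    # 5. Copy everything over the top of the live files
    #    (the real system used robocopy).
    shutil.copytree(temp_dir, target_dir, dirs_exist_ok=True)
    # 6-7. Recycle the app pool, or reinstall and start the service.
    if app_type == "web" and recycle_app_pool:
        recycle_app_pool()
    elif app_type == "windows_service" and start_service:
        start_service()
    # 8. Delete all temporary files.
    shutil.rmtree(temp_dir)
```

The important property is the ordering: services come down before their binaries are overwritten, and app pools are only recycled after the new files are in place.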
Simple, right? Seriously though, that was the brains of the system and it really didn’t do anything out-of-this-world. Deploying binaries and/or files really isn’t that hard!
But what about environment-specific configuration?
This is a much debated point and I’m not claiming that the way we dealt with environment-specific configuration is the only way to go, but it was certainly simple and enabled us to have automated stress-free deployments.
For every application we built, we would maintain separate config files for each environment. That’s right – we had a Web.config (for localhost), Web.Development.config, Web.Test.config, Web.Staging.config and Web.Production.config. I know some of you reading this will be rolling your eyes right about now and thinking… “So every time I need to add a new key to my config file, I have to add it to 5 different files?! What a nightmare!”. Trust me, it really isn’t that hard to keep these files in sync, and it’s much easier than trying to manually merge config files at the last minute before deploying. Think about it this way – once a system is up and running, you add keys far less often than you deploy (multiplied by the number of environments you’re deploying to).
So how did the right config file get deployed to each environment? All of these environment-specific config files were included as part of the project to be deployed. Each instance of the deployment web service knew which environment it was sitting in and deploying to (this was configured via its own web.config), so, as described above, the deployment web service would pick the correct config file based on its filename, delete all the others and rename the correct one to Web.config (or App.config). Easy peasy.
Having used several different techniques for deploying code through environments and out to production over the years, I found that this system demonstrated clear benefits and, in particular, satisfied all of the qualities of a good deployment process that I listed above.
Since this system was triggered from TeamCity, it really couldn’t get any easier to manage deployments. A deployment became a matter of pushing one button and waiting a couple of minutes to see the result. There was no manual merging of config files and no remoting onto the server to restart an app pool – just one button click.
In terms of set-up, although this system was made up of several components, it was fairly easy to configure a new project for deployment. Sure, there were a few steps involved, but they were all simple and more or less a copy-and-paste of an existing project’s deployment scripts. It was also pretty straightforward to set up a new server to deploy to: this involved setting up the deployment web service on the new server and updating the MSBuild scripts of the relevant projects with the new server’s hostname.
Because it was highly configurable, the system was able to deploy several components at the same time, to several different servers.
Because we used TeamCity to trigger the deployments, we had the whole build log that TeamCity produces. I put some logging into the custom MSBuild tasks so that we could see what was going on from within TeamCity’s build logs. Later on, I added some code to update a version database so that deployments were visible from within our intranet.
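Recording version history can be imagined as appending one row per deployment to a table the intranet page reads from. The post doesn’t describe the actual schema or storage, so everything in this sketch (SQLite, the table layout, the column names) is an assumption:

```python
import sqlite3

def record_deployment(db_path, solution, version, environment,
                      deployed_by, comment):
    """Append one row of deployment history (invented schema --
    the real system's version database is not described)."""
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS deployments (
        solution    TEXT,
        version     TEXT,
        environment TEXT,
        deployed_by TEXT,
        comment     TEXT,
        deployed_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP)""")
    con.execute(
        "INSERT INTO deployments "
        "(solution, version, environment, deployed_by, comment) "
        "VALUES (?, ?, ?, ?, ?)",
        (solution, version, environment, deployed_by, comment))
    con.commit()
    con.close()
```

A simple append-only table like this is enough to answer the visibility questions from earlier: what the last version deployed was, who deployed it, and when it went live.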
Rolling back to a previous version was a matter of looking up which version number you wished to roll back to and setting that version number in the parameters passed to the deployment MSBuild script by TeamCity. The custom MSBuild tasks and deployment web service would then pick up that particular version and re-deploy it.
In over a year of daily deployments of several different web sites, web services, Windows services and static files, my deployment system never failed. Having this system in place enabled us to focus on writing cool software and getting it out early and often, instead of planning stressful deployments. Everyone benefited: developers and testers could easily and quickly deploy to dev, test and staging as often as they wanted, without waiting for sys admins to be free, and production deployments were no harder. Sys admins didn’t need to know anything about how to deploy, managers could see when a new version had been deployed to a particular environment, and all the automation meant that there was very little room for human error. A definite win for everyone!
If your team doesn’t have an automated deployment system already in place, I implore you to either build your own (like we did) or use an off-the-shelf one. It may take a bit of effort to set it all up at the beginning, but it’s definitely worth it in the long run.