Continuous Integration, Deployment, and Testing

I recently gave a BrightTALK session on adding security to the Agile Cloud/DevOps development cycle. Part of that discussion addressed adding security testing before, during, and even after continuous deployment. In other words, if we continually deploy, we must continually test. Our testing needs to be in the multi-minded …
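As a rough illustration of what "continually test" can look like in practice, here is a minimal sketch of a security gate run as a pipeline stage. The check names and commands are hypothetical placeholders, not tools the talk prescribes; substitute whatever static analysis, dependency audit, and post-deploy scanning your pipeline actually uses.

```python
# Hypothetical sketch: run security checks as a gate in a CI/CD pipeline.
# The commands below are placeholders; swap in your real scanners.
import subprocess
import sys

SECURITY_CHECKS = [
    ("static analysis", ["echo", "run-your-static-analyzer-here"]),
    ("dependency audit", ["echo", "run-your-dependency-audit-here"]),
    ("post-deploy scan", ["echo", "run-your-dynamic-scan-here"]),
]

def security_gate() -> bool:
    """Run every check; fail the deployment if any check fails."""
    ok = True
    for name, cmd in SECURITY_CHECKS:
        result = subprocess.run(cmd)
        print(f"{name}: {'passed' if result.returncode == 0 else 'FAILED'}")
        ok = ok and result.returncode == 0
    return ok

if __name__ == "__main__":
    # A non-zero exit code stops the pipeline, so every deployment gets tested.
    sys.exit(0 if security_gate() else 1)
```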

Scale Is a Matter of Perspective

A lot of new technologies and techniques were invented to enable large cloud businesses to operate at scale. Enterprise IT organizations are now looking at these innovations with an eye toward realizing the same business benefits. Techniques like continuous integration (CI) and continuous delivery (CD) are helping cloud companies. Enterprises are looking for the same …

Talking Immutable Infrastructure with Codeship

Yesterday I had a chat with the folks at Codeship, a continuous integration and continuous deployment platform. The topic of immutable infrastructure came up and intrigued me, so I thought I would write about it. So what is immutable infrastructure? The concept is to never change your existing production servers; instead, build new servers through automation and destroy the old ones. This falls in line with the “fail forward” belief system of many modern-day DevOps evangelists, who hold that tweaking servers or rolling back code in highly distributed systems is too risky and causes more problems than it is worth.
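To make the idea concrete, here is a hedged sketch of the replace-don't-modify flow. Every function name here (bake_image, launch_servers, shift_traffic, destroy) is a hypothetical placeholder for whatever image bakery, provisioner, and load-balancer tooling a team actually uses; it is not Codeship's API or anyone's prescribed implementation.

```python
# Hypothetical sketch of an immutable-infrastructure rollout: never modify
# running servers. Bake a new image, stand up new servers beside the old
# ones, shift traffic, then destroy the old fleet.
from dataclasses import dataclass
from typing import List


@dataclass
class Server:
    server_id: str
    image_id: str


def bake_image(app_version: str) -> str:
    """Placeholder: build a fully configured machine image for this release."""
    return f"image-{app_version}"


def launch_servers(image_id: str, count: int) -> List[Server]:
    """Placeholder: provision brand-new servers from the baked image."""
    return [Server(server_id=f"{image_id}-{i}", image_id=image_id) for i in range(count)]


def healthy(fleet: List[Server]) -> bool:
    """Placeholder health check against the new fleet."""
    return len(fleet) > 0


def shift_traffic(fleet: List[Server]) -> None:
    """Placeholder: point the load balancer at the new servers."""
    print(f"routing traffic to {[s.server_id for s in fleet]}")


def destroy(fleet: List[Server]) -> None:
    """Placeholder: terminate a fleet; nothing is ever patched in place."""
    print(f"destroying {[s.server_id for s in fleet]}")


def deploy(app_version: str, old_fleet: List[Server], count: int = 3) -> List[Server]:
    new_fleet = launch_servers(bake_image(app_version), count)
    if not healthy(new_fleet):
        # "Fail forward": discard the bad fleet and cut a new release rather
        # than tweaking or rolling back the servers already running.
        destroy(new_fleet)
        raise RuntimeError("new fleet unhealthy; build a new release and redeploy")
    shift_traffic(new_fleet)
    destroy(old_fleet)
    return new_fleet
```

The design choice the concept hinges on is that remediation means replacing servers with a fresh build, never editing or rolling back the ones already serving traffic.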

How DevOps Can Help Automate the Pain Away

I spent two days at PuppetConf 2013 in San Francisco this week, and the common themes were to automate everything, monitor everything, provide feedback early in the process, and focus on culture. All four of those themes align with the DevOps movement and its goal of faster, more reliable delivery. Companies that can deliver software more frequently with fewer issues have a competitive advantage over those that can’t.

Continuous Operations for Zero Downtime Deployments

The old way of delivering software was to bundle it up and ship it, sell it off the shelf, or let customers download and install it. In the “shipping model”, it was the buyer’s responsibility to install the software, manage uptime, patch, monitor, and manage capacity. Sometimes the buyer performed all of those tasks themselves; other times they hired a third party to handle them. In either case, the buyer of the software had total control over whether and when the software was updated and when a planned outage would occur to perform patches or upgrades.