I was at the Santa Clara Convention Center, next to the brand-new Levi’s Stadium, home of the San Francisco 49ers, this week to listen to two days of discussions about continuous delivery and Jenkins. Keynote speakers were Kohsuke Kawaguchi, creator of Jenkins and CTO at CloudBees, and Gene Kim, coauthor of The Phoenix Project and well-known speaker on DevOps.

Keynote 1: Kohsuke Kawaguchi

Kohsuke (KK) told the story of how he started Jenkins (originally called Hudson) while working at Sun in an attempt to automate and simplify the way software was being built and deployed. It quickly became popular internally, with multiple projects running on his Sun server sitting under his desk. At the same time, Sun was laying off people, so KK started grabbing computers from departing employees, and these became part of his new cluster.

Now, ten years later, there are over 100,000 installations of Jenkins across the globe, with an ecosystem that includes more than 1,000 plugins. Jenkins has become an anchor technology for continuous delivery. KK’s goal has always been to make things simpler, and that mindset is reflected in some of the announcements he made at the show. CloudBees has added robust workflow capabilities to its suite of tools to improve the overall orchestration of building and shipping software. It also announced integration with Docker, as more companies move to container-based architectures.

KK talked about the new mindset that “everything is code.” We talk a lot about infrastructure as code, where environments and configurations are built from code, stored in repositories, and managed with the same tools and processes that developers use. Now we need to treat the pipeline artifacts as code, too. One person used the term “Pipeline as Code,” while another said “CD as Code,” with CD standing for “Continuous Delivery.” Regardless of what we call it, the point is that just about everything today can be described in some form of code or configuration, and whether it is application code, infrastructure code, pipeline code, or something else, we should treat it all in a standard way, in a common repository, with common tools and processes.
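
To make the idea concrete, here is a minimal, purely illustrative sketch of a pipeline described as versioned code living next to the application. This is not Jenkins’ actual workflow syntax; the stage names and commands (the Gradle wrapper, the image tag) are hypothetical.

    # Illustrative sketch only -- not Jenkins' workflow syntax. The point is that
    # the delivery process itself is a versioned file in the repository, reviewed
    # and changed with the same tools and processes as application code.
    import subprocess

    # Hypothetical pipeline definition, checked into source control with the app.
    PIPELINE = [
        ("build",   ["./gradlew", "assemble"]),
        ("test",    ["./gradlew", "test"]),
        ("package", ["docker", "build", "-t", "myapp:latest", "."]),
    ]

    def run_pipeline(stages):
        """Run each stage in order and stop at the first failure."""
        for name, command in stages:
            print(f"=== stage: {name} ===")
            result = subprocess.run(command)
            if result.returncode != 0:
                raise SystemExit(f"stage '{name}' failed with exit code {result.returncode}")

    if __name__ == "__main__":
        run_pipeline(PIPELINE)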

Keynote 2: Gene Kim

I have had the pleasure of watching Gene speak many times. He walked us through his famous “The Three Ways” slides and also reviewed many of the high points of the recent 2015 Puppet Labs State of DevOps report. What was new for me was the number of real enterprise use cases Gene can now point to in his talks. When I first saw him talk about DevOps a few years back, his examples came from what he calls “the unicorns”: Etsy, Facebook, Netflix, etc. Now he can point to large enterprises like Target, GE Capital, Capital One, Macy’s, Nationwide, Raytheon, and more.

A big takeaway for me was the importance of shifting away from optimizing for cost to optimizing for speed. When speed to market is a driving factor, people’s incentives align more closely around improving the process of building and shipping software. In a panel discussion, Forrester analyst Kurt Bittner noted that when organizations optimize for speed, they tend not to organize into technical silos like dev, test, and security, but instead organize around products. Organizing by product lends itself to shared goals and higher collaboration.

Gene told us that one way to measure the maturity of DevOps in an organization is to rate, on a scale of 1 to 7, how much it fears deployments, with 1 being low fear and 7 being scared to death. Companies with higher levels of fear tend to have less automation, lower quality, higher risk, and less-satisfied customers, because their deployments often disrupt the flow of business.

Vendor Reviews

I walked the floor and visited most of the booths. Most of the vendor solutions filled a specific need within the CD pipeline and had an integration with Jenkins. Two areas that stood out to me were automated testing and security.

BlazeMeter has built a nice user interface on top of the popular Apache JMeter project. JMeter has a strong community and user base but can be cumbersome to work with. BlazeMeter adds a usability layer that makes configuring, running, and viewing test results a breeze. The UI provides simple configuration capabilities and allows for import and export of YAML files containing all of the testing instructions.
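
To give a sense of what that usability layer smooths over, here is a minimal, hypothetical sketch of driving JMeter headless from a pipeline step. The -n/-t/-l flags are standard JMeter non-GUI options; the file names are made up for illustration.

    # Hypothetical sketch of running a JMeter test plan from a pipeline step in
    # non-GUI mode and failing the step if the run does not complete.
    import subprocess
    import sys

    def run_jmeter(test_plan: str, results_file: str) -> None:
        # -n = non-GUI mode, -t = path to the test plan, -l = file to log results to
        command = ["jmeter", "-n", "-t", test_plan, "-l", results_file]
        completed = subprocess.run(command)
        if completed.returncode != 0:
            sys.exit(f"JMeter run failed for {test_plan}")

    if __name__ == "__main__":
        # File names are placeholders for illustration only.
        run_jmeter("checkout_load_test.jmx", "checkout_results.jtl")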

SOASTA is well known for its SaaS and on-premises performance and load testing capabilities. I caught a glimpse of its new TouchTest product, which automates user interaction with mobile devices and tablets. One of the challenges with mobile is the number of devices that need to be tested and the frequency with which device technologies change; trying to test mobile applications manually is a losing battle. SOASTA has some real cutting-edge automation technology for simulating mobile user experiences across a wide variety of devices. It can also generate load from multiple endpoints across the globe, and its visualization tools are second to none.

Perfecto Mobile takes a different angle on mobile testing. One area that does not get enough attention in mobile testing is event triggers. Have you ever had an app interrupted by a phone call, a text message, intermittent connectivity, or some other unplanned event? Not all mobile apps are designed to recover gracefully when interrupted unexpectedly. Perfecto Mobile’s testing solution focuses on testing apps against these types of events across multiple device types.

There were two really cool security solutions. I have written in the past about how we need to take new approaches to security in this new age of continuous delivery. These tools are a great step in the right direction.

First up is Black Duck. Many of my clients are using Black Duck in their CD pipelines to enforce security and compliance policies. The way it works is that security and compliance policies are set up in the admin console, and a team of security researchers at Black Duck keeps the vulnerability list current with the issues NIST reports each day. When a developer kicks off a build, the code is scanned for vulnerabilities and adherence to policies and matched against the vulnerability database. If certain vulnerabilities are discovered or a required security score is not met, the build fails and the developer must fix the issues before the code can move forward. Tools like this go a long way toward getting the security and compliance folks on board with your DevOps initiatives.
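
As a rough illustration of the pattern (this is not Black Duck’s actual API or report format, just the general shape of a policy gate wired into a build), the idea is that a failed policy check returns a non-zero exit code, which fails the build:

    # Hypothetical sketch of a security/compliance gate in a CD pipeline. The
    # report format and thresholds are invented for illustration.
    import json
    import sys

    MAX_CRITICAL_FINDINGS = 0   # hypothetical policy: no critical vulnerabilities
    MIN_SECURITY_SCORE = 80     # hypothetical policy: minimum overall score

    def enforce_policy(report_path: str) -> None:
        with open(report_path) as f:
            report = json.load(f)   # e.g. {"critical_findings": 2, "security_score": 64}

        critical = report.get("critical_findings", 0)
        score = report.get("security_score", 0)

        if critical > MAX_CRITICAL_FINDINGS or score < MIN_SECURITY_SCORE:
            # Non-zero exit fails the build step and blocks promotion.
            sys.exit(f"Policy violation: {critical} critical findings, score {score}")

        print("Scan passed policy checks; build may proceed")

    if __name__ == "__main__":
        enforce_policy(sys.argv[1] if len(sys.argv) > 1 else "scan_report.json")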

Sonatype’s Nexus product casts a much wider net, focusing on what it calls the “software supply chain.” It provides a repository for storing all of the binaries that make up the infrastructure. Where Black Duck scans the developer’s code, Nexus analyzes the infrastructure for vulnerabilities and compliance. It, too, constantly monitors the NIST vulnerability database, but because of the repository, it can proactively tell you which machines are vulnerable when the next Shellshock hits.
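
The underlying idea can be sketched in a few lines. The data and formats below are invented, not Sonatype’s; the point is that when the repository knows which component versions every machine carries, a new advisory can be mapped directly to the machines that are exposed.

    # Hypothetical sketch of mapping a new vulnerability advisory to affected
    # machines using an inventory of installed component versions.

    # Which component versions are deployed on which machines (made-up data).
    INVENTORY = {
        "web-01": {"bash": "4.2.45", "openssl": "1.0.1g"},
        "web-02": {"bash": "4.3.30", "openssl": "1.0.2k"},
    }

    # A newly published advisory affecting specific versions of one component.
    ADVISORY = {"component": "bash", "affected_versions": {"4.2.45", "4.2.46"}}

    def affected_machines(inventory, advisory):
        """Return (machine, version) pairs carrying an affected component version."""
        hits = []
        for machine, components in inventory.items():
            version = components.get(advisory["component"])
            if version in advisory["affected_versions"]:
                hits.append((machine, version))
        return hits

    if __name__ == "__main__":
        for machine, version in affected_machines(INVENTORY, ADVISORY):
            print(f"ALERT: {machine} runs {ADVISORY['component']} {version}")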

My Best in Show vote goes to Codenvy. Codenvy has built its technology on top of the APIs from Amazon’s WorkSpaces technology and offers virtual environments as a service. Developers can choose from a library of prebuilt Docker images to launch the stack of their choice. This solves the “it works on my machine” problem, in which a developer builds something locally that immediately breaks when it is shipped to another environment. A developer can now use a standard virtual workstation that is configured independently of his or her local machine, and that same image can then be used in the pipeline as the code progresses from development through test and into production. This leads to huge cost savings and can drastically reduce time to market by eliminating the wasted time previously spent configuring environments.
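
For a rough sense of the workflow, here is a sketch assuming nothing beyond a plain docker run; the registry and image name are made up for illustration, and this is not Codenvy’s actual interface.

    # Hypothetical sketch: developers and pipeline stages all start from the same
    # prebuilt image, so the environment is identical everywhere.
    import subprocess

    DEV_IMAGE = "registry.example.com/payments/dev-env:2015.06"  # made-up image name

    def launch_workspace(image: str) -> None:
        """Start an interactive shell in a throwaway container from the image."""
        subprocess.run(["docker", "run", "--rm", "-it", image, "/bin/bash"], check=True)

    if __name__ == "__main__":
        launch_workspace(DEV_IMAGE)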

Summary

My big takeaway from the conference is that the adoption rate for continuous delivery within enterprises is increasing at a hockey-stick rate. The Jenkins continuous integration server is a key piece of technology enabling this movement, and the ecosystem around Jenkins is growing, as are the user base and the number of workloads. CloudBees’ recent pivot from pure-play PaaS provider to enterprise continuous delivery solution provider appears to be the right move at the right time. Expect to hear the terms “Everything as Code” and “Pipeline as Code” more in the future as companies become more mature with DevOps and continuous delivery.