Tuesday, June 28, 2016

Running Gatling load tests in Docker containers via Jenkins

Gatling is a modern load testing tool written in Scala. As part of the Jenkins setup I am in charge of, I wanted to run load tests using Gatling against a collection of pages for a given website. Here are my notes on how I managed to do this.

Running Gatling as a Docker container locally

There is a Docker image already available on Docker Hub, so you can simply pull down the image locally:


$ docker pull denvazh/gatling:2.2.2

Instructions on how to run a container based on this image are available on GitHub:

$ docker run -it --rm -v /home/core/gatling/conf:/opt/gatling/conf \
-v /home/core/gatling/user-files:/opt/gatling/user-files \
-v /home/core/gatling/results:/opt/gatling/results \
denvazh/gatling:2.2.2

Based on these instructions, I created a local directory called gatling, and under it I created 3 sub-directories: conf, results and user-files. I left the conf and results directories empty, and under user-files I created a simulations directory containing a Gatling load test scenario written in Scala. I also created a file called urls.csv in the user-files directory, containing a loc header line followed by one URL per line for each page I want to load test.
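For reference, this layout can be created with something like the following (a sketch; the simulation file name is up to you):

$ mkdir -p gatling/conf gatling/results gatling/user-files/simulations
$ touch gatling/user-files/urls.csv
$ touch gatling/user-files/simulations/Simulation.scala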

Assuming the current directory is gatling, here are examples of these files:

$ cat user-files/urls.csv
loc
https://my.website.com
https://my.website.com/category1
https://my.website.com/category2/product3

$ cat user-files/simulations/Simulation.scala

package my.gatling.simulation

import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class GatlingSimulation extends Simulation {

  val httpConf = http
    .baseURL("http://127.0.0.1")
    .acceptHeader("text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8")
    .doNotTrackHeader("1")
    .acceptLanguageHeader("en-US,en;q=0.5")
    .acceptEncodingHeader("gzip, deflate")
    .userAgentHeader("Mozilla/5.0 (Windows NT 5.1; rv:31.0) Gecko/20100101 Firefox/31.0")

  val scn1 = scenario("Scenario1")
    .exec(Crawl.crawl)

  val userCount = Integer.getInteger("users", 1)
  val durationInSeconds  = java.lang.Long.getLong("duration", 10L)
  setUp(
    scn1.inject(rampUsers(userCount) over (durationInSeconds seconds))
  ).protocols(httpConf)
}

object Crawl {

  val feeder = csv("/opt/gatling/user-files/urls.csv").random

  val crawl = exec(feed(feeder)
    .exec(http("${loc}")
    .get("${loc}")
    ))
}


I won't go through the different ways of writing Gatling load test scenarios here. There are good instructions on the Gatling website -- see the Quickstart and the Advanced Tutorial. The scenario above reads the urls.csv file, randomly picks a URL from it, then runs a load test against that URL.

I do want to point out 2 variables in the above script:

  val userCount = Integer.getInteger("users", 1)
  val durationInSeconds  = java.lang.Long.getLong("duration", 10L)

These variables specify the max number of users we want to ramp up to, and the duration of the ramp-up. They are used in the inject call:

scn1.inject(rampUsers(userCount) over (durationInSeconds seconds))

The special thing about these 2 variables is that they are read from Java system properties, which Gatling's launcher picks up from JAVA_OPTS. So if you pass a -Dusers Java option and a -Dduration Java option, Gatling will set the userCount and durationInSeconds variables accordingly. This is a good thing, because it allows you to specify those numbers outside of Gatling, without hardcoding them in your simulation script. Here is more info on passing parameters via the command line to Gatling.
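For example, to run this simulation locally outside of Docker, with 20 users ramped up over 2 minutes, you could do something along these lines (a sketch, assuming the Gatling bundle's gatling.sh is on your PATH and appends the JAVA_OPTS environment variable to the JVM options):

$ JAVA_OPTS="-Dusers=20 -Dduration=120" gatling.sh -s my.gatling.simulation.GatlingSimulation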

While pulling the Gatling Docker image and running it is the simplest way to run Gatling, I prefer to understand what's going on in that image. I started off by getting the Dockerfile from GitHub:

$ cat Dockerfile

# Gatling is a highly capable load testing tool.
#
# Documentation: http://gatling.io/docs/2.2.2/
# Cheat sheet: http://gatling.io/#/cheat-sheet/2.2.2

FROM java:8-jdk-alpine

MAINTAINER Denis Vazhenin

# working directory for gatling
WORKDIR /opt

# gatling version
ENV GATLING_VERSION 2.2.2

# create directory for gatling install
RUN mkdir -p gatling

# install gatling
RUN apk add --update wget && \
  mkdir -p /tmp/downloads && \
  wget -q -O /tmp/downloads/gatling-$GATLING_VERSION.zip \
  https://repo1.maven.org/maven2/io/gatling/highcharts/gatling-charts-highcharts-bundle/$GATLING_VERSION/gatling-charts-highcharts-bundle-$GATLING_VERSION-bundle.zip && \
  mkdir -p /tmp/archive && cd /tmp/archive && \
  unzip /tmp/downloads/gatling-$GATLING_VERSION.zip && \
  mv /tmp/archive/gatling-charts-highcharts-bundle-$GATLING_VERSION/* /opt/gatling/

# change context to gatling directory
WORKDIR  /opt/gatling

# set directories below to be mountable from host
VOLUME ["/opt/gatling/conf", "/opt/gatling/results", "/opt/gatling/user-files"]

# set environment variables
ENV PATH /opt/gatling/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ENV GATLING_HOME /opt/gatling

ENTRYPOINT ["gatling.sh"]

I then added a way to pass JAVA_OPTS via an environment variable. I added this line after the ENV GATLING_HOME line:

ENV JAVA_OPTS ""

I dropped this Dockerfile in my gatling directory, then built a local Docker image off of it:

$ docker build -t gatling:local .

I then invoked 'docker run' to launch a container based on this image, using the CSV and simulation files from above. The current directory is still gatling.

$ docker run --rm -v `pwd`/conf:/opt/gatling/conf -v `pwd`/user-files:/opt/gatling/user-files -v `pwd`/results:/opt/gatling/results -e JAVA_OPTS="-Dusers=10 -Dduration=60" gatling:local -s MySimulationName

Note the -s flag, which denotes a simulation name (this can be any string you want). If you don't specify this flag, the gatling.sh script, which is the ENTRYPOINT of the container, will wait for user input and you will not be able to fully automate your load test.

Another thing to note is the use of JAVA_OPTS. In the example above, I pass -Dusers=10 and -Dduration=60 as the two JAVA_OPTS parameters. The JAVA_OPTS variable itself is passed to 'docker run' via the -e option, which tells Docker to replace the default value of ENV JAVA_OPTS (which is "") with the value passed with -e.

Running Gatling as a Docker container from Jenkins

Once you have a working Gatling container locally, you can upload the Docker image built above to a private Docker registry. I used a private repository in EC2 Container Registry (ECR).
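Tagging and pushing the image looks something like this (a sketch; the registry ID, region and repository name are placeholders for your own ECR setup):

$ $(aws ecr get-login --region us-west-2)
$ docker tag gatling:local MY_PRIVATE_ECR_ID.dkr.ecr.us-west-2.amazonaws.com/gatling:latest
$ docker push MY_PRIVATE_ECR_ID.dkr.ecr.us-west-2.amazonaws.com/gatling:latest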

I also added the gatling directory and its sub-directories to a GitHub repository called devops.

In Jenkins, I created a new "Freestyle project" job with the following properties:

  • Parameterized build with 2 string parameters: USERS (default value 10) and DURATION in seconds (default value 60)
  • Git repository - add URL and credentials for the devops repository which contains the gatling files
  • An "Execute shell" build command similar to this one:
docker run --rm -v ${WORKSPACE}/gatling/conf:/opt/gatling/conf -v ${WORKSPACE}/gatling/user-files:/opt/gatling/user-files -v ${WORKSPACE}/gatling/results:/opt/gatling/results -e JAVA_OPTS="-Dusers=$USERS -Dduration=$DURATION"  /PATH/TO/DOCKER/REGISTRY/gatling -s MyLoadTest 


Note that we mount the gatling directories as Docker volumes, similarly to when we ran the Docker container locally, only this time we specify ${WORKSPACE} as the base directory. The 2 string parameters USERS and DURATION are passed as variables in JAVA_OPTS.

A nice thing about running Gatling via Jenkins is that the reports are available in the Workspace directory of the project. If you go to the Gatling project we created in Jenkins, click on Workspace, then on gatling, then results, you should see directories named gatlingsimulation-TIMESTAMP for each Gatling run. Each of these directories should have an index.html file, which will show you the Gatling report dashboard. Pretty neat.
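For example, after a couple of runs the results directory in the Workspace looks roughly like this (a sketch; the js and style directories hold the report assets):

gatling/results/
    gatlingsimulation-TIMESTAMP1/
        index.html
        simulation.log
        js/
        style/
    gatlingsimulation-TIMESTAMP2/
        ...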


Thursday, June 16, 2016

Running Jenkins jobs in Docker containers

One of my main tasks at work is to configure Jenkins to act as a hub for all the deployment and automated testing jobs we run. We use CloudBees Jenkins Enterprise, mostly for its Role-Based Access Control plugin, which allows us to create one Jenkins folder per project/application and establish fine grained access control to that folder for groups of users. We also make heavy use of the Jenkins Enterprise Pipeline features (which I think are also available these days in the open source version).

Our Jenkins infrastructure is composed of a master node and several executor nodes which can run jobs in parallel if needed.

One pattern that my colleague Will Wright and I have decided upon is to run all Jenkins jobs as Docker containers. This way, we only need to install Docker Engine on the master node and the executor nodes. No need to install any project-specific pre-requisites or dependencies on every Jenkins node. All of these dependencies and pre-reqs are instead packaged in the Docker containers. It's a simple but powerful idea that has worked very well for us. One of the nice things about this pattern is that you can keep adding various types of automated tests. If it can run from the command line, then it can run in a Docker container, which means you can run it from Jenkins!

I have seen this pattern discussed in multiple places recently, for example in this blog post about "Using Docker for a more flexible Jenkins".

Here are some examples of Jenkins jobs that we create for a given project/application:
  • a deployment job that runs Capistrano in its own Docker container, against targets in various environments (development, staging, production); this is a Pipeline script written in Groovy, which can call other jobs below
  • a Web UI testing job that runs the Selenium Python WebDriver and drives Firefox in headless mode (see my previous post on how to do this with Docker)
  • a JavaScript syntax checking job that runs JSHint against the application's JS files
  • an SSL scanner/checker that runs SSLyze against the application endpoints

We also run other types of tasks, such as running an AWS CLI command to perform certain actions, for example invalidating a CloudFront resource. I am going to show here how we create a Docker image for one of these jobs, how we test it locally, and how we then integrate it into Jenkins.

I'll use as an example a simple Docker image that installs the AWS CLI package and runs a command when the container is invoked via 'docker run'.

I assume you have a local version of Docker installed. If you are on a Mac, you can use Docker Toolbox, or, if you are lucky and got access to it, you can use the native Docker for Mac. In any case, I will assume that you have a local directory called awscli with the following Dockerfile in it:

FROM ubuntu:14.04

MAINTAINER You Yourself <you@example.com>

# disable interactive functions
ARG DEBIAN_FRONTEND=noninteractive
ENV AWS_ACCESS_KEY_ID=""
ENV AWS_SECRET_ACCESS_KEY=""
ENV AWS_COMMAND=""

RUN apt-get update && \
    apt-get install -y python-pip && \
    pip install awscli

WORKDIR /root

# run the AWS CLI command passed in via the AWS_COMMAND environment variable
CMD (export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID; export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY; $AWS_COMMAND)

As I mentioned, this simply installs the awscli Python package via pip, then runs a command given as an environment variable when you invoke 'docker run'. It also uses two other environment variables that contain the AWS access key ID and secret access key. You don't want to hardcode these secrets in the Dockerfile and have them end up on GitHub.

The next step is to build an image based on this Dockerfile. I'll call the image awscli and I'll tag it as local:

$ docker build -t awscli:local .

Then you can run a container based on this image. The command line looks a bit complicated because I am passing (via the -e switch) the 3 environment variables discussed above:


$ docker run --rm -e AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY -e AWS_COMMAND='aws cloudfront create-invalidation --distribution-id=abcdef --invalidation-batch Paths={Quantity=1,Items=[/media/*]},CallerReference=my-invalidation-123456' awscli:local

(where distribution-id needs to be the actual ID of your CloudFront distribution, and CallerReference needs to be unique per invalidation)

If all goes well, you should see the output of the 'aws cloudfront create-invalidation' command.

In our infrastructure, we have a special GitHub repository where we check in the various folders containing the Dockerfiles and any static files that need to be copied over to the Docker images. When we push the awscli directory to GitHub, for example, a Jenkins job is notified of the commit; it builds the Docker image (similarly to how we did it locally with 'docker build'), then pushes the image to a private AWS ECR repository we have.
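Conceptually, the build step of that image-building job does something like this (a sketch; the registry ID and region are placeholders, and your checkout layout may differ):

cd awscli
docker build -t awscli .
$(aws ecr get-login --region us-west-2)
docker tag awscli MY_PRIVATE_ECR_ID.dkr.ecr.us-west-2.amazonaws.com/awscli:latest
docker push MY_PRIVATE_ECR_ID.dkr.ecr.us-west-2.amazonaws.com/awscli:latest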

Now let's assume we want to create a Jenkins job that will run this image as a container. First we define 2 secret credentials, specific to the Jenkins folder where we want to create the job (there are also global Jenkins credentials that can apply to all folders). These credentials are of type "Secret text" and contain the AWS access key ID and the AWS secret access key.

Then we create a new Jenkins job of type "Freestyle project" and call it cloudfront.invalidate. The build for this job is parameterized, with 2 parameters: CF_ENVIRONMENT, a drop-down with the values "Staging" and "Production", which selects the CloudFront distribution we want to invalidate; and CF_RESOURCE, a text variable set to the resource that needs to be invalidated (e.g. /media/*).

In the Build Environment section of the Jenkins job, we check "Use secret text(s) or file(s)" and add 2 bindings: one for the secret text credential containing the AWS access key ID, which we save in a variable called AWS_ACCESS_KEY_ID, and one for the credential containing the AWS secret access key, which we save in a variable called AWS_SECRET_ACCESS_KEY.

The Build section for this Jenkins job has a step of type "Execute shell" which uses the parameters and variables defined above and invokes 'docker run' using the path to the Docker image from our private ECR repository:

# default to the Staging distribution, switch to Production if requested
DISTRIBUTION_ID=MY_CF_DISTRIBUTION_ID_FOR_STAGING
if [ "$CF_ENVIRONMENT" = "Production" ]; then
    DISTRIBUTION_ID=MY_CF_DISTRIBUTION_ID_FOR_PRODUCTION
fi

# CallerReference must be unique per invalidation, so use a timestamp
INVALIDATION_ID=jenkins-invalidation-`date +%Y%m%d%H%M%S`

COMMAND="aws cloudfront create-invalidation --distribution-id=$DISTRIBUTION_ID --invalidation-batch Paths={Quantity=1,Items=[$CF_RESOURCE]},CallerReference=$INVALIDATION_ID"

# run the awscli container from our private ECR repository, passing the credentials
# from the secret text bindings and the command to execute
docker run --rm -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY -e AWS_COMMAND="$COMMAND" MY_PRIVATE_ECR_ID.dkr.ecr.us-west-2.amazonaws.com/awscli


When this job is run, the Docker image gets pulled down from AWS ECR, then a container based on the image is run and then removed upon completion (that's what --rm does, so that no old containers are left around).

I'll write another post soon with some more examples of Jenkins jobs that we run as Docker containers to do Selenium testing, JSHint checking and SSLyze scanning.





