Thursday, November 23, 2017

Jenkins Backup Pipeline

I recently set up a Jenkins CI/CD service to complement the Travis-based automation already in use where I work. I've worked with job-based Jenkins workflows in the past - where we set up chains of interdependent jobs (ex: build, publish assets, deploy to dev environment, ...) - but I took this opportunity to adopt Jenkins' new (to me) Pipeline pattern, and I'm glad I did.

A Jenkins pipeline defines (via a Groovy DSL) a sequence of steps that execute together in a build under a single Jenkins job. Here's an example pipeline we use to back up our Jenkins configuration to S3 every night.

#!groovy

pipeline {
  agent any

  stages {
    stage('BuildArchive'){
      steps {
        echo "BuildArchive $env.JENKINS_HOME"
        sh "tar cvJf backup.tar.xz --exclude '$env.JENKINS_HOME/jobs/[^/]*/builds/*' --exclude '$env.JENKINS_HOME/jobs/[^/]*/last*' --exclude '$env.JENKINS_HOME/workspace' --exclude '$env.JENKINS_HOME/war' --exclude '$env.JENKINS_HOME/jobs/[^/]*/workspace/'  $env.JENKINS_HOME"
      }
    }
    stage('UploadToS3') {
      steps {
        echo 'Upload to S3!'
        sh 'aws s3 cp --sse AES256 backup.tar.xz s3://cdis-terraform-state/JenkinsBackup/backup.$(date +%u).tar.xz'
      }
    }
    stage('Cleanup') {
      steps {
        echo 'Cleanup!'
        sh 'rm -f backup.tar.xz'
      }
    }
  }
  post {
    success {
      slackSend color: 'good', message: 'Jenkins backup pipeline succeeded'
    }
    failure {
      slackSend color: 'danger', message: 'Jenkins backup pipeline failed'
    }
    unstable {
      slackSend color: 'danger', message: 'Jenkins backup pipeline unstable'
    }
  }
}
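
The 'date +%u' suffix in the upload key is the day of the week (1-7), so the bucket keeps seven rolling backups. Restoring is just the reverse trip - a minimal sketch, assuming the AWS CLI is configured on the target host and you want, say, Wednesday's backup (day 3):

# download the chosen day's archive and unpack it into a scratch directory
aws s3 cp s3://cdis-terraform-state/JenkinsBackup/backup.3.tar.xz .
mkdir -p jenkins-restore
tar xvJf backup.3.tar.xz -C jenkins-restore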

If you are a Jenkins user, take the time to give Pipelines a try. If you're also using GitHub or Bitbucket, look into Jenkins' support for organizations - it plays nicely with pull-request based workflows. Also try the new Blue Ocean UI - it's designed with pipelines in mind.

Saturday, August 19, 2017

stuff an app into a docker image

I had some fun over the last week setting up a Docker image for an S3-copy utility, and integrating it into the gulp build for https://apps.frickjack.com. I pushed the image for the s3cp app up to hub.docker.com. I use s3cp to sync a local build of https://apps.frickjack.com up to an S3 bucket. I could probably coerce the AWS CLI (aws s3 sync) into doing the same job, but I wrote this app a while ago, and it includes functionality to automatically gzip-compress each asset before upload, and to set various HTTP headers on the asset (content-encoding, cache-control, content-type, and etag). For example - inspect the headers (using a browser's developer tools) on an arbitrary apps.frickjack.com asset like https://apps.frickjack.com/resources/css/littleware/styleGuide/guide.css.
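
The same headers are easy to check from a shell - a quick sketch with curl (sending Accept-Encoding the way a browser would):

# HEAD request - look for cache-control, content-encoding, content-type, and etag in the response
curl -sI -H 'Accept-Encoding: gzip' https://apps.frickjack.com/resources/css/littleware/styleGuide/guide.css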


A few s3cp details - it's a simple Scala application with two command-line flags (-config aws-key aws-secret, -copy source destination). The '-config' command saves the AWS credentials under ~/.littleware/aws/. The code could use a little love - it is hard-coded to use the AWS Virginia region, it could use a '-force' option with '-copy', and its error messages are annoying - but it works for what I need. The source is on GitHub (git clone ...; cd littleware/webapp/littleApps/s3Copy; gradle build; gradle copyToLib; cli/s3cp.sh --help).


I pushed a binary build of s3cp to Docker Hub, and I run it like this. First, I register my AWS credentials with the app. We only need to configure s3cp once, but we'll want to mount a docker volume to persist the configuration (I need to wire up a more secure way to save and pass the AWS secrets):


docker volume create littleware

docker run -it -v littleware:/root/.littleware \
    -v /home/reuben:/mnt/reuben \
    --name s3cp --rm frickjack/s3cp:1.0.0 \
    -config aws-key aws-secret

After the configuration is done I use s3cp with '-copy' commands like this:


docker run -it -v littleware:/root/.littleware \
    -v /home/reuben:/mnt/reuben \
    --name s3cp --rm frickjack/s3cp:1.0.0 \
    -copy /mnt/reuben/Code/littleware-html5Client/build/ s3://apps.frickjack.com/

I added a gulp task to the apps.frickjack.com code to simplify the S3 deploy - from gulpfile.js:


const gulp = require('gulp');
const exec = require('child_process').exec;

gulp.task( 'deploy', [ 'compileclean' ], function(cb) {
    const pwdPath = process.cwd();
    const imageName = "frickjack/s3cp:1.0.0";
    // pipe 'yes' into the container to auto-answer any confirmation prompt from s3cp
    const commandStr = "yes | docker run --rm --name s3gulp -v littleware:/root/.littleware -v '" +
        pwdPath + ":/mnt/workspace' " + imageName + " -copy /mnt/workspace/build/ s3://apps.frickjack.com/";

    console.log( "Running: " + commandStr );

    exec( commandStr,
        function (err, stdout, stderr) {
            console.log(stdout);
            console.log(stderr);
            // always invoke the callback - passing the error fails the task instead of leaving it hanging
            cb(err);
        }
    );
});

Sunday, August 13, 2017

versioning js and css with gulp-rev and gulp-rev-replace

I've discussed before how I run https://apps.frickjack.com on S3, but one issue I did not address was how to version updates, so that each visitor loads a consistent set of assets. Here's the problem.

  • on Monday I publish v1 assets to the S3 bucket behind apps.frickjack.com
  • later on Monday Fred visits apps.frickjack.com, and his browser caches several v1 javascript and CSS files - a.js, b.js, x.css, y.css, ...
  • on Tuesday I publish v2 assets to S3 - just changing a few files
  • on Wednesday Fred visits apps.frickjack.com, but for whatever reason his browser updates its cached b.js to v2 while loading v1 of the other assets from cache

On Wednesday Fred loads an inconsistent set of assets that might not work together. There are several approaches people take to avoid this problem. Surma gave a good overview of HTTP cache headers and how to think about them in this talk on YouTube, and Jake Archibald goes into more detail in this blog post (we set cache headers directly on our apps.frickjack.com S3 objects).

Long story short - I finally wired up my gulpfile with gulp-rev and gulp-rev-replace to add a hash to the JavaScript and CSS file names. Each visitor to apps.frickjack.com is now guaranteed to load a consistent set of assets, because an asset's name changes when its content changes. I was really happy to find gulp-rev - it just takes care of things for me. The only gotcha is that gulp-rev-replace does not like to work with relative asset paths (ex - <script src="511.js">), so I had to update a few files to use absolute paths (src="/511/511.js") - otherwise things worked great.
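
Those cache headers pair naturally with hashed names: hashed assets can be cached for a long time, while the HTML that references them gets a short cache window. Setting the headers is just S3 object metadata - a sketch with the AWS CLI (the file names and max-age values here are only illustrative; my build does this through s3cp):

# a content-hashed asset is effectively immutable - cache it for a year
aws s3 cp build/511/511-8f3a2b.js s3://apps.frickjack.com/511/511-8f3a2b.js \
    --content-type 'application/javascript' --cache-control 'public, max-age=31536000'

# the html that references hashed assets changes between deploys - keep its cache window short
aws s3 cp build/511/index.html s3://apps.frickjack.com/511/index.html \
    --content-type 'text/html' --cache-control 'public, max-age=300'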

Monday, August 07, 2017

use docker's overlay2 storage driver on Ubuntu 16.04+

Docker defaults to its aufs storage driver on Ubuntu 16.04, but the system has a 4.4 kernel (overlay2 requires 4.0 or newer), so it's probably a good idea to switch over to the overlay2 driver.


$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.2 LTS
Release: 16.04
Codename: xenial

$ uname -r
4.4.0-83-generic

$ sudo cat /etc/docker/daemon.json
{
  "storage-driver":"overlay2",
  "log-driver": "journald"
}
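
After writing /etc/docker/daemon.json, restart the daemon and check that the new driver took effect - roughly like this (note that switching storage drivers hides any images and containers created under the old driver, so expect to re-pull or rebuild):

$ sudo systemctl restart docker
$ docker info | grep 'Storage Driver'
Storage Driver: overlay2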


Monday, June 05, 2017

S3+CloudFront+ACM+Route53 = serverless http2 https site with CDN

The other night I logged into the AWS web console and upgraded my little S3-hosted site at https://apps.frickjack.com to HTTP/2 with TLS - which opens the door for adding a service worker to the site. As a side benefit, the site is now behind the CloudFront CDN. I also moved the domain's authoritative DNS to Route 53 earlier in the week - just to consolidate that functionality under AWS.

It's ridiculous how easy it was to do this upgrade - I should have done it a while ago:

  • create the CloudFront distribution
  • configure the distribution with a certificate set up in AWS Certificate Manager
  • update DNS for the domain 'apps.frickjack.com' to reference the CDN hostname - Route53 supports an 'alias' mechanism that exposes an A record, but you can also just use a CNAME if you have another DNS provider (a scripted version of the Route53 change is sketched below)
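
I did the DNS update in the console, but the same alias record can be scripted with the AWS CLI - a sketch with placeholder IDs (substitute your hosted zone ID and your distribution's domain name; Z2FDTNDATAQYW2 is the fixed hosted zone ID that CloudFront aliases always use):

aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "apps.frickjack.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "d111111abcdef8.cloudfront.net",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'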

Anyway - this is fun stuff to play with, and my AWS bill will still be less than $2 a month.

Monday, May 29, 2017

Content Security Policy and S3 hosted site

When I went to implement a content security policy on http://apps.frickjack.com I was disappointed to realize that the 'Content-Security-Policy' header is not one of the standard headers supported by S3's metadata API. This blog post explains an approach using a Lambda function at the CloudFront edge, but that looks crazy.

Fortunately - it turns out we can add a basic security policy to a page with a meta tag, so I added a <meta http-equiv="Content-Security-Policy" ...> tag to the Nunjucks template that builds apps.frickjack.com's HTML pages at gulp compile time. The full code is on GitHub:

<meta http-equiv="Content-Security-Policy" 
content="
  default-src 'none'; 
  img-src 'self' data: https://www.google-analytics.com; 
  script-src 'self' https://www.google-analytics.com; 
  style-src 'self' https://unpkg.com https://fonts.googleapis.com; 
  object-src 'none'; 
  font-src 'self' https://fonts.googleapis.com https://fonts.gstatic.com
"
>

Saturday, May 27, 2017

Update apps.frickjack.com - new 511 app

I finally updated my little S3-hosted web sandbox at http://apps.frickjack.com. The update has a few parts. First - I took down some old JavaScript apps and CSS code that I had developed several years ago using the YUI framework (now defunct), and tried to clean up and simplify the landing page.

Next, I posted some new code exploring app development and testing with vanilla JS, custom elements, and CSS. The new code includes an update to the 511 app for timing labor contractions with the 511 rule. The app uses a simple custom element to show the distribution of contractions over the last hour on an SVG pie chart. The code is on GitHub.

Finally, I switched the code over to an ISC license. I don't want an LGPL license to discourage people from copying something they find useful.

There's still a lot I'd like to do with apps.littleware - starting with upgrading the site to HTTPS by putting the S3 bucket behind CloudFront, and wiring up a service worker. We'll see how long it takes me to make time for that ...

Thursday, January 26, 2017

Workaround for VPN blocking Docker routes on Windows

Here's the situation. You're stuck in 2017 running Windows 7 with a Cisco VPN client. You're also a Docker evangelist, and you run local developer environments with Docker Toolbox on that Windows 7 laptop. Docker Toolbox runs the Docker daemon on a VirtualBox VM running the boot2docker Linux distribution. One of the cool tricks Docker Toolbox manages for you is setting up a virtual network (a VirtualBox host-only network), so the boot2docker VM has its own IP address (192.168.0.100 or whatever), and you alias that IP address in \Windows\System32\drivers\etc\hosts so that you can connect to https://my.docker.vm/services. Everything is super cool - until you connect to that damn Cisco VPN, because the VPN is configured by some bonehead IT Windows group policy to hijack all routes to private network IP addresses, and somehow they wired it so that you can't "route add" new routes to your Docker VM.

Fortunately - there's an easy workaround to this mess. First, identify a block of public IP addresses that you know you don't need to communicate with (I chose the 55.0.0.0/8 block assigned to the DOD Network Information Center), then reconfigure the VirtualBox host-only network to assign addresses from that block rather than the default private network it was originally configured with (the VirtualBox GUI has a tool under File -> Preferences -> Network; a CLI equivalent is sketched below). I had to reboot to get the boot2docker VM to stick to the new IP address, and screw around with 'docker-machine regenerate-certs', but it eventually worked. Good luck!
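
For reference, the same reconfiguration can be done from the command line - a rough sketch, assuming the adapter has the usual Docker Toolbox name and the machine is called 'default' (adjust names and addresses to taste):

# find the host-only interface Docker Toolbox created
VBoxManage list hostonlyifs

# move the interface and its DHCP server into the unused public block
VBoxManage hostonlyif ipconfig "VirtualBox Host-Only Ethernet Adapter" --ip 55.55.55.1 --netmask 255.255.255.0
VBoxManage dhcpserver modify --ifname "VirtualBox Host-Only Ethernet Adapter" --ip 55.55.55.2 --netmask 255.255.255.0 --lowerip 55.55.55.100 --upperip 55.55.55.200

# restart the machine and refresh its TLS certs so the docker client trusts the new address
docker-machine restart default
docker-machine regenerate-certs default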

Monday, January 23, 2017

Debugging Dockerfile builds

I often find myself in a situation where I'm building an image from some Dockerfile, and the build fails 10 or 15 lines in, and I want to dive in and debug what's going wrong with that failing line. Fortunately - that's easy to do.
Let's suppose you're trying to build an image with a Dockerfile like this:


$ cat Dockerfile
FROM alpine:3.5
RUN echo "Step 2"
RUN echo "Step 3" && exit 1
RUN echo "Step 4"


Of course the build fails on 'exit 1' like this:


$ docker build -t demo:1.0.0 .
Sending build context to Docker daemon  60.6 MB
Step 1 : FROM alpine:3.5
 ---> 88e169ea8f46
Step 2 : RUN echo "Step 2"
 ---> Running in 7ec0de04622c
Step 2
 ---> 281d8cac4e45
Removing intermediate container 7ec0de04622c
Step 3 : RUN echo "Step 3" && exit 1
 ---> Running in a8a16cb6d591
Step 3
The command '/bin/sh -c echo "Step 3" && exit 1' returned a non-zero code: 1

Fortunately, docker build saves an intermediate image after each successful command in the Dockerfile, and outputs the ID of that image (---> 281d8cac4e45), so it's easy to do something like this to debug the failing command:


$ docker run --name debug -v '/home/reuben:/mnt/reuben' -it 281d8cac4e45 /bin/sh
/ #
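
That drops you into the state just before the failing command. If you also want to inspect the state left behind by the failed step itself, the exited container from that step (a8a16cb6d591 above) is still around - a quick sketch, using an arbitrary scratch tag:

$ docker ps -a            # the exited container from the failed RUN is listed here
$ docker commit a8a16cb6d591 debug-failed
$ docker run --rm -it debug-failed /bin/sh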