Thursday, October 11, 2018

little-elements webapp shell

I recently updated apps.frickjack.com to use unbundled javascript modules to load custom elements and style rules. It worked great, except I soon noticed that the initial page load would present an unstyled, no-custom-element rendering of the page until the javascript modules finished loading asynchronously... ugh.

I worked around the problem by implementing a simple application shell. The html pages under https://apps.frickjack.com now extend a compile-time nunjucks template, basicShell.html.njk (npm install @littleware/little-elements). The shell hides the page content with a style rule, loads the webcomponentsjs polyfill if the browser needs it, and renders a "Loading ..." message if the main.js javascript module doesn't load fast enough. I also extended the main javascript module on each page to signal the shell when it has finished rendering the initial page content. For example - indexMain.js for https://apps.frickjack.com/index.html looks like this:

import './headerSimple/headerSimple.js';

if ( window['littleShell'] ) {
    window['littleShell'].clear();
}
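
For reference, the shell's side of that handshake boils down to something like the sketch below. This is my own minimal reconstruction, not the actual basicShell.html.njk code - it assumes an inline script in the shell, a style rule that initially hides the page content, and a hidden .little-shell-loading element for the "Loading ..." message:

// minimal sketch of the shell's inline script - illustrative only
window['littleShell'] = {
    clear: function () {
        // called by the page's main module once it has rendered its content
        document.body.classList.add('little-shell-ready');
        var splash = document.querySelector('.little-shell-loading');
        if (splash) { splash.remove(); }
    }
};

// if the main module hasn't signaled after a short delay, show "Loading ..."
setTimeout(function () {
    if (!document.body.classList.contains('little-shell-ready')) {
        var splash = document.querySelector('.little-shell-loading');
        if (splash) { splash.style.display = 'block'; }
    }
}, 1000);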

The index.html nunjucks template looks like this:

{% extends "little-elements/lib/styleGuide/shell/basicShell.html.njk" %}

{% block head %}
<title>frickjack.com</title>
{% endblock %}

{% block content %}
    <lw-header-simple title="Home">
    </lw-header-simple>
    ...
    <script src="{{ jsroot }}/@littleware/little-apps/lib/indexMain.js" type="module"></script>
{% endblock %}

A visitor to https://apps.frickjack.com may now briefly see a "Loading" screen on the first page load, but subsequent pages generally load fast enough from cache that the shell never shows it. I can improve on this by extending the shell with a service worker - that will be my next project.

I envy Gmail's loading screen, which renders a crazy animation of a googley envelope. Fancy!

Friday, August 03, 2018

jasminejs tests with es2015 modules

Running jasminejs tests of javascript modules in a browser is straightforward, but requires small adjustments, since a module loads asynchronously while jasmine's default runtime assumes code loads synchronously. The approach I take is the same as the one used to test requirejs AMD modules:

  • customize jasmine boot, so that it does not automatically launch test execution on page load (sketched just below)
  • write a testMain.js script that imports all the specs for your test suite, then launches jasmine's test execution
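
The boot customization is small. In jasmine's stock boot.js, the window.onload handler both initializes the HTML reporter and kicks off env.execute(), so the tweak is to keep the setup but drop the automatic execute call. Here's a rough sketch from memory (the actual little-elements boot file may differ):

// sketch of the customized window.onload in a copy of jasmine's boot.js -
// assumes the stock boot structure, where htmlReporter and env are defined
// earlier in the file
var currentWindowOnload = window.onload;
window.onload = function() {
    if (currentWindowOnload) {
        currentWindowOnload();
    }
    htmlReporter.initialize();
    // no env.execute() here - testMain.js launches the suite after the
    // spec modules finish importing
};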

For example, this is the root test suite (testMain.ts) for the @littleware/little-elements module:

import './test/spec/utilSpec.js';
import './arrivalPie/spec/arrivalPieSpec.js';
import './styleGuide/spec/styleGuideSpec.js';
import {startTest} from './test/util.js';

startTest();

The startTest() function is a little wrapper that detects whether karmajs is the test runtime, since the test bootstrap process is a little different in that scenario. The karmajs config file should also annotate javascript module files with a module type. For example, here is an excerpt from little-elements' karma.conf.js:

    files: [
      'lib/test/karmaAdapter.js',
      { pattern: 'lib/arrivalPie/**/*.js', type: 'module', included: false },
      { pattern: 'lib/styleGuide/**/*.js', type: 'module', included: false },
      { pattern: 'lib/test/**/*.js', type: 'module', included: false },
      { pattern: 'lib/testMain.js', type: 'module', included: true },
      { pattern: 'node_modules/lit-html/*.js', type: 'module', included: false }
    ],
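
For context, here's roughly what that wrapper can look like. This is a sketch under my own assumptions (a boot file that doesn't auto-execute, plus karma's window.__karma__ global), not the actual little-elements test/util.js:

// sketch of a startTest() helper - illustrative, not the actual little-elements code
export function startTest() {
    const karma = window['__karma__'];
    if (karma) {
        // tell karma to start the run now that the spec modules have loaded
        karma.start();
    } else {
        // stand-alone jasmine page - launch the run ourselves
        jasmine.getEnv().execute();
    }
}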

Feel free to import the @littleware/little-elements module into your project if it will be helpful!

Tuesday, July 31, 2018

little-elements

Last week I rolled out an update to apps.frickjack.com that changes the plumbing of the site.

  • The javascript is now organized as unbundled es2015 modules
  • The site also leverages javascript modules to manage html templates and CSS with lit-html (see the sketch below)
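
A typical module in that setup looks something like this sketch. The names and import path are illustrative (not copied from the little-elements source); it just shows the lit-html html/render pattern for sharing a template and its styles from a javascript module:

// illustrative lit-html module - exports a shared style block and a render helper
import { html, render } from '../../node_modules/lit-html/lit-html.js';

export const styles = html`
    <style>
        .lw-header { display: flex; align-items: baseline; }
    </style>
`;

export function renderHeader(container, title) {
    render(html`${styles}<header class="lw-header"><h1>${title}</h1></header>`, container);
}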

I also set up my first module on npm (here) to track UX components that could be shared between projects - which was fun :-)

wait for kubernetes pods

First, there's probably a better way to do this with kubectl rollout, but I didn't know that existed till last week.

Anyway ... the kubernetes API is asynchronous, so an operator that issues a series of kubernetes commands to update various deployments, and wants to wait until all the pods in those deployments are up and running, can take advantage of a script like kube-wait4-pods:

(
    # If new pods are still rolling/starting up, then wait for that to finish
    COUNT=0
    OK_COUNT=0
    # Don't exit till we get 2 consecutive readings with all pods running.
    while [[ "$OK_COUNT" -lt 2 ]]; do
      g3kubectl get pods
      if [[ 0 == "$(g3kubectl get pods -o json |  jq -r '[.items[] | { name: .metadata.generateName, phase: .status.phase, waitingContainers: [ try .status.containerStatuses[] | { waiting:.state|has("waiting"), ready:.ready}|(.waiting==true or .ready==false)|select(.) ]|length }] | map(select(.phase=="Pending" or .phase=="Running" and .waitingContainers > 0)) | length')" ]]; then
        let OK_COUNT+=1
      else
        OK_COUNT=0
      fi
      
      if [[ "$OK_COUNT" -lt 2 ]]; then
        echo ------------
        echo "INFO: Waiting for pods to exit Pending state"
        let COUNT+=1
        if [[ "$COUNT" -gt 30 ]]; then
          echo -e "$(red_color "ERROR:") pods still not ready after 300 seconds"
          exit 1
        fi
        sleep 10
      fi
    done
)

For example - this Jenkins pipeline deploys a new stack of services to a QA environment, then waits till the new versions of pods are deployed before running through an integration test suite.

...
    stage('K8sDeploy') {
      steps {
        withEnv(['GEN3_NOPROXY=true', "vpc_name=$env.KUBECTL_NAMESPACE", "GEN3_HOME=$env.WORKSPACE/cloud-automation"]) {
          echo "GEN3_HOME is $env.GEN3_HOME"
          echo "GIT_BRANCH is $env.GIT_BRANCH"
          echo "GIT_COMMIT is $env.GIT_COMMIT"
          echo "KUBECTL_NAMESPACE is $env.KUBECTL_NAMESPACE"
          echo "WORKSPACE is $env.WORKSPACE"
          sh "bash cloud-automation/gen3/bin/kube-roll-all.sh"
          sh "bash cloud-automation/gen3/bin/kube-wait4-pods.sh || true"
        }
      }
    }
...

BTW - the code referenced above is a product of the great team of developers at the Center for Data Intensive Science.

Saturday, April 07, 2018

nginx reverse-proxy tricks for jwt and csrf

I've been doing devops work over the last few months at CDIS. One of the tasks I worked on recently was to extend the nginx configuration (on github as a kubernetes configmap here) for the gen3 data commons stack.

First, we added end-user data to the proxy log by extracting user information from the JWT token in the auth cookie. We took advantage of nginScript (a subset of javascript) to import some standard javascript code (also in the kubernetes configmap) that parses the JWT.
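
The parsing itself is just standard JWT decoding. Here's a plain-javascript sketch of the idea - the claim name and the use of atob are my own illustration, and the actual nginScript helper in the configmap differs (nginScript needs its own base64 handling):

// illustrative JWT-payload decode for logging - not the gen3 configmap code
function userFromJwt(token) {
    var parts = token.split('.');
    if (parts.length !== 3) { return ''; }
    // the payload is the middle segment, base64url encoded
    var b64 = parts[1].replace(/-/g, '+').replace(/_/g, '/');
    while (b64.length % 4) { b64 += '='; }
    var claims = JSON.parse(atob(b64));
    // "sub" is the standard subject claim; the real config may log other claims
    return claims.sub || '';
}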

Next, we added a CSRF guard that verifies that, if a CSRF cookie is present, it matches a CSRF header. The logic is a little clunky (in github here) because of the restrictions on conditionals in nginx configuration, but it basically looks like the sample configuration below - where $access_token is the JWT auth token (taken from a header for non-web clients, or from a cookie for web-browser clients), and an "ok-SOMETHING" $csrf_check is only required for browser clients. We do not enforce the HTTP-header based CSRF check in the proxy for endpoints that may be accessed by traditional HTML form submissions (which embed the token in the form body), or for endpoints accessed by third-party browser frontends (like jupyterhub) that may implement a CSRF guard in a different way. Finally, there's a little bit of logic that auto-generates a CSRF cookie if one is not present - a javascript client in the same domain can read the cookie to get the value to put in the X-CSRF-Token header.

          set $access_token "";
          set $csrf_check "ok-tokenauth";
          if ($cookie_access_token) {
              set $access_token "bearer $cookie_access_token";
              # cookie auth requires csrf check
              set $csrf_check "fail";
          }
          if ($http_authorization) {
              # Authorization header is present - prefer that token over cookie token
              set $access_token "$http_authorization";
          }

          #
          # CSRF check
          # This block requires a csrftoken for all POST requests.
          #
          if ($cookie_csrftoken = $http_x_csrf_token) {
            # this will fail further below if cookie_csrftoken is empty
            set $csrf_check "ok-$cookie_csrftoken";
          }
          if ($request_method != "POST") {
            set $csrf_check "ok-$request_method";
          }
          if ($cookie_access_token = "") {
            # do this again here b/c empty cookie_csrftoken == empty http_x_csrf_token - ugh  
            set $csrf_check "ok-tokenauth";
          }
          ...
          location /index/ {
              if ($csrf_check !~ ^ok-\S.+$) {
                return 403 "failed csrf check";
              }
              if ($cookie_csrftoken = "") {
                add_header Set-Cookie "csrftoken=$request_id$request_length$request_time$time_iso8601;Path=/";
              }

              proxy_pass http://indexd-service/;
          }
          ...

Thursday, November 23, 2017

Jenkins Backup Pipeline

I recently set up a Jenkins CICD service to complement the Travis-based automation already in use where I work. I had worked with job-based Jenkins workflows in the past - where we set up chains of interdependent jobs (ex: build, publish assets, deploy to dev environment, ...) - but I took this opportunity to adopt Jenkins' new (to me) Pipeline pattern, and I'm glad I did.

A Jenkins pipeline defines (via a groovy DSL) a sequence of steps that execute together in a build under a single Jenkins job. Here's an example pipeline we use to back up our Jenkins configuration to S3 every night.

#!groovy

pipeline {
  agent any

  stages {
    stage('BuildArchive'){
      steps {
        echo "BuildArchive $env.JENKINS_HOME"
        sh "tar cvJf backup.tar.xz --exclude '$env.JENKINS_HOME/jobs/[^/]*/builds/*' --exclude '$env.JENKINS_HOME/jobs/[^/]*/last*' --exclude '$env.JENKINS_HOME/workspace' --exclude '$env.JENKINS_HOME/war' --exclude '$env.JENKINS_HOME/jobs/[^/]*/workspace/'  $env.JENKINS_HOME"
      }
    }
    stage('UploadToS3') {
      steps {
        echo 'Upload to S3!'
        sh 'aws s3 cp --sse AES256 backup.tar.xz s3://cdis-terraform-state/JenkinsBackup/backup.$(date +%u).tar.xz'
      }
    }
    stage('Cleanup') {
      steps {
        echo 'Cleanup!'
        sh 'rm -f backup.tar.xz'
      }
    }
  }
  post {
    success {
      slackSend color: 'good', message: 'Jenkins backup pipeline succeeded'
    }
    failure {
      // 'danger' and 'warning' are the color names Slack recognizes
      slackSend color: 'danger', message: 'Jenkins backup pipeline failed'
    }
    unstable {
      slackSend color: 'warning', message: 'Jenkins backup pipeline unstable'
    }
  }
}

If you are a Jenkins user, then take the time to give Pipelines a try. If you're also using github or bitbucket, then look into Jenkins' support for organizations, which nicely supports pull-request based workflows. Also try the new Blue Ocean UI - it's designed with pipelines in mind.

Saturday, August 19, 2017

stuff an app into a docker image

I had some fun over the last week setting up a docker image for an S3-copy utility, and integrating it into the gulp build for https://apps.frickjack.com. I pushed the image for the s3cp app up to hub.docker.com. I use s3cp to sync a local build of https://apps.frickjack.com up to an S3 bucket. I could probably coerce the AWS CLI (aws s3 sync) into doing the same job, but I wrote this app a while ago, and it includes functionality to automatically gzip-compress each asset before upload and to set various HTTP headers on the asset (content-encoding, cache-control, content-type, and etag). For example - inspect the headers (using a browser's developer tools) on an arbitrary apps.frickjack.com asset like https://apps.frickjack.com/resources/css/littleware/styleGuide/guide.css.


A few s3cp details - it's a simple scala application with two command line modes (-config aws-key aws-secret, -copy source destination). The '-config' command saves the AWS credential under ~/.littleware/aws/. The code could use a little love - it is hard-coded to use the AWS Virginia region, it could use a '-force' option with '-copy', and its error messages are annoying - but it works for what I need. The source is on github (git clone ...; cd littleware/webapp/littleApps/s3Copy; gradle build; gradle copyToLib; cli/s3cp.sh --help).


I pushed a binary build of s3cp to dockerhub. I run it like this. First, I register my AWS credentials with the app. s3cp only needs to be configured once, but we want to mount a docker volume to persist the configuration (I still need to wire up a more secure way to save and pass the AWS secrets):


docker volume create littleware

docker run -it -v littleware:/root/.littleware \
    -v /home/reuben:/mnt/reuben \
    --name s3cp --rm frickjack/s3cp:1.0.0 \
    -config aws-key aws-secret

After the configuration is done I use s3cp with '-copy' commands like this:


docker run -it -v littleware:/root/.littleware \
    -v /home/reuben:/mnt/reuben \
    --name s3cp --rm frickjack/s3cp:1.0.0 \
    -copy /mnt/reuben/Code/littleware-html5Client/build/ s3://apps.frickjack.com/

I added a gulp task to the apps.frickjack.com code to simplify the S3 deploy - from gulpfile.js:


// exec comes from node's child_process module
const exec = require('child_process').exec;

gulp.task( 'deploy', [ 'compileclean' ], function(cb) {
    const pwdPath = process.cwd();
    const imageName = "frickjack/s3cp:1.0.0";
    const commandStr = "yes | docker run --rm --name s3gulp -v littleware:/root/.littleware -v '" +
        pwdPath + ":/mnt/workspace' " + imageName + " -copy /mnt/workspace/build/ s3://apps.frickjack.com/";

    console.log( "Running: " + commandStr );

    exec( commandStr,
        function (err, stdout, stderr) {
            console.log(stdout);
            console.log(stderr);
            if ( err ) {
                // signal gulp that the task failed
                cb( err );
            } else {
                cb();
            }
        }
    );
});

Sunday, August 13, 2017

versioning js and css with gulp-rev and gulp-revReplace

I've discussed before how I run https://apps.frickjack.com on S3, but one issue I did not address was how to version updates, so that each visitor loads a consistent set of assets. Here's the problem.

  • on Monday I publish v1 assets to the S3 bucket behind apps.frickjack.com
  • later on Monday Fred visits apps.frickjack.com, and his browser caches several v1 javascript and CSS files - a.js, b.js, x.css, y.css, ...
  • on Tuesday I publish v2 assets to S3 - just changing a few files
  • on Wednesday Fred visits apps.frickjack.com, but for whatever reason his browser fetches v2 of b.js while loading v1 of the other assets from cache

On Wednesday Fred loads an inconsistent set of assets that might not work together. There are several approaches people take to avoid this problem. Surma gave a good overview of HTTP cache headers and how to think about them in this talk on YouTube, and Jake Archibald goes into more detail in this blog post (we set cache headers directly on our apps.frickjack.com S3 objects).

Long story short - I finally wired up my gulpfile with gulp-rev and gulp-rev-replace to add a hash to the javascript and css file names. Each visitor to apps.frickjack.com is now guaranteed to load a consistent set of assets, because an asset's name changes when its content changes. I was really happy to find gulp-rev - it just takes care of things for me. The only gotcha is that gulp-rev-replace does not like to work with relative asset paths (ex - <script src="511.js">), so I had to update a few files to use absolute paths (src="/511/511.js") - otherwise things worked great.
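
The wiring is just a couple of gulp tasks. Here's a minimal sketch of the pattern, assuming gulp 3-style task dependencies and the gulp-rev / gulp-rev-replace plugins - the task names and build/dist paths are illustrative, not copied from my actual gulpfile:

const gulp = require('gulp');
const rev = require('gulp-rev');
const revReplace = require('gulp-rev-replace');

// rename each js/css asset with a content hash, and write a manifest of original -> hashed names
gulp.task('revision', function() {
    return gulp.src(['build/**/*.js', 'build/**/*.css'], { base: 'build' })
        .pipe(rev())
        .pipe(gulp.dest('dist'))
        .pipe(rev.manifest())
        .pipe(gulp.dest('dist'));
});

// rewrite asset references in the html to point at the hashed file names
gulp.task('revreplace', ['revision'], function() {
    const manifest = gulp.src('dist/rev-manifest.json');
    return gulp.src('build/**/*.html')
        .pipe(revReplace({ manifest: manifest }))
        .pipe(gulp.dest('dist'));
});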