Thursday, October 11, 2018

little-elements webapp shell

I recently updated apps.frickjack.com to use unbundled javascript modules to load custom elements and style rules. It worked great, except I soon noticed that the initial page load would present an un-styled, no-custom-element rendering of the page until the javascript modules finished loading asynchronously... ugh.

I worked around the problem by implementing a simple application shell. The html pages under https://apps.frickjack.com now extend a compile-time nunjucks template, basicShell.html.njk (npm install @littleware/little-elements). The shell hides the page content with a style rule, maybe loads the webcomponentsjs polyfill, and renders a "Loading ..." message if the main.js javascript module doesn't load fast enough. I also extended the main javascript module on each page to signal the shell when it has finished rendering the initial page content. For example - indexMain.js for https://apps.frickjack.com/index.html looks like this:

import './headerSimple/headerSimple.js';

if ( window['littleShell'] ) {
    window['littleShell'].clear();
}
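
The window['littleShell'] object comes from a small inline script in the shell template. Here's a rough sketch of how a shell like that can work - the element id, class name, and timeout below are made up for illustration, and the real basicShell.html.njk may do things differently:

(function () {
    // a style rule keyed off this class hides the page content
    document.documentElement.classList.add('little-loading');

    // if the main module takes too long, show the "Loading ..." message
    var timer = setTimeout(function () {
        document.getElementById('little-loading-msg').style.display = 'block';
    }, 1000);

    window.littleShell = {
        clear: function () {
            // called by the page's main module once the initial content is rendered
            clearTimeout(timer);
            document.getElementById('little-loading-msg').style.display = 'none';
            document.documentElement.classList.remove('little-loading');
        }
    };
})();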

The index.html nunjucks template looks like this:

{% extends "little-elements/lib/styleGuide/shell/basicShell.html.njk" %}

{% block head %}
<title>frickjack.com</title>
{% endblock %}

{% block content %}
    <lw-header-simple title="Home">
    </lw-header-simple>
    ...
   <script src="{{ jsroot }}/@littleware/little-apps/lib/indexMain.js" type="module"></script>
{% endblock %}

A visitor to https://apps.frickjack.com may now briefly see a "Loading" screen on the first page load, but subsequent pages generally load fast enough from cache that the shell doesn't bother to show it. I can improve on this by extending the shell with a service worker - that will be my next project.

I envy gmail's loading screen that renders a crazy animation of a googley envelope. Fancy!

Friday, August 03, 2018

jasminejs tests with es2015 modules

Running jasminejs tests of javascript modules in a browser is straightforward, but requires small adjustments, since a module loads asynchronously, while jasmine's default runtime assumes code loads synchronously. The approach I take is the same as the one used to test requirejs AMD modules:

  • customize jasmine boot, so that it does not automatically launch test execution on page load
  • write a testMain.js script that imports all the spec's for your test suite, then launches jasmine's test execution

For example, this is the root test suite (testMain.ts) for the @littleware/little-elements module:

import './test/spec/utilSpec.js';
import './arrivalPie/spec/arrivalPieSpec.js';
import './styleGuide/spec/styleGuideSpec.js';
import {startTest} from './test/util.js';

startTest();

The startTest() function is a small wrapper that detects whether karmajs is the test runtime, since the bootstrap process differs slightly in that scenario (a sketch of such a wrapper appears below the config excerpt). The karmajs config file should also annotate javascript module files with a module type. For example, here is an excerpt from little-elements' karma.conf.js:

    files: [
      'lib/test/karmaAdapter.js',
      { pattern: 'lib/arrivalPie/**/*.js', type: 'module', included: false },
      { pattern: 'lib/styleGuide/**/*.js', type: 'module', included: false },
      { pattern: 'lib/test/**/*.js', type: 'module', included: false },
      { pattern: 'lib/testMain.js', type: 'module', included: true },
      { pattern: 'node_modules/lit-html/*.js', type: 'module', included: false }
    ],
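
As mentioned above, startTest() just figures out which runtime it's in and kicks off the test run. A minimal sketch of such a wrapper - assuming karma's window.__karma__ global and jasmine's standard getEnv().execute() API, and not the actual little-elements implementation - might look like this:

// test/util.js (sketch)
export function startTest() {
    if (window.__karma__) {
        // under karma a custom adapter prevents the automatic start,
        // so signal karma once all the spec modules have been imported
        window.__karma__.start();
    } else {
        // in a plain browser page, launch jasmine directly
        jasmine.getEnv().execute();
    }
}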

Feel free to import the @littleware/little-elements module into your project if it will be helpful!

Tuesday, July 31, 2018

little-elements

Last week I rolled out an update to apps.frickjack.com that changes the plumbing of the site.

  • The javascript is now organized as unbundled es2015 modules
  • The site also leverages javascript modules to manage html templates and CSS with lit-html

I also set up my first module on npm (here) to track UX components that could be shared between projects - which was fun :-)

wait for kubernetes pods

First, there's probably a better way to do this with kubectl rollout, but I didn't know that existed till last week.

Anyway ... kubernetes' API is asynchronous, so an operator that issues a series of kubernetes commands to update various deployments may want to wait until all the pods in those deployments are up and running before moving on. A script like kube-wait4-pods does the job:

(
    # If new pods are still rolling/starting up, then wait for that to finish
    COUNT=0
    OK_COUNT=0
    # Don't exit till we get 2 consecutive readings with all pods running.
    while [[ "$OK_COUNT" -lt 2 ]]; do
      g3kubectl get pods
      if [[ 0 == "$(g3kubectl get pods -o json |  jq -r '[.items[] | { name: .metadata.generateName, phase: .status.phase, waitingContainers: [ try .status.containerStatuses[] | { waiting:.state|has("waiting"), ready:.ready}|(.waiting==true or .ready==false)|select(.) ]|length }] | map(select(.phase=="Pending" or .phase=="Running" and .waitingContainers > 0)) | length')" ]]; then
        let OK_COUNT+=1
      else
        OK_COUNT=0
      fi
      
      if [[ "$OK_COUNT" -lt 2 ]]; then
        echo ------------
        echo "INFO: Waiting for pods to exit Pending state"
        let COUNT+=1
        if [[ COUNT -gt 30 ]]; then
          echo -e "$(red_color "ERROR:") pods still not ready after 300 seconds"
          exit 1
        fi
        sleep 10
      fi
    done
)

For example - this Jenkins pipeline deploys a new stack of services to a QA environment, then waits till the new versions of pods are deployed before running through an integration test suite.

...
    stage('K8sDeploy') {
      steps {
        withEnv(['GEN3_NOPROXY=true', "vpc_name=$env.KUBECTL_NAMESPACE", "GEN3_HOME=$env.WORKSPACE/cloud-automation"]) {
          echo "GEN3_HOME is $env.GEN3_HOME"
          echo "GIT_BRANCH is $env.GIT_BRANCH"
          echo "GIT_COMMIT is $env.GIT_COMMIT"
          echo "KUBECTL_NAMESPACE is $env.KUBECTL_NAMESPACE"
          echo "WORKSPACE is $env.WORKSPACE"
          sh "bash cloud-automation/gen3/bin/kube-roll-all.sh"
          sh "bash cloud-automation/gen3/bin/kube-wait4-pods.sh || true"
        }
      }
    }
...

BTW - the code referenced above is a product of the great team of developers at the Center for Data Intensive Science.

Saturday, April 07, 2018

nginx reverse-proxy tricks for jwt and csrf

I've been doing devops work over the last few months at CDIS. One of the tasks I worked on recently was to extend the nginx configuration (on github as a kubernetes configmap here) for the gen3 data commons stack.

First, we added end-user data to the proxy log by extracting user information from the JWT in the auth cookie. We took advantage of nginScript (nginx's embedded javascript subset) to import some standard javascript code (also in the kubernetes configmap) to parse the JWT.

function userid(req, res) {
  var token = req.variables["access_token"];
  var user = "uid:null,unknown@unknown";

  if (token) {
    // note - raw token is secret, so do not expose in userid
    // base64url -> base64: replace all '-' and '_' before decoding
    var raw = atob((token.split('.')[1] || "").replace(/-/g, '+').replace(/_/g, '/'));
    if (raw) {
      try {
        var data = JSON.parse(raw);
        if (data) {
          if (data.context && data.context.user && data.context.user.name) {
            user = "uid:" + data.sub + "," + data.context.user.name;
          }
        }
      } catch (err) {}
    }
  }
  return user;
}

Next, we added a CSRF guard that verifies that if a CSRF cookie is present, then it matches a CSRF header. The logic is a little clunky (in github here), because of the restrictions on conditionals in nginx configuration, but it basically looks like the sample configuration below - where $access_token is the JWT auth token (taken from a header for non-web clients, or from a cookie for web browser clients), and an "ok-SOMETHING" $csrf_check is only required for browser clients. We do not enforce the HTTP-header based CSRF check in the proxy for endpoints that may be accessed by traditional HTML form submissions (which embed the token in the form body), or for endpoints accessed by third party browser frontends (like jupyterhub) that may implement a CSRF guard in a different way. Finally, there's also a little bit of logic that auto-generates a CSRF cookie if one is not present - a javascript client in the same domain can read the cookie to get the value to put in the X-CSRF header.

          set $access_token "";
          set $csrf_check "ok-tokenauth";
          if ($cookie_access_token) {
              set $access_token "bearer $cookie_access_token";
              # cookie auth requires csrf check
              set $csrf_check "fail";
          }
          if ($http_authorization) {
              # Authorization header is present - prefer that token over cookie token
              set $access_token "$http_authorization";
          }

          #
          # CSRF check
          # This block requires a csrftoken for all POST requests.
          #
          if ($cookie_csrftoken = $http_x_csrf_token) {
            # this will fail further below if cookie_csrftoken is empty
            set $csrf_check "ok-$cookie_csrftoken";
          }
          if ($request_method != "POST") {
            set $csrf_check "ok-$request_method";
          }
          if ($cookie_access_token = "") {
            # do this again here b/c empty cookie_csrftoken == empty http_x_csrf_token - ugh  
            set $csrf_check "ok-tokenauth";
          }
          ...
          location /index/ {
              if ($csrf_check !~ ^ok-\S.+$) {
                return 403 "failed csrf check";
              }
              if ($cookie_csrftoken = "") {
                add_header Set-Cookie "csrftoken=$request_id$request_length$request_time$time_iso8601;Path=/";
              }

              proxy_pass http://indexd-service/;
          }
          ...

Thursday, November 23, 2017

Jenkins Backup Pipeline

I recently set up a Jenkins CICD service to complement the Travis based automation already in use where I work. I have worked with job-based Jenkins workflows in the past - where we set up chains of interdependent jobs (ex: build, publish assets, deploy to dev environment, ...) - but I took this opportunity to adopt Jenkins' new (to me) Pipeline pattern, and I'm glad I did.

A Jenkins pipeline defines (via a groovy DSL) a sequence of steps that execute together in a build under a single Jenkins job. Here's an example pipeline we use to backup our Jenkins configuration to S3 every night.

#!groovy

pipeline {
  agent any

  stages {
    stage('BuildArchive'){
      steps {
        echo "BuildArchive $env.JENKINS_HOME"
        sh "tar cvJf backup.tar.xz --exclude '$env.JENKINS_HOME/jobs/[^/]*/builds/*' --exclude '$env.JENKINS_HOME/jobs/[^/]*/last*' --exclude '$env.JENKINS_HOME/workspace' --exclude '$env.JENKINS_HOME/war' --exclude '$env.JENKINS_HOME/jobs/[^/]*/workspace/'  $env.JENKINS_HOME"
      }
    }
    stage('UploadToS3') {
      steps {
        echo 'Upload to S3!'
        sh 'aws s3 cp --sse AES256 backup.tar.xz s3://cdis-terraform-state/JenkinsBackup/backup.$(date +%u).tar.xz'
      }
    }
    stage('Cleanup') {
      steps {
        echo 'Cleanup!'
        sh 'rm -f backup.tar.xz'
      }
    }
  }
  post {
    success {
      slackSend color: 'good', message: 'Jenkins backup pipeline succeeded'
    }
    failure {
      slackSend color: 'bad', message: 'Jenkins backup pipeline failed'
    }
    unstable {
      slackSend color: 'bad', message: 'Jenkins backup pipeline unstable'
    }
  }
}

If you are a Jenkins user, then take the time to give Pipelines a try. If you're also using github or bitbucket - then look into Jenkins' organization support, which works nicely with pull-request based workflows. Also try the new Blue Ocean UI - it's designed with pipelines in mind.

Saturday, August 19, 2017

stuff an app into a docker image

I had some fun over the last week setting up a docker image for an S3-copy utility, and integrating it into the gulp build for https://apps.frickjack.com. I pushed the image for the s3cp app up to hub.docker.com. I use s3cp to sync a local build of https://apps.frickjack.com up to an S3 bucket. I could probably coerce the AWS CLI (aws s3 sync) into doing the same job, but I wrote this app a while ago, and it includes functionality to automatically gzip-compress each asset before upload, and set various HTTP headers on the asset (content-encoding, cache-control, content-type, and etag). For example - inspect the headers (using a browser's developer tools) on an arbitrary apps.frickjack.com asset like https://apps.frickjack.com/resources/css/littleware/styleGuide/guide.css.


A few s3cp details - it's a simple scala application with two command line flags (-config aws-key aws-secret, -copy source destination). The '-config' command saves the AWS credential under ~/.littleware/aws/. The code could use a little love - it is hard-coded to use the AWS Virginia region, it could use a '-force' option with '-copy', and its error messages are annoying - but it works for what I need. The source is on github (git clone ...; cd littleware/webapp/littleApps/s3Copy; gradle build; gradle copyToLib; cli/s3cp.sh --help).


I pushed a binary build of s3cp to dockerhub. I run it like this. First, I register my AWS credentials with the app. We only need to configure s3cp once, but we'll want to mount a docker volume to save the configuration to (I need to wire up a more secure way to save and pass the AWS secrets):


docker volume create littleware

docker run -it -v littleware:/root/.littleware \
    -v /home/reuben:/mnt/reuben \
    --name s3cp --rm frickjack/s3cp:1.0.0 \
    -config aws-key aws-secret

After the configuration is done I use s3cp with '-copy' commands like this:


docker run -it -v littleware:/root/.littleware \
    -v /home/reuben:/mnt/reuben \
    --name s3cp --rm frickjack/s3cp:1.0.0 \
    -copy /mnt/reuben/Code/littleware-html5Client/build/ s3://apps.frickjack.com/

I added a gulp task to the apps.frickjack.com code to simplify the S3 deploy - from gulpfile.js:


gulp.task( 'deploy', [ 'compileclean' ], function(cb) {
    const pwdPath = process.cwd();
    const imageName = "frickjack/s3cp:1.0.0";
    const commandStr = "yes | docker run --rm --name s3gulp -v littleware:/root/.littleware -v '" +
        pwdPath + ":/mnt/workspace' " + imageName + " -copy /mnt/workspace/build/ s3://apps.frickjack.com/";

    console.log( "Running: " + commandStr );

    exec( commandStr, 
        function (err, stdout, stderr) {
            console.log(stdout);
            console.log(stderr);
            if ( err ) {
                cb( err );  // signal the task failure to gulp
            } else {
                cb();
            }
        }
    );
});

Sunday, August 13, 2017

versioning js and css with gulp-rev and gulp-revReplace

I've discussed before how I run https://apps.frickjack.com on S3, but one issue I did not address was how to version updates, so that each visitor loads a consistent set of assets. Here's the problem.

  • on Monday I publish v1 assets to the S3 bucket behind apps.frickjack.com
  • later on Monday Fred visits apps.frickjack.com, and his browser caches several v1 javascript and CSS files - a.js, b.js, x.css, y.css, ...
  • on Tuesday I publish v2 assets to S3 - just changing a few files
  • on Wednesday Fred visits apps.frickjack.com again; for whatever reason his browser fetches the v2 b.js, but loads v1 of the other assets from cache

On Wednesday Fred loads an inconsistent set of assets that might not work together. There are several approaches people take to avoid this problem. Surma gave a good overview of HTTP cache headers and caching strategy in this talk on YouTube, and Jake Archibald goes into more detail in this bLog (we set cache-headers directly on our apps.frickjack.com S3 objects).

Long story short - I finally wired up my gulpfile with gulp-rev and gulp-rev-replace to add a hash to the javascript and css file names. Each visitor to apps.frickjack.com is now guaranteed to load a consistent set of assets, because an asset's name changes when its content changes. I was really happy to find gulp-rev - it just takes care of things for me. The only gotcha is that gulp-rev-rewrite does not like to work with relative asset paths (ex - <script src="511.js"), so I had to update a few files to use absolute paths (src="/511/511.js") - otherwise things worked great.
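
For reference, the wiring looks roughly like the sketch below - this is not the exact apps.frickjack.com gulpfile, and the task and path names are made up:

const gulp = require('gulp');
const rev = require('gulp-rev');
const revReplace = require('gulp-rev-replace');

// hash the js and css file names, and write a manifest mapping old names to new
gulp.task('rev', function () {
    return gulp.src(['build/**/*.js', 'build/**/*.css'], { base: 'build' })
        .pipe(rev())
        .pipe(gulp.dest('dist'))
        .pipe(rev.manifest())
        .pipe(gulp.dest('dist'));
});

// rewrite references in the html to point at the hashed file names
gulp.task('revreplace', ['rev'], function () {
    return gulp.src('build/**/*.html')
        .pipe(revReplace({ manifest: gulp.src('dist/rev-manifest.json') }))
        .pipe(gulp.dest('dist'));
});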

Monday, August 07, 2017

use docker's overlay2 storage driver on Ubuntu 16.04+

Docker defaults to its aufs storage driver on Ubuntu 16.04, but the system runs a 4.4 kernel (new enough for overlay2), so it's probably a good idea to switch over to the overlay2 driver.


$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.2 LTS
Release: 16.04
Codename: xenial

$ uname -r
4.4.0-83-generic

$ sudo cat /etc/docker/daemon.json
{
  "storage-driver":"overlay2",
  "log-driver": "journald"
}


Monday, June 05, 2017

S3+CloudFront+ACM+Route53 = serverless http2 https site with CDN

The other night I logged into the AWS web console, and upgraded my little S3 hosted site at https://apps.frickjack.com to http2 with TLS - which opens the door for adding a service worker to the site. As a side benefit the site is behind the CloudFront CDN. I also moved the domain's authoritative DNS to Route 53 earlier in the week - just to consolidate that functionality under AWS.

It's ridiculous how easy it was to do this upgrade - I should have done it a while ago:

  • create the CloudFront distribution
  • configure the CDN with a certificate set up in the AWS Certificate Manager
  • update DNS for the domain 'apps.frickjack.com' to reference the CDN hostname - Route53 supports an 'alias' mechanism that exposes an A record, but you can also just use a CNAME if you have another DNS provider

Anyway - this is fun stuff to play with, and my AWS bill will still be less than $2 a month.

Monday, May 29, 2017

Content Security Policy and S3 hosted site

When I went to implement a content security policy on http://apps.frickjack.com I was disappointed to realize that the 'Content-Security-Policy' header is not one of the standard headers supported by S3's metadata API. This bLog explains an approach using lambda on the CloudFront edge, but that looks crazy.

Fortunately - it turns out we can add a basic security policy to a page with a meta tag, so I added a <meta http-equiv="Content-Security-Policy" ...> tag to the nunjucks template that builds apps.frickjack.com's html pages at gulp compile time. The full code is on github:

<meta http-equiv="Content-Security-Policy" 
content="
  default-src 'none'; 
  img-src 'self' data: https://www.google-analytics.com; 
  script-src 'self' https://www.google-analytics.com; 
  style-src 'self' https://unpkg.com https://fonts.googleapis.com; 
  object-src 'none'; 
  font-src 'self' https://fonts.googleapis.com https://fonts.gstatic.com
"
>

Saturday, May 27, 2017

Update apps.frickjack.com - new 511 app

I finally updated my little S3-hosted web sandbox at http://apps.frickjack.com. The update has a few parts. First - I took down some old javascript apps and CSS code that I had developed several years ago using the YUI framework (now defunct), and tried to clean up and simplify the landing page.

Next, I posted some new code exploring app development and testing with vanillajs, custom elements, and CSS. The new code includes an update to the 511 app for timing labor contractions with the 511 rule. The app uses a simple custom element to show the distribution of contractions over the last hour on an SVG pie-chart. The code is on github.

Finally, I switched the code over to an ISC license. I don't want an LGPL license to discourage people from copying something they find useful.

There's still a lot I'd like to do with apps.littleware - starting with upgrading the site to https by putting the S3 bucket behind CloudFront, and wiring up a service worker. We'll see how long it takes me to make time for that ...

Thursday, January 26, 2017

VPN blocking Docker routes on Windows Workaround

Here's the situation. You're stuck in 2017 running Windows 7 with a Cisco VPN client. You're also a Docker evangelist, and run local developer environments using Docker Toolbox on that Windows 7 laptop. Docker Toolbox runs the Docker daemon on a Virtual Box VM running the boot2docker Linux distribution. One of the cool tricks Docker Toolbox manages for you is it sets up a virtual network (VirtualBox host-only network), so the Boot2Docker VM has its own IP address (192.168.0.100 or whatever), and you alias that IP address in \Windows\System32\drivers\etc\hosts, so that you can connect to https://my.docker.vm/services, and everything is super cool - until you connect to that damn Cisco VPN, because the VPN is configured by some bonehead IT Windows group policy to hijack all routes to private network IP addresses, and somehow they wired it so that you can't "route add" new routes to your Docker VM.

Fortunately - there's an easy workaround to this mess. First, identify a block of public IP addresses that you know you don't need to communicate with (I chose the 55.0.0.0/8 block assigned to the DOD Network Information Center), and reconfigure the Virtual Box host-only network to assign addresses from that block rather than the default private network it was originally configured with (the Virtual Box GUI has a tool under File -> Preferences -> Network). I had to reboot to get the boot2docker VM to stick to the new IP address, and screw around with 'docker-machine regenerate-certs', but it eventually worked. Good luck!

Monday, January 23, 2017

Debugging Dockerfile builds

I often find myself in a situation where I'm building an image from some Dockerfile, and the build fails 10 or 15 lines in, and I want to dive in and debug what's going wrong with that failing line. Fortunately - that's easy to do.
Let's suppose you're trying to build an image with a Dockerfile like this:


$ cat Dockerfile
FROM alpine:3.5
RUN echo "Step 2"
RUN echo "Step 3" && exit 1
RUN echo "Step 4"


Of course the build fails on 'exit 1' like this:


$ docker build -t demo:1.0.0 .
Sending build context to Docker daemon  60.6 MB
Step 1 : FROM alpine:3.5
 ---> 88e169ea8f46
Step 2 : RUN echo "Step 2"
 ---> Running in 7ec0de04622c
Step 2
 ---> 281d8cac4e45
Removing intermediate container 7ec0de04622c
Step 3 : RUN echo "Step 3" && exit 1
 ---> Running in a8a16cb6d591
Step 3
The command '/bin/sh -c echo "Step 3" && exit 1' returned a non-zero code: 1

Fortunately, the docker build saves an intermediate image after each command in the Dockerfile, and outputs the id of that image (---> 281d8cac4e45), so it's easy to do something like this to debug that failing command:


$ docker run --name debug -v '/home/reuben:/mnt/reuben' -it 281d8cac4e45 /bin/sh
/ # 

Sunday, October 16, 2016

Dynamic karma.conf.js for Jenkins

Karma-runner is a great way to run jasmine javascript test suites. One trick for customizing karma's runtime behavior is to take advantage of the fact that karma's config file is javascript - not just json - so karma.conf.js can easily change karma's settings based on an environment variable.

For example, when run interactively karma can watch files, and rerun the test suite in Chrome when a file changes; but when Jenkins runs karma, karma should run through the test suite once in phantomJs, then exit. Here's one way to set that up.

First, if you generate your karma config.js file using karma init, and wire up the file to run your test suite in Chrome and watch files for changes, then you wind up with a karma.conf.js (or whatever.js) file structured like this:

module.exports = function(config) {
    config.set( { /* bunch of settings */ } );
}

To wire up the jenkins-phantomjs support - just pull the settings object out to its own variable, add a block of code that overrides the settings when your favorite environment variable is set, and configure Jenkins to set that environment variable ...

module.exports = function(config) {
    var settings = { /* bunch of settings */ },
        i, overrides = {};
    if ( process.env.KARMA_PHANTOMJS ) {  // jenkins is running karma ...
        overrides = {
            singleRun: true,
            reporters: ['dots', 'junit' ],
            junitReporter: {
                outputFile: 'test-result.xml'
            },
            browsers: [ 'PhantomJS', 'PhantomJS_custom'],
            customLaunchers: {
                PhantomJS_custom: {
                    flags: [ '--load-images=true'], 
                    options: {
                        windowName: 'my-window'
                    },
                    debug:true
                }
            }
         };
    }
    for ( i in overrides ) {
        settings[i] = overrides[i];
    }
    config.set( settings );
}

Jenkins can run Karma test suites with a shell script, and Jenkins' JUnit plugin harvests and publishes the test results; works great!

Saturday, September 24, 2016

JAAS vs AWS Security Policy vs RBAC vs FireBase vs ACL vs WTF ?

I've been thinking a bit about authentication and authorization lately, and while I may have my head semi-wrapped around authentication - authorization is kicking my ass a bit. Here's my understanding as of today.

First, there's something called "role based authorization" (RBAC) that pops up a lot, and is embodied in one way in the java EE specification. The basic idea behind RBAC is that a user (or a user group) is assigned a "role" that represents explicitly or implicitly a basket of permissions within an application. For example - an enterprise SAAS product that I work on at my day job defines "super user", "admin", and "standard" user roles under each customer's account. Each user is assigned a role that gives the user a certain basket of global permissions (this user can perform action X on any asset).

I was somewhat confused when I discovered that AWS defines an IAM role as a non-person user (or Principal in the JAAS world). It turns out that AWS implements something more powerful than the RBAC system I'm semi-familiar with - AWS builds up its security model around security policy specifications that can either be assigned to a user or group of users (like an RBAC role) or to an asset (like an ACL). When deciding whether a particular user may perform a particular action on a particular asset the AWS policy engine evaluates all applicable policies to come to a decision.
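
For example, a policy document like the sketch below (the bucket name and actions are made up for illustration) can be attached to an IAM user, group, or role - or, with a Principal element added, to a resource like an S3 bucket:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::example-team-bucket/reports/*"
    }
  ]
}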

I have not worked at all with Google's Firebase, but it seems to implement a less powerful (compared to AWS IAM policies) but simpler access control mechanism that splits the difference between RBAC and ACL's: a policy specification that grants and restricts permissions on an application's tree of assets via XPATH-like selection rules.

One thing I appreciate about Firebase's design is that it empowers the application developer with an access control mechanism, so if a team is building an application on Firebase, then the team's application can take advantage of Firebase's access control system to regulate access to the application's assets by the application's users.

On the other hand, the IAM policy tools in AWS provide a powerful mechanism for regulating what actions different team-members may take on AWS resources (S3 buckets, EC2 VM's, whatever) within a shared AWS account in which the team is deploying some application, but it's not clear to me how that team could leverage IAM's security policy engine within the team's own application to control how the application's (non-IAM) users may interact with the application's (non-AWS) assets. The AWS Cognito service seems to try to point in the direction of leveraging AWS policy within an application, but in an application space where business logic is implemented in the client, and the client interacts directly with the data-store. I'm still a member of the old school that thinks the client should implement UX, and access API's that implement the business logic and validation that precedes data manipulation.

Sunday, August 07, 2016

Custom JUnit4 TestRunner with Guice

It's easy to write your own JUnit test-runner. A developer on the java-platform often writes code in the context of some framework or container platform (Spring, Play, java-EE, OSGi, Guice, whatever) that provides facilities for application bootstrap and dependency injection. When at some point she wants to write a JUnit test suite for some subset of that code, the developer is faced with the problem of how to recreate the application framework's runtime context within JUnit's runtime context - which assumes a simple no-argument constructor for test-harness classes.

Fortunately - JUnit provides a serviceable solution for augmenting the test suite runtime context in the form of test runners. A developer uses an annotation on his test class to indicate that the tests require a custom runner. For example - Spring provides its SpringJUnit4ClassRunner, which allows a test developer to use Spring annotations in her test suites - like this:

@RunWith(SpringJUnit4ClassRunner.class)
class WhateverTest {

     @Resource(name="whatever")
     Whatever service;

     @Test
     public void testSomething() {
         assertTrue( ... );
     }
     ...
}



The Littleware project includes a system that extends Guice with a simple module and bootstrap mechanism. I was able to integrate it with JUnit via a simple LittleTestRunner that follows the same trick Spring's test runner uses - just extend JUnit's default BlockJUnit4ClassRunner and override the createTest method - easy cheesey:

package littleware.test;

import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;
import littleware.bootstrap.LittleBootstrap;
import org.junit.runners.BlockJUnit4ClassRunner;
import org.junit.runners.model.InitializationError;

/**
 * JUnit4 test runner enabled with littleware.bootstrap Guice bootrap and
 * injection. Imitation of SpringJunit4TestRunner:
 *
 */
public class LittleTestRunner extends BlockJUnit4ClassRunner {
    private static final Logger log = Logger.getLogger(LittleTestRunner.class.getName());

    /**
     * Disable BlockJUnit4ClassRunner test-class constructor rules
     */
    @Override
    protected void validateConstructor( List<Throwable> errors ) {}
    
    /**
     * Construct a new {@code LittleTestRunner} and initialize a
     * {@link LittleBootstrap} to provide littleware testing functionality to
     * standard JUnit tests.
     *
     * @param clazz the test class to be run
     * @see #createTestContextManager(Class)
     */
    public LittleTestRunner(Class<?> clazz) throws InitializationError {
        super(clazz);
        if (log.isLoggable(Level.FINE)) {
            log.log(Level.FINE, "constructor called with [{0}]", clazz);
        }
    }

    /**
     * This is where littleware hooks in
     * 
     * @return an instance of getClass constructed via the littleware managed Guice injector
     */
    @Override
    protected Object createTest() {
        try {
            return LittleBootstrap.factory.lookup(this.getTestClass().getJavaClass());
        } catch ( RuntimeException ex ) {
            log.log( Level.SEVERE, "Test class construction failed", ex );
            throw ex;
        }
    }
}

Now I can write tests in Guice's "inject dependencies into the constructor" style - like this:


/**
 * Just run UUIDFactory implementations through a simple test
 */
@RunWith(LittleTestRunner.class)
public class UUIDFactoryTester {

    private final Provider<UUID> uuidFactory;

    /**
     * Constructor stashes UUIDFactory to run test
     * against
     */
    @Inject()
    public UUIDFactoryTester(Provider<UUID> uuidFactory) {
        this.uuidFactory = uuidFactory;
    }

    /**
     * Just get a couple UUID's, then go back and forth to the string
     * representation
     */
    @Test
    public void testUUIDFactory() {

       //...
    }
}

Tuesday, July 19, 2016

Gradle with "Service Provider" Pattern

I've started dusting off my old littleware project over the last month or so on a "dev" branch in github. As part of the cleanup I'm moving littleware from its old ANT-IVY build system to Gradle. When I started with ant-ivy it was a great alternative to Maven, and allowed me to extend the build setup put in place by a Netbeans IDE's project with transitive dependencies, jenkins integration, scala support - all that great stuff. The problem is that the ant-ivy setup is idiosyncratic, and Gradle and SBT are well designed, widely used systems. I wound up going with Gradle, because I've read a lot of good things about it, and I want to become more familiar with it, and I had used SBT a little in the past, and found SBT's DSL annoying and its extension mechanisms inscrutable, and of course Google is using Gradle to build Android, ...

Anyway - like I said, I'm dusting off the code, and trying to remember what the hell I was thinking with some of it, but so far the Gradle integration has gone well. One of the issues I'm dealing with is the IVY-centric mechanism littleware had set up to pull in optional run-time dependencies. With Gradle the default behavior of the java plugin is to include all compile-time dependencies at run-time - which is what most people expect. Littleware's ant-ivy setup excluded optional dependencies (like mysql-connector versus postgresql - something like that), and included add-on IVY configurations like "with_mysql", "with_postgres", "with_aws", ... that kind of thing, so a client project would have a dependency specification something like this:

<dependency org="littleware" name="littelware" ... conf="compile->compile;runtime->runtime,with_mysql" />

Of course nobody else works like that, so in the interest of clarity and doing what people expect, I'm going to try to rework things so that littleware's base jar asset includes service provider interfaces (SPI's) that add-on modules implement; a client that wants littleware to wire a MySQL DataSource into a Guice injector would then add a dependency on the 'littleware_mysql_module' to their build - something like that. The Jackson JSON parser implements a module system like this, and the SPI pattern is all over java-EE (JDBC, Servlets, whatever); it's a good and widely understood pattern. We'll see how it goes.



Sunday, June 12, 2016

Decouple UX and Services API's in Single Page Apps

An old-school PHP, Struts, JSF, Rails, what-have-you webapp was deployed with a strong binding between the application's UX and backend services. A typical "3 tier" application would involve some kind of server-side MVC framework for generating HTML, and some business objects that would manage the storage and interpretation of data stored in a SQL database, and that was often the whole shebang.

UX in a modern single page application (SPA) is implemented in javascript that accesses backend microservices via AJAX calls to REST API's. Unfortunately there is still a tendency in many projects to combine the web UX with a backend service in a single project - maybe implement an old-school server-side login flow, then dynamically generate an HTML shell that defines the authenticated context in some javascript variables, and a call to the javascript app's bootstrap routine. I am not a fan of this way of structuring an application.

I like the "app shell" approach to structuring a SPA - where the web UX is its own separate project - a UI application that accesses backend micro-services. There are various advantages to this approach, but the immediate benefit is that it simplifies the UX team's life - they can use whatever tools they want (gulp, less, typescript, bower, jslint, ...), develop locally with a simple nodejs+express server and maybe some docker containers providing backend services. The developers can focus on UX and design, and do not have to deal with whatever backend technology might be in use (play, maven, tomcat, rails, whatever, ...). The app-shell development process closely resembles the dev process for a native Android or iOS app.


Sunday, May 15, 2016

Problems with Cookies (session management with multiple authentication providers)


The web project I work on has an old-school login design - username and password, check the hash in the database, set a cookie - that we would like to improve on to support third party API access and multiple external authentication providers (OpenID-Connect, OAuth2, SAML).

First, just setting an auth cookie sucks. Suppose a visitor to a web site has two login accounts A and B. He logs into account A in one tab, then, in another tab, signs out of the first account, and signs into account B. Now if the first tab is running a "single page application" javascript app, and the javascript code assumes that it is either signed into a valid session or signed out, then that code in the first tab can wind up inadvertently issuing updates to account B that were intended for account A. An application should not rely on a cookie to track its session, since that cookie can be changed out from under the app by code running in another tab; the app should instead explicitly pass some X-auth header or some similar trick to explicitly specify which session it intends to communicate in.
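
In code, the idea is just that the app captures its session once at bootstrap, and passes it explicitly on every API call rather than relying on whatever cookie happens to be set at the moment. A sketch (the endpoint and header name are made up):

// the app stashes its own session token at bootstrap time ...
const session = { token: null };

async function login(credentials) {
    const resp = await fetch('/auth/session', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(credentials)
    });
    session.token = (await resp.json()).token;
}

// ... and passes it explicitly on every request, so another tab
// swapping the auth cookie cannot silently change which account
// this app's updates land on
function apiFetch(url, opts = {}) {
    const headers = Object.assign({}, opts.headers, { 'X-Auth': session.token });
    return fetch(url, Object.assign({}, opts, { headers: headers }));
}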

Second, the old approach of prompting the visitor for a username and password that are validated against an app-maintained (JAAS style) authentication database sucks. A modern app should support multiple identity providers, so that a visitor can authenticate against an identity provider of her choice to establish a login session. The app then needs a session management service to manage the authentication handshake with the different identity providers. This kind of infrastructure works equally well for both native and web apps.

Anyway - that's my current thinking on the subject of authentication and session management. Design-time ideas like these rarely survive implementation of working code.