Thursday, January 26, 2017

Workaround for VPN blocking Docker routes on Windows

Here's the situation. You're stuck in 2017 running Windows 7 with a Cisco VPN client. You're also a Docker evangelist, and run local developer environments using Docker Toolbox on that Windows 7 laptop. Docker Toolbox runs the Docker daemon on a VirtualBox VM running the boot2docker Linux distribution. One of the cool tricks Docker Toolbox manages for you is setting up a virtual network (a VirtualBox host-only network), so the boot2docker VM has its own IP address (192.168.0.100 or whatever). You alias that IP address in \Windows\System32\drivers\etc\hosts, so that you can connect to https://my.docker.vm/services, and everything is super cool - until you connect to that damn Cisco VPN, because the VPN is configured by some bonehead IT Windows group policy to hijack all routes to private network IP addresses, and somehow they wired it so that you can't "route add" new routes to your Docker VM.
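
The hosts-file alias is just an ordinary entry pointing your made-up name at the VM's address - something like this (the IP is whatever docker-machine assigned to your VM):

# \Windows\System32\drivers\etc\hosts
192.168.0.100    my.docker.vm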

Fortunately, there's an easy workaround to this mess. First, identify a block of public IP addresses that you know you don't need to communicate with (I chose the 55.0.0.0/8 block assigned to the DoD Network Information Center), and reconfigure the VirtualBox host-only network to assign addresses from that block rather than the default private network it was originally configured with (the VirtualBox GUI has a tool under File -> Preferences -> Network). I had to reboot to get the boot2docker VM to pick up the new IP address, and screw around with 'docker-machine regenerate-certs', but it eventually worked. Good luck!
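
If you prefer the command line, the same reconfiguration can be scripted with VBoxManage - roughly the sketch below (the host-only adapter name and the docker-machine machine name vary from setup to setup, so treat these as illustrative):

# point the host-only adapter at an address range inside the 55.0.0.0/8 block
VBoxManage hostonlyif ipconfig "VirtualBox Host-Only Ethernet Adapter" --ip 55.0.0.1 --netmask 255.255.255.0
VBoxManage dhcpserver modify --ifname "VirtualBox Host-Only Ethernet Adapter" --ip 55.0.0.2 --netmask 255.255.255.0 --lowerip 55.0.0.100 --upperip 55.0.0.200 --enable
# restart the VM so it picks up an address from the new range, then fix up the TLS certs and the hosts alias
docker-machine restart default
docker-machine regenerate-certs default
docker-machine ip default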

Monday, January 23, 2017

Debugging Dockerfile builds

I often find myself in a situation where I'm building an image from some Dockerfile, and the build fails 10 or 15 lines in, and I want to dive in and debug what's going wrong with that failing line. Fortunately - that's easy to do.
Let's suppose you're trying to build an image with a Dockerfile like this:


$ cat Dockerfile
FROM alpine:3.5
RUN echo "Step 2"
RUN echo "Step 3" && exit 1
RUN echo "Step 4"


Of course the build fails on 'exit 1' like this:


$ docker build -t demo:1.0.0 .
Sending build context to Docker daemon  60.6 MB
Step 1 : FROM alpine:3.5
 ---> 88e169ea8f46
Step 2 : RUN echo "Step 2"
 ---> Running in 7ec0de04622c
Step 2
 ---> 281d8cac4e45
Removing intermediate container 7ec0de04622c
Step 3 : RUN echo "Step 3" && exit 1
 ---> Running in a8a16cb6d591
Step 3
The command '/bin/sh -c echo "Step 3" && exit 1' returned a non-zero code: 1

Fortunately, docker build saves an intermediate image after each command in the Dockerfile, and outputs the id of that image (---> 281d8cac4e45), so it's easy to do something like this to debug the failing command:


$ docker run --name debug -v '/home/reuben:/mnt/reuben' -it 281d8cac4e45 /bin/sh
/ # 

Sunday, October 16, 2016

Dynamic karma.conf.js for Jenkins

Karma-runner is a great way to run jasmine javascript test suites. One trick for customizing karma's runtime behavior is to take advantage of the fact that karma's config file is javascript - not just json - so karma.conf.js can change karma's behavior based on an environment variable.

For example, when run interactively karma can watch files, and rerun the test suite in Chrome when a file changes; but when Jenkins runs karma, karma should run through the test suite once in PhantomJS, then exit. Here's one way to set that up.

First, if you generate your karma.conf.js file using karma init, and wire up the file to run your test suite in Chrome and watch files for changes, then you wind up with a karma.conf.js (or whatever.js) file structured like this:

module.exports = function(config) {
    config.set( { /* bunch of settings */ } );
}

To wire up the Jenkins-PhantomJS support, just pull the settings object out into its own variable, add a block of code that overrides the settings when your favorite environment variable is set, and configure Jenkins to set that environment variable ...

module.exports = function(config) {
    var settings = { /* bunch of settings */ },
        i, overrides = {};
    if ( process.env.KARMA_PHANTOMJS ) {  // jenkins is running karma ...
        overrides = {
            singleRun: true,
            reporters: ['dots', 'junit' ],
            junitReporter: {
                outputFile: 'test-result.xml'
            },
            browsers: [ 'PhantomJS', 'PhantomJS_custom'],
            customLaunchers: {
                PhantomJS_custom: {
                    flags: [ '--load-images=true'], 
                    options: {
                        windowName: 'my-window'
                    },
                    debug:true
                }
            }
         };
    }
    for ( i in overrides ) {
        settings[i] = overrides[i];
    }
    config.set( settings );
}

Jenkins can run Karma test suites with a shell script, and Jenkins' JUnit plugin harvests and publishes the test results; works great!
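
For example, an "Execute shell" build step along these lines does the trick (assuming karma, karma-phantomjs-launcher, and karma-junit-reporter are declared in the project's package.json - adjust paths for your own layout):

npm install
KARMA_PHANTOMJS=true ./node_modules/.bin/karma start karma.conf.js
# the JUnit plugin then publishes test-result.xml from the workspace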

Saturday, September 24, 2016

JAAS vs AWS Security Policy vs RBAC vs Firebase vs ACL vs WTF?

I've been thinking a bit about authentication and authorization lately, and while I may have my head semi-wrapped around authentication - authorization is kicking my ass a bit. Here's my understanding as of today.

First, there's something called role-based access control (RBAC) that pops up a lot, and is embodied in one way in the java EE specification. The basic idea behind RBAC is that a user (or a user group) is assigned a "role" that represents, explicitly or implicitly, a basket of permissions within an application. For example - an enterprise SaaS product that I work on at my day job defines "super user", "admin", and "standard" user roles under each customer's account. Each user is assigned a role that gives the user a certain basket of global permissions (this user can perform action X on any asset).

I was somewhat confused when I discovered that AWS defines an IAM role as a non-person user (or Principal in the JAAS world). It turns out that AWS implements something more powerful than the RBAC system I'm semi-familiar with - AWS builds up its security model around security policy specifications that can either be assigned to a user or group of users (like an RBAC role) or to an asset (like an ACL). When deciding whether a particular user may perform a particular action on a particular asset, the AWS policy engine evaluates all applicable policies to come to a decision.
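
For example, here's a rough sketch (the bucket and user names are made up) of a policy document that grants read-only access to a single S3 bucket. Attached to an IAM user it behaves like an RBAC grant; the same document, with a Principal element added, could instead be attached to the bucket itself as a bucket policy - more like an ACL:

cat > read-reports-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [ "s3:GetObject", "s3:ListBucket" ],
      "Resource": [
        "arn:aws:s3:::example-reports-bucket",
        "arn:aws:s3:::example-reports-bucket/*"
      ]
    }
  ]
}
EOF
aws iam put-user-policy --user-name some-analyst --policy-name read-reports --policy-document file://read-reports-policy.json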

I have not worked at all with Google's Firebase, but it seems to implement a less powerful (compared to AWS IAM policies), but simpler access control mechanism that splits the difference between RBAC and ACL's via a policy specification that grants and restricts permissions on an application's tree of assets based on XPATH-like selection rules.

One thing I appreciate about Firebase's design is that it empowers the application developer with an access control mechanism: if a team is building an application on Firebase, then the team's application can take advantage of Firebase's access control system to regulate access to the application's assets by the application's users.

On the other hand, the IAM policy tools in AWS provide a powerful mechanism for regulating what actions different team members may take on AWS resources (S3 buckets, EC2 VM's, whatever) within a shared AWS account in which the team is deploying some application, but it's not clear to me how that team could leverage IAM's security policy engine within the team's own application to control how the application's (non-IAM) users may interact with the application's (non-AWS) assets. The AWS Cognito service seems to point in the direction of leveraging AWS policy within an application, but in an application architecture where business logic is implemented in the client, and the client interacts directly with the data store. I'm still a member of the old school that thinks the client should implement UX, and access API's that implement the business logic and validation that precedes data manipulation.

Sunday, August 07, 2016

Custom JUnit4 TestRunner with Guice

It's easy to write your own JUnit test runner. A developer on the java platform often writes code in the context of some framework or container (Spring, Play, java-EE, OSGi, Guice, whatever) that provides facilities for application bootstrap and dependency injection. When at some point she wants to write a JUnit test suite for some subset of that code, the developer is faced with the problem of how to recreate the application framework's runtime context within JUnit's runtime context - which assumes a simple no-argument constructor for test-harness classes.

Fortunately, JUnit provides a serviceable solution for augmenting the test suite runtime context in the form of test runners. A developer uses an annotation on her test class to indicate that the tests require a custom runner. For example - Spring provides its SpringJUnit4ClassRunner, which allows a test developer to use Spring annotations in her test suites - like this:

@RunWith(SpringJUnit4ClassRunner.class)
public class WhateverTest {

     @Resource(name="whatever")
     Whatever service;

     @Test
     public void testSomething() {
         assertTrue( ... );
     }
     ...
}



The Littleware project includes a system that extends Guice with a simple module and bootstrap mechanism, and I was able to integrate it with JUnit via a simple LittleTestRunner that follows the same trick Spring's test runner uses: extend JUnit's default BlockJUnit4ClassRunner and override the createTest method - easy cheesey:

package littleware.test;

import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;
import littleware.bootstrap.LittleBootstrap;
import org.junit.runners.BlockJUnit4ClassRunner;
import org.junit.runners.model.InitializationError;

/**
 * JUnit4 test runner enabled with littleware.bootstrap Guice bootstrap and
 * injection. Imitation of SpringJUnit4ClassRunner.
 *
 */
public class LittleTestRunner extends BlockJUnit4ClassRunner {
    private static final Logger log = Logger.getLogger(LittleTestRunner.class.getName());

    /**
     * Disable BlockJUnit4ClassRunner test-class constructor rules
     */
    @Override
    protected void validateConstructor( List<Throwable> errors ) {}
    
    /**
     * Construct a new {@code LittleTestRunner} and initialize a
     * {@link LittleBootstrap} to provide littleware testing functionality to
     * standard JUnit tests.
     *
     * @param clazz the test class to be run
     * @see #createTest()
     */
    public LittleTestRunner(Class<?> clazz) throws InitializationError {
        super(clazz);
        if (log.isLoggable(Level.FINE)) {
            log.log(Level.FINE, "constructor called with [{0}]", clazz);
        }
    }

    /**
     * This is where littleware hooks in
     * 
     * @return an instance of getClass constructed via the littleware managed Guice injector
     */
    @Override
    protected Object createTest() {
        try {
            return LittleBootstrap.factory.lookup(this.getTestClass().getJavaClass());
        } catch ( RuntimeException ex ) {
            log.log( Level.SEVERE, "Test class construction failed", ex );
            throw ex;
        }
    }
}

Now I can write tests in Guice's "inject dependencies into the constructor" style - like this:


/**
 * Just run UUIDFactory implementations through a simple test
 */
@RunWith(LittleTestRunner.class)
public class UUIDFactoryTester {

    private final Provider<UUID> uuidFactory;

    /**
     * Constructor stashes UUIDFactory to run test
     * against
     */
    @Inject
    public UUIDFactoryTester(Provider<UUID> uuidFactory) {
        this.uuidFactory = uuidFactory;
    }

    /**
     * Just get a couple UUID's, then go back and forth to the string
     * representation
     */
    @Test
    public void testUUIDFactory() {

       //...
    }
}

Tuesday, July 19, 2016

Gradle with "Service Provider" Pattern

I've started dusting off my old littleware project over the last month or so on a "dev" branch in GitHub. As part of the cleanup I'm moving littleware from its old ANT-IVY build system to Gradle. When I started with ant-ivy it was a great alternative to Maven, and it let me extend the build setup put in place by a NetBeans IDE project with transitive dependencies, Jenkins integration, Scala support - all that great stuff. The problem is that the ant-ivy setup is idiosyncratic, while Gradle and SBT are well designed, widely used systems. I wound up going with Gradle because I've read a lot of good things about it and want to become more familiar with it, because I had used SBT a little in the past and found SBT's DSL annoying and its extension mechanisms inscrutable, and of course because Google is using Gradle to build Android, ...

Anyway - like I said, I'm dusting off the code, and trying to remember what the hell I was thinking with some of it, but so far the Gradle integration has gone well. One of the issues I'm dealing with is the IVY-centric mechanism littleware had set up to pull in optional run-time dependencies. With Gradle the default behavior of the java plugin is to include all compile-time dependencies at run-time - which is what most people expect. Littleware's ant-ivy setup excluded optional dependencies (like mysql-connector versus postgresql - something like that), and included add-on IVY configurations like "with_mysql", "with_postgres", "with_aws", ... that kind of thing, so a client project would have a dependency specification something like this:

<dependency org="littleware" name="littelware" ... conf="compile->compile;runtime->runtime,with_mysql" />

Of course nobody else works like that, so in the interest of clarity and doing what people expect, I'm going to try to rework things so that littleware's base jar includes service provider interfaces (SPIs) that add-on modules implement. A client that wants to hook up littleware's ability to wire a MySQL DataSource into a Guice injector would then add a dependency on a 'littleware_mysql_module' to its build - something like that. The Jackson JSON parser implements a module system like this, and the SPI pattern is all over java-EE (JDBC, Servlets, whatever); it's a good and widely understood pattern. We'll see how it goes.



Sunday, June 12, 2016

Decouple UX and Services API's in Single Page Apps

An old-school PHP, Struts, JSF, Rails, what-have-you webapp was deployed with a strong binding between the application's UX and its backend services. A typical "3 tier" application would involve some kind of server-side MVC framework for generating HTML, plus some business objects that managed the storage and interpretation of data stored in a SQL database - and that was often the whole shebang.

UX in a modern single page application (SPA) is implemented in javascript that accesses backend microservices via AJAX calls to REST API's. Unfortunately there is still a tendency in many projects to combine the web UX with a backend service in a single project - maybe implement an old-school server-side login flow, then dynamically generate an HTML shell that defines the authenticated context in some javascript variables, and a call to the javascript app's bootstrap routine. I am not a fan of this way of structuring an application.

I like the "app shell" approach to structuring a SPA - where the web UX is its own separate project - a UI application that accesses backend micro-services. There are various advantages to this approach, but the immediate benefit is that it simplifies the UX team's life - they can use whatever tools they want (gulp, less, typescript, bower, jslint, ...), develop locally with a simple nodejs+express server and maybe some docker containers providing backend services. The developers can focus on UX and design, and do not have to deal with whatever backend technology might be in use (play, maven, tomcat, rails, whatever, ...). The app-shell development process closely resembles the dev process for a native Android or iOS app.