Sunday, October 16, 2016

Dynamic karma.conf.js for Jenkins

Karma-runner is a great way to run Jasmine JavaScript test suites. One trick that makes it easy to customize Karma's runtime behavior is to take advantage of the fact that Karma's config file is JavaScript - not just JSON - so it's easy to wire up karma.conf.js to change Karma's behavior based on an environment variable.

For example, when run interactively Karma can watch files and rerun the test suite in Chrome when a file changes; but when Jenkins runs Karma, it should run through the test suite once in PhantomJS, then exit. Here's one way to set that up.

First, if you generate your Karma config file using karma init, and wire up the file to run your test suite in Chrome and watch files for changes, then you wind up with a karma.conf.js (or whatever.js) file structured like this:

module.exports = function(config) {
    config.set( { /* bunch of settings */ } );
}

To add the Jenkins-PhantomJS support, just pull the settings object out to its own variable, add a block of code that overrides the settings when your favorite environment variable is set, and configure Jenkins to set that environment variable ...

module.exports = function(config) {
    var settings = { /* bunch of settings */ },
        i, overrides = {};
    if ( process.env.KARMA_PHANTOMJS ) {  // jenkins is running karma ...
        overrides = {
            singleRun: true,
            reporters: ['dots', 'junit' ],
            junitReporter: {
                outputFile: 'test-result.xml'
            },
            browsers: [ 'PhantomJS', 'PhantomJS_custom'],
            customLaunchers: {
                PhantomJS_custom: {
                    flags: [ '--load-images=true'], 
                    options: {
                        windowName: 'my-window'
                    },
                    debug:true
                }
            }
         };
    }
    // copy the Jenkins overrides (if any) on top of the interactive settings
    for ( i in overrides ) {
        settings[i] = overrides[i];
    }
    config.set( settings );
}

Jenkins can run Karma test suites with a shell script, and Jenkins' JUnit plugin harvests and publishes the test results; works great!

Saturday, September 24, 2016

JAAS vs AWS Security Policy vs RBAC vs Firebase vs ACL vs WTF?

I've been thinking a bit about authentication and authorization lately, and while I may have my head semi-wrapped around authentication - authorization is kicking my ass a bit. Here's my understanding as of today.

First, there's something called "role-based access control" (RBAC) that pops up a lot, and is embodied in one way in the Java EE specification. The basic idea behind RBAC is that a user (or a user group) is assigned a "role" that represents, explicitly or implicitly, a basket of permissions within an application. For example - an enterprise SaaS product that I work on at my day job defines "super user", "admin", and "standard" user roles under each customer's account. Each user is assigned a role that gives the user a certain basket of global permissions (this user can perform action X on any asset).
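
To make the idea concrete, here's a minimal sketch of RBAC in JavaScript (the role and permission names are made up for illustration - this is not code from the product): the user's role alone decides what she may do, independent of which asset is involved.

var rolePermissions = {
    superuser: [ 'read', 'write', 'delete', 'manageAccount' ],
    admin:     [ 'read', 'write', 'delete' ],
    standard:  [ 'read', 'write' ]
};

// the check never looks at the asset - the role carries the same
// basket of global permissions across every asset in the account
function canPerform( user, action ) {
    return ( rolePermissions[ user.role ] || [] ).indexOf( action ) >= 0;
}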

I was somewhat confused when I discovered that AWS defines an IAM role as a non-person user (or Principal in the JAAS world). It turns out that AWS implements something more powerful than the RBAC system I'm semi-familiar with - AWS builds up its security model around security policy specifications that can either be assigned to a user or group of users (like an RBAC role) or to an asset (like an ACL). When deciding whether a particular user may perform a particular action on a particular asset the AWS policy engine evaluates all applicable policies to come to a decision.
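
Here's a rough sketch of that evaluation model (my own illustration in JavaScript, not the actual AWS implementation): policies may be attached to the principal or to the asset, the engine gathers all of them, and an explicit deny wins over any allow.

// each policy carries an effect ('Allow' or 'Deny'), the actions it covers,
// and the assets it applies to
function isAllowed( principal, action, asset ) {
    var decisions = principal.policies.concat( asset.policies )
        .filter( function( policy ) {
            return policy.actions.indexOf( action ) >= 0 &&
                   policy.resources.indexOf( asset.id ) >= 0;
        })
        .map( function( policy ) { return policy.effect; } );
    if ( decisions.indexOf( 'Deny' ) >= 0 ) { return false; }  // explicit deny always wins
    return decisions.indexOf( 'Allow' ) >= 0;                  // otherwise default deny
}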

I have not worked at all with Google's Firebase, but it seems to implement a less powerful (compared to AWS IAM policies) but simpler access control mechanism that splits the difference between RBAC and ACLs via a policy specification that grants and restricts permissions on an application's tree of assets based on XPath-like selection rules.
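
For example, a Firebase-style rules document restricting a per-user subtree might look something like this (a hypothetical sketch based on my reading of the docs, not rules from a real project):

{
    "rules": {
        "users": {
            "$uid": {
                ".read":  "auth != null && auth.uid === $uid",
                ".write": "auth != null && auth.uid === $uid"
            }
        }
    }
}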

One thing I appreciate about Firebase's design is that it puts an access control mechanism in the application developer's hands: a team building an application on Firebase can take advantage of Firebase's access control system to regulate how the application's users access the application's assets.

On the other hand, the IAM policy tools in AWS provide a powerful mechanism for regulating what actions different team members may take on AWS resources (S3 buckets, EC2 VMs, whatever) within a shared AWS account where the team deploys some application, but it's not clear to me how that team could leverage IAM's security policy engine within the team's own application to control how the application's (non-IAM) users may interact with the application's (non-AWS) assets. The AWS Cognito service seems to point in the direction of leveraging AWS policy within an application, but in an application architecture where business logic is implemented in the client and the client interacts directly with the data store. I'm still a member of the old school that thinks the client should implement UX, and access APIs that implement the business logic and validation that precedes data manipulation.

Sunday, August 07, 2016

Custom JUnit4 TestRunner with Guice

It's easy to write your own JUnit test runner. A developer on the Java platform often writes code in the context of some framework or container platform (Spring, Play, Java EE, OSGi, Guice, whatever) that provides facilities for application bootstrap and dependency injection. When at some point she wants to write a JUnit test suite for some subset of code, the developer is faced with the problem of how to recreate the application framework's runtime context within JUnit's runtime context - which assumes a simple no-argument constructor for test-harness classes.

Fortunately - JUnit provides a serviceable solution for augmenting the test suite runtime context in the form of test runners. A developer uses an annotation on his test class to indicate that the tests require a custom runner. For example - Spring provides its SpringJUnit4ClassRunner, which allows a test developer to use Spring annotations in her test suites - like this:

@RunWith(SpringJUnit4ClassRunner.class)
public class WhateverTest {

     @Resource(name="whatever")
     Whatever service;

     @Test
     public void testSomething() {
         assertTrue( ... );
     }
     ...
}



The Littleware project includes a system that extends Guice with a simple module and bootstrap mechanism. I was able to integrate it with JUnit via a simple LittleTestRunner that follows the same trick Spring's test runner uses: extend JUnit's default BlockJUnit4ClassRunner and override the createTest method - easy cheesey:

package littleware.test;

import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;
import littleware.bootstrap.LittleBootstrap;
import org.junit.runners.BlockJUnit4ClassRunner;
import org.junit.runners.model.InitializationError;

/**
 * JUnit4 test runner enabled with littleware.bootstrap Guice bootstrap and
 * injection. Imitation of SpringJUnit4ClassRunner:
 *
 */
public class LittleTestRunner extends BlockJUnit4ClassRunner {
    private static final Logger log = Logger.getLogger(LittleTestRunner.class.getName());

    /**
     * Disable BlockJUnit4ClassRunner test-class constructor rules
     */
    @Override
    protected void validateConstructor( List<Throwable> errors ) {}
    
    /**
     * Construct a new {@code LittleTestRunner} and initialize a
     * {@link LittleBootstrap} to provide littleware testing functionality to
     * standard JUnit tests.
     *
     * @param clazz the test class to be run
     * @see #createTest()
     */
    public LittleTestRunner(Class<?> clazz) throws InitializationError {
        super(clazz);
        if (log.isLoggable(Level.FINE)) {
            log.log(Level.FINE, "constructor called with [{0}]", clazz);
        }
    }

    /**
     * This is where littleware hooks in
     * 
     * @return an instance of getClass constructed via the littleware managed Guice injector
     */
    @Override
    protected Object createTest() {
        try {
            return LittleBootstrap.factory.lookup(this.getTestClass().getJavaClass());
        } catch ( RuntimeException ex ) {
            log.log( Level.SEVERE, "Test class construction failed", ex );
            throw ex;
        }
    }
}

Now I can write tests in Guice's "inject dependencies into the constructor" style - like this:


/**
 * Just run UUIDFactory implementations through a simple test
 */
@RunWith(LittleTestRunner.class)
public class UUIDFactoryTester {

    private final Provider<UUID> uuidFactory;

    /**
     * Constructor stashes UUIDFactory to run test
     * against
     */
    @Inject
    public UUIDFactoryTester(Provider<UUID> uuidFactory) {
        this.uuidFactory = uuidFactory;
    }

    /**
     * Just get a couple UUIDs, then go back and forth to the string
     * representation
     */
    @Test
    public void testUUIDFactory() {

       //...
    }
}

Tuesday, July 19, 2016

Gradle with "Service Provider" Pattern

I've started dusting off my old littleware project over the last month or so on a "dev" branch in GitHub. As part of the cleanup I'm moving littleware from its old ANT-IVY build system to Gradle. When I started with ant-ivy it was a great alternative to Maven, and it allowed me to extend the build setup put in place by a NetBeans IDE project with transitive dependencies, Jenkins integration, Scala support - all that great stuff. The problem is that the ant-ivy setup is idiosyncratic, while Gradle and SBT are well designed, widely used systems. I wound up going with Gradle: I've read a lot of good things about it and want to become more familiar with it, I had used SBT a little in the past and found its DSL annoying and its extension mechanisms inscrutable, and of course Google is using Gradle to build Android ...

Anyway - like I said, I'm dusting off the code and trying to remember what the hell I was thinking with some of it, but so far the Gradle integration has gone well. One of the issues I'm dealing with is the Ivy-centric mechanism littleware had set up to pull in optional run-time dependencies. With Gradle the default behavior of the Java plugin is to include all compile-time dependencies at run-time - which is what most people expect. Littleware's ant-ivy setup excluded optional dependencies (like mysql-connector versus postgresql - something like that), and included add-on Ivy configurations like "with_mysql", "with_postgres", "with_aws" ... that kind of thing, so a client project would have a dependency specification something like this:

<dependency org="littleware" name="littleware" ... conf="compile->compile;runtime->runtime,with_mysql" />

Of course nobody else works like that, so in the interest of clarity and doing what people expect, I'm going to try to rework things so that littleware's base jar asset includes service provider interfaces (SPIs) that add-on modules implement. A client that wants to hook up littleware's ability to wire a MySQL DataSource into a Guice injector would then add a dependency on the 'littleware_mysql_module' to their build - something like that. The Jackson JSON parser implements a module system like this, and the SPI pattern is all over Java EE (JDBC, Servlets, whatever); it's a good and widely understood pattern. We'll see how it goes.



Sunday, June 12, 2016

Decouple UX and Services APIs in Single Page Apps

An old-school PHP, Struts, JSF, Rails, what-have-you webapp was deployed with a strong binding between the application's UX and its backend services. A typical "3 tier" application would involve some kind of server-side MVC framework for generating HTML, and some business objects that would manage the storage and interpretation of data stored in a SQL database - and that was often the whole shebang.

UX in a modern single page application (SPA) is implemented in JavaScript that accesses backend microservices via AJAX calls to REST APIs. Unfortunately there is still a tendency in many projects to combine the web UX with a backend service in a single project - maybe implement an old-school server-side login flow, then dynamically generate an HTML shell that defines the authenticated context in some JavaScript variables along with a call to the JavaScript app's bootstrap routine. I am not a fan of this way of structuring an application.

I like the "app shell" approach to structuring a SPA - where the web UX is its own separate project - a UI application that accesses backend micro-services. There are various advantages to this approach, but the immediate benefit is that it simplifies the UX team's life - they can use whatever tools they want (gulp, less, typescript, bower, jslint, ...), develop locally with a simple nodejs+express server and maybe some docker containers providing backend services. The developers can focus on UX and design, and do not have to deal with whatever backend technology might be in use (play, maven, tomcat, rails, whatever, ...). The app-shell development process closely resembles the dev process for a native Android or iOS app.


Sunday, May 15, 2016

Problems with Cookies (session management with multiple authentication providers)


The web project I work on has an old-school login design - prompt for a username and password, check the hash in the database, and set a cookie - that we would like to improve on to support third-party API access and multiple external authentication providers (OpenID Connect, OAuth2, SAML).

First, just setting an auth cookie sucks. Suppose a visitor to a web site has two login accounts, A and B. He logs into account A in one tab, then, in another tab, signs out of the first account and signs into account B. Now if the first tab is running a "single page application" JavaScript app, and the JavaScript code assumes that it is either signed into a valid session or signed out, then the code in the first tab can wind up inadvertently issuing updates to account B that were intended for account A. An application should not rely on a cookie to track its session, since that cookie can be changed out from under the app by code running in another tab; the app should instead pass some X-auth header or use some similar trick to explicitly specify which session it intends to communicate in.
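
Here's a sketch of what I mean (the header name and endpoint are hypothetical): the app holds on to the token for the session it established at sign-in and sends it explicitly with every request, so a cookie swapped out by another tab can't silently redirect an update to the wrong account.

// sessionToken identifies the session this instance of the app signed into
function saveAccountSettings( sessionToken, accountId, update ) {
    return fetch( '/api/accounts/' + accountId, {
        method: 'PUT',
        headers: {
            'X-Auth': sessionToken,
            'Content-Type': 'application/json'
        },
        body: JSON.stringify( update )
    });
}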

Second, the old approach of prompting the visitor for a username and password that are validated against an app-maintained (JAAS-style) authentication database also sucks. A modern app should support multiple identity providers, so that a visitor can authenticate against an identity provider of her choice to establish a login session. The app runs a session management service that manages the authentication handshake with the different identity providers. This kind of infrastructure works equally well for both native and web apps.

Anyway - that's my current thinking on the subject of authentication and session management. Design-time ideas like these rarely survive implementation of working code.

Monday, February 15, 2016

Docker and Application Versioning


When transitioning from a virtual machine based deployment model to a container based model there are at least two ways to version an application - versioning the container image or versioning packages that the container downloads at startup.

Incrementing the container image version has the advantage that a cluster administrator can easily determine which version of an application is deployed by querying which version of the image the container is running. Similarly the cluster admin can automate upgrades and rollbacks with scripts that spin up and bring down different versions of a container. The disadvantages of this approach include the need to integrate automated image builds into the continuous integration pipeline (Jenkins or whatever), and the proliferation of dev-only and broken image versions within the Docker registry.

As a first step toward deploying applications with containers, the project I'm involved with is moving toward infrequently publishing a container image for each application that contains the application's envelope dependencies. We take advantage of the Chef scripts we already have for building virtual machines by running chef-client (in chef-zero local mode) during the image build, and wiring up the image with an entry point that once again runs chef-client to update the container at startup time. The Chef scripts download the latest versions of the application and its dependencies into the container as RPM packages from the YUM repositories that our continuous integration pipeline already maintains to support our virtual machine devops workflow.

Sunday, January 31, 2016

JavaScript decomposition with bower's shorthand resolver


We started using bower to manage third-party dependencies in our projects, and realized that we could also use bower to help decompose our JavaScript applications into re-usable, independently tested components. We started out with a lazy approach where we set up a jscommon/ folder under which we installed our different components (jscommon/A/, jscommon/B/, jscommon/C/, ...), where each component might have its own build and test scripts - whatever it needs.

At first we represented dependencies between components with relative file paths in bower.json files, so if C depended on B and A, then it might have a bower.json file like this:

{
    ...
    "dependencies" : {
        "A" : "../A",
        "B" : "../B"
    }
}

Of course - that quickly falls apart when an application's bower.json file has a different relative path to the jscommon/ folder, but using a shorthand resolver solves the problem. An application (or test or whatever) registers a shorthand resolver in a .bowerrc file with the appropriate relative path like this:

{
    ...
    "shorthand-resolver" : "../../{{shorthand}}"
}

and then we specify local dependencies in bower.json with shorthands like this:

{
    ...
    "dependencies" : {
        "A" : "jscommon/A",
        "B" : "jscommon/B"
    }
}