Sunday, July 04, 2021

tips for ardour on linux

Problem and Audience

A broad selection of digital audio workstation (DAW) software is available for Mac and Windows computers, including Logic Pro, Ableton Live, and GarageBand. These popular DAWs are not available on Linux (which I run on my laptop), but several good DAWs do run on Linux - including Ardour. Ardour has a rich set of features, and I have enjoyed learning to use it for my simple podcast and music recording needs. Ardour is open source (GPL) software, and I started with the free version available from Ubuntu's package repository. I was so impressed with the software that I eventually made a contribution on ardour.org to support the team's development efforts and get access to their more recent binary releases.

Here are a few tips and tricks I picked up over my first week experimenting with Ardour. First, I installed the JACK audio server. Ardour also supports using the ALSA kernel API directly. I know nothing about audio software, but it seems that JACK (think jacking your guitar into an amp) is the best way to route sound streams between applications on Linux, although a lot of software still relies on the older PulseAudio sound server.

$ sudo apt install jackd2 -y

Next, Ardour wants to lock its memory pages into RAM (to avoid page faults during realtime audio processing), so I raised my system's limit on locked memory. On my Ubuntu laptop I did the following:

$ ulimit --help | grep -- -l
      -l    the maximum size a process may lock into memory

$ sudo tee /etc/security/limits.d/audio.conf > /dev/null <<EOM
# Provided by the jackd package.
#
# Changes to this file will be preserved.
#
# If you want to enable/disable realtime permissions, run
#
#    dpkg-reconfigure -p high jackd

@audio   -  rtprio     95
@audio   -  memlock    unlimited
#@audio   -  nice      -19
EOM

$ sudo usermod -a -G audio $USER

# log-out and log back in for the change to take effect
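
After logging back in, it's worth sanity-checking that the new limits actually apply to your session (a quick check, assuming a POSIX shell):

```shell
# verify the locked-memory limit for this shell session -
# it should report "unlimited" once the audio.conf limits apply
ulimit -l

# confirm your user picked up the audio group membership
id -nG | grep -qw audio && echo "in audio group" || echo "log out and back in"
```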

I am currently just recording with the laptop's built-in microphone, and it turns out (on my laptop anyway) that the microphone's input volume is a little low to pick up a guitar playing nearby. Ubuntu 20 includes Gnome's Settings app, which in turn includes a sound-settings tool with a slider for adjusting the microphone's input level.

Finally, when recording an audio track via the laptop's microphone, we can prevent feedback from the track's monitor (the laptop's speakers) by either monitoring a null input or sending the monitor output to a device that does not feed back into the microphone (like headphones). An Ardour audio track's mixer can be configured to monitor the track's audio input (the microphone) or its disk input (the files where samples are stored). Configuring the mixer to monitor the disk input effectively sends a null signal to the monitor when recording a new sample. Using headphones or muting the speakers also prevents feedback.

Down the Gear Rabbit Hole

I first became interested in audio production after watching some studio-setup videos on YouTube guitar channels like Paul Davids and Rhett Shull. Suppose you want to record a little guitar performance to an mp3 - how would you do that? First, you can just record directly to an audio-capture app on your phone, tablet, or computer. The cost for that is $0 - great!

Next, you want to be able to edit your recording, maybe record some commentary, maybe publish a podcast. You can use Garage Band or Ardour or some other inexpensive DAW software - cost for that is $0 to $50 - great!

Maybe you now want to get a good microphone. You could get a USB microphone to record a single channel directly into the DAW - something like the RODE NT USB looks pretty awesome for $170.

One problem with a USB microphone is that you can't use it as an input to an amp or a PA, and I'm under the impression that things get a little messy if you try to plug two or more USB mics into a computer to record multiple channels simultaneously. A better option might be to buy a digital audio interface - something like this Focusrite is $170 - then get a couple of regular microphones - this Rode NT1 kit includes a mic mount and pop filter for $269 - and don't forget a mic stand - Amazon Basics has one for $18.

Of course now that you have a couple nice microphones, then you'll want to get some good monitor speakers (maybe $100 or $200) and headphones (these are $30) to plug into that audio interface. You could then record loops to background tracks, and perform along by playing the background tracks through the monitors!

Now you're on your way to spending $1000 and who knows how much time on audio production, and you think - what kind of cameras and software would you need to capture video to publish to Youtube? Plus, maybe you need a nicer acoustic guitar with an audio pickup you can plug directly into the audio interface? Or an electric guitar with an amp? Or maybe try software modeling amps in the DAW? But you really suck at playing guitar. But recording a performance is a great way to practice an instrument - forces you to get it right. That microphone is really entry level - a better one would show how you suck much more clearly. Maybe you should get a keyboard to control MIDI synthesizers in the DAW - how does that work anyway? You need a bigger room for all this gear - maybe get an acoustic treatment for the room; make it a real little studio. You should really get a dedicated computer for production.

And they got you!

Summary

Ardour is a nice DAW package for Linux that is easy to get started with.

gradle to sbt for scala3

Problem and Audience

Scala 3 is an overhaul of the guts of the Scala language's type system and compiler that was recently released (in mid 2021) for general use. Unfortunately, the Scala plugin for gradle does not yet support the new Scala 3 tool chain, so we ported littleware to Scala's sbt build tool.

Porting a gradle build to an sbt build is straightforward. Both gradle and sbt define a graph of tasks for building projects, and both extend their task library (the types of tasks available) via third-party plugins. Both systems define the task instances in a project's build graph with a user-supplied build file written in a DSL that makes API calls against the build runtime. For example, the build definitions for the littleware sub-project in gradle (build.gradle) and sbt (build.sbt) are similar to each other:

sbt:

lazy val littleware = project
  .in(file("littleware"))
  .settings(
    name := "littleware",
    crossPaths := false,
    autoScalaLibrary := false,
    libraryDependencies ++= Seq(
      junit,
      guice,
      guava,
      "javax.mail" % "javax.mail-api" % "1.5.5",
      "javax" % "javaee-web-api" % "8.0",
      "org.apache.derby" % "derby" % "10.15.2.0",
      "org.apache.derby" % "derbyclient" % "10.15.2.0",
      "org.postgresql" % "postgresql" % "42.2.18",
      "mysql" % "mysql-connector-java" % "8.0.23",
      "org.javasimon" % "javasimon-core" % "4.2.0",
      "javax.activation" % "activation" % "1.1.1"
    ) ++ junitRunnerSet ++ log4jJsonSet,
  )
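
The build.sbt above references a few helper vals (junit, guice, guava) and dependency sequences (junitRunnerSet, log4jJsonSet) that are defined elsewhere in the build file. Their definitions would look roughly like the following sketch - the junit, guice, and guava versions mirror the gradle file below, while the contents of the two sequences are just illustrative guesses, since the post does not show them:

```scala
// shared dependency definitions referenced by the project settings above
lazy val junit = "junit" % "junit" % "4.13.2"
lazy val guice = "com.google.inject" % "guice" % "4.2.3"
lazy val guava = "com.google.guava" % "guava" % "30.1-jre"

// hypothetical contents - the actual sets are defined elsewhere in the build
lazy val junitRunnerSet = Seq(
  "com.novocode" % "junit-interface" % "0.11" % Test
)
lazy val log4jJsonSet = Seq(
  "org.apache.logging.log4j" % "log4j-core" % "2.14.1",
  "org.apache.logging.log4j" % "log4j-layout-template-json" % "2.14.1"
)
```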

gradle:

project( ':littleware' ) {
    dependencies {
        implementation 'com.google.inject:guice:4.2.3:no_aop@jar'
        implementation 'junit:junit:4.13.2'
        implementation 'com.google.guava:guava:30.1-jre'
        compileOnly 'javax.mail:javax.mail-api:1.5.5'
        implementation 'javax:javaee-web-api:8.0'
        compileOnly 'org.apache.derby:derby:10.15.2.0'
        compileOnly 'org.apache.derby:derbyclient:10.15.2.0'
        compileOnly 'org.postgresql:postgresql:42.2.18'
        compileOnly 'mysql:mysql-connector-java:8.0.23'
        implementation 'org.javasimon:javasimon-core:4.2.0'
        runtimeOnly 'javax.activation:activation:1.1.1'
    }
}

I am glad that the port to sbt was straightforward, but I'm also annoyed that Scala has its own build tool, sbt, rather than simply investing in gradle - which is widely used for building java, Android, and kotlin projects. It would be nice if I could just learn gradle and use it to build Scala 3 too. The same could be said for golang and dotnet, which also implement their own build tools rather than use gradle - although gradle depends on the JVM, so gradle makes more sense for languages in that ecosystem. There is a trade-off between building a custom tool chain that is finely tuned for a particular domain and using a more generic system that has a large community of users. I expect gradle to support Scala 3 in a few months anyway, so we will soon have the best of both worlds.

Summary

It was easy to port littleware's gradle build to sbt, since the two systems share a similar task-graph design.

Wednesday, June 23, 2021

Little UX Guidelines

Problem and Audience

A good understanding of a system's UX design drives the architecture of the underlying software that implements that design. The CSS rules, the site map and navigation experience, and the javascript component hierarchy and state management all rely on the developer's mental model of the design she is implementing. Unfortunately, many software developers like myself struggle with UX design. I have held wrong ideas about the relationship between design and software - like believing that design is a separate, less technical (and less valuable) process from software development, or that an arbitrary design can be layered on top of a web site after it is built (I'll transition to hugo and slap a nice hugo theme on the site; we'll just change the CSS; we'll build a skin-able system). In fact, it is difficult to build a web site with a consistent overall UX design implemented in a way that can evolve over time and support simple user customization (like a dark theme) while maintaining a comprehensible code base. The good thing about being bad at web design is that there are many opportunities to learn and improve. The bad thing about being bad at web design is that my site sucks - which is the only thing a user cares about.

Design and developer teams need to work together to agree on a mental model for a site's structure and behavior, then codify that model in UX guidelines. Implementing UX guidelines is an evolutionary process that yields artifacts like documentation explaining the high level concepts of the design, tutorials, how-tos, design tools, CSS baselines, component libraries, and SOPs for the processes that shape the teams' daily work.

The UX guidelines for a large organization can become a sprawling manifesto (like Google's material design or Apple's human interface guidelines), but it doesn't have to be complicated for small teams. The important task for the design and dev teams is to come up with a way to effectively communicate and record the ideas that connect design to development in UX guidelines, and to agree on a contract that a design and its implementation must both conform to the guides. For example, if the UX guide defines four high level page elements (navigation, content, whitespace, and actions), then a designer should not introduce a new type of element (media player, user documentation, feedback form) without also working through a process to extend the UX guide and its surrounding tools. Anyway, that's my thinking as of this morning, and this document is a small beginning for littleware's UX guide.

The "Bla Guide" model may work well for managing the interaction between other teams as well. It is easy to imagine security, infrastructure and operations, HR, product management, and QA guidelines that are similar to UX guides in their complexity, tooling, and evolution. Inevitably we will need "guidelines for guidelines".

Littleware UX Guidelines

Elements of a page

The elements of a littleware web page may each be classified as either content, metadata, whitespace, or actions. The content is the information that the page wants to present to the user, or more generally where the page engages in a conversation with the user. The content of a blog post would be the blog's essay. The content of a feedback form would be the form. The content of a data dashboard would be the data presentation.

Actions are elements like buttons and forms that present a call to action to the visitor. The "Add to Basket" button on a product detail page is an action, and so is the "enter your e-mail to download our marketing pdf" form on a CRM teaser page. An action is usually a child element of an enclosing content block.

Metadata presents non-content information on topics like the site, page content, author, or publisher. The navigation elements in the page header are metadata, and so are the various "About us" links in the footer. Metadata should be easy to access and understand, but it should not distract from the content.

Whitespace is the empty space that separates content, metadata, and action blocks.

CSS variables for page elements

Littleware's base style helper defines a series of CSS properties (variables) and rules for rendering different elements of a page with a consistent color and font scheme based on the element type. A site may override these variables to define its own style.
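
For example, a site could opt into a simple dark theme by overriding a few of the base variables (a hypothetical override - the variable names are littleware's, but the colors here are just for illustration):

```css
/* site-level override - flips the base colors for a dark theme */
:root {
    --lw-primary-text-color: #e8e8e8;
    --lw-primary-bg-color: #1e1e22;
    --lw-secondary-bg-color: #2a2a2e;
    --lw-whitespace-bg-color: #121214;
}
```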

Content regions should define style rules with the --lw-primary-text-color, --lw-primary-bg-color, and --lw-primary-font-family variables. We assume section (<section>) blocks hold content, and we define separate CSS classes to allow different background and border colors for different content blocks - lw-section-block1, lw-section-block2, etc. We decided that using a different background color (or even a gradient) for content sections was too distracting, so we use color in more subtle ways like applying it to the bottom border of content sections and the border of content tiles. Since CSS properties cascade in a cool way, the different lw-section-block... CSS classes can each define its own border color property (--lw-section-border-color: var(--lw-sec1-border-color);) that contained elements like tiles can leverage.

:root {
    --lw-primary-text-color: #222222;
    --lw-primary-bg-color: #fefefe;
    --lw-secondary-bg-color: #fafafa;
    --lw-primary-font-family: 'Oswald script=all rev=4', Verdana, sans-serif;
    --lw-sec1-border-color: #bb38b7;
    --lw-sec1-bg-gradient: linear-gradient(var(--lw-primary-bg-color), #fad7f6);
    --lw-sec2-border-color: #0bf749;
    --lw-sec2-bg-gradient: linear-gradient(var(--lw-primary-bg-color), #f1fff1);
    ...
}

...

section {
    font-family: var(--lw-primary-font-family);
    background-color: var(--lw-primary-bg-color);
    color: var(--lw-primary-text-color);
    padding: 10px 5px;
}

.lw-section-block1 {
    font-family: var(--lw-primary-font-family);
    --lw-section-border-color: var(--lw-sec1-border-color);
    border-bottom: thin solid var(--lw-section-border-color);
    min-height: 100px;
    background-color: var(--lw-primary-bg-color);
}

.lw-section-block1_gradient {
    background: var(--lw-sec1-bg-gradient);
    background-color: var(--lw-primary-bg-color);
}

...

/*--- rules for tiles ---- */

.lw-tile-container {
    display: flex;
    flex-wrap: wrap;
    background-color: var(--lw-whitespace-bg-color);
}

.lw-tile {
    width: 300px;
    height: 250px;
    padding: 10px;
    margin: 10px;
    border-radius: 5px;
    border: solid thin var(--lw-section-border-color);
    overflow: hidden;
    background-color: var(--lw-primary-bg-color);
}

The set of CSS rules for metadata-type elements has its own font-family and color scheme. A background color gradient helps distinguish metadata blocks from the content elements that a visitor should focus on.

:root {
    ...
    --lw-secondary-text-color: #777;
    --lw-secondary-bg-color: #fafafa;
    --lw-header-background-color: var(--lw-primary-bg-color);
    --lw-secondary-font-family: 'Noto Sans', sans-serif;
    --lw-nav-border-color: #0BDAF7;
    --lw-nav-bg-gradient: linear-gradient(var(--lw-header-background-color), #f0fdff);
    ...
}

...

h1,h2,h3,h4 {
    color: var(--lw-secondary-text-color);
    font-weight: normal;
    font-family: var(--lw-secondary-font-family);
    margin-top: 10px;
    margin-bottom: 10px;
}

header {
    font-family: var(--lw-secondary-font-family);
    background-color: var(--lw-secondary-bg-color);
    color: var(--lw-secondary-text-color);
}

footer {
    font-family: var(--lw-secondary-font-family);
    background-color: var(--lw-secondary-bg-color);
    color: var(--lw-secondary-text-color);
}

.lw-nav-block {
    font-family: var(--lw-secondary-font-family);
    border-bottom: thin solid var(--lw-nav-border-color);
    background-color: var(--lw-secondary-bg-color);
}

.lw-nav-block_gradient {
    background: var(--lw-nav-bg-gradient);
    background-color: var(--lw-secondary-bg-color);
}

...

Finally, the whitespace separating different content and metadata blocks has its own background color to clarify the page structure for the user.

:root {
    --lw-whitespace-bg-color: #f2f2f4;
    ...
}

...

body {
    ...
    background-color: var(--lw-whitespace-bg-color);
}

...
/*--- rules for tiles ---- */

.lw-tile-container {
    display: flex;
    flex-wrap: wrap;
    background-color: var(--lw-whitespace-bg-color);
}

...

The Rotating Hamburger and OG Javascript

We added a hamburger menu to the header of https://apps.frickjack.com to allow a visitor to easily navigate between the different parts of the site. I like the CSS animation that rotates the hamburger to an "X" when opening, then back to a hamburger when closing. We implement that hamburger and the other drop-down menus on the site with a lw-drop-down web component that wraps the purecss menu.

The lw-drop-down web component takes advantage of some of the drop-down and hamburger example code from the purecss web site. The sample code is written in an old-school jQuery style where the code keeps all its state in the DOM by toggling the CSS classes attached to different elements. For example, when the user clicks on the hamburger, the javascript event listener directly modifies the CSS classes on different DOM elements. Bootstrap is a popular framework with components that rely on this style of code.

We intend to refactor our lw-drop-down code to a more modern MVC (or component) style that tracks the UI state in javascript variables that drive a render template. For example, when a user clicks on the hamburger, a javascript event listener modifies the javascript variables that feed a template system that manipulates the DOM. React, Angular, Ember, and Vue follow this pattern.
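
A minimal sketch of the difference (hypothetical code, not the actual lw-drop-down implementation): the open/closed state lives in a javascript variable, and a render function derives the markup from that state, rather than reading CSS classes back out of the DOM. The class names below are made up for illustration.

```javascript
// MVC-style sketch: a javascript state variable drives a render template
const state = { open: false };

function render(state) {
  // everything is derived from state - the DOM is never the source of truth
  const burgerClass = state.open ? "lw-burger lw-burger_x" : "lw-burger";
  const menuClass = state.open ? "lw-menu lw-menu_open" : "lw-menu";
  return `<button class="${burgerClass}"></button><nav class="${menuClass}"></nav>`;
}

function toggle() {
  // the click listener only updates state, then re-renders
  state.open = !state.open;
  return render(state);
}

console.log(render(state)); // closed hamburger
console.log(toggle());      // rotated "X" with the menu marked open
```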

Hugo Shortcodes for Content Tiles

Hugo shortcodes provide a mechanism to safely embed custom html into the markdown files that a hugo content author works with. We provide simple tilecanvas and tile shortcodes to allow an author to indicate that her content may be presented as tiles. The shortcodes are defined in the "littleware" hugo theme under the little-apps github repo.

tilecanvas:

<div class="lw-tile-container">
    {{ .Inner }}
</div>

tile:

<div class="lw-tile">
    {{ .Inner | markdownify }}
</div>
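
In a content markdown file, an author combines the two shortcodes like this (a usage sketch following hugo's standard paired-shortcode syntax; the tile text is just an example):

```markdown
{{< tilecanvas >}}
  {{< tile >}}
**First tile** - some markdown content.
  {{< /tile >}}
  {{< tile >}}
**Second tile** - more markdown content.
  {{< /tile >}}
{{< /tilecanvas >}}
```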

Summary

UX designers and software developers need to clearly communicate UX guidelines that establish a shared mental model for how to describe and implement the user experience.

Thursday, June 10, 2021

Jamstack Cloudformation

Problem and Audience

A simple cloudformation template makes it easy to stamp out AWS infrastructure for a jamstack web site. A jamstack site is a web site composed of static presentation (non-API) resources (html, css, javascript) assembled at build time - as opposed to a site that requires some kind of dynamic server side rendering of resources at request time. A nice feature of this architecture is that it allows a site to be served inexpensively from a serverless object storage system like S3. We manage https://apps.frickjack.com by copying presentation resources to an S3 bucket that acts as an origin for a cloudfront distribution.

We originally set up the infrastructure for https://apps.frickjack.com by clicking around the AWS web console (like this), but we want to tear down that infrastructure and move to cloudformation-managed infrastructure to realize the benefits of infrastructure as code, including:

  • it is less work to manipulate infrastructure by editing json files than clicking through the web console
  • a cloudformation template allows us to deploy multiple copies of our architecture (for different products, test environments, etc) in a consistent way
  • cloudformation templates capture best practices and institutional conventions, and allow infrastructure to evolve over time
  • tracking the cloudformation template and parameters in git gives an audit trail
  • cloudformation makes automation easy

Jamstack Requirements

We have a handful of requirements for our jamstack infrastructure. First we want the S3 bucket to remain private and encrypted. Even though the content of the bucket is publicly accessible via the cloudfront CDN, making the origin bucket conform to standard S3 best practices simplifies compliance, since we do not need to note exceptions to organization policies that expect a bucket to be private and encrypted. The cloudfront CDN is granted access to the private bucket via a bucket policy that gives read permission to an origin access identity associated with the cloudfront distribution.

Most of our other requirements are addressed by adjusting knobs on our cloudfront configuration. For example, we want all http traffic to be redirected to https. We use an input parameter to our cloudformation stack to associate an alias domain with the distribution - we manage the DNS setup for the alias in a separate Route53 stack. We use another input parameter to feed in the ARN of our ACM-managed TLS certificate. We configure cloudfront to require clients to use TLS 1.2 or better.

Finally, our little stack tools make it easy to follow the tagging conventions that we want to enforce across our infrastructure.

the little stack

We set up the following template (also in github) to start managing our jamstack infrastructure with cloudformation. The template takes advantage of the nunjucks extensions to cloudformation templates supported by our little stack automation.

One "gotcha" that we ran into: we originally intended to set up a new stack with the apps.frickjack.com domain alias already in use by our live CDN, copy our web content to the new bucket (by modifying our codebuild CI pipeline configuration), then update DNS to point the apps.frickjack.com domain at the new CDN. However, it turns out that cloudfront does not allow two distributions to share the same alias, so we set up our new CDN with a temporary alias, took a few minutes of downtime while we removed the apps.frickjack.com alias from the old CDN, then updated our new stack.

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Metadata": {
    "License": "Apache-2.0"
  },
  "Description":
    "Generalize AWS s3-cdn sample template from - https://github.com/awslabs/aws-cloudformation-templates/blob/master/aws/services/S3/S3_Website_With_CloudFront_Distribution.yaml",
  "Parameters": {
    "CertificateArn": {
      "Type": "String",
      "Description": "ACM Certificate ARN"
    },
    "DomainName": {
      "Type": "String",
      "Description": "The DNS name of the new cloudfront distro",
      "AllowedPattern": "(?!-)[a-zA-Z0-9-.]{1,63}(?<!-)",
      "ConstraintDescription": "must be a valid DNS zone name."
    },
    "BucketSuffix": {
      "Type": "String",
      "Description": "The suffix of the bucket name - prefix is account number",
      "AllowedPattern": "[a-zA-Z0-9-]{1,63}",
      "ConstraintDescription": "must be a valid S3-DNS name"
    }
  },
  "Resources": {
    "S3Bucket": {
      "Type": "AWS::S3::Bucket",
      "Properties": {
        "AccessControl": "Private",
        "BucketName": { "Fn::Join": [ "-", [ { "Ref" : "AWS::AccountId" }, { "Ref": "BucketSuffix" } ] ] },
        "BucketEncryption": {
          "ServerSideEncryptionConfiguration" : [ 
            {
              "BucketKeyEnabled" : "true",
              "ServerSideEncryptionByDefault" : {
                "SSEAlgorithm": "AES256"
              }
            }
          ]
        },        
        "WebsiteConfiguration": {
          "IndexDocument": "index.html",
          "ErrorDocument": "error.html"
        },
        "Tags": [
          {{ stackTagsStr }}
        ]
      }
    },
    "CloudFrontOriginIdentity": {
      "Type": "AWS::CloudFront::CloudFrontOriginAccessIdentity",
      "Properties": {
        "CloudFrontOriginAccessIdentityConfig": {
          "Comment": "origin identity"
        }
      }
    },
    "BucketPolicy": {
      "Type": "AWS::S3::BucketPolicy",
      "Properties": {
        "Bucket": { "Ref": "S3Bucket" },
        "PolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [
            { 
              "Effect": "Allow",
              "Principal": {
                "AWS": { "Fn::Sub": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity ${CloudFrontOriginIdentity}" }
              },
              "Action": "s3:GetObject",
              "Resource": { "Fn::Sub": "arn:aws:s3:::${S3Bucket}/*" }
            }
          ]
        }
      }
    },
    "CdnDistribution": {
      "Type": "AWS::CloudFront::Distribution",
      "Properties": {
        "DistributionConfig": {
          "Aliases": [
            { "Ref": "DomainName" }
          ],
          "Origins": [
            { 
              "DomainName": { "Fn::Sub": "${S3Bucket}.s3.${AWS::Region}.amazonaws.com" },
              "Id": "S3-private-bucket",
              "S3OriginConfig": {
                "OriginAccessIdentity": { "Fn::Sub": "origin-access-identity/cloudfront/${CloudFrontOriginIdentity}" }
              }
            }
          ],
          "DefaultRootObject": "index.html",
          "Enabled": "true",
          "Comment": { "Ref": "DomainName" },
          "DefaultCacheBehavior": {
            "AllowedMethods": [ "GET", "HEAD", "OPTIONS" ],
            "CachedMethods": [ "GET", "HEAD", "OPTIONS" ],
            "TargetOriginId": "S3-private-bucket",
            "ForwardedValues": {
              "QueryString": "false",
              "Cookies": {
                "Forward": "none"
              }
            },
            "ViewerProtocolPolicy": "redirect-to-https"
          },
          "ViewerCertificate": {
            "AcmCertificateArn": { "Ref": "CertificateArn" },
            "SslSupportMethod": "sni-only",
            "MinimumProtocolVersion": "TLSv1.2_2019"
          }
        },
        "Tags": [
          { "Key": "Name", "Value": { "Ref": "DomainName" } },
          {{ stackTagsStr }}
        ]
      }
    }
  },
  "Outputs": {
    "CdnAliasDomain": {
      "Value": { "Fn::GetAtt": [ "CdnDistribution", "DomainName" ] },
      "Description": "The URL of the newly created website"
    },
    "BucketName": {
      "Value": { "Ref": "S3Bucket" },
      "Description": "Name of S3 bucket to hold website content"
    }
  }
}
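
We normally deploy the template through the little stack tooling (which expands the nunjucks tags like {{ stackTagsStr }}), but the expanded template can also be deployed with the plain AWS CLI - roughly like this sketch, where the stack name, file name, and parameter values are placeholders:

```shell
aws cloudformation deploy \
    --template-file cdn-expanded.json \
    --stack-name apps-frickjack-cdn \
    --parameter-overrides \
        "CertificateArn=arn:aws:acm:us-east-1:123456789012:certificate/example" \
        "DomainName=apps.frickjack.com" \
        "BucketSuffix=apps-frickjack-com"
```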

Summary

A simple cloudformation template makes it easy to stamp out AWS infrastructure for a jamstack web site.

Monday, June 07, 2021

route53 and cloudformation

Problem and Audience

Managing route53 records with cloudformation is a good idea for the same reasons that tracking other resources with cloudformation (or terraform or whatever) is better than clicking around in the web console - namely:

  • it is less work to manipulate route53 records by editing json files than clicking through the web console
  • tracking the cloudformation template and parameters in git (or whatever code repository) gives an audit trail
  • cloudformation makes automation easy

We set up the following cloudformation template to start managing our simple route53 zones with cloudformation. The template takes advantage of the nunjucks extensions to cloudformation templates supported by our little stack automation.

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Parameters": {
    "DomainName": {
      "Type": "String",
      "Description": "the domain name"
    }
  },
  "Resources": {
    "HostedZone": {
      "Type" : "AWS::Route53::HostedZone",
      "Properties" : {
          "HostedZoneTags" : [ 
            {{ stackTagsStr }}
          ],
          "Name" : { "Ref": "DomainName" }
        }
    }

    {% if stackVariables.aliasList.length %}
    ,
      {% for item in stackVariables.aliasList %}
      "AliasA{{ item.resourceName }}": {
        "Type" : "AWS::Route53::RecordSet",
        "Properties" : {
            "AliasTarget" : {
              "DNSName" : "{{ item.target }}",
              "HostedZoneId": "{{ item.hostedZoneId }}"
            },
            "Comment" : "{{ item.comment }}",
            "HostedZoneId" : { "Ref": "HostedZone" },
            "Name" : "{{ item.domainName }}",
            "Type" : "A"
          }
      },
      "AliasAaaa{{ item.resourceName }}": {
        "Type" : "AWS::Route53::RecordSet",
        "Properties" : {
            "AliasTarget" : {
              "DNSName" : "{{ item.target }}",
              "HostedZoneId": "{{ item.hostedZoneId }}"
            },
            "Comment" : "{{ item.comment }}",
            "HostedZoneId" : { "Ref": "HostedZone" },
            "Name" : "{{ item.domainName }}",
            "Type" : "AAAA"
          }
      }

      {% if not loop.last %} , {% endif %}
      {% endfor %}
    {% endif %}

    {% if stackVariables.mxConfig %}
    ,
    "MX": {
      "Type" : "AWS::Route53::RecordSet",
      "Properties" : {
          "Comment" : "mx mail config",
          "HostedZoneId" : { "Ref": "HostedZone" },
          "Name" : { "Ref": "DomainName" },
          "ResourceRecords" : {{ stackVariables.mxConfig.resourceRecords | dump }},
          "TTL" : "900",
          "Type" : "MX"
        }
    }
    {% endif %}

    {% if stackVariables.cnameList.length %}
    ,
    {% for item in stackVariables.cnameList %}
    "Cname{{item.resourceName}}": {
      "Type" : "AWS::Route53::RecordSet",
      "Properties" : {
          "Comment" : "{{ item.comment }}",
          "HostedZoneId" : { "Ref": "HostedZone" },
          "Name" : "{{ item.domainName }}",
          "ResourceRecords" : [ "{{ item.target }}" ],
          "TTL" : "900",
          "Type" : "CNAME"
        }
    }
    {% if not loop.last %} , {% endif %}
    {% endfor %}

    {% endif %}

    {% if stackVariables.txtList.length %}
    ,
    {% for item in stackVariables.txtList %}
    "Txt{{item.resourceName}}": {
      "Type" : "AWS::Route53::RecordSet",
      "Properties" : {
          "Comment" : "{{ item.comment }}",
          "HostedZoneId" : { "Ref": "HostedZone" },
          "Name" : { "Ref": "DomainName" },
          "ResourceRecords" : [ {{ item.txtValue | dump }} ],
          "TTL" : "900",
          "Type" : "TXT"
        }
    }
    {% if not loop.last %} , {% endif %}
    {% endfor %}

    {% endif %}

  },

  "Outputs": {
    "NameServers": {
      "Description": "hosted zone nameservers",
      "Value": { "Fn::Join": [",", { "Fn::GetAtt": [ "HostedZone", "NameServers" ] }] }
    }
  }
}
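
The template consumes a stackVariables object supplied by the little stack tooling. For reference, the variables for a zone with one cloudfront alias and an MX record might look like this hypothetical example - the key names match the template above, but the values (and exact schema) are illustrative; note that Z2FDTNDATAQYW2 is the fixed hosted zone id that AWS publishes for all cloudfront alias targets:

```json
{
  "aliasList": [
    {
      "resourceName": "Apps",
      "domainName": "apps.frickjack.com",
      "target": "d1234abcd.cloudfront.net",
      "hostedZoneId": "Z2FDTNDATAQYW2",
      "comment": "cloudfront alias for apps.frickjack.com"
    }
  ],
  "mxConfig": {
    "resourceRecords": [ "10 mail.example.com." ]
  },
  "cnameList": [],
  "txtList": []
}
```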

Summary

Managing route53 zones with cloudformation is the right thing to do.

Wednesday, June 02, 2021

Porting https://apps.frickjack.com to hugo

Problem and Audience

A web site may be architected in various ways: from a simple collection of static html, javascript, and css files behind a web server; to a site administered by a content management system; to a web application built on custom server or client side software.

The appropriate design for a particular site is the one that best balances the requirements of the site's different stakeholders. For example, the marketing team may primarily view the site as one part of customer relationship management (CRM). The customer support team might want to publish documentation to the site, or provide tools for a customer to request support. The product team may want the site to provide access to the product's user console application.

Each stakeholder may need to update the site in different ways. The marketing and customer support teams may require a simple mechanism to submit edits for review and publication. The product development team may want to build and test code updates with a CICD pipeline. Neither of those teams may be well versed in graphic design.

apps.frickjack.com and hugo

We just completed a project to transition https://apps.frickjack.com to the hugo static site generator. The https://apps.frickjack.com property acts both as my personal site and as a sandbox for experimenting with the littleware software stack. It is a static multi-page site served from an S3 bucket with a few small javascript web applications and some early integrations with web APIs.

The hugo transition allowed us to move the content and theme management for https://apps.frickjack.com from an idiosyncratic templating system to the well documented and community supported process that hugo implements. Hugo's theme design also pushed us to think about what we want the site to provide to its visitors, and whether the landing page clearly conveys those use cases. For example, https://www.salesforce.com/ has a straightforward explanation of what the company is, "the #1 CRM ...", and a call to action "sign up for your free account".

The content management process is still developer oriented in that site updates are managed via github pull requests, and a codebuild CI job updates the site, but the content markdown and theme templates are now managed in their own hugo directory hierarchy. The site's github repo includes more details at https://github.com/frickjack/little-apps/blob/master/Notes/howto/devTest.md.

Summary

We transitioned https://apps.frickjack.com to the hugo static site generator to further decouple the site's content and theme management from the javascript code implementing the dynamic services and applications on the site. We also reorganized the site to better support the experiences we want the site to provide to visitors.

Monday, May 17, 2021

Simple Java/Scala Configuration Injection with Guice

Problem and Audience

One of the things every microservice needs is a mechanism for injecting configuration, so we developed a little json configuration helper for our littleware scala code that overlays a hierarchy of json configuration objects, and integrates with our module runtime and dependency-injection framework.

Configuration in Littleware

Littleware has a simple ServiceLoader based module runtime system that integrates with a guice dependency injection container. In practice, that means each java or scala jar includes a Module class that implements a simple callback interface for defining configuration injection bindings and registering application event listeners (startup and shutdown). We have now augmented this platform with a json configuration helper that allows a module developer to provide configuration defaults on the classpath in the jar file, and the stack operator to override those defaults with a json file on an environment-defined search path or with json in an environment variable.

Here's how it works. The JsonConfigLoader provides a loadConfig helper that takes a key as an argument and returns a JsonObject (we use the gson json library).

The JsonConfigLoader also provides a bindKeys method that consumes a json object and a guice binder, converts the json to a list of (key, value) pairs, maps the values back to strings, and binds each key to its string value using guice's @Named binding facility.

So in the Module.scala (or .java) file described above, the module bootstrap code does something like this:

littleware.scala.JsonConfigLoader.loadConfig(CONFIG_KEY).map(
  {
    jsConfig =>
    littleware.scala.JsonConfigLoader.bindKeys(binder, jsConfig)
  }
)
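
The flattening that bindKeys performs is easy to picture. Here's a minimal Python sketch of the idea - the function name and value shapes are illustrative, not littleware's actual API:

```python
import json

def bind_keys(config):
    """Flatten a json config object into (key, value-as-string) pairs -
    the shape handed to guice's @Named string bindings."""
    bindings = {}
    for key, value in config.items():
        # primitives bind as plain strings; objects and arrays bind as json text
        bindings[key] = value if isinstance(value, str) else json.dumps(value)
    return bindings

bindings = bind_keys({
    "little.cloudmgr.domain": "test-cloud.frickjack.com",
    "little.cloudmgr.sessionmgr.awsconfig": {"oidcJwksUrl": "https://example.com/jwks"},
})
```

A provider can then parse a bound string back into json on demand, which is exactly what the gson-based provider below does.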

Finally, a configuration provider can consume the bound configuration strings - like this:

@inject.Singleton()
class ConfigProvider @inject.Inject() (
    @inject.name.Named("little.cloudmgr.sessionmgr.awsconfig") configStr: String,
    gs: gson.Gson
) extends inject.Provider[Config] {
    lazy val singleton: Config = {
        val js = gs.fromJson(configStr, classOf[gson.JsonObject])
        Config(
            js.getAsJsonPrimitive("oidcJwksUrl").getAsString(),
            Option(js.getAsJsonPrimitive("kmsSigningKey")).map({ _.getAsString() }),
            js.getAsJsonArray("kmsPublicKeys").asScala.map({ jsIt => jsIt.getAsJsonPrimitive().getAsString() }).toSet
        )
    }

    override def get(): Config = singleton
}

In the cloudmgr module above the configuration key is LITTLE_CLOUDMGR, so the config loader first loads littleware/config/LITTLE_CLOUDMGR.json off the classpath - which provides some developer defaults. The loader then searches the folders from the LITTLE_CONFIG_PATH environment (or system) variable until it finds a LITTLE_CLOUDMGR.json file, and it loads that, and does a shallow json merge. Finally, the config loader looks for a LITTLE_CLOUDMGR system (or environment) variable, and again merges the keys.
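
The merge itself is shallow - top-level keys from later layers replace earlier ones wholesale, with no recursive merging of nested objects. A small Python sketch of the precedence (the layer contents here are made up for illustration):

```python
import json

def shallow_merge(*layers):
    """Overlay config layers key by key; later layers win, and nested
    objects are replaced wholesale (no recursive merge)."""
    merged = {}
    for layer in layers:
        merged.update(layer)
    return merged

# precedence: classpath defaults < json file on LITTLE_CONFIG_PATH < env variable
classpath_defaults = {"little.cloudmgr.domain": "test-cloud.frickjack.com",
                      "little.cloudmgr.sessionmgr.type": "local"}
config_path_file = {"little.cloudmgr.sessionmgr.type": "aws"}
env_variable = json.loads('{"little.cloudmgr.domain": "dev.aws-us-east-2.frickjack.com"}')

config = shallow_merge(classpath_defaults, config_path_file, env_variable)
```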

What does our configuration look like? We want to avoid collisions between binding keys from different modules, so the keys in a config json follow the java package reverse-dns pattern. Also, I like to have simple patterns that I can follow, so each service implementation in the module that requires configuration defines its own Config class and Provider[Config] that consumes a particular configuration key (which can be individually overridden via the configuration merge process described above). For example, the cloudmgr module has two service implementations, LocalKeySessionMgr and AwsSessionMgr, and the json configuration for the module looks like this:

{
    "little.cloudmgr.domain" : "test-cloud.frickjack.com",
    "little.cloudmgr.sessionmgr.type": "local",
    "little.cloudmgr.sessionmgr.localconfig": {
        "signingKey": { "kid": "testkey", "pem": "-----BEGIN PRIVATE KEY-----\nMIGHAgEAMBMGByqGSM49AgEGCCqGSM49AwEHBG0wawIBAQQgs02I2exqJsdAoHef\n54/cjmlRvww903MKp0AOPqlRRXqhRANCAATWdeIowEmJ5lxpm7gE8GtvBnB1FBTI\nlcZHdD1FPM90oeEAraGGtnluYYEdPiJP3r29n3qFcGTgvqDAE49bc4om\n-----END PRIVATE KEY-----" }, 
        "verifyKeys": [ 
            { "kid": "testkey", "pem": "-----BEGIN PUBLIC KEY-----\nMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE1nXiKMBJieZcaZu4BPBrbwZwdRQU\nyJXGR3Q9RTzPdKHhAK2hhrZ5bmGBHT4iT969vZ96hXBk4L6gwBOPW3OKJg==\n-----END PUBLIC KEY-----" } 
        ],
        "oidcJwksUrl": "https://www.googleapis.com/oauth2/v3/certs" 
    }, 
    "little.cloudmgr.sessionmgr.awsconfig": {
        "kmsPublicKeys": [ 
            "alias/littleware/api/api-frickjack-com/sessMgrSigningKey", 
            "alias/littleware/api/api-frickjack-com/sessMgrOldKey" 
        ], 
        "kmsSigningKey": "alias/littleware/api/api-frickjack-com/sessMgrSigningKey", 
        "oidcJwksUrl": "https://cognito-idp.us-east-2.amazonaws.com/us-east-2_860PcgyKN/.well-known/jwks.json"
    },
    "little.cloudmgr.sessionmgr.lambdaconfig": {
        "corsDomainWhiteList": [ ".frickjack.com" ],
        "cookieDomain": ".frickjack.com"
    }
}

Summary

We developed a little json configuration helper for our littleware scala code that overlays a hierarchy of json configuration objects, and integrates with our module runtime and dependency-injection framework.

Supporting Cloudformation Patterns with Nunjucks

Problem and Audience

Since cloudformation templates do not natively support the dynamic resource provisioning patterns required for many cloud architectures, various extensions and template generators have emerged (like CDK and SAM). The little stack tools allow the use of the nunjucks template language in cloudformation templates to support various infrastructure patterns.

Overview of little tools

The little tools include the little stack helpers for deploying infrastructure defined by a declarative template file, along with their own library of cloudformation templates. Ideally a template is defined in a generic way, but accepts input parameters that allow different stacks (like prod and dev) to be deployed, so a (template.json, parameters.json) pair defines each infrastructure stack. The end user follows this workflow:

  • select a cloudformation template from the library
  • create a parameters json file defining the input variables that the template requires - the parameters file format extends the cli skeleton (from aws cloudformation update-stack --generate-cli-skeleton) with a littleware block - for example:
{
    "StackName": "name of the stack",
    "Capabilities": [
        ... cloudformation capabilities if any 
        "CAPABILITY_NAMED_IAM"
    ],
    "TimeoutInMinutes": 5,
    "EnableTerminationProtection": true,
    "Parameters" : [
        ... cloudformation input parameters
    ],
    "Tags": [
        ... tags for the stack
            {
                "Key": "org",
                "Value": "applications"
            },
            {
                "Key": "project",
                "Value": "api.frickjack.com"
            },
            {
                "Key": "stack",
                "Value": "reuben"
            },
            {
                "Key": "stage",
                "Value": "dev"
            },
            {
              "Key": "role",
              "Value": "api"
            }
    ],
    "Littleware": {
        "TemplatePath": "lib/cloudformation/cloud/api/authclient/root.json ... path to the template",
        "Variables": { ... supplemental nunjucks input variables
            "authnapi": {
                "lambdaVersions": [
                    {
                        "resourceName": "lambdaVer20200523r0",
                        "description": "initial prod version"
                    },
                    {
                        "resourceName": "lambdaD001000003D20200618r0",
                        "description": "little-authn 1.0.3"
                    },
                    {
                        "resourceName": "lambda20201205r0",
                        "description": "little-authn 1.0.4"
                    },
                    {
                        "resourceName": "lambda20201216r0",
                        "description": "little-authn 1.0.5"
                    }
                ],
                "prodLambdaVersion": "lambda20201216r0",
                "gatewayDeployments": [
                    {
                        "resourceName": "deploy20200523r0",
                        "description": "initial deployment"
                    }
                ],
                "prodDeployment": "deploy20200523r0",
                "betaDeployment": "deploy20200523r0"
            },
            "sessmgr": {
                "kmsKeys": [
                    "sessmgr20210416"
                ],
                "kmsSigningKey": "sessmgr20210416",
                "kmsOldKey": "sessmgr20210416",
                "kmsNewKey": "sessmgr20210416",
                "jwksUrl": "https://cognito-idp.us-east-2.amazonaws.com/us-east-2_860PcgyKN/.well-known/jwks.json",
                "cloudDomain": "dev.aws-us-east-2.frickjack.com",
                "cookieDomain": ".frickjack.com",
                "lambdaImage": "027326493842.dkr.ecr.us-east-2.amazonaws.com/little/session_mgr:3.0.0",
                "lambdaVersions": [
                    {
                        "resourceName": "sessmgr20210416v2m6p1",
                        "description": "initial prod version"
                    },
                    {
                        "resourceName": "sessmgr20210515v3m0p0",
                        "description": "v3.0.0"
                    }
                ],
                "prodLambdaVersion": "sessmgr20210515v3m0p0",
                "gatewayDeployments": [
                    {
                        "resourceName": "deploy20210416r0",
                        "description": "initial deployment"
                    },
                    {
                        "resourceName": "deploy20210514",
                        "description": "add /versions"
                    }
                ],
                "prodDeployment": "deploy20210514",
                "betaDeployment": "deploy20210514"
            }
        }
    }
}
  • use the various little stack commands to create, update, and monitor the cloudformation stack - for example:
    little stack filter ./stackParams.json
    little stack validate ./stackParams.json
    little stack create ./stackParams.json
    little stack events ./stackParams.json
    little stack update ./stackParams.json
    ...

Cloudformation Patterns

Here are a couple of examples to illustrate how little stack cloudformation templates use nunjucks to implement patterns that would be difficult with cloudformation alone.

Template decomposition

Splitting a large template between multiple files makes it easier to work with, and nunjucks' import directive provides the functionality to do that. For example, the root.json file of this api gateway template imports separate files to define resources for each API accessed via the gateway.

{% import "./authnApiStage.js.njk" as authnApi with context %}
{% import "./sessionMgrApiStage.js.njk" as sessmgr with context %}

The same import functionality allows the api resource to import its openapi definition from an external file:

    "apiGateway": {
      "Type" : "AWS::ApiGateway::RestApi",
      "Properties" : {
          "Description" : "simple call-through to lambda api",
          "EndpointConfiguration" : {
            "Types": ["EDGE"]
          },
          "MinimumCompressionSize" : 128,
          "Name" : "{{ "authn_api-" + stackParameters.DomainName }}",
          "Body": {% include "./authnOpenApi.json" %},
          "Tags": [
            {{ stackTagsStr }}
          ]
        }
    },

Resource Versioning

Resource versioning is a pattern that a few AWS API's (lambda, kms, and API gateway deployments anyway) rely on, but is not supported well by cloudformation. For example, this little template deploys infrastructure for littleware's session manager API. The "beta" stage of the API is backed by a lambda function, and the "prod" stage of the API is backed by a lambda alias that references a lambda version (snapshot) of the same lambda function. When a user wants to test new lambda code, she updates a variable in the parameters file to point at the Docker image with the new code (a sample parameters file is here)

"Littleware": {
  "TemplatePath": "lib/cloudformation/cloud/api/authclient/apiGateway.json",
  "Variables": {
        ...
    "sessmgr": {
      ...
      "lambdaImage": "027326493842.dkr.ecr.us-east-2.amazonaws.com/little/session_mgr:2.6.1",
      ...

When the new code is ready to be promoted to production, then the developer publishes a new version of the lambda, and points the production alias at that version. The parameters file defines variables like these:

    "lambdaImage": "027326493842.dkr.ecr.us-east-2.amazonaws.com/little/session_mgr:2.6.1",
    "lambdaVersions": [
        {
            "resourceName": "sessmgrVer20210416r0",
            "description": "initial prod version"
        }
    ],
    "prodLambdaVersion": "sessmgrVer20210416r0",

The nunjucks-enhanced cloudformation template looks like this:

    {% for item in stackVariables.sessmgr.lambdaVersions %}
      "{{ item.resourceName }}": {
        "Type" : "AWS::Lambda::Version",
        "Properties" : {
            "FunctionName" : { "Ref": "sessMgrLambda" },
            "Description": "{{ item.description }}"
          }
      },
    {% endfor %}

    "sessMgrLambdaAlias": {
      "Type" : "AWS::Lambda::Alias",
      "Properties" : {
          "Description" : "prod stage lambda alias",
          "FunctionName" : { "Ref": "sessMgrLambda" },
          "FunctionVersion" : { "Fn::GetAtt": ["{{ stackVariables.sessmgr.prodLambdaVersion }}", "Version"] },
          "Name" : "gateway_prod"
        }
    },
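
After rendering, the loop and alias above expand to plain cloudformation json - one AWS::Lambda::Version resource per list entry, plus the alias. A Python sketch of the same expansion, using the sample variables above:

```python
def render_lambda_versions(lambda_versions, prod_lambda_version):
    """Expand the lambdaVersions variable into cloudformation resources,
    the way the nunjucks for-loop and alias block do."""
    resources = {}
    for item in lambda_versions:
        resources[item["resourceName"]] = {
            "Type": "AWS::Lambda::Version",
            "Properties": {
                "FunctionName": {"Ref": "sessMgrLambda"},
                "Description": item["description"],
            },
        }
    # the prod alias points at whichever version the parameters file selects
    resources["sessMgrLambdaAlias"] = {
        "Type": "AWS::Lambda::Alias",
        "Properties": {
            "Description": "prod stage lambda alias",
            "FunctionName": {"Ref": "sessMgrLambda"},
            "FunctionVersion": {"Fn::GetAtt": [prod_lambda_version, "Version"]},
            "Name": "gateway_prod",
        },
    }
    return resources

resources = render_lambda_versions(
    [{"resourceName": "sessmgrVer20210416r0", "description": "initial prod version"}],
    "sessmgrVer20210416r0",
)
```

Because versions accumulate in the list rather than being replaced, old AWS::Lambda::Version resources survive stack updates, which is the point of the pattern.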

The kms API supports a similar mechanism for rotating keys where user code accesses an encryption key through an alias that can be moved to point at a new (rotated) key. Littleware's session manager infrastructure defines 3 aliases for the asymmetric KMS keys used to sign and verify JWTs: kmsSigningAlias, kmsNewAlias, kmsOldAlias. The signing alias points at the key for signing tokens (tokens expire after an hour). The "old" alias points at the key that was used for signing JWTs in the past, so token verification code can load the old public key after a key rotation. The "new" alias points at the key that will become the signing key after the next key rotation. We define the "new" key so that verification code can just load all 3 keys at startup time, and continue to work after a key rotation (assuming keys rotate less frequently than we restart our services).
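
A rotation step, then, is just alias bookkeeping: create a fresh key and shift each alias one slot. A Python sketch of the state transition (illustrative only - in practice the rotation happens by editing the stack parameters file below):

```python
def rotate(aliases, fresh_key):
    """One key-rotation step for the session manager aliases:
    new -> signing, signing -> old, freshly created key -> new."""
    return {
        "kmsSigningKey": aliases["kmsNewKey"],
        "kmsOldKey": aliases["kmsSigningKey"],
        "kmsNewKey": fresh_key,
    }

aliases = {"kmsSigningKey": "sessmgr20210416",
           "kmsOldKey": "sessmgr20210416",
           "kmsNewKey": "sessmgr20210416"}
aliases = rotate(aliases, "sessmgr20211001")
```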

The little stack parameters file defines a name for each kms key managed by a stack, and a target for each kms alias.

            "sessmgr": {
                "kmsKeys": [
                    "sessmgr20210416"
                ],
                "kmsSigningKey": "sessmgr20210416",
                "kmsOldKey": "sessmgr20210416",
                "kmsNewKey": "sessmgr20210416",

The template consumes those variables.

    {#
       Support KMS key rotation.
       Add a new key when it's time to rotate, and
       move the key alias there. 
    #}
    {% for keyName in stackVariables.sessmgr.kmsKeys %}
    "{{ keyName }}": {
      "Type" : "AWS::KMS::Key",
      "Properties" : {
          "Description" : "asymmetric kms key for session mgr jwt signing and validation",
          "KeyPolicy" : {
              "Id": "key-consolepolicy-3",
              "Version": "2012-10-17",
              "Statement": [
                  {
                    "Sid": "Enable IAM User Permissions",
                    "Effect": "Allow",
                    "Principal": {
                      "AWS": {"Fn::Join": ["", 
                        ["arn:aws:iam::", {"Ref": "AWS::AccountId"}, ":root"]
                        ]}
                    },
                    "Action": "kms:*",
                    "Resource": "*"
                  }
              ]
          },
          "KeySpec" : "ECC_NIST_P256",
          "KeyUsage" : "SIGN_VERIFY",
          "PendingWindowInDays" : 7,
          "Tags": [
            { "Key": "Name", "Value": "{{ keyName }}" },
            {{ stackTagsStr }}
          ]
        }
    },
    {% endfor %}

    {% set kmsSigningAlias %}{{ "alias/littleware/api/" + (stackParameters.DomainName | replace(".", "-")) + "/sessMgrSigningKey" }}{% endset %}

    {# old signing key - rotated out #}
    {% set kmsOldAlias %}{{ "alias/littleware/api/" + (stackParameters.DomainName | replace(".", "-")) + "/sessMgrOldKey" }}{% endset %}

    {# new signing key - not yet used for signing #}
    {% set kmsNewAlias %}{{ "alias/littleware/api/" + (stackParameters.DomainName | replace(".", "-")) + "/sessMgrNewKey" }}{% endset %}

    "kmsSigningKey": {
      "Type" : "AWS::KMS::Alias",
      "Properties" : {
          "AliasName" : "{{ kmsSigningAlias }}",
          "TargetKeyId" : { "Ref": "{{ stackVariables.sessmgr.kmsSigningKey }}" }
        }
    },
    "kmsOldKey": {
      "Type" : "AWS::KMS::Alias",
      "Properties" : {
          "AliasName" : "{{ kmsOldAlias }}",
          "TargetKeyId" : { "Ref": "{{ stackVariables.sessmgr.kmsOldKey }}" }
        }
    },
    "kmsNewKey": {
      "Type" : "AWS::KMS::Alias",
      "Properties" : {
          "AliasName" : "{{ kmsNewAlias }}",
          "TargetKeyId" : { "Ref": "{{ stackVariables.sessmgr.kmsNewKey }}" }
        }
    },

The kms alias names are passed as part of the json configuration to the session manager lambda function. Nunjucks' dump filter makes it easy to generate and stringify json:

    "sessMgrLambda": {
      "Type" : "AWS::Lambda::Function",
      "Properties" : {
        "PackageType": "Image",
        "Code" : {
          "ImageUri": "{{ stackVariables.sessmgr.lambdaImage }}"
        },
        "Description" : "session manager API lambda",
        "Environment" : {
          "Variables": {
            "JAVA_TOOL_OPTIONS": "-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager",
            "LITTLE_CLOUDMGR": {{
              {
                "little.cloudmgr.domain" : stackVariables.sessmgr.cloudDomain,
                "little.cloudmgr.sessionmgr.type": "aws",
                "little.cloudmgr.sessionmgr.localconfig": {}, 
                "little.cloudmgr.sessionmgr.awsconfig": {
                    "kmsPublicKeys": [
                      kmsSigningAlias, kmsOldAlias, kmsNewAlias
                    ],
                    "kmsSigningKey": kmsSigningAlias,
                    "oidcJwksUrl": stackVariables.sessmgr.jwksUrl
                },
                "little.cloudmgr.sessionmgr.lambdaconfig": {
                    "corsDomainWhiteList": [ stackVariables.sessmgr.cookieDomain ],
                    "cookieDomain": stackVariables.sessmgr.cookieDomain
                }
              } | dump | dump
            }}
          }
        },
        "FunctionName" : { "Fn::Join": [ "-", ["sessmgr", "{{ stackParameters.DomainName | replace(".", "-") }}",  { "Ref": "StackName" }, { "Ref": "StageName" }, "prod"]] },
        "MemorySize" : 768,
        "Role" : { "Fn::GetAtt": ["sessMgrRole", "Arn"] },
        "Tags": [
          {{ stackTagsStr }}
        ],
        "Timeout" : 5,
        "TracingConfig" : {
          "Mode": "Active"
        }
      }
    },
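
The double dump above is deliberate: the first dump serializes the config object to json text (the value the lambda reads), and the second escapes that text into a json string literal so it can sit inside the template as the LITTLE_CLOUDMGR environment variable. The same round trip in Python:

```python
import json

config = {
    "little.cloudmgr.domain": "dev.aws-us-east-2.frickjack.com",
    "little.cloudmgr.sessionmgr.type": "aws",
}

config_str = json.dumps(config)    # first dump: json text of the config object
embedded = json.dumps(config_str)  # second dump: escaped string literal for the template

# the session manager just parses the environment variable back to an object
restored = json.loads(json.loads(embedded))
```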

Summary

The little stack tools allow the use of the nunjucks template language in cloudformation templates to support various infrastructure patterns.

Tuesday, April 13, 2021

AWS codebuild for scala, docker (ecr) CI

Problem and Audience

A continuous integration (CI) process that builds and tests our code, then publishes versioned deployable artifacts (docker images), is a prerequisite for deploying stable software services in the cloud. There are a wide variety of good, inexpensive CI services available, but we decided to build littleware's CI system on AWS codebuild, because it provides an easy to use serverless solution that supports the technology we build on (nodejs, java, scala, docker), and integrates well with AWS. It was straightforward for us to set up a codebuild CI process (buildspec.yml) for our little scala project given the tools we already have in place to deploy cloudformation stacks that define the codebuild project and ecr docker repository.

Overview

There were two steps to setting up our CI build: create the infrastructure, then debug and deploy the build script. The first step was easy, since we already have cloudformation templates for codebuild projects and ecr repositories that our little stack tool can deploy. For example, we deployed the codebuild project to build the littleware github repo by running:

little stack create ./stackParams.json

with this parameters file (stackParams.json):

{
    "StackName": "build-littleware",
    "Capabilities": [
        "CAPABILITY_NAMED_IAM"
    ],
    "TimeoutInMinutes": 10,
    "EnableTerminationProtection": true,
    "Parameters" : [
        {
            "ParameterKey": "PrivilegedMode",
            "ParameterValue": "true"
        },
        {
            "ParameterKey": "ProjectName",
            "ParameterValue": "cicd-littleware"
        },
        {
            "ParameterKey": "ServiceRole",
            "ParameterValue": "arn:aws:iam::027326493842:role/littleCodeBuild"
        },
        {
            "ParameterKey": "GithubRepo",
            "ParameterValue": "https://github.com/frickjack/littleware.git"
        }
    ],
    "Tags": [
            {
                "Key": "org",
                "Value": "applications"
            },
            {
                "Key": "project",
                "Value": "cicd-littleware"
            },
            {
                "Key": "stack",
                "Value": "frickjack.com"
            },
            {
                "Key": "stage",
                "Value": "dev"
            },
            {
              "Key": "role",
              "Value": "build"
            }
    ],
    "Littleware": {
        "TemplatePath": "lib/cloudformation/cicd/nodeBuild.json"
    }
}

With our infrastructure in place, we can add our build script to our github repository. There are a few things to notice about our build script. First, the littleware git repo holds multiple interrelated projects - java and scala libraries and applications that build on top of them. We are currently interested in building and packaging the littleAudit/ folder (that will probably be renamed), so the build begins by moving to that folder:

  build:
    commands:
      - cd littleAudit

Next, we setup our codebuild project to run the build container in privileged mode, so our build can start a docker daemon, and build docker images:

phases:
  install:
    runtime-versions:
      # see https://github.com/aws/aws-codebuild-docker-images/blob/master/ubuntu/standard/5.0/Dockerfile
      java: corretto11
    commands:
      # see https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-codebuild-project-environment.html#cfn-codebuild-project-environment-privilegedmode
      - nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2375 --storage-driver=overlay &
      - timeout 15 sh -c "until docker info; do echo .; sleep 1; done"

We use gradle to compile our code and run the unit test suite. The org.owasp.dependencycheck gradle plugin adds a dependencyCheckAnalyze task that checks our maven dependencies against public databases of known vulnerabilities:

  build:
    commands:
      - cd littleAudit
      - gradle build
      - gradle dependencyCheckAnalyze
      - docker build -t codebuild:frickjack .

Finally, our post-build command tags and pushes the docker image to an ecr repository. The tagging rules align with the lifecycle rules on the repository (described here and here).

  post_build:
    commands:
      - BUILD_TYPE="$(echo "$CODEBUILD_WEBHOOK_TRIGGER" | awk -F / '{ print $1 }')"
      - echo "BUILD_TYPE is $BUILD_TYPE"
      - |
        (
          little() {
              bash "$CODEBUILD_SRC_DIR_HELPERS/AWS/little.sh" "$@"
          }

          scanresult=""
          scan_in_progress() {
            local image
            image="$1"
            if ! shift; then
                echo "invalid scan image"
                exit 1
            fi
            local tag
            local repo
            tag="$(echo "$image" | awk -F : '{ print $2 }')"
            repo="$(echo "$image" | awk -F : '{ print $1 }' | cut -d / -f 2-)"
            scanresult="$(little ecr scanreport "$repo" "$tag")"
            test "$(echo "$scanresult" | jq -e -r .imageScanStatus.status)" = IN_PROGRESS
          }

          TAGSUFFIX="$(echo "$CODEBUILD_WEBHOOK_TRIGGER" | awk -F / '{ suff=$2; gsub(/[ @/]+/, "_", suff); print suff }')"
          LITTLE_REPO_NAME=little/session_mgr
          LITTLE_DOCKER_REG="$(little ecr registry)" || exit 1
          LITTLE_DOCKER_REPO="${LITTLE_DOCKER_REG}/${LITTLE_REPO_NAME}"

          little ecr login || exit 1
          if test "$BUILD_TYPE" = pr; then
            TAGNAME="${LITTLE_DOCKER_REPO}:gitpr_${TAGSUFFIX}"
            docker tag codebuild:frickjack "$TAGNAME"
            docker push "$TAGNAME"
          elif test "$BUILD_TYPE" = branch; then
            TAGNAME="${LITTLE_DOCKER_REPO}:gitbranch_${TAGSUFFIX}"
            docker tag codebuild:frickjack "$TAGNAME"
            docker push "$TAGNAME"
          elif test "$BUILD_TYPE" = tag \
            && (echo "$TAGSUFFIX" | grep -E '^[0-9]{1,}\.[0-9]{1,}\.[0-9]{1,}$' > /dev/null); then
            # semver tag
            TAGNAME="${LITTLE_DOCKER_REPO}:gitbranch_${TAGSUFFIX}"
            if ! docker tag codebuild:frickjack "$TAGNAME"; then
              echo "ERROR: failed to tag image with $TAGNAME"
              exit 1
            fi
            ...

If the CI build was triggered by a semver git tag, then it waits for the ecr image scan to complete successfully before tagging the docker image for production use:

       ...
          elif test "$BUILD_TYPE" = tag \
            && (echo "$TAGSUFFIX" | grep -E '^[0-9]{1,}\.[0-9]{1,}\.[0-9]{1,}$' > /dev/null); then
            # semver tag
            TAGNAME="${LITTLE_DOCKER_REPO}:gitbranch_${TAGSUFFIX}"
            if ! docker tag codebuild:frickjack "$TAGNAME"; then
              echo "ERROR: failed to tag image with $TAGNAME"
              exit 1
            fi
            # see https://docs.aws.amazon.com/AmazonECR/latest/APIReference/API_ImageScanStatus.html
            docker push "$TAGNAME" || exit 1
            count=0
            sleep 10

            while scan_in_progress "$TAGNAME" && test "$count" -lt 50; do
              echo "Waiting for security scan - sleep 10"
              count=$((count + 1))
              sleep 10
            done
            echo "Got image scan result: $scanresult"
            if ! test "$(echo "$scanresult" | jq -e -r .imageScanStatus.status)" = COMPLETE \
               || ! test "$(echo "$scanresult" | jq -e -r '.imageScanFindingsSummary.findingSeverityCounts.HIGH // 0')" = 0 \
               || ! test "$(echo "$scanresult" | jq -e -r '.imageScanFindingsSummary.findingSeverityCounts.CRITICAL // 0')" = 0; then
               echo "Image $TAGNAME failed security scan - bailing out"
               exit 1
            fi
            SEMVER="${LITTLE_DOCKER_REPO}:${TAGSUFFIX}"
            docker tag "$TAGNAME" "$SEMVER"
            docker push "$SEMVER"
          else
            echo "No docker publish for build: $BUILD_TYPE $TAGSUFFIX"
          fi
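
The tag-selection logic in the script boils down to a small mapping from the webhook trigger to a docker tag. A Python sketch of the rules (the repo name is just an example):

```python
import re

def docker_tag(webhook_trigger, repo="little/session_mgr"):
    """Map CODEBUILD_WEBHOOK_TRIGGER (e.g. 'pr/42', 'branch/master',
    'tag/1.0.5') to the docker tag the post_build step publishes."""
    parts = webhook_trigger.split("/")
    build_type = parts[0]
    suffix = re.sub(r"[ @/]+", "_", parts[1]) if len(parts) > 1 else ""
    if build_type == "pr":
        return f"{repo}:gitpr_{suffix}"
    if build_type == "branch":
        return f"{repo}:gitbranch_{suffix}"
    if build_type == "tag" and re.fullmatch(r"[0-9]+\.[0-9]+\.[0-9]+", suffix):
        # semver tags also publish {repo}:{suffix} once the image scan passes
        return f"{repo}:gitbranch_{suffix}"
    return None  # no docker publish for this build
```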

Summary

A continuous integration (CI) process that builds and tests our code, then publishes versioned deployable artifacts (docker images) is a prerequisite for deploying stable software services in the cloud. Our codebuild CI project builds and publishes the docker images that we will use to deploy our "little session manager" service as a lambda behind an API gateway (but we're still working on that).

Friday, April 09, 2021

Setup ECR on AWS

Problem and Audience

Setting up an ECR repository for publishing docker images is a good first step toward deploying a docker-packaged application on AWS (ECS, EKS, EC2, lambda, ...). Although we use ECR like any other docker registry, there are a few optimizations we can make when setting up a repository.

Overview

We should consider the workflow around the creation and use of our Docker images to decide who we should allow to create a new ECR repository, and who should push images to ECR. In a typical docker workflow a developer publishes a Dockerfile alongside her code, and a continuous integration (CI) process kicks in to build and publish the Docker image. When the new image passes all its tests and is ready for release, then the developer (or some other process) adds a semver (or some other standard) release tag to the image. All this development, test, and publishing takes place in an AWS account assigned to the developer team linked with the docker image; but the release-tagged images are available for use (docker pull) in production accounts.

With the above workflow in mind, we updated the cloudformation templates we use to setup our user (admin, dev, operator) and codebuild (CI) IAM roles to grant full ECR access in our developer account (currently we only have a dev account).

Next we developed a cloudformation template for creating ECR repositories in our dev account. Our template extends the standard cloudformation syntax with nunjucks tags supported by our little stack tools. We also developed a little ecr tool to simplify some common tasks.

There are a few things to notice in the cloudformation template. First, each repository has an IAM resource policy that allows our production AWS accounts to pull images from ECR repositories in our dev accounts:

"RepositoryPolicyText" : {
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "AllowCrossAccountPull",
            "Effect": "Allow",
            "Principal": {
                "AWS": { "Ref": "ReaderAccountList" }
            },
            "Action": [
                "ecr:GetAuthorizationToken",
                "ecr:BatchCheckLayerAvailability",
                "ecr:GetDownloadUrlForLayer",
                "ecr:BatchGetImage"
            ]
        }
    ]
},

Second, each repository has a lifecycle policy that expires non-production images. This is especially important for ECR, because ECR storage costs $0.10 per GB-month, and Docker images can be large.

{
  "rules": [
    {
      "rulePriority": 1,
      "description": "age out git dev tags",
      "selection": {
        "tagStatus": "tagged",
        "tagPrefixList": [
          "gitsha_",
          "gitbranch_",
          "gitpr_"
        ],
        "countType": "sinceImagePushed",
        "countUnit": "days",
        "countNumber": 7
      },
      "action": {
        "type": "expire"
      }
    },
    {
      "rulePriority": 2,
      "description": "age out untagged images",
      "selection": {
        "tagStatus": "untagged",
        "countType": "imageCountMoreThan",
        "countNumber": 5
      },
      "action": {
        "type": "expire"
      }
    }
  ]
}
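
The gitsha_, gitbranch_, and gitpr_ prefixes in the first rule assume the CI job tags dev images with a consistent naming convention; the helper below sketches one hypothetical way to derive those tags. A release tag (ex: a semver tag like 1.2.3) carries no git prefix, so the first rule never expires it.

```shell
#!/bin/bash

# Derive a dev image tag matching the lifecycle policy's tagPrefixList.
# The naming convention here is hypothetical.
devtag() {
    local kind="$1"   # sha, branch, or pr
    local value="$2"
    # docker tags cannot contain '/', so flatten branch names
    echo "git${kind}_${value//\//_}"
}
```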

Finally, we configure ECR to scan our images for known security vulnerabilities on push. Our little ecr scanreport tool retrieves an image's scan results from the command line. The workflow that tags an image for production should include a step that verifies the image is free of vulnerabilities above whatever severity threshold we want to enforce.
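
That gate can be as simple as ranking ECR's severity labels numerically and failing the pipeline when any finding exceeds the threshold. A sketch - the severity names are ECR's, the gate function itself is hypothetical, and in practice the severities would come from aws ecr describe-image-scan-findings:

```shell
#!/bin/bash

# Rank ECR finding severities so they can be compared numerically.
severity_rank() {
    case "$1" in
        CRITICAL) echo 5 ;;
        HIGH) echo 4 ;;
        MEDIUM) echo 3 ;;
        LOW) echo 2 ;;
        INFORMATIONAL) echo 1 ;;
        *) echo 0 ;;
    esac
}

# Return non-zero if any finding severity exceeds the threshold, ex:
#   severity_gate HIGH MEDIUM LOW INFORMATIONAL
severity_gate() {
    local threshold
    threshold="$(severity_rank "$1")"
    shift
    local sev
    for sev in "$@"; do
        if [ "$(severity_rank "$sev")" -gt "$threshold" ]; then
            return 1
        fi
    done
    return 0
}
```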

Summary

Although we use ECR like any other docker registry, there are a few optimizations we can make when setting up a repository. First, we update our IAM policies to give users and CICD pipelines the access they need to support our development and deployment processes. Next, we add resource policies to our ECR repositories to allow production accounts to pull docker images from repositories in developer accounts. Third, we attach lifecycle rules to each repository to avoid the expense of storing unused images. Finally, we enable image scanning on push, and check an image's vulnerability report before tagging it for production use.

Friday, April 02, 2021

Sign and Verify JWT with ES256

Problem and Audience

A developer of a system that uses JSON web tokens (JWTs) to authenticate HTTP API requests needs to generate asymmetric cryptographic keys, load the keys into code, then use the keys to sign and validate tokens.

We are building a multi-tenant system that implements a hierarchy where each tenant (project) may enable one or more api's. An end user authenticates with the global system (OIDC authentication client) via a handshake with a Cognito identity provider, then acquires a short lived session token to interact with a particular api under a particular project (OIDC resource server).

It would be nice to simply implement the session token as a Cognito OIDC access token, but our system has a few requirements that push us to manage our own session tokens for now. First, each (project, api) pair is effectively an OIDC resource server in our model, and projects and api's are created dynamically, so managing the custom JWT claims with Cognito resource servers would be messy.

Second, we want to be able to support robot accounts at the project level, and a Cognito mechanism to easily provision robot accounts and tokens is not obvious to us. So we decided to manage our "session" tokens in the application, and rely on Cognito to federate identity providers for user authentication.

JWTs with ES256

I know very little about cryptography, authentication, and authorization; but fortunately people who know more than I do share their knowledge online. Scott Brady's blog gives a nice overview of JWT signing. We want to sign and verify JWTs in scala using the elliptic curve ES256 algorithm - which improves on RS256 in a few ways, and is widely supported.

There are different ways to generate a key pair for ES256 (ECDSA over the P-256 curve with a SHA-256 digest), but EC keys saved to pem files are supported by multiple tools, and are easy to save to configuration stores like AWS SSM parameter store or secrets manager.

This bash function uses openssl to generate keys in pem files.

#
# Bash function to generate new ES256 key pair
#
newkey() {
    local kid=${1:-$(date +%Y%m)}
    local secretsFolder=$HOME/Secrets/littleAudit

    (
        mkdir -p "$secretsFolder"
        cd "$secretsFolder" || return 1
        if [[ ! -f ec256-key-${kid}.pem ]]; then
          openssl ecparam -genkey -name prime256v1 -noout -out ec256-key-${kid}.pem
        fi
        # convert the key to pkcs8 format
        openssl pkcs8 -topk8 -nocrypt -in ec256-key-${kid}.pem -out ec256-pkcs8-key-${kid}.pem
        # extract the public key
        openssl ec -in ec256-pkcs8-key-${kid}.pem -pubout -out ec256-pubkey-${kid}.pem
    )
}

Load the keys into code

Now that we have our keys - we need to load them into our scala application.

import com.google.{ gson, inject }
import java.security.interfaces.{ ECPrivateKey, ECPublicKey, RSAPublicKey }
import java.security.spec.{ PKCS8EncodedKeySpec, RSAPublicKeySpec, X509EncodedKeySpec }
import scala.jdk.CollectionConverters._

import littleware.cloudmgr.service.SessionMgr

class KeyHelper @inject.Inject() (
  gs: gson.Gson,
  ecKeyFactory: KeyHelper.EcKeyFactory,
  rsaKeyFactory: KeyHelper.RsaKeyFactory
  ) {
    /**
     * @return pem input with pem file prefix/suffix and empty space removed
     */
    def decodePem(pem:String): String = {
      pem.replaceAll(raw"-----[\w ]+-----", "").replaceAll("\\s+", "")
    }


    def loadPublicKey(kid:String, pemStr:String):SessionMgr.PublicKeyInfo = {
      val key = ecKeyFactory.generatePublic(decodePem(pemStr))
      SessionMgr.PublicKeyInfo(kid, "ES256", key)
    }


    def loadPrivateKey(kid:String, pemStr:String):SessionMgr.PrivateKeyInfo = {
      val key = ecKeyFactory.generatePrivate(decodePem(pemStr))
      SessionMgr.PrivateKeyInfo(kid, "ES256", key)
    }

    /**
     * Load keys from a jwks url like 
     *    https://www.googleapis.com/oauth2/v3/certs
     */
    def loadJwksKeys(jwksUrl:java.net.URL): Set[SessionMgr.PublicKeyInfo] = {
      val jwksStr = {
        val connection = jwksUrl.openConnection()
        connection.setRequestProperty("Accept-Charset", KeyHelper.utf8)
        connection.setRequestProperty("Accept", "application/json")
        val response = new java.io.BufferedReader(new java.io.InputStreamReader(connection.getInputStream(), KeyHelper.utf8))
        try {
            littleware.base.Whatever.get().readAll(response)
        } finally {
            response.close()
        }
      }

      gs.fromJson(jwksStr, classOf[gson.JsonObject]).getAsJsonArray("keys").asScala.map(
          { 
            json:gson.JsonElement =>
            val jsKeyInfo = json.getAsJsonObject()
            val kid = jsKeyInfo.getAsJsonPrimitive("kid").getAsString()
            val n = jsKeyInfo.getAsJsonPrimitive("n").getAsString()
            val e = jsKeyInfo.getAsJsonPrimitive("e").getAsString()
            val pubKey = rsaKeyFactory.generatePublic(n, e)
            SessionMgr.PublicKeyInfo(kid, "RSA256", pubKey)
          }
      ).toSet 
    }
}

object KeyHelper {
    val utf8 = "UTF-8"

    /**
     * Little injectable key factory hard wired to use X509 key spec for public key
     */
    class EcKeyFactory {
        val keyFactory = java.security.KeyFactory.getInstance("EC")
        val b64Decoder = java.util.Base64.getDecoder()

        def generatePublic(base64:String):ECPublicKey = {
            val bytes = b64Decoder.decode(base64.getBytes(utf8))
            val spec = new X509EncodedKeySpec(bytes)

            keyFactory.generatePublic(spec).asInstanceOf[ECPublicKey]
        }

        def generatePrivate(base64:String):ECPrivateKey = {
            val bytes = b64Decoder.decode(base64.getBytes(utf8))
            val spec = new PKCS8EncodedKeySpec(bytes)

            keyFactory.generatePrivate(spec).asInstanceOf[ECPrivateKey]
       }
    }

    /**
     * Little injectable key factory hard wired for RSA jwks decoding
     * See: https://github.com/auth0/jwks-rsa-java/blob/master/src/main/java/com/auth0/jwk/Jwk.java
     */
    class RsaKeyFactory {
        private val keyFactory = java.security.KeyFactory.getInstance("RSA")
        private val b64Decoder = java.util.Base64.getUrlDecoder()

        def generatePublic(n:String, e:String):RSAPublicKey = {
            val modulus = new java.math.BigInteger(1, b64Decoder.decode(n))
            val exponent = new java.math.BigInteger(1, b64Decoder.decode(e))
            keyFactory.generatePublic(new RSAPublicKeySpec(modulus, exponent)).asInstanceOf[RSAPublicKey]
        }
    }
}

Sign and verify JWTs

Now that we have loaded our keys, we can use them to sign and verify JWTs. Okta has published open source code for working with JWTs, Auth0 has published open source code for working with JWK, and AWS KMS supports elliptic curve digital signing algorithms with asymmetric keys.

import com.google.inject
// see https://github.com/jwtk/jjwt#java-jwt-json-web-token-for-java-and-android
import io.{jsonwebtoken => jwt}
import java.security.{ Key, PublicKey }
import java.util.UUID
import scala.util.Try

import littleware.cloudmgr.service.SessionMgr
import littleware.cloudmgr.service.SessionMgr.InvalidTokenException
import littleware.cloudmgr.service.littleModule
import littleware.cloudutil.{ LRN, Session }

/**
 * @param signingKey for signing new session tokens
 * @param verifyKeys for verifying the signature of session tokens
 */
@inject.ProvidedBy(classOf[LocalKeySessionMgr.Provider])
@inject.Singleton()
class LocalKeySessionMgr (
    signingKey: Option[SessionMgr.PrivateKeyInfo],
    sessionKeys: Set[SessionMgr.PublicKeyInfo],
    oidcKeys: Set[SessionMgr.PublicKeyInfo],
    issuer:String,
    sessionFactory:inject.Provider[Session.Builder]
    ) extends SessionMgr {

    val resolver = new jwt.SigningKeyResolverAdapter() {
        override def resolveSigningKey(jwsHeader:jwt.JwsHeader[T] forSome { type T <: jwt.JwsHeader[T] }, claims:jwt.Claims):java.security.Key = {
            val kid = jwsHeader.getKeyId()
            (
                {
                    if (claims.getIssuer() == issuer) {
                        sessionKeys
                    } else {
                        oidcKeys
                    }
                }
            ).find(
                { it => it.kid == kid }
            ).map(
                { _.pubKey }
            ) getOrElse {
                throw new SessionMgr.InvalidTokenException(s"invalid auth kid ${kid}")
            }
        }
    }

    ...

    def jwsToClaims(jwsIdToken:String):Try[jwt.Claims] = Try(
        { 
            jwt.Jwts.parserBuilder(
            ).setSigningKeyResolver(resolver
            ).build(
            ).parseClaimsJws(jwsIdToken
            ).getBody()
        }
    ).flatMap( claims => Try( {
                    Seq("email", jwt.Claims.EXPIRATION, jwt.Claims.ISSUER, jwt.Claims.ISSUED_AT, jwt.Claims.AUDIENCE).foreach({
                        key =>
                        if(claims.get(key) == null) {
                            throw new InvalidTokenException(s"missing ${key} claim")
                        }
                    })
                    claims
                }
            )
    ).flatMap(
        claims => Try(
            {
                if (claims.getExpiration().before(new java.util.Date())) {
                    throw new InvalidTokenException(s"auth token expired: ${claims.getExpiration()}")
                }
                claims
            }
        )
    )


    def sessionToJws(session:Session):String = {
        val signingInfo = signingKey getOrElse { throw new UnsupportedOperationException("signing key not available") }
        jwt.Jwts.builder(
        ).setHeaderParam(jwt.JwsHeader.KEY_ID, signingInfo.kid
        ).setClaims(SessionMgr.sessionToClaims(session)
        ).signWith(signingInfo.privKey
        ).compact()
    }

    def jwsToSession(jws:String):Try[Session] = jwsToClaims(jws
        ) map { claims => SessionMgr.claimsToSession(claims) }

    ...
}

This code is all online under this github repo, but is in a state of flux.
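
Under the hood, ES256 is just ECDSA over P-256 with a SHA-256 digest, which the JDK exposes directly as SHA256withECDSA. Here is a minimal sketch using plain java.security (no jjwt) - the Es256Sketch object and its sign/verify helpers are hypothetical names for illustration:

```scala
import java.nio.charset.StandardCharsets
import java.security.{ KeyPairGenerator, Signature }
import java.security.spec.ECGenParameterSpec

object Es256Sketch {
  // generate a P-256 (prime256v1/secp256r1) key pair in code -
  // the in-memory equivalent of the openssl ecparam command above
  private val keyGen = KeyPairGenerator.getInstance("EC")
  keyGen.initialize(new ECGenParameterSpec("secp256r1"))
  val keyPair = keyGen.generateKeyPair()

  def sign(payload: String): Array[Byte] = {
    val signer = Signature.getInstance("SHA256withECDSA")
    signer.initSign(keyPair.getPrivate())
    signer.update(payload.getBytes(StandardCharsets.UTF_8))
    signer.sign()
  }

  def verify(payload: String, sig: Array[Byte]): Boolean = {
    val verifier = Signature.getInstance("SHA256withECDSA")
    verifier.initVerify(keyPair.getPublic())
    verifier.update(payload.getBytes(StandardCharsets.UTF_8))
    verifier.verify(sig)
  }
}
```

One detail a JWT library handles for us: SHA256withECDSA emits a DER encoded signature, while the JWS spec requires the raw R||S concatenation, so jjwt transcodes between the two formats.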

Summary

To sign and verify JWTs we need to generate keys, load the keys into code, and use the keys to sign and verify tokens. We plan to add support for token signing with AWS KMS soon.

Saturday, March 20, 2021

A little framework for scala builders

Many developers use immutable data structures in application code, and leverage the builder pattern to construct the application's initial state, and progress to new versions of the state in response to user inputs and other side effects. The scala programming language supports immutable data structures well, but does not provide a native implementation of the builder pattern. The following describes a simple but useful scala builder framework that leverages re-usable data-validation lambda functions.

A Scala Framework for Builders

The following are typical examples of immutable case classes in scala:

/**
 * Little resource name URI:
 * "lrn://${cloud}/${api}/${project}/${resourceType}/${drawer}/${path}"
 */
trait LRN {
    val cloud: String
    val api: String
    val projectId: UUID
    val resourceType: String
    val drawer: String
    val path: String
}

/**
 * Path based sharing - denormalized data
 */
case class LRPath(
    cloud: String,
    api: String,
    projectId: UUID,
    resourceType: String,
    drawer: String,
    path: String
) extends LRN {
}

/**
 * Id based sharing - normalized data
 */
case class LRId(
    cloud: String,
    api: String,
    projectId: UUID,
    resourceType: String,
    resourceId: UUID
) extends LRN {
    override val drawer = ":"
    val path = resourceId.toString()
}

Builders like the following simplify object construction and validation (compared to passing all the property values to the constructor).

object LRN {
    val zeroId:UUID = UUID.fromString("00000000-0000-0000-0000-000000000000")

    trait Builder[T <: LRN] extends PropertyBuilder[T] {
        val cloud = new Property("") withName "cloud" withValidator dnsValidator
        val api = new Property("") withName "api" withValidator LRN.apiValidator
        val projectId = new Property[UUID](null) withName "projectId" withValidator notNullValidator
        val resourceType = new Property("") withName "resourceType" withValidator LRN.resourceTypeValidator
        val path = new Property("") withName "path" withValidator pathValidator

        def copy(lrn:T):this.type = this.projectId(lrn.projectId).api(lrn.api
            ).cloud(lrn.cloud).resourceType(lrn.resourceType
            ).path(lrn.path)

        def fromSession(session:Session): this.type = this.cloud(session.lrp.cloud
            ).api(session.api
            ).projectId(session.projectId
            )
    }

    class LRPathBuilder extends Builder[LRPath] {        
        val drawer = new Property("") withName "drawer" withValidator drawerValidator

        override def copy(other:LRPath) = super.copy(other).drawer(other.drawer)

        def build():LRPath = {
            validate()
            LRPath(cloud(), api(), projectId(), resourceType(), drawer(), path())
        }
    }

    class LRIdBuilder extends Builder[LRId] {
        def build():LRId = {
            validate()
            LRId(cloud(), api(), projectId(), resourceType(), UUID.fromString(path()))
        }
    }

    def apiValidator = rxValidator(raw"[a-z][a-z0-9-]+".r)(_, _)

    def drawerValidator(value:String, name:String) = rxValidator(raw"([\w-_.*]+:)*[\w-_.*]+".r)(value, name) orElse {
        if (value.length > 1000) {
            Some(s"${name} is too long: ${value}")
        } else {
            None
        }
    }

    def pathValidator(value:String, name:String) = pathLikeValidator(value, name) orElse {
        if (value.length > 1000) {
            Some(s"${name} is too long: ${value}")
        } else {
            None
        }
    }

    def resourceTypeValidator = rxValidator(raw"[a-z][a-z0-9-]{1,20}".r)(_, _)

    // ...
}

This builder implementation does not leverage the type system to detect construction errors at compile time (this blog shows an approach with phantom types), but it is composable in a straightforward way. A couple of fun things about this implementation are that it leverages the builder pattern to define the properties in a builder (new Property... withName ... withValidator ...), and that the setters on the nested property class return the parent Builder type, so we can write code like this:

    @Test
    def testLRNBuilder() = try {
        val builder = builderProvider.get(
        ).cloud("test.cloud"
        ).api("testapi"
        ).drawer("testdrawer"
        ).projectId(LRN.zeroId
        ).resourceType("testrt"
        ).path("*")

        val lrn = builder.build()
        assertTrue(s"api equal: ${lrn.api} ?= ${builder.api()}", lrn.api == builder.api())
    } catch basicHandler

Unfortunately, the code (in https://github.com/frickjack/littleware under littleAudit/ and littleScala/) is in a state of flux, but the base PropertyBuilder can be copied into another code base - something like this:

/**
 * Extends Validator with support for some scala types
 */
trait LittleValidator extends Validator {
  @throws(classOf[ValidationException])
  override def validate():Unit = {
    val errors = checkSanity()
    if ( ! errors.isEmpty ) {
      throw new ValidationException(
        errors.foldLeft(new StringBuilder)( (sb,error) => { sb.append( error ).append( littleware.base.Whatever.NEWLINE ) } ).toString
      )
    }
  }


  /**
   * Same as checkIfValid, just scala-friendly return type
   */
  def checkSanity():Iterable[String]
}

trait PropertyBuilder[B] extends LittleValidator {
  builder =>
  import PropertyBuilder._
  type BuilderType = this.type

  import scala.collection.mutable.Buffer

  /**
   * List of properties allocated under this class - used by isReady() below -
   * use with caution.
   */
  protected val props:Buffer[Property[_]] = Buffer.empty

  /**
   * Default implementation is props.flatMap( _.checkSanity ) 
   */
  def checkSanity():Seq[String] = props.toSeq.flatMap( _.checkSanity() )

  override def toString():String = props.mkString(",")   

  def copy(value:B):BuilderType
  def build():B

  /**
   * Typical property, so build has things like
   *     val a = new Property(-1) withName "a" withValidator { x => ... }
   *
   * Note: this type is intertwined with PropertyBuilder - don't
   *    try to pull it out of a being a subclass - turns into a mess
   */
  class Property[T](
      var value:T
    ) extends LittleValidator {    
    type Validator = (T,String) => Option[String]

    def apply():T = value

    var name:String = "prop" + builder.props.size
    var validator:Validator  = (_, _) => None

    override def checkSanity():Seq[String] = validator(this.value, this.name).toSeq
    def withValidator(v:Validator):this.type = {
      validator = v
      this
    }

    def withName(v:String):this.type = {
      this.name = v
      this
    }

    override def toString():String = "" + name + "=" + value + " (" + checkSanity().mkString(",") + ")"

    /** Chainable assignment */
    def apply(v:T):BuilderType = { value = v; builder }

    builder.props += this 
  }

  /**
   * Property accepts multiple values
   */
  class BufferProperty[T] extends Property[Buffer[T]](Buffer.empty) {
    def add( v:T ):BuilderType = { value += v; builder }
    def addAll( v:Iterable[T] ):BuilderType = { value ++= v; builder }
    def clear():BuilderType = { value.clear(); builder; }

    def withMemberValidator(memberValidator:(T,String) => Option[String]):this.type =
      withValidator(
        (buff, propName) => buff.view.flatMap({ it => memberValidator(it, propName) }).headOption
      )  
  }  

  class OptionProperty[T] extends Property[Option[T]](None) {
    def set(v:T):BuilderType = { value = Option(v); builder }

    def withMemberValidator(memberValidator:(T,String) => Option[String]):this.type =
      withValidator(
        (option, propName) => option.flatMap({ it => memberValidator(it, propName) })
      )  

  }
}

object PropertyBuilder {  
  /** littleware.scala.Messages resource bundle */
  val rb = java.util.ResourceBundle.getBundle( "littleware.scala.Messages" )

  def rxValidator(rx:Regex)(value:String, name:String):Option[String] = {
    if (null == value || !rx.matches(value)) {
      Some(s"${name}: ${value} !~ ${rx}")
    } else {
      None
    }
  }

  def notNullValidator(value:AnyRef, name:String):Option[String] = {
    if (null == value) {
      Some(s"${name}: is null")
    } else {
      None
    }
  }

  def positiveIntValidator(value:Int, name:String):Option[String] = {
    if (value <= 0) {
      Some(s"${name}: is not positive")
    } else {
      None
    }
  }

  def positiveLongValidator(value:Long, name:String):Option[String] = {
    if (value <= 0) {
      Some(s"${name}: is not positive")
    } else {
      None
    }
  }

  def dnsValidator = rxValidator(raw"([\w-]{1,40}\.){0,10}[\w-]{1,40}".r)(_, _)
  def emailValidator = rxValidator(raw"[\w-_]{1,20}@\w[\w-.]{1,20}".r)(_, _)
  def pathLikeValidator = rxValidator(raw"([\w-:_.*]{1,255}/){0,20}[\w-:_.*]{1,255}".r)(_, _)


}
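
The validator combinators compose with orElse, so a property can chain a regex check with a length check the same way drawerValidator does above. Here is a self-contained sketch of that pattern - it re-declares rxValidator so the snippet stands alone, and the ValidatorSketch name and 20-character limit are hypothetical:

```scala
import scala.util.matching.Regex

object ValidatorSketch {
  // a validator maps (value, property name) to an optional error message
  type Validator[T] = (T, String) => Option[String]

  // same shape as PropertyBuilder.rxValidator above
  def rxValidator(rx: Regex)(value: String, name: String): Option[String] =
    if (null == value || !rx.matches(value)) Some(s"${name}: ${value} !~ ${rx}")
    else None

  def maxLenValidator(max: Int)(value: String, name: String): Option[String] =
    if (value.length > max) Some(s"${name} is too long") else None

  // chain validators with orElse - first failure wins
  val apiValidator: Validator[String] = (value, name) =>
    rxValidator(raw"[a-z][a-z0-9-]+".r)(value, name) orElse
      maxLenValidator(20)(value, name)
}
```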