I wired up a little fishRunner tool (https://github.com/frickjack/littleware-fishRunner) that deploys a java web archive (.war file) to an embedded glassfish server. I set up the fishRunner to simplify launching web services on heroku - a cool polyglot PaaS offering that extends AWS's IaaS with APIs that automate load-based allocation of compute nodes for a linux web application, and that also manages network load-balancing, logging, database provisioning, and an array of other services. A developer deploys an application to heroku by pushing the application's code to a heroku-hosted git repository. The code includes a configuration file that specifies a linux command line to launch the application. Each launched instance of the application runs in a container, similar to a BSD jail, that heroku calls a dyno.
Heroku's git-based deployment reflects its roots as a polyglot platform supporting dynamic languages like ruby and php, which deploy a webapp by installing code behind a server. When heroku announced java support on its blog, the company made a virtue of its necessity to deploy by pushing code that is compiled and executed on the dyno, describing java's enterprise J2EE stack as ill-suited for software-as-a-service (SaaS) deployment. Heroku encourages java web developers to package an application as an executable with an embedded http server like jetty, rather than assemble the application into a web archive (.war file) suitable for submission to a J2EE server container.
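Heroku's getting-started demo follows that embedded-server pattern: a main class reads the dyno's PORT environment variable and boots the server itself. Here's a minimal sketch of that shape, assuming jetty is on the classpath - the class and servlet names are placeholders of mine, not heroku's demo code:

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;

public class Main {
    // placeholder servlet - a real app registers its own servlets here
    public static class HelloServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            resp.getWriter().println("hello from an embedded jetty dyno");
        }
    }

    public static void main(String[] args) throws Exception {
        // heroku tells the dyno which port to listen on via the PORT environment variable
        final String portEnv = System.getenv("PORT");
        final int port = (portEnv != null) ? Integer.parseInt(portEnv) : 8080;

        final Server server = new Server(port);
        final ServletContextHandler context = new ServletContextHandler(ServletContextHandler.SESSIONS);
        context.setContextPath("/");
        context.addServlet(new ServletHolder(new HelloServlet()), "/*");
        server.setHandler(context);
        server.start();
        server.join();
    }
}
```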
I see two shortcomings in heroku's approach to java deployment. First, it requires the developer to manage an embedded server. Heroku's demo app (https://devcenter.heroku.com/articles/getting-started-with-java) shows that configuring jetty is easy for a simple web application, but the embedded container becomes harder to manage as the application's complexity grows to take in technologies like JPA, JDBC, IOC, and JNDI. I'm used to developing against a subset of the java EE APIs, and delegating to a container (server) the responsibility for managing the environment those APIs require. Deploying compiled code to a container is common across java runtimes - android does it, as do plugins and extensions for OSGi-based platforms like eclipse, netbeans, and glassfish.
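As a hedged illustration of that division of labor (not code from littleware): the webapp just declares the resource it needs, and the container binds a managed implementation at deploy time - the jdbc/littleDB JNDI name below is made up.

```java
import java.io.IOException;
import java.sql.Connection;
import java.sql.SQLException;
import javax.annotation.Resource;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;

// the container provisions, pools, and injects the JDBC resource;
// the webapp never constructs or configures the connection pool itself
@WebServlet("/dbcheck")
public class DbCheckServlet extends HttpServlet {
    @Resource(name = "jdbc/littleDB")  // hypothetical JNDI name bound by the container
    private DataSource dataSource;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        try (Connection conn = dataSource.getConnection()) {
            resp.getWriter().println("connected to " + conn.getMetaData().getDatabaseProductName());
        } catch (SQLException ex) {
            throw new ServletException("database lookup failed", ex);
        }
    }
}
```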
My second complaint is that I don't like the idea of deploying java source code that is compiled on the production server. I'm used to a workflow where I build and test locally, then deploy a binary package. When working with a team I would introduce Jenkins or a similar continuous integration service into the mix, so that each patch submitted to a shared repository is automatically checked out, compiled, tested, and deployed to a shared test environment isolated from production. I can imagine a production-deployment setup where, once the team is ready to release the code running in the test environment, the code is deployed to a beta environment that shares the production database but is not yet visible to the public. The code is finally released publicly by flipping a switch that moves the beta servers into production, while the old production servers stay online as a fallback if something goes wrong. Anyway - that's all just building imaginary castles; my personal configuration-management needs are not complex. Back to the point - I don't like the idea of pushing code to the runtime server.
These are small complaints that have been raised before in other places (openShift's blog, java.net). Heroku does now have an "enterprise for java" offering that supports war-file deployment to a tomcat container, and a sample application (https://github.com/heroku/java-sample) illustrates how to include tomcat's webapp-runner in the maven pom for a webapp project that compiles a war. There are also other PaaS clouds that cater to the java EE developer, including RedHat's OpenShift, cloudbees, jelastic, Oracle's cloud, HP's cloud, AWS elastic beanstalk, and others.
In the end I'm still working with heroku - it's a great service whose benefits far outweigh its drawbacks: the price is right for development, it frees me from the linux administration that comes with using IaaS like EC2 directly, my app comes up with a reasonable DNS name for an AJAX service (littleware.herokuapp.com) already network load-balanced and with SSL (https), and heroku runs on AWS, so I can access AWS services (dynamodb, S3, simple queue, ...) without paying for off-AWS data transfer. Finally, the fishRunner lets me deploy war files to heroku in a nice way. The fishRunner takes an approach similar to tomcat's webapp-runner, but runs an embedded glassfish server supporting the java EE 7 web profile. The fishRunner also supports downloading a war file from an S3 bucket, so the workflow I envision deploys just the fishRunner's code (something like 5 files) to a heroku app. At runtime the fishRunner downloads the war file and JAAS login config from the S3 locations defined in the heroku environment (via heroku config) - it can also just use local files for local testing - then starts the glassfish server listening on heroku's environment-specified port, registers the postgres connection pool defined by heroku's DATABASE_URL environment variable with glassfish's JNDI, configures JAAS, and deploys the war.
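The heart of that flow is the embedded glassfish API (org.glassfish.embeddable). The sketch below shows just the boot-and-deploy steps under simplified assumptions (local war file, no S3 download, no JDBC pool or JAAS setup); the class name is made up and this is not the fishRunner's actual code.

```java
import java.io.File;
import org.glassfish.embeddable.Deployer;
import org.glassfish.embeddable.GlassFish;
import org.glassfish.embeddable.GlassFishProperties;
import org.glassfish.embeddable.GlassFishRuntime;

public class MiniFishRunner {
    public static void main(String[] args) throws Exception {
        // heroku supplies the listen port and context root through the environment
        final String portEnv = System.getenv("PORT");
        final int port = (portEnv != null) ? Integer.parseInt(portEnv) : 8080;
        final String contextEnv = System.getenv("CONTEXT_ROOT");
        final String contextRoot = (contextEnv != null) ? contextEnv : "myapp";

        final GlassFishProperties props = new GlassFishProperties();
        props.setPort("http-listener", port);

        final GlassFish glassfish = GlassFishRuntime.bootstrap().newGlassFish(props);
        glassfish.start();

        // the real fishRunner registers the postgres pool from DATABASE_URL in JNDI
        // and configures JAAS before this step
        final Deployer deployer = glassfish.getDeployer();
        deployer.deploy(new File("appsWeb.war"), "--contextroot", contextRoot);
    }
}
```

Launched without its required configuration, the actual fishRunner fails fast and prints the parameters it expects: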
```
> java -cp 'target/*;.' littleware.apps.fishRunner.FishApp
...
SEVERE: Failed to launch webapp
littleware.apps.fishRunner.FishApp$ConfigException: Parameter must be specified in environment or on command line: DATABASE_URL
        at littleware.apps.fishRunner.FishApp.main(FishApp.java:250)
Oct 12, 2013 3:52:38 PM littleware.apps.fishRunner.FishApp main
INFO: fishRunner key value key value ...
Options pulled first from system environment, then overridden by command line values:
    S3_KEY
    S3_SECRET
    S3_CREDSFILE - either both S3_KEY and S3_SECRET or S3_CREDSFILE must be defined
    WAR_URI - required - either an s3:// URI otherwise treated as local file path
    PORT - optional - defaults to 8080 if not otherwise specified
    LOGIN_URI - optional - JAAS login.conf location either an s3:// URI otherwise treated as local file path
    CONTEXT_ROOT - required - glassfish deploy context root for war
    DATABASE_URL - required - ex: postgres://user:password@host:port/database
```
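The "environment first, command line overrides" rule in that usage message is simple to picture. A hypothetical helper (not the fishRunner's own code) might resolve it like this, given "key value" pairs on the command line:

```java
import java.util.HashMap;
import java.util.Map;

public class ConfigResolver {
    /** Seed the config from the process environment, then let "key value" argument pairs override. */
    public static Map<String, String> resolve(String[] args, String... keys) {
        final Map<String, String> config = new HashMap<String, String>();
        for (String key : keys) {
            config.put(key, System.getenv(key));   // environment first
        }
        for (int i = 0; i + 1 < args.length; i += 2) {
            config.put(args[i], args[i + 1]);      // command line overrides
        }
        return config;
    }

    public static void main(String[] args) {
        final Map<String, String> config = resolve(args, "PORT", "WAR_URI", "CONTEXT_ROOT", "DATABASE_URL");
        System.out.println("resolved config: " + config);
    }
}
```

On heroku, the environment side of that equation comes from the app's config vars: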
```
> heroku config
=== littleware Config Vars
CONTEXT_ROOT:               littleware_services
DATABASE_URL:               postgres://...
HEROKU_POSTGRESQL_NAVY_URL: postgres://...
JAVA_OPTS:                  -Xmx384m -Xss512k -XX:+UseCompressedOops
MAVEN_OPTS:                 -Xmx384m -Xss512k -XX:+UseCompressedOops
PATH:                       /app/.jdk/bin:/usr/local/bin:/usr/bin:/bin
S3_KEY:                     ...
S3_SECRET:                  ...
WAR_URI:                    s3://apps.frickjack.com/repo/littleware.apps/appsWeb/1.0-SNAPSHOT/appsWeb-1.0-SNAPSHOT.war
```
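One wrinkle worth noting: heroku publishes DATABASE_URL in postgres://user:password@host:port/database form, while a JDBC pool wants a jdbc:postgresql:// URL plus separate credentials, so the runner has to translate before registering the pool. A hypothetical converter (for illustration only, not fishRunner's actual code):

```java
import java.net.URI;

public class HerokuDbUrl {
    /**
     * Split heroku's postgres://user:password@host:port/database form into
     * the pieces a JDBC connection pool wants: url, user, password.
     */
    public static String[] toJdbc(String databaseUrl) {
        final URI uri = URI.create(databaseUrl);
        final String[] userInfo = uri.getUserInfo().split(":", 2);
        final String jdbcUrl = "jdbc:postgresql://" + uri.getHost() + ":" + uri.getPort() + uri.getPath();
        return new String[] { jdbcUrl, userInfo[0], userInfo[1] };
    }

    public static void main(String[] args) {
        final String[] parts = toJdbc("postgres://testuser:secret@db.example.com:5432/testdb");
        System.out.println(parts[0] + " as user " + parts[1]);
    }
}
```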
With heroku as a deployment platform, I'm back banging on littleware's simple JAAS authentication service. It's running now on my little dyno with CORS headers allowing AJAX access from pages hosted in my S3 bucket at http://apps.frickjack.com/. I'm working on the client-side javascript (or typescript) modules that will make the service easy to use in the browser, including support for OpenId via the littleId code I wrote a while ago, but currently the service just plugs in a null JAAS module that accepts any user password (a sketch of that kind of placeholder module follows the list below). You can get a feel for the service by visiting these links in sequence:
- https://littleware.herokuapp.com/littleware_services/auth/login - returns a json block with the id for an unauthenticated session - also stored in a cookie.
- https://littleware.herokuapp.com/littleware_services/auth/login?action=login&user=testuser&password=whatever - authenticates the session for testuser - the returned json block includes a signed web token that other services can use to verify the client's identity. The token is also stored in the cookie.
- https://littleware.herokuapp.com/littleware_services/auth/login - returns the json block with the credentials for the authenticated session.
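For the curious, a "null" JAAS login module of the kind mentioned above is tiny. This is only an illustrative sketch that accepts every user/password pair, not the littleware implementation:

```java
import java.util.Map;
import javax.security.auth.Subject;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.login.LoginException;
import javax.security.auth.spi.LoginModule;

public class NullLoginModule implements LoginModule {
    private Subject subject;

    @Override
    public void initialize(Subject subject, CallbackHandler callbackHandler,
                           Map<String, ?> sharedState, Map<String, ?> options) {
        this.subject = subject;
    }

    // accept any credentials - useful only as a development placeholder
    @Override public boolean login() throws LoginException { return true; }
    @Override public boolean commit() throws LoginException { return true; }
    @Override public boolean abort() throws LoginException { return true; }
    @Override public boolean logout() throws LoginException { return true; }
}
```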