Archive for January, 2009

Remote Scripting for AWS

Saturday, January 24th, 2009

When signing up for Amazon Web Services, you end up generating all of the following identifying information:

  • Your Account Number
  • An Access Key
  • A Secret Access Key
  • An SSL Certificate file
  • The Private Key for your SSL Cert

The various AWS command APIs and tools will require you to provide one or more of these pieces of information. Rather than actually host all of this information on my instances, I have instead chosen to build Ruby scripts using:

  • the amazon-ec2 gem
  • Net::SSH

I’m small-scale for the moment, so I can keep a centralized list of my instance IDs. Without too much effort, I can look up the instance(s), identify their public DNS (plus everything else), and then open up an ssh connection and push data into a channel’s environment or upload files for bundling purposes.

Here are a few things that I’ve learned in the process.


Most of your core work gets done on a Net::SSH::Connection::Channel, and sometimes asynchronously (as in the case of my Health Check script). There are a lot of library shorthands — the prototypical Hello World example is channel-less:

Net::SSH.start("localhost", "user") do |ssh|
	# 'ssh' is an instance of Net::SSH::Connection::Session
	ssh.exec! "echo Hello World"
end

But as with all examples, you soon find that surface-level tools won’t quite do the trick. The main thing you’ll need all the time is convenient access to the data coming back from the channel, asynchronously or otherwise. So create yourself a simple observer:
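
A minimal sketch of such an observer, assuming you wire it into the channel callbacks yourself; the class and method names here are my own, not from the original scripts:

```ruby
# Accumulates stdout, stderr and the exit status from a channel, so you can
# make conditional decisions during and after execution.
class ChannelObserver
  attr_reader :stdout, :stderr, :exit_status

  def initialize
    @stdout = ''
    @stderr = ''
  end

  # Wire these up via the Net::SSH channel callbacks, e.g.:
  #   channel.on_data          { |ch, data|       observer.on_stdout(data) }
  #   channel.on_extended_data { |ch, type, data| observer.on_stderr(data) }
  #   channel.on_request('exit-status') { |ch, data| observer.on_exit(data.read_long) }
  def on_stdout(data)
    @stdout << data
  end

  def on_stderr(data)
    @stderr << data
  end

  def on_exit(code)
    @exit_status = code
  end

  def success?
    exit_status == 0
  end
end
```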

Then during and after executions, you’ll have access to all the core information you’ll need to make conditional decisions, etc.

Pushing Your Environment

Many of the AWS tools will recognize specific environment variables. The usual suspects are the ones consumed by the EC2 command-line tools:

  • EC2_HOME
  • EC2_PRIVATE_KEY
  • EC2_CERT

So, you can use Net::SSH::Connection::Channel.env to inject key-value pairs into your connection. However, you’ll need to make some config changes first — and major thanks to the Net::SSH documentation for clarifying this in its own RDoc:

“Modify /etc/ssh/sshd_config so that it includes an AcceptEnv for each variable you intend to push over the wire”

#       accept EC2 config
AcceptEnv EC2_HOME EC2_PRIVATE_KEY EC2_CERT

After you make your changes, make sure to bounce the ssh daemon:

/etc/init.d/sshd restart

Then your selected environment will shine through.

Running sudo Remotely

It’s reasonable to execute root-level commands using sudo, because it gives you a good amount of granular control.

The first thing that we’ll all run into is:

sudo: sorry, you must have a tty to run sudo

Fortunately, there’s Net::SSH::Connection::Channel.request_pty:

ch.request_pty do |ch, success|
	raise "could not start a pseudo-tty" unless success

	#	full EC2 environment
	#	ch.env 'key', 'value'

	ch.exec 'sudo echo Hello 1337' do |ch, success|
		raise "could not exec against a pseudo-tty" unless success
	end
end
I’ve taken the approach of allowing NOPASSWD execution for everything I do remotely, after making darn sure that I had constrained exactly what could be done under those auspices (HINT: pounds of caution, tons of prevention). You can configure all of this by editing /etc/sudoers:

# EC2 tool permissions
username  ALL=(ALL)  NOPASSWD: SETENV: /path/to/scripts/*.rb

Also, if you intend to have those scripts consume the environment variables that you’ve injected into your channel, you’ll need to annotate the line with SETENV, or they won’t carry across into your sudo execution.

If you are more security-minded, there are many other variations you can play with in /etc/sudoers. I haven’t yet experimented with pushing a sudo password across the wire, but that may be where Net::SSH::Connection::Channel.send_data comes into play.

Port Forwarding

I wanted to easily do SSH tunneling / port-forwarding against an instance, referenced by my local shorthand. So I wrote my usual EC2 instance ID lookup and started a Net::SSH connection. Here’s how you put yourself in ‘dumb forward’ mode (the standard Net::SSH idiom, where PORT is the port you’re forwarding):

ssh.forward.local(PORT, 'localhost', PORT)
ssh.loop { true }

And then [Ctrl-C] will break you out.

Inconsistent EC2 Marshalling

I’d gotten all of my EC2 data parsing working great; then I started to expand my capabilities with the Amazon S3 API gem (aws-s3). Suddenly and magically, the results coming back from my EC2 queries had taken on a new form. Moreover, this was not consistent behavior across all operating systems; at least, I do not see it on WinXP, under which my health monitoring checks execute.

The Amazon APIs are SOAP-driven, so I guessed that amazon-ec2 (and aws-s3) would both leverage soap4r. Well, that is in fact not the case; Glenn has commented below on this known issue. He uses the HTTP Query API, and the cause of the discrepancy is (as of this writing) still undetermined.

Regardless, our job is to get things to work properly. So for now, let’s use a quick & dirty work-around. The two core differences are:

  • the data coming back may be wrapped in an additional key-value pair
  • single value result-sets come back as an Object (vs. an Array with a single Object)

So I crafted a little helper singleton, which still allows me to distinguish between ‘valid’ and ’empty’ requests (nil vs []):
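
A sketch of such a helper; the class name, the Singleton choice, and the default wrapper key are my own illustration of the two normalizations described above:

```ruby
require 'singleton'

# Normalizes marshalled EC2 query results:
#   - unwraps the extra key-value pair, when present
#   - wraps a bare single Object into an Array
# A nil result stays nil, so 'empty' requests remain distinguishable
# from 'valid' ones (nil vs []).
class ResultNormalizer
  include Singleton

  def normalize(result, wrapper_key = 'item')
    return nil if result.nil?
    if result.is_a?(Hash) && result.keys == [wrapper_key]
      result = result[wrapper_key]
    end
    result.is_a?(Array) ? result : [result]
  end
end
```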

That will address our immediate needs for the moment, until a beneficent coding Samaritan comes along.

Easy delivery with msmtp and GMail

Friday, January 23rd, 2009

At the moment, I really don’t feel like setting up a full-fledged MTA such as sendmail, postfix or qmail. I want to take the simple course, basically because I’m lazy. Fortunately, there are a variety of simple SMTP ‘relays’ out there, such as ssmtp and esmtp. Some network aficionados may consider this to be re-inventing the wheel, but then again, I’m sure glad that my car doesn’t roll on stone cylinders.

After some consideration, I chose to go with msmtp. I like its flexible configuration, and it’s just the right size for the job (with room to grow). The major thing I was looking for was STARTTLS support. I wasn’t so concerned with the trust files and certificates; I just had a need to support GMail’s minimum requirements. Yes, msmtp gives you that and the whole 9 yards, for when I need them all.

msmtp Configuration for GMail

With a combination of their official configuration example plus a few targeted suggestions from Grey Bearded Geek’s take on ssmtp, I came up with the following:
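
Something along these lines; the account name and the certificate-check shortcut are my own, but the keywords are msmtp’s real configuration directives:

```
defaults
logfile ~/.msmtp.log

account gmail
host smtp.gmail.com
port 587
auth on
tls on
tls_starttls on
# skipping the trust files / certificate verification
tls_certcheck off
from GMAIL-USER@gmail.com
user GMAIL-USER@gmail.com
password GMAIL-PASSWD

account default : gmail
```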

You plop in the GMAIL-USER and GMAIL-PASSWD, and you’re good to go.

Custom From: Address

I soon learned that the from and maildomain settings are irrelevant; Google will not let you arbitrarily change the From: header of your mail. That makes sense. So the mail will appear as if it’s coming from you, personally. Well, it turns out that there are a few things you can do to get around that.

  • Create yourself a dedicated GMail account. Now you have isolated your soon-to-be-wildly-popular start-up’s e-mail account from your personal one.
  • Follow the instructions on adding a custom From: address to your account. I had to use the older version of the GMail interface to do so. Google will verify that you own the address — you’d better be able to receive mail at that address — and then you can make it your default.

    GMail will now send your mail as if it were coming from that address, but it will do so without providing an alias.

  • When sending your outbound mail, you can include the following headers:
    From:  ALIAS <USER@DOMAIN>
    Reply-To:  ALIAS <USER@DOMAIN>

    Google will respect the ALIAS portion of the From: address, though not the address itself. The Reply-To: is optional, but respected in its entirety (alias and address).

Works like a charm.

WordPress and PmWiki under nginx

Friday, January 23rd, 2009

I recently double-checked my nginx configuration against the one that Elastic Dog has so proudly featured. I’m very glad that I did — they provided me with a better understanding of the if / test capabilities of the syntax.

That being said, it still needed some adjustments …


I’m currently running WordPress 2.7 under nginx 0.7.27. Here’s my end configuration:
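
A simplified sketch of the shape of that configuration — the location blocks and rewrites here are illustrative, not the exact file:

```nginx
server {
    listen       80;
    server_name  DOMAIN.NAME;
    access_log   LOGFILE;

    # /PATH/TO/WORDPRESS-DIR
    root         /var/www/wordpress;
    index        index.php;

    # 'WordPress address (URL)' capture block:
    # make the URI context-less (strip /WP-CONTEXT) and re-match
    location ^~ /wordpress {
        rewrite  ^/wordpress(/.*)$  $1  last;
    }

    # 'Blog address (URL)' capture block: bare permalinks fall back to index.php
    location / {
        if (!-e $request_filename) {
            rewrite  ^  /index.php  last;
        }
    }

    # hand PHP off to the upstream FastCGI cluster
    location ~ \.php$ {
        include        fastcgi_params;
        fastcgi_index  index.php;
        fastcgi_pass   fastcgi_cluster;
    }
}
```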

I’ll provide the content of extra/proxy.conf and fastcgi_params down below — they won’t surprise you — plus the configuration for my upstream fastcgi_cluster.

The purpose of DOMAIN.NAME and LOGFILE is obvious, so let’s skip to the useful stuff.


This is simply the fully-qualified path to the directory where you have installed WordPress. Big shock, I know. I put mine in ‘/var/www/wordpress’.


Specifically, I’m referencing the ‘WordPress address (URL)’ capture block.

WordPress 2.7 supports a differentiation between the root context of your blog and the root context of the WordPress resources themselves. I’ve taken this approach … the URL of this blog post is root-relative to my virtual hostname, but if you do a View Source you’ll see:

<link rel="stylesheet" href="http://blog.cantremember.localhost/wordpress/wp-content/themes/cantremember/style.css" type="text/css" media="screen" />
<link rel="pingback" href="http://blog.cantremember.localhost/wordpress/xmlrpc.php" />

It’s a nice-to-have, and in many ways allows the configuration to be somewhat easier. In WordPress Admin, under Settings | General, I have configured:

  • WordPress address (URL) = http://blog.cantremember.localhost/wordpress
  • Blog address (URL) = http://blog.cantremember.localhost

So my WP-CONTEXT is ‘wordpress’.


Here are the core differentiations between my config and the Elastic Dog one:

My core two sections are the ones with WP-CONTEXT. Before doing anything, I make the $request_filename context-less, so that it’s corrected relative to root. Granted, I could have skipped that step because I used ‘wordpress’ for each, but that doesn’t make for as good an example, and regex’s aren’t that expensive (don’t they have dedicated chips for them by now?).

I was having issues when WordPress wanted to take me to the Admin screen. It used the shortcut ‘/WP-CONTEXT/wp-admin‘, which is great if you’re not doing all this fancy re-writing and fastcgi_index can take over. But we are being fancy. That’s why the $request_filename/index.php text exists. It works like a charm, although there may be a more efficient way to do this.

And here is where it became an advantage to differentiate between blog URLs and WordPress resources. I’ve chosen to make my permalinks dateless — /%postname%/ . Call me crazy, but I like the way it looks on Laughing Squid. Given that’s the case, it’s hard to differentiate between ‘/some-permalink/’ and ‘/wp-admin/’. Splitting them off with the ‘wordpress’ context made this possible.

The final context-less ‘Blog address (URL)’ capture block is exactly what you’d expect.


I liked the simplicity and capabilities of PmWiki 2.2.0. It’s an easier decision, since I have no intention of being a grand-scale collective document facility. PmWiki is a powerful and flexible implementation with a lot of great processing directives that you can embed in a page. Yet that also makes security something of a concern (as some reviewers will point out as well). Global multi-tier password auth is available, and user-based auth is available as necessary.

This configuration is a natural extension of the WordPress one above:
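
A sketch of that extension (again illustrative, not the exact file). The rewrite leans on nginx automatically appending the original querystring with ‘&’, and drops the leading ‘/’ from the Group/Name combo:

```nginx
server {
    listen       80;
    server_name  DOMAIN.NAME;
    access_log   LOGFILE;

    # /PATH/TO/PMWIKI-DIR
    root   /var/www/pmwiki;
    index  pmwiki.php;

    # $EnablePathInfo URLs: /Group/Name => /pmwiki.php?n=Group/Name
    location / {
        if (!-e $request_filename) {
            rewrite  ^/(.*)$  /pmwiki.php?n=$1  last;
        }
    }

    location ~ \.php$ {
        include        fastcgi_params;
        fastcgi_pass   fastcgi_cluster;
    }
}
```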

Everything here is obvious, including /PATH/TO/PMWIKI-DIR. Mine is ‘/var/www/pmwiki’. Here’s the lowdown:

In the *.php capture block, you’ll see that the default script is pmwiki.php. I had created a symlink to rename it index.php, but after my config re-adjustment, that became obsolete.

The non-existing file test will be triggered by the following requests:

  • /Main/HomePage
  • /Main/HomePage?action=edit

Those URLs exist because I’m leveraging a feature called $EnablePathInfo. The referenced documentation doesn’t do it justice … this feature allows me to have bare Group/Name URLs, much like I’m doing with my bare blog URLs. I’ll just say that I’m being SEO-minded and leave it at that.

Turning on that feature informs PmWiki to generate the URLs in that format, and it also makes the PHP script capable of parsing the CGI headers to do-the-right-thing. My original configuration required me to perform the following override hack:

include  fastcgi_params;
fastcgi_param SCRIPT_NAME '';

But the revised configuration above simply re-writes the URL into the standard '?n=‘ format and the script never has to deal with CGI headers. The only other rewrite considerations were to transform any querystring ‘?’ into ‘&’ and to remove the leading ‘/’ from the Group/Name combo.


Supporting Configuration

For all intents and purposes, I’m using nginx’s default fastcgi_params.

This is extra/proxy.conf, derived from their NginxFullExample, with notes-to-self intact:
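
It boils down to settings like these — the values shown are the NginxFullExample wiki defaults, not necessarily mine:

```nginx
proxy_redirect              off;
proxy_set_header            Host             $host;
proxy_set_header            X-Real-IP        $remote_addr;
proxy_set_header            X-Forwarded-For  $proxy_add_x_forwarded_for;
client_max_body_size        10m;
client_body_buffer_size     128k;
proxy_connect_timeout       90;
proxy_send_timeout          90;
proxy_read_timeout          90;
proxy_buffer_size           4k;
proxy_buffers               4 32k;
proxy_busy_buffers_size     64k;
proxy_temp_file_write_size  64k;
```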

This is fastcgi_cluster, which is just a simple example of how to do clustering:
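
A sketch of that upstream block (the ports are illustrative):

```nginx
upstream fastcgi_cluster {
    # 3 spawn-fcgi instances, 5 worker threads each
    server  127.0.0.1:9000;
    server  127.0.0.1:9001;
    server  127.0.0.1:9002;

    # additional per-server settings weren't working in my 0.7.27 build:
    # server  127.0.0.1:9000  weight=2 max_fails=3 fail_timeout=30s;
}
```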

I tried using some of the additional server setting features — commented out above — but they weren’t working in my build of 0.7.27. I can live without them at the moment, but the upstream block capabilities are quite powerful.

I’m running 3 FastCGI instances, each with 5 worker threads. Again, good enough for government work. I custom-built fcgi on OS X, but for my AWS Fedora Core 8 image I just went with spawn-fcgi that comes along with the lighttpd package.

This cluster config is also a nice starter reference for adding load-balancing capabilities to external AWS instances. Given the volatile nature of VM image mappings, I’ve split the cluster config off into its own file for scripted generation.

In Summary

I’m very pleased with nginx. It has been very stable — the only time I’ve taken it down was when I set it up for infinite HTTP 302 redirects, and even then it took several hours of user activity to knock it over. The configuration syntax is very powerful, and I haven’t once been wistful for my old Apache habits :) .

Learning Ruby through Assertions and Podcasts

Thursday, January 22nd, 2009

I’ve been working with the Ruby language since March 2008. So (as of this writing) I’m still on the n00b path.


The first thing I did was to follow the great advice of Dierk Koenig, writer of Groovy in Action and general Groovy / Grails advocate. The book itself doesn’t use the typical print-the-result-based code examples; it encourages the reader to learn the language through assertions. And that’s how I learned Groovy; I took the examples from the book, paraphrased them, tried variations on a theme, and then asserted that my results were true. Now when I need to know how to use a language feature, I simply look back at my assertion code to re-edjumicate myself.

I learned the core Ruby language via Test::Unit. I spent three weeks (please don’t laugh) worth of my daily commute writing assertions for the core language, the standard packages, plus ActiveRecord and other common gems. It allowed me to get a handle on the concepts, syntax, semantics and the sheer range of capabilities of the language. I frequently look back at my language_test_unit.rb to figure out the best use of Array.collect, catch..throw, Regexp quirks, and using declared lambdas as blocks (etc). More importantly, I’ve already written code using all of those techniques, so it’s just a refresher.
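
A few assertion-style notes in that spirit — the examples themselves are mine, not lifted from that test file:

```ruby
# Array#collect is just map by another name
raise unless [1, 2, 3].collect { |n| n * 2 } == [2, 4, 6]

# catch..throw unwinds to the matching catch, returning the thrown value
result = catch(:done) do
  [1, 2, 3].each { |n| throw :done, n if n == 2 }
  :never
end
raise unless result == 2

# a declared lambda, passed where a block is expected
double = ->(n) { n * 2 }
raise unless [1, 2, 3].map(&double) == [2, 4, 6]

# Regexp quirk: =~ returns a match index (or nil), not a boolean
raise unless ('hello' =~ /l+/) == 2
```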

I cannot recommend this technique enough for coming up to speed on a language!


With that under my belt, plus some command-line scripts and a solid Rails project, I’m now spending time back-consuming posts from the following Podcast blogs:


Site : Feed

I’m actively back-consuming a lot of content from the wealth that Chris Matthieu has provided. There are some great talks on Journeta, using EC2, great tutorials covering Ruby basics and RoR, and some scaling recommendations.

sd.rb Podcast

Site : Feed

Straight from the mouth of the San Diego Ruby Users group. A good variety of topics, focusing more on the Ruby language than on the Rails poster-child itself. Nice talks on rspec, MySQL clustering and Arduino, amongst many others.


Site : Feed

With 145 postings and counting, there’s a lot to be consumed here. However, this is the last on my list, because none of them download to my iPhone 3G :( . Lots of cross coverage on Capistrano, Partials, custom Routes, ruby-debug … the list goes on.

Ruby on Rails Podcast

Site : Feed

Geoffrey Grosenbach’s podcasts are seminal. I’ll leave it up to the reader to pore through the years of accumulated wisdom. How can you go wrong when you’re part of the domain!

In Summary

A number of these feeds provide screencasts and/or video. A few of the files are old-school QuickTime MOVs that are problematic for the iPhone, which is annoying (definitely not the podcaster’s fault… get your head in gear, Apple). And unfortunately, when I break away to write something down and there are any visuals associated with the cast, the iPhone halts playback. Grr. So I’m getting into the archaic habit of creating a Notes page and mailing it to myself :)

I recommend each and all of these podcasts. Be prepared to sink a lot of time into them, so you might as well upload them onto your iPhone and take them to the beach!

Remote Debugging using JConsole, JMX and SSH Tunnels

Thursday, January 22nd, 2009

I’ve recently hosted my Spring / Hibernate webapp in the cloud thanks to Amazon Web Services. A future post will mention the monitoring I’ve put in place, but Tomcat keeps dying after about 36 hours. The first thing I need to do is enable debugging and JMX remoting so that I can put JConsole to work.

This turned out not to be as easy as I would have liked. Even with a bevy of useful resources available — lotsa people have run into these issues — it took a while to find the right combination. Let’s hope I can save you a bit of that pain …

JConsole / JMX Remoting via SSH Tunnels

As I mentioned, I’m hosting this solution in the cloud. So, when things go bad, I need to be able to debug remotely. I don’t want to open up my security group to general traffic, so using SSH tunnels is the best option. JConsole is a great tool for measuring current statistics and performance of your Java app, and relies on JMX remoting. It works great locally, or even within a trusted network, but once you’re behind a firewall with broad port-blocking, there are some significant issues. There are several core Java forum topics related to this discussion:

Daniel Fuchs has written several articles which illustrate these issues and provide good solutions. He explains that JMX remoting needs two ports: one for the RMI registry, and one for the RMI connection objects, which are stubs used for remoting all the critical data. If you’re using the default JVM agent, you’ll tend to use the following JVM system properties on the server:
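
The usual set looks something like this (the port and the file locations are placeholders):

```
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=RMI-REGISTRY-PORT
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=true
-Dcom.sun.management.jmxremote.password.file=/PATH/TO/jmxremote.password
-Dcom.sun.management.jmxremote.access.file=/PATH/TO/jmxremote.access
```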


I’ll come back to these, but the one that’s important here is jmxremote.port. It allows you to specify the RMI registry port, the one that you’ll use to establish your remote connection via JConsole. However, the port for RMI export, which is used for all the critical data callbacks, is randomly chosen (on a session or JVM basis, I’m not sure which) and cannot be specified. And you can’t open a port for tunneling if you don’t know what it is.

You can see this issue if you crank up the debugging on JConsole. I was having issues getting the logging output so I took the double-barrel approach, using both the -debug argument and the custom java.util.logging descriptor, the contents of which I stole from here. Invoke it as follows:

jconsole -debug -J"-Djava.util.logging.config.file=FILENAME"

The quotes are optional. Provide the logging descriptor filename. You can call out the JMX Service URL or the hostname:port combination at the end if you like. Eventually you’ll see debug output much like this:

FINER: [ jmxServiceURL=service:jmx:rmi:///jndi/rmi://localhost:PORT/jmxrmi] connecting...
FINER: [ jmxServiceURL=service:jmx:rmi:///jndi/rmi://localhost:PORT/jmxrmi] finding stub...
FINER: [ jmxServiceURL=service:jmx:rmi:///jndi/rmi://localhost:PORT/jmxrmi] connecting stub...
FINER: [ jmxServiceURL=service:jmx:rmi:///jndi/rmi://localhost:PORT/jmxrmi] getting connection...
FINER: [ jmxServiceURL=service:jmx:rmi:///jndi/rmi://localhost:PORT/jmxrmi] failed to connect: java.rmi.ConnectException: Connection refused to host: IP-ADDRESS; nested exception is: Operation timed out

PORT will be the RMI registry port you’re tunneling into. IP-ADDRESS is special, we’ll get to that, and it’s important to note that it’s a ‘ConnectException‘ occurring against that host.

This debugging information can show up rather late in the whole connection process, undoubtedly because it’s an ‘Operation timed out‘ issue, so don’t be surprised if it takes a while. Fortunately, you can also see immediate verbose feedback when you set up your ssh tunnel connection (see below).

Addressing the Randomly-Generated RMI Export Port

The first problem I chose to resolve was the one relating to the random RMI export port issue. Daniel has provided a fine example of how to implement a custom ‘pre-main’ Agent which you can use to supplant the standard JVM one. There’s his quick’n’dirty version which doesn’t address security — which is where I started. And then there’s a more full-fledged version, which I modified to be configurable.

Most importantly, it builds its JMXConnectorServer with the following service URL:

service:jmx:rmi://HOSTNAME:RMI-EXPORT-PORT/jndi/rmi://HOSTNAME:RMI-REGISTRY-PORT/jmxrmi

Traditionally, you’ll see this service URL from the client perspective, where HOSTNAME:RMI-EXPORT-PORT is not defined and you just have ‘service:jmx:rmi:///jndi/rmi://...‘. JConsole will build that sort of URL for you if you just provide HOSTNAME:RMI-REGISTRY-PORT (eg. hostname:port) when connecting.

By calling out the RMI-EXPORT-PORT in the agent’s service URL, you can affix it and tunnel to it. You can use the same port as the RMI registry; this only requires you to open one port for tunneling.

On your client / for JConsole, the HOSTNAME will probably be localhost, where you’ve opened your tunnel like so:

ssh -N -v -L9999:REMOTE-HOST:9999 REMOTE-HOST

9999 is just an example port. REMOTE-HOST is the host you’re tunneling to. You can remove the -v argument, but it’s good to have around so that you can see the network activity. You can also use the -l argument to specify the login username on the remote host. Note that you’re opening the same port locally as you’re hitting on the server, with no offset. The ports must match because the agent on the server needs to know what port to call back to itself on for RMI export, and that won’t work if you have an offset. So you might as well use the same port for both the RMI registry and RMI export, and just keep that one port available locally.

On the server in your agent, the HOSTNAME part of the service URL can either be InetAddress.getLocalHost().getHostName() or an IP address; in my case, ‘localhost’ just worked fine.

The major reason to create the custom agent is to build the port-qualifying service URL. As usual, the example code takes a lot of shortcuts. So, I built myself a more re-usable agent — influenced by the standard JVM agent’s system properties — which allowed me to configure the same sorts of things as mentioned above:

  • hostname : to be used as the HOSTNAME value above
  • port.registry : to be used for RMI-REGISTRY-PORT
  • port.export : to be used for RMI-EXPORT-PORT, defaulting to the same as port.registry if not provided
  • ssl : true to enable SSL
  • authenticate : true to enable credentials / authentication
  • access.file : specifies the location of the user credentials file
  • password.file : specifies the location of the user password file

And so I was able to configure my agent service URL for localhost, using the same port for both RMI requirements, and using simple password-based auth. I did not go down the SSL route, though many of the posts from Daniel and others explain this as well. Do that once you’ve tackled the core problem :)

Another great post relating to this issue mentions that Tomcat has a custom Listener for setting up a similar agent. The example was:

<Listener className="org.apache.catalina.mbeans.JMXAdaptorLifecycleListener" namingPort="RMI-REGISTRY-PORT" port="RMI-EXPORT-PORT" host="localhost"/>

I didn’t look any deeper into this to see whether it supports SSL and/or basic authentication. But it seems clear that this is *not* a Java agent, because you have to set those up via system properties. Here’s what I needed to add to Tomcat startup for my custom agent:
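
The addition took roughly this shape; the JAR path and the agent-option syntax below are placeholders of my own, since the real namespace and JAR name are omitted:

```shell
# Placeholders only: the real agent JAR name and namespaced option keys are omitted.
# The options mirror the configurable agent properties listed above.
JAVA_OPTS="$JAVA_OPTS -javaagent:/PATH/TO/premain-agent.jar"
JAVA_OPTS="$JAVA_OPTS -Dhostname=localhost -Dport.registry=9999"
```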


I’ve omitted the namespace I used for my agent, and the formal name of the ‘pre-main’-compatible JAR file that I built using the instructions that Daniel provided. Tomcat won’t start up properly until the agent is happy; after that, you’re golden.

So I got Tomcat running, started up an ssh tunnel, and invoked JConsole. And no matter what I did, I still got ConnectException: Operation timed out. I tried to connect via JConsole in all the following ways:

  • HOSTNAME:RMI-REGISTRY-PORT
  • service:jmx:rmi:///jndi/rmi://HOSTNAME:RMI-REGISTRY-PORT/jmxrmi
  • service:jmx:rmi://HOSTNAME:RMI-EXPORT-PORT/jndi/rmi://HOSTNAME:RMI-REGISTRY-PORT/jmxrmi

All of these are valid URLs for connecting via JConsole. For a while there I wasn’t sure whether you could use the same port for both the RMI registry and export, so I could see that the JConsole log was different when I called out the RMI export info explicitly in the service URL. Still, it didn’t seem to help.

Then I started to realize that there were two separate issues going on, although they tended to blend together a lot in the posts I’d been reading.

Addressing the RMI Export Hostname

The short version is: even if you’ve set up your JMX service URL properly on the server — yes, even if you’ve set its HOSTNAME up to be ‘localhost’ — you’ll still need to tell JMX remoting which hostname the RMI export objects should use for callbacks. This requires you to provide the following system properties as well:
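
In this tunneled setup, that means the following (the hostname value must match the host your tunnel terminates at, i.e. localhost):

```
-Djava.rmi.server.hostname=localhost
-Djava.rmi.server.useLocalHostname=true
```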


The useLocalHostname property may not be relevant, but it doesn’t hurt. All this time I’d thought that, because I was configuring that information in the service URL, RMI would build the objects accordingly. But I was wrong … it doesn’t … you need to call that out separately.

What was not apparent to me — until I started to see the same articles pop up when I revised my search criteria — was the IP-ADDRESS in this exception dump:

FINER: [ jmxServiceURL=service:jmx:rmi:///jndi/rmi://localhost:PORT/jmxrmi] failed to connect: java.rmi.ConnectException: Connection refused to host: IP-ADDRESS; nested exception is: Operation timed out

It was the IP address of my instance in the cloud. The callbacks were being made back to my VM, but they needed to be made to ‘localhost’ so that they could go through the tunnel that I’d opened. The ‘Operation timed out‘ was due to the port being closed, which is the whole reason you’re using ssh tunnels in the first place. Once the RMI exported objects know to use ‘localhost’, that addresses the problem. And magically, JConsole will connect and show you all the data in the world about your server.

So you must provide those system properties above, regardless of what other configuration you’ve provided in your custom JMX agent.

Additional Concerns

There were a number of other red herrings that I followed for a while, but they were called out as being potential issues, so I kept note.

  • If your server is running Linux, there are a couple of things you’ll want to check, to make sure that your /etc/hosts is self-referencing correctly, and that you’re not filtering out packets.
  • You will have troubles stopping Tomcat when it has been started with your custom JMX agent; you’ll have to kill the process. Apparently agents don’t release their thread-pools very nicely. Daniel provides an example of an agent with a thread-cleaning thread — which still has some limitations, and raises the philosophical question ‘who cleans the thread-cleaner‘? He also provides an agent that can be remotely stopped — which is reasonably complex. I’ll save that one for a rainy day.
  • If you want to use SSL in your authentication chain, read up on Daniel’s other postings, and use the appropriate system properties (typically the javax.net.ssl.keyStore / trustStore family) on both the server and when starting JConsole.
  • I have built some Ruby scripts which allow me to dynamically look up an AWS instance’s public DNS entry and then start up a Net::SSH process with port-forwarding tunnels. This works fine for HTTP and even for remote JVM debugging, but it did not work for JMX remoting. I’m not sure why, so you should stick with using ssh for setting up your tunnels.
  • I started out this exercise using Wireshark for packet sniffing. I’m using OS X, and I installed Wireshark — the successor to Ethereal — via MacPorts. It runs under X11, which you’ll need to install from either Apple’s site or your Optional Installs DVD. I couldn’t get any Interfaces (ethernet card, etc.) to show up, until I learned that I should:

    sudo wireshark

    The app will warn you that this is unsafe, but it works. The Nominet team says that you can address this issue by providing:

    sudo chmod go+r /dev/bpf*

    However that is volatile, and has to be done whenever your Mac starts up. More config involved, so I took the easy path.

  • If you’re using a script to start and stop Tomcat, you’ll need to somehow separate out the system properties that should be used on startup, and omit them when invoking shutdown. If you invoke shutdown with your debug and/or RMI ports specified, the shutdown will fail because those ports are already in use.

    I’m using the newest standard Tomcat RPM available for Fedora Core 8 — tomcat5-5.5.27 — and it’s uniquely nutty in terms of how it is deployed:


    That’s a very non-standard arrangement. The init.d script awks the *.conf files, and a whole array of other exciting things. I still haven’t gotten it to properly do an init.d restart due to how it blends the JAVA_OPTS handling. So that’s left as a case-specific exercise.

  • The whole reason I went down this path was to address a memory leak relating to Hibernate sessions, which I blogged about a long time ago. The fix required me to invoke Tomcat with the following system property (among the related carol settings):

    -Djava.naming.factory.initial=org.objectweb.carol.jndi.spi.MultiOrbInitialContextFactory

    The org.objectweb.carol JAR, which these settings were targeted at, is part of my webapp, so it’s available in its own Classloader. However, once I put the custom JMX agent in place, I got:

    FATAL ERROR in native method: processing of -javaagent failed
    Exception in thread "main" java.lang.reflect.InvocationTargetException
    Caused by: java.lang.ClassNotFoundException: org.objectweb.carol.jndi.spi.MultiOrbInitialContextFactory

    Attempting to create a symlink to the app-specific JAR in either common/lib, common/endorsed or shared/lib did not address the issue. I had to hack the JAR into the --classpath in order to get Tomcat to start. And yes, hack was the operative term (again).

In Summary

Frankly, all that discovery was enough for one day. And yes, it took me that long to find all of the corner-cases I was dealing with. I hope that if you find this article, it will make your path a bit easier. I know I’ll be glad that I blogged about the details the next time I bump into the issue!

logrotate Mac OS Launch Daemon with Legacy MacPort

Tuesday, January 20th, 2009

Everybody loves cron. The classic basic scheduler, 80/20 flexibility, gets the job done. So, when I started with OS X, I went looking for cron.

Yes, you can cron if you want to. Or maybe, as the Mac children recommend, you can create Launch Daemons. It’s simply a custom launchd.plist, an XML file to define tasks in Apple’s terms. Sweet, I can do that…

The logrotated.plist Daemon

From what I could tell, out of the box, Leopard doesn’t give you logrotate. So, here it is as a Launch Daemon:
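
Here’s a representative logrotated.plist along those lines; the logrotate path and the schedule values are illustrative (note the <array /> of <dict />s and the <integer /> values, per the notes below):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>Label</key>
	<string>org.custom.logrotated</string>
	<key>ProgramArguments</key>
	<array>
		<string>/opt/local/sbin/logrotate</string>
		<string>/opt/local/etc/logrotate.conf</string>
	</array>
	<key>StartCalendarInterval</key>
	<array>
		<dict>
			<key>Hour</key>
			<integer>1</integer>
			<key>Minute</key>
			<integer>5</integer>
		</dict>
		<dict>
			<key>Hour</key>
			<integer>13</integer>
			<key>Minute</key>
			<integer>5</integer>
		</dict>
	</array>
	<key>KeepAlive</key>
	<false/>
	<key>LaunchOnlyOnce</key>
	<false/>
	<key>RunAtLoad</key>
	<false/>
</dict>
</plist>
```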

Great. So what do I do with it? Name it anything ‘*.plist’ — logrotated.plist perhaps — and put it here:

  • /System/Library/LaunchDaemons or
  • /Library/LaunchDaemons

I tend to use the System directory for any services I want to launch on boot (chkconfig-ish), and the plain /Library path for anything that’s just a pedestrian scheduled task.

You register daemons through launchctl. When you change a daemon, just:

$ launchctl unload FILE.plist
$ launchctl load FILE.plist

Sure, not a hard thing. What I did want to point out were some of the things that I learned:

  • Provide each space-separated component of the command line as a <string /> element in the ProgramArguments <array />. It’s the easiest way to go.
  • I tend to keep the KeepAlive, LaunchOnlyOnce and RunAtLoad all in sync for each daemon. It seems to guarantee compatibility.
  • I still haven’t gotten quite the hang of OnDemand (c’mon, I’m new to this). When you are developing your daemons, keep a watch on /var/log/system.log — if you screw up, you’ll probably see it there in one fashion or another.
  • StartCalendarInterval is the equivalent of the cron schedule mask. See the Nabble post I mentioned above. You can’t do ranges, but wildcards are do-able, sorta, by omission. If you want multiple disparate schedules, as you see above, you need to pass in an <array /> of <dict />s; a single <dict /> will do when you only need one mask.
  • Yes, datatypes matter! I couldn’t figure out for the life of me why my daemon wouldn’t start, until I realized that my Hour and Minute values were configured as <string />s — vs <integer />s — because I’d been, well, you know, trying to do ranges and wildcards the cron-style way. launchctl didn’t complain, it just didn’t … work. So don’t do that.
  • Any daemon which has a schedule that fires off when the system is down will be automatically executed shortly after the system re-awakens. I don’t know exactly how quickly, but I’ve seen it in action (though I’ve read some dispute about its efficacy in my searches).
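Pulling that debugging advice together, here’s the loop I fall into after every plist change — the logrotated name matches the example filename above, so swap in your own:

```shell
$ sudo launchctl unload /Library/LaunchDaemons/logrotated.plist
$ sudo launchctl load /Library/LaunchDaemons/logrotated.plist

# confirm the daemon actually registered
$ sudo launchctl list | grep logrotated

# and watch for launchd complaints while you iterate
$ tail -f /var/log/system.log
```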

The Trials and Tribulations of MacPorts

But of course this Epistle wouldn’t be any fun without a twist, right? Well, lucky us, because it wasn’t that easy.

I’m using MacPorts as my yum-my package manager. It’s great, and easy to set up, but it’s not quite as stable as what I’ve seen under Linux. Or rather, I’ve seen a disproportionate number of issues in the times I’ve used it. However, I’m eternally grateful because it saves me so much time … the successes far outweigh the problems.

When things do go bad, as they did when I built logrotate 3.7.7, you have to set up a custom Portfile and source repository so that you can effectively drive MacPorts to fetch and build your specific version. logrotate 3.7.1_1 turned out to be much more stable.

Techonova and Joe Homs go into excellent and welcome detail in their posts on how to pull this off. A summary of what you need to do is:

  • watch the build failure — debug with port -d — to identify where the dying source is located. That’ll be PATH/TO/PROJECT
  • visit and snag the Portfile. That’s a MacPorts spec file, and you can tweak it to do your bidding.
  • create a source directory that you’ll keep around for a while (mine is /Users/Shared/macports).
  • register that directory — your local Portfile repository — with /opt/local/etc/macports/sources.conf (using file:// protocol)
  • copy the Portfile into a PATH/TO/PROJECT sub-directory structure (eg. sysutils/logrotate). Basically, the same path you’ll have snagged it from.
  • pull down a previous source revision and snapshot it locally. That could come from SVN or git, a tarball, whatever. Start your search from the homepage setting of the Portfile.
  • tweak the Portfile to ‘make sense’ for the source you’ve pulled down. If you base it on a close-enough version, it should be as easy as tweaking the version number, etc. — no custom build tasks. I’m glossing over details: it’s usually a matter of version-based naming conventions and MD5 checksums.
  • re-build the repository’s PortIndex (which you’ll need to do every time you make an addition / change)
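Strung together, the whole local-repository dance looks roughly like this — the /Users/Shared/macports path and the sysutils/logrotate layout come from my setup, so treat them as assumptions:

```shell
# watch the failure in debug mode; note the working PATH/TO/PROJECT
$ sudo port -d install logrotate

# create the local Portfile repository, mirroring that path
$ mkdir -p /Users/Shared/macports/sysutils/logrotate
$ cp Portfile /Users/Shared/macports/sysutils/logrotate/

# register it in /opt/local/etc/macports/sources.conf, ABOVE the
# default rsync line, as:
#   file:///Users/Shared/macports
# then, after tweaking the Portfile (version, checksums), rebuild
# the index -- repeat this on every addition / change
$ cd /Users/Shared/macports && portindex

# MacPorts now resolves logrotate from the local repository first
$ sudo port install logrotate
```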

Please feel free to consult the other two posts to fill in the details that I was so cavalier about.

In Summary

This was another one of those occasions when I was glad I kept track of what I was doing while I was in the moment. Pack your config files with comments, because they’re invaluable. And drop READMEs around where you’re likely to find them … I filled in a lot of the extra details for this post from those breadcrumbs I left for myself, things I could have easily wasted 15-20m on re-discovering by braille :)