First, there are two blog articles that I’ve come across – each about the labeled if Statement – eliciting a response along the lines of :open_mouth: “you can do that ??!”

Secondly, here’s some code I’ve recently written (and redacted for publishing). Let me call attention to lines 22-25:
And with it, a small illustrative Test Suite:
With that all being established, let me toe the line.
We all know this to be true.
GOTO cannot be trusted. This is because GOTO has no moral compass whatsoever. GOTO will stand there in a black stove-pipe hat & cloak, glaring at you, curling its mustache menacingly. Then, with a ghoulish cackle, GOTO will tie your innocent, helpless code to the railroad tracks.

GOTO is the bane of our existence. There can be no doubt of this.

The GOTO construct – and by association, labels in general – is reprehensible.
Its use should be considered a Faustian bargain from which there can be no possible hope of benefit or redemption.
And hooray – I get my Merit Badge!
And I take it home, and I proudly sew this new Badge onto my sash, just to the left of “Indent with Spaces”, and below “eval is Evil”.
But, deep inside of me there is a twinge of guilt. For I do not believe this dictum to be true in all cases. I would even go so far as to say that, sometimes, a label is the right tool for the job.
Oh yeah, I went there.
Of course, GOTO was pretty essential back in my BASIC days, a time commonly known as “the 1980s”.
I can’t tell you how delighted I would have been to make a GitHub Gist of some of my old BASIC code.
However, I couldn’t find any in my archive.
I imagine it’s all sitting on desiccated cassette tapes, packed away somewhere (along with my CoCo) by my loving mother.
But let’s not get all weepy & sentimental for the good old days.
We’re here to talk about GOTO’s enabler, the label. And although the label is entirely complicit in the despicable actions of GOTO, that does not mean that it cannot be reformed with the help of a little :two_hearts: T.L.C.
I am not going to provide an exhaustive language survey here. Rather, the two practical examples that come to mind are Ruby’s catch/throw and JavaScript’s labeled statement.
A contrived (and unconvincing) example of the Ruby syntax is;
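# a minimal stand-in sketch: escape a pair of nested loops
# (payload-free, for reasons explained below)
catch(:done) do
  (1..10).each do |i|
    (1..10).each do |j|
      throw :done if i * j > 50 # jumps straight to the end of the `catch` block
    end
  end
end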
My quick search for a prefab Ruby example led me to this Reddit post, in which the Commenters observe;
[throw/catch is] … basically a GOTO with an optional payload.
The academic answer: raise/rescue is meant for handling exceptional conditions, NOT for control flow. If you want control flow to be able to jump locations (multiple loops, method calls, etc–those not handled by regular conditionals), then you should use throw/catch.
Now of course, not every shit-upon language feature has situational value. I’m wary of the “optional payload” part of Ruby’s syntax (thus I am not demonstrating it).
As far as JavaScript goes … you won’t catch me using a with statement – nor linking to its MDN documentation – and I avoid iterating on Strings as if they were Arrays.
Sure, I could … but I won’t.
However, I would propose that, discretionally, the labeled statement deserves a place in your toolbox.
Hey, hey. Easy. You’ve read this far … now, hear me out.
Let me break down the essence of why & how I have used this nefarious and much maligned labeled statement syntax in my JavaScript parsing code example.
- decoding the { id } from the URL
- the progressive logic
- the return value

Each phase is rather compact. At current length, I feel it reads nicely.
Let us take another look at lines 22-25;
processed: {
if (! DOMAINS.has(hostname)) {
break processed;
}
// ...
}
The purpose of the label is for control flow.
I chose the name “processed” so that the break processed statement would be expressive to future maintainers.
The PR feedback comment it received was:
this is really cool, but I am instinctively fearful of it
Look, I totally get it.
Labels and GOTO have maintained a lengthy and contentious co-dependent relationship.
And we’ve all been burned before.
It is completely reasonable to ask; “Is Label really ready to move on?”
Yes, Label is ready. Given the right opportunity, Label can shine. But we’ve all experienced trauma, so please take it slow … and be gentle.
I can understand that. What alternatives might you propose?
Why don’t you refactor it into smaller Functions?
That is a tried-and-true pragmatic approach.
However I would contend that, of the three phases, the decoding & return-value parts are drop-dead simple.
Only the progressive logic is of real “value”, and it could be factored out – much like I did for some top-level consts – but you’ll just end up spreading the end-to-end logic around into two different places.
If the decoding phase were to get more complicated, that might be an incentive to refactor. Whereas the return-value phase is built to match the complexity of what the progressive logic can derive, so they’re already coupled.
I would say that “now is not the time”.
Why don’t you use a try { } finally { } statement?
I agree, that is a viable alternative.
But then you end up using an Exception-handling construct to implement the desired control flow, which I would suggest is a worse shoe-horning of the language. Whereas with the labeled statement, there’s no catch(err) { } or finally { } syntactic sugar involved; it’s just a label in front of a block.
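To make the comparison concrete, here’s a minimal sketch – not the redacted production code – of the same guard written both ways;

// DOMAINS stands in for the real (redacted) allow-list
const DOMAINS = new Set([ 'example.com' ]);

// Exception machinery, shoe-horned into control flow
function viaTryCatch(hostname) {
  const result = { processed: false };
  try {
    if (! DOMAINS.has(hostname)) {
      throw new Error('skip'); // nothing exceptional actually happened here
    }
    result.processed = true;
  }
  catch (err) { /* swallow the fake "error" */ }
  return result;
}

// a label in front of a block -- same flow, no sugar
function viaLabel(hostname) {
  const result = { processed: false };
  processed: {
    if (! DOMAINS.has(hostname)) {
      break processed;
    }
    result.processed = true;
  }
  return result;
}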
Why don’t you replace it all with early returns?
Because that’ll be (a) more lines of code or (b) more complicated code in the same “number of lines”. And, without TypeScript (or a similar formalization) to guide you, the structs of those individual returns must all be kept in sync.
The decoding step does have its own return, and that makes for a grand total of two. I believe it reads best that way, for purposes of maintainability.
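To illustrate that sync problem, here’s a sketch of the early-return alternative – decode and the field names are hypothetical stand-ins for the redacted code;

function parse(url) {
  const { hostname, id } = decode(url); // hypothetical helper
  if (! DOMAINS.has(hostname)) {
    return { id, processed: false }; // struct #1, kept in sync by hand
  }
  // ... progressive logic ...
  return { id, processed: true }; // struct #2 ... plus #3, #4 as the logic grows
}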
It’s like a switch { case } without breaks !
No. It really isn’t.
For the love of god, refactor this !!!1!
Wow.
Okay. I think I may have overstepped some boundaries here, and for that, I’m sorry. Maybe it would be better to discuss this again at some later time?
Yeah, probably. I could really use some coffee.
Sounds like a good plan :+1: :coffee: .
While you do that, I’ll sum up.
The intent of this post was to demonstrate how a label can provide effective and expressive control flow in modern JavaScript code (for example). It also attempts, in some small way, to diminish the stigma associated with labels as a whole.
Time and time again, GOTO has shown itself to be a narcissistic blowhard that ruins everything it touches.
Is it any surprise that labels have gotten a bad rap by association?
Perhaps we shouldn’t judge a language feature, even by the admittedly awful company that it keeps.
Here’s a neat little trick for dealing with istanbul / nyc code coverage:
/* istanbul ignore next */
_uncovered: {
// ...
}
It’s been steady-as-she-goes. I joined a new Employer in February 2019. I’d like to be at this place for a while. Very good people.
I know enough React[2] patterns – pre-Hooks and post – to be “effective” on the Client. It allows me to believe that I’m still full-stack, but mehhhhh – I’m not fooling anyone. I’m Server these days – Node[2] and all the build & deploy DSLs[3] that go along with it. I am not yet Serverless[2], but I do use this amazing Firefox Extension. I have not yet started to learn Rust[2].
About a month ago, I learned that we have to properly tune the VACUUM & ANALYZE engines in Postgres to avoid major INDEX slowdowns in a Test Suite environment.
A month before that, I (finally) spun up an ArchLinux-based (Manjaro) server to run VirtualBox to run my radio station’s WinXP-based transmitter, and jacked it all up with a lot of bash-driven health scripts.
Also, in the past quarter, I set up Let’s Encrypt on my cloud hosts,
and bullied through the usual rvm and Chef upgrade nonsense.
I haven’t moved to Docker[2] yet; that’s coming soon –
my station’s data services require JDK 7 at the latest, and support is falling away.
My home-rolled VPN server is a Docker container running on DigitalOcean; I use that tech on the job, so it’s familiar.
Ultimately, I’ll virtualize everything. Containers upon containers, to keep the old stuff that works good still working.
If I learn anything exciting, I’ll be sure to let you know :ok_hand: I promise.
The topics I’ve chosen to link out to are tooling-specific. Many I use daily, and I am grateful for them.
I didn’t link out to some topics. I think of them as ‘movements’ more than just a tech stack. When I look back, it’ll be interesting to see how each has aged, some like wine, some like milk.
And other topics, in their transience, shall remain nameless.
This Post will harken back to Wandering in the Mojave with JavaScript, wherein we learned a great many things about Xcode 10 and libstdc++ support. Having taken one day to rest after a harrowing experience with gyp, I awoke on Friday with the notion to document our experience.
My Shiny New Blog is built atop Jekyll,
which means I must now reconstruct my Ruby tooling.
The horse is refreshed, and we break camp on the heels of a peaceful dawn. Although the reinstallation of rvm goes smoothly, in my heart, I know that we’ve only updated some support scripting. The devil will be in the details.
This procedure is a familiar one. I crack open my long-term notes on building Rubies from source under macOS, and they tell me;
rvm autolibs packages # homebrew
rvm requirements # may crap out on "apple-gcc42" or similar
rvm install ruby-2.3.4
But sigh. We’ve seen this before;
No binary rubies available for: osx/10.14/x86_64/ruby-2.3.4.
Continuing with compilation. Please read 'rvm help mount' to get more information on binary rubies.
You requested building with '/usr/bin/gcc-4.2' but it is not in your path.
A few lines below, my notes contain this cryptic block;
# https://github.com/rvm/rvm/issues/763
# `rvm` makes some bad assumptions about "/usr/bin/gcc-4.2", so either
# create a permanent symlink: `ln -s /usr/bin/gcc /usr/bin/gcc-4.2`
# or use `--with-gcc=/usr/bin/gcc-4.2`
# https://github.com/rvm/rvm/issues/4200
# `--with-gcc=gcc` for pre-2.x
rvm install ruby-2.3.4 --with-gcc=/usr/bin/gcc
The first approach I take is to create the symlink. Unfortunately, that’s no longer possible due to the mysteries of System Integrity Protection. Yet I am loath to disable the feature; I trust that it serves me in ways that I don’t need to know about.
Our other recourse is --with-gcc – and it would seem that approach is going to work for us just dandy – until, suddenly;
Could not load OpenSSL.
You must recompile Ruby with OpenSSL support or change the sources in your Gemfile from 'https' to 'http'. Instructions for compiling with OpenSSL using RVM are available at http://rvm.io/packages/openssl.
Not so long ago, I’d wrestled with pip servers’ minimum TLS version using Python 2.7 and OpenSSL, and they’d nearly beaten me. I brace myself for what could be a significant challenge. Fortunately, some six years past, this Stack Overflow answer had been authored with a solution.
From its wisdom, I derived;
# it does complain, but it works
# "Do not know how to check/install gcc ..."
brew install openssl
# the modern variant of `rvm pkg install <PACKAGE>`, eg. 'openssl'
rvm autolibs homebrew
rvm install ruby-2.3.4 --with-openssl-dir=`brew --prefix openssl` --with-gcc=/usr/bin/gcc
# `bundler '~> 1.0'`
# "Pessimistic Version" of last reliable version before 2.0
gem install bundler -v '~> 1.0'
We now have a magnificent build of Ruby 2.3.4. “I’ve really gotta up my scripts to use 2.6,” I think to myself as I take a long, cool drink from my canteen.
I roll out a rugged woolen blanket and spread upon it the contents of my blog’s Git repo. The JavaScript gulp tooling is already stable, thanks to my Wandering earlier in the week.
Now, to install the Gems.
# works like butter
nvm use
npm install
# works not so much like butter
rvm use 2.3.4
bundle install
Nope, not like butter at all;
Installing nokogiri 1.7.2 with native extensions
Gem::Ext::BuildError: ERROR: Failed to build gem native extension.
current directory:
/Users/dfoley/REDACTED/ruby_gems/ruby/2.3.0/gems/nokogiri-1.7.2/ext/nokogiri
/usr/local/rvm/rubies/ruby-2.3.4/bin/ruby -r ./siteconf20190211-89093-1clvmz9.rb
extconf.rb
checking if the C compiler accepts ... yes
checking if the C compiler accepts
-Wno-error=unused-command-line-argument-hard-error-in-future... no
Building nokogiri using packaged libraries.
Using mini_portile version 2.1.0
checking for iconv.h... yes
checking for gzdopen() in -lz... yes
checking for iconv using --with-opt-* flags... yes
************************************************************************
IMPORTANT NOTICE:
Building Nokogiri with a packaged version of libxml2-2.9.4
with the following patches applied:
- 0001-Fix-comparison-with-root-node-in-xmlXPathCmpNodes.patch
- 0002-Fix-XPointer-paths-beginning-with-range-to.patch
- 0003-Disallow-namespace-nodes-in-XPointer-ranges.patch
Team Nokogiri will keep on doing their best to provide security
updates in a timely manner, but if this is a concern for you and want
to use the system library instead; abort this installation process and
reinstall nokogiri as follows:
gem install nokogiri -- --use-system-libraries
[--with-xml2-config=/path/to/xml2-config]
[--with-xslt-config=/path/to/xslt-config]
If you are using Bundler, tell it to use the option:
bundle config build.nokogiri --use-system-libraries
bundle install
Note, however, that nokogiri is not fully compatible with arbitrary
versions of libxml2 provided by OS/package vendors.
************************************************************************
Extracting libxml2-2.9.4.tar.gz into
tmp/x86_64-apple-darwin18.2.0/ports/libxml2/2.9.4... OK
Running git apply with
/Users/dfoley/REDACTED/ruby_gems/ruby/2.3.0/gems/nokogiri-1.7.2/patches/libxml2/0001-Fix-comparison-with-root-node-in-xmlXPathCmpNodes.patch...
OK
Running git apply with
/Users/dfoley/REDACTED/ruby_gems/ruby/2.3.0/gems/nokogiri-1.7.2/patches/libxml2/0002-Fix-XPointer-paths-beginning-with-range-to.patch...
OK
Running git apply with
/Users/dfoley/REDACTED/ruby_gems/ruby/2.3.0/gems/nokogiri-1.7.2/patches/libxml2/0003-Disallow-namespace-nodes-in-XPointer-ranges.patch...
OK
Running 'configure' for libxml2 2.9.4... ERROR, review
'/Users/dfoley/REDACTED/ruby_gems/ruby/2.3.0/gems/nokogiri-1.7.2/ext/nokogiri/tmp/x86_64-apple-darwin18.2.0/ports/libxml2/2.9.4/configure.log'
to see what happened. Last lines are:
========================================================================
checking whether to enable maintainer-specific portions of Makefiles... yes
checking build system type... x86_64-apple-darwin18.2.0
checking host system type... x86_64-apple-darwin18.2.0
checking for a BSD-compatible install... /usr/local/bin/ginstall -c
checking whether build environment is sane... yes
checking for x86_64-apple-darwin18.2.0-strip... no
checking for strip... strip
checking for a thread-safe mkdir -p... /usr/local/bin/gmkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
checking whether make supports nested variables... (cached) yes
checking for x86_64-apple-darwin18.2.0-gcc... /usr/bin/gcc-4.2
checking whether the C compiler works... no
configure: error: in
`/Users/dfoley/REDACTED/ruby_gems/ruby/2.3.0/gems/nokogiri-1.7.2/ext/nokogiri/tmp/x86_64-apple-darwin18.2.0/ports/libxml2/2.9.4/libxml2-2.9.4':
configure: error: C compiler cannot create executables
See `config.log' for more details
========================================================================
*** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of necessary
libraries and/or headers. Check the mkmf.log file for more details. You may
need configuration options.
Provided configuration options:
--with-opt-dir
--with-opt-include
--without-opt-include=${opt-dir}/include
--with-opt-lib
--without-opt-lib=${opt-dir}/lib
--with-make-prog
--without-make-prog
--srcdir=.
--curdir
--ruby=/usr/local/rvm/rubies/ruby-2.3.4/bin/$(RUBY_BASE_NAME)
--help
--clean
--use-system-libraries
--enable-static
--disable-static
--with-zlib-dir
--without-zlib-dir
--with-zlib-include
--without-zlib-include=${zlib-dir}/include
--with-zlib-lib
--without-zlib-lib=${zlib-dir}/lib
--enable-cross-build
--disable-cross-build
/Users/dfoley/REDACTED/ruby_gems/ruby/2.3.0/gems/mini_portile2-2.1.0/lib/mini_portile2/mini_portile.rb:366:in `block in execute': Failed to complete configure task (RuntimeError)
from /Users/dfoley/REDACTED/ruby_gems/ruby/2.3.0/gems/mini_portile2-2.1.0/lib/mini_portile2/mini_portile.rb:337:in `chdir'
from /Users/dfoley/REDACTED/ruby_gems/ruby/2.3.0/gems/mini_portile2-2.1.0/lib/mini_portile2/mini_portile.rb:337:in `execute'
from /Users/dfoley/REDACTED/ruby_gems/ruby/2.3.0/gems/mini_portile2-2.1.0/lib/mini_portile2/mini_portile.rb:106:in `configure'
from /Users/dfoley/REDACTED/ruby_gems/ruby/2.3.0/gems/mini_portile2-2.1.0/lib/mini_portile2/mini_portile.rb:149:in `cook'
from extconf.rb:364:in `block (2 levels) in process_recipe'
from extconf.rb:257:in `block in chdir_for_build'
from extconf.rb:256:in `chdir'
from extconf.rb:256:in `chdir_for_build'
from extconf.rb:363:in `block in process_recipe'
from extconf.rb:262:in `tap'
from extconf.rb:262:in `process_recipe'
from extconf.rb:547:in `<main>'
To see why this extension failed to compile, please check the mkmf.log which can
be found here:
/Users/dfoley/REDACTED/ruby_gems/ruby/2.3.0/extensions/x86_64-darwin-18/2.3.0/nokogiri-1.7.2/mkmf.log
extconf failed, exit code 1
Gem files will remain installed in /Users/dfoley/REDACTED/ruby_gems/ruby/2.3.0/gems/nokogiri-1.7.2 for inspection.
Results logged to /Users/dfoley/REDACTED/ruby_gems/ruby/2.3.0/extensions/x86_64-darwin-18/2.3.0/nokogiri-1.7.2/gem_make.out
An error occurred while installing nokogiri (1.7.2), and Bundler cannot continue.
Make sure that `gem install nokogiri -v '1.7.2' --source 'http://rubygems.org/'` succeeds before bundling.
Something’s dreadfully wrong with nokogiri.
I gasp, catch myself, then back away and breathe for a moment under the mid-morning sun. In all directions, the expanse is a roiling sea of dirty browns and gold, with little to break the horizon save for the Funeral Mountains far to the north.
I steel myself with renewed dedication. I will cross this impasse.
No one can ever doubt my commitment to Sparkle Motion.
Let’s check the mkmf.log, as our massive failure dump suggests;
"/usr/bin/gcc -o conftest -I/usr/local/rvm/rubies/ruby-2.3.4/include/ruby-2.3.0/x86_64-darwin18 -I/usr/local/rvm/rubies/ruby-2.3.4/include/ruby-2.3.0/ruby/backward -I/usr/local/rvm/rubies/ruby-2.3.4/include/ruby-2.3.0 -I. -I/usr/local/opt/openssl/include -I/usr/local/opt/libyaml/include -I/usr/local/opt/readline/include -I/usr/local/opt/libksba/include -I/usr/local/opt/openssl@1.1/include -D_XOPEN_SOURCE -D_DARWIN_C_SOURCE -D_DARWIN_UNLIMITED_SELECT -D_REENTRANT -O3 -fno-fast-math -ggdb3 -Wall -Wextra -Wno-unused-parameter -Wno-parentheses -Wno-long-long -Wno-missing-field-initializers -Wunused-variable -Wpointer-arith -Wwrite-strings -Wdeclaration-after-statement -Wshorten-64-to-32 -Wimplicit-function-declaration -Wdivision-by-zero -Wdeprecated-declarations -Wextra-tokens -fno-common -pipe -O3 -Wall -Wcast-qual -Wwrite-strings -Wconversion -Wmissing-noreturn -Winline conftest.c -L. -L/usr/local/rvm/rubies/ruby-2.3.4/lib -L/usr/local/opt/libyaml/lib -L/usr/local/opt/readline/lib -L/usr/local/opt/libksba/lib -L/usr/local/opt/openssl@1.1/lib -L. -L/usr/local/opt/openssl/lib -fstack-protector -L/usr/local/lib -L/usr/local/opt/libyaml/lib -L/usr/local/opt/readline/lib -L/usr/local/opt/libksba/lib -L/usr/local/opt/openssl@1.1/lib -lruby.2.3.0 -lpthread -lgmp -ldl -lobjc "
Undefined symbols for architecture x86_64:
"_iconv", referenced from:
_main in conftest-08b8ef.o
"_iconv_open", referenced from:
_main in conftest-08b8ef.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
checked program was:
/* begin */
1: #include "ruby.h"
2:
3: #include <stdlib.h>
4: #include <iconv.h>
5:
6: int main(void)
7: {
8: iconv_t cd = iconv_open("", "");
9: iconv(cd, NULL, NULL, NULL, NULL);
10: return EXIT_SUCCESS;
11: }
/* end */
Well, this seems curious;
Undefined symbols for architecture x86_64
Perhaps we’re dealing with the “include path for stdlibc++ headers not found” problem again, which would lead us back on the path to installing macOS_SDK_headers_for_macOS_10.14.pkg.
I’d come across this Updated to Mojave article which suggested that very thing.
However, much as when I took this same approach on Wednesday, there is no relief to be found by simply installing the legacy macOS headers.
How about this callout in the massive failure dump?
gem install nokogiri -- --use-system-libraries
[--with-xml2-config=/path/to/xml2-config]
[--with-xslt-config=/path/to/xslt-config]
If you are using Bundler, tell it to use the option:
Alright, I think I will. I’ll tell it to use that very option;
# https://bundler.io/v1.16/bundle_config.html#BUILD-OPTIONS
# "flags to pass to the gem installer"
# `bundle config build.<PACKAGE> "--flags"`
bundle config build.nokogiri "--use-system-libraries"
bundle install
Nope. It’s still not like butter at all;
Installing nokogiri 1.7.2 with native extensions
Gem::Ext::BuildError: ERROR: Failed to build gem native extension.
current directory:
/Users/dfoley/REDACTED/ruby_gems/ruby/2.3.0/gems/nokogiri-1.7.2/ext/nokogiri
/usr/local/rvm/rubies/ruby-2.3.4/bin/ruby -r ./siteconf20190210-21692-mv9hb1.rb
extconf.rb --use-system-libraries
checking if the C compiler accepts ... yes
checking if the C compiler accepts
-Wno-error=unused-command-line-argument-hard-error-in-future... no
Building nokogiri using system libraries.
checking for xmlParseDoc() in libxml/parser.h... yes
checking for xsltParseStylesheetDoc() in libxslt/xslt.h... yes
checking for exsltFuncRegister() in libexslt/exslt.h... yes
checking for xmlHasFeature()... yes
checking for xmlFirstElementChild()... yes
checking for xmlRelaxNGSetParserStructuredErrors()... yes
checking for xmlRelaxNGSetParserStructuredErrors()... yes
checking for xmlRelaxNGSetValidStructuredErrors()... yes
checking for xmlSchemaSetValidStructuredErrors()... yes
checking for xmlSchemaSetParserStructuredErrors()... yes
creating Makefile
current directory:
/Users/dfoley/REDACTED/ruby_gems/ruby/2.3.0/gems/nokogiri-1.7.2/ext/nokogiri
make "DESTDIR=" clean
current directory:
/Users/dfoley/REDACTED/ruby_gems/ruby/2.3.0/gems/nokogiri-1.7.2/ext/nokogiri
make "DESTDIR="
compiling xml_comment.c
make: /usr/bin/gcc-4.2: No such file or directory
make: *** [xml_comment.o] Error 1
make failed, exit code 2
Gem files will remain installed in
/Users/dfoley/REDACTED/ruby_gems/ruby/2.3.0/gems/nokogiri-1.7.2
for inspection.
Results logged to
/Users/dfoley/REDACTED/ruby_gems/ruby/2.3.0/extensions/x86_64-darwin-18/2.3.0/nokogiri-1.7.2/gem_make.out
An error occurred while installing nokogiri (1.7.2), and Bundler cannot
continue.
Make sure that `gem install nokogiri -v '1.7.2' --source 'http://rubygems.org/'`
succeeds before bundling.
Ahh, but what I do see is that the failure dump is much less massive. I dare say that something has improved.
So, let’s examine the gem_make.out;
compiling xml_comment.c
make: /usr/bin/gcc-4.2: No such file or directory
make: *** [xml_comment.o] Error 1
make failed, exit code 2
Sigh. Yes, we’ve seen this before. /usr/bin/gcc-4.2 has shown up twice in this very Post.
Huh, maybe I need one of the C++ Standard Libraries from Homebrew,
just like I did on Wednesday.
I believe it’s time to call on our good old friends CC and CXX, once again. For by the grace of God, they are loyal, faithful and true.
brew install gcc
brew list gcc # aaaaand what did we get?
# we got `gcc-8.2`
export CC=/usr/local/Cellar/gcc/8.2.0/bin/gcc-8
export CXX=/usr/local/Cellar/gcc/8.2.0/bin/g++-8
Now, those environment overrides are definitely having an effect;
compiling xml_comment.c
gcc-8: error: unrecognized command line option '-Wshorten-64-to-32'
gcc-8: error: unrecognized command line option '-Wdivision-by-zero'; did you mean '-Wdiv-by-zero'?
gcc-8: error: unrecognized command line option '-Wextra-tokens'; did you mean '-Wextra-semi'?
make: *** [xml_comment.o] Error 1
make failed, exit code 2
But we’re not quite out of the desert just yet.
However, new information has come to light.
A little research on ‘-Wdivision-by-zero’, and I find this Github Issue in sparklemotion/nokogiri’s very own repo.
It includes a massive failure dump that looks exactly like mine!
And there, towards the end, we find the solution we’ve been seeking;
CC=llvm-gcc bundle install
worked for my use-case.
As always, we take that to the next level;
# works great
export CC=/usr/bin/llvm-gcc
export CXX=/usr/bin/llvm-g++
# also works great;
# i should have tried that *before* the `brew`-installed version of `gcc`
export CC=/usr/bin/gcc
export CXX=/usr/bin/g++
And with those settings, we have finally achieved our goal;
Installing nokogiri 1.7.2 with native extensions
# ...
Bundle complete! 12 Gemfile dependencies, 46 gems now installed.
Bundled gems are installed into `./ruby_gems`
I have unearthed the tools I need to keep authoring these blog Posts – and even with all the associated crazy, the sun is not yet low in the sky.
Gathering the Github repo back into my knapsack, I sling the rugged blanket over my steed. We start a steady trot to the north – and as it happens, luck is with us – for we reach the next oasis by nightfall.
Later in the evening, I try out another possible variant, given the whole of what I’d written in this Post;
bundle config build.nokogiri "--use-system-libraries --with-gcc=/usr/bin/gcc"
But /usr/bin/gcc-4.2 rears its ugly head again. It is not to be. Sometimes the native gcc compiler in Mojave is the right tool for the job.
I guess that just goes to show us, once again – even when the going gets rough, you can always rely on your trusty amigos, CC and CXX.
And that’s why we have standards & conventions, kids.
After the migration, I reinstalled nvm. It graciously preserved the state of my version installs; I flushed and reinstalled each of them out of caution. The new binaries worked effortlessly.
I smile. This will be easy.
On Wednesday, as I surmount what seems an innocuous dune of npm installs, my steed stumbles during a build of function-name.
> function-name@1.0.0 install /Users/dfoley/REDACTED/node_modules/function-name
> node-gyp rebuild
CXX(target) Release/obj.target/binding/src/binding.o
warning: include path for stdlibc++ headers not found; pass '-std=libc++' on the
command line to use the libc++ standard library instead
[-Wstdlibcxx-not-found]
In file included from ../src/binding.cc:1:
/Users/dfoley/.node-gyp/6.14.4/include/node/v8.h:21:10: fatal error: 'utility'
file not found
#include <utility>
^~~~~~~~~
1 warning and 1 error generated.
make: *** [Release/obj.target/binding/src/binding.o] Error 1
gyp ERR! build error
gyp ERR! stack Error: `make` failed with exit code: 2
gyp ERR! stack at ChildProcess.onExit (/usr/local/nvm/versions/node/v6.14.4/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:276:23)
gyp ERR! stack at emitTwo (events.js:106:13)
gyp ERR! stack at ChildProcess.emit (events.js:191:7)
gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:219:12)
gyp ERR! System Darwin 18.2.0
gyp ERR! command "/usr/local/nvm/versions/node/v6.14.4/bin/node" "/usr/local/nvm/versions/node/v6.14.4/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /Users/dfoley/REDACTED/node_modules/function-name
gyp ERR! node -v v6.14.4
gyp ERR! node-gyp -v v3.4.0
gyp ERR! not ok
“Ah,” I thought, and mopped my brow.
include path for stdlibc++ headers not found
An Apple Developer Forums post calls attention to the fact that Xcode has deprecated libstdc++ support.
The Xcode 10 Release Notes clearly state;
Building with libstdc++ was deprecated with Xcode 8 …
Libgcc is obsoleted.
A shocking development. The Forums post speaks of back-copying filesets from Xcode 9. Perhaps the situation is indeed that dire? In due time, we come upon a Github Issue which seems to bear a rich vein of information to help us in making our escape.
First, it recommends the usual unboxing for a fresh Xcode installation;
# "You must agree to both license agreements below in order to use Xcode."
sudo xcodebuild -license
# install command-line tools
xcode-select --install
In addition, the Github Issue references those same Release Notes;
some software may fail to build correctly against the SDK and require macOS headers to be installed in the base system under /usr/include …
We must perform some additional unboxing;
# install the legacy macOS headers
# "In a future release, this package will no longer be provided."
open /Library/Developer/CommandLineTools/Packages/macOS_SDK_headers_for_macOS_10.14.pkg
# make sure that you're using the intended version of the command line tools
xcode-select -s /Library/Developer/CommandLineTools
Clearly, others have taken this path when they suddenly “can’t compile C program on a Mac after upgrade to Mojave”, and their efforts are met with success.
Yet, with all our might, we cannot wrest ourselves from this pit of function-name despair.
> function-name@1.0.0 install /Users/dfoley/REDACTED/node_modules/function-name
> node-gyp rebuild
# ...
gyp ERR! build error
gyp ERR! stack Error: `make` failed with exit code: 2
Night is falling, and we are far too exposed to the elements here. There is an encampment less than 10 kilometers to the southeast. We could reach it safely, but time is of the essence.
Certainly, this path leads us nowhere.
A further scroll through the Github Issue and we come across;
Hi, I got to the bottom of this issue, in the end, after a trawl through clang, distutils and python make/config files.
“Hi” indeed, Mr. Savior! At last, a solution! Let us roll up our sleeves and give it a shot;
export MACOSX_DEPLOYMENT_TARGET=10.9
export CMAKE_OSX_DEPLOYMENT_TARGET=10.9
Alas, what is good for a python in the desert is of no service to us in our function-name predicament.
> function-name@1.0.0 install /Users/dfoley/REDACTED/node_modules/function-name
> node-gyp rebuild
# ...
gyp ERR! build error
gyp ERR! stack Error: `make` failed with exit code: 2
I must accept the fact that this Github Issue can take us no further. We must be done with it, and move on.
The last of the daylight is fading. Venus hangs a brief ten degrees to the right of the crescent moon. I take in this truly serene and beautiful sight, though it cannot soothe the panic which has begun to rise in my stomach.
More searching unearths another Github Issue which shows promise.
I had to install g++8.2 separately.
Now there’s a fine idea.
Turns out there’s some good documentation on the C++ Standard Libraries in Homebrew.
Though we could call out a specific version, like gcc@7, we’ll just install the latest;
brew install gcc
brew list gcc # aaaaand what did we get?
Tremendous! We got gcc-8.2, and with those paths, we have some compiler environment variables to set:
export CC=/usr/local/Cellar/gcc/8.2.0/bin/gcc-8
export CXX=/usr/local/Cellar/gcc/8.2.0/bin/g++-8
Yes, it’s our good old friends CC and CXX. So good to see them out in such an arid environment. They work virtually everywhere. And that’s why we have standards & conventions, kids.
And, what of our function-name?
> function-name@1.0.0 install /Users/dfoley/REDACTED/node_modules/function-name
> node-gyp rebuild
# ...
# some 'note:'s
# some 'warning:'s
#
# until eventually
# ...
redacted@3.1.4 /Users/dfoley/REDACTED
`-- function-name@1.0.0
Our concerns have been addressed with a brew-installed version of gcc.
Freed at last, we quickly shake the sand from our boots and set off to the southeast. There can be no doubt that we will sleep deeply tonight.
Now, won’t you please join me as our desert adventure continues with Wandering in the Mojave with Ruby …
At its best, it is a treatise on Best Practices. At its least, it’s a living set of concepts and reminders for myself to make quick copies-and-pastes from.
:clap: cue applause :clap:
mongodb-sandbox will launch a stand-alone MongoDB Topology for use within a Test Suite. It spins up a self-contained instance of mongod on a free local port and performs all the setup & teardown necessary to ensure an empty database at the start of each Test Case.
For example, using mocha,
const { expect } = require('chai');
const { MongoClient } = require('mongodb');
const { createSandbox } = require('mongodb-sandbox');
const sandbox = createSandbox();
before(function() {
const lifecycle = sandbox.lifecycle(this);
before(lifecycle.beforeAll);
beforeEach(lifecycle.beforeEach);
afterEach(lifecycle.afterEach);
after(lifecycle.afterAll);
});
describe('the Sandbox', () => {
it('is running', () => {
expect(sandbox.isRunning).to.equal(true);
});
it('can be pinged', () => {
const { url } = sandbox.config;
return MongoClient.connect(url, { useNewUrlParser: true })
.then((client) => {
return client.db().admin().ping()
.then((response) => {
expect(response).to.deep.equal({ ok: 1 });
return client.close();
});
});
});
});
I hadn’t found a solution in-the-wild that did this for me.
A while ago, The Code Barbarian had put up an excellent tutorial
on using mongodb-topology-manager.
So I cobbled together an end-to-end solution with the help of mongodb-prebuilt, a module which I learned about from my fiddlings with mockgoose.
I’d tried to use mockgoose within my professional App code for a while. It was hacky, but it was capable, and a reasonable drop-in.
To suit our in-house needs, I had to fork off a major refactor of its Connection management.
I invested the time to build up a whole Test Suite around my changes,
… but that was just gonna be too much churn by some random cantremember Guy to ever get merged back into the mainline by the maintainers ¯\_(ツ)_/¯ .
I have not tried out mongodb-memory-server. I’ve actually just learned about it from the mockgoose project’s swan-song README.
Hmm.
On the outside, it seems like it’s doing much of what my project does :open_mouth: and that’s my DRY face.
Have you heard of this thing called “mocking” ?
Oh, sure. But then you’re at the mercy of your fragile hacking of either the mongoose or mongodb APIs. Or both, because you can never have too much fragile.
My employer maintains a project whose Test Suite is 100% mocked DB interactions; it’s super hard to maintain – even says the guy :raised_hand: who had to write the thing in the first place. And who is going to justify paying down the technical debt to fix an inconvenient Test Suite?
No one, that’s who. Mocked database calls live forever.
It is so much better to test against a sandbox. The confidence of knowing the business logic produces reasonable queries, etc.
Alright. So just pre-install a MongoDB server.
Uh huh, I know. Every packaging system has one. MongoDB is even prevalent enough that every CI build system has some way to introduce the daemon as a sandbox binary.
But I really don’t want my Test Suites mucking with a locally-running server which might contain real data.
mongodb-sandbox is built to seize up upon encountering a database that already contains Documents. And once its self-contained daemon shuts down, all of that data gets lost, which makes for a predictable clean slate every time.
There’s real value in having a run-anywhere self-contained solution for this.
You’d think I would be proud. My first public Open Source contribution to the npm biosphere!
But, not so much.
MongoDB did a naming pattern change to the macOS binaries.
Downloads failed in spectacular fashion.
I pushed a fix for that back into mongodb-download.
So, subjectively, up to a point my module “just worked”. Until it didn’t.
CircleCI builds start failing at work.
They’d upgraded their Ubuntu 14.04 “circleci/node:8” image, and in the process, libssl1.0.0 was no longer available. Unfortunately, the pre-built mongod binaries are frozen in time with that dependency.
The quick patch was to apt-get libssl1.0.0 as a prelude to our ‘test’ step.
The public key importing, too?
Yeah, the whole bit.
Again, poking holes in the magic of my sandbox and its dependencies.
And, poking holes in the wisdom of pinning an employer’s code to my own personal project.
I had an existential quandary in my design.
Should I download the mongod binary eagerly, at install time … or lazily, upon first launch?
The lazy path seemed more interesting, so I went down it – a decision I really regret, because:

- it takes work to be lazy. Under intermittent conditions, the first Test Case runs significantly longer than the others. The sandbox has to reach into the test framework and tweak the timeout allowance for the first Case. As of now, support is janky, incomplete, and definitely not lazy.
- the first time I was riding (offline) on MUNI and tried to run a Test Suite which hadn’t cached the binary, I cursed my own goddamn name.
I will eventually refactor the thing to pay the download penalty up-front.
I haven’t figured out how to get the bootstrapping to work in a truly parallel Test Suite environment,
like the one that ava provides.
The discrete mongod launches aren’t coordinated, so they end up in a failed race for ports.
Frankly, all of these shortcomings leave me feeling disappointed in my own work and the choices I’ve made in producing it.
As of this writing, version 1.0.x is sealed in amber, and I shall endeavor to improve the module over time.
I anticipate that the 1.1.x release will include:

- launch configuration for a Replica Set Topology, which the module has been designed to support since Day One
- eager mongodb downloading
- official Test Suite support & examples for at least one other framework
Ultimately, I hope this project can save at least one other Developer from having to home-roll this solution, or something much like it, again.
I’ve been noticing a lot of public code snippets out there which declare consts using the arrow function syntax;
const addOne = (number) => {
return number + 1;
}
As opposed to the classic pre-ES2015 form,
function addOne(number) {
return number + 1;
}
My use of the term “classic” reveals my bias, as opposed to calling it “ancient” or “crappy old”.
There are two things I prefer about the classic form; chief among them, you can tell that a line declares a Function just by looking at it.

Now, to be clear, I’m totally down with something along these lines;
Transformer.prototype.transform = function transform(nouns) {
const renamer = (noun) => {
const { renameMap } = this;
return (renameMap.has(noun) ? renameMap.get(noun) : noun);
};
const inclusions = (noun) => this.includeSet.has(noun);
if (this.shouldRename) {
// filter both inbound & outbound
return nouns.filter(inclusions).map(renamer).filter(inclusions);
}
return nouns.filter(inclusions);
};
It’s a convoluted example, and could be approached very differently … but as written it makes a lot of sense to declare a few Function consts that inherit the scope context.
I’m all in favor of adopting a modern JavaScript syntax, as long as it serves a purpose in the code.
You saw me up there deconstructing an Object with reckless abandon, right?
I find that syntax much cleaner and more expressive than a this assignment, and I’ll choose it in a heartbeat.
But I propose that an old-school function declaration is the cleaner, more expressive style when either one could meet your needs.
This article is Post #3 in my series, Dumb Shit I’ve Done in a Production Environment. When I wrote up Post #2, I didn’t know it would be a series. And until I thought “hey, this might make a series” and looked through my Archives, I’d totally forgotten about Post #1.
The series will conclude once I stop doing dumb shit in a Production environment.
My employer – redacted here, as always – had grown tired of maintaining our in-house RabbitMQ message broker. We’d stopped upgrading at v2.8.7, so we had no DLX capabilities and no simple Web UI for manually re-submitting messages to a queue.
AWS SQS offered both of these features and allowed us to do away with the self-maintenance.
Seeing as we were already invested in Amazon’s Cloud services, we decided to make the switch. It also aligned nicely with our deployment strategy; Elastic Beanstalk provided a simple way to spin up a containerized Worker daemon to consume SQS messages, without all the messy overhead of setting up SNS routes & subscriptions.
The first project our Team converted to SQS was a document generation pipeline.
It was fed by a Lambda that kicked off a cron task to poll User-defined schedules and publish one message for each Unit of Work.
The daemon received & validated these messages, queried ElasticSearch through Data Services, composed an HTML email, and sent the result off to the User.
This is Event Driven Architecture 101 caliber stuff right here.
The Thundering Herd problem is described as
… a large number of processes waiting for an event are awoken when that event occurs, but only one process is able to proceed at a time. After the processes wake up, they all demand the resource …
Mmmm … in those terms, perhaps what happened with our new pipeline wasn’t a bona fide Thundering Herd. But it’s such a compelling name, so I’m asking you to cut me a little slack here. I could have written a Post entitled “Cascading Failure”, the riveting tale of a waterfall on the losing end of a beaver dam. Instead, I went with the more thrilling “stampede” analogy.
Naming aside, what we experienced was a Poster Child of a system DDOSing itself from the inside. And, much like a stampede, it left in its wake a trail of mayhem, confusion and tears.
Because at 2am PST the morning after we launched, this starts to happen:
It’s important to note here that the company had indexed a lot of ES documents. I mean, a lot. It’s our core business. And we were doing ‘live’ deep querying … Large data ranges; they take a while to resolve … Lots of nested criteria; yeah, they take a while too. And as much as we’d sharded & indexed the dataset, ElasticSearch couldn’t help but be a centralized, blocking resource.
It turns out that our ES clusters were not having a good day. And what didn’t make their day any easier was a batch of emails scheduled for New York City early-birds firing off at 5am EST, followed by a second wave at 7:30am.
With musical accompaniment brought to you by Andy C & Ant Miles.
I’m not sure when the first alarm bells went off, but I got my first PagerDuty Alert at 6am. Hey hey, turns out I’m the Dev guy on call that week! By this point, our platform had been slowly crumpling for about 1 1/2 hours.
Not long after I’ve cracked my laptop and tried to get a handle on the situation, I’ve got the CTO on the phone demanding to know what’s gone wrong, and more importantly, how to stop it.
I wasn’t awake enough to recognize it immediately … but the SQS daemons on the Worker instances are doing their HTTP POST, waiting for 30 seconds, then timing out on the Unit of Work.
And what does a content pipeline do if, at first, it can’t succeed?

That’s right; it tries, and tries again.
Message processing was dirt simple when we used AMQP; we opened a Channel, subscribed with a concurrency limit, and never timed out on a message. Occasionally the Data Service would produce a failure, perhaps due to a request timeout somewhere along the network. Upon failure we would NACK and, in the absence of a DLX, retry immediately. In practice, retries happened so rarely that the downstream impact was negligible.
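For flavor, here’s a rough sketch of that consumption pattern using amqplib – the queue name, concurrency value and handler are hypothetical, not our actual code;

const amqp = require('amqplib');

const QUEUE = 'document-pipeline'; // hypothetical
const CONCURRENCY = 10; // also hypothetical

async function consume(doUnitOfWork) {
  const conn = await amqp.connect('amqp://localhost');
  const channel = await conn.createChannel();
  await channel.assertQueue(QUEUE);
  await channel.prefetch(CONCURRENCY); // the concurrency limit
  await channel.consume(QUEUE, async (msg) => {
    try {
      await doUnitOfWork(JSON.parse(msg.content.toString()));
      channel.ack(msg);
    }
    catch (err) {
      channel.nack(msg); // no DLX on v2.8.7 -- a NACK re-queues for an immediate retry
    }
  });
}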
But that was then. Now they’re happening every 30 seconds, our Production Services are getting slammed by batch-priority requests, and, of course, that’s impacting the public site.
Which, naturally, leads to more of …
At this point, you may be asking yourself,
“How could one content pipeline be causing so much disruption?”
What a timely question!
Now, while we did have the forethought to fetch the content serially – to, you know, minimize the impact, in the unlikely event that this problem would ever occur in the wild – there was one very important thing that we overlooked.
And though I’m speaking of “we” in these sentences, I really should be saying “I”.
It is now my pleasure to introduce another Minor (yet Vital) Player in our ensemble; the SMTP Proxy. He’s been there the whole time, just out of sight.
Despite all the load, eventually the Data Service will respond successfully. It just takes longer than 30 seconds per request. Our pipeline diligently chugs away, gathers all those hard-earned results, composes an email, and sends it to the Customer.
Yes, the SQS Daemon may have given up … but we sure as hell weren’t going to let some little upstream termination stop us from completing that Unit of Work! Successfully!! Again. And again. And again.
Because, that SQS Daemon? It’s going to try again after 1 minute. At which point it will spawn another pipeline that will chug away, gathering data, competing for Production Services with all of its diligent siblings. And our Users.
Oh, did I mention that my employer was actively wooing a new customer for a rather sizeable contract?
See, that’s why it’s 6am, and I’m in my jammies with the CTO on the phone demanding to know how to stop it.
Which I ultimately failed to do. My direct Manager was also in on this 6am call. He’s the guy who shut down the Worker instances while I froze, bedazzled by the glamour of tracking down the “how”.
And … it is here that we close the curtain on our little Tragedy.
Sure, let’s start there.
Pro Tip: when you’re On Call, remember to turn off the ‘Do Not Disturb’ filter on your phone. That’s why I first heard about the incident at 6am.
Also, when shit is actively hitting the fan, that is not the time to analyze the problem. At best, launch your feature with a rollback plan that lets you pull the plug quickly. In the absence of a plan, have the instinct to make figuring-out-that-plan your highest priority.
I did not wear the Ops Hat well at all.
Not possible. And ultimately, it doesn’t matter.
The Worker instances were burnt to the ground, and their local logfiles with them. We weren’t pushing ‘info’ log entries into Loggly, so most of the traceable details were lost.
Even so, we had several PagerDuty Alerts go off about too much Loggly traffic during the incident – mostly from Data Service components that the Content Pipeline had pushed to their limits.
Once the fire had gone out, I was able to suss out some analytics from the SendGrid API at the receiving end of our SMTP gateway. This only allowed us to understand the scale of the incident, how many Customers got how many emails, and who should be first in line for apologies.
I must admit, we didn’t give the Worker’s SQS Daemon very good instructions. In fact, through the ‘Inactivity timeout’ & ‘Visibility timeout’ settings, we had given it a 30 second timeout :grimacing: .
“Oh, well there’s your problem,” I hear you think.
Yeah. Well, turns out it wasn’t.
Just you wait. It gets good …
Our ‘Error visibility timeout’ was something small-ish, on the order of 60 seconds. Knowing now that the pipelines would end up overlapping themselves, that seems like such a silly value :grimacing: . Something more along the lines of 5 minutes would have avoided building up such pressure.
I’m pretty sure the concurrency limit we imposed with ‘HTTP connections’ was 25. Again, in retrospect, :grimacing: . But even at a lower threshold, the ‘HTTP connections’ setting is about active requests. The fact that the Worker is still processing abandoned requests isn’t the Daemon’s problem.
And then there were the 10 retries in the ‘Redrive Policy’ for the DLX queue. All told, this was a recipe for a long and sustained disaster.
I’m relieved to announce that when I speak of “we” here, I’m not just talking about myself anymore.
We had adapted the SQS endpoint from the RabbitMQ pipeline. The code was highly segmented, so it was easy to omit some “superfluous” features in the new routes.
Those were the features which imposed checks and balances.
Right at the start of the pipeline, after parsing the JSON message, there was a dedupe check. Nothing special. The key storage was 100% in-memory, with no distribution. But we hadn’t spun up a lot of Worker instances, so a dedupe check would have been at least a small bulwark against the tide.
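Something as dumb as this would have helped – a sketch with a hypothetical message shape;

// 100% in-memory, no distribution -- but better than nothing
const seen = new Set();

function isDuplicate(message) {
  // a hypothetical Unit-of-Work key; derive it from whatever makes a message unique
  const key = `${message.scheduleId}:${message.scheduledFor}`;
  if (seen.has(key)) {
    return true; // acknowledge & drop
  }
  seen.add(key);
  return false;
}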
In addition, we kept an audit trail in MongoDB for each Unit of Work. There were a pair of cooperative segments in the AMQP pipeline; one recorded each Unit of Work as it crossed the finish line, and the other consulted that record up front.

Again, not perfect. But once any given pipeline request had finally reached the finish line, no additional ones for that Unit of Work would have been allowed to start. It’s likely that this would have prevented pressure from building up in the first place.
However, the SQS endpoint never challenged the HTTP Daemon with any of these checks. It just said “okay” and got down to the tedious business of fetching content and composing that all-important email.
“Tedious business” … ? Does it take a while or something?

Oh, it does.
It turns out that what’s good for the Browser isn’t always good for the Daemon. The express App exposed by the Content Pipeline had been configured with a 30 second timeout.
And boy did that take a while to track down!
You see, the Server had been built to consume both RabbitMQ and HTTP traffic. And, being an expert on rendering content, one of the things the Server was asked to serve up was a live email preview. This, unlike its message-based responsibilities, was immediate and synchronous.
In all fairness, a User’s browser should be cut off after waiting an unreasonable period of time. But with an HTTP Daemon that implements its own failover logic, hold that Connection open forever and let the upstream make the decisions.
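In concrete terms – a sketch, assuming a stock express App – that would look something like;

const express = require('express');

const app = express();
// ... routes ...

const server = app.listen(3000); // hypothetical port
server.setTimeout(0); // 0 disables the socket inactivity timeout; ours was configured to 30s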
But we – ahem – I forgot about that setting. So … 30 seconds!
Whoops.
Yeah. When the upstream terminates, seriously … stop processing.
The pipeline was built as a Promise chain, and there were plenty of opportunities to Error out without sending communication to the Customer. Monitor the ‘abort’ on the Request and the ‘close’ on the Response, set a flag, and put circuit-breaker checks all over your pipeline, especially before sending out that email.
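Here’s a sketch of that flag-and-check shape – the route and helpers are hypothetical, and the event names can vary by Node version;

app.post('/render', (req, res) => {
  let upstreamGone = false;
  req.on('aborted', () => { upstreamGone = true; }); // the Daemon gave up on us
  res.on('close', () => { upstreamGone = true; });

  fetchContent(req.body) // hypothetical: the slow ElasticSearch queries
  .then((content) => {
    if (upstreamGone) { throw new Error('upstream terminated'); }
    return composeEmail(content); // hypothetical
  })
  .then((email) => {
    // the circuit-breaker check that matters most
    if (upstreamGone) { throw new Error('upstream terminated'); }
    return sendEmail(email); // hypothetical: the SMTP Proxy call
  })
  .then(() => res.sendStatus(200))
  .catch((err) => {
    // log it; do NOT spam the Customer with retries from here
  });
});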
You may still waste a lot of cycles and bring your platform to its knees, but at least none of your Customers will get spammed in the process.
I am a Perfectionist at heart, and there are a lot of Best Practices to follow, ones from which these shiny pearls are harvested.
Now, if you’ve come here to see the Attributions for the Creative Commons artwork used on this site – like :point_left: that one – they are down here.
But please … feel free to keep reading. The purpose of this entire post is to pay flattery to the articles, posts, resources, assets and Open Source toolkits which were there at my disposal when I built this blog.
I’ve done my darnedest to convey all the details in a way other than as a listicle or shower of bullet-points.
I’d been chomping at the bit to do a redesign of this thing here for many years. My old design – now lost to the Winds of Time – was *ahem* rather lack-luster. Trust me on this, folks; the Winds of Time can have it.
Then one day, I read a post about the copy-on-write internals of Ruby by Brandur Leach, and I became truly inspired.
I find the blog’s visual design to be a sublime blend of online content and classic newsprint layouts. The information density that it conveys, with its powerful heading, unassuming sidebar, dual-functioning gutters and seamless diagrams, is both magnificent and effective.
Yes, this was what I wanted.
I was further influenced by the strong watermark at the head of John Novak’s blog. I see a poetic balance of vector graphics and discretionary color, the spirit of which I’ve shamelessly copied.
I appreciate David Clark’s choice of font families & sizing, and his generous use of whitespace. Varuna Jayasiri has a nice columnar layout with just enough color to call focus yet not be distracting. And Hack Cabin, opinionated in its use of strong colors, uses a good set of icons to punctuate metrics like the Word Count and Reading Time.
All of that, yeah. I want it all.
I need me a Brochure for the Ages.
A visual statement of Timeless Web Design.
Something that would bring me more delight than the parade of very capable but otherwise ¯\_(ツ)_/¯ blog post templates that I see on a weekly basis as I catch up with “the Feeds”.
Sure, for most people it’s about getting the content out there, more so than making it look “right”. But it would appear that I’ve got a fever, and the only cure is :smoking: rollin’ my own site. Fortunately, I have a smattering of design skills, and I have a friend in Hennie Farrow, my employer’s Head of Design, who had plenty of salient advice for me.
Oh, did I mention the Perfectionist thing? Yeah, I’m one of those.
So, let’s roll up our sleeves and get down to it, shall we?
I went with Jekyll. It’s got a long history, good documentation, it supports Markdown as a first-class citizen, and it’s Ruby.
Out of the box, Jekyll will use its Minima Theme. I needed a good set of training wheels to figure out their Template system, so I also plugged in their Minimal Theme. They are similar in name alone.
Of course once I’d gotten familiar with the Liquid DSL, I went and :smoking: rolled my own Theme. Various examples of date formatting and how to list all the posts, tags and categories (etc.) were of great help.
The core Plugins I chose for the Build came together with the help of assorted stack overflows. I implemented a few Plugins of my own – such as my Day Calendar widget and Footnotes composer – with the guidance of Liquid for Programmers and the source code of the tools above.
I want my content to be machine-readable in a shit-ton of different ways. Obviously, it’s gotta have a Sitemap and a robots.txt. Adding a humans.txt seems the polite thing to do.
Then, there’s an immense amount of scrape-worthy <metadata /> to be offered up, much of which is covered by joshbuchea/HEAD.
For instance, just off the top of my <head />, there’s schema.org markup, validated by Google’s Structured Data Testing Tool.

The HTML also provides some minimal ARIA markup, enough that the Web Accessibility Checker doesn’t complain too loudly.
I declare Element roles where needed, and it is easy to apply the empty alt Attribute & aria-hidden techniques.
In the interest of being fair & balanced, there’s also an anti-Accessibility feature, as explained in Special Effects.
Major props out to ngrok. Their service is invaluable in projects like these. It allowed me to expose my private HTTP dev environment for public parsing and scrutiny.
And while we’re on the subject of meta-information … I authenticate the blog through keybase.io, and far as I’m concerned, my own work here falls under the Do What the Fuck You Want to Public License.
My blog follows a semblance of the recommended HTML5 semantics, with a <nav /> and <section />s and styled <ul />s galore.
It’s interesting how even minor structural changes can impact machine-parseability, such as the way Safari Reader renders your site.
On the advice that markup like <article /> should never be styled, each semantic Element has an inner CSS “Wrapper” <div /> which shapes the container. This approach comes in real handy when it’s time to implement a Responsive Web Design.
This comparison of PX, EM and REM media queries led me to do virtually all the CSS sizing in EM units, including the scaling of SVG assets and pixel-pushing (hah) Elements around by multiples of 0.1em. EMs make everything scale just so darn nicely. The only places where PXs were indispensable were in border-widths and a touch of letter-spacing.
There are three major media breakpoints which trigger the page layout. I’ve even come up with nicknames for them:

- “weensie”, where font-size and padding are at a premium
- a middle tier, where the font-size can get kicked up, and content can flow to the screen’s right edge
- “widescreen”

Once you reach “widescreen” mode, paragraphs get constrained to an optimal line length, as covered in more detail under Font. Additional breakpoints above “widescreen” will gradually jack up the font-size to optimize for large-scale displays.
Now, whereas all the text in my CantRemember.com design scales fluidly using vh and vw units, that is an anti-pattern for a blog. Readers should be able to scale the font to whatever size suits their reading comfort.
Well, it just so happens that one side-effect of doing media queries with EM units is that increases in scale gradually trigger the lower breakpoints – at least in Chrome & Firefox, anyway. If you zoom in far enough while browsing from your laptop, you might just see “weensie” in action.
With all that deliberateness in place, it seemed reasonable to lock down the mobile viewport scale. Both Mozilla and Google provide good takes on this technique. I’d imagine folks have better things to do than pinch my pages.
Aside from the Responsiveness, there’s nothing special going on here.
Everything stacks on top of normalize.css.
I use BEM class-naming conventions and [attribute="behavior"] selectors. Those choices, and the use of SASS, keep it all from becoming a tangle.
And thank Goddess that SASS lets you put your @media
selectors anywhere you damn well please.
The styling keeps mostly to the basics, like centering techniques and maintaining aspect ratio for <img />s. All of the Special Effects were implemented with native DOM methods, for which DevDocs is a fantastic reference. I also needed a few reminders of how to measure Element dimensions and location.
When this site got built, there was growing adoption of neato-keen features like CSS Variables and Grid Layout. However, Can I Use told me that availability was still limited … whereas here I am, building a site that can be read comfortably using Lynx.
Okay, there is a bit of Flexbox –
the modern spec,
as opposed to the 2009 & 2011 variants –
whose only job is to grant the <footer />
its own multi-column layout.
But even with a good guide and applying just the basics,
I fought enough battles against browser-specific quirks (I’m looking at *you*, Safari)
that it was more reliable to go old-school :metal: for everything else.
All without any uses of float
, of course; I do have some principles …
Also, some judicious uses of Animation, nearly always with opacity
.
Thanks to the community, I can now use CSS Transitions
and CSS Animations like a pro.
As the Attributions below would suggest, this blog is thick with vector graphics. There are tools out there which I could have used to :smoking: roll my own Icon Font. But upon further consideration, naaah.
Throwing caution to the wind, I disregard support for SVG fallbacks
and instead use the simple <img />
tag approach.
A limited set of them get inlined during the Build because that instant-rendering rush is worth some per-page overhead.
I played around with implementing the “watermark” SVG as a CSS Background Image to colorize its stroke. However, that would have required inlining, lest the viewer’s eyes be assaulted with a solid rectangle of background color until the asset had downloaded. Again, naaah.
The range of choices offered by Google Fonts is astounding.
I wanted a nice legible sans-serif <body />
font, so I went with the always-popular “Open Sans” …
it is really easy on the eyes.
Other families in the running were;
Hind Guntur,
Hind Siliguri,
Mukta Vaani,
Noto Sans,
Nunito,
Palanquin,
Roboto &
Raleway.
Then, hmmmm … hey, maybe the <h1-6 />
Elements could use a bit of an accent.
So I :smoking: rolled my own Font.
I did not roll my own Font.
I chose “Teko” … it’s clean, legible, and it has zing. Other families under consideration were; Alegreya Sans SC, Anton, Asul, Michroma, Ramabhadra, Secular One & Viga.
“Open Sans” offers Italic style and Bold weight variants, which get loaded & leveraged with the @font-face declaration.
There’s a very nice smoothness to a native italic font – the synthesized version isn’t as crisp.
I’m hoping that after gzip
and the Build’s glyph-shaking step, the reading experience is worth the load time.
“Open Sans” also offers some beautiful Light variants, but I felt they would bloat the site (ಥ﹏ಥ) . Instead, I settled on some fine-tuning for Webfont Selection and Synthesis.
The “Bulletproof” @font-face syntax seems to be genuinely immune to bullets. And wouldn’t you agree there’s something charmingly 1920-ish about old-style Numerals?
Oh, and really? “Your Body Text Is Too Small”? I don’t think so :punch: .
In widescreen layouts, my text scales up to 14pt
.
Yes, it can drop down to 11pt
under mobile conditions, but, again, :punch: .
I’ve applied some methods for controlling spacing
at both the letter
and word
level, because I felt that subtle adjustments made a big difference in the paragraph flow.
Widescreen layouts – where the sidebar and gutters appear – have their content constrained to { max-width: 65ch }
,
dead center of the range suggested when writing CSS with Accessibility in mind.
Text (other than marginalia) is shown at a minimum of { color: #767676 }
which I’ve been told is as light as you should go.
This is what I mean by marginalia, which is covered under Special Effects.
I chose FOUT over FOIT when it came to my font loading strategy. The blog falls back to “Lucida” using Windows & Mac native equivalents until “Open Sans” arrives. Yes, the flash of the font rewrite(s) is … disappointing. But it’s the best compromise for readers with limited bandwidth.
Also, :punch: .
I reckon my code should look like code
.
# and I reckon it should look good in a box, too.
# to emphasize my good advice
# like,
rm -rf /
With Jekyll, the Markdown ```
renders to a Rouge / Pygments Element hierarchy,
whereas the {% highlight %}
tag renders to a different one.
Maybe I’ve mis-configured Jekyll. Who really knows at this point.
But I also embed my own Gists,
so either way, I have multiple fish to fry.
The green-and-purple statement of the perldoc theme is perfect for my palette of greens. The jwarby/jekyll-pygments-themes Repo houses a good variety of pre-built CSS files from the Pygments gallery.
Unfortunately, perldoc
isn’t offered out-of-the-box from the lonekorean/gist-syntax-themes Repo.
So I had to :smoking: roll my own CSS Selector color mapping.
I used the monokai
Gist,
Jekyll
and Pygments styles as comparative references.
This Issue thread on GitHub-Dark provided a good reference for the Gist selectors.
Then it was just a matter of crushing it.
Which I did.
I crushed it.
And if I didn’t suspect that, at its heart, my perldoc
implementation is kinda :poop: , I’d contribute it back to the Community.
But … it kinda is.
You can’t call a site “modern” these days if it doesn’t have a JavaScript build chain.
The hype du jour was webpack
, but I went with gulp v4.
Their formal introduction of gulp.series
and gulp.parallel
sealed the deal for me.
If you’re still using v3 or below, here’s a nice upgrade guide.
If you discover that 4.0
has been merged to master
, please let me know so I can update the link.
They’d kept it off master, on a non-breaking branch, for a looong time.
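For flavor, here’s roughly what the v4 style looks like – a minimal sketch with hypothetical task names and globs, not my actual gulpfile:

```js
// gulpfile.js – a minimal sketch of the gulp v4 style;
// task names and globs are hypothetical
const gulp = require('gulp');

function styles() {
  return gulp.src('sass/**/*.scss').pipe(gulp.dest('dist/css'));
}
function scripts() {
  return gulp.src('js/**/*.js').pipe(gulp.dest('dist/js'));
}

// independent tasks run concurrently, then the watcher takes over
exports.build = gulp.parallel(styles, scripts);
exports.default = gulp.series(exports.build, function watch() {
  gulp.watch('sass/**/*.scss', styles);
});
```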
These plugins are the power behind the throne:
Deployment-wise, I know it’s all the rage to host static content from Github Pages or S3, but I’ve already got nginx running on an EC2 instance built from Chef, so I’ll stick with what works.
I believe that there is a fine line between enhancement and gaudiness. So fine a line, in fact, that I drew, erased, and redrew that line at least a dozen times during this project.
I’m pretty sure I mentioned the whole Perfectionist thing within the first 3 sentences.
Every blog needs an on-scroll effect these days. Eschewing the aesthetics of the famous locking static nav, or the ever-popular bouncing profile pic, I instead mounted an opacity gradient over the sidebar logo. This helps it match the color intensity of the massive “watermark” logo. When the watermark and gradient scroll out of frame, the sidebar logo gets its color mojo back.
Oh, I should mention … some of these effects, like that one :point_up: , can only be seen in a “widescreen” layout, as covered under Markup.
Sorry, Tennessee.
Still, I needed a good excuse to use stutrek/scrollMonitor, so under the sidebar logo you’ll find a progress lozenge. It renders passively at 20fps to avoid introducing any jank.
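In rough strokes, it works something like this – a sketch of the idea, not my production code, with made-up element IDs:

```js
// a sketch, not my production code; element IDs are made up
var watermark = document.getElementById('watermark');
var watcher = scrollMonitor.create(watermark);

// the sidebar logo gets its color mojo back once the watermark leaves
watcher.exitViewport(function () {
  document.getElementById('sidebar-logo').classList.remove('faded');
});
watcher.enterViewport(function () {
  document.getElementById('sidebar-logo').classList.add('faded');
});

// the passive progress lozenge, updated at ~20fps
setInterval(function () {
  var range = scrollMonitor.documentHeight - scrollMonitor.viewportHeight;
  var pct = Math.min(100, (scrollMonitor.viewportTop / range) * 100);
  document.getElementById('lozenge').style.width = pct + '%';
}, 50);
```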
My FOUT strategy – as discussed at length under Font – was implemented with bramstein/fontfaceobserver and a modicum of marker classes.
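The shape of it is roughly this – a minimal sketch, where the marker class name is mine for illustration:

```js
// a minimal sketch of the FOUT hand-off; the marker class
// is for illustration, not necessarily what this site uses
var observer = new FontFaceObserver('Open Sans');

observer.load().then(function () {
  // the CSS swaps from the “Lucida” fallback once this class appears
  document.documentElement.classList.add('fonts-loaded');
});
```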
For social sharing buttons, I went with absolutely fucking nothing.
Oh, but I really do like the way that Github and other sites have passive bookmark indicators on all their <h* />
elements.
So I went and did that client-side with a little DOM jiggling.
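Something like this – a sketch which assumes the headings already carry ids, with a made-up class name:

```js
// a sketch of the DOM jiggling; assumes each heading has an id,
// and the 'bookmark' class name is made up
Array.prototype.forEach.call(
  document.querySelectorAll('h2[id], h3[id], h4[id]'),
  function (heading) {
    var anchor = document.createElement('a');
    anchor.className = 'bookmark';
    anchor.href = '#' + heading.id;
    anchor.setAttribute('aria-hidden', 'true');
    heading.insertBefore(anchor, heading.firstChild);
  }
);
```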
I was so delighted by this article on interactive marginalia
that I chose to experiment with faded-out copy in the right-hand gutter.
It’s an attempt to reduce distraction; the “interactive” part is the full reveal on :hover
or :focus
.
Semantically-speaking, it’s treated as <small />
.
But enough about all that. Let’s talk about the detonator.
It takes on a Lynchian flicker once the heading scrolls out of view. You can make it stop that! with a hover. Clicking on it will produce a UX which clearly violates WCAG 2.0 Guidelines 2.3: Do not design content in a way that is known to cause seizures.
It was inspired by the ‘release of the Hypnodrones’ from Universal Paperclips, but is much much freakier, more like the love-child of Electric Soldier Porygon and Twin Peaks Season 3 Episode 8.
Thank you StackOverflow for describing how to dynamically remove a stylesheet.
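The gist of the trick:

```js
// find every stylesheet <link /> and yank it from the DOM
Array.prototype.forEach.call(
  document.querySelectorAll('link[rel="stylesheet"]'),
  function (sheet) {
    sheet.parentNode.removeChild(sheet);
  }
);
```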
All that remains is an HTML Document in all of its naked glory.
The background-color
and font-family
are cheats added to evoke those sweet, sweet memories of Netscape Navigator 3.01 Gold.
And, let me just say right now that mroderick/PubSubJS really ties the room together. All the other effects obediently shut down post-detonation.
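Roughly how the room gets tied together – the topic name here is hypothetical:

```js
// the shutdown choreography, in miniature; 'detonated' is a
// hypothetical topic name
var DETONATED = 'detonated';

// each effect subscribes its own teardown
PubSub.subscribe(DETONATED, function () {
  // stop the flicker, disconnect the scroll watchers, etc.
});

// the detonator publishes once the stylesheets are gone
PubSub.publish(DETONATED);
```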
If you’re curious, you can always view-source the code. I even left you the comments, kids :relieved: .
And now … finally, as promised waaaaay earlier in this post … here is recognition for the Creative Commons artworks used on this site.
Many of the icons used on this site fall under the Creative Commons BY 3.0 license. For example, article meta-information is denoted by
- Clock by Alfa Design
- Newspaper by unlimicon
- Business Card by Nanda Ririz
- Tag by artworkbean
- Bucket by Austin Condiff
There’s an occasional left gutter accent
- Link from IcoMoon-Free
- Documents by Kidiladon
- Check by Austin Condiff
- Exclamation by Francisco Garcia Gallegos
- Lotus by Nick Bluth
- Dagger by Sumana Chamrunworakiat
And the ever-present <footer />
- Book by Mike Rowe
- RSS by Javi Ayala
- Github from IcoMoon-Free
- Twitter from IcoMoon-Free
- Envelope by Xela Ub
Plus, for effect,
The nature of which is covered under … you guessed it … Special Effects.
- Radiation by Shastry
- Radiation by Shastry
- Radiation by Shastry
- Nuclear by Shastry
- Mushroom Cloud by icon 54
- Explosion by Anbileru Adaleru
- Ash Cloud by Sarah JOY
- Atom by Gregor Cresnar
- Electric by Seymur Allahyarli
- Fireworks by James Keuning
The emojis sprinkled throughout these pages are provided by the jemoji
Plugin,
as cross-referenced by the Emoji cheat sheet for GitHub :ok_hand:
and a great listing @ onmyway133/emoji :thumbsup: .
But when it comes down to brass tacks, who doesn’t just ❤❤❤ kicking back with some Character Entity References, right?
… oh, since before the Turn of the Century.
You know, all casual-like. Sometimes I’ll also throw in the phrase “high-order JavaScript”, just to shine one on.
Yet, in going back over my blog posts during a recent refactoring, I was surprised to find how few of them reflected my long and storied history with the language.
I was inspired to correct this gross oversight.
Something deep within me insisted on making this Post into a confessional as well. I felt myself compelled to be achingly, if not brutally, up-front about many (though not all[1]) non-technical challenges I’ve encountered along my path.
I found it hard to retrace my “entire” career without also revealing these junctures which have led to sudden, unexpected growth in my professional character. Upon reflection, it seems this Post was destined to be not only a timeline, but also an exercise in personal accountability.
So; read on and witness my tale with all of its wounds and bruises and closures; or TL;DR it and just jump to things as of Today.
I had no idea when I started putzing around with my first website that I’d be hitching my horse to JavaScript for decades to come. In many ways, I resisted it. But I really enjoy writing in the language – much as I do with Ruby – and JavaScript’s ubiquity has made it the de facto language of the Web.
It was inspiring to read mr. Crockford’s opinionated wisdom beamed down from high upon Yahoo headquarters, but even with the potential of Rhino, for a long time the language’s impact seemed to be contained only to the browser. My preference felt like a constraining choice, limiting me to the Front End of the stack.
Then was born the Node.js wrapper around libuv and a revelation of how well JavaScript’s scope-capturing single-threaded features lend themselves towards efficient non-blocking I/O. In a great Cambrian explosion, JavaScript became a full stack language, and today[2] it thrives.
Despite all the WAT inherent in mr. Eich’s invention, it seems to hold boundless potential, inspiring everything from the nascence of WebAssembly via emscripten to polemics about releasing Zalgo and the phenomenon of JavaScript Fatigue Fatigue.
So, now that I have firmly and decisively taken the reins of my Bandwagon, what has led me to here?
I graduated from the University of Lowell (now UMass Lowell) in 1990. Overall, my college education took me 6 years to complete. It’s safe to say that I needed some motivation to finish; my two best friends there never bothered to get their degrees.
During my course of study, I worked part-time for an Engineering firm in Waltham, MA – the company which employed my father until his retirement[3]. One day, my manager simply sat me down and informed me of
That was sufficient motivation. I had my degree within a year.
The Engineering firm in question did business on a grant & contract basis, so I worked on incremental projects for the DOD and other clients. Primarily I was writing hardware control systems in Visual Basic. It was a language that hardware engineers were comfortable understanding, and I’d been scripting in VB for years. Sure, I did some Objective-C development for them – using a bona fide windowed Mac IDE – back in ‘91, but it was mostly all-VB all-the-time.
In 1992, my friend Tom suggested that I move to Seattle. This sounded like a good idea to me, so my employer of 7 years and I parted ways. With my belongings in tow, my friend Mark & I made the cross-country drive along I-90 in 72 hours.
I arrived in 1992 with a B.S. in Computer Science, yet apparently not a lot of relevant, desirable experience. My 70 WPM typing speed allowed me to stay afloat through temp jobs doing word processing & data entry tasks until I was hired in 1994 by a company that had in-house VBA projects. This was a significant relief, because it validated my degree and skillset, and it allowed me to expand my CV on the Microsoft platform.
Yet the job itself wasn’t nearly as influential to my career as was my introduction to the web …
In the mid ’90s – you know, before the Turn of the Century – my friend Erich from ULowell was working at The Internet Company back in Boston. I found HTML authorship & publishing to be a joy, and I was given permission to host a subdomained website.
This was entirely a personal project, and not only was JavaScript an awesome tool for producing visual effects, but more importantly there was so much to be learned from the view-source of other peoples’ sites.
Let’s call the following code my Lowly Beginnings.
These are excerpts from the <frameset />
-based root page of that site,
designed so that I could have a { position: fixed }
header – which (a not-yet-existent) CSS would later provide with ease –
yet also trigger rollover animations and related bells & whistles.
var root = '$pathabs';
var okBrowser = ! (((navigator.appName == "Netscape") && (parseInt(navigator.appVersion) < 3 )) || ((navigator.appName == "Microsoft Internet Explorer") && (parseInt(navigator.appVersion) < 2 )));
var gfxBrowser = (((navigator.appName == "Netscape") && (parseInt(navigator.appVersion) >= 3 )) || ((navigator.appName == "Microsoft Internet Explorer") && (parseInt(navigator.appVersion) >= 4 )));
var isLoaded = false;
var cleeclikLoaded = false;
function OnPageLoad() {
if (! okBrowser) return;
if ((typeof navigator.crash) != "undefined")
// it's their own damn fault for implementing it.
navigator.crash();
// loaded!
setLoaded(true);
}
function OnPageUnload() {
setLoaded(false);
}
function setLoaded(b) {
isLoaded = b;
if ((typeof top.setFooterLoaded) != 'undefined')
top.setFooterLoaded(b);
}
function IsHeaderLoaded() {
return (((typeof top.isHeaderLoaded) != 'undefined') && top.isHeaderLoaded());
}
function IsAlgraLoaded() {
return (((typeof self.algra.isLoaded) != 'undefined') && self.algra.isLoaded);
}
function IsBoonerLoaded() {
return (((typeof self.booner.isLoaded) != 'undefined') && self.booner.isLoaded);
}
function mouseover(num, inh, src) {
if (! isLoaded) return;
if (IsHeaderLoaded())
parent.header.mouseover(num, 1, src);
if (IsAlgraLoaded())
self.algra.mouseover(num, 1, src);
if (IsBoonerLoaded())
self.booner.mouseover(num, 1, src);
if (cleeclikLoaded)
cleeclikOver(num, 1, src);
}
function mouseout(num, inh, src) {
if (! isLoaded) return;
if (IsHeaderLoaded())
parent.header.mouseout(num, 1, src);
if (IsAlgraLoaded())
self.algra.mouseout(num, 1, src);
if (IsBoonerLoaded())
self.booner.mouseout(num, 1, src);
if (cleeclikLoaded)
cleeclikOut(num, 1, src);
}
// ------------------------------------------------------
function cleeclikLoad() {
// loaded!
cleeclikLoaded = true;
}
function cleeclikOver(num, inh, src) {
if (! cleeclikLoaded) return;
if ((inh == 1) && (src == 'c')) return;
if (inh == 0) {
mouseover(num, 1, src);
if ((num != $sleepBot) && (num != $sleepMap))
window.setTimeout("mouseout(" + num + ", 0, '" + src + "');", $delayTime);
}
}
function cleeclikOut(num, inh, src) {
if (! cleeclikLoaded) return;
if ((inh == 1) && (src == 'c')) return;
if (inh == 0)
mouseout(num, 1, src);
}
In real time[2], I’ve just looked at that code for the first time in probably 20 years, and said
“wow … that’s not as bad as I remember it being.” [4]
But that being said, I can’t help but notice
- setTimeout was a String[5]
- the top namespace
- self.__NAME__ => window + <FRAME NAME="__NAME__" />[6]
- Navigator#crash :fist:

Yet all that aside, the technique was solid.
Cross-frame communication was incredibly reliable if the control code lived in window.top
and all the frames leveraged it.
Some of the ancient pages on my personal site still use this tried-and-true strategy.
I’ll leave it to you, Gentle Reader, to view-source
and witness my dusty code in its full glory.
In the meanwhile, I was hired by another company to maintain their online CD-ROM catalog. I wrote a static content generator in VB, and Perl CGI scripts to render dynamic content. When that company went under, I was hired by my next employer – one of the few still in business lo these 20 years later – to work on their web-based expense management platform. It was built on IIS and used ASP to interact with in-house DLLs.
The intersection of my Visual Basic and JavaScript skills made me a valuable asset.
Another engineer had built a Java Applet for dynamic CRUD of expense entries. Now, this was before PaaS, so the web-based “product” was installed at and self-hosted by our corporate customers.
Well, surprise surprise, it turned out that some IT Departments were somewhat worried about the security of Java, and they didn’t want it running on in-house browsers. A project was born to replace the Java Applet with a JavaScript equivalent.
As early as 1998, I was implementing a client-server JavaScript “Applet”. The code was a freak of nature that was stable on both Netscape Navigator Gold 3.01 and IE 4.02. I can’t account for whether it was “state-of-the-art” or not, but this was my architectural solution:
- <form />s[7] which responded with payloads marshalled as DOM-parseable HTML
- the javascript: protocol
- document.write action going in the other Frames at load-time

Here are some redacted code snippets:
// JSDynamic.js
function DynamicPageWrite(wVal, tVal) {
wVal.document.clear();
wVal.document.write(tVal);
wVal.document.close();
}
function DynamicPageLoad(wVal, tRef) {
wVal.location = "javascript:" + tRef;
}
function DynamicPopupWrite(wRef, tRef, nme, ftr) {
eval(wRef + " = window.open('javascript:\"\"', nme, ftr);");
setTimeout("DynamicPageWrite(" + wRef + ", " + tRef + ");", 0);
}
function DynamicPopupLoad(wRef, tRef, nme, ftr) {
eval(wRef + " = window.open('javascript:\"\"', nme, ftr);");
setTimeout("DynamicPageLoad(" + wRef + ", 'opener." + tRef + "');", 0);
}
// JSDialogPopup.js
var OwnerHandle = null, OwnerObject = null;
var OwnerOpen = 1, OwnerNotify = false;
var OwnerCanFocus = 0;
function DialogLoaded() {
DialogConnect();
if ((typeof OwnerHandle.OnDialogOpen) != 'undefined') {
OwnerObject = OwnerHandle.DialogObject;
OwnerCanFocus = OwnerHandle.DialogCanFocus;
}
if (! DialogIsSilent()) {
if (OwnerObject != null) {
OwnerHandle.OnDialogOpen(self);
return;
}
OwnerHandle.DialogOpen = 1;
}
if ((OwnerCanFocus != 0) && ((typeof self.focus) != 'undefined'))
self.focus();
}
function DialogConnect() {
if (OwnerHandle == null) {
OwnerHandle = top.opener;
OwnerNotify = true;
}
}
function DialogConfirmed() {
if (! DialogIsSilent()) {
if (typeof(OwnerHandle.OnDialogConfirm) != 'undefined') OwnerHandle.OnDialogConfirm();
if (typeof(OwnerHandle.OnDialogClose) != 'undefined') OwnerHandle.OnDialogClose();
OwnerHandle.DialogOpen = 0;
DialogSilent();
}
}
So, what have we learned?
- doc assigned as var doc = window.document mysteriously failing after a Document#write
- setTimeout was a String[5]

Also, eval wasn’t truly evil until I got my hands on it.
A nagging part of me is horrified at how clunky[4] it looks.
But in the end, I had something very XHR-ish,
and it was robust enough for Production use.
Well, at least until the memory leaks started …
At this point in the story, it was 1999, and the Dot-Com was booming. I’d written some Applets for my personal site, and I had decided it was time to embrace not only Java, but also a new home. So my employer & I parted ways, and I headed towards parts south.
When I tell folks how long I’ve been living in San Francisco, I use the phrase
… oh, since the Turn of the Century.
Yep, it’s the same wry joke as the title of this Post, but without the “before”.
I moved to San Francisco in May 2000 and had both an over-priced apartment and several job offers within 6 weeks. I chose the startup where I would become the first in-house Developer, though that was primarily because (refreshingly) more than half of the staff were women.
What I didn’t know at the time was how fortunate I would be when our financial backers were willing to ride out the 2001 “bubble”-pop and keep funding the company until 2005.
At one point, we were asked to implement a live Chat client. Below I’ve redacted the client-side code to negotiate with the Java Applet that proxied to our server.
var mbBrainsLoaded = false;
var mbSessionActive = false;
var miRevisionMinor = -1;
function isAppletValid() {
var oApp;
if ((typeof document.appChat) == "undefined")
return false;
oApp = document.appChat;
if ((typeof oApp.isReady) == "undefined")
return false;
if (miRevisionMinor == -1) {
if ((oApp.getState() == 'H') && (! oApp.isHistoryChanged()))
miRevisionMinor = 4;
else
miRevisionMinor = oApp.getRevisionMinor();
}
return true;
}
function isAppletReady() {
var oApp = document.appChat;
return (mbBrainsLoaded && oApp.isReady());
}
function isAppletInitialized() {
var oApp = document.appChat;
return (isAppletReady() && oApp.isInitialized());
}
// - - - - -
function doAppletInitialize() {
if (! isAppletValid()) {
top.doLoadTechnicalFrames();
return;
}
var oApp = document.appChat;
iTimeoutMinutes = 5;
var oDate = new Date();
var iTimeout = oDate.getTime() + (iTimeoutMinutes * 60000);
if ((! top.isReady()) || (! isAppletReady())) {
var oNow = new Date();
var iNow = oNow.getTime();
if (iNow > iTimeout) {
top.doLoadUnavailableFrames('applet init timeout');
return;
} else {
self.setTimeout('doAppletInitialize();', 100);
return;
}
}
oApp.initLocalInfo(top.getCustomerName(), top.getSHSHTYP(), top.getSHRFNBR());
if (!isAppletInitialized()) {
// ...
oApp.setInitialized(true);
}
if (!oApp.sessionStart(top.getUniqueID())) {
top.doLoadUnavailableFrames('start session');
return;
}
var iTimeoutMinutes = Number(oApp.GetTimeOutLength());
if (String(iTimeoutMinutes) == "NaN") iTimeoutMinutes = 5;
var oDate = new Date();
var iTimeout = oDate.getTime() + (iTimeoutMinutes * 60000);
WaitForSessionAccept(iTimeout);
}
function WaitForSessionAccept(iTimeQuit) {
var oApp = document.appChat;
var sState = String(oApp.getState());
mbSessionActive = false;
switch(sState) {
case "A":
case "H":
if (! ((sState == 'H') && (! oApp.isHistoryChanged()))) {
top.doLoadReadyFrames();
top.doHandOffInitialMessages();
doRenderPlaybackWhenChanged();
return;
}
break;
case "U":
case "D":
oApp.sessionEnd();
top.doLoadUnavailableFrames('chat denied / shutdown');
return;
break;
}
var oNow = new Date();
var iNow = oNow.getTime();
if (iNow > iTimeQuit) {
oApp.sessionEnd();
top.doLoadUnavailableFrames('session wait');
return;
} else {
self.setTimeout('WaitForSessionAccept('+iTimeQuit+');', 1000);
}
}
// - - - - -
function onPageLoad() {
doAppletInitialize();
mbBrainsLoaded = true;
top.setReadyBrains(true);
}
function doEndSession() {
var oApp = document.appChat;
if (! mbSessionActive)
return;
if (isAppletValid() && isAppletReady())
oApp.sessionEnd();
mbSessionActive = false;
}
function doCloseWindow() {
if (mbSessionActive && (! confirm('Are you sure you want to end your session?')))
return;
doEndSession();
top.close();
}
Yep, that looks like my coding style alright, and there are some signs of change
- setTimeout was still a String[5]

Our general site code had the usual event-driven visual effects and validation and other <form />
submission treatments,
but this Java Applet timing handshake across <frameset />
s made for some trickier stuff.
Which is all fine and dandy – but bear with me here, please – because it’s time for another diversion …
As Engineering Hire #1, I was involved in all of the hiring decisions of our Frontend team. I was a rather intense technical interviewer, especially when it came to the JavaScript language.
Co-workers whom we hired – who are now long-time friends – have recounted to me with amusement about our first interactions over a whiteboard. I am ashamed to say that in 2002 I brought a candidate to tears with my overwhelming line of questioning, but since that incident I’ve become much kinder and more flexible when vetting a person’s skills.
In the fullness of time, I’ve also learned that I sometimes had the wrong answers.
I used to ask a candidate what you’d pass as the first argument to setTimeout
,
which I understood to be a String to be eval
‘d[5].
Yes, that’s an answer – I mean, Chrome 57 still supports it! –
but of course “a Function” is the conventional answer.
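For posterity, both answers in the flesh:

```js
// both forms work; the String form gets implicitly eval'd
setTimeout("console.log('the answer I used to insist upon');", 1000);
setTimeout(function () { console.log('the conventional answer'); }, 1000);
```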
I can forgive myself in that I never told someone that “a Function” was wrong; the answer I almost always heard was “I don’t know”. But yes, I was responsible for spreading some poor advice. Thank Goddess I never spoke with someone as skilled as this guy because I would have gotten served.
In 2005, it was obvious that we weren’t going to crack the consumer market, so the company was shuttered. I then fell into a clinical depression that lasted 2-1/2 years. I survived this difficult period through no lack of supportive friends, meditative yoga and the wonders of modern chemistry.
And until I’d re-entered the post-“bubble” hiring market – which had started to recover from the crash – I did not realize how much state-of-the-art had passed me by.
Though I stayed gainfully employed during this transition time, I have some shameful memories:
Plus another interview SNAFU involving security doors which I find hard to summarize. Yeah – those four; those were rough ones.
It was a difficult period of time, and I grew stronger, and I survived it. In the process, I learned many lessons about the habits & discipline it takes to maintain a career in such a rapidly-moving fast-fashion industry.
I would not have the knowledge & skillset I have today[2] were it not for
Time has granted me a greater refinement of craftsmanship and maturity of perspective … as will be evidenced by a significant reduction of hand-wringing going forward in this Post :+1: .
Okay, getting back on course now … Weren’t we were talking about JavaScript?
At the time of this writing[2], I’m on my 7th startup here in the Bay Area. Two of them have been in the gaming industry. One job was a 90-minute daily commute over to the East Bay. I have stuck around two startups to turn off the lights, and gotten out just before the collapse of two others.
This seems to be the cycle of life here in Silicon Valley; only the frameworks change. And upon each of them, we implement the usual patterns.
Here I am, handling queued events in a Widget built using Prototype and the Dojo Toolkit
dojo.provide('Redacted.AbstractDialog');
dojo.widget.registerWidgetPackage('Redacted');
// ...
dojo.require('dojo.uri');
dojo.require('dojo.widget');
if (! Redacted) { Redacted = {}; }
/**
* abstract Dialog widget
*/
dojo.widget.defineWidget(
'Redacted.AbstractDialog',
'html',
[ dojo.widget.html.Dialog ],
// constructor
function() { },
// body
{
contextBase: Redacted.Settings.context,
templateBase: dojo.uri.dojoUri('../redacted/templates/'),
// ...
_onCloseVoters: [],
_enablers: {},
// ... see postCreate
isSelfClosing: true,
postCreate: function(args, frag) {
// ...
if ('isSelfClosing' in args) { this.setIsSelfClosing(args.isSelfClosing); }
// ...
this._enablers = {
close: new Redacted.Enable([ this.linkImageClose, this.linkImageCloseDisabled ], enShow)
};
},
// ...
// handlers
_clickClose: function(evt) {
if (! this._enablers.close.isEnabled()) { return; }
var i, a = this._onCloseVoters;
for (i=0; i<a.length; ++i) {
var v = a[i], vote = true;
if (! v.func) {
if (dojo.lang.isFunction(v.ctx)) { vote = v.ctx(evt, this); }
}
else if (dojo.lang.isFunction(v.func)) {
vote = v.func.call(v.ctx, evt, this);
}
else if (dojo.lang.isString(v.func)) {
vote = v.ctx[v.func].call(v.ctx, evt, this);
}
if (! vote) { return; }
}
if (this.isSelfClosing) { this.hide(); }
this.onClose(evt);
},
}
);
Here I am, several years later, batching AJAX requests through Prototype
var Container = function(data) {
this.batch = { queue: [], timeout: null };
// ...
Object.extend(this, data);
// ...
return this;
};
Object.extend(Container.prototype, {
batch: null,
// ...
batch_send: function() {
var b = this.batch;
if (b.timeout) {
window.clearTimeout(b.timeout);
b.timeout = null;
}
if (b.queue.length > 0) {
new Ajax.Request('/container/batch', {
parameters: { queue: Object.toJSON(b.queue), authenticity_token: this.authenticity_token },
asynchronous: true,
evalScripts: true
});
b.timeout = window.setTimeout(this.batch_send.bind(this), 2000);
b.queue.length = 0;
}
},
// ...
});
Here I am, several more years down the line, performing MongoDB atomic operations which roll back unless they converge
'use strict';
var Promise = require('bluebird');
var core = require('redacted-core');
var BaseService = core.BaseService;
/**
* @class
* @name redacted.engage.EngageService
*/
var EngageService = BaseService.extend(
{
// ...
/**
* @protected
* @method
* @param {Object} state
* @param {Array<Integer>} playerIds
* @return {Promise} a Promise resolving
* an Array with {@link redacted.engage.Engagements} for each of the Players
*/
_engagePlayers: function(state, playerIds) {
var self = this;
var Ctor = this.constructor;
var Engagements = this.model(Ctor.modelName);
var engagementId = state.id;
var mightRollback = [];
return Engagements.find(
{ _id: { $in: playerIds } },
null,
{ sort: { _id: 1 } } // in repeatable ID order
).exec()
.then(function(engagements) {
return Promise.all(engagements.map(function(engagement) {
var playerId = engagement._id;
try {
_assertPlayerNotEngaged(playerId, engagement);
}
catch (e) {
return self._scheduleFixupPlayer(playerId)
.throw(new Error("Player " + playerId + " is otherwise engaged"));
}
}))
.return(engagements); // propagate, vs. scoped var
})
.then(function(engagements) {
return engagements.reduce(function(chain, engagement) {
var playerId = engagement._id;
return chain
.then(function() {
return Engagements.findAndModify(
{ _id: playerId, engagedIn: Const.NO_CURRENT }, // assuming you're not previously engaged
{ },
{ $set: { engagedIn: engagementId } }
);
})
.then(function(result) {
if (! result) {
throw new Error("Player " + playerId + " is otherwise engaged");
}
mightRollback.push(engagement);
});
}, Promise.resolve());
})
.then(function() {
if (mightRollback.length !== playerIds.length) {
throw new Error("at least one Player is otherwise engaged");
}
// $findAndModify is done out-of-band with the in-memory Model cache
mightRollback.forEach(function(engagement) {
engagement.__attributes.engagedIn = engagementId;
});
return mightRollback;
})
.catch(function(err) {
return self._rollbackEngagement(state, mightRollback)
.catch(_absorbAnyError)
.throw(err);
});
},
});
module.exports = EngageService;
Alright, alright … that’s some lovely ancient history. I commend both your patience and your spirit, dear Reader.
Because it all leads us to …
Honestly, today[2] most of my ‘living’ projects are Ruby scripts. Do I need to write me some expressive synchronous code which gets executed from the command line and has no performance requirements? Why yes, yes I do. Ruby is excellent for that, and in these cases I choose it over JavaScript.
I am gainfully employed writing JavaScript 90% of the time (curse you CoffeeScript). And I maintain a bunch of JavaScript-related projects in my private repo. But what’s more interesting is my public “showcase” repo.
It’s the JavaScript ES6[8] app which I built to replace all the legacy Perl CGI scripts
for sleepbot.com, the long-lived descendant of
my first website.
Feature-wise, it’s complete fluff, because the Perl scripts it replaced were super-simple.
The interesting part is in the project’s README
…
which has some silly Badges …
you know, for build status and up-to-date-ness, and …
Crap.
See, in real time, I just found out that my Travis CI build is broken.
In the last commit, I upgraded to request@2.81.0
, which works great in Node 6, but is unsupported in Node 0.10 and blew up on latest
(Node 7.10 as of when I pushed).
I added tests for all those versions just because I was curious.
So now, because I haven’t fixed it – which I haven’t[2] – my “showcase” project is out there on Github declaring itself broken to the world. My choices are to (a) have a broken build; (b) have a David DM Badge shaming me for being out-of-date; or (c) only build against Node 6.
I mean, all the cool kids use Repository Badges. I wonder how they manage the social pressure of maintaining them :relieved:
The cool kids also commit PRs to fix issues in Open Source projects. I’ve done that too, and mostly for JavaScript packages.
Several times a year I offer to mentor for NodeSchool SF. It’s an opportunity for me to “give back” to the community while I level up on ‘soft’ skills like How to Teach and How to Lead. I was the initial contributor to their Event Mentor Best Practices Guide.
In my Day Job, I have the opportunity to play with some of the Frontend toys the cool kids use. For example, here I am writing React at a truly introductory level
import React, { PropTypes } from 'react';
import classNamesBind from 'classnames/bind';
import noop from 'lodash/noop';
import styles from './Checkbox.css';
const classNames = classNamesBind.bind(styles);
function Checkbox({ onChange, checkboxStyle, label, disabled, ...rest }) {
const labelClassname = styles[`${checkboxStyle}Label`];
const labelTextClassname = classNames({
[ `${checkboxStyle}LabelText` ]: true,
disabledLabelText: disabled,
});
return ( <label className={labelClassname}>
<input
type="checkbox"
onChange={onChange}
disabled={disabled}
{...rest}
/>
<span className={labelTextClassname}>{label}</span>
</label> );
}
Checkbox.propTypes = {
onChange: PropTypes.func,
disabled: PropTypes.bool,
checkboxStyle: PropTypes.string,
label: PropTypes.string.isRequired,
};
Checkbox.defaultProps = {
onChange: noop,
checkboxStyle: 'normal',
};
export default Checkbox;
And ya know, in this moment I can’t help but think to myself
Oh gawd, that’ll look so crappy to me in 3 years time.
I would say that developing in the JavaScript language and its rich tapestry of frameworks and toolchains does not get boring.
As of today[2], if you do a view-source
of my Current Lister World Map and extract the code,
you’ll see that it’s still written using Prototype and DWR.
The rest of my radio station’s client code still uses an ancient JavaScript code loader called JSAN.
For yeeears, I’ve been saying
You know, I really should re-write that code …
But, ya know, it just plain works. I figure; as long as the client doesn’t throw Errors, and I can repeatedly re-launch my servers into AWS, I shouldn’t spend time re-inventing those wheels. Yep, those creaky ol’ rust-flakin’ wheels.
I’ve been spending time lately[2] putting together site build chains.
Inlining, SVG optimization, code bundling, all that good stuff.
At the top, there’s a Makefile
, and it drives the gulp tooling.
Before gulp
, there was grunt, and also Yeoman.
Now there’s webpack and rollup and parcel.
Oh, oh, … and remember GWT :sweat_smile: ?
I’m proud to say that my radio station stack – Java backend and all – has been running smoothly for twelve years now. I’ve always tried to make technical choices that have long-term maintenance potential. Choices like using Makefiles. By and large, those principles have served me well.
I wonder about my new from-scratch JavaScript projects.
Am I investing in what may become some very ephemeral tooling?
Will gulp
continue to work “forever”, the way my ol’ Ant tasks do?
Gosh, let’s find out!
Some day, I might actually re-write that code! Maybe in React and Redux. Or Vue. Definitely not in Angular, because Angular 2. And I don’t figure Ember or Backbone. Nope, no Prototype or Dojo or YUI or jQuery UI or … wait, there were others too … umm. Well, yeah. You get my point.
As I get older in tech, the more I’ve come to appreciate my repeatable tooling, my documentation and those times that I pay-it-forward to myself with Test Suites. And above all, how nice it is that, to this day, it just plain works.
In conclusion, reproducible processes are critical to – oh, look! :sparkles: sparkly :sparkles:
Node 8.3.0 runs on top of V8 6.0 now. Mmm hmm, and a new JIT means better performance. In the long run, maybe Google will implement ‘strong’ mode. Because performance is cool.
I’ve loved me some async / await that I’ve written. And ES6[8] object destructuring is faaabulous !!!
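A toy, browser-flavored sketch of that one-two punch:

```js
// async / await plus destructuring, in miniature
async function fetchStatus(url) {
  const { status, ok } = await fetch(url);
  return { url, status, ok };   // shorthand properties … also faaabulous
}
```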
:rocket: To The Future!
“though not all” <= See what I mean! I didn’t have to say that. Sheesh.
Where “today” is circa Summer 2017, the dawn of my 50th year.
This too will be lost, in time, like tears in rain.
I cannot overstate the privilege of having been guided into a professional Engineering environment by my father, a fine Electrical Engineer in his own right.
When I started writing this Post, I worked hard to accept the fact that I was dredging up ancient stuff, and that it wasn’t going to be pretty. I rationalized:
no Developer is ever that happy with their old stinky code
I bring this archaicness into the light of day in a spirit of full disclosure :relieved: which documents my stylistic evolution over the decades.
Yep … from my first website to my proto-AJAX implementation and on into the early 2000s and my interview questions, I sure believed some crazy shit.
For the longest time, I wrote all of my HTML Element markup in all-caps. Dammit, why am I admitting to these things ??
The page-specific <form />
-build-and-submit()
code was rendered as <script />s by ASP server-side logic.
It’s 1999-era cool, but kinda scary to visually parse nearly 20 years later.
I’ll spare you the horror of having to see it yourself.
Wait, was it ES6 or ES2015? I get the impression that it’s complicated.