Ignored By Dinosaurs 🦕

It's simple really, but you have to know a few things about how a browser renders a page to get it. jQuery is a wonderful thing because it removes the need to know a LOT about the DOM while still letting you get amazingly cool things done – slideshows, form validation, etc. However, if you learn a bit more about how JavaScript works in a browser, you can go beyond copying and pasting code snippets and really start getting creative. Or surgical. Or whatever you want.

So when code comes over the wire into your browser, the browser parses it from the top down. That's the fancy, mechanical way of saying it follows the instructions from the top of the page to the bottom. Any time it encounters a piece of JavaScript, whether directly embedded in the page source or linked from another file (on your site or somewhere else), it takes a detour into that JavaScript and executes its instructions before it moves on with parsing the rest of the page. If you put a humongous blob of JavaScript at the top of the page, your site will seem slower because everybody's browser stops to fetch and execute the JS before it moves on down into the content of your page.

That means what? Well, say you have a date field on a form in your page and you want to use the jQuery UI datepicker on it. All you then need to do is write $('#event_date').datepicker(); and you have a datepicker. But there's a catch. If you put that code at the top of your page (in the head), or in a file that's linked in the head of your page (using script src), the browser will execute that instruction as soon as it encounters it. The datepicker won't show up. You'll pull your hair out. The reason is that the browser tries to set a datepicker() on an element it doesn't know exists yet, because it hasn't gotten far enough down the page to render it. It'd be like giving the steps for a recipe before giving the ingredient list. Only your computer is too stupid to look down the page and read the ingredients. It'll just say "flour? There's no flour on my table. Moving on." and no datepicker.

There are two solutions.

One is to delay the execution of the datepicker() instruction until the page has been rendered/parsed/whatever you want to call it. You do this by putting all of your custom code inside a call to jQuery(document).ready() like this -

jQuery(document).ready(function($) {
  $('#event_date').datepicker();
});

// equivalent (and better) ->

jQuery(function($) {
  $('#event_date').datepicker();
});

That basically “hides” the instruction inside a function that doesn't get called until the page is ready. When the page is “ready” (which happens once the browser has parsed the whole document and knows what's there), the function fires and your browser gets the instruction to add datepicker functionality to a field it now knows exists. You can put as many things inside that function as you want – all your code if you like. And you should learn more about JS because it's a wonderfully cool language.

The other solution is to put all of your JS code at the bottom of the page. This has been in vogue over the last couple years as a good way to increase the perceived performance of your website anyway, but I bet lots of people were still wrapping their jQuery code in document.ready() even though they didn't need to. I was, anyway. Since your JS isn't encountered by the browser until after the rest of the page has been parsed, you don't need to wrap it in that call if you put it at the bottom.
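As a rough sketch (the file names here are invented), that bottom-of-page layout looks like this:

```html
<html>
  <head>
    <!-- stylesheets here, but no blocking scripts -->
  </head>
  <body>
    <!-- all the page content, including the #event_date field -->
    <script src="jquery.js"></script>
    <script src="my-code.js"></script>
    <!-- my-code.js can call $('#event_date').datepicker() directly,
         because the field above has already been parsed -->
  </body>
</html>
```

By the time the browser reaches those script tags, every element above them already exists.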

#javascript

I've just recently started to discover what Dropbox is really good at. I've had one for at least a year and almost never used it. The only thing I'd ever really used it for was client assets like PSDs and the like. I just discovered the Dropbox secret weapon – the symlink.

A symlink (short for symbolic link), if you don't know, is basically like a pointer to a folder/directory. It's a really nifty way to help you organize your filesystem. Say you use iTunes and for some reason you like to dig around in your iTunes music folder a lot. Rather than going into the Finder and drilling down into the folder from there, you could just create a symlink from your Desktop into that folder. Then, without actually moving your iTunes folder and possibly screwing things up, you've just created a shortcut to get into that folder.
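In the shell that's one command: ln -s TARGET LINK_NAME. Here's a minimal sketch in a scratch directory so nothing real gets touched (paths are made up):

```shell
# Make a fake "iTunes" folder and a shortcut pointing at it.
tmp=$(mktemp -d)
mkdir -p "$tmp/Music/iTunes"
ln -s "$tmp/Music/iTunes" "$tmp/Desktop-shortcut"
# The link is just a pointer - readlink shows where it goes.
readlink "$tmp/Desktop-shortcut"
```

On a real Mac that would be something like ln -s ~/Music/iTunes ~/Desktop/iTunes – same idea, real paths. Deleting the link never touches the target.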

So say I've got a client. Let's also say I have two different computers that I regularly work from. I've started using some code to test the project with that I don't want to check into their repo because they don't even really need to know that I'm doing automated testing with a tool of my choosing. So I add the directory that contains the tests to my .gitignore file. But (!), then I switch computers and need access to those test files. A git pull doesn't do anything because those files were never checked in. This struck me as the perfect use case for my dormant Dropbox folder.
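For illustration, ignoring that directory is one line in the project's .gitignore (the directory name here is made up):

```
# .gitignore - keep the local-only test tooling out of the client's repo
/local-tests/
```

Git never sees the directory, which is exactly the problem the symlink solves on the second machine.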

I moved all the test files into a folder in my Dropbox. Then I created symlinks on both computers from that folder into the project directory. That way, they stay in sync across my two computers without needing to be checked in to the client's repo. Thank you Dropbox!

#devops #workflow #git

Hi, this post is wildly out of date. I tried to follow it to set up some tests within the last year and none of this stuff actually worked. The concepts are likely still valid, but don't expect to be able to copy much.

jg – Oct 2015


I went down to Drupaldelphia (the name should be self-explanatory) a couple weeks ago, mainly for a session called “Testing your site with CasperJS”. CasperJS is what's known as a “headless” WebKit testing framework – it runs on top of PhantomJS and is essentially a scriptable browser that can click around your site, fill out forms, test validation, etc. It's pretty much exactly what I've been looking for for one of my clients for a while now. I knew these tools were out there, but there's nothing like having one shown to you for an hour to really help you get your head around it.

So I came home and immediately went to work trying to figure out how to use it.

I have an assignment right now from a client to reorganize their website per instructions from their SEO vendor. It's about 10 or 12 little, tiny changes to about 50 pages. Most of the changes involve redoing their URLs, splitting content up, updating meta tags, page titles, and the like. The pattern for the updates is completely repetitive, but altogether we're talking about 500 or so changes to a Drupal site, which means it's going to be me sitting there typing all of this stuff into a web page edit form. I will screw up. I won't know it until somebody sends an email yelling at me. Enter Casper.

Casper is a JavaScript tool; you can also write your scripts in CoffeeScript.

The tricky thing about Casper is that it basically has two scopes. One is the testing environment, where you write most of your code; the other is the actual page environment, where the markup you want to test lives. Casper has a host of functions that all center around one called Casper.evaluate(). Code passed to evaluate() is executed in the browser window's context/scope.

I had a hell of a time trying to figure out how to test the meta tag for these pages. I knew I wanted to set up an array for the 25 different states.

A few utility functions to help out -

The base url – url = "http://example.com"

Create an instance of Casper – casper = require('casper').create()

I'll plop the code first and explain after.

The first line, casper.start(), obviously starts the test. each is a method that will look familiar if you work with any modern OO language: it iterates over an array, in this case states. self is the Casper instance, and state is the current member of the array we're testing.

Most of this is pretty standard Casper 101. The more interesting stuff is about halfway down.

@evaluate drops us into the actual page context. There we can operate on what's in the DOM, as well as on things that appear as a result of actions in the DOM. The contents of that function make a selection in one select box. The value of that select box determines what gets returned from a jQuery Ajax call, and the return of that Ajax call populates another select box. Ajax 101, but we're testing 25 pages here, automatically! You have to use plain JavaScript unless you can figure out how to get jQuery into this context. I haven't figured it out yet, because actually learning core JS is something I need to do more of.

After all that happens, we drop back into the DOM again to make sure that there's more than one option in the second select box.

Finally – Meta tags

Sorry. So I'm kinda new to JS, and function scope gave me a hard time on this one. The main problem I was having was getting a piece of text that could only be found in the DOM scope back into the Casper.each() scope so I could run some sort of assert() on it. Turns out it was really easy, after several days of passive Googling.

So this drops back into the DOM, pulls out the contents of the Description meta tag, and assigns it to the var descrip. Then all you have to do is a simple assertEquals().

Presto.

#testing #javascript

The short version -

https://github.com/JGrubb/laptop


The longer version -

Setting up your Macintosh for Rails development is actually sort of like Rails development itself – you have to at least kind of know about a lot of different things before you can really get anywhere. I'd say I'm new to Rails development even though I've been poking at it since 2.1, the first version I remember after I bought a Mac and started trying to teach myself to “program”. It's taken me until 3.2 to come back to Rails with enough of a rounded web development skill set to really be able to fly around and build the things that have been locked up in my head for the last 4 years. I've written about this before, as have many others, but Rails isn't exactly a framework for beginners. That is, it requires you to have at least some idea of what you're trying to do before it will let you sit down and do it. Many of us probably grew up programming in Basic, but some of us might have gotten sidetracked into other vocations besides software development, and the world has come quite a ways since then. So this post, or series maybe, will be an attempt to teach you not only what to install, but why.


Step 0 – the compiler

If you are on a Mac, you are lucky to already have a rather large lot of the tools that software developers use. But Apple hasn't given you everything. Some of the tools that will come in handy down the line, as you get further into this process, will have to be installed by you. This will toughen you up and help you get acquainted with a side of your computer that perhaps you didn't even know about. The world (and by that I mean the guy who made up Homebrew) has made life a lot easier for the Mac-using developer, but in order to access all of those goodies, you'll need to install a compiler. A compiler is basically a piece of software that builds other software – it turns source code into programs your computer can run.

One of the first pieces of software that you're going to build will be the Ruby language itself, but first things first.

Xcode

If you're on Lion or the latest Snow Leopard, you can open up the App Store and search for “Xcode”. Xcode is the official Apple development environment. With it you can build iPhone apps, applications for the Macintosh, and pretty much anything that isn't specifically meant to run on Windows. It's an enormous download – about 4GB recently.

After you download it, open it up, open the Preferences, and find the extra downloads section. What you're looking for is the “Command Line Tools” bundle; it's in one of the tabs toward the right. Download and install – should be fairly simple. Just to be safe, restart your computer. Once it's back up, open Terminal.app (get used to the terminal, you'll be there a lot) and type which gcc. If it doesn't spit anything back at you, you haven't installed the compiler. Figure it out and come back for the next step – the package manager.
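A rough sketch of that check, scriptable: which gcc (or the POSIX command -v) prints the compiler's path if one is on your PATH, and prints nothing otherwise.

```shell
# Check for a C compiler and report what we find.
if command -v gcc >/dev/null 2>&1; then
  msg="compiler found: $(command -v gcc)"
else
  msg="no compiler found - install the Command Line Tools and try again"
fi
echo "$msg"
```

On a fresh Mac without the Command Line Tools you'll land in the second branch; after installing them, the first.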

#rails #devops

Did Telluride last year. 'Twas a blissfully awesome return, particularly since a year earlier I thought my music career was over.

So they pick all us rockstars up at the Montrose airport. I waited around for a bit because the other dude getting picked up (who turned out to be Michael Daves) got in about 30 minutes after I did. He showed up, and I was getting ready to walk out of the airport when somebody called my name. I turned around to see an old neighbor of mine from Boone, who I hadn't seen since then. Michael Jordan. That's right. MJ was my neighbor in Boone.

Anyway, we're riding up, shooting the shit. Daves lives in Brooklyn, I live in Jersey. So he asks me, “where you from originally?” to which I replied “Atlanta”.

“Really? I'm from Atlanta. Which part?”

“Avondale. The next town over from Decatur.”

“Yeah, I'm from Decatur.”

So then it was where'd we go to school, and I went to Boone, so we knew a bunch of the same Atlanta-born Boone musicians who used to go to a pick session at the Freight Room. This was before my time as far as bluegrass was concerned. My last paycheck-cashing job, incidentally, was right around the corner 10 years later, at the Raging Burrito. It didn't exist back then, but it was pretty cool going back. He went to Decatur High, right around the corner from my parents' office. I bought my first several basses at Emile Barron in Decatur. Just found this old video of Bill – the man himself.

So after about 3 minutes or so he goes,

“So if you grew up in Avondale, you must've been on the Avondale swim team.”

Hell yes I was. Swim team and the Avondale pool was the best reason to live in Avondale, as far as I was concerned. God knows most of the other kids were fucking assholes just like their parents. I digress.

“Well, we must've swam against each other then because I was on the Decatur team.”

So anyway, that was cool. Then I caught his set with Thile the next day, and I've been a fan since. Definitely my favorite stuff from Thile.

#music #memories #bluegrass

It's a toy I've been wanting to build for a while. I stole the domain name from Book fair and square, and have been quietly honing the skills to actually build it.

What I'd ultimately like to have is a site that is basically a collection of all current, working bluegrass bands. When you come to the front page, you get a list of bluegrass shows happening in your area – it roughly guesses your location from your IP address. It has a list of shows in the database that it scrapes from somewhere. I used to think Facebook would come in handy for this, but it seems to have fallen somewhat out of favor as a place that bands keep updated with their shows. I tried Artist Data after that, but they have the most nebulous docs imaginable for their API, which I'm not even sure is open.

So that's the big trick. Everything else is basically – add a band, enter its Facebook and Twitter usernames. It then gets the bio and whatever other profile info from Facebook, whose API is somewhat open, and tweets from Twitter, whose API is way open. Yay Twitter.

It's basically an experiment in modern web scraping. My first. It's built on Sinatra, for those who care. The code lives here – https://github.com/JGrubb/sinatra-facebook

Anyway, the prototype – http://sobg.johnnygrubb.com/

It's easily breakable right now.

#music #bluegrass

If you haven't seen this yet, it's a clip taken from yesterday's press event for the new Microsoft tablet. Dude launches Internet Explorer to show us around the browsing experience, but the thing immediately locks up. It's not just slow but completely frozen, and after about 20 seconds when he realizes that he's not going to be able to demo anything else he has to go swap it out for another device to get on with the presentation. Awkward!

Anyway, there are so many things that we (the royal we) want to rip on here. But I started thinking about how many times a week I experience something similar on my iPad. I can't remember the last time it completely froze on me, but the UI isn't as snappy as it once felt.

I started thinking – is it because the touch UI is supposed to be simulating doing something tactilely – actually touching an icon and moving it with your finger – that this is so annoying? With a computer, you have this interface – the keyboard and the mouse or whatever – that still provides a mental buffer between you and the screen. If it gets unresponsive it's definitely annoying, but not anything like tapping an icon on the screen and getting the computer equivalent of “huh? what? me?”.

I know we've only started to scratch the surface (pun intended) of alternative ways of interacting with computing devices, but seriously – responsiveness of the UIs we already have has got to be a front burner issue. You really want to blow me away Apple? Make iOS 6 not turn my 3GS into an unusable brick, thereby forcing me to upgrade an otherwise perfectly good phone.

#ui

So long story short – I've been doing this thing a few years, I've learned a few tricks and it seems like every single new developer trick I learn about is already set up in the Mac for me.

I just learned about ApacheBench. If you don't know it, Google it. It lets you ding your webserver with 100 or 100,000 requests to see how it responds under pressure. It's really cool and really simple.

Please don't do this (more than once).

ab -n 100 -c 5 http://www.johnnygrubb.com/

That means: hit my server's front page 100 times total (-n 100), 5 concurrent connections at a time (-c 5). Running this from the same server that the site is hosted on skips the network, which means you really are just testing the response time of the application.

This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking www.johnnygrubb.com (be patient).....done


Server Software:        nginx/1.2.1
Server Hostname:        www.johnnygrubb.com
Server Port:            80

Document Path:          /
Document Length:        15489 bytes

Concurrency Level:      5
Time taken for tests:   0.454 seconds
Complete requests:      100
Failed requests:        0
Write errors:           0
Total transferred:      1623300 bytes
HTML transferred:       1548900 bytes
Requests per second:    220.29 [#/sec] (mean)
Time per request:       22.697 [ms] (mean)
Time per request:       4.539 [ms] (mean, across all concurrent requests)
Transfer rate:          3492.19 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.1      0       0
Processing:     8   21  31.5     12     209
Waiting:        8   21  31.5     12     208
Total:          8   21  31.5     12     209

Percentage of the requests served within a certain time (ms)
  50%     12
  66%     14
  75%     16
  80%     17
  90%     41
  95%     54
  98%    190
  99%    209
 100%    209 (longest request)
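Those summary lines cross-check against each other – Requests per second is just completed requests over total time, and the two “Time per request” figures differ by exactly the concurrency factor. A quick sketch with awk, using the numbers from the run above (ab's own figures come from unrounded timings, so the last digit differs slightly):

```shell
# 100 requests / 0.454 s -> requests per second (ab reports 220.29)
awk 'BEGIN { printf "req/sec:  %.1f\n", 100 / 0.454 }'
# 0.454 s / 100 requests -> mean time across all concurrent requests (ab: 4.539 ms)
awk 'BEGIN { printf "per req:  %.3f ms\n", 0.454 / 100 * 1000 }'
# times the concurrency level (5) -> mean time per request (ab: 22.697 ms)
awk 'BEGIN { printf "per conn: %.3f ms\n", 5 * 0.454 / 100 * 1000 }'
```

Handy for sanity-checking a run, or for comparing two runs without squinting at the full report.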

I'll let you decipher what all this means, but that's with no caching – DB calls and page builds on every request. Nginx is barely tuned; I have one worker process running. You should see it when I turn page caching on, but I'm leaving it off for now. Running this from here at home gives me

Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking www.johnnygrubb.com (be patient).....done


Server Software:        nginx/1.2.1
Server Hostname:        www.johnnygrubb.com
Server Port:            80

Document Path:          /
Document Length:        15489 bytes

Concurrency Level:      5
Time taken for tests:   70.142 seconds
Complete requests:      100
Failed requests:        0
Write errors:           0
Total transferred:      1629080 bytes
HTML transferred:       1553936 bytes
Requests per second:    1.43 [#/sec] (mean)
Time per request:       3507.124 [ms] (mean)
Time per request:       701.425 [ms] (mean, across all concurrent requests)
Transfer rate:          22.68 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       28 2923 1651.5   2810   12004
Processing:     0  535 658.8    320    2739
Waiting:        0    5  21.9      0     153
Total:        375 3457 1638.2   3309   12005

Percentage of the requests served within a certain time (ms)
  50%   3309
  66%   3836
  75%   3990
  80%   4326
  90%   4884
  95%   6387
  98%   9240
  99%  12005
 100%  12005 (longest request)

That's how long my internet connection takes to move 100 requests. Anyway, the point is – I had to install ab on my server, but I only knew about it in the first place because it was already installed on this Mac laptop that I bought in 2008.

I bought this 2GB laptop 4 years ago. It was the last big thing I ever will buy with a credit card. I just put a SSD in it over the weekend, and it's gone from being almost unbearably slow to the fastest damn computer I've ever used in my life. I haven't installed anything except the tools I need to do my job. No iTunes. Not importing my mail. I need Spotify to work, so I installed that. Honestly, unless Apple's new commitment to yearly major OS updates screws me or it just melts from use, I don't see myself buying another laptop for at least another few years.

By the way, I'm sure you could melt my server if you really wanted to. Please don't.

#random

So here I go again.

I've written at length on this blog over the years about Drupal. It's a great tool for getting things done right now. It's seen a huge uptake over the last couple years among government agencies and educational institutions, so being versed in Drupal development is a good career bet at the moment, meaning over the next couple years.

In the longer term though, I see it as pretty shaky and here's why.

Drupal's raison d'etre is that non-technical users (meaning people who aren't programmers) can build fairly complex sites without writing a lick of code. It has seen massive growth, mainly because most people who use the internet and/or need a website for their business are only just realizing that websites are actually computer applications with which users interact. Most people don't have CS degrees, but every business needs a website. Thus, they either turn to someone who knows how to build websites, or they hook up with GoDaddy and do a one-click install of WordPress.

However...

I think the world at large is rapidly getting used to the idea of writing code, and the proportion of people out there who are at least comfortable enough with that idea to hack together their own project is growing. And their idea might not be a good candidate for a Drupal site. The other thing about Drupal is that whatever your idea is, you can probably get pretty close without writing code, but chances are high it won't be exactly what you want without writing some code (or hiring someone to write it). At that point, you either settle for something that isn't what you want or you teach yourself to write some code. And after you get comfortable writing code, you start to wonder why you're dealing with all of this Drupal overhead that you didn't ask for in the first place. Some time after that you might find a language like Ruby, which makes infinitely more sense to me than PHP and looks a hell of a lot prettier to boot. And soon after that you might find it hard to get excited about Drupal work, because overriding other people's code is way less fun than writing your own, and that's the vast majority of working with Drupal code, at least from the backend.

updated Jan 2014 for Rails 4, see bottom


I just got this blog up and running yesterday, a marathon of pain but ultimately successful. So today I wanted to add some Markdown action so I didn't have to drop into TextMate to add <p> tags to everything all the time.

Redcarpet seemed to be the gem that everyone recommended, but it's recently seen a complete API overhaul that has rendered useless the vast majority of the de facto documentation out there. So, viewers of the Markdown Railscast, this one's for you.

It only involves a few changes. If you've landed here, it's likely you already know how to install the gem and you're stuck on that “no 'new' method” error. Instead of

<%= Redcarpet.new(@post.body).to_html %>

as mentioned in the Railscast, you'll need to do a little more setting up. This is how I did it.

# app/helpers/application_helper.rb

def dat_markdown(text)
  markdown = Redcarpet::Markdown.new(Redcarpet::Render::HTML,
    :autolink => true, :space_after_headers => true, :no_intra_emphasis => true)
  markdown.render(text).html_safe
end

The first parameter is which renderer you want to use (either HTML or XHTML – for what?), and everything after that is your options hash, which you can get from here. For readability you could also do -

# app/helpers/application_helper.rb

def dat_markdown(text)
  options = {
    :autolink => true,
    :space_after_headers => true,
    :no_intra_emphasis => true
  }
  markdown = Redcarpet::Markdown.new(Redcarpet::Render::HTML, options)
  markdown.render(text).html_safe
end

After that all you have to do is

<%= dat_markdown(@post.body) %>

and you're on your merry way. Enjoy.

January 2014 update

Rails 4 – check out Kramdown. This is how easy it is now.

# application_helper.rb

def markdown_filter(text)
  Kramdown::Document.new(text).to_html.html_safe
end

and <%= markdown_filter @post.body %>

Super bonus for the demographic reading this – syntax highlighting is included for free. See above. Docs for the options hash are here.

#rails