Ignored By Dinosaurs 🦕

Hey Dennis, I became a first time WNCW member during the Gospel Truth a few weeks ago. It really is a different feeling you get listening to a radio station that you're helping to support, but that's not my point...

I moved up to New Jersey for the Railroad Earth gig a very long time ago now, but I'm an App State alumnus, class of 2001. I've been trying to get back down to somewhere in the WNCW listening area ever since, but despite my best efforts, my family and I have gone and grown roots up here. I don't know if I'll ever make it back down there, and that makes me sad.

But lately, I've been listening to Goin Across the Mt every weekend on some NPR app on my phone and hearing your voice makes me a little less sad, so thank you.

#life

The beginning

I'll make the "what is Angular" part as brief as possible.

Angular is a giant JavaScript framework from our friends at Google. It's in a similar camp to Backbone, a framework Drupalists will likely have heard of since it's included in D8 core. Angular aims to make it as easy as possible to build a single-page CRUD app, and in my limited experience it succeeds.

I've never built anything with Backbone, but I have the Peepcode series on it, and I've been working heavily with Rails for a good while now. I'll hold off on gushing about Rails for the time being, but let's just say I really love that Rails doesn't write any markup for you. It's much simpler to find your way through the request/response path, and in general I find developing with Rails to be a vastly more pleasant experience than developing with Drupal.

Alas, I've been a professional Drupal dev for about 4 years now.

The use case

I work at a publishing company. We publish about 26 different pubs, many of them still in print. Within the last year we finished migrating all of our websites from various proprietary CMSs into a Drupal 7 multisite installation. The sites support desktop only at this point, as we are a small company and resources are definitely constrained. (This also has its upsides, which we'll get to.)

Last fall we rebuilt the company website from a static HTML site into Drupal 7. Since this was not a part of the multisite install, we were allowed to architect the site from scratch, with my boss doing all the site building and me doing all the theming. Mobile was a big priority this time, so I spent a good chunk of the development cycle implementing mobile-friendly behavior and presentation and generally getting a good feel for how to really do a responsive site. As an aside, for mobile/responsive site building and crafting responsive stylesheets, less is definitely more.

The end of this winter has brought a timetable for offering a more accommodating mobile experience on our "brand sites".

The dream

So my boss and his boss want a mobile site like "the Financial Times has". If you have an iOS device, go to app.ft.com, and if you're a Drupal developer, try to get your head around how you'd pull that off, but try to forget for a moment that this is a post/series about Angular. Pretend that you were going to pull it off in a more or less responsive fashion.

I spent a couple of days surveying the landscape for JS libraries that help out with touch behavior, and trying to figure out how to prefetch content so that when the user swipes from page to page or section to section, there wouldn't be a giant delay while the network does its thing transferring 1,000 lbs. of panels-driven markup. This was Monday and Tuesday of last week.

A pause for reflection

My enthusiasm for the project already waning, I sat back and thought about how we ought to be doing this thing. What they want is a mobile app, not a responsive website.

The way you implement a mobile app is not by loading a page of HTML markup with every user action; it's by loading the app once and then communicating with the server via JSON (or XML or whatever, if you wanna get pedantic). This kind of thing is supremely easy to do with Rails, mainly because Rails's deep embrace of REST six years ago totally got ahead of, perhaps even enabled, this whole JavaScript app revolution in which we find ourselves. Outputting a particular resource as JSON is as simple as a line of extra code that allows the controller to respond to the request in a different format.
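
That extra line is just another format block in the controller action. A minimal sketch, assuming a hypothetical ArticlesController and Article model (nothing from the projects mentioned above):

class ArticlesController < ApplicationController
  def show
    @article = Article.find(params[:id])
    respond_to do |format|
      format.html                             # renders show.html.erb as usual
      format.json { render json: @article }   # /articles/1.json returns the raw data
    end
  end
end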

Step 1, Services

I'd never played with Services, so I didn't know how easy it was to output a node as JSON. On Wednesday of last week, some time after lunch, I decided to find out. Turned out we already had Services installed for another use case that we just recently implemented (offloading our Feeds module aggregation to another open source project called Mule, but that's a whole other series of posts), so all I had to do was bumble through a tutorial on how to set up a Node endpoint.

In less than 5 minutes I had exactly what I needed set up in terms of dumping a node to JSON. I've been reading Lullabot's recent posts about their use of Angular, so the rest of this series will follow along as I learn how to use this giant beast of a JS framework to build the mobile app my boss's boss wants with a minimum of Drupal hackery.

The next post will pick up on Thursday morning (as in, 6 days ago), when I first downloaded Angular into the project.

#angular #javascript #drupal


  • Episode 01 – Sharam_s Mix.m4a
  • Episode 02 – Sultan_s Mix.m4a
  • Episode 03 – Dean Coleman_s Mix.m4a
  • Episode 04 – Cedric Gervais_ Mix.m4a
  • Episode 05 – Prompt_s Mix.m4a
  • Episode 06 – David Tort_s Mix.m4a
  • Episode 07 – Miss Nine_s Mix.m4a
  • Episode 08 – Ned Shepard_s Mix.m4a
  • Episode 09 – D-Unity_s Mix.m4a
  • Episode 10 – Pig & Dan_s Mix.m4a
  • Episode 11 – DJ Simi_s Mix.m4a
  • Episode 12 – 16 Bit Lolitas_ Mix.m4a
  • Episode 13 – Kamisshake_s Mix.mp3
  • Episode 14 – Cocodrills_ Mix.m4a
  • Episode 15 – The Flash Brothers_ Mix.m4a
  • Episode 16 – Andrew Bayer_s Mix.mp3
  • Episode 17 – Namito_s Mix.mp3
  • Episode 18 – Dimitri Andreas_ Mix.mp3
  • Episode 19 – Prompt_s Mix.mp3
  • Episode 20 – Boris M.D._s Mix.mp3
  • Episode 21 – Spektre_s Mix.mp3
  • Episode 22 – Alex Kenji_s Mix.mp3
  • Episode 23 – Schadenfreude_s Mix.mp3
  • Episode 25 – Timo Garcia_s Mix.mp3
  • Episode 26 – Anton Pieete_s Mix.mp3
  • Episode 27 – D-Formation_s Mix.mp3
  • Episode 29 – Pig & Dan_s Mix.m4a
  • Episode 30 – Solee_s Mix.mp3
  • Episode 35 – D-Formation_s Mix.mp3
  • Episode 36 – Matt Lange_s Mix.mp3
  • Episode 38 – Coll Selini_s Mix.mp3
  • Episode 39 – Solee_s Mix.mp3
  • Episode 40 – Scalambrin & Sgarro_s Mix.mp3
  • Episode 42 – Pig & Dan_s Mix.mp3
  • Episode 44 – Ahmet Sendil_s Mix.mp3
  • Episode 45 – Federico Epis_s Mix.mp3
  • Episode 46 – Sinisa Tamamovic_s Mix.mp3
  • Episode 47 – Koen Groeneveld_s Mix.mp3
  • Episode 48 – Pig & Dan_s Mix.mp3
  • Episode 49 – Quivver_s Mix.mp3
  • Episode 50 – Tom Middleton_s Mix.m4a
  • Episode 51 – Babicz_s Mix.mp3
  • Episode 52 – Yoshitoshi Best of 2011.m4a
  • Episode 53 – DBN_s Mix.mp3
  • Episode 55 – Tom Novy_s Mix.mp3
  • Episode 56 – Nicole Moudaber_s Mix.mp3
  • Episode 57 – Ahmet Sendil Mix.mp3
  • Episode 58 – Heren Mix.mp3
  • Episode 59 – Sinisa Tamamovic Mix.mp3
  • Episode 60 – Rene Kuppens Mix.mp3
  • Episode 61 – Scalambrin & Sgarro_s Mix.mp3
  • Episode 62 – Pirupa_s Mix.mp3
  • Episode 63 – Darwin & Backwall Mix.mp3
  • Episode 64 – Nicole Moudaber Mix.mp3
  • Episode 65 – Stefano Noferini Mix.mp3
  • Episode 66 – HEREN [Dirt Jugglerz] Guest Mix.mp3
  • Episode 67 – Sharam Live at Sensation (Denmark).mp3
  • Episode 68 – D-Unity Guest Mix.mp3
  • Episode 69 – Sharam Live at Kool Beach BPM Festival.mp3
  • Episode 70 – Sharam Live at BPM Festival (Part 0).mp3
  • Episode 71 – Philipp Straub Guest Mix.mp3
  • Episode 72 – El Mundo Guest Mix.mp3
  • Episode 73 – Sisko Electrofanatik Guest Mix.mp3
  • Episode 74 – Sharam Live at Space Ibiza Opening Fiesta 2013.mp3
  • Episode 75 – Sharam 4th of July Edition.mp3
  • Episode 76 – Mar-T Guest Mix.mp3

#music

This is currently in regard to the Atom editor that I dutifully filled out an "invite" request for. It could be about anything, though. I get this same feeling every time.

It takes me right back to high school gym class and waiting to get picked for a team. And waiting. And waiting. And God this is embarrassing, will somebody please fucking pick me already?


This is a piece of software that you install on your computer, not a SaaS thing that'll buckle under the weight of too many users. I thought Mailbox actually did a pretty cool thing by providing that countdown that gave you all the visibility you really needed into the process and removed that feeling from the waiting.

And waiting.

And waiting.

#random #memories

That unwanted thing that Google put in your top menu bar? That bothers you, too? Can't find where to turn it off? That's because they wanted to hide it from you.

  1. chrome://flags – put that in your address bar.
  2. Search for 'notifi', as in 'notifications' (which is what it's called right now).
  3. DISABLE

#ui

I've finally been getting around to using Redis for caching on this blog and other Rails projects I've got lying around, and I've finally gotten around to Homebrewing PHP 5.5 to use for my local setup at work. Jose's default PHP 5.5 brew doesn't install the new OPcache, so look around and make sure you install the version that does, because it screams.

I guess you could fairly say that caching has been on my mind a lot lately, as I also finally got around to installing and using Memcached here in development with our company's Drupal sites. That's also a separate Jose homebrew install, so look around for that, too. Anyway, after fiddling around with all the Memcached-related Drupal config that I'd previously commented out, I got it up and running. Make that hauling ass. It's a lot of fun.

So of course, the next step is to install some outdated script that gives me some visibility into the cache statistics. Here's where I get to the point.


Out of the box, the Homebrew installation of Memcached gives you 64M, which is also what you get if you install it from Aptitude on Ubuntu. I've finally done enough Ubuntu to get that most config files are stashed somewhere in /etc (/etc/memcached.conf), but where do these files live on the Mac?

This is the only downside to Homebrew: most of the brewers (and God bless y'all, not complaining) pick arbitrarily weird places to stash stuff and crazy commands to access the executables. mysql.server start? What distro does that come from? It just doesn't make a lot of sense, but I guess that's what keeps your mind sharp.

Anyway, I wanted to feed Memcached some more memory, and I couldn't find the config file for the life of me. Maybe you're here for the same reason. I dug pretty deep into Google before I found the answer. It's in ~/Library/LaunchAgents/homebrew.mxcl.memcached.plist.

Out of the box it looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>homebrew.mxcl.memcached</string>
  <key>KeepAlive</key>
  <true/>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/local/opt/memcached/bin/memcached</string>
    <string>-l</string>
    <string>localhost</string>
  </array>
  <key>RunAtLoad</key>
  <true/>
  <key>WorkingDirectory</key>
  <string>/usr/local</string>
</dict>
</plist>


So instead of pointing at a config file, you just pass all the config in the startup command's arguments. To bump it up, you pass in -m followed by however many megabytes you want to feed it, like this:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>homebrew.mxcl.memcached</string>
  <key>KeepAlive</key>
  <true/>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/local/opt/memcached/bin/memcached</string>
    <string>-l</string>
    <string>localhost</string>
    <string>-m</string>
    <string>256</string>
  </array>
  <key>RunAtLoad</key>
  <true/>
  <key>WorkingDirectory</key>
  <string>/usr/local</string>
</dict>
</plist>


#devops #generaldevelopment

I've been so busy lately that I forgot to wish you a happy birthday yesterday. I'm sorry. I thought today was yesterday, and I really have been looking forward to this day, er yesterday, for a while now.

We've both been through a hell of a lot in the last five years; I seriously can't believe it's been that long. You were born in the RRE tour bus between Stroudsburg and Easton on our way to play the GA Theater (the old one!). You only stayed in Blogger for all of a week before I whisked you away into WordPress. I've moved you a lot, and you've helped me keep a handle on where I've been over this vastly more difficult and private and rewarding section of my life. Thanks for sticking with me.

#memories

Because I have to look this up every. single. time.

pcre

apt-get install libpcre3 libpcre3-dev

zlib

apt-get install zlibc zlib1g zlib1g-dev

init script

#! /bin/sh

### BEGIN INIT INFO
# Provides: nginx
# Required-Start: $all
# Required-Stop: $all
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: starts the nginx web server
# Description: starts nginx using start-stop-daemon
### END INIT INFO

PATH=/usr/local/nginx/sbin:/sbin:/bin:/usr/sbin:/usr/bin
DAEMON=/usr/local/nginx/sbin/nginx
NAME=nginx
DESC=nginx

test -x $DAEMON || exit 0

# Include nginx defaults if available
if [ -f /etc/default/nginx ] ; then
 . /etc/default/nginx
fi

set -e

case "$1" in
  start)
  echo -n "Starting $DESC: "
  start-stop-daemon --start --quiet --pidfile /var/run/$NAME.pid \
  --exec $DAEMON -- $DAEMON_OPTS
  echo "$NAME."
  ;;
  stop)
  echo -n "Stopping $DESC: "
  start-stop-daemon --stop --quiet --pidfile /var/run/$NAME.pid \
  --exec $DAEMON
  echo "$NAME."
  ;;
  restart|force-reload)
  echo -n "Restarting $DESC: "
  start-stop-daemon --stop --quiet --pidfile \
  /var/run/$NAME.pid --exec $DAEMON
  sleep 1
  start-stop-daemon --start --quiet --pidfile \
  /var/run/$NAME.pid --exec $DAEMON -- $DAEMON_OPTS
  echo "$NAME."
  ;;
  reload)
  echo -n "Reloading $DESC configuration: "
  start-stop-daemon --stop --signal HUP --quiet --pidfile /var/run/$NAME.pid \
  --exec $DAEMON
  echo "$NAME."
  ;;
  *)
  N=/etc/init.d/$NAME
  echo "Usage: $N {start|stop|restart|reload|force-reload}" >&2
  exit 1
  ;;
esac

exit 0

Also, try running configure with

./configure --with-http_realip_module --with-http_gzip_static_module

#devops

Russian Doll caching

It's a well branded name for something that makes total sense after reading one blog post. The second half of this blog post will actually get you pretty much there.

If you're coming from a PHP/Drupal background like me, you might be surprised to find out that the database is not the bottleneck in Rails-land. Whereas a typical Drupal page might run anywhere from 50 to 1,000 database queries, you'll be hard-pressed to find a Rails app that comes anywhere near that number of DB calls. The Hobo companion, even on the front page (the heaviest page in terms of data so far), only runs about 8-10 queries.

What takes so long in Ruby land is view rendering.

I've witnessed this firsthand on my job's Redmine install. I'd have thought that what was taking so long was the database, but what actually takes 95% of the time on each page load is the view rendering. I guess the Basecamp folks noticed the same thing, so they went to work on speeding that up.

Fragment caching

There's always a blurb in every article about Rails caching that has to do with "fragment caching". Basically, you cache little bits of each page as they're rendered, and subsequent requests pull the rendered HTML from the cache to reassemble your page. It's simple, except that it's not. You've heard the old adage about the two hardest problems in computer science; cache invalidation is the PITA in this one. That basically means making sure (somehow) that you're not serving stale fragments when something has been updated in the meantime. I'm not sure what the old scheme was for taking care of this, but it wasn't friendly or intuitive.
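
For reference, plain old fragment caching with a hand-picked key looks roughly like this (a sketch of the general idea, not anything from this site; the key name and the expire_fragment call are just illustrations of the manual-invalidation chore):

<% cache "recent_posts_sidebar" do %>
  <%= render @recent_posts %>
<% end %>

# ...and then, any time a post changes, something (a sweeper, an observer,
# a model callback) has to remember to blow that fragment away by hand:
expire_fragment("recent_posts_sidebar")

That "something has to remember" part is exactly the chore the digest-based approach below gets rid of.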

Fragment caching ++

The solution they came up with makes a digest of the actual object being rendered part of the cache key.


All blog posts
--------------

<% cache [ "archive", @posts_by_year.first.first ] do %>
  <% @posts_by_year.each do |year| %>
    <% cache [ "archive", year[1].size ] do %>
      ### <%= year[0] %>


      <% year[1].each do |post| %>
        <% cache [ "archive", post ] do %>
        
        #### <%= link_to post.title, post_date_path_helper(post) %> - <%= post.created_at.strftime('%D') %>



        <% end %>
      <% end %>
    <% end %>
  <% end %>
<% end %>

This is the view that renders my blog's index page. I'm still getting the hang of how to name these cache fragments, but the idea is that you recursively wrap each rendered fragment on the page. If one item gets updated, the digest for that item changes, and its cache key changes with it. The next time the page is rendered, there's no value in the cache under the new key (because the key changed along with the object), so that fragment gets re-rendered and stored fresh, and every fragment that wraps it is invalidated as well. Those wrappers get re-rendered too, but rather than re-rendering everything on the page from scratch, the items that haven't changed are pulled straight from the cache. The vast majority of the page won't have changed and will still be alive and well in the cache, so the whole page ends up 95% cached and only the parts that changed make the whole trip through rendering again.
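
If it helps to see why the key changes, the record's portion of the key comes from its cache_key, which bakes in the updated_at timestamp (a rough sketch; the exact string format depends on your Rails version):

post = Post.find(1)
post.cache_key   # => something like "posts/1-20140301093000"
post.touch       # updated_at changes...
post.cache_key   # => ...so the key changes, and the old fragment is simply never looked up again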

It does still call your database to get the objects for digesting, but as we've already discussed, that's a small cost comparatively. Down the road there are solutions to that as well, once your optimizations get to that point.

Cache store

When I first started implementing caching on this site, I started off easy with Rack Cache. It's a simple HTTP cache that stores the whole page with a TTL on it. The TTL is set in the controller with something like expires_in 5.minutes, public: true. Once I started moving into the fragment caching business, I moved out of Rack Cache and into using memcached as the store. It's easy to set up. So easy I'll probably never write a post about it. It just works.
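
To back up a second: that expires_in call just lives in the controller action. Here's a sketch with hypothetical controller and action names:

class PostsController < ApplicationController
  def show
    @post = Post.find(params[:id])
    # Rack::Cache sits in front of the app and will serve this response
    # for 5 minutes before letting a request hit Rails again.
    expires_in 5.minutes, public: true
  end
end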

It did, however, seem to take up a fair share of memory, as you'd expect from an in-memory cache store. I already had Redis running for queuing background jobs via Sidekiq, though, so it occurred to me over dishes that I should give that a try. Turns out it's just as easy as memcached: swap out gem 'dalli' in your Gemfile for gem 'redis-rails' and change config.cache_store = :dalli_store to config.cache_store = :redis_store. It seriously doesn't get any easier. Redis is a lot like memcached, except that it has some more advanced features that I might never use. It also writes to disk every now and then, so if you restart your box, Redis can keep the cache warm rather than losing everything it's stored.
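
Concretely, the whole swap is those two one-line edits (the Redis URL here is my assumption, pointing at a default local instance):

# Gemfile
# gem 'dalli'
gem 'redis-rails'

# config/environments/production.rb
# config.cache_store = :dalli_store
config.cache_store = :redis_store, 'redis://localhost:6379/0/cache'

Restart the app and the same fragment cache calls land in Redis instead of memcached.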

#rails #devops

For a while now I've been leaving my work computer on 24/7 and remoting in to do all my work when I'm out in the world. I have the same public key on both machines (perhaps not the safest setup, true), so I was confused about why I needed to enter my password every time I logged in to my work computer. Not so confused that I took the time to figure it out until recently.

I had set permissions on the authorized_keys file to 400, but for some reason it needs to be 600.

doesn't work

-r-------- 1 grubb staff 410 Nov 27 10:13 authorized_keys

works

-rw------- 1 grubb staff 410 Nov 27 10:13 authorized_keys

#ssh #devops