Ignored By Dinosaurs 🦕

So years ago, in the early days of my HN acct, I bought a book called “Founders at Work” that happened to be authored by PG's wife Jessica Livingston. Great book with a bunch of interviews with various founder/hero types – Woz, Evan Williams, etc.

One of the interviews was with DHH – the Rails guy. I remember him talking about how he used to write PHP and it was just too hard, and that's how he found Ruby and started building stuff with it.

So, I've been going through these Laravel screencasts lately. They'd be very helpful if I were a total noob as far as most of these concepts go, but the architecture of Laravel is so heavily based on Rails that I pretty much know what's coming next.

The thing about it, though, is the amount of Laravel code you have to write to do the same thing you'd do in Rails. And it's just uglier. And it just seems like more work.

So yeah, that's my one-week assessment of Laravel.

#random

It's very simple. I work at a publishing company in northern New Jersey. I think we hire pretty smart, but the technical interview tends to be more of a conversation about technology than a series of quizzes on the whiteboard. We've been hiring for a few positions lately, and I've recently hit upon the perfect conversational tech interview question. It's separated a lot of wheat from a lot of chaff for me in the past few months.

“What text editor do you prefer?”


I've recently begun using PHPStorm after several years of bouncing between Vim for server-side languages and TextMate for JavaScript. I've had a license for PHPStorm for almost a year now, but never really got into it. In my younger days I guess I was kind of a hipster in that I thought I was too cool for an IDE. Rather, I wanted to be too cool for an IDE. IDEs were something you used if you worked with some bloated language like Java, or some compiled thing like Objective-C. If you used a slim, elegant, interpreted language like Ruby, you only needed a text editor. TextMate was the perfect little text editor for me for a number of years.

When I started here, we were still on Windows machines. Our entire Drupal stack was running on Ubuntu, and all of my background was in Linux and Mac, so it didn't make any sense to me to learn the tooling to work on Windows. So I downloaded VMWare Player and spun up my first Ubuntu 12.04 desktop VM. I taught myself to set up an intermediate LAMP/LEMP stack on this VM, and decided to start using Vim as my editor. I'd read about it plenty and most of the TM community seemed to be scattering to either Vim or Sublime. So Vim it was.

I became pretty proficient with Vim, but when I started playing with Angular about this time last year, I was really missing a tree view. One of the killer-est plugins for Vim is vim-rails, and the reason is actually because of the way that Rails is architected. It's very predictable if you go with the flow, and vim-rails lets you jump from a controller to its view to its model to its tests, and over to any other model, with a series of very simple and easy to learn shortcuts (the only kind I ever learn). I found myself really missing something like this with Angular, and just decided to go back to TextMate for Angular projects. TextMate has that fuzzy file search and a file tree explorer by default, so it really made getting around an Angular project a little less tiresome. I know, I should have taken the opportunity to get to know Vim better, but I had (as you can imagine) a shitload of work to do.

So I spent several weeks back in TextMate. I'm vastly more productive in an editor with the Twilight theme, by the way. Those weeks go by and it's back to Drupal and Vim. My Vim chops have weakened by this point, and I've gotten pretty used to having the Twilight theme, which apparently didn't exist for Vim. So I spent a day hacking an approximation together. It's also annoying to not have a file tree. I could install NerdTree, and tried to several times, but could never quite figure out whether I was learning a tool to get a job done or just passing some kind of test.

Enter my first project in Java a month ago.


We have some infrastructure here. It runs a lot of Java. It's a critical and, to me, most interesting piece of infrastructure, and we only have one Java guy here and he's pretty booked. Building web sites and writing JavaScript is fun and all, but underneath that is the layer of data in which my company lives and breathes. This piece of infrastructure does the plumbing for that layer, which is why it has become much more interesting to me as of late.

This requires, obviously, learning a little bit about Java – something I'd studiously avoided for several years now. Java was old, Java was derided, Java could simply not be written without an IDE. These are prejudices I no longer hold. I take it as a point of pride that I've risen to a level where I no longer define myself by my choice of technology. I've chosen enough technologies at this point to see the merits of each, what they're for, as well as what they shouldn't be for. This is called experience, I think.

So anyway, my experience with Java and Eclipse is rather interesting. “Jump to declaration”. Wow. A tree view not just of the files within my project, but of the methods within a file! Wow. It's at this point that I have to jump back into Drupal to do some plumbing between this and that. I fire up PHPStorm, thinking hey, this IDE thing is kinda cool.

Lo and behold, many of the shortcuts in Eclipse are exactly the same in PHPStorm. The menu and layout are very similar. I'm being productive in a tool which had previously only been sort of useful under some circumstances. I haven't even gotten to the debugger yet, on account of plumbing it into Vagrant and back being a pain in the ass. Just being able to reformat files and jump to function definitions is pretty nifty.

Then I have to jump back into another Angular project. Hey, I think. PHPStorm is basically also WebStorm, and therefore should be aware of Angular. It is! It indexes my entire project and within 30 minutes I'm being way more productive in PHPStorm on an Angular project than I would've been in TextMate. It's effing amazing! So I'm pretty much sold on PHPStorm, and then I notice that it doesn't support the little Sinatra/Sidekiq endpoint that goes along with this Angular project. I start thinking about IntelliJ a little bit, as I'm really starting to see the gains from being able to use one tool across many different technologies.


My point here is that I can go on for a ridiculously long time about my text editor and why I use it. There's a story, and it takes a winding path through what technologies I've used, why I used them, and why I moved on or stayed.

I don't care what editor you use. Ok, I kinda do actually, but I care much more that you have some opinions on the matter. If you don't, I'm not sure you really write code, and that's kind of a problem for the types of technical positions we're filling. These positions require a curious mind, one that likes to check out new things, and one that's ok with using old things if that's the right tool for the job.

If you're interviewing at ABM and you come across this blog post while doing some research about the team and the company, feel free to mention that. Taking the initiative to research your potential teammates speaks much more to the kind of curious mind you possess than a long-winded discussion about your text editor ;).

#general-development

Prelude

Fastly is a CDN (content delivery network). A CDN makes your site faster by acting as a caching layer between the world wide web and your webserver. It does this by having a globally distributed network of servers and by using some DNS trickery to make sure that when someone puts in the address of your website, they're actually requesting that page from the nearest server in that network.

$ host www.ecnmag.com
www.ecnmag.com is an alias for global.prod.fastly.net.
global.prod.fastly.net is an alias for global-ssl.fastly.net.
global-ssl.fastly.net is an alias for fallback.global-ssl.fastly.net.
fallback.global-ssl.fastly.net has address 23.235.39.184
fallback.global-ssl.fastly.net has address 199.27.76.185

If they don't have a copy of the requested page, they'll get it from your webserver and save it for the next time. The next time, they serve the “cached version”, which is way faster for your users and lightens the load on your webserver (since the request never even makes it there). Excellent writeup here.

There are many different CDN vendors out there – Akamai being the oldest and most expensive that you may have heard of. A newer entrant into the market is a company called Fastly. Fastly has decided to use Varnish as the core of their system. They have some heavyweight Varnish talent on the team and have added a few extremely cool features to “vanilla” Varnish that I'll get to in a moment.

Fastly's being built on top of Varnish is cool mainly because every CDN out there has some sort of configuration language, and to throw your hat in with any of them is also to throw your hat in with their particular configuration language. Varnish has a well-known config format called VCL (Varnish Configuration Language) which, on top of having plenty of documentation and users out there already, is portable to other installations of Varnish, so learning it is time well spent. This is the killer Fastly feature that first drew me in.


(you can skip this – backstory, not technical)

Prior to using the CDN as our front line to the rest of the internet, we'd been on a traditional “n-tier” web setup. This meant that any request to one of our sites from anywhere in the world would have to travel to a single point – our load balancer in Ashburn, Virginia in this case – and then travel all the way back to wherever. In addition to this obvious global networking performance suck, we use a managed hosting vendor, so they actually own and control our load balancer. Any changes that we'd want made to our VCL – the front line of defense against the WWW – would have to go through a support-ticket-and-review process. This was a bottleneck in the event of a DDoS, or for any change to our caching setup for any reason.

Taking control of our caching front line was a necessary step. This became the second killer Fastly feature once we started piloting a few of our sites on Fastly.


The killer-est killer feature of all has only just become clear to me. Fastly makes use of a feature called “Surrogate Keys” to improve the typical time-based expiration strategy that we'd been using for years now. They have a wonderful pair of blog posts on the topic here and here.

Varnish is basically a big, fast key-value store. How keys are generated and subsequently looked up, as well as how their values are stored, are all subject to alteration by VCL, so you have a wonderful amount of control over the default methodology. By default it's essentially URLs as keys and server responses as values, and this will get you pretty far down the line, but you bump into the limits as soon as you start pondering that each response has but one key that references it. Conversely, each key references only one object. By default...
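
To make that concrete, here's a conceptual sketch of the default model as pseudo-PHP (fetch_from_backend() is a made-up stand-in for the trip to your origin). Note the strict one-to-one relationship between key and stored response:

// conceptual sketch only - NOT Varnish internals, just the default model
$cache = array();

function handle_request($request) {
  global $cache;
  $key = $request->host . $request->url; // default hash: host + URL
  if (isset($cache[$key])) {
    return $cache[$key]; // hit: serve the stored response
  }
  $response = fetch_from_backend($request); // miss: go to the origin
  $cache[$key] = $response; // one key, one object
  return $response;
}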

Real life example – I work for a publishing company. Our websites are not super complicated IA-wise. We have pieces of content and listing pages of that content, organized mostly by some sort of topic. A piece of content can have any number of topics attached to it, and that piece of content (henceforth referred to as a “node”) should show up on the listing pages for any one of those terms.

Out of the box, Fastly/Drupal works really well for individual nodes. Drupal has a module for Fastly that communicates with their API to purge content when it's updated, so if an editor changes something on the node they won't have to wait at all for their changes to be reflected to unauthenticated users. The same is not true for listing pages. Since these pages are collections of content and have no deeper awareness of the individual members of the collection, they function on a typical time-based expiration strategy.

My strategy for the months since we launched this across all of our sites has been to set TTLs (time to live, basically the length of time something will be cached) as high as I can until an editor complains that content isn't showing up where they want it to. I recently had an editor start riding me about this, so I lowered the TTLs to values so low that I knew we weren't getting much benefit out of even having caching in the first place. I'd known about this Surrogate Key feature and decided to have a deeper look.


The ideal caching scenario would not only purge the node when editors update it, but also purge listing pages when a piece of content is published that should show up on them. This is where Surrogate Keys come into play. The Surrogate-Key HTTP header is a “space delimited collection of cache keys that pertain to an object in the cache”. If a purge request is sent to Fastly's API to purge “test-key”, anything with “test-key” in its Surrogate-Key header falls out of the cache and gets regenerated.
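
Over HTTP, that purge is just a call to Fastly's purge-by-surrogate-key endpoint; something along these lines, with placeholder service ID and API key:

$ curl -X POST \
    -H "Fastly-Key: YOUR_API_KEY" \
    https://api.fastly.com/service/YOUR_SERVICE_ID/purge/test-key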

In essence, what this means is that you can associate an arbitrary key with more than one object in the cache. You could tag anything on the route “api/mobile” with a surrogate key of “mobile”, and when you want to purge your mobile endpoints, purge them all with one call rather than having to loop through every endpoint individually. On those topic listing pages you could use the topic or topic ID as a surrogate key, and then any time a piece of content with that topic is added or updated, you can send a purge for that topic ID and have that listing page dropped. And only that listing page dropped.

// the basic algorithm, NOT functional Drupal code

if ($listing_page->type == "topic") {
  // Topics can have children, so fetch them.
  // Pretend this returns a perfect array of topic IDs.
  $topics = get_term_children($listing_page->topic);
  // Push the parent topic into the array as well.
  $topics[] = $listing_page->topic;

  // Space-delimit the IDs, per the Surrogate-Key header format.
  $key = implode(" ", $topics);
  add_http_header('Surrogate-Key', $key);
}

This results in a topic listing page getting a header like this -

# the parent topic ID as well as any child topic IDs
Surrogate-Key: 3979 3980 3779

Then, upon the update or creation of a node you do something like this -

// this would be something like hook_node_insert() if you're a Drupalist
function example_node_save_subscriber($node) {
  $fastly = new Fastly(FASTLY_SERVICE_ID, API_KEY);
  foreach ($node->topics as $topic_id) {
    $fastly->purgeKey($topic_id);
  }
}

This fires off a Fastly API call for each topic on that node, causing anything with that surrogate key (aka topic ID) to be purged; that means any topic listing page carrying that topic ID. Obviously, if there are 500 topics on a piece of content you'll probably want to move this to a background job so you don't kill something, but you get the idea.
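
If you do need to push it into the background, a minimal sketch using Drupal 7's Queue API might look like this (the queue name and worker function are my own invention):

// instead of purging inline, queue each topic ID...
$queue = DrupalQueue::get('fastly_purge_keys');
foreach ($node->topics as $topic_id) {
  $queue->createItem($topic_id);
}

// ...and a cron-driven worker, declared via hook_cron_queue_info(),
// makes the actual API calls later.
function example_fastly_purge_worker($topic_id) {
  $fastly = new Fastly(FASTLY_SERVICE_ID, API_KEY);
  $fastly->purgeKey($topic_id);
}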


This is sort of like chasing the holy grail of caching. In theory it means you turn the caching TTLs up to the maximum and only expire something when it actually needs to be expired, based on user action and intent, not on some arbitrary time that I decided on out of my lust for having everything as fast as possible. The marvelous side effect of this is that (again, in theory) everything should load even faster, since there's almost no superfluous generation of pages at all.

I just released the code on Friday morning, and the editor who was previously riding me about this topic had only positive feedback for me, meaning – so far, so good.


FYI – the holy grail actually looks more like this -

#braindump #varnish #drupal #devops

And here we are, but first – a story...


Yesterday was my 4-year-old's 5th birthday. Michelle and I go back and forth about who's going to take video and who's going to take stills for singing “Happy Birthday”, and we decide I'll take the video.

We get up to the part where she's lighting the candle and my phone stops recording – “Your phone is full, please manage storage under blah blah blah...”. This is an iPhone 6+; I bought it two months ago. Obviously I bought the 16GB model, but this was never a problem with my 5c, so why would it be a problem with the 6+?

It has been a problem with my 6+. I deleted all my music and podcasts off of the phone less than a week ago, and as you can tell from the screenshot, whatever is on here does not add up to 11-something GB of stuff. Not to mention, why is there only 11-something GB of available storage in the first place? The OS is taking up over a quarter of the disk space??

So I'm googling last night, trying to figure out what's taking up the space on my phone. Obviously something is cached, it would seem to me, but I have no control over what it is or how to free up that space. Some random post advises me to back up and restore the phone, which seems really janky to me, but the poster says this will wipe the cached stuff and leave only “your stuff”. I decide to try it, even though I'm respecting myself a little less at this point (I'm a developer for pete's sake, not some non-technical moron who has to search the internet for how to free up storage on his phone. Or am I??).

I'm told by iTunes that I don't have enough free space on the phone to restore from the backup I've just made. I'm wasting my life being frustrated at a phone at this point, rather than spending time with my wife on our son's birthday.

I have a moment and I remember why I jumped to Apple gear in the first place...


After years of loyalty as a Windows user, after years of hating Justin Long's smug pre-hipster persona as the cool kid in the commercials opposite John Hodgman, I bought a new HP laptop with Windows Vista on it. I felt betrayed. It was such a poor, clunky experience that I immediately regretted buying the laptop. Two weeks later I bought an iPhone, as chronicled in part 1 of this series. It was, I can honestly say, life changing. It just worked. It didn't nag me, it didn't crash, it didn't hide useful features behind 3 submenus, it just worked. It catalyzed the entire career path I've been on for the 7 years since I bought it.

You can guess where this is going. My iPhone 6+ is no longer a device that “just works”. It does the exact opposite and costs me video of my 5 year old's birthday. I guess Marco Arment wrote about this a month ago, but I'm officially done paying this much money to be frustrated by technology.


I'll go ahead and say it – if Jobs were still alive, he would've fired the motherfucker who even suggested shipping their top-of-the-line phone with only 16GB of storage, not only because it makes for an obviously crappy user experience but because he had an apparently much clearer long view – that happy customers keep coming back and unhappy customers flee at the first opportunity.

My first opportunity is in 10 months when my T-Mobile Jump plan comes back around. I'll probably go retro, since I have less than a home screen's worth of apps installed on this thing anyway. The only ones I actually really use are Email, Twitter, and Reddit, and arguably all of those on a phone are just ways to kill time when I could be enjoying the life around me.

Life moves on.

#life #random #iphone

GUIs change, but the command line is eternal. Memorize these 5 commands and a long and happy life awaits.


$ cd
Change directory. This is how you move around the file system.

$ ls
List, or “tell me what's in this directory”. This has a huge list of useful modifying flags, such as -l (long: tell me the size, ownership, and permissions of each thing in here too) or -a (all: show me hidden dotfiles as well).

$ mv
Move. This is how you move something from here to there. It's also how you rename something, even if it's already in the right place.

$ cp
Copy. Add -r to make it recursive, else it won't copy directories because it won't descend into them.

$ pwd
Print working directory. This is how you get it to tell you where you are in the file system.
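
Put together, a typical session looks something like this (paths made up):

$ pwd
/home/you
$ ls -la
$ cd projects
$ cp -r my-app my-app-backup
$ mv my-app-backup/notes.txt my-app-backup/old-notes.txt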

#bareminimum #devops

Notes from Mike Ryan, the Migrate guy, at the bottom. Make sure you read them before copying any of this code.


The Migrate module has some of the best documentation I've ever seen in DrupalLand, but there are still just a couple of things that I've figured out over the last month that I wish had been clearly explained to me up front. This is an attempt to explain those things to myself.

Clearly, you're going to be writing code to perform this migration.

  • There is a module – migrate_d2d – that is specifically for stuff like this. It's aware of Drupal content types' basic schema, so it'll save you a LOT of SQL writing to join each node's fields on to the base table.
  • You'll write a class for each migration that you need to perform.
  • You'll need to write a separate class for each content type that you have.
  • You'll need to write classes for roles, users, files, and each taxonomy vocabulary that you have on the site.
  • You'll tell Drupal about these migrations by writing an implementation of hook_flush_caches() that'll “register” the migrations and make them show up in the GUI and in drush. This looks basically like this —

function abm_migrate_flush_caches() {

  $common_arguments = array(
    'source_connection' => 'dfi', // the "source" database connection;
    // there's a syntax for this that basically mirrors the way that
    // you set up database connections in settings.php
    'source_version' => 7, // a tip to the migrate_d2d module as to which
    // version of Drupal you're migrating from
  );

  $arguments = array_merge($common_arguments, array(
    'machine_name' => 'AbmRole',
    'role_mappings' => array(
      'administrator' => 'administrator',
      'editorial staff' => 'editorial staff',
      'pre-authorized' => 'pre-authorized',
      'directory listee' => 'directory listee',
      'directory listee - preapproval' => 'directory listee - preapproval',
      'directory manager' => 'directory manager',
      'web production' => 'web production',
    ),
  ));

  Migration::registerMigration('AbmRoleMigration', $arguments['machine_name'],
    $arguments);

  $arguments = array_merge($common_arguments, array(
    'machine_name' => 'AbmUser',
    'role_migration' => 'AbmRole', // forms the relationship between this
    // user migration and the role migration that already happened.
    // This only works for a few specific, simple cases.
    // Relating nodes with taxonomy items, for example, happens elsewhere
    // (in the actual migration class...)
  ));

  Migration::registerMigration('AbmUserMigration', $arguments['machine_name'],
    $arguments);

  $arguments = array_merge($common_arguments, array(
    'machine_name' => 'AbmProdCats', // when you run "drush ms"
    // (migrate-status), this is the name that shows up
    'source_vocabulary' => 'product_categories', // yay machine names
    'destination_vocabulary' => 'product_categories',
  ));

  Migration::registerMigration('AbmProdCatsMigration', $arguments['machine_name'],
    $arguments);
}

  • Registering the migration also creates a set of database tables for each migration, the most interesting of which is the migrate_map_xxx, where “xxx” is the machine_name of your migration, downcased.

mysql> show tables like 'migrate%';
+------------------------------------+
| Tables_in_for (migrate%)           |
+------------------------------------+
| migrate_field_mapping              |
| migrate_group                      |
| migrate_log                        |
| migrate_map_abmadterms             |
| migrate_map_abmappnotes            |
| migrate_map_abmarticle             |
| migrate_map_abmawardwinners        |
| migrate_map_abmblogs               |
| migrate_map_abmcompanyprofiles     |
| migrate_map_abmdigitaleditions     |
| migrate_map_abmevents              |
| migrate_map_abmfiles               |
| migrate_map_abmnews                |
| migrate_map_abmpodcasts            |
| migrate_map_abmprodcats            |
| migrate_map_abmproductreleases     |
| migrate_map_abmproducts            |
| migrate_map_abmrole                |
| migrate_map_abmtopics              |
| migrate_map_abmuser                |
| migrate_map_abmvideos              |
| migrate_map_abmwebinars            |
| migrate_map_abmwhitepapers         |
| migrate_message_abmadterms         |
| migrate_message_abmappnotes        |
| migrate_message_abmarticle         |
| migrate_message_abmawardwinners    |
| migrate_message_abmblogs           |
| migrate_message_abmcompanyprofiles |
| migrate_message_abmdigitaleditions |
| migrate_message_abmevents          |
| migrate_message_abmfiles           |
| migrate_message_abmnews            |
| migrate_message_abmpodcasts        |
| migrate_message_abmprodcats        |
| migrate_message_abmproductreleases |
| migrate_message_abmproducts        |
| migrate_message_abmrole            |
| migrate_message_abmtopics          |
| migrate_message_abmuser            |
| migrate_message_abmvideos          |
| migrate_message_abmwebinars        |
| migrate_message_abmwhitepapers     |
| migrate_status                     |
+------------------------------------+
44 rows in set (0.00 sec)

mysql> describe migrate_map_abmblogs;
+-----------------+---------------------+------+-----+---------+-------+
| Field           | Type                | Null | Key | Default | Extra |
+-----------------+---------------------+------+-----+---------+-------+
| sourceid1       | int(10) unsigned    | NO   | PRI | NULL    |       |
| destid1         | int(10) unsigned    | YES  |     | NULL    |       |
| needs_update    | tinyint(3) unsigned | NO   |     | 0       |       |
| rollback_action | tinyint(3) unsigned | NO   |     | 0       |       |
| last_imported   | int(10) unsigned    | NO   |     | 0       |       |
| hash            | varchar(32)         | YES  |     | NULL    |       |
+-----------------+---------------------+------+-----+---------+-------+
6 rows in set (0.00 sec)

mysql> describe migrate_message_abmblogs;
+-----------+------------------+------+-----+---------+----------------+
| Field     | Type             | Null | Key | Default | Extra          |
+-----------+------------------+------+-----+---------+----------------+
| msgid     | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
| sourceid1 | int(10) unsigned | NO   | MUL | NULL    |                |
| level     | int(10) unsigned | NO   |     | 1       |                |
| message   | mediumtext       | NO   |     | NULL    |                |
+-----------+------------------+------+-----+---------+----------------+
4 rows in set (0.00 sec)

Note: since Migrate is an OOP thing, you can write a parent class for a generic “Node” migration that all of the specific content types can inherit from. Most of the node migration classes that I wrote look like this, because most of the fields are set up in the parent class —


class AbmBlogsMigration extends AbmNodeMigration {
  public function __construct(array $arguments) {
    parent::__construct($arguments);
    // field_pipes_flag is the only blogs-specific field on this content type;
    // all of the fields that are common among all content types are mapped in
    // the parent class - AbmNodeMigration - in exactly the same manner.
    // Except for some that aren't...
    $this->addSimpleMappings(array('field_pipes_flag'));
  }
}

Most fields in a Drupal-to-Drupal migration will come over easily with Migration::addSimpleMappings(), but some require a little more coddling. These are often fields that represent a relationship to another entity – taxonomy term references, other node references, etc. These will require something like this —



abstract class AbmNodeMigration extends DrupalNode7Migration {
  public function __construct(array $arguments) {
    parent::__construct($arguments);

    $this->addFieldMapping('field_taxonomy', 'field_taxonomy') // sets up the mapping
      ->sourceMigration('AbmTopics'); // tells it which migration to reference.
    // This makes it look to the migrate_map_xxx table to pull out the NEW
    // destination primary keys. Without this bit, it'll try to bring over the
    // related entity IDs verbatim, which will either fail because there is no
    // taxonomy term/node/whatever with that ID, or it'll just relate it to the
    // wrong entity. Either way, bad.

    $this->addFieldMapping('field_taxonomy:source_type') // I wish I could tell you more
      ->defaultValue('tid'); // about what this part means, but I just don't know yet.
    // All I know is that the previous lines are not enough to make it work.
  }
}

  • Speaking of that, prior to finally putting the pieces together about how related entities maintain that relationship, I did lots of clever coding to maintain the relationships between imported entities myself. It's not that complicated, but I was manually looking into the migrate_map_xxx tables to pull destination IDs out. This is obviously wrong and felt wrong when I was doing it, but it didn't all click until I chased down vague error messages about “field validation errors” in later migrations. Migrate doesn't tell you which fields are in error; it just throws an Exception on these nodes and doesn't save them. I finally ended up dumping $errors in field_attach_validate() and saw that it was always a related-entity field that was erroring. It was easy to figure out after that, but it took me several weeks of getting my head around the rest of it all to be able to get to that very simple point.
  • I missed all of that for so long because the user migration has this tidy little line about 'role_migration' that establishes the relationship, so I thought it would/should be something along those lines. I spent a long time in the module code tracing down default arguments and the like before finally just doing it the hard way. This was wrong.
  • Oh, by the way, USE THE LATEST VERSION OF ALL OF THESE MODULES. Migrate finally released 2.6 (years in the making, apparently) a couple of weeks ago, as I was in the middle of all this. I'd been using the previous stable release, which is of course missing years of work; 2.6 solves almost all of these problems out of the box.
  • Here's another little gem regarding files, and making those relationships tie out —

// In AbmNodeMigration::__construct()

 $this->addFieldMapping('field_image', 'field_image') // sets up the mapping
   ->sourceMigration('AbmFiles');
 $this->addFieldMapping('field_image:file_class') // but for some reason it doesn't
   ->defaultValue('MigrateFileFid'); // work without this part. The answer is
 // here - https://www.drupal.org/node/1540106 - but I haven't had time to
 // fully absorb that part yet. I glossed over this part of the documentation
 // a dozen times because file_class seems like it'd be unrelated to what you're
 // trying to do - relate nodes and files. file_class sounds like something
 // CSS related. Needless to say, it is not.

Beer shot -


A review from the guy himself —

The blog post looks like a good intro to migrate_d2d 2.0, but I'm afraid now it's a bit dated (as you point out towards the bottom).

hook_flush_caches() hasn't been considered a good practice for a while (defining migrations in hook_migrate_api() and using drush migrate-register is preferred – https://www.drupal.org/node/1824884), but I see that migrate_d2d_example still does it – I'll need to update that before the imminent new release so people aren't misled.

Setting the source_type to 'tid' is covered at https://www.drupal.org/node/1224042 – by default the incoming value for a term field is assumed to be the term name; when you're making use of a separate term migration via sourceMigration, the incoming value is a tid and you need to set the source_type so the field handler knows what to expect.

The file_class is similar – normally the value coming in to the file field is assumed to be a URL, but when using a separate file migration and referencing it via sourceMigration it's a fid. The “class” in file_class is a PHP class – the name of any class implementing MigrateFileInterface can be used here.

Thanks Mike!

#drupal

This shit is all very scary and confusing to you right now. You're about to walk the plank into the unknown. This will be the last “principled” career decision that you make up until the point that I'm writing you this, and you will learn a hell of a lot from it – about yourself, about your marriage, and about life in general. Shit's about to get really difficult for you, in a way that you sense now, and that's why you were up unable to sleep at 3am last night.

There will be no gentle landing, and that hail mary pass you're hoping to complete with that idea of yours isn't going to connect, at least not as neatly as you need it to. And certainly not as quickly as you need it to.

I'm writing you now to let you know that it's going to be alright. You have this cocky hunch that the move you're making is going to be the best move you ever made, and it will be, mostly in ways that you can't really get yet. But you're going to pay for it, too.

The investment that you're making now and over the coming years is going to come back in a big way. Don't let knowing this make you work any less hard though – it's only under this pressure that you get where you need to go. It's only by putting more time into something that's more difficult than anything you've tried to do before that you get where you need to go.

Now go.

#life

I started this blog almost 6 years ago. Looking back, it was basically chronicling the beginning of the darkest years of my life. It was also, however, chronicling the beginning of the most creative years I've ever had. A lot of shit went down for me 5 years ago, and this being a nice round-number-type anniversary, I've been going back over these old posts a lot lately, especially the ones where I really took the time to lay it out exactly right. The creative fire is one I wrote almost exactly 5 years ago, and it startles me now how much I knew intrinsically about the journey that lay ahead of me. It took me a couple hours and several cups of coffee in a Boulder coffee shop to transcribe that passage, by the way, prior to one of my last CO shows. You were there, IIRC.

I truly thank God for that blog post I read, wherever it was, that said something to the effect of “start a blog”.

So, my man – start a blog. I firmly believe everyone should do it. Whether it's a thing you keep doing or not doesn't matter. You are going through a rough period right now. I had no idea how close the two of you were, and my heart hurts for you reading what you wrote on *someone else's blog*. You are one of my most intelligent musician friends (a big part of the reason I like you so much, even though we rarely get together), and I had no idea you were so articulate in print. Not that I'm surprised...

Articulate your grief more, friend. Write it down. It's not only therapeutic to analyze how you're feeling and why, you will be profoundly glad when this period is behind you and you can look back and truly remember exactly how you felt now. Because you wrote it down.

I love you, brother. JG

#life

Hoping to land a slot in the SERPs with that title, I'm here today to demonstrate the difference between good 3rd party JavaScript and bad 3rd party JavaScript.

First – the good, as demonstrated by Google Analytics. This is GA's setup and tracking code, as stepped through here.

(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');

ga('create', 'UA-XXXX-Y', 'auto');
ga('send', 'pageview');

The ga() function is what does the magic, but in essence all that ga() does is push its arguments into an array. All of that functionality is contained within those 4 lines. Those 4 lines also create a script tag that loads the rest of the GA lib, which is where the functionality to rifle through that array and report its contents back to GA's servers lives.

If that script tag fails to return, nothing happens. More importantly, nothing bad happens. The array that ga() pushes into is a vanilla JS array; it will always be there. If the GA payload doesn't come back, no worries. Your app or website will never know the difference.
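
The core of the pattern is tiny. Stripped of the GA specifics, it's basically this (the names here are mine, purely for illustration):

// a stub that queues calls until (if ever) the real library arrives
window.track = window.track || function () {
  (window.track.q = window.track.q || []).push(arguments);
};

track('pageview'); // harmlessly queued, even if the real lib never loads
// when the real lib loads, it drains window.track.q and replaces the stub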

This is how knowledgeable developers build 3rd party libs that don't suck.

In contrast, here's what Adobe's Omniture does.

// A blob of obfuscated js hundreds and hundreds of
// lines long here. The essence of worst practices.
// All of this results in (hopefully) their lib coming
// down via a generated script tag as well. This script
// tag creates a global object - s (so artfully and helpfully
// named) - that has a giant range of methods to deal with 
// the functionality of their platform - pageviews, events, etc

Later, you track events and the like via a function call like this -

s.tl(this, 'o', 'blah'); // event
s.t(); // pageview

What happens though, if their huge lib doesn't come back for some reason? Like, maybe a corporate firewall doesn't feel like letting Omniture code track its users? This would result in there being no global s object on which to call these methods. What happens then?

Your website blows up, that's what. Exceptions are thrown, and god help you if you're running a single page app, because it's toast now.

The only solution I've been able to come up with is to wrap everything in try {} catch {}, which is hideous and wasteful, and prone to failure the moment you forget to wrap any single piece of Omniture code with it.
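
The least-bad version of that I've found is to funnel every call through a single guard, so the try/catch lives in exactly one place (safeTrack is my name, not Adobe's):

// swallow analytics failures in one place instead of try/catch everywhere
function safeTrack(fn) {
  try {
    if (window.s) fn(window.s);
  } catch (e) {
    // analytics must never take the app down with it
  }
}

safeTrack(function (s) { s.t(); }); // pageview
// events go through the same guard: safeTrack(function (s) { s.tl(...); });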

That's why I have the utmost respect for the engineering of GA, and the utmost disdain for Omniture. Oh, and Omniture costs tons of money to use, and the API documentation is buried somewhere in the 5th level of Dante's hell.

#javascript

A simple trick for making sure that anything you hang off of window.onscroll doesn't eat up too many cycles while it's doing its thing. It's called “throttling”.


Throttling basically means that if you're receiving a steady stream of input from something, you don't want to fire off work for every single event in that stream. That's a performance suck. Let's say you have this —

window.addEventListener('scroll', function() {
  // Stuff that's actually kinda CPU intensive like
  // taking measurements, waiting for some element
  // to show up on the screen, for example. 
  console.log('hi!');
});

This function is going to fire as many times a second as your computer can handle. If you're on a beefy laptop in Chrome this will probably not be noticeable, but make no mistake — none of your users are on as good a laptop as you are. You will definitely drop frames and your perceived performance will suck wind.

What's the answer? Throttle that code. Like this.

// timeNow is the current time in milliseconds, created by
// casting new Date() to a number with +
var timeNow = +new Date();
window.addEventListener('scroll', function() {
  // if the current time in milliseconds - timeNow is
  // less than 250, abort.
  if((+new Date() - timeNow) < 250) return;
  // Else, reset timeNow to now.
  timeNow = +new Date();
  console.log('hi!');
});

This is hack-y looking because it's kind of a hack. Underscore and Lodash have this built in, but they might be a little heavier than what you need. If you find yourself using this more than once in a file, please either bring in Lodash or rip off their implementation into your project.
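
For reference, the Lodash version of the same listener looks like this (250ms wait; trailing-edge behavior differs slightly from the hack above):

window.addEventListener('scroll', _.throttle(function () {
  console.log('hi!');
}, 250));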

#javascript