Ignored By Dinosaurs 🦕

drupal

There are other articles on this topic around the internet, but for some reason I could never completely make the mental connection on how Drush aliases worked until recently. It's actually really simple to get started, but most other articles tend to throw all the options into their examples so it kind of muddies the waters when you're trying to set yours up. By “you/yours”, of course I mean “I/mine”.

Simple

My work is an Acquia hosting client, and we have a multisite setup. Aliases are a natural fit for multisite configs, so let's show that first.

<?php

// put this in a file at ~/.drush/local.aliases.drushrc.php

$aliases['foo'] = array(
  'root' => '/path/to/docroot',
  'uri' => 'local.foobar.com', // the local dev URL
);

This is all you need to get off the ground and start using aliases locally. If you then run drush cache-clear drush to reset Drush's internal cache, followed by drush site-alias, you should be presented with a listing of your aliases.

@none
@local
@local.foo

The key to this scheme, and something that I feel was inadequately explained to me even after numerous tutorials, is that the name of the file itself defines the group of aliases that these settings belong to. If you put this into ~/.drush/foo.aliases.drushrc.php then your list of aliases would look like this —

@none
@foo
@foo.foo

If you're running multisite, you'll have a few more in there —


<?php

// put this in a file at ~/.drush/local.aliases.drushrc.php

$aliases['foo'] = array(
  'root' => '/path/to/docroot',
  'uri' => 'local.foobar.com', // the local dev URL
);
$aliases['bar'] = array(
  'root' => '/path/to/docroot',
  'uri' => 'local.example.com', // the local dev URL
);
$aliases['ibd'] = array(
  'root' => '/path/to/docroot',
  'uri' => 'local.ignoredbydinosaurs.com', // the local dev URL
);


$ drush sa

@none
@local
@local.foo
@local.bar
@local.ibd

Ok, whoop-tee-do, what do you do with that?

Try clearing the cache on one of those sites from anywhere in your file system with drush @local.foo cc all, or clear the caches on every site in that file with drush @local cc all. This is helpful out of the box even without multisite, since you don't have to be in the Drupal file tree to call drush without getting yelled at for “not having a high enough bootstrap level”, but it becomes a major time saver in multisite, since the alternative would be cd-ing around constantly to run commands from different directories in sites/*.

Nice and simple. Ready to kick it up a notch?

Remote servers

Let's run drush commands on a remote server without having to log in!

<?php

// how about we put this code into
// dev.aliases.drushrc.php

$aliases['foo'] = array(
  'root' => '/var/www/path/to/docroot',
  'uri' => 'dev.foobar.com',
  'remote-host' => 'devbox.example.com',
  'remote-user' => 'ssh_username',
);

$aliases['bar'] = array(
  'root' => '/var/www/path/to/docroot',
  'uri' => 'dev.example.com',
  'remote-host' => 'devbox.example.com',
  'remote-user' => 'ssh_username',
);

This would grow your list of aliases thusly —

$ drush sa

@none
@local
@local.foo
@local.bar
@local.ibd
@dev
@dev.foo
@dev.bar

...and would let you run any old Drush command you want without having to even be bothered with logging in to that server!
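For example, with the aliases above (status and sql-sync are both core Drush commands; adjust the alias names to your own setup):

```shell
# run any drush command against the remote site
drush @dev.foo status

# or combine two aliases to pull the dev database
# down into your local copy of the site
drush sql-sync @dev.foo @local.foo
```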

Lots more examples and info out there, but this should get you started.

#drupal #devops

We are currently running a Drupal multisite installation on Acquia's enterprise cloud. We have a bunch of different domains for the various sites, as well as the various environments in which they run. The development domains look like pddah.dev.abm, pddah.staging.abm etc, presumably to prevent them from being accessed from the outside world.

This setup requires a rather voluminous sites.php file in the root of the sites/ directory to map all the potential incoming hostnames to their correct websites.

A simpler way around this is to make use of how Drupal maps incoming hostnames to the correct sites/* folder in the first place.


If there is nothing in the sites/ folder except for default, then that is what will get loaded no matter what the incoming domain is. This is Drupal's default config, in fact. If you want to go multisite, you create sites/* directories for each of your websites' domains and Drupal will figure it out for you. But its rules for how it routes are a little bit liberal.

For example, I'm running pddnet.com, but the website actually exists in www.pddnet.com. I only have a pddnet.com folder in sites/ though, so that means that any subdomain of pddnet.com will also route to that directory. If I create a local development domain local.pddnet.com, assuming my local network and apache configs are in order, Drupal will load the config out of the pddnet.com directory without having to do any more work or add anything to sites.php.

This means that you can create dev.pddnet.com, staging.pddnet.com, whateveryouwant.pddnet.com and provided the network plumbing is right between here and there, it'll just work.
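The lookup described above amounts to something like this. This is a simplified sketch in JavaScript for illustration only; Drupal's real implementation is conf_path() in includes/bootstrap.inc, which also considers the URL path, not just the hostname.

```javascript
// Simplified sketch of Drupal's sites/ directory resolution:
// strip subdomain labels off the left of the hostname until a
// matching sites/ directory is found, falling back to 'default'.
function resolveSiteDir(hostname, existingDirs) {
  var labels = hostname.split('.');
  for (var i = 0; i < labels.length; i++) {
    var candidate = labels.slice(i).join('.');
    if (existingDirs.indexOf(candidate) !== -1) {
      return candidate;
    }
  }
  return 'default';
}
```

So resolveSiteDir('local.pddnet.com', ['pddnet.com']) lands on the pddnet.com directory, which is exactly the behavior being leaned on here.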

Of course, this also requires having the same settings file in all of these different environments, which means that either you have to have the same DB settings in every environment, or you need to figure out some other way to load env-specific config into that file.

Acquia has a methodology that I'm probably under NDA to not divulge here, but it was devised in an era before modern PHP was a thing. These days we have tools like phpdotenv, and it's that tool that I'm exploring currently for some work that we're doing here that'll span multiple environments.

When I work out how best to integrate it with Drupal, I'll let you know. So far so good though.

#drupal

Prelude

Fastly is a CDN (content delivery network). A CDN makes your site faster by acting as a caching layer between the world wide web and your webserver. It does this by maintaining a globally distributed network of servers and by using some DNS trickery to make sure that when someone puts in the address of your website, they're actually requesting that page from the nearest server in that network.

$ host www.ecnmag.com
www.ecnmag.com is an alias for global.prod.fastly.net.
global.prod.fastly.net is an alias for global-ssl.fastly.net.
global-ssl.fastly.net is an alias for fallback.global-ssl.fastly.net.
fallback.global-ssl.fastly.net has address 23.235.39.184
fallback.global-ssl.fastly.net has address 199.27.76.185

If they don't have a copy of the requested page, they'll get it from your webserver and save it for the next time. On subsequent requests, they serve the “cached version”, which is way faster for your users and lightens the load on your webserver (since the request never even makes it to your webserver). Excellent writeup here.

There are many different CDN vendors out there – Akamai being the oldest and most expensive that you may have heard of. A new entrant into the market is a company called Fastly. Fastly has decided on using Varnish as the core of their system. They have some heavyweight Varnish talent on the team and have added a few extremely cool features to “vanilla” Varnish that I'll get to in a moment.

That Fastly is built on top of Varnish is cool, mainly because every CDN out there has some sort of configuration language, and to throw your hat in with any of them is also to throw your hat in with their particular configuration language. Varnish has a well-known config format called VCL (Varnish Configuration Language) which, on top of having plenty of documentation and users out there already, is portable to other installations of Varnish, so learning it is time well spent. This is the killer Fastly feature that first drew me in.


(you can skip this – backstory, not technical)

Prior to using the CDN as our front line to the rest of the internet, we'd been on a traditional “n-tier” web setup. This meant that any request to one of our sites from anywhere in the world would have to travel to a single point – our load balancer in Ashburn, Virginia in this case – and then travel all the way back to wherever. In addition to this obvious global networking performance suck, we use a managed hosting vendor, so they actually own and control our load balancer. Any changes that we'd want to have made to our VCL – the front line of defense against the WWW – would have to go through a support-ticket-and-review process. This was a bottleneck in the event of DDoS situations, or any change to our caching setup for any reason.

Taking control of our caching front line was a necessary step. This became the second killer Fastly feature once we started piloting a few of our sites on Fastly.


The killer-est killer feature of all has only just become clear to me. Fastly makes use of a feature called “Surrogate Keys” to improve the typical time-based expiration strategy that we'd been using for years now. They have a wonderful pair of blog posts on the topic here and here.

Varnish is basically a big, fast key-value store. How keys are generated and subsequently looked up, as well as how their values are stored, are all subject to alteration by VCL, so you have a wonderful amount of control over the default methodology. By default it's essentially URLs as keys and server responses as values, and this will get you pretty far down the line, but you bump into the limits as soon as you start pondering that each response has but one key that references it. Conversely, each key references only one object. By default...

Real life example – I work for a publishing company. Our websites are not super complicated IA-wise. We have pieces of content and listing pages of that content, organized mostly by some sort of topic. A piece of content can have any number of topics attached to it, and that piece of content (hereafter referred to as a “node”) should show up on the listing pages for any one of those terms.

Out of the box, Fastly/Drupal works really well for individual nodes. Drupal has a module for Fastly that communicates with their API to purge content when it's updated, so if an editor changes something on the node they won't have to wait at all for their changes to be reflected to unauthenticated users. The same is not true for listing pages. Since these pages are collections of content and have no deeper awareness of the individual members of the collection, they function on a typical time-based expiration strategy.

My strategy for the months since we launched this across all of our sites has been to set TTLs (time to live, basically the length of time something will be cached) as high as I can until an editor complains that content isn't showing up where they want it to. I recently had an editor start riding me about this, so I lowered the TTLs to values so low that I knew we weren't getting much benefit out of even having caching in the first place. I'd known about this Surrogate Key feature and decided to start having a deeper look.


The ideal caching scenario would not only purge the node when editors update it, but also purge listing pages when a piece of content is published that should show up on them. This is where Surrogate Keys come into play. The Surrogate-Key HTTP header is a “space delimited collection of cache keys that pertain to an object in the cache”. If a purge request is sent to Fastly's API to purge “test-key”, anything with “test-key” in its Surrogate-Key header should fall out of cache and be regenerated.
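For reference, a key purge is a single call against Fastly's purge endpoint. The service ID and token below are placeholders:

```shell
# purge everything in this service tagged with the surrogate key "test-key"
curl -X POST \
  -H "Fastly-Key: $FASTLY_API_TOKEN" \
  "https://api.fastly.com/service/$FASTLY_SERVICE_ID/purge/test-key"
```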

In essence, what this means is that you can associate an arbitrary key with more than one object in the cache. You could tag anything on the route “api/mobile” with a surrogate-key “mobile” and when you want to purge your mobile endpoints, purge them all with one call rather than having to loop through every endpoint individually. On those topic listing pages you could use the topic or topic ID as a surrogate-key, and then any time a piece of content with that topic is added or updated, you can send a purge to that topic ID and have that listing page dropped. And only that listing page dropped.

// the basic algorithm, NOT functional Drupal code

if ($listing_page->type == "topic") {
  $keys = [];

  // Topics can have children, so fetch them.
  // Pretend this returns a perfect array of topic IDs.
  $topics = get_term_children($listing_page->topic);
  // Push the parent topic into the array as well.
  $topics[] = $listing_page->topic;

  foreach ($topics as $topic) {
    $keys[] = $topic;
  }

  $key = implode(" ", $keys);
  add_http_header('Surrogate-Key', $key);
}

This results in a topic listing page getting a header like this -

# the parent topic ID as well as any child topic IDs
Surrogate-Key: 3979 3980 3779

Then, upon the update or creation of a node you do something like this -

// this would be something like hook_node_insert if you're a Drupalist
function example_node_save_subscriber($node) {
  $fastly = new Fastly(FASTLY_SERVICE_ID, API_KEY);
  foreach ($node->topics as $topic_id) {
    $fastly->purgeKey($topic_id);
  }
}

This fires off a Fastly API call for each topic on that node that would cause anything with that surrogate key, aka topic ID, to be purged. This would be any topic listing page with this topic ID on it. Obviously if there are 500 topics on any piece of content you'll probably want to move this to a background job so you don't kill something, but you get the idea.


This is sort of like chasing the holy grail of caching. In theory this means that you are turning the caching TTLs up to maximum and only expiring something when it actually needs to be expired based on user action and intent, not based on some arbitrary time that I decide on based on my lust for having everything as fast as possible. The marvelous side effect of this is that (again in theory) everything should load even faster since there's almost no superfluous generation of pages at all.

I just released the code on Friday morning, and the editor who was previously riding me about this topic had only positive feedback for me, meaning – so far, so good.


FYI – the holy grail actually looks more like this -

#braindump #varnish #drupal #devops

Notes from Mike Ryan, the Migrate guy, at the bottom. Make sure you read them before copying any of this code.


The Migrate module has some of the best documentation I've ever seen in DrupalLand, but there are still just a couple of things that I've figured out over the last month that I wish had been clearly explained to me up front. This is an attempt to explain those things to myself.

Clearly, you're going to be writing code to perform this migration.

  • There is a module – migrate_d2d – that is specifically for stuff like this. It's aware of Drupal content types' basic schema, so it'll save you a LOT of SQL writing to join each node's fields on to the base table.
  • You'll write a class for each migration that you need to perform.
  • You'll need to write a separate class for each content type that you have.
  • You'll need to write classes for roles, users, files, and each taxonomy vocabulary that you have on the site.
  • You'll tell Drupal about these migrations by writing an implementation of hook_flush_caches() that'll “register” the migrations and make them show up in the GUI and in drush. This looks basically like this —

function abm_migrate_flush_caches() {

  $common_arguments = array(
    'source_connection' => 'dfi', // the "source" database connection.
    // There's a syntax for this that basically mirrors the way that
    // you set up database connections in settings.php.
    'source_version' => 7, // a tip to the migrate_d2d module as to which
    // version of Drupal you're migrating from.
  );

  $arguments = array_merge($common_arguments, array(
    'machine_name' => 'AbmRole',
    'role_mappings' => array(
      'administrator' => 'administrator',
      'editorial staff' => 'editorial staff',
      'pre-authorized' => 'pre-authorized',
      'directory listee' => 'directory listee',
      'directory listee - preapproval' => 'directory listee - preapproval',
      'directory manager' => 'directory manager',
      'web production' => 'web production',
    ),
  ));

  Migration::registerMigration('AbmRoleMigration', $arguments['machine_name'], $arguments);

  $arguments = array_merge($common_arguments, array(
    'machine_name' => 'AbmUser',
    'role_migration' => 'AbmRole', // forms the relationship between this
    // user migration and the role migration that already happened.
    // This only works for a few specific, simple cases.
    // Relating nodes with taxonomy items, for example, happens elsewhere
    // (in the actual migration class...).
  ));

  Migration::registerMigration('AbmUserMigration', $arguments['machine_name'], $arguments);

  $arguments = array_merge($common_arguments, array(
    'machine_name' => 'AbmProdCats', // when you run "drush ms"
    // (migrate-status), this is the name that shows up.
    'source_vocabulary' => 'product_categories', // yay machine names
    'destination_vocabulary' => 'product_categories',
  ));

  Migration::registerMigration('AbmProdCatsMigration', $arguments['machine_name'], $arguments);
}

  • Registering the migration also creates a set of database tables for each migration, the most interesting of which is the migrate_map_xxx, where “xxx” is the machine_name of your migration, downcased.

mysql> show tables like 'migrate%';
+------------------------------------+
| Tables_in_for (migrate%) |
+------------------------------------+
| migrate_field_mapping |
| migrate_group |
| migrate_log |
| migrate_map_abmadterms |
| migrate_map_abmappnotes |
| migrate_map_abmarticle |
| migrate_map_abmawardwinners |
| migrate_map_abmblogs |
| migrate_map_abmcompanyprofiles |
| migrate_map_abmdigitaleditions |
| migrate_map_abmevents |
| migrate_map_abmfiles |
| migrate_map_abmnews |
| migrate_map_abmpodcasts |
| migrate_map_abmprodcats |
| migrate_map_abmproductreleases |
| migrate_map_abmproducts |
| migrate_map_abmrole |
| migrate_map_abmtopics |
| migrate_map_abmuser |
| migrate_map_abmvideos |
| migrate_map_abmwebinars |
| migrate_map_abmwhitepapers |
| migrate_message_abmadterms |
| migrate_message_abmappnotes |
| migrate_message_abmarticle |
| migrate_message_abmawardwinners |
| migrate_message_abmblogs |
| migrate_message_abmcompanyprofiles |
| migrate_message_abmdigitaleditions |
| migrate_message_abmevents |
| migrate_message_abmfiles |
| migrate_message_abmnews |
| migrate_message_abmpodcasts |
| migrate_message_abmprodcats |
| migrate_message_abmproductreleases |
| migrate_message_abmproducts |
| migrate_message_abmrole |
| migrate_message_abmtopics |
| migrate_message_abmuser |
| migrate_message_abmvideos |
| migrate_message_abmwebinars |
| migrate_message_abmwhitepapers |
| migrate_status |
+------------------------------------+
44 rows in set (0.00 sec)

mysql> describe migrate_map_abmblogs;
+-----------------+---------------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-----------------+---------------------+------+-----+---------+-------+
| sourceid1 | int(10) unsigned | NO | PRI | NULL | |
| destid1 | int(10) unsigned | YES | | NULL | |
| needs_update | tinyint(3) unsigned | NO | | 0 | |
| rollback_action | tinyint(3) unsigned | NO | | 0 | |
| last_imported | int(10) unsigned | NO | | 0 | |
| hash | varchar(32) | YES | | NULL | |
+-----------------+---------------------+------+-----+---------+-------+
6 rows in set (0.00 sec)

mysql> describe migrate_message_abmblogs;
+-----------+------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-----------+------------------+------+-----+---------+----------------+
| msgid | int(10) unsigned | NO | PRI | NULL | auto_increment |
| sourceid1 | int(10) unsigned | NO | MUL | NULL | |
| level | int(10) unsigned | NO | | 1 | |
| message | mediumtext | NO | | NULL | |
+-----------+------------------+------+-----+---------+----------------+
4 rows in set (0.00 sec)

[!note] Since Migrate is an OOP thing, you can write a parent class for a generic “Node” migration that all of the other specific content types can inherit from. Most of the node migration classes that I wrote look like this, due to most of the fields being set up in the parent class —


class AbmBlogsMigration extends AbmNodeMigration {
  public function __construct(array $arguments) {
    parent::__construct($arguments);
    $this->addSimpleMappings(['field_pipes_flag']); // this is the only
    // blogs-specific field that existed on this content type; all of the fields
    // that are common among all content types are mapped in the parent class -
    // AbmNodeMigration - in exactly the same manner. Except for some that aren't...
  }
}

[!info] Most fields in a Drupal to Drupal migration will come over easily with Migration::addSimpleMappings(), but some require a little more coddling. These are often fields that represent a relationship to another entity – Taxonomy term references, other node references, etc. These will require something like this —



abstract class AbmNodeMigration extends DrupalNode7Migration {
  public function __construct(array $arguments) {
    parent::__construct($arguments);

    $this->addFieldMapping('field_taxonomy', 'field_taxonomy') // sets up the mapping
      ->sourceMigration('AbmTopics'); // tells you which migration to reference.
    // This makes it look to the migrate_map_xxx table to pull out the NEW
    // destination primary keys. Without this bit, it'll try to bring over the
    // related entity IDs verbatim, which will either fail because there is no
    // taxonomy term/node/whatever with that ID, or it'll just relate it to the
    // wrong entity. Either way, bad.

    $this->addFieldMapping('field_taxonomy:source_type') // I wish I could tell you more
      ->defaultValue('tid'); // about what this part means, but I just don't know yet.
    // All I know is that the previous lines are not enough to make it work.
  }
}

  • Speaking of that, prior to finally putting the pieces together about how related entities maintain that relationship, I did lots of clever coding to maintain the relationships between imported entities. It's not that complicated, but I was manually looking into the migrate_map_xxx tables to pull destination IDs out. This is obviously wrong and felt wrong when I was doing it, but it didn't all click until chasing down vague error messages about “field validation errors” in later migrations. Migrate doesn't tell you which fields are in error; it just throws an Exception on these nodes and doesn't save them. I finally ended up dumping $errors in field_attach_validate() and saw that it was always a related-entity field that was erroring. It was easy to figure out after that, but it took me several weeks of getting my head around the rest of it all to be able to get to that very simple point.
  • I missed all of that for so long because the user migration has this tidy little line about 'role_migration' that establishes the relationship, so I thought it would/should be something along those lines. I spent a long time in the module code tracing down default arguments and the like before finally just doing it the hard way. This is wrong.
  • Oh, by the way, USE THE LATEST VERSION OF ALL OF THESE MODULES. Migrate finally released 2.6, years in the making apparently, a couple of weeks ago, as I was in the middle of all this. I'd been using the previous stable release, which is of course missing years of work; 2.6 solves almost all of these problems out of the box.
  • Here's another little gem regarding files, and making those relationships tie out —

// In AbmNodeMigration::__construct()

$this->addFieldMapping('field_image', 'field_image') // sets up the mapping
  ->sourceMigration('AbmFiles');
$this->addFieldMapping('field_image:file_class') // but for some reason it doesn't
  ->defaultValue('MigrateFileFid'); // work without this part. The answer is
// here - https://www.drupal.org/node/1540106 - but I haven't had time to
// fully absorb that part yet. I glossed over this part of the documentation
// a dozen times because file_class seems like it'd be unrelated to what you're
// trying to do - relate nodes and files. file_class sounds like something
// CSS related. Needless to say, it is not.

Beer shot -


A review from the guy himself —

The blog post looks like a good intro to migrate_d2d 2.0, but I'm afraid now it's a bit dated (as you point out towards the bottom).

hook_flush_caches() hasn't been considered a good practice for a while (defining migrations in hook_migrate_api() and using drush migrate-register is preferred – https://www.drupal.org/node/1824884), but I see that migrate_d2d_example still does it – I'll need to update that before the imminent new release so people aren't misled.

Setting the source_type to 'tid' is covered at https://www.drupal.org/node/1224042 – by default the incoming value for a term field is assumed to be the term name; when you're making use of a separate term migration via sourceMigration, the incoming value is a tid and you need to set the source_type so the field handler knows what to expect.

The file_class is similar – normally the value coming in to the file field is assumed to be a URL, but when using a separate file migration and referencing it via sourceMigration it's a fid. The “class” in file_class is a PHP class – the name of any class implementing MigrateFileInterface can be used here.

Thanks Mike!

#drupal

Hello there! There has been a lot of discussion in the Drupalsphere lately about a concept that has been coined “Headless Drupal”, and rightly so. It's basically theming, but with the theme layer thrown out. The theme layer in Drupal is very powerful, but has always felt severely over-engineered to me, especially in contrast to pretty much any MVC framework I've played with. With those, you define view-layer variables in a controller class, and you write HTML templates with a little bit of code to print them out. It's more work up front, since you have to write code, but it's vastly easier once you get over that hump.

The company I work for has been doing exactly this with AngularJS since early this year, and I've yet to see a concise post about how to get started with it in the context of Drupal. I rarely write “how-to” posts, but I figured it'd be a good way to inaugurate the ABM tech blog.

Our use case (feel free to skip)

Early this calendar year, my boss' boss came to us with a business request — build us a framework on which we can make our up-until-lately desktop-only websites a little more mobile friendly. We were using a rather ungainly combination of Panels and Adaptivetheme, and though those should have given us a good base on which to build, we had managed to mess it up.

Our original themer was a designer who learned CSS on the job, and our stylesheets were an enormous mess. Absolute positioning, widths specified in pixels, subthemes that had huge amounts of repetitive CSS rules that could've been rolled up into the parent theme. Rewriting these sheets would've been prohibitively expensive, and wouldn't have gained us anything in the eyes of the business.

To add to that, the aforementioned boss' boss was really keen on what we in the biz call “HTML5 mobile apps” that felt more like a native app rather than just a website that is readable on the phone. UI patterns would include swiping to navigate between stories, storing articles to read later, offline support, etc. Basically, not things you can do in any Drupal theme that I know of.

I spent a few days in R&D mode trying to figure out how to fake these things with pre-fetching pages so they'd be rendered already when the user swiped, but it was a mess.

I knew in the back of my head that I was doing it the wrong way, but it took some prototyping to convince myself that what we indeed needed to do was to throw out the book on Drupal theming and do this another way.

I love writing Javascript, and I'd finally found a use case for which one of these JS MV* frameworks might actually fit the bill.

The Drupal setup

So, assuming you have a clean Drupal install spun up already (if you don't, may I suggest drush core-quick-drupal?), you'll want to download a couple of modules to get going.

  • Views (obviously! don't forget the ctools dependency!)
  • Views Datasource (this lets you spit out views as JSON).
  • Devel (for generating some content if you don't already have some)
  • CORS (just trust me, I'll get to this one later)

Or you can just do a drush dl views ctools views_datasource cors devel and be done with it.

Enable all these modules – drush en views views_ui devel devel_generate cors views_json -y.

Generate some content – drush generate-content 50 0 --types=article

Ok, you're ready to hop into the UI!


Pretty much all the action at this point is going to happen in Views, so navigate to admin/structure/views, and “Add new view”.

  1. Name it whatever you want, may I suggest “json articles”?
  2. You want to show “Content” of type “Articles” sorted by “Newest first”.
  3. Create a page, yes. Display format will be “JSON Data Document”. Continue and edit. Screenshot of these settings.
  4. Just add a few fields, since you only have the title so far. Add the Body, the nid, and the date in whatever format you please.
  5. You do want to create a label, but you'll be better off customizing it to be all lowercase with no spaces, i.e. body, post_date, etc. What you should see at this point

In the preview, you should see a pretty-printed json list of your 10 most recent articles, as generated by Devel.
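Assuming lowercase labels like body, nid, and post_date, the preview output looks roughly like this. The exact root and row keys depend on what you set in the JSON format settings, and the values here are purely illustrative:

```json
{
  "nodes": [
    {
      "node": {
        "title": "Quo Vadis",
        "body": "Lorem ipsum dolor sit amet...",
        "nid": "42",
        "post_date": "2014-09-01"
      }
    }
  ]
}
```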


Congrats! The Drupal part is done! We'll be visiting the more interesting part in the next post.

#drupal #angular #javascript

Exposition

One of these days I'll get around to writing the post that's been in my head for 6 months now about how to get up and running with Angular in a Drupal setting. Today is not that day, however.

What I'd like to talk about today is a little hack that I came up with to allow me/us to maintain a single codebase to serve several different mobile apps.

Multisite Drupal

You're likely already familiar with this concept, and it has its proponents and its detractors. The second (anti) blog post has some good points — it is a “hack” and you can paint yourself into a corner if your feature set starts to diverge between your “sites”. Depending upon your business case however, it can be extremely useful.

We use this at my day job to generally good effect. Someday there should be a blog post about the pitfalls of Features and why its siren song leads so many Drupal developers to crash their ships upon the rocky shores of circular dependencies and conflicted configurations, but this is not that post either.

Multisite Angular (to the point)

So how do you do this with Angular? In an Angular setup, you probably don't have any “moving pieces”, ie – this thing is just HTML, JS, and CSS, talking to a server endpoint somewhere. That's cool! But it also means you don't really have the luxury of inspecting requests and setting environment variables on the server (like Drupal does it) to serve multisite. It has to happen in the browser. So what can you look at in the browser to set configuration for your multisite setup?

The URL, of course!

Angular “config” service

I'll just drop the code.

'use strict';

angular.module('mobileApp')
  .factory('Config', ['$window', function($window) {
    // Set up whatever variables you need here
    var test = {
      endpoint: 'http://testing-endpoint.com',
      randomVar: 'foo'
    };

    var prod = {
      endpoint: 'http://production-endpoint.com',
      randomVar: 'bar',
      otherVar: 'baz'
    };

    // A pointer object, basically. Keeps things
    // a little more organized
    var configs = {
      // production config
      'production-mobileapp.com': prod,
      // test config
      'testing-mobileapp.com': test,
      // dev config
      'localhost': test
    };

    return {
      fetch: function() {
        // 'configs' returns whatever it's holding in the
        // property with the key of 'window.location.hostname',
        // which in this case is our config for this "site"
        var config = configs[$window.location.hostname];

        // Maybe you have some special snowflakes still.
        // This can help you keep the divergence in check
        config.otherVar = config.otherVar || config.randomVar;

        return config;
      }
    };
  }]);

In the module that needs to know this stuff, you just pass in Config as a dependency and call fetch() on it.

var siteVars = Config.fetch();

We use this setup to specify, for instance, the path to site-specific CSS, or site-specific DFP ad tag configuration.

// In mainController.js, or wherever it makes sense

var stylePath = "/sites/" + siteVars.randomVar + "/styles.css";

And then that gets referenced in the head of the doc.


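A minimal sketch of that head reference, assuming stylePath has been exposed on a scope visible at the document root (e.g. by mainController; adjust names to your app):

```html
<!-- In index.html. ng-href keeps the browser from requesting a
     literal "{{ stylePath }}" URL before Angular compiles the page -->
<link rel="stylesheet" ng-href="{{ stylePath }}">
```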
A hack? Yes. I've built two fairly large apps with this approach and have yet to paint myself into a corner though, so it's a fairly useful and rather robust hack, IMHO.

#angular #drupal

For some reason I got it into my head the other day to tinker around with MongoDB and Drupal; masochism, I guess. Anyway, I finally had the opportunity to start last night.

The setup was fairly easy – if you're on a Mac with Homebrew, just hit up the josegonzalez builds of PHP 5.5 and then brew install php55-mongo or something like that. Just brew search php55-mongo before you go blindly copying that command in.

Making the connection between Drupal and Mongo was also pretty straightforward, just follow the instructions on this post.

After that I was ready to migrate some content, so I did the old drush | grep mongo to see a list that looked like this —

All commands in mongodb: (mongodb)
  mongodb-cli (mdbc) Open a mongodb command-line interface using Drupal's credentials.
  mongodb-conf Print mongodb connection details using print_r().
  mongodb-connect A string for connecting to the mongodb.
  mongodb-query (mdbq) Execute a query against the site mongodb.
Other commands: (adaptivetheme,archive,browse,cache,coder_review,topic,features_plumber,apachesolr_site,redirect,image,libraries,make,mongodb_migrate,nodequeue_generate,print_pdf,queue,rules_scheduler,runserver,search,shellalias,sitealias,ssh,acquia_search,acquia_spi,strongarm,test,usage,uuid,variable,views_bulk_operations,views_data_export,watchdog,xmlsitemap)
  mongodb-migrate Migrates fields. Run mongodb-field-update first.
  mongodb-migrate-prep Prepare for migrate. Resets existing migration. No data is lost.

So, cool! The only problem is I repeatedly got this back —

The drush command 'mongodb-migrate-prep' could not be found. Run 'drush cache-clear drush' to clear the commandfile cache if you have installed new extensions.

Over and over, clearing the drush cache over and over, until I finally thought to look in the drush files that came with the Mongo module. The trick is that the command chx actually shipped is called drush mongodb-migrate-prepare, not just “prep”.

It's not terribly reassuring about the experience that lies ahead of me that an error this simple and fixable is sitting there unpatched, apparently after two years' worth of development, since I first tried the green “official” release before going -dev on it. I suppose I'll be submitting a patch for that.

I have another blog post about the migration tribulations, but I eventually got through it and it's kinda cool. Instead of a giant throbbing schema full of field_data_this_and_that, all you have is —

> show collections
cache
cache_bootstrap
fields_current.file
fields_current.node
fields_current.taxonomy_term
fields_current.user
fields_revision.node
migrate.file
migrate.node
migrate.taxonomy_term
migrate.user
queue.feeds_importer_expire
queue.feeds_push_unsubscribe
queue.feeds_source_clear
queue.feeds_source_import
queue.file_entity_type_determine
queue.print_mail_send
queue.views_bulk_operations
session
system.indexes

Each node document in that collection has every field that it needs to have, right on the node! Win! Now to figure out what to do with it!!

#drupal

The beginning

I'll make the “what is Angular” part as brief as possible.

Angular is a giant JavaScript framework from our friends at Google. It fits into a similar camp with Backbone, a framework that Drupalists will likely have heard of since it's included in D8 core. Angular aims to make it as easy as possible to build a single page CRUD app, and in my limited experience it succeeds.

I've never built anything with Backbone, but I have the Peepcode series on it, and I've been working heavily with Rails for a good while now. I'll hold off gushing about Rails for the time being, but let's just say I really love the way that Rails doesn't really write any markup for you. It's much simpler to find your way through the request/response path, and in general I find developing with Rails to be a vastly more pleasant experience than developing with Drupal.

Alas, I've been a professional Drupal dev for about 4 years now.

The use case

I work at a publishing company. We publish about 26 different pubs, many of them still in print. Within the last year we finished migrating all of our websites from various proprietary CMSs into a Drupal 7 multisite installation. The sites support desktop only at this point, as we are a small company and resources are definitely constrained. (This also has its upsides, which we'll get to.)

Last fall we rebuilt the company website from a static HTML site into Drupal 7. Since this was not a part of the multisite install, we were allowed to architect the site from scratch with my boss doing all the site building and myself doing all the theming. Mobile was a big priority this time, so I spent a good chunk of the development cycle implementing mobile-friendly behavior and presentation and generally getting a good feel for how to really do a responsive site. As an aside, for mobile/responsive site building and crafting responsive stylesheets, less is definitely more.

The end of this winter has brought a timetable for offering a more accommodating mobile experience on our “brand sites”.

The dream

So my boss and his boss want a mobile site “like the Financial Times has”. If you have an iOS device, go to app.ft.com, and if you're a Drupal developer, try to get your head around how you'd pull that off. But try to forget for a moment that this is a post/series about Angular; pretend that you were going to pull it off in a more or less responsive fashion.

I spent a couple of days surveying the landscape for JS libraries that help out with touch behavior, and trying to figure out how to prefetch content so that when the user swipes from page to page or section to section, there wouldn't be a giant delay while the network does its thing transferring 1,000 lbs. of panels-driven markup. This was Monday and Tuesday of last week.

A pause for reflection

My enthusiasm for the project already waning, I sat back and thought about how we ought to be doing this thing. What they want is a mobile app, not a responsive website.

The way that you implement a mobile app is not by loading a page of HTML markup with every user action; it's by loading the app once and then communicating with the server via JSON (or XML or whatever, if you wanna get pedantic). This kind of thing is supremely easy to do with Rails, mainly due to Rails's deep embrace of REST six years ago, which totally got ahead of, perhaps even enabled, this whole JavaScript app revolution in which we find ourselves. Outputting a particular resource as JSON is as simple as a line of extra code allowing the controller to respond to a request with a different format.

Step 1, Services

I'd never played with Services, so I didn't know how easy it was to output a node as JSON. On Wednesday of last week, some time after lunch, I decided to find out. Turned out we already had Services installed for another use case that we just recently implemented (offloading our Feeds module aggregation to another open source project called Mule, but that's a whole other series of posts), so all I had to do was bumble through a tutorial on how to set up a Node endpoint.

In less than 5 minutes I had exactly what I needed in terms of dumping a node to JSON. I've been reading Lullabot's recent posts about their use of Angular, so the rest of this series will follow along as I learn how to use this giant beast of a JS framework to build the mobile app my boss' boss wants with a minimum of Drupal hackery.

The next post will pick up Thursday morning (as in, 6 days ago) where I first downloaded Angular into the project.

#angular #javascript #drupal

I'm sorry, but if there's one thing I love doing, it's taking Drupal down a peg.


I'm currently investigating doing some real-time push notification work on my company's sites to make them more buzzword compliant. This is great because it finally gives me a bona fide excuse to dig into a tech that I've long wanted to find a nice small use case for – Nodejs. We could easily outsource this piece to something like Pusher, but we outsource a lot of pieces of our architecture, and with each piece comes a little accrual of technical debt. We might be able to skate by without ever having to pay it off, but we've just recently gone through a large exercise with Exact Target, our email service provider, that was vastly less than smooth. So what I got was buy-in to investigate the merits of keeping it in house.

Now, real-time notifications aren't exactly setting up a WordPress blog, but they're also a pretty well-solved problem in this day and age, and the one use case where Node just absolutely earns its bread. So we're looking into WebSockets and tying into some simple events in the Drupal stack. I was just thinking about the presentation I went to at DrupalCampNJ a couple years ago by the author of the Nodejs module. His was mainly a plumbing job exposing some of Drupal's hook lifecycle to the Node event loop, and it may very well end up being something we leverage, but this phrase popped into my head.

The last thing on Earth I want to do is to couple more shit into Drupal. What I want to do is to break Drupal into little pieces, but it just keeps getting bigger and bigger. Not unlike how Bear Stearns and Wachovia got absorbed into larger banks that then became even larger banks, Drupal is *too big to fail*. I think we're in the twilight of the monolithic CMS age, but plenty of folks are betting that we aren't. I suspect we'll all have jobs one way or another, but something is just fundamentally unsound about the approach with D8. To me. I am a terrible programmer by the way, vastly inferior to all core Drupal devs, lower than dirt. Fair disclosure.

#drupal

There's a question currently on r/drupal asking “What's the best way to get your head around Views?”. There are many excellent answers — “study the Views API file”, “get to know the UI”, “fuck Views because it writes poor queries” — but none of them, to me, really answer the question.

Views, for the uninitiated, is the open source, community contributed module that is the reason Drupal is the powerhouse that it currently is. Yes, there are many excellent Drupal features that have contributed to its adoption across the CMS marketplace, but there is no other contrib module that increases Drupal's capabilities as vastly as Views. Views is “a query builder”. That means lots of things, since almost any modern CMS is simply a front end to a database somewhere, and the very act of clicking any button on almost any website means that a database is being queried somewhere in the distance. Thus, a “query builder” is a pretty cool tool to have at your disposal. You can contort almost any conceivable feature out of Views if you really know your way around. But how do you learn your way around? To me, that is what the author of the question was getting at.

Well, if you came to Drupal the way I did – trial and bumbling error, and not via a CS program somewhere – then you might be surprised to learn that a “view” is a standard feature in most RDBMSs – Wikipedia has a great entry. In essence, a “view” in SQL is a predefined query. Views allow a DBA (database administrator) to build up a more complex query that they can then hand off to a “normal” user for day-to-day operations. Maybe this query has several joins and numerous where clauses that are tough to remember but never change, and the business user needs to supply only one varying parameter to get the results they want. Another use case might be limiting access to the DB by granting users access to the views and not to low-level querying of the DB (for security reasons). Thus, the seemingly awkwardly named “Views module” actually does exactly what it says it does, if you know the terminology.
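To make that concrete, here's a sketch of an SQL view (the view name and the specifics are hypothetical, loosely based on the stock Drupal schema): the DBA bakes the joins and where clauses in once, and the business user supplies only the one varying parameter.

```sql
-- The DBA predefines the hairy query once
CREATE VIEW published_articles AS
SELECT n.nid, n.title, u.name AS author
FROM node n
JOIN users u ON u.uid = n.uid
WHERE n.type = 'article' AND n.status = 1;

-- The "normal" user then queries the view, supplying
-- only the varying parameter
SELECT * FROM published_articles WHERE author = 'bob';
```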

Thus, the best way to learn Views is to learn SQL itself. Views' strange terminology (contextual filters, relationships, etc) are just different names for standard query features in MySQL or any other relational database system. Once you start poking around the standard Drupal DB schema, and start stepping through how a simple Drupal View is put together, you can start to understand the deeper mechanics of how the code works under the hood.

When beginning with a new view, the first question you are asked is Show _______ of type _____ sorted by _______. This is the bones of a very simple select query. Show _____ asks which table in the DB is going to be the base of this query; most often it'll stay on the default “content”, which means the node table. Of type _____ says where type = whatever, and sorted by _____ does just that. So you end up with something like ...

SELECT * FROM node WHERE type = 'article' ORDER BY created DESC

... and you're off to the races. The rest of the Views wizard allows you to refine this query to (hopefully) pull out what you want to display on the site.

The key to learning Views, therefore, is learning the Drupal database structure in general, and how to query it in straight SQL to get what you want. Once you've wrapped your head around how to join the users table to the role table via the users_roles table in order to pull out every user who is an admin from the mysql command line, it becomes much easier to translate this into a much quicker job in the Views UI. Soon you'll notice quirks like the users table being plural while node and role are singular, and then you'll be well down the path of a deeper understanding of what makes Drupal such an absurdly powerful CMS.
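That users-to-role join, in straight SQL against the stock Drupal 7 tables (the role name 'administrator' is the usual default, but check your own role table):

```sql
-- Every user holding the administrator role, via the
-- users_roles bridge table
SELECT u.uid, u.name
FROM users u
JOIN users_roles ur ON ur.uid = u.uid
JOIN role r ON r.rid = ur.rid
WHERE r.name = 'administrator';
```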

#drupal