Ignored By Dinosaurs 🦕

My brother-in-law is a recruiter. He has historically recruited salesfolks for companies “in the BI space”. I tried to help him out many years ago when I was still on the road playing music, but I had absolutely no background to do anything other than plow through the spreadsheet of contacts that he had and try to get a response, like most witless recruiters. I had no idea what BI was.

Years later, after starting a new job at ABM, I still had no idea what BI was. It was something we needed, or something we did; I wasn't really sure. We had a big old database, some stuff was in there, reports came out. Somebody read them. No idea. It didn't seem very intelligent, but apparently it helped with our business, yet most of the time people seemed pissed off at it and at the person who ran it.

So here's a quick definition of “Business Intelligence” for me, 5 years ago.


Companies take in a lot of data. Data can be anything. It can be logs from your webserver. It can be daily dumps from Google Analytics about the traffic on your site. It can be daily dumps from Exact Target or Mailchimp about what emails went out yesterday, on what lists, and which ones were opened, which ones bounced. What videos were played on the sites yesterday? On what kind of browser? Basically it's anything and everything that a business can get its hands on.

Ok, you've got your hands on it, now what? Let's figure out how to figure out what is going on with our business on a macro scale so that the C suite can make decisions and we can all keep our jobs.

This is basically what BI is. Take in data. Munge data. Get answers out of data so that you can run your business.

Obviously today (2016), this is big business. “Big data”, you've heard of that? Very closely related to BI, since the amount of data that we are able to take in these days is so vast that there's no way we could get meaningful answers out of all of it using technology from even just 10 years ago.

Wrangling all this data is a wide open field, and that's where I want to be right now.

#business #data

Yeah, the pricing on the new AWS ES service is too high for you too, huh? Well, just using their service is a heck of a lot easier, and possibly cheaper in dev time, than trying to set it up yourself. Consider that. But possibly together we can make it over the hump.

These are the bits that I was stuck on.


Put all your nodes in the same security group

I have a group for all my EC2 instances that has the appropriate ports opened up.

#devops #elasticsearch

This one really took longer than it needed to.

If you're here, hopefully you've already been through this lesson on setting up the full ELK stack, with Logstash-Forwarder thrown in to boot. For me it pretty much ran as intended from top to bottom, so hopefully you're already getting data into Elasticsearch and are flummoxed by how every single other Logstash config out there for parsing your syslog data doesn't seem to do the job and is still just treating it like every other syslog message.

The rest of the steps to configure Drupal/Logstash

Drupal's “syslog facility” setting

This is more or less the key. You have to dig around in Drupal, as well as your webserver, and make sure that Drupal is logging to its own log. By default it'll just go to syslog, and then you'll have a hell of a time distinguishing messages from Drupal on the way in.

If you recall your Logstash-Forwarder config, you tagged the syslog watcher with a "type": "syslog" bit. This is really the only info that logstash has at the point that you're setting up your input filters/grok config.

Regardless of Linux flavor, follow this guide to set up the syslog module to point to a logfile of your choosing – https://www.drupal.org/documentation/modules/syslog. I just copied everything in here, so I now have /var/log/drupal.log and it works just fine. The only thing I haven't figured out yet is that now Drupal is logging to both syslog and drupal.log, so somebody tell me how to stop that from happening.
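On the double-logging question: if the box uses rsyslog and Drupal's syslog facility is set to LOCAL0 (its default on Linux), a rule like the following should do it. The `& stop` line prevents matched messages from continuing on to /var/log/syslog. The filename and facility here are assumptions about your particular setup, and older rsyslog versions spell the discard as `& ~` instead of `& stop`.

```
# /etc/rsyslog.d/30-drupal.conf (filename is arbitrary)
# Write LOCAL0 messages to drupal.log, then stop processing them
# so they don't also land in /var/log/syslog.
local0.*    /var/log/drupal.log
& stop
```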

New Logstash-forwarder config

You'll just need to a) remove the old syslog watcher from your Logstash-Forwarder (henceforth LF) config and b) tell it to now watch the new drupal.log instead. This took the relevant bits of my LF config from this

 {
   "paths": [
     "/var/log/syslog",
     "/var/log/auth.log"
   ],
   "fields": { "type": "syslog" }
 }

to this

 {
   "paths": [
     "/var/log/drupal.log"
   ],
   "fields": { "type": "drupal" }
 }

Don't restart LF just yet; we have to configure Logstash to understand what to do with input of type “drupal” first.

New Logstash config

This is where I wasted most of my time over the last few days. I was under the mistaken impression that I could perform some kind of introspection into the fields that were parsed out and then tell Logstash to do this or that with them. As far as I can tell, you'd need the Ruby Logstash filter to do that, and this didn't feel like a complicated enough use case to warrant it if I could just figure out the “right” way to do things.

Anyway, you've probably already stumbled across this – https://gist.github.com/Synchro/5917252, and this https://www.drupal.org/node/1761974 both of which, annoyingly, show the same useless config (for me, anyway).

My logs look like this —

Oct 4 08:52:34 690elwb07 drupal: http://www.biosciencetechnology.com|1443963154|php|162.220.4.130|http://www.biosciencetechnology.com/news/2011/04/students-prediction-points-way-hot-dense-super-earth||0||Notice: Trying to get property of non-object in abm_metadata_page_alter() (line 41 of /var/www/cascade/prod/brandsites_multi.com/htdocs/docroot/sites/all/modules/custom/abm_metadata/abm_metadata.module).
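Before fighting with grok, it helps to eyeball the field order by splitting the message by hand. Drupal's watchdog line is base_url|timestamp|type|ip|request_uri|referer|uid|link|message; here's a quick sanity check using a stand-in line shaped like the real one above:

```shell
# Split a Drupal watchdog message on the pipes to confirm field order.
# The URL and message here are stand-ins, not real log data.
line='http://example.com|1443963154|php|203.0.113.9|http://example.com/some/page||0||Notice: some PHP notice'
echo "$line" | awk -F'|' '{ print "type:", $3; print "uid:", $7 }'
# prints:
# type: php
# uid: 0
```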

The config on that page is presumably looking for a string that begins with “http”, which this clearly does not. Here's the config for this particular sequence.

filter {
  if [type] == "drupal" {
    date {
      match => [ "timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
    grok {
      match => [ "message", "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: https?://%{HOSTNAME:drupal_vhost}?\|%{NUMBER:drupal_timestamp}\|(?<drupal_type>[^\|]*)\|%{IP:drupal_ip}\|(?<drupal_request_uri>[^\|]*)\|(?<drupal_referer>[^\|]*)\|(?<drupal_uid>[^\|]*)\|(?<drupal_link>[^\|]*)\|(?<drupal_message>.*)" ]
    }
  }
}

Now restart Logstash, restart LF, and carry on.

#devops #drupal

So this is just another letter to my younger self, straightening out some mental inconsistencies in how I used to think Memcached worked. Much of this will be in the context of Drupal, since much of my work experience is in the context of Drupal. Memcached is obviously not a Drupal-specific construct, though.

Expository

The first time I ever installed it was at the behest of my senior dev, who suggested just installing it on my laptop and giving it 64M of memory. Drupal is slow as molasses if you don't enable caching, so out of the box it actually comes with a pretty intelligent caching story. By default, however, it caches everything into the database. This is “less than ideal” for sure, but since Drupal came up a long time ago, in the shared-hosting days, and since you could never really know what resources were going to be on the server in the first place, it made sense to use what you knew would be there – namely, MySQL.

This is not ideal since you're still hitting the database – often the bottleneck with Drupal – but it lightens the load significantly from the default and will definitely keep your site up under load, to a point.

One of the first places you start looking for performance improvements is in moving that out to something a little more responsive and purpose-built. Redis is a newer option, but the old reliable standby is Memcached. Drupal has a bunch of tables in the database that start with cache_, and (simplified) they basically all have the format of (cache key)/(value). Move those out into Memcached and rather than hitting the DB, you're hitting memory. This is ideal, since looking something up in RAM is orders of magnitude faster than calling the DB.

This is why big boy sites use Memcached.

Beginning explorations

A bug appeared months ago on our websites. An editor would make a change to a piece of content – say, bolding a word – and upon saving the piece of content, they would frequently see the old version of the article and not the change they had just made. Obviously this is really annoying, because then they have to go and redo the change, which would then usually work.

This got really interesting when I discovered that clearing the cache on the site would then make the change appear. Clearly this was an issue in the cache layer somewhere.

We used to use a big name hosting vendor who built the servers for us, and Memcached was installed on every webserver and given 512M to work with. I knew that the load balancer would route authenticated traffic to the same webserver, so this led to my mistaken notion that each webserver had its own instance of Memcached to work with, and that if the editor hit a different one when saving the page, perhaps they were getting an old version of the article.

This is not how Memcached works, as it turns out.

Go to Memcached.org

So the introductory page of http://memcached.org/ says

Free & open source, high-performance, distributed memory object caching system

What that means for me is that my mental model of each webserver having its own pool and being unaware of the others was incorrect. What really happens is that each server you add to the pool adds to the overall cache size, and objects are distributed among them, each stored only once. I thought we had four 512M instances of Memcached, but we really had one 2G pool.
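The mechanics of that distribution happen on the client side: the client hashes the key, and that hash picks exactly one server out of the pool. Real client libraries use consistent hashing (ketama) so that adding a server doesn't remap every key, but the naive version is just a modulo, sketched here:

```shell
# Toy sketch of client-side server selection: hash the key, take it
# modulo the pool size. Every client doing the same math is what
# turns N separate daemons into one logical pool.
pick_server() {
  key="$1"; shift
  n=$#
  h=$(printf '%s' "$key" | cksum | cut -d' ' -f1)
  shift $(( h % n ))
  echo "$1"
}

# The same key always lands on the same server, no matter which
# webserver asks for it.
pick_server "cache_page:front" web1:11211 web2:11211 web3:11211 web4:11211
```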

The wiki has some interesting notes on the design paradigms that are worth quoting.

The server does not care what your data looks like. Items are made up of a key, an expiration time, optional flags, and raw data.

Funny, so basically the exact same schema as the cache tables in the Drupal database. That's handy.

Memcached servers are generally unaware of each other. There is no crosstalk, no synchronization, no broadcasting. The lack of interconnections means adding more servers will usually add more capacity as you expect.

For everything it can, memcached commands are O(1).

So this means it should basically scale infinitely with the same performance. Whether you have 32M on your laptop or 48G across 6 servers, as we have now in production, the lookup time is constant for a piece of cached data.

What about the problem?

I actually just solved it yesterday. It was this – https://www.drupal.org/node/1679344. Learned a hell of a lot about caching in Drupal and caching in general in the last 6 months before really hunkering down to figure this one out.

#devops

There are other articles on this topic around the internet, but for some reason I could never completely make the mental connection on how Drush aliases worked until recently. It's actually really simple to get started, but most other articles tend to throw all the options into their examples so it kind of muddies the waters when you're trying to set yours up. By “you/yours”, of course I mean “I/mine”.

Simple

My work is an Acquia hosting client, and we have a multisite setup. Aliases are a natural fit for multisite configs, so let's show that first.

<?php

// put this in a file at ~/.drush/local.aliases.drushrc.php

$aliases['foo'] = array(
 'root' => '/path/to/docroot',
 'uri' => 'local.foobar.com' // the local dev URL
);

This is all you need to get off the ground and start using aliases locally. If you then run drush cache-clear drush to reset Drush's internal cache, followed by drush site-alias, you should be presented with a listing of your aliases.

@none
@local
@local.foo

The key to this scheme, and something that I feel was inadequately explained to me even after numerous tutorials, is that the name of the file itself defines the particular group of aliases that this setting will speak to. If you put this into ~/.drush/foo.aliases.drushrc.php, then your list of aliases would look like this —

@none
@foo
@foo.foo

If you're running multisite, you'll have a few more in there —


<?php

// put this in a file at ~/.drush/local.aliases.drushrc.php

$aliases['foo'] = array(
 'root' => '/path/to/docroot',
 'uri' => 'local.foobar.com' // the local dev URL
);
$aliases['bar'] = array(
 'root' => '/path/to/docroot',
 'uri' => 'local.example.com' // the local dev URL
);
$aliases['ibd'] = array(
 'root' => '/path/to/docroot',
 'uri' => 'local.ignoredbydinosaurs.com' // the local dev URL
);


$ drush sa

@none
@local
@local.foo
@local.bar
@local.ibd

Ok, whoop-tee-do, what do you do with that?

Try clearing the cache on one of those sites from anywhere in your file system with drush @local.foo cc all, or clear the caches on every site in that file with drush @local cc all. This is helpful out of the box even without multisite, since you don't have to be in the Drupal file tree to call Drush without getting yelled at for “not having a high enough bootstrap level”, but it becomes a major time saver with multisite, since the alternative would be constantly cd-ing around to run commands from different directories in sites/*.

Nice and simple. Ready to kick it up a notch?

Remote servers

Let's run drush commands on a remote server without having to log in!

<?php

// how about we put this code into
// dev.aliases.drushrc.php

$aliases['foo'] = array(
 'root' => '/var/www/path/to/docroot',
 'uri' => 'dev.foobar.com',
 'remote-host' => 'devbox.example.com',
 'remote-user' => 'ssh_username'
);

$aliases['bar'] = array(
 'root' => '/var/www/path/to/docroot',
 'uri' => 'dev.example.com',
 'remote-host' => 'devbox.example.com',
 'remote-user' => 'ssh_username'
);

This would grow your list of aliases thusly —

$ drush sa

@none
@local
@local.foo
@local.bar
@local.ibd
@dev
@dev.foo
@dev.bar

...and would let you run any old Drush command you want without having to even be bothered with logging in to that server!

Lots more examples and info out there, but this should get you started.

#drupal #devops

I remember being very confused by this one early on. There were boatloads of tutorials on how to change your $PATH, but what that even means in the first place I just kinda had to figure out over the course of it all. It's actually pretty simple. Here's my attempt.

If you're coming from a Windows background, and you were in the habit of being really fussy about where you installed software on the hard drive, you may have just known how to fire up any old piece of software on your system. You navigated to the application in Windows Explorer and double clicked on it. It was really simple. That icon that you actually clicked on was the “executable”, which is to say the file that starts the whole show.

Unix, Linux, and Mac systems also have executables. On a Mac, it's (represented by) the icon you click on to start the app. When you start getting deeper into development and start using the command line more, you're eventually going to come across some installation instructions that advise you to “update your path” for some reason. They usually give you a copy and paste thing to go along with it. But what does it mean?

Let me give you an example. Here's my path on this laptop right here —

$ echo $PATH
/usr/local/heroku/bin:/Users/jgrubb/.rbenv/bin:/Users/jgrubb/.rbenv/shims:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/X11/bin:/Users/jgrubb/.composer/vendor/bin

Now, never mind that there's a ton of garbage in there, and why is Heroku at the beginning of the path like that? I don't even remember doing that. Anyway, when you first start your computer up, or when you first start a shell session, your environment fires up and loads a bunch of configs. One of the configs that it loads is the list of places to look for the aforementioned executables.

In a nutshell, your computer is going to look down that path from left to right. The different locations are separated by :, so that's a list of places on the computer that will be scanned to see whether “that thing” is installed there. For instance – the Mac comes preinstalled with Vim and Ruby. By default, stuff that comes with the OS is installed in /usr/bin. But if you want a more recent version of Ruby, you might install it into /usr/local/bin (this is where Homebrew puts much of its stuff). If your path did not have /usr/local/bin before /usr/bin, you'd still be executing the system version of Ruby instead of the one you want. It's really simple, but again – took me a while.

So, how do you change it? Presuming you use Bash for a shell, you probably have a file called ~/.bashrc or possibly ~/.bash_profile. If you don't, you can safely create either of those files and put in a line like this —

export PATH=/usr/local/bin:/or/whatever/usually/bin:$PATH

This just says “hey, whatever my path is now, add those two directories to the beginning of it, and then assign that to be my new $PATH”.
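You can watch the lookup happen with command -v, which reports which executable the shell would actually run. Here's a throwaway demo you can paste into a terminal (hello-path is a made-up script name):

```shell
# Create a script in a temp directory, prepend that directory to
# $PATH, and the shell finds it because it scans left to right.
demo_dir=$(mktemp -d)
printf '#!/bin/sh\necho hello\n' > "$demo_dir/hello-path"
chmod +x "$demo_dir/hello-path"

export PATH="$demo_dir:$PATH"
command -v hello-path   # prints the full path inside $demo_dir
hello-path              # prints: hello
```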

Questions?

#devops #generaldevelopment

So yeah. I mean, obviously I can't read your mind and get to all the *real* reasons that you'd be wanting to move on, so I'll just kinda riff on my experience and hopefully it won't come out too narcissistic. I don't really have time to write this down in a narrative fashion, so I'm just going to bullet point some of the thoughts I still remember from that period.

  • One of the things that I remember well is the space that was opened up in my mind for not having a clock ticking in the background until I hit the road again. I'd had that clock in my head for so long that it was strange to not have it there anymore. I imagine that getting out of jail must be really disorienting for the first little while.
  • An interesting thing – my wife and I had been married for a couple years and together for several by this time. We owned a house and had two kids now, but we'd never really lived together until I quit that band. That was a revelation that took some trips to therapy to reveal, because “things should be great – I'm finally off the road, but things are actually terrible! WTF?”.
  • I was also immediately struck by how I was able to form a bond with my oldest son that I'd never have been able to form if I hadn't quit. The lack of that ticking clock also meant that he could count on me being there for bedtime and reading stories and all that. There was no more Dad going out of town for a little while. This was wonderful.
  • Along with this however came unemployment. I skipped out with an idea and a path to follow but not the means or skills to execute the idea, and my path was very high up on a hill from where I was at the time and required a lot of lost, spiritual bushwhacking to get to. This meant scrounging hard for freelance web dev work for the first year, and not making ends meet, which was brutal enough but...
  • Couple that with what I remember most clearly from that period. This was 2010, and a *lot* of people were out of work, so there were more than a few stories in the media about this situation. The more thoughtful of them described the loss of “identity” that comes along with the loss of employment. I can only relate this to my own experience...
    • Being a musician is a very monastic pursuit in that we feel it as a calling from a higher power to follow this path.
    • It is, at the same time, a very self-centered pursuit in that you're basically going out and strutting like a rooster in front of a hall full of people every night and putting all other obligations on the back burner to do so. This requires pretty much total, all encompassing dedication on the part of the musician toward this pursuit. When you take that away (in my humble opinion), you're taking away much more than just a job and a lifestyle. It requires a shift in personal purpose – a mental, emotional, and spiritual retooling that was the most excruciatingly difficult thing I've ever been through. Couple this with the job market in 2010 and I definitely looked back and wondered if I'd done the right thing a lot. (even though I knew I'd done the right thing....)
    • Also, like I said yesterday, the letting go of that dream was really, really hard. I knew it would be, but the difference between committing to and doing is just as hard as the other major life commitments – marriage, fatherhood. After my share of those types of commitments, I realize that there is a strangely dissonant mental space between making and following through on these types of commitments.
  • The gigging musician gets used to having people tell him how cool he is regardless of actual job performance. Turns out this is the opposite of how the rest of the world works. Removing this stimulus is actually a good thing, though. I wrote about this here – Pruning the Ego

I did a lot of yoga and had a lot of therapy for the first year because I needed something to hold on to. Then at this point, Tyler Grant and Billy and Drew basically saved my life by opening up that slot in ENB and then letting me fill it. My very first trip out with them I (seemingly randomly) landed 2 contracts and went from destitute to fully employed within 4 weeks.

Like I said in that post yesterday – it's complicated. I knew at the time that I was making the right decision. I did not know at that time how long the investment would take to actually come back, though. This was basically hubris, but I thought I could engineer a lucky career landing since I'd lived an extremely luck-filled life up to that point.

I've often considered this whole experience to be the actual transition to adulthood for me.


So this is not the cheeriest summary, right? But this was mostly the wartime account, which you'd expect to be burly. After that year, things started steadily improving.

  • I found plenty of work once I actually learned enough about my new trade.
  • We had #3 in 2012, shortly before ENB began to wind back down.
  • I took a full time job, my first ever, in late 2012 because the pressure of working for myself while supporting a family of 5 was just a bit much. It felt sad at the time, but this job has turned out to largely be the team experience I always longed for in RRE, and I'd consider the past 2 years to be as fruitful a creative period as any I've ever had. I'm just not on a stage doing it. Rather, the stage is different and I don't have to pack up every night.
  • I'm making grownup money now, which changes your outlook on life in a really crazy way when you don't have to constantly choose between paying the power bill and buying groceries.
  • I have a really amazing relationship with my wife now, and I can tell you for a fact that we could not have gotten to this place with my still being in that situation.
  • I have 3 of the coolest little boys, and I don't ever have to tell them I won't be home for a little while.
  • I'm happy now. Even though it was fun, I was not happy then.
  • I always knew that I didn't want to be on the road forever, but exactly how to accomplish that didn't reveal itself to me until shortly before I began this blog (6 years ago now).

In short, the investment that I viewed it as at the time didn't work out *how* or *when* I wanted it to at the time, but that's the great thing about life. It has more than worked out – it has come back big time. These subtleties and complexities are what makes life what it is, yknow? Perhaps this is why so many people just keep on doing what they always do, whether they're happy with it or not. It also makes it hard to sum up in a Facebook post.

I salute you though, and anyone who decides to change something they're not at peace with. Please reach out if you ever want to.

#life

When I first started getting into this, I read a lot on PHP and remember clearly having my eyes go crossed when I came across code like this —

<?php
// Example code: Creating Drupal 7 nodes by POSTing from cURL in PHP:

$site = "127.0.0.1/d7";
$user = "someusername";
$pass = "theuserspassword";
$crl = curl_init();
curl_setopt($crl, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($crl, CURLOPT_USERAGENT, 'PHP script');
curl_setopt($crl, CURLOPT_COOKIEJAR, "/tmp/cookie.txt");
curl_setopt($crl, CURLOPT_COOKIEFILE, '/tmp/cookie.txt');

/// etc, etc, etc
// taken from http://blog.ampli.fi/creating-drupal-7-nodes-with-php-via-the-restws-api/

So, in a nutshell, curl is a very popular Linux/Unix command line program for doing things across the internet – downloading things, uploading things, hitting remote servers with requests, etc. Curl has a wealth of options that you can set with command line flags (-I to fetch only the response headers, -d to post form data, -X to specify an HTTP method, etc).

PHP has built-in functions for working with curl, thereby making it easy to programmatically make HTTP calls to other servers. Of course, to really be able to use curl in a way that's analogous to its usage on the command line, you need a way to set those flags. curl_init() sets all that up for you, and all you really need to do after that is set whatever flags you need with calls to curl_setopt().

See the docs here – https://php.net/manual/en/function.curl-init.php.

#php #generaldevelopment

Expository info (skippable)

So, for nigh 6-7 years now I've been a Rails enthusiast. I bought the PragProg AWDWR book when it was covering Rails 2.x and had the beach and hammock on the cover, and then proceeded to take years to figure out everything the book was actually talking about, from the bottom of the stack to the top. I find it very enjoyable to be able to get my ideas out (in code), and Rails is still one of the cushiest frameworks around in terms of ease of use. It's almost as though it were a central tenet of its philosophy...

For nigh 6 years now, I've been a Drupal professional. Drupal isn't the sexiest platform on the block, but it's been marketed in an absolutely genius manner and has seen incredible traction in sectors that have jobs-o-plenty — government, educational institutions, publishing, basically anything that needs a robust CMS. I do not, however, find it very enjoyable to work with. The aspects of Drupal that are the most “Drupalistic” – the Form API, render arrays, “configuration as convention” – just feel so out of sync with how the rest of the development world does things. However, see the above point about jobs-o-plenty, and there is still plenty of fun code to be written around the edges of a larger and more mature Drupal installation.

Drupal 8 is going to come out some day though, and with it will come the invalidation of pretty much everything the wider Drupal community has ever known about writing code for Drupal. It's going to require those of us who enjoy writing code to basically start from scratch with a new framework, only this framework is GIGANTIC and carries with it plenty of interesting architecture opinions and non-opinions alike from its 12-year history. So I suspect I'm not alone in thinking “if I have to learn an entirely new thing anyway, will learning D8 be the most effective use of my limited time?”

—

So, given these factors, I've been watching Laravel for a long time, since V3, but I didn't really start playing with it until this most recent major release – V5. My initial thoughts were “oh, this is basically PHP on Rails” – I mean this as a compliment. But since I already knew enough about Rails to be effective and have fun, why not spend the time learning Rails better? So that's what I did, before the Drupal community at large took up “get off the island” as a mantra.

For the last few months though, I've spent a lot of time building small-ish things with Laravel, and have even had the team begin a Lumen-backed project. I've been going back through the old Laravel podcasts, since I think podcasts are a wonderful way to absorb the philosophy of whichever developer they have on. (Kudos to you Taylor for consistently being on the podcast about your framework. That actually says a lot about your dedication to moving this whole thing forward).

Ok, with that long winded exposition out of the way, a few thoughts...


The aforementioned ruminations

  • I like how Laravel is immediately familiar to someone who's worked with Rails down to the API of say, establishing relationships in Eloquent or the up and down methods on migrations. Almost any server-side framework that has come since Rails in any language has been either an embrace of or a reaction to its widely marketed “opinions”. I was listening to an older Laravel podcast where the topic was some haters in the PHP community (supposedly) accusing Laravel of being “too Rails-y” or something. I consider this a plus to the framework (obviously), and I think designing a framework around the opposite paradigm – trying to not make it too Rails-y – immediately builds walls around our various yards in the pan-linguistic developer community. If the rules and the terminology are similar, even if it's in a different programming language, we can all get up to speed with the new thing faster and stop wasting our time learning new concepts that are actually the exact same concepts by a new name.
  • I like how Laravel takes Rails' opinionated-ness and actually goes a step further. Rails does not have Auth built into the core. For some reason Rails has Active Job and Turbolinks and Coffeescript, but not Auth. I'd challenge anyone out there to find me a public Rails app without Auth. Building Auth into the framework is an obvious move, and I thank Laravel for doing so. Same goes for billing, or having a Redis cache driver, or having built-in support for queued jobs. I was listening to a really old Laravel podcast last night, and Taylor basically described exactly the interface that now exists in L5 – a Billable trait that only uses Stripe and that's that. The kind of simplicity that can only come out of being opinionated. The kind of simplicity that Drupal can never have precisely because of its lack of opinions in the architecture.
  • Though it's got its share of fans, I don't get a sense of religion off of Laravel (yet). There's a religion around Drupal, and I think religion is mostly a dangerous thing in that it encourages its followers to follow, but not to ask too many questions of its leaders. The relatively insane pace of development and major version bumps and refactoring of the file structure in these bumps is a double edged sword, but it kinda keeps everyone from getting too set in the “old ways” and offers to me a subconscious clue that Laravel is still casting about for the “righter” way to do things, that nothing is too sacred yet.
  • With that, I like that Laravel is finally offering a LTS release. This will allow me to actually sell it to my boss, and feel better about investing time in really learning the thing. I'm feeling more than a little burned by Angular (talk about learning a mountain of new jargon), but at least the whole Angular 2 flap is a good learning experience for evaluating a new technology. “Be skeptical of shiny new things.”

#laravel #rails

We are currently running a Drupal multisite installation on Acquia's enterprise cloud. We have a bunch of different domains for the various sites, as well as the various environments in which they run. The development domains look like pddah.dev.abm, pddah.staging.abm etc, presumably to prevent them from being accessed from the outside world.

This setup requires a rather voluminous sites.php file in the root of the sites/ directory to map all the potential incoming hostnames to their correct websites.

A simpler way around this is to make use of how Drupal maps incoming hostnames to the correct sites/* folder in the first place.


If there is nothing in the sites/ folder except for default, then that is what will get loaded no matter what the incoming domain is. This is Drupal's default config, in fact. If you want to go multisite, you create sites/* directories for each of your websites' domains and Drupal will figure it out for you. But its rules for how it routes are a little bit liberal.

For example, I'm running pddnet.com, but the website actually exists in www.pddnet.com. I only have a pddnet.com folder in sites/ though, so that means that any subdomain of pddnet.com will also route to that directory. If I create a local development domain local.pddnet.com, assuming my local network and apache configs are in order, Drupal will load the config out of the pddnet.com directory without having to do any more work or add anything to sites.php.

This means that you can create dev.pddnet.com, staging.pddnet.com, whateveryouwant.pddnet.com and provided the network plumbing is right between here and there, it'll just work.
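The lookup itself lives in Drupal's conf_path(), which tries progressively less specific candidates before falling back to sites/default. Here's a rough shell sketch of just the subdomain-stripping part (the real function also folds ports and URL path segments into the candidate names):

```shell
# Simplified model of Drupal's sites/ resolution: try the full
# hostname, strip the leftmost label until a directory matches,
# fall back to sites/default.
find_site() {
  host="$1"
  while :; do
    [ -d "sites/$host" ] && { echo "sites/$host"; return; }
    case "$host" in
      *.*) host="${host#*.}" ;;
      *)   break ;;
    esac
  done
  echo "sites/default"
}

cd "$(mktemp -d)"
mkdir -p sites/pddnet.com sites/default
find_site "dev.pddnet.com"      # prints: sites/pddnet.com
find_site "local.pddnet.com"    # prints: sites/pddnet.com
find_site "unrelated.example"   # prints: sites/default
```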

Of course, this also requires having the same settings file in all of these different environments, which means that either you have to have the same DB settings in every environment, or you need to figure out some other way to load env-specific config into that file.

Acquia has a methodology that I'm probably under NDA to not divulge here, but it was devised in an era before modern PHP was a thing. These days we have tools like phpdotenv, and it's that tool that I'm exploring currently for some work that we're doing here that'll span multiple environments.

When I work out how best to integrate it with Drupal, I'll let you know. So far so good though.

#drupal