Ignored By Dinosaurs 🦕

devops

Nginx configs

So I recently had a couple of seemingly disparate tasks come across my desk. We had just launched an HTML mobile app, an Angular front end to our Drupal sites. We decided to completely decouple the mobile app from the Drupal codebase after a fairly long exploratory period trying out different approaches.

When launch day finally came, we set up the mobile app at app.$BRAND.com, with our main sites at www.$BRAND.com. Acquia has this cool feature in their Varnish layer that will filter user-agent strings for ones that match a set of “mobile” user agents they've defined in a central file. So flipping folks on iPhones over to the mobile site was a piece of cake. What I forgot was that the same logic wouldn't be triggered for the reverse — flipping desktop users from app.$BRAND.com back to the desktop version of the site. (Our mobile app is hosted on our own servers, not with Acquia.)

I already had a big list of regexes to test mobile user agent strings with (thanks Acquia!), so the trick was to recreate that in Nginx, the webserver for our mobile app.

Not wanting to do a bunch of evil if {} statements in the Nginx configs, I cast about for a more elegant solution, eventually stumbling upon map.

The Nginx map module

http://nginx.org/en/docs/http/ngx_http_map_module.html

So basically what this does is test any arbitrary Nginx variable for some condition and spit out a custom variable that you can use in your config. An example —

map $http_user_agent $device_redirect {
  default "desktop";
  ~(?i)ip(hone|od) "mobile";
  ~(?i)android.*(mobile|mini) "mobile";
  ~Mobile.+Firefox "mobile";
  ~^HTC "mobile";
  ~Fennec "mobile";
  ~IEMobile "mobile";
  ~BB10 "mobile";
  ~SymbianOS.*AppleWebKit "mobile";
  ~Opera\sMobi "mobile";
}

This takes a look at the incoming user agent string (fun fact — grab any request header with $http_NAME_OF_HEADER) and compares it against a set of regexes. If one of them is a match, then the $device_redirect variable gets set to “mobile”, otherwise, it's set to the default of “desktop”. This gets used later in the config —

if ($device_redirect = "desktop") {
  return 301 $scheme://$desktop_host$request_uri;
}

In other words, if the user agent doesn't match one of those regexes, redirect the user to the desktop site. Pretty neat!
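If you want to spot-check which bucket a given UA string lands in without firing up Nginx, here's a rough shell approximation of the same matching logic — the regexes are re-expressed for grep (which has no `(?i)` flag), so treat it as a sketch, not the config itself:

```shell
#!/bin/sh
# Rough grep-based equivalent of the Nginx map above, for spot-checking.
classify_ua() {
  if echo "$1" | grep -Eq \
    -e '[Ii][Pp](hone|od)' \
    -e '[Aa]ndroid.*([Mm]obile|[Mm]ini)' \
    -e 'Mobile.+Firefox' \
    -e '^HTC' -e 'Fennec' -e 'IEMobile' -e 'BB10' \
    -e 'SymbianOS.*AppleWebKit' -e 'Opera Mobi'
  then echo mobile; else echo desktop; fi
}

classify_ua 'Mozilla/5.0 (iPhone; CPU iPhone OS 7_0 like Mac OS X)'   # → mobile
```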

As a side note, does anyone else think it's weird that the comparison syntax in that if statement only uses one '='? But yeah, that's the right way.
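Putting the two pieces together, a minimal sketch looks like this. Note that map blocks have to live at the http {} level, outside any server {}; the hostnames and the $desktop_host variable here are stand-ins, not our real config:

```nginx
http {
  map $http_user_agent $device_redirect {
    default "desktop";
    ~(?i)ip(hone|od) "mobile";
    # ...the rest of the regexes from above...
  }

  server {
    listen 80;
    server_name app.example.com;        # stand-in for app.$BRAND.com
    set $desktop_host www.example.com;  # stand-in for www.$BRAND.com

    if ($device_redirect = "desktop") {
      return 301 $scheme://$desktop_host$request_uri;
    }

    # ...the rest of the mobile app config...
  }
}
```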

Later that day, on a different Angular app

So that mobile app and this one that I'm about to talk about kinda conform to the Drupal concept of “multisite”. That is, a single codebase that serves a variety of different sites. I figured out a pretty simple hack for this one that maybe I'll share in another blog post if I get around to it, but basically it involves setting a siteVar in the bootstrapping phase of the Angular app based on window.location.hostname. I have a Config Angular service that stores the mapping of hostname to siteVar. It's easy and it works.
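The bootstrap-time lookup amounts to something like this — the hostnames and function names here are made up for illustration, not our real Config service:

```javascript
// Sketch of the hostname -> siteVar lookup done at bootstrap.
// Hostnames below are hypothetical stand-ins.
const siteMap = {
  'app.rdmag.com': 'rd',
  'app.alnmag.com': 'aln',
};

function siteVarFor(hostname) {
  // In the app, hostname comes from window.location.hostname
  return siteMap[hostname] || '';
}

console.log(siteVarFor('app.rdmag.com')); // → rd
```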

The way we serve different stylesheets to different brands is by setting up a sites/$BRAND/ folder that houses site specific styles, etc. When Angular bootstraps, it uses the siteVar variable to fill in $BRAND, and the site specific stuff is loaded. It's easy and it works. Except in Firefox.

Firefox doesn't like this setup, and particularly on the favicon, it proved to be a real PITA.

The default Yeoman favicon would always show up in Firefox, nothing else, since we had that favicon path set dynamically by Angular after the app loaded. It just wasn't responding to any of the usual hacks, and it turns out FF has a long and storied history with the favicon.

Having just found the perfect hammer for the previous problem, I thought I'd see if it could solve this one.

Map all the things...

So for this one, I have an Nginx map companion to the one that I have in Angular.

map $http_host $sitevar {
  default "";
  ~rdmag "rd";
  ~alnmag "aln";
  ~bioscience "bt";
  ~cedmagazine "ced";
  # etc...
}

This maps the incoming host variable on the request to a $sitevar variable, used like this...

location /favicon.ico {
  try_files /sites/$sitevar/favicon.ico $uri;
}

So the browsers that respect the dynamic favicon path continue to work, and if FF never cooperates, Nginx catches that request and fixes the problem before anybody knows...

#devops #generaldevelopment #angular

I've finally been getting around to using Redis for caching on this blog and other Rails projects I've got lying around, and I've finally gotten around to Homebrewing PHP 5.5 for my local setup at work. Jose's default PHP 5.5 brew doesn't install the new OPcache, so look around and make sure you install the variant that includes it, because it screams.

I guess you could fairly say that caching has been on my mind a lot lately as I also finally got around to installing and using Memcached here in development with our company's Drupal sites. That's also a separate Jose homebrew install, so look around for that, too. Anyway, after fiddling around with all the Drupal configs that I had commented out previously that dealt with Memcached, I got it up and running. Make that hauling ass. It's a lot of fun.

So of course, the next step is to install some outdated script that gives me some visibility into the cache statistics. Here's where I get to the point.


Out of the box, the Homebrew installation of Memcached gives you 64M, which is also what you get if you install it from Aptitude on Ubuntu. I've finally done enough Ubuntu to get that most config files are stashed somewhere in /etc (/etc/memcached.conf), but where do these files live on the Mac?

This is the only downside to Homebrew, that most of the brewers (and God bless y'all, not complaining) pick some arbitrarily weird places to stash stuff, and crazy commands to access the executables. mysql.server start? What distro does that come from? It just doesn't make a lot of sense, but I guess that's what keeps your mind sharp.

Anyway, I wanted to feed Memcached some more memory, and I couldn't find the config file for the life of me. Maybe you're here for the same reason. I dug pretty deep into Google before I found the answer. It's in ~/Library/LaunchAgents/homebrew.mxcl.memcached.plist.

Out of the box it looks like this —

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>homebrew.mxcl.memcached</string>
  <key>KeepAlive</key>
  <true/>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/local/opt/memcached/bin/memcached</string>
    <string>-l</string>
    <string>localhost</string>
  </array>
  <key>RunAtLoad</key>
  <true/>
  <key>WorkingDirectory</key>
  <string>/usr/local</string>
</dict>
</plist>


So instead of pointing at a config file, you just pass all the config in the startup command. To bump it up, you add -m 256 (or however much you want to feed it) to the arguments, like this.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>homebrew.mxcl.memcached</string>
  <key>KeepAlive</key>
  <true/>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/local/opt/memcached/bin/memcached</string>
    <string>-l</string>
    <string>localhost</string>
    <string>-m</string>
    <string>256</string>
  </array>
  <key>RunAtLoad</key>
  <true/>
  <key>WorkingDirectory</key>
  <string>/usr/local</string>
</dict>
</plist>

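In other words, the ProgramArguments array is literally the argv of the startup command. A sketch of the equivalent command line (paths assume the default Homebrew prefix of /usr/local):

```shell
#!/bin/sh
# The plist's ProgramArguments array is just this command, split into strings.
MEMCACHED_BIN=/usr/local/opt/memcached/bin/memcached
CMD="$MEMCACHED_BIN -l localhost -m 256"
echo "$CMD"   # → /usr/local/opt/memcached/bin/memcached -l localhost -m 256
```

After editing the plist, a `launchctl unload` followed by `launchctl load` on the file makes launchd pick up the new arguments.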

#devops #generaldevelopment

Because I have to look this up every. single. time.

pcre —

apt-get install libpcre3 libpcre3-dev

zlib —

apt-get install zlibc zlib1g zlib1g-dev

init script

#! /bin/sh

### BEGIN INIT INFO
# Provides: nginx
# Required-Start: $all
# Required-Stop: $all
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: starts the nginx web server
# Description: starts nginx using start-stop-daemon
### END INIT INFO

PATH=/usr/local/nginx/sbin:/sbin:/bin:/usr/sbin:/usr/bin
DAEMON=/usr/local/nginx/sbin/nginx
NAME=nginx
DESC=nginx

test -x $DAEMON || exit 0

# Include nginx defaults if available
if [ -f /etc/default/nginx ] ; then
 . /etc/default/nginx
fi

set -e

case "$1" in
  start)
  echo -n "Starting $DESC: "
  start-stop-daemon --start --quiet --pidfile /var/run/$NAME.pid \
  --exec $DAEMON -- $DAEMON_OPTS
  echo "$NAME."
  ;;
  stop)
  echo -n "Stopping $DESC: "
  start-stop-daemon --stop --quiet --pidfile /var/run/$NAME.pid \
  --exec $DAEMON
  echo "$NAME."
  ;;
  restart|force-reload)
  echo -n "Restarting $DESC: "
  start-stop-daemon --stop --quiet --pidfile \
  /var/run/$NAME.pid --exec $DAEMON
  sleep 1
  start-stop-daemon --start --quiet --pidfile \
  /var/run/$NAME.pid --exec $DAEMON -- $DAEMON_OPTS
  echo "$NAME."
  ;;
  reload)
  echo -n "Reloading $DESC configuration: "
  start-stop-daemon --stop --signal HUP --quiet --pidfile /var/run/$NAME.pid \
  --exec $DAEMON
  echo "$NAME."
  ;;
  *)
  N=/etc/init.d/$NAME
  echo "Usage: $N {start|stop|restart|reload|force-reload}" >&2
  exit 1
  ;;
esac

exit 0

Also, try running configure with

./configure --with-http_realip_module --with-http_gzip_static_module

#devops

Russian Doll caching

It's a well branded name for something that makes total sense after reading one blog post. The second half of this blog post will actually get you pretty much there.

If you're coming from a PHP/Drupal background like me, you might be surprised to find out that the database is not the bottleneck in Rails-land. Whereas on your typical Drupal page you might have anywhere from 50 to 1000 database queries being run, you'll be hard pressed to find a Rails app that comes anywhere near that number of DB calls being made. The Hobo companion, even on the front page (the heaviest page in terms of data so far) only runs about 8-10 queries.

What takes so long in Ruby land is view rendering.

I've witnessed this first hand on my job's Redmine install. I'd have thought that what was taking so long was the database, but what actually takes 95% of the time on each page load is the view rendering. I guess the Basecamp folks noticed the same things, so they went to work on how to speed that up.

Fragment caching

There's always a blurb in every article about Rails caching that has to do with “fragment caching”. Basically, you cache little bits of each page as they're rendered, and the next page request pulls the rendered HTML from the cache to reassemble your page. It's simple, except that it's not. You've heard the old adage about the two hardest problems in computer science – cache invalidation is the PITA in this one. That basically means making sure (somehow) that you're not serving stale fragments when something has been updated in the meantime. I'm not sure what the old scheme was for taking care of this, but it wasn't friendly or intuitive.

Fragment caching ++

The solution that they came up with involved making a digest of the actual object being rendered part of the cache key.


All blog posts
--------------

<% cache [ "archive", @posts_by_year.first.first ] do %>
  <% @posts_by_year.each do |year| %>
    <% cache [ "archive", year[1].size ] do %>
      ### <%= year[0] %>


      <% year[1].each do |post| %>
        <% cache [ "archive", post ] do %>

        #### <%= link_to post.title, post_date_path_helper(post) %> - <%= post.created_at.strftime('%D') %>

        <% end %>
      <% end %>
    <% end %>
  <% end %>
<% end %>

This is the view that renders my blog's index page. I'm still getting the hang of how to name these cache fragments, but the idea is that you recursively wrap each rendered fragment on the page. If one item gets updated, the digest for that item changes, and its cache key changes as well. The next time the page is rendered, there's no value in the cache for that key (because the key is based on the digest of the object), so that fragment gets re-rendered and stored under the new key, and every fragment that wraps it gets invalidated and re-rendered as well. Rather than re-rendering everything on the page from scratch, though, the items that haven't changed are pulled straight from the cache. The vast majority of the page will not have changed, and will still be alive and well in the cache. In this way, the whole page can be 95% cached, and only the parts that changed have to make the whole trip through rendering.

It does still call your database to get the objects for digesting, but as we've already discussed, this is a small cost comparatively. Down the road there are solutions to this issue as well, once your optimizations get to that point.
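The key mechanics can be sketched in plain Ruby. To be clear, this is an illustration of the idea, not the actual Rails implementation — Rails derives keys from the model's id and updated_at plus a digest of the template — but the effect is the same:

```ruby
require 'digest'

# Plain-Ruby sketch of digest-based cache keys (illustrative only).
Post = Struct.new(:id, :title, :updated_at) do
  def cache_key
    "posts/#{id}-#{Digest::MD5.hexdigest(updated_at.to_s)}"
  end
end

post = Post.new(1, "Hello", "2013-11-27 10:13")
old_key = post.cache_key
post.updated_at = "2013-11-28 09:00"  # an update...
new_key = post.cache_key              # ...yields a brand-new key, so the stale
                                      # fragment is simply never read again
```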

Cache store

When I first started implementing caching on this site, I started off easy with Rack Cache. It's a simple HTTP cache that causes the whole page to be stored with a TTL time on it. The TTL is set in the controller with something like expires_in 5.minutes, public: true. Once I started moving into the fragment caching business, I moved out of Rack Cache and into using memcached as the store. It's easy to set up. So easy I'll probably never write a post about it. It just works.

It did, however, seem to take up a fair share of memory – as you'd expect from a memory cache store. I already had Redis running for queuing background jobs via Sidekiq though, so it occurred to me over dishes that I should give that a try. Turns out it's just as easy as memcached. Just swap out gem 'dalli' in your Gemfile for gem 'redis-rails' and change config.cache_store = :dalli_store to config.cache_store = :redis_store. It seriously doesn't get any easier. Redis is a lot like memcached, except that it has some more advanced features that I might never use. It also writes to disk every now and then, so if you restart your box, Redis can keep the cache warm rather than losing everything it's stored.
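Spelled out, the swap is just this (standard Rails file locations assumed):

```ruby
# Gemfile
gem 'redis-rails'   # was: gem 'dalli'

# config/environments/production.rb (or wherever you set it)
config.cache_store = :redis_store   # was: :dalli_store
```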

#rails #devops

For a while now I've been leaving my work computer on 24/7 and remoting in to do all my work when I'm out in the world. I have the same public key on both machines (perhaps not the safest thing in the world, true), so I was confused by why I needed to enter my password every time I logged in to my work computer. Not so confused that I took the time to figure it out until recently.

I had set permissions on the authorized_keys file to 400, but for whatever reason my setup needs it to be 600.

doesn't work

-r-------- 1 grubb staff 410 Nov 27 10:13 authorized_keys

works

-rw------- 1 grubb staff 410 Nov 27 10:13 authorized_keys

#ssh #devops

After reading the news about FastCGI support landing in HHVM, I had to finally give it a try. I'll assume that you're at least familiar with HHVM and its design goals, but if you aren't, the HHVM wiki is a good place to start. In a nutshell, it's a new PHP runtime courtesy of Facebook that, if you can get it working, promises to run circles around any PHP interpreter currently on the market.

So I came into work on Wednesday fired up to give it a try. I wasn't expecting anything at all, since the sites I work with for my day job are pretty large and have a decent number of contrib modules installed. In case you're wondering, core Drupal is supposedly 100% on the HHVM now, but contrib is a different story.

The first thing you should know is that it only runs on 64-bit versions of Ubuntu, so head over to Digital Ocean and fire one up. I prefer 12.04, so that's what I conducted this experiment on. The first link above gives instructions on how to install HHVM via apt, so that's the route I went. I first tried on the tiny little box that this site runs on, which is a 32-bit version of Ubuntu, and while the apt repo would update itself with the new HHVM repo, it wouldn't install. So, onto plan B, which involved a 1G box running a 64-bit version.

This one installed from the HHVM repo without a hitch — init script in /etc/init.d/ which pointed to some configs in /etc/hhvm. A quick perusal of that config script, which looked very much like an Nginx config, looked pretty straightforward. Installing Nginx and git and everything else I need to stand a site up was routine. Looking good. So sudo service hhvm start, and we're off to the races. top showed the hhvm process running as www-data, so I hit the root URL. Page not found was the only feedback I got. curl -I gave me an x-something-something header that said HPHP, so I was puzzled. HipHop was listening on port 80 and was directly catching the web traffic, as opposed to standing behind Nginx and listening on port 9000.

It took me about 30 minutes of fiddling around, but the real clue was in the instructions for how to start HHVM on a system that doesn't have a repo installer.

cd /path/to/your/www/root
hhvm --mode server -vServer.Type=fastcgi -vServer.Port=9000

So, I took another look at the /etc/hhvm/server.hdf config file that the init script was pointing to and noticed that it was set to listen on port 80, not port 9000, and it was set inside of a Server {} block. That Server {} block looked like the perfect place to put Type = fastcgi, so I did, and changed the port to 9000. The docs indicated that the process needs to be started from the root of your PHP app, but that might only apply to non-fastcgi HHVM. I started it from there anyway, and I finally had the thing working.
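For reference, the relevant edit amounted to something like this in /etc/hhvm/server.hdf — reconstructed from memory, so treat it as a sketch and check it against the shipped file:

```
Server {
  Type = fastcgi
  Port = 9000
  # ...the rest of the shipped Server settings stay as-is...
}
```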

Standard “standing an existing Drupal site up on a new box” fiddling around with connections and file system settings and I actually got the thing to stand up after about 90 minutes of playing with it.

Thoughts

I'm a pretty good sysadmin for a front end developer, but I'm still just a front end developer. It was, however, very easy to find my way around the configs and start to get a sense of what HHVM does. I ended up upsizing the box to a 4G instance for a little bit after it started giving me some memory-related errors that I didn't have the skills to diagnose. I have no idea if that much RAM is needed, but it cost about $.20 to bump it up for a couple hours and find out.

The beautiful thing about it was that once I finally figured out how to stand it up correctly, my existing Nginx/FPM config (a derivative of Perusio's) worked out of the box with absolutely no other intervention. When I got stuck once by a cache clear that I suspect was the real cause of my memory issue, I shut HHVM down, brought FPM up and got unstuck. After I was stabilized again, I shut FPM back down and brought HHVM back up. It was seamless.

I finally gave up after a little bit because although it was working, the HHVM logs were full of notices, and I repeatedly hit a wall with some function, first in the gd lib and then in apachesolr, that didn't like the input it was being fed. I had to get back to “real work”, but I will dive back in very soon and hopefully be able to contribute some feedback. The maintainers are extremely friendly and active on IRC.

I'm very, very excited for the future of this project and would highly recommend giving it a try. I was amazed that I was able to get anything to load on it at all, and am salivating at the thought of actually getting the VM warmed up with a bunch of repeated requests. When it is ready for prime time across a wide variety of PHP apps, it's going to change the way people think about interpreted languages in general, and is going to single handedly contribute to a faster web. For this, we thank you Facebook.

#drupal #devops

Hi there, probably-front-end-dev-who's-met-and-used-Sass-and-likes-what-they-see. This is for you.

RubyGems

Sass is made out of Ruby – it's a very pleasant, general purpose programming language that's pretty easy to learn and like. Ruby has a package management system whereby libraries of Ruby code are bundled up into what's known as “Gems”. Sass is a gem. When you install it, you get a couple of new executables to play with in the terminal, namely sass and sass-convert. The latter of these will help get you started with Sass by converting your straight CSS to Sass. RubyGems inspired PHP's new-but-already-dominant package manager, Composer.

rbenv

If you are a Mac user, and you are using the version of Ruby that came with your Mac, you are using a version of Ruby that's actually beyond End Of Life. If all you're ever interested in is Sass, it'll keep working for a while longer but eventually you'll be left behind. A relic. This is the bad news. The good news is that the Ruby community has been working on this problem for a while.

[!info] Because Ruby 1.9 came out a while back and has a bunch of cool new stuff in the form of performance enhancements, syntactic polish, and overall love via its contributors, and because 1.8 is in life's endzone, and because using outdated versions of open source software just isn't your preferred thing, you'll want to use 1.9. This is how.

The most commonly blogged about solution to this in the Ruby world is RVM. We're not going to talk about that. We're going to talk about a solution called rbenv. Rbenv is a more recent and lightweight solution to this multiple Ruby versions problem that doesn't require sudo to install and update Gems, and allows you to install almost any version of Ruby you desire (of which there are plenty, but that's more than you need to know right now).

Rbenv works on any *nix based system and installation is super simple

$ git clone https://github.com/sstephenson/rbenv.git ~/.rbenv

This installs rbenv, the version manager. Add rbenv to your $PATH -

$ echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bash_profile

$ echo 'eval "$(rbenv init -)"' >> ~/.bash_profile

(Zsh users put those two lines in ~/.zshrc).

You might as well go all the way and install the Ruby version installer, a separate tool – ruby-build.

$ git clone https://github.com/sstephenson/ruby-build.git ~/.rbenv/plugins/ruby-build

At this point, you'll reload your shell – exec $SHELL – and you're ready to rumble. Ruby 2.0.0 was released earlier this year, so unless you really like living on the edge, 1.9.3 is a safe bet.

rbenv install 1.9.3-p448 – (the most recent release as of this writing — refer to the changelog).

I almost forgot to mention rbenv rehash – probably the rbenv command you'll use the most. “rehash” basically tells rbenv to reload itself after you gem install any new gem that comes with an executable (like Sass). If you install a new gem and for some reason your computer acts like it has no idea, it's almost certainly this.

Incidentally, both of these tools were written by the same guy – Sam Stephenson. He works at 37signals, the home of Basecamp, the original Ruby on Rails app, created by a mystical figure known simply by his initials.

Matz/DHH

Super quick Ruby history lesson...


Ruby recently celebrated its 20th birthday which, without doing any consultation of Wikipedia, makes it roughly the same age as PHP. Ruby's creator and spiritual leader is a guy named Yukihiro Matsumoto, or Matz for short. PHP obviously grabbed its share of the market more quickly, and Ruby scarcely got off the island of Japan for about the first half of those 20 years, until it was catapulted onto the world stage by one man – DHH.

DHH is a charismatic developer from northern Europe with a fondness for business and hair gel. DHH cast off his PHP chains when he found Ruby, created an honest to god framework out of it, open sourced it, and then ran with it. Much of Rails' rise to prominence coincided with the rise of Github and the two together are probably largely responsible for touching off Git's adoption in the greater marketplace.

Rails has impacted the design of almost every single web framework that has come since, in any language, either directly borrowing from its ideas or reacting to its opinions. Sass came in its wake, and here we are.

The Well Grounded Rubyist

If I can recommend one Ruby book to get, it's The Well Grounded Rubyist by David Black.

Black is one of the few western developers who has been doing Ruby since before Rails came along, and is a preeminent authority on the language. This book was my introduction to Ruby's version of OOP, which is indescribably more elegant, consistent, and to-the-point than PHP's, and reads almost like a great novel in the way that it builds in intensity from beginning to end and rewards repeated readings. No, I'm not shitting you.

#drupal #ruby #devops


The program "postgres" was found by "/usr/local/Cellar/postgresql/9.2.4/bin/initdb"
but was not the same version as initdb.

I've been battling this for the last couple of hours, trying to figure out why I can't make Postgres run as easily on my desktop as I did on my laptop. Homebrew took care of it all, just leaving me with the agony of taking off the MySQL training wheels to figure out this new and scary Postgres admin syntax.

So I uninstalled the Homebrew version and went to the EnterpriseDB site and downloaded the official installer for Mac. This didn't yield any results either, and seemed to want you to use the GUI tools to administer it anyway. which psql kept giving me /usr/bin/psql, which should've been more of a clue, but I'm not that quick. psql --version kept giving me 9.0.2, which also should've been more of a clue, but I just figured I must've installed Postgres a long time ago and gave up and forgot about it.

Then I remembered. Mac OS X Server comes with Postgres. That's why it's reporting /usr/bin for all its paths instead of /usr/local/bin (the Homebrew default), or /Library/PostgreSQL, the official installer default.

There was also an unkillable set of _postgres processes in ps that I couldn't figure out how to kill. So, the flamethrower method is to delete everything in /usr/bin that relates to PG – psql, postgres_real, anything you can find. Don't forget /usr/bin/initdb because that's what was throwing the above error. Then you can get on with the homebrew installer.

#postgres #devops

I'm the honcho in charge of our in-house bug tracker – Redmine. Redmine is a rather large Ruby on Rails project, thus nobody in house when I started here had any knowledge or interest in maintaining the thing, since Ruby servers have a bad rap for being kind of finicky to set up, at least in relation to PHP. So it goes.

I recently upgraded to the latest stable release – 2.3.1 – and decided to 86 Passenger as our app server in favor of Unicorn. I've been setting up all my Ruby servers with Unicorn lately and find it to be easier than Passenger, even though ease of deployment is Passenger's whole selling point. I find the Nginx reverse proxying back to a pool of app servers, à la the PHP-FPM setup that's running this site currently, to be an easy mental model to get my head around. I get Passenger's mod_rails/mod_php approach, I just prefer the other.

Anyway, sorry.

So I upgraded the whole infrastructure last week to Ruby 1.9.3-p392 and Redmine 2.3.1, and everything went fairly smoothly. I was alerted to a bug this morning though, where attachments were being mangled. Basically, everything was being truncated to the first 48k of the file, and this applied to images as well as PDFs and Excel spreadsheets. I dove into the files/ directory in Redmine and saw that the files were all there, and that the file sizes were correct, so they were getting to the server, just not coming back.

I suspected I'd broken something with the new app server, but it took me a while to track it down. I'd previously been running the Nginx process and the Passenger process as the same user on the server, something I changed in this recent deployment. After trotting through all the Unicorn logs, the Rails logs, and what I thought were the Nginx logs, I found some “other” Nginx logs that happened to be the ones that were actually being used now. They were filled with this —

2013/05/22 14:13:58 [crit] 17604#0: *9408 open() "/opt/nginx/proxy_temp/7/06/0000000067" failed (13: Permission denied) while reading upstream, client: 65.126.154.6, server: _, request: "GET /attachments/download/2323/AdvertiserEmailLeads_with_Verbiage_Changes.xls HTTP/1.1", upstream: "http://unix:/tmp/redmine.sock:/attachments/download/2323/AdvertiserEmailLeads_with_Verbiage_Changes.xls", host: "redmine.advantagemedia.com", referrer: "http://redmine.advantagemedia.com/issues/4958"

2013/05/22 14:14:43 [crit] 21936#0: *8 open() "/opt/nginx/proxy_temp/1/00/0000000001" failed (13: Permission denied) while reading upstream, client: 65.126.154.6, server: _, request: "GET /attachments/download/1454/Balluff_112012html5.zip HTTP/1.1", upstream: "http://unix:/tmp/redmine.sock:/attachments/download/1454/Balluff_112012html5.zip", host: "redmine.advantagemedia.com", referrer: "http://redmine.advantagemedia.com/issues/4229"

So that's a good thing, because we're getting really warm by this point. Basically, the /opt/nginx/proxy_temp directory was full of proxy temp files that were still owned by the old nginx user. Now that the Nginx process was running as user nobody, the ownership and permissions were wrong. So a chown -R nobody /opt/nginx/proxy_temp and everything was right with the world.

#devops

At my gig we host our sites on Acquia's dev cloud. The dev environment is pretty locked down, obviously, since you don't want multiple, publicly accessible copies of your sites floating around out there, especially when it could be in any state of brokenness at any given time. So the way we do it is to use .dev domains that aren't publicly routable via DNS. We have a big ole master hosts entry in the local network that takes any of those dev domains and routes them to the proper IP.

Today however, I go out and my car won't start. This means I'm working from home, and that I need to add these host entries on my local machine myself since I'm not on the company network. As I type this I realize I could VPN, but that's no fun and I'm already done with this method that I'm about to explain.

So, I go to Acquia's cloud panel thingy. They have a “servers” menu item, but it only gives you the names of your servers, not IP addresses. Oh, and we also have a load balancer sitting in front of everything that I can't log in to, so no getting the IP that way (via ifconfig, presumably). All they give you is this really vague note —

The following tables show all of the servers and the services(s) they provide for each of your site's environments. Each server's full DNS name is server.prod.hosting.acquia.com.

So anyway, the answer is the dig command – more or less a DNS Swiss Army knife.

dig server.prod.hosting.acquia.com spits back a wealth of info at you, including the IP address of your load balancer, which you can then put into an entry in your /etc/hosts file.
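If you just want the IP, dig +short does it in one shot. And if you're parsing the full output, the A record is easy to pick out — here's the idea with a canned answer line, since the real hostname and IP are internal (the ones below are made up):

```shell
#!/bin/sh
# Canned dig answer line (hypothetical host/IP) — fields are:
# name, TTL, class, type, value
dig_output='server-123.prod.hosting.acquia.com. 300 IN A 203.0.113.7'
echo "$dig_output" | awk '$4 == "A" {print $5}'   # → 203.0.113.7
```

Then that IP goes straight into /etc/hosts next to the .dev domain.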

Love this stuff...

#drupal #devops