Ignored By Dinosaurs 🦕

javascript

Problemspace

You have a decent-sized project and your deployments are taking a while. Platform.sh rebuilds your entire application from scratch with each git push, so in some cases the process of downloading all those 3rd party packages can take quite a while. We can and do manage local caches of some Composer packages due to our PHP heritage, which helps make composer install a pretty snappy affair, but it's simply not possible to do this effectively with Node.js.

Compounding this problem for npm is the fact that npm's dependency graph, that is, the dependencies of your dependencies, has to be worked out every time you run npm install. This can lead to developers in your org installing different versions of packages, which will cause you problems.

Most other package managers overcome this with the use of a “lockfile”. A lockfile is a file that's generated when you run composer install for the first time, or bundle install if you're working in Ruby. A lockfile is the result of the dependency graph being worked out, with the exact version of each package then recorded. This file is checked into Git, and each dev on the project gets the exact same versions of the packages required for the project.
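For a concrete sense of what that looks like, a composer.lock entry pins an exact version for every package (trimmed here to the relevant keys; the package is just an example):

```
{
    "packages": [
        {
            "name": "monolog/monolog",
            "version": "1.21.0"
        }
    ]
}
```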

Solutionspace

I was listening to the most recent Laravel podcast over the weekend and they got to talking about a new quasi-npm-replacement that had just come out called Yarn.

Yarn aims to be an almost drop-in replacement for npm. There are a number of ways of installing it, but the simplest is just npm install -g yarn. My coworkers thought I was trolling them with that, but it makes perfect sense if you think about it.

The only other step is to run the yarn command locally in order to have Yarn work out your dependency graph and write the lockfile – yarn.lock. Then commit that to git and let's rock and roll on your .platform.app.yaml. We're going to require Yarn in the global dependencies section -

dependencies:
  ruby:
    sass: "*" # not required, just assuming
  nodejs:
    gulp-cli: "*" # same here
    yarn: "*"

And then replace npm install in your hook.build with yarn install instead, like so -

hooks:
  build: |
    yarn install
    gulp default # for a Laravel project

This took my previously 6-minute builds down to about 1 minute. In other words, the time it took out of my build phase was longer than the time it took to completely move from npm to Yarn in the first place. The reason for this speed boost is that Yarn doesn't have to generate the dependency graph every single time (like npm does), since the lockfile already has it worked out, and Yarn downloads the packages in parallel rather than whatever npm does, which must be one at a time.

If you're using npm install as part of your build step on Platform.sh, it's really a no-brainer. Check it out!

#javascript #platformsh

Hoping to land a slot in the SERPs with that title, I'm here to demonstrate today the difference between good 3rd party javascript, and bad 3rd party javascript.

First – the good, as demonstrated by Google Analytics. GA's setup and tracking code, as stepped through here.

(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');

ga('create', 'UA-XXXX-Y', 'auto');
ga('send', 'pageview');

The ga() function is what does the magic, but in essence all ga() does is push its arguments into an array. All of that functionality is contained within those 4 lines. Those 4 lines also create a script tag that loads the rest of the GA lib, which is where the functionality to rifle through that array and report its contents back to GA's servers lives.

If that script tag fails to return, nothing happens. More importantly, nothing bad happens. The array that ga() pushes into is a vanilla JS array, it will always be there. If the GA payload doesn't come back, no worries. Your app or website will never know the difference.
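You can prove that safety property to yourself with a stripped-down sketch of the same pattern (the names here are mine, not GA's):

```javascript
// Stands in for window in the browser.
var win = {};

// The stub: a plain function that queues up its arguments.
win.track = win.track || function () {
  (win.track.q = win.track.q || []).push(arguments);
};

// These calls succeed whether or not the real library ever loads;
// they just pile up in a vanilla JS array.
win.track('create', 'UA-XXXX-Y');
win.track('send', 'pageview');
```

If the real library never shows up, the array just sits there holding two Arguments objects, and nothing downstream ever throws.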

This is how knowledgeable developers build 3rd party libs that don't suck.

In contrast, here's what Adobe's Omniture does.

// A blob of obfuscated js hundreds and hundreds of
// lines long here. The essence of worst practices.
// All of this results in (hopefully) their lib coming
// down via a generated script tag as well. This script
// tag creates a global object - s (so artfully and helpfully
// named) - that has a giant range of methods to deal with 
// the functionality of their platform - pageviews, events, etc

Later, you track events and the like via a function call like this -

s.tl(this, 'o', 'blah'); // event
s.t() // pageview

What happens though, if their huge lib doesn't come back for some reason? Like, maybe a corporate firewall doesn't feel like letting Omniture code track its users? This would result in there being no global s object on which to call these methods. What happens then?

Your website blows up, that's what. Exceptions are thrown, and god help you if you're running a single page app, because it's toast now.

The only solution I've been able to come up with is to wrap everything in try {} catch {}, which is hideous and wasteful and prone to error when you forget to wrap every single piece of Omniture code with it.
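For what it's worth, you can at least confine the ugliness to one place with a little guard function (a sketch: safeTrack is my name, not Adobe's; only s and its methods are Omniture's):

```javascript
// Call an Omniture method only if the global s object actually
// arrived; otherwise do nothing, quietly.
function safeTrack(method) {
  var args = Array.prototype.slice.call(arguments, 1);
  try {
    if (typeof s !== 'undefined' && typeof s[method] === 'function') {
      s[method].apply(s, args);
    }
  } catch (e) {
    // Analytics is not worth taking the whole app down for.
  }
}

// Instead of calling s.tl() and s.t() directly:
safeTrack('tl', null, 'o', 'blah'); // event; a no-op if s never loaded
safeTrack('t');                     // pageview; ditto
```

You still have to remember to route every call through the wrapper, but at least the try/catch lives in exactly one spot.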

That's why I have the utmost respect for the engineering of GA, and the utmost disdain for Omniture. Oh, and Omniture costs tons of money to use, and the API documentation is buried somewhere in the 5th level of Dante's hell.

#javascript

Simple trick for making sure that anything listening to window.onscroll doesn't eat up too many cycles while it's doing its thing. It's called “throttling”.


Throttling basically means: if you're receiving a steady stream of input from something, you don't really want to be firing stuff off for every event in that stream. This is a performance suck. Let's say you have this —

window.addEventListener('scroll', function() {
  // Stuff that's actually kinda CPU intensive like
  // taking measurements, waiting for some element
  // to show up on the screen, for example. 
  console.log('hi!');
});

This function is going to be firing as many times a second as your computer can handle. If you're on a beefy laptop in Chrome, this will probably not be noticeable, but make no mistake — none of your users are on as good a laptop as you are. You will definitely drop frames and your perceived performance will suck wind.

What's the answer? Throttle that code. Like this.

// timeNow is the current time in milliseconds, created by
// casting new Date() to a number with +
var timeNow = +new Date();
window.addEventListener('scroll', function() {
  // if the current time in milliseconds - timeNow is
  // less than 250, abort.
  if((+new Date() - timeNow) < 250) return;
  // Else, reset timeNow to now.
  timeNow = +new Date();
  console.log('hi!');
});

This is hack-y looking because it's kind of a hack. Underscore and Lodash have this built in, but it might be a little heavier than what you need. If you find yourself using this more than once in a file, please either bring in Lodash, or rip off their implementation into your project.
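If you do rip it off, the reusable version is only a few lines. Something like this (a simplified sketch, not Lodash's actual implementation, which also handles trailing calls and cancellation):

```javascript
// Returns a wrapped fn that runs at most once every `wait` ms;
// calls arriving inside the window are simply dropped.
function throttle(fn, wait) {
  var last = 0;
  return function () {
    var now = +new Date();
    if (now - last < wait) return;
    last = now;
    fn.apply(this, arguments);
  };
}

// usage:
// window.addEventListener('scroll', throttle(function () {
//   console.log('hi!');
// }, 250));
```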

#javascript

So tracking pixels. They sound awful. They sort of are, but we all use them. One just fired off on you a minute ago when you loaded this page. That's how Google Analytics works its magic. But how do they work? The GA tracking code is Javascript and doesn't say anything about an image pixel.

Step inside...


Dat JS

So that javascript does a few things. Primarily, it creates another script tag that pulls down the real “payload”, which is a hackerish term for “a bigger ball of code”. I haven't analyzed that code yet, but one of the things it does is build a profile of the browser you're on and the page you're looking at. Once it does that, it pings GA's tracking servers with that profile, which counts as a “pageview”. That's the ga('send', 'pageview') bit. But how does that work?

A tracking pixel!

Placement in the DOM, you need not...

So a pretty interesting thing about tracking pixels, and anything in the browser really, is that an element doesn't actually need to be put on the page to exist in memory somewhere. And that's deliberate here: even a pixel only 1x1 in size could bump something out of the way enough to trigger a repaint of the webpage, which might alert you to that pixel's existence, which is something that advertisers and their ilk strenuously avoid.

So basically, that ga('send', 'pageview') ends up generating a request to a server somewhere. That request looks like this

https://www.google-analytics.com/collect?v=1&_v=j29&a=806595983&t=pageview&_s=1&dl=http%3A%2F%2Fwww.ignoredbydinosaurs.com%2F2014%2F09%2Fdeconstructing-the-google-analytics-tag&ul=en-us&de=UTF-8&dt=Deconstructing%20the%20Google%20Analytics%20tag%20%7C%20Ignored%20by%20Dinosaurs&sd=24-bit&sr=1440x900&vp=1334x479&je=1&fl=15.0%20r0&_u=MACAAAQAI~&jid=&cid=1626931523.1412365384&tid=UA-8646459-1&z=962163205

In the network tab of your devTools in your favorite browser you can break down all those query string params into something a little more interesting.

v:1
_v:j29
a:806595983
t:pageview
_s:1
dl:http://www.ignoredbydinosaurs.com/2014/09/deconstructing-the-google-analytics-tag
ul:en-us
de:UTF-8
dt:Deconstructing the Google Analytics tag | Ignored by Dinosaurs
sd:24-bit
sr:1440x900
vp:1334x479
je:1
fl:15.0 r0
_u:MACAAAQAI~
jid:
cid:1626931523.1412365384
tid:UA-8646459-1
z:962163205

Some of that stuff is understandable, some of it is not. But the point is that that request actually triggers a response of a 1x1 pixel.
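The pixel technique is simple enough to do yourself. Here's a sketch (the endpoint and params are made up; buildQuery and firePixel are my names, not anybody's API):

```javascript
// Build the query string half of a beacon URL.
function buildQuery(params) {
  return Object.keys(params).map(function (key) {
    return encodeURIComponent(key) + '=' + encodeURIComponent(params[key]);
  }).join('&');
}

// Fire the beacon without ever touching the DOM: the browser
// requests the src even though the img is never appended to the page.
function firePixel(endpoint, params) {
  var img = new Image(1, 1);
  img.src = endpoint + '?' + buildQuery(params);
}

// usage (hypothetical endpoint):
// firePixel('https://example.com/collect', { v: 1, t: 'pageview' });
```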

Response headers

access-control-allow-origin:*
age:66666
alternate-protocol:443:quic,p=0.01
cache-control:private, no-cache, no-cache=Set-Cookie, proxy-revalidate
content-length:35
content-type:image/gif
date:Fri, 03 Oct 2014 18:23:52 GMT
expires:Mon, 07 Aug 1995 23:30:00 GMT
last-modified:Sun, 17 May 1998 03:00:00 GMT
pragma:no-cache
server:Golfe2
status:200 OK
version:HTTP/1.1
x-content-type-options:nosniff

If this were the first time I'd visited the internet, there would almost certainly be a set-cookie header in there as well, but since they set that cookie on me a LONG time ago, it doesn't get sent.

The kinda creepy thing is that since Google Analytics is on a huge number of sites, and its collection servers all live on the same domain (which is what matters for cookies), they can follow you around the internet from site to site to site in a way that nobody else can (save perhaps the other giant analytics providers, which probably have nowhere near the reach, unless you count Facebook).


Wow, cool. So what?

So that image pixel is not really the point. It gets returned in that response, but doesn't get put on the page. It plants a cookie on you, big deal.

But what happens at Google is that the request that was made in the first place gets logged. It gets broken down by its query string params, and that's how they build the tool. That's how you know what size browser people are on, what part of the world they're from, what they looked at, and what they clicked on (if you're tracking events).

The really interesting part to me, and the part I haven't figured out yet, is how they store ALL that data on the backend to build the reports out of. Think about it — they're basically logging every request made to every website that is running their code. That's a really big number, even for Google. And then they're able to pull your report suite out of all that data, and sort it out by whatever you wanna know. Seems pretty cool, and also well beyond the capability of normal relational DBs.

A post in the future, I imagine....

#javascript #analytics

Hello there! There has been a lot of discussion in the Drupalsphere lately about a concept that has been coined “Headless Drupal”, and rightly so. It's basically theming, but with the theme layer thrown out. The theme layer in Drupal is very powerful, but has always felt severely over-engineered to me, especially in contrast to pretty much any MVC framework I've played with. With those, you define view-layer variables in a controller class, and you write HTML with a little bit of code to print them out. It's more work up front, since you have to write code, but it's vastly easier once you get over that hump.

The company I work for has been doing exactly this with AngularJS since early this year, and I've yet to see a concise post about how to get started with it in the context of Drupal. I rarely write “how-to” posts, but I figured it'd be a good way to inaugurate the ABM tech blog.

Our use case (feel free to skip)

Early this calendar year, my boss' boss came to us with a business request — build us a framework on which we can make our up-until-lately desktop-only websites a little more mobile friendly. We were using a rather ungainly combination of Panels and Adaptivetheme, and though those should have given us a good base on which to build, we had managed to mess it up.

Our original themer was a designer who learned CSS on the job, and our stylesheets were an enormous mess. Absolute positioning, widths specified in pixels, subthemes that had huge amounts of repetitive CSS rules that could've been rolled up into the parent theme. Rewriting these sheets would've been prohibitively expensive, and wouldn't have gained us anything in the eyes of the business.

To add to that, the aforementioned boss' boss was really keen on what we in the biz call “HTML5 mobile apps” that felt more like a native app rather than just a website that is readable on the phone. UI patterns would include swiping to navigate between stories, storing articles to read later, offline support, etc. Basically, not things you can do in any Drupal theme that I know of.

I spent a few days in R&D mode trying to figure out how to fake these things with pre-fetching pages so they'd be rendered already when the user swiped, but it was a mess.

I knew in the back of my head that I was doing it the wrong way, but it took some prototyping to convince myself that what we indeed needed to do was to throw out the book on Drupal theming and do this another way.

I love writing Javascript, and I'd finally found a use case for which one of these JS MV* frameworks might actually fit the bill.

The Drupal setup

So, assuming you have a clean Drupal install spun up already (if you don't, may I suggest drush core-quick-drupal?), you'll want to download a couple of modules to get going.

  • Views (obviously! don't forget the ctools dependency!)
  • Views Datasource (this lets you spit out views as JSON).
  • Devel (for generating some content if you don't already have some)
  • CORS (just trust me, I'll get to this one later)

Or you can just do a drush dl views ctools views_datasource cors devel and be done with it.

Enable all these modules – drush en views views_ui devel devel_generate cors views_json -y.

Generate some content – drush generate-content 50 0 --types=article

Ok, you're ready to hop into the UI!


Pretty much all the action at this point is going to happen in Views, so navigate to admin/structure/views, and “Add new view”.

  1. Name it whatever you want, may I suggest “json articles”?
  2. You want to show “Content” of type “Articles” sorted by “Newest first”.
  3. Create a page, yes. Display format will be “JSON Data Document”. Continue and edit. Screenshot of these settings.
  4. Just add a few fields, since you only have the title so far. Add the Body, the nid, and the date in whatever format you please.
  5. You do want to create a label, but you'll be better off customizing it to be all lowercase with no spaces, i.e. body, post_date, etc. What you should see at this point

In the preview, you should see a pretty-printed json list of your 10 most recent articles, as generated by Devel.
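For reference, the preview output from Views Datasource is shaped roughly like this (field names depend on the labels you chose in step 5, and this shape is from memory, so verify it against your own preview):

```
{
  "nodes": [
    {
      "node": {
        "title": "Quo Vati Validus",
        "body": "<p>Generated by Devel...</p>",
        "nid": "42",
        "post_date": "2014-03-01"
      }
    }
  ]
}
```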


Congrats! The Drupal part is done! We'll be visiting the more interesting part in the next post.

#drupal #angular #javascript

If you're a web developer, I'm sure you've placed this snippet of code into more than a few projects.

(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');

ga('create', 'UA-XXXX-Y', 'auto');
ga('send', 'pageview');

Let's unpack it a little bit -

(function(i, s, o, g, r, a, m) {
 i['GoogleAnalyticsObject'] = r;
 i[r] = i[r] || function() {
 (i[r].q = i[r].q || []).push(arguments)
 }, i[r].l = 1 * new Date();
 a = s.createElement(o),
 m = s.getElementsByTagName(o)[0];
 a.async = 1;
 a.src = g;
 m.parentNode.insertBefore(a, m)
})(window, document, 'script', '//www.google-analytics.com/analytics.js', 'ga');

Now let's make those local variables a little more clear -

(function() {
	var a, m;
	window['GoogleAnalyticsObject'] = 'ga';
	window['ga'] = window['ga'] || function() {
	(window['ga'].q = window['ga'].q || []).push(arguments)
	}, window['ga'].l = 1 * new Date();
	a = document.createElement('script'),
	m = document.getElementsByTagName('script')[0];
	a.async = 1;
	a.src = '//www.google-analytics.com/analytics.js';
	m.parentNode.insertBefore(a, m)
})()

So if you go to your javascript console and type “GoogleAnalyticsObject”, you'll get back the string “ga”. window.ga is a function, but since functions in javascript are also objects, it has a property called q, which is just an array. This is reminiscent of the old ga.js syntax which went something like this —

var _gaq = _gaq || [];
 _gaq.push(['_setAccount', 'UA-XXXX-Y']);
 _gaq.push(['_trackPageview']);

 (function() {
 var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
 ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
 var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
 })();

_gaq is/was just a plain old Javascript array, which gives it the push() method for free. This new ga.q property serves the exact same purpose, an array to push things into and wait for something to come along later and pop them off. That something that comes along later is whatever is contained in that async script that this snippet also builds.

This is super clever because it doesn't have to wait for anything, it can go ahead and do all its business the instant the page loads and even if the main tracking script doesn't come down for some reason, nothing breaks.
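The other half of the trick is what the payload does when it finally shows up: drain the queue, then take over. Here's my own reconstruction of both halves (not analytics.js's actual code):

```javascript
// The stub half: queue up every call as an arguments object.
var stub = function () {
  (stub.q = stub.q || []).push(arguments);
};
stub('create', 'UA-XXXX-Y');
stub('send', 'pageview');

// The payload half, arriving later: replay everything queued so
// far, then swap itself in so future calls skip the queue.
var handled = [];
function realHandler(command) {
  handled.push(command);
}
(stub.q || []).forEach(function (args) {
  realHandler.apply(null, args);
});
stub = realHandler;
```

Nothing about this requires the two halves to load in any particular order, which is the whole point.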

Back to analytics.js...

Whatever you hand as arguments to ga() gets fed into ga.q right here —

window['ga'] = window['ga'] || function() {
	(window['ga'].q = window['ga'].q || []).push(arguments)
}

If you pop open the console on the front page of this blog, and type in ga.q, you'll get this —

> ga.q
[
Arguments[3]
0: "create"
1: "UA-8646459-1"
2: "ignoredbydinosaurs.com"
callee: function (){
length: 3
__proto__: Object
, 
Arguments[2]
0: "send"
1: "pageview"
callee: function (){
length: 2
__proto__: Object
]

Those are stashed in the queue because as soon as that first bit of code is parsed out, there are two quick calls to ga(), and that's exactly what they have as their arguments. It's so simple, it's almost stupid to explain, but the script is so heavily optimized it's not at all obvious on first glance what's going on here.

Moving on, there's another property of the ga function/object – ga.l. ga.l gets initialized to a javascript timestamp (in milliseconds). new Date() returns a javascript Date object, but multiplying it by the integer 1 casts it to a number, which is the number of milliseconds since the epoch. Another way of writing this would be +new Date() – another, albeit less clear, way of performing the same cast. ga.l's purpose is to provide a time for the initial “create” and “pageview” calls to ga().
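All the spellings land on the same number, since both casts just call the Date's getTime() under the hood:

```javascript
var d = new Date(2014, 0, 1); // an arbitrary, fixed date

var viaMultiply = 1 * d;     // the snippet's cast
var viaPlus = +d;            // the terser equivalent
var explicit = d.getTime();  // what both amount to

console.log(viaMultiply === viaPlus && viaPlus === explicit); // true
```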

Lastly, an asynchronous script tag is written to make the call to fetch the analytics.js script from Google's servers so the real magic can start.

Another interesting bit is that the a and m parameters are not assigned anything in the IIFE call at the end. This leaves them undefined in the script until they are assigned the script elements toward the end of the snippet. Another way of writing the exact same thing would be to have only (i,s,o,g,r) as parameters to the function, and then declare var a, m; somewhere in the snippet. I'm not sure offhand if this is a memory or performance optimization or just a handy way to save a couple bytes over the network, but someday I'll figure it out.


Thanks for sticking with me – this is one of the most common little snippets that I've probably placed in my web development career, and I'd never totally dug in to understand what exactly it does beyond writing an async script tag. The pattern of declaring a “plain old javascript array” and then pushing your “stuff” into it as a queue for working with later is an extremely common pattern in 3rd party javascript, since you want everything to be as performant as possible, and you want to make sure you don't break anything if for some reason the rest of your payload script doesn't actually load.

#javascript #analytics

I spent about an hour pulling my hair out over this one. I'm deploying an Angular app to dev for the first time; it works fine locally, but everything is busted when I grunt build and push it up to a server. I'm using ngMin and the supposedly safe syntax for defining all my dependencies, but unfortunately any Google search that includes “grunt build” and/or “minify angular” only turns up answers that pertain to that fairly well-known problem.

So, I commented out Uglify in the build process and am still getting the error, only it's a lot easier to track down now because my JS isn't minified. It blows up on the first one of my controllers that I wrote in CoffeeScript, which is wrapped in Coffee's default function wrapper.

If this sounds like you, go to your Gruntfile and add an option to the coffeescript config —

  coffee: {
    options: {
      sourceMap: true,
      sourceRoot: ''
    },
  }

becomes this —

  coffee: {
    options: {
      sourceMap: true,
      sourceRoot: '',
      bare: true
    },
  }

Just make sure you're defining your scripts with one of the approved syntaxes for keeping stuff out of the global scope -

angular.module('fooApp').controller('fooController', function(){
 // stuff here
});
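What ngMin actually generates from that is the array-annotation form, which survives minification because the dependency names live in strings rather than in parameter names. A sketch, with a tiny stand-in for the Angular API so it runs anywhere (the stub is mine; only the array syntax is Angular's):

```javascript
// A minimal stand-in for Angular's module API, just for illustration.
var angular = {
  module: function () {
    return {
      controller: function (name, definition) { return definition; }
    };
  }
};

// The min-safe form: a minifier can mangle the $scope parameter
// freely, because the string '$scope' is what drives the injection.
var definition = angular.module('fooApp').controller('fooController', [
  '$scope',
  function ($scope) {
    // stuff here
  }
]);
```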

#javascript #angular

The beginning

I'll make the “what is Angular” part as brief as possible.

Angular is a giant JavaScript framework from our friends at Google. It fits into a similar camp with Backbone, a framework that Drupalists will likely have heard of since it's included in D8 core. Angular aims to make it as easy as possible to build a single page CRUD app, and in my limited experience it succeeds.

I've never built anything with Backbone, but have the Peepcode series on it, and have been working heavily with Rails for a good while now. I'll avert a gush on Rails for the time being, but let's just say I really love the way that Rails doesn't really write any markup for you. It's much simpler to find your way through the request/response path, and in general I find developing with Rails to be a vastly more pleasant experience than developing with Drupal.

Alas, I've been a professional Drupal dev for about 4 years now.

The use case

I work at a publishing company. We publish about 26 different pubs, many of them still in print. Within the last year we finished a migration of all of our websites from various proprietary CMSs into a Drupal 7 multisite installation. The sites support desktop only at this point, as we are a small company and resources are definitely constrained. (This also has its upsides, which we'll get to.)

Last fall we rebuilt the company website from a static HTML site into Drupal 7. Since this was not a part of the multisite install, we were allowed to architect the site from scratch with my boss doing all the site building and myself doing all the theming. Mobile was a big priority this time, so I spent a good chunk of the development cycle implementing mobile-friendly behavior and presentation and generally getting a good feel for how to really do a responsive site. As an aside, for mobile/responsive site building and crafting responsive stylesheets, less is definitely more.

The end of this winter has brought a timetable for offering a more accommodating mobile experience on our “brand sites”.

The dream

So my boss and his boss want a mobile site like “The Financial Times has”. If you have an iOS device, go to app.ft.com, and if you're a Drupal developer, try and get your head around how you'd pull that off, but try and forget that this is a post/series about Angular first. Pretend that you were going to try and pull that off in a more or less responsive fashion.

I spent a couple of days surveying the landscape for JS libraries that help out with touch behavior, and trying to figure out how to prefetch content so that when the user swipes from page to page or section to section, there wouldn't be a giant delay while the network does its thing transferring 1,000 lbs. of panels-driven markup. This was Monday and Tuesday of last week.

A pause for reflection

My enthusiasm for the project already waning, I sat back and thought about how we ought to be doing this thing. What they want is a mobile app, not a responsive website.

The way that you implement a mobile app is not by loading a page of HTML markup with every user action; it's by loading the app once and then communicating with the server via JSON (or XML or whatever, if you wanna get pedantic). This kind of thing is supremely easy to do with Rails, mainly due to Rails's deep embrace of REST 6 years ago totally getting ahead of, perhaps even enabling, this whole javascript app revolution in which we find ourselves. Outputting a particular resource as JSON is as simple as a line of extra code to allow the controller to respond to a request with a different format.

Step 1, Services

I'd never played with Services, so I didn't know how easy it was to output a node as JSON. On Wednesday of last week, some time after lunch, I decided to find out. Turned out we already had Services installed for another use case that we just recently implemented (offloading our Feeds module aggregation to another open source project called Mule, but that's a whole other series of posts), so all I had to do was bumble through a tutorial on how to set up a Node endpoint.

In less than 5 minutes I had exactly what I needed set up in terms of dumping a node to JSON. I've been reading Lullabot's recent posts about their use of Angular, so the rest of this series will follow along as I learn how to use this giant beast of a JS framework to build the mobile app my boss' boss wants with a minimum of Drupal hackery.

The next post will pick up Thursday morning (as in, 6 days ago) where I first downloaded Angular into the project.

#angular #javascript #drupal

Dropdown menus may seem like something that falls under the “solved problem” category, but they can be surprisingly tricky. Tutorials that you find online will usually walk you through a very simple example that assumes markup that you never have. This will not be that. We're going to talk about the theory behind building a drop down so that you can better reason your way through the mess of markup that you're given.

If you're working with Drupal and your requirements are outside the scope of what Nice Menus can give you (which happens as soon as you hear the word “responsive”), this tutorial is for you.

Note: some parent themes do not render the submenus even if you set the parents to “expanded” in the menu admin. I'm not sure what the logic is for that, but it's a feature you should be aware of in some base themes.


Beginning

Your menu markup is going to look something like this —

<ul>
  <li>Item 1</li>
  <li>Item 2</li>
  <li>Item 3
    <ul>
      <li>Sub-item 1</li>
      <li>Sub-item 2</li>
      <li>Sub-item 3</li>
    </ul>
  </li>
  <li>Last item</li>
</ul>

Out of the box that'll render something like this

  • Item 1
  • Item 2
  • Item 3
    • Sub-item 1
    • Sub-item 2
    • Sub-item 3
  • Last item

If you are working with Drupal, you're going to have to dig through a lot of layers of wrapper divs, and there will be a lot more classes added to each item, but the general structure is the same. One early gotcha is that all the submenu uls are also given a class of .menu, which is annoying at best.

<ul class="menu">
  <li>Item 1</li>
  <li>Item 2</li>
  <li>Item 3
    <ul class="menu">
      <li>Sub-item 1</li>
      <li>Sub-item 2</li>
      <li>Sub-item 3</li>
    </ul>
  </li>
  <li>Last item</li>
</ul>

Ignoring all that, the general idea is to hide the submenu ul, get all the top level items to line up next to each other, and show the submenus when you hover over a parent item. How?

ul {
	float: left;
	margin: 0;
	padding: 0;
}
ul li {
	position: relative;
	float: left;
	list-style: none;
	list-style-image: none; /* DRUPAL!! */
	padding: .5em 1em;
	margin: 0;
}
ul li ul {
	display: none;
	position: absolute;
	width: 10em;
	left: 0;
	top: 1em;
}

ul li:hover ul {
	display: block;
}

JSBin here

Play by play

I'll assume that the left-floating stuff is understandable. The real action with a drop down happens with the display: none; position: absolute set on the submenus and the position: relative set on the parent li elements. position: relative means nothing much to the item on which it's set (unless you start adding positioning rules to it as well). Its importance here is that any child elements that are absolutely positioned inside of it will now be positioned as if they exist inside that element. Without position: relative on that item, the absolutely positioned elements inside of it would position themselves relative to the nearest positioned ancestor, or failing that, the body element. See here for an example.

As an aside, these two ALA articles are required reading if this part makes your eyes cross.

The rest of this is hopefully understandable. display: none on the submenu hides it from view until you hover over its parent li, at which point it gets the display property of block, which makes it show up in your browser. Since it's absolutely positioned, it'll need a width specified. You'll need something narrow enough to prevent short items from floating next to each other, but wide enough to keep longer items from breaking across too many lines.

On Superfish, Nice Menus, javascript, etc

Presumably, you might have heard of Superfish. It was the de facto JS solution to drop downs for many years, most of them in the IE6/pre-jQuery era. IE6 has a (ahem) “feature” where only certain elements properly respond to the :hover pseudo-selector. For a great many years that meant the only real solution was to patch this behavior with javascript. Fortunately, you only have to deal with this issue now if you still support IE6.

The other, definitely legitimate issue is that going CSS-only means that the instant you leave the zone of the parent item (don't forget the parent li is wrapped around the entire submenu), your submenu will disappear. This means either judicious placement of your submenu, or utilizing some javascript to make your menu behave a bit more smoothly. Both are good solutions, imo.

Here's an updated JSBin. Note in the collapsed CSS column I've commented out this part —

ul li:hover ul {
	/*display: block;*/
}

This means we'll be hiding and showing the dropdown with javascript (jQuery in this example). I've added a class of expanded to the parent li to make selector targeting easier. Here's the full javascript -

jQuery(function($) {
	var timerOut;

	$('.expanded').on('mouseover', function() {
		clearTimeout(timerOut);
		var self = this;
		setTimeout(function() {
			$('ul', self).show();
		}, 300);
	}).on('mouseout', function() {
		var self = this;
		timerOut = setTimeout(function() {
			$('ul', self).hide();
		}, 300);
	});
});

So, setTimeout returns a numeric timer id that you can use to cancel out the setTimeout callback if you need to. Since we're going to need access to one event handler's timeout in another event handler, we're going to declare the variable for the timerId outside the scope of both of them – var timerOut in the outer function.
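Stripped of jQuery, the timer bookkeeping is just this (a minimal sketch, runnable anywhere):

```javascript
// setTimeout returns a timer id; clearTimeout(id) cancels the
// pending callback before it ever fires.
var fired = false;
var timerOut = setTimeout(function () { fired = true; }, 300);

// the "mouseover" arrives within 300ms, so cancel the pending hide
clearTimeout(timerOut);

setTimeout(function () {
	console.log(fired); // false: the first callback never ran
}, 400);
```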

Any time you use jQuery on(), the element that is triggering the handler is this inside the callback function (the function after 'mouseover'). We'll assign that to var self, since we're going to enter another context once we enter the setTimeout() callback. By the way, all of this gobbledygook about scope and this is THE trick to Javascript. Understand function scope in Javascript and you'll be highly paid. I'm still getting there myself.
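To make that self trick concrete, here's a tiny standalone sketch (my own example, not from the menu code) of capturing the outer this in a closure:

```javascript
// `this` inside a nested callback is not the object the outer method
// was called on, which is why we stash it in `self` first.
function MenuItem(name) {
	this.name = name;
}

MenuItem.prototype.delayedName = function () {
	var self = this; // capture the outer `this`
	return function () {
		return self.name; // the inner closure still sees `self`
	};
};

var item = new MenuItem('expanded');
console.log(item.delayedName()()); // 'expanded'
```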

So anyway, discounting that bit, it's very simple. When you mouseover the parent, show the submenu. When you mouseout of the parent, hide the submenu. All we're doing is adding a delay to those actions firing. The trick is cancelling that hide() call if the user decides within 300ms that they didn't mean to wander out of the submenu. That's where clearTimeout() works its magic in the mouseover function. If there is a mouseout timer still ticking, its ID will be assigned to timerOut and it'll get cleared. If it's already hidden the submenu, no harm and no foul.

Note that if $('ul', self) looks weird, it means: find a ul in the context of self. Omit the context and it implicitly becomes the whole document. Add the context and it's almost the same as saying $('li.expanded ul'). I say “almost” because the second, longer example will actually grab *any* ul inside of *any* li.expanded, which is not what you want. That's why specifying the context not only shortens your code and improves performance, since the whole DOM doesn't need to be searched each time, but also scopes your selector dynamically based on which element triggered the handler. I know this is total babble, and I'm sorry.
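For what it's worth, $('ul', self) is just shorthand for $(self).find('ul'). A loose analogy with plain arrays (not real DOM code, just to show scoped vs. global matching):

```javascript
// two li.expanded items, each owning one submenu
var menu = [
	{ cls: 'expanded', submenu: 'submenu-1' },
	{ cls: 'expanded', submenu: 'submenu-2' }
];

// like $('li.expanded ul') -- global, grabs every submenu in the document
var globalMatch = menu
	.filter(function (li) { return li.cls === 'expanded'; })
	.map(function (li) { return li.submenu; });

// like $('ul', self) -- scoped to the one item that fired the handler
var self = menu[0];
var scopedMatch = [self.submenu];

console.log(globalMatch); // ['submenu-1', 'submenu-2']
console.log(scopedMatch); // ['submenu-1']
```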

Final gotcha

Drupal's version of jQuery is so dated that on() isn't available. If you have the option of updating to 1.7 (via the jquery_update module), you can enjoy the modern era. If your site breaks, as is often the case, and you're stuck below 1.7, you'll need to use bind() instead. It works more or less the same in these use cases, but being familiar with event delegation is another JS Jedi trick, and the one promoted by modern Javascript authors.
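If you need to support both eras at once, a small feature test works (a sketch; attach is my own helper name, not a jQuery API):

```javascript
// use on() when it exists (jQuery 1.7+), otherwise fall back to bind()
function attach($el, eventName, handler) {
	if (typeof $el.on === 'function') {
		$el.on(eventName, handler);
	} else {
		$el.bind(eventName, handler);
	}
}

// usage: attach($('.expanded'), 'mouseover', showSubmenu);
```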

In closing

This got longer than I wanted, but it's not the easiest thing in the world to build the ubiquitous drop down menu. My first one took me at least a week, and I think I eventually stumbled on Nice Menus to actually get the job done. Luckily, modern browser environments are much more predictable than they used to be, so knowing how to fish on your own is much easier these days, and the taste of a fish you caught on your own is always superior to something bought at the store, right?

This post touched on the word “responsive” at the top, and I'll follow up with how to work with that. If you've come this far, you've set yourself up nicely for an easier mobile menu job without having to fight against a bunch of other people's code.

#ui #css #javascript #theory

// creates a global variable called urlParams
// adapt as needed.

// will forcefully downcase all query string params

// use --
// http://www.ignoredbydinosaurs.com?foo=bar&test=2
// urlParams.foo // bar
// urlParams.test // 2

window.urlParams = (function () {
	var match,
		pl = /\+/g, // Regex for replacing addition symbol with a space
		search = /([^&=]+)=?([^&]*)/g,
		decode = function (s) { return decodeURIComponent(s.replace(pl, " ")); },
		query = window.location.search.substring(1);

	var params = {};
	while ((match = search.exec(query))) {
		params[decode(match[1]).toLowerCase()] = decode(match[2]);
	}

	return params;
})();
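The same parser factored into a standalone function (my refactor, so it can be exercised outside a browser; parseQuery is a made-up name):

```javascript
// parse "foo=bar&Test=2" into { foo: 'bar', test: '2' },
// downcasing keys and decoding "+" and percent escapes
function parseQuery(query) {
	var match,
		pl = /\+/g,                    // "+" means space in query strings
		search = /([^&=]+)=?([^&]*)/g, // key, optional "=", value
		decode = function (s) { return decodeURIComponent(s.replace(pl, " ")); },
		params = {};

	while ((match = search.exec(query))) {
		params[decode(match[1]).toLowerCase()] = decode(match[2]);
	}
	return params;
}

console.log(parseQuery('foo=bar&Test=2')); // { foo: 'bar', test: '2' }
```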

#javascript