Ignored By Dinosaurs 🦕

Prologue

I have a lot of friends in successful working bands that all have one thing in common – a fairly useless website. Pretty much every band I know, with very few exceptions, treats their website as a tour poster. Most of them have some kind of “store”, but frequently they're stocked with leftovers from last year's road merch. You often have to go through PayPal to check out. I don't know of anyone short of Radiohead that actually lets you download digital goods directly from them.

Railscasts, Peepcode, Lullabot, and every other player in the developer-facing digital goods space have had their act together on this for years now, but for some reason the real promise of the internet age hasn't paid a visit to the music business yet.

The promise

Perhaps I misunderstood, but the real promise of the internet age as it pertains to musicians was that the old, centralized means of distributing your music were going to be torn down, and in their place would be a democratic, no-barriers system for getting your music out instantly to the whole world. Services that have sprung up in the last 10 years – iTunes, Spotify, Soundcloud, and the like – have almost universally propped up this old, centralized system. Services like Distrokid are well intentioned and barking up the right tree, but still miss the point.

The problem

The problem is that the music business was so profitable for so many decades, and so many people got rich off the old label system, that the world in general is clearly having a hard time letting go. Every single service that I've mentioned so far has some sort of “gatekeeper” mechanism in place, be it label affiliation or whatever, and profit motive for the business owners as a central tenet.

“How can I (or my investors) make money off the music business?”

What I haven't seen done

What I haven't seen done yet is for someone to come along and offer a service whose fundamental principle is altruism toward the musicians that this whole business revolves around in the first place (Distrokid partially excepted).

The solution

How about someone build an open source CMS for bands and musicians, ala WordPress since every band I know is already on that and comfy with it, but built with commerce as the fundamental purpose of the site? Plug in a Stripe API key for credit card processing, an S3 key for storing their digital goods, and tada! – you can sell your own music through your own website and keep 100% of the net for yourself.

Obviously the simple features that every band wants – tour dates, photos, bios, etc – would be there, but instead of having the store be a page on the site, have the tour dates be a page in the store. I know they must be out there, but I literally don't know of a single band that approaches their online presence this way.

But, but, how do I make money off this?

I dunno. How about a hosted service, ala WordPress.com, for bands that don't want to deal with setting it up themselves? How about taking a percentage of ticket sales if they want to activate that module? How about consulting fees for custom implementations? I think there are actually plenty of ways, but only if you start with the libre, open version at the core.

Postscript

There must be someone out there working on this idea – I had it years ago and it just seems too obvious at this point. I've been working on it off and on for most of this year. I've even got a guinea pig client lined up, but I'm that stereotypical musician who gets off the road and is now so busy with my 9-5, my 3 kids, and trying to find contract work to fill in the financial gaps that I can't keep the momentum going to get it finished and launched. So...

If this idea is interesting to you and you feel like building it, call me. If you're already building it and want someone to help sell it, call me. The band I quit 4 years ago sold 9000 tickets at Red Rocks this summer. I'm now in a band with two guys from Leftover Salmon and a guy from String Cheese Incident. If you've never heard of these bands then you're not a hippy, but trust me – there is serious money to be made in this grassroots, live-music, no-label-affiliation sector of the music business, if for no other reason than nobody is looking at this market at all and its customer base is extremely loyal. And I've got an iPhone stuffed full of potential clients.

All this product would have to do is show some real revenue increases for a couple of bands. Once that happens, the real promise of the internet as it pertains to the music business can start to be realized.

Postpostscript

I'm a developer now. I'm handy enough with the backend, and I consider myself pretty good with the front end of things, especially as far as the technical needs of this project are concerned. Point is, I can contribute a lot – business, development, a laundry list of clients, implementation details (if you want em) – I just can't do it all myself. I've been holding on to this idea for so long, trying to build up the dev chops to execute, that it's starting to eat at me, especially since I now have the dev chops and no time.

So if you're reading this and it strikes a chord, drop me a line – therealjohnnygrubb@gmail.com

#music #business

Hi there, probably-front-end-dev-who's-met-and-used-Sass-and-likes-what-they-see. This is for you.

RubyGems

Sass is made out of Ruby, a very pleasant, general purpose programming language that's pretty easy to learn and like. Ruby has a package management system whereby libraries of Ruby code are bundled up into what's known as “gems”. Sass is a gem. When you install it, you get a couple of new executables to play with in the terminal, namely sass and sass-convert. The latter will help get you started with Sass by converting your straight CSS to Sass. RubyGems inspired PHP's new-but-already-dominant package manager, Composer.
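
If you want to kick the tires right now, the whole dance is only a couple of commands (the stylesheet names here are made up for illustration, and with the system Ruby you may need sudo – which is part of why the next section exists):

$ gem install sass
$ sass-convert style.css style.scss
$ sass --watch style.scss:style.css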

rbenv

If you are a Mac user, and you are using the version of Ruby that came with your Mac, you are on a version of Ruby that's actually beyond End Of Life. If all you're ever interested in is Sass, it'll keep working for a while longer, but eventually you'll be left behind. A relic. This is the bad news. The good news is that the Ruby community has been working on this problem for a while.

[!info] Because Ruby 1.9 came out a while back and has a bunch of cool new stuff in the form of performance enhancements, syntactic polish, and overall love via its contributors, and because 1.8 is in life's endzone, and because using outdated versions of open source software just isn't your preferred thing, you'll want to use 1.9. This is how.

The most commonly blogged-about solution to this in the Ruby world is RVM. We're not going to talk about that. We're going to talk about a solution called rbenv. Rbenv is a more recent, lightweight solution to the multiple-Ruby-versions problem that doesn't require sudo to install and update gems, and allows you to install almost any version of Ruby you desire (of which there are plenty, but that's more than you need to know right now).

Rbenv works on any *nix based system, and installation is super simple:

$ git clone https://github.com/sstephenson/rbenv.git ~/.rbenv

This installs rbenv, the version manager. Add rbenv to your $PATH -

$ echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bash_profile

$ echo 'eval "$(rbenv init -)"' >> ~/.bash_profile

(Zsh users put those two lines in ~/.zshrc).

You might as well go all the way and install the Ruby version installer, a separate tool – ruby-build.

$ git clone https://github.com/sstephenson/ruby-build.git ~/.rbenv/plugins/ruby-build

At this point, you'll reload your shell – exec $SHELL – and you're ready to rumble. Ruby 2.0.0 was released earlier this year, so unless you really like living on the edge, 1.9.3 is a safe bet.

$ rbenv install 1.9.3-p448

(That's the most recent 1.9.3 release as of this writing – refer to the changelog.)

I almost forgot to mention rbenv rehash – probably the rbenv command you'll use the most. “rehash” basically tells rbenv to reload itself after you gem install any new gem that comes with an executable (like Sass). If you install a new gem and for some reason your computer acts like it has no idea, it's almost certainly this.
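
Putting it all together, a fresh setup for Sass duty looks roughly like this:

$ rbenv install 1.9.3-p448
$ rbenv global 1.9.3-p448   # make it the default Ruby everywhere
$ gem install sass          # no sudo required
$ rbenv rehash              # pick up the new sass executable
$ sass --version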

Incidentally, both of these tools were written by the same guy – Sam Stephenson. He works at 37 Signals, the home of Basecamp, the original Ruby on Rails app, created by a mystical figure known simply by his initials.

Matz/DHH

Super quick Ruby history lesson...


Ruby recently celebrated its 20th birthday which, without doing any consultation of Wikipedia, makes it roughly the same age as PHP. Ruby's creator and spiritual leader is a guy named Yukihiro Matsumoto, or Matz for short. PHP obviously grabbed its share of the market more quickly, and Ruby scarcely made it off the island of Japan for about the first half of those 20 years, until it was catapulted onto the world stage by one man – DHH.

DHH is a charismatic developer from northern Europe with a fondness for business and hair gel. DHH cast off his PHP chains when he found Ruby, created an honest to god framework out of it, open sourced it, and then ran with it. Much of Rails' rise to prominence coincided with the rise of Github and the two together are probably largely responsible for touching off Git's adoption in the greater marketplace.

Rails has impacted the design of almost every single web framework that has come since, in any language, either directly borrowing from its ideas or reacting to its opinions. Sass came in its wake, and here we are.

The Well Grounded Rubyist

If I can recommend one Ruby book to get, it's The Well Grounded Rubyist by David Black.

Black is one of the few western developers who has been doing Ruby since before Rails came along, and is a preeminent authority on the language. This book was my introduction to Ruby's version of OOP, which is indescribably more elegant, consistent, and to-the-point than PHP's, and reads almost like a great novel in the way that it builds in intensity from beginning to end and rewards repeated readings. No, I'm not shitting you.

#drupal #ruby #devops

Whenever a new developer shows up in some online thread asking for advice on how to learn to code, the replies always include “find an open source project to help with”. The 5th birthday of the Macintosh that I bought to learn to code is any day now, and I've just now worked up the chops and the courage to follow that advice. Here's what I'd say to a younger me.

When people say that, it's usually really intimidating to think about. What project? How do I get involved? What if I suck and get laughed off the internet? Well...

Pick a big one. Pick Drupal. Drupal is a huge, beautiful mess of an open source project and Drupal developers are in high demand right now. This means that there is lots to work on, and when you've got something to show you can get paid decently well to do it. The advice is always to “scratch your own itch”, and indeed that's what pretty much every developer in open source is doing.

I just had my first patch applied to a project. It took me about 3 weeks from start to finish, but the majority of that wasn't actually writing code. It was learning what the code I was trying to patch actually did, so that I could write a feature that followed Drupal conventions. This was a very simple little feature, but what I learned in those 3 weeks ties into a LOT of core Drupal principles – knowledge that's totally enabled me to write this other module that I need for work and have it work as intended on the first try.

So, in summary – help out on an open source project. It'll make you a better developer faster than anything else.

#drupal #opensource

What's the difference between assigning an anonymous JavaScript function to a variable and just declaring a named function in the first place? Turns out “hoisting” of the function only works if you write it as a function declaration. When you assign an anonymous function to a variable, only the variable declaration gets hoisted – its value is still undefined until the assignment actually runs, so calling it too early blows up.

(function() {
	console.log(f()); // 'hello' – the whole declaration was hoisted, body and all
	function f() {
		return 'hello';
	}
})();

(function() {
	console.log(f()); // TypeError: f is not a function – only `var f` was hoisted
	var f = function() {
		return 'hello';
	};
})();

#javascript


The program "postgres" was found by "/usr/local/Cellar/postgresql/9.2.4/bin/initdb"
but was not the same version as initdb.

I've been battling this for the last couple of hours, trying to figure out why I can't make Postgres run as easily on my desktop as I did on my laptop. On the laptop, Homebrew took care of it all, just leaving me with the agony of taking off the MySQL training wheels and figuring out the new and scary Postgres admin syntax.

So I uninstalled the Homebrew version, went to the EnterpriseDB site, and downloaded the official installer for Mac. This didn't yield any results either, and it seemed to want you to use the GUI tools to administer it anyway. Running which psql kept giving me /usr/bin/psql, which should've been more of a clue, but I'm not that quick. psql --version kept giving me 9.0.2, which also should've been more of a clue, but I just figured I must've installed Postgres a long time ago, given up, and forgotten about it.

Then I remembered. Mac OS X Server comes with Postgres. That's why it was reporting /usr/bin for all its paths instead of /usr/local/bin (the Homebrew default), or /Library/Postgres, the official installer default.

There was also a set of _postgres processes showing up in ps that I couldn't figure out how to kill. So, the flamethrower method is to delete everything in /usr/bin that relates to PG – psql, postgres_real, anything you can find. Don't forget /usr/bin/initdb, because that's what was throwing the above error. Then you can get on with the Homebrew installer.
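
For the record, the forensics and the flamethrower went roughly like this on my machine – check what's actually lurking in your /usr/bin before deleting anything:

$ which psql        # /usr/bin/psql means the OS X Server copy is winning
$ psql --version    # 9.0.x confirms it's the stale one
$ sudo rm /usr/bin/psql /usr/bin/initdb /usr/bin/postgres_real
$ brew install postgresql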

#postgres #devops

I'm the honcho in charge of our in-house bug tracker – Redmine. Redmine is a rather large Ruby on Rails project, so when I started here nobody in house had any knowledge of or interest in maintaining the thing, since Ruby servers have a bad rap for being kind of finicky to set up, at least relative to PHP. So it goes.

I recently upgraded to the latest stable release – 2.3.1 – and decided to 86 Passenger as our app server in favor of Unicorn. I've been setting up all my Ruby servers with Unicorn lately and find it to be easier than Passenger, even though ease of deployment is Passenger's whole selling point. I find Nginx reverse proxying back to a pool of app servers, ala the PHP-FPM setup currently running this site, to be an easy mental model to get my head around. I get Passenger's mod_rails/mod_php approach, I just prefer the other.
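
For the curious, the shape of that setup is something like the following – a minimal sketch, with the socket path borrowed from the logs further down and everything else (server name, ports) invented:

upstream redmine {
  server unix:/tmp/redmine.sock fail_timeout=0;
}

server {
  listen 80;
  server_name redmine.example.com;

  location / {
    proxy_set_header Host $http_host;
    proxy_pass http://redmine;   # pass everything through to the Unicorn pool
  }
}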

Anyway, sorry.

So I upgraded the whole infrastructure last week to Ruby 1.9.3-p392 and Redmine 2.3.1, and everything went fairly smoothly. I was alerted to a bug this morning though, where attachments were being mangled. Basically, everything was being truncated to the first 48k of the file, and this applied to images as well as PDFs and Excel spreadsheets. I dove into the files/ directory in Redmine and saw that the files were all there, and that the file sizes were correct, so they were getting to the server, just not coming back.

I suspected I'd broken something with the new app server, but it took me a while to track it down. I'd previously been running the Nginx process and the Passenger process as the same user on the server, something I changed in this recent deployment. After trotting through all the Unicorn logs, the Rails logs, and what I thought were the Nginx logs, I found some “other” Nginx logs that happened to be the ones that were actually being used now. They were filled with this —

2013/05/22 14:13:58 [crit] 17604#0: *9408 open() "/opt/nginx/proxy_temp/7/06/0000000067" failed (13: Permission denied) while reading upstream, client: 65.126.154.6, server: _, request: "GET /attachments/download/2323/AdvertiserEmailLeads_with_Verbiage_Changes.xls HTTP/1.1", upstream: "http://unix:/tmp/redmine.sock:/attachments/download/2323/AdvertiserEmailLeads_with_Verbiage_Changes.xls", host: "redmine.advantagemedia.com", referrer: "http://redmine.advantagemedia.com/issues/4958"

2013/05/22 14:14:43 [crit] 21936#0: *8 open() "/opt/nginx/proxy_temp/1/00/0000000001" failed (13: Permission denied) while reading upstream, client: 65.126.154.6, server: _, request: "GET /attachments/download/1454/Balluff_112012html5.zip HTTP/1.1", upstream: "http://unix:/tmp/redmine.sock:/attachments/download/1454/Balluff_112012html5.zip", host: "redmine.advantagemedia.com", referrer: "http://redmine.advantagemedia.com/issues/4229"

So that's a good thing, because we're getting really warm by this point. Basically, the /opt/nginx/proxy_temp directory was full of proxy temp files that were still owned by the old Nginx user. Now that the Nginx process was running as user nobody, the ownership and permissions were wrong. So a chown -R nobody on that proxy_temp directory and everything was right with the world.
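
For reference, the whole fix (assuming, as was the case here, that the Nginx workers now run as nobody):

$ sudo chown -R nobody /opt/nginx/proxy_temp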

#devops

At my gig we host our sites on Acquia's dev cloud. The dev environment is pretty locked down, obviously, since you don't want multiple, publicly accessible copies of your sites floating around out there, especially when they could be in any state of brokenness at any given time. So the way we do it is to use .dev domains that aren't publicly routable via DNS. We have a big ole master hosts file on the local network that takes any of those dev domains and routes them to the proper IP.

Today however, I go out and my car won't start. This means I'm working from home, and that I need to add these host entries on my local machine myself since I'm not on the company network. As I type this I realize I could VPN, but that's no fun and I'm already done with this method that I'm about to explain.

So, I go to Acquia's cloud panel thingy. They have a “servers” menu item, but it only gives you the names of your servers, not IP addresses. Oh, we also have a load balancer sitting in front of everything that I can't log in to to get the IP that way (via ifconfig, presumably). So, all they give you is this really vague hint –

The following tables show all of the servers and the services(s) they provide for each of your site's environments. Each server's full DNS name is server.prod.hosting.acquia.com.

So anyway, the answer is the dig command – more or less a DNS Swiss Army knife.

dig server.prod.hosting.acquia.com spits back a wealth of info at you, including the IP address of your load balancer, which you can then put into an entry in your /etc/hosts file.
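
With a hypothetical server name and a made-up IP, the whole maneuver looks like:

$ dig +short web-123.prod.hosting.acquia.com
12.34.56.78
$ echo '12.34.56.78 mysite.dev' | sudo tee -a /etc/hosts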

Love this stuff...

#drupal #devops

I was on the phone with Mom yesterday, and we got to talking about technology – a thing that actually happens fairly frequently. Since I'm an only kid, she's genuinely interested in everything that I do, and it's been helpful to have someone who's mostly non-technical to bounce explanations off of when I'm getting my head around a new piece of gear.

The piece of gear that I was explaining the other day was something called MongoDB. Mongo's parent company is called 10gen, and they landed on the startup scene about 5 years ago or so with their flagship product, MongoDB. Mongo is currently the pre-eminent player in the “NoSQL” database market. The NoSQL flavor of databases has come en vogue in the last few years in certain technology sectors – primarily ones that are evolving so quickly that having to slow down and put forethought into your data store and how it's going to be structured might literally be the difference between your whole company succeeding or not. That whole sentence will make more sense in a minute, but first it'd be helpful to understand how a traditional, “relational” database works.

The Relational model

The relational model of storing data has been around for more than 40 years. Wikipedia, of course, has a great article – http://en.wikipedia.org/wiki/Relational_database – giving a technical overview, and I wrote my own article 4 years ago while trying to get my own head around the concept – http://www.ignoredbydinosaurs.com/2009/04/chapter2-databases. Wow, that's hilarious reading that article actually, since my “Railroad Earth setlist database” example is well underway – http://www.hobocompanion.org/. So yeah, read that second one for the idiot's guide.

The classic example I gave to my mom was that of a common blog. You have a table for all your posts. Each post in that table has a row, and each row has something like a numeric ID, and the text of that post. When you want to read a blog post, the URL in your browser's address bar says something to the effect of “give me post #1”. Whatever blog hosting software you're using then turns around to the database behind it and says “select everything from the posts table where the ID of the post is 1”. The database hands back the post, the software renders some HTML out of it and sends it back to your browser. This is the simplest example of your interacting with a database.

The relational model typically comes into play when you visit a blog that has comments. Now you don't just have a Posts table, but you also have a Comments table. That Comments table will most likely have the same ID column, and a column called “body” or something like that for storing the text of the comment. However, this table will also have a column called something like “postid”, and what gets stuffed in that column is the (you guessed it) ID of the blog post that this comment “relates” to. So now when your reader comes by, the blog software turns around to the database and asks for two things this time – “select everything from the posts table where the ID of the post is 1”, and then “select everything from the comments table where the postid is 1”. This second “query”, if you're lucky enough to write something that people respond to, will return a list of comments, an array if you will, that your blog software will then convert to HTML and append to your blog post in the form of the comments section. Pretty simple, or is it?
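
In actual SQL, those two queries are nearly as plain as the English versions (table and column names borrowed from the example above):

SELECT * FROM posts WHERE id = 1;
SELECT * FROM comments WHERE postid = 1;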

Issues with the relational model

For the purposes of this simplistic example, this hopefully isn't that hard to get your head around. “But”, you might be wondering, “does that really make sense to store blog posts over here and comments over there? They're clearly related. When are you ever going to be examining a post's comments without wanting that post also?”

Very good. And this is a prime example of when a “non-relational” or “NoSQL” database might make a lot of sense.

The non-relational model

Let's stick with the same example, the blog post and comments, but let's think about how to “model” this in a non-relational way. By the way, if you're still with me, you have a deeper technical understanding than 99% of everybody around you.

The non-relational data model would look more like a sheet of paper. In fact, the concept of one entity and all the data that pertains to that one entity is known in Mongo as a “document”, so truly this is a decent way to think about it.

The way that your blogging software interacts with this blog post is theoretically a lot simpler in a non-relational database. Request comes in, blogging software turns around to Mongo and says “Please give me back this specific post and everything related to it”, in this case a listing of comments. BUT, that's not all. Since we're not forced to be too uptight about having to define how the data is structured beforehand, what if we want to tag that post with an arbitrary number of categories? No problem, stick them on the same document and when blog software says “gimme everything on Post #1”, the tags, the comments, and any other randomly associated data come back with it. Your software doesn't need to know ahead of time, you don't need to know ahead of time. It all just kind of works, provided you know what you're doing on the software level.
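
Here's a sketch of what Post #1 might look like as a single Mongo document – every field name here is invented for illustration:

{
  "_id": 1,
  "body": "The text of the post...",
  "tags": ["music", "tech"],
  "comments": [
    { "author": "Mom", "body": "Nice post, honey." },
    { "author": "A stranger", "body": "First!" }
  ]
}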

This amount of flexibility is what makes Mongo a very popular “data store” with fast-moving technology companies that might be chasing an idea that changes from week to week or day to day. You don't have to rearchitect your entire system when you discover, down the road, some particular use case where you need to stick some “extra data” on this particular “thing”.

Issues with the non-relational model

It seems obvious that for this example a non-relational database makes a lot of sense, but like any useful tool, it has its limits.

For this example, let's use a grocery list. You can model the grocery list in much the same way – a piece of paper, you write the items you want on there. But let's say, for the purposes of this example, you figure out after a couple of months that you want to keep track of how many loaves of bread you actually bought last year. Well, with the non-relational model you might literally have to go through every list and count each loaf individually (*simplistic example alert*), whereas had you modeled this in a relational way, you could get that count back almost instantly. Yes, it'd be a bit more work on the front end – essentially you'd have a Lists table and an Items table. But, modeled correctly, you'd also have a table in the middle called a “join table” that allows you to associate any number of items with any number of lists. After a while, were you truly the hacker, you could probably write some code that'd predict what you need to put on that list without even having to put it on there yourself, based strictly off of patterns that are easily discernible in a relational database.
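
A sketch of what that relational version might look like, schema invented for illustration – the list_items table in the middle is the join table:

CREATE TABLE lists (id INTEGER PRIMARY KEY, shopped_on DATE);
CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE list_items (
  list_id INTEGER REFERENCES lists(id),
  item_id INTEGER REFERENCES items(id),
  quantity INTEGER
);

-- how many loaves of bread did I buy last year?
SELECT SUM(li.quantity)
FROM list_items li
JOIN items i ON i.id = li.item_id
JOIN lists l ON l.id = li.list_id
WHERE i.name = 'bread'
  AND l.shopped_on BETWEEN DATE '2012-01-01' AND DATE '2012-12-31';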

That's why the Hobo Companion has a relational database behind it. It's super easy to count how many and which shows each user was at (2000+ shows tagged among about 30 users so far). It's super easy to count the number of times they've played Mighty River. It's super easy to count the number of times that I played Mighty River (somewhere around 260 times). It's super easy to figure out where in the setlist they typically put Dandelion Wine (either first, or at the end of the first set). These calculations are far from impossible in a non-relational database, but they are also pretty far from trivial.

In situations where you absolutely want to be really uptight about your data and how it's structured – think bank accounts as the classic example – a relational database is absolutely required, not just because it's theoretically better suited for the job, but also because it's a “mature technology” that's older than I am.

The wonderful point here is that we actually have decent options depending on what it is we are actually trying to do. Startup hackers love Mongo precisely because it lets you move fast. You can rock out an application in a weekend without having to spend most of one day setting up the equipment in the studio first.

Thanks for sticking with me.

#databases #theory

I have a friend for whom I'm building a site right now, and I chose Rails to do so. I think I'll probably reach for Rails for most sites I build until I get bored of it, which isn't going to happen any time soon. I also learned a few things about different browsers' implementations of HTML5 audio, which I'll get into first.

My buddy is in a band, and so part of the functionality of the site is a photo gallery, and another part is a music player. Whereas in the past I'd have just reached for something off the shelf like jPlayer, I decided to go the HTML5 route this time, as I was fairly confident that my buddy wouldn't be asking for legacy browser support. In any event, he's not paying for legacy browser support, so I decided to teach myself a few long overdue tricks.

First up is a bit of a primer on the HTML Audio element. I found, as usual, Mozilla to have the most understandable and trustworthy documentation – here, and here. Building the player was fairly straightforward, but required a bunch of repetitive code that I'm too embarrassed to post here. It's fairly simple to create an audio element, list out a bunch of song objects with recording attributes attached to them, and set a data-source attribute on each song that points to the path where the uploaded recording file lives, courtesy of the Carrierwave and jQuery FileUpload gems. When you click on one of the songs, it kicks off the player.play() method and your song plays.
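
The gist of the wiring, minus all the repetitive stuff – the markup and class names here are assumptions for illustration, not the actual site code:

var player = new Audio();
var songs = document.querySelectorAll('.song');

for (var i = 0; i < songs.length; i++) {
  songs[i].addEventListener('click', function () {
    // data-source holds the path to the uploaded recording
    player.src = this.getAttribute('data-source');
    player.play();
  });
}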

The surprise was that Firefox, in which I work every day, doesn't like mp3 files. There's some contradictory info out there about whether FF supports mp3, but my version does not, so I had to figure out how to get an ogg version of the file up there also.

The method I came up with was to install FFmpeg to do the conversion, but to place the conversion into a background Sidekiq job so it didn't hang up the browser when my buddy uploaded his song. Sidekiq makes this so absurdly easy, and the Railscast steps you right through the process. Basically any processing that you'd want to do in the create or update action in your controller can be moved into a Sidekiq worker that's called instead of doing the processing synchronously. Watch -

# songs#create

def create
  @song = Song.new(params[:song])
  @song.title = @song.default_name unless @song.title
  if @song.save
    # hand the conversion off to Sidekiq rather than blocking this request
    ConversionWorker.perform_async(@song.id)
    redirect_to music_path
  end
end

And the worker —

# app/workers/conversion_worker.rb

class ConversionWorker
  include Sidekiq::Worker

  def perform(song_id)
    song = Song.find song_id
    # Carrierwave's recording_url is the public path to the uploaded mp3
    path = "#{Rails.root}/public#{song.recording_url}"
    ogg = path.gsub(/mp3$/, "ogg")
    # shell out to ffmpeg and write the ogg right next to the mp3
    %x[ffmpeg -i #{path} -acodec libvorbis #{ogg}]
  end
end

Converting stuff with FFmpeg was really straightforward, but only in Ubuntu. I fought with trying to get it set up with libvorbis on the Mac and eventually gave up. %x[] is the easiest way I found to execute shell commands from Ruby, complete with string interpolation. Basically this says – load that song, give me the recording_url (convenient that these Carrierwave methods are in scope), and create an ogg version. Do that by putting it right next to the mp3 version, but with the ogg file extension.

#ruby #rails #devops

I didn't remember mixing this but it must've been some time last year. This was the gig that Vince Herman opened up and Phil Lesh pulled up in his Ferrari out front and helped us level the joint for the second set. Strangely, there were no tapers present whatsoever, so no tapes showed up on the Archive. This was, however, one of the tours that we were taping for Elko, so we were running a 24 track rig that evening. I remember it being a challenge to get everyone down to 24 inputs, and being really glad later that we'd figured it out.

I remember listening to the little slice that we did put up on the Archive while I was doing this and being amazed how much reverb I put on it. Here's a drier version of the whole show.

2005-04-16SanFrancisco_CA.zip

#music