Step 2. Learn to program

Here’s the thing. Computers don’t think like you and me. They think in 1s and 0s. Every programming language ever invented is basically a way to translate what a human wants done into a stream of 1s and 0s the computer can read. That translation into binary can happen at a couple of different points along the way from the programmer’s head to the user’s screen, and there are basically two approaches languages take. Some languages, especially the older (pre-1990) ones, are “compiled”. That’s a fairly byzantine process that boils down to taking the code you’ve written, translating it into 1s and 0s, linking it with the other files in your project, and wrapping the whole thing up into an “executable” - a program you can run on your computer. The advantages of this method are as follows:

Some larger applications have bazillions of lines of code to execute (think about a Windows installer disk). If all of those lines had to be translated into binary at the moment you ran the application, the app would run so slowly it’d be useless. This was obviously an even bigger deal before CPU speeds got to be what they are now. Compiling was, and still can be, a very time-consuming procedure in its own right, so by doing it beforehand you save your computer that time and effort every time the program runs. You also get to tune your application at a lower level with regard to how it interacts with the operating system (we’ll get to that), which lets your app perform much more nimbly. So basically, most of the benefits of compiled languages are in the performance arena.

The disadvantages of compiled languages mostly have to do with portability. Since the code has already been bundled into binary, the executable will only run on the kind of machine it was compiled for. That’s why you see separate Mac and PC versions of the same software on the shelves: they had to be written and compiled separately, because the two platforms work differently at the OS level and therefore need different sets of instructions to accomplish the same task. C is the mother language from which a huge number of modern languages are descended, and it’s the classic example of a compiled language. iPhone apps are written in a language called Objective-C, a descendant of C with an object-oriented twist. Since iPhone apps are written for one device and one device only, and since performance on a phone really matters (think battery life), compilation makes perfect sense.

The other side of the coin - and I’ve been looking for this side for a while and have only recently found it - is the “interpreted” language. HTML is a handy first example (strictly speaking it’s a markup language rather than a programming language, but it illustrates the idea). HTML is just lines of code, like the source of an iPhone app or a Windows program. The difference is that you leave the code alone and don’t compile it. The code is instead “interpreted” at the time it’s run - in the case of HTML, by the browser you’re working in. The pros and cons of interpreted languages are pretty much the mirror image of compiled ones. You write the code once and it works on any machine that has an interpreter for it. The code sits untouched until a particular section is needed at “runtime”, at which point whatever is doing the interpreting - the browser, or the server on the other end - crunches your code into instructions the machine can act on and spits out whatever the application needs at that moment. As computers have gotten more and more powerful, this approach seems to be gaining a lot of traction as the preferred way of writing new applications. Some examples are JavaScript, PHP, and Ruby - languages that get chewed up into something actionable at runtime. In the case of Ruby on Rails, which is the hot-shit dev platform at the time of this writing, the interpreting effectively happens twice - once on the server side, where the Ruby code is run to generate HTML, and again on the browser side, when that HTML is turned into something you can read and work with.
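
To make that concrete, here’s a minimal sketch of an interpreted program, written in PHP since it’s one of the languages just mentioned (the file name and its contents are made up purely for illustration). The point to notice is that there’s no compile step anywhere: the interpreter reads the source file directly when you run it.

```php
<?php
// greet.php - a tiny interpreted program (illustrative only).
// There is no separate compile/link step: running `php greet.php`
// hands this source file straight to the PHP interpreter, which
// executes it on the spot. The same file runs unchanged on a Mac,
// a PC, or a Linux server, as long as a PHP interpreter is installed.
$name = "world";
echo "Hello, " . $name . "!\n";
```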

Twitter was originally written with Ruby on Rails. If you think about it, Twitter is just an application that communicates over the internet. It really only does that one thing. They don’t even display advertising (yet). Many of the traditional features of a commercial website aren’t there, and the content - a vast database of information that gets displayed differently on every computer screen out there - changes constantly, so hand-writing static HTML pages makes no sense. Generating the pages with an interpreted language is a perfect solution. Parts of Facebook are written in PHP, another interpreted language. PHP is designed to interact with the info in a database - users and their profiles, pictures, text, whatever - and to write the HTML that displays that info on the fly, at the time of the page request.
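
Here’s a rough (and entirely made-up) sketch of what “writing HTML on the fly” looks like in PHP. In a real site the $users array would be the result of a database query made when the page is requested; it’s hard-coded here just to keep the example self-contained.

```php
<?php
// profile_list.php - a made-up sketch of writing HTML on the fly.
// In a real application, $users would come from a database query
// made at the time of the page request; it's hard-coded here so the
// example stands on its own.
$users = [
    ["name" => "Alice", "status" => "Learning to program"],
    ["name" => "Bob",   "status" => "Building an iPhone app"],
];

echo "<ul>\n";
foreach ($users as $user) {
    // htmlspecialchars() escapes characters that would break the markup.
    $name   = htmlspecialchars($user["name"]);
    $status = htmlspecialchars($user["status"]);
    echo "  <li><strong>$name</strong>: $status</li>\n";
}
echo "</ul>\n";
```

The browser never sees any of the PHP - it only ever receives the finished HTML, freshly generated for whoever asked for the page.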

Pretty cool, huh? Comments? Suggestions?