Ignored By Dinosaurs 🩕

generaldevelopment

Not sure if I mentioned this in the last post, but what I'm doing right now is essentially building a data warehouse for the company. It's from scratch; there was nothing here beforehand, so I get to/have had to choose everything from the tech stack to the processes to my favorite part of late: naming things.

The name you give to a piece of software, or a command line flag, or a column in a database is an act of asynchronous communication with another human. You are asking them to care about the thing you've built. If they choose to work with your tool, you are asking them to understand the choices that you made in its design.

The most selfless thing you can do is think about how much effort it takes for the user who did not design the thing to understand how to use the thing. Name things according to what they are and what they do; make it intuitive. This is what design is. It's not the parts that most people will never see; it's the parts that are the only ones most people will ever see.

#generaldevelopment #data

WTF is “continuous integration”??

So, back in the old days, building websites was simple. You had plenty of options for how to build one but me personally, I was totally fine with Wordpress and Drupal. They didn't force me to know what was going on under the hood to be productive and they provided a way for me to learn that stuff along the way while feeding my family. I learned CSS on the job while building the first couple things I ever built.

Cascading Style Sheets (CSS) is a stylesheet language used to describe the presentation of a document written in HTML or XML (including XML dialects such as SVG, MathML or XHTML). CSS describes how elements should be rendered on screen, on paper, in speech, or on other media. – https://developer.mozilla.org/en-US/docs/Web/CSS

Being a curious and hungry developer, I soon reached for a tool that I saw discussed called Sass that was something called a “CSS preprocessor”. Basically it was a language that looked like CSS and compiled down to CSS. It allowed you to reuse bits and pieces of styles in ways that stock CSS did not at the time.

It was a tool that made your life as a front end developer easier, but there was a catch – now you had to compile that Sass into CSS at some point along the way. There was now an extra piece of gear involved in your development → production lifecycle.

Another example

Frontend development is insanely complex these days. Plain old Javascript that browsers can digest and execute has exploded like a million suns into various dialects, innumerable frameworks, and more tooling than any sane developer can keep track of. Gone are the days of including jQuery (and my interest in frontend dev along with it).

This is (arguably) great for helping developers be more productive and provide high quality experiences to their audience, but there is much, much more to think about in terms of how to deliver those experiences to the end user. Yes, my marketing speak is rather loathsome.

By “deliver” I mean the nuts and bolts – all that Javascript has to go over a network connection from a server in a rack to a user's browser in order to execute and Do Stuff. You don't want your user to have to download 500,000 JS files weighing 40MB just to load your website, so you do a bunch of stuff to optimize that:

  • you smash all those files into 1 file, this cuts down on num_requests because those cost time
  • you compress that file into something smaller using gzip or the like, this cuts down on the weight of that code traveling over the wire because that costs time too
  • you might have to compile your chosen dialect of JS into “vanilla” JS that the browser can actually execute.
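As a toy illustration of the first two bullets (the “file contents” here are made up, and this uses nothing but Python's stdlib – real bundlers like webpack do vastly more):

```python
import gzip

# Pretend these are three separate JS files full of repetitive code
files = [b"function a(){}\n" * 200, b"function b(){}\n" * 200, b"var c = 1;\n" * 200]

# Step 1: smash them into one file (one request instead of three)
bundle = b"".join(files)

# Step 2: compress the bundle for the trip over the wire
compressed = gzip.compress(bundle)

print(len(bundle), len(compressed))  # repetitive code compresses very well
```

The ratio is dramatic precisely because code is so repetitive, which is why every web server worth its salt gzips responses.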

And you're not going to want to do this by hand every time, so you reach for a tool that does it all for you. There is now an extra piece of gear involved in your development → production lifecycle.

Just one more

Package managers. They are a wonderful development, and I'm talking about Rubygems, Composer, Pip, and the like. They allow the OSS community to write, publish, and use bits of code (libraries) that other people have written so you don't have to reinvent the same wheel every time you want to add a login form to your site. Just gem install devise and bam, your Rails site has user login forms, password resets, and a ton of other functionality that you didn't have to build yourself.

All of these package managers operate on a similar principle – they have a list of the packages required for a given project and when one of your colleagues adds a new one, it gets added to the list. All you have to do to add it to your local working copy is composer install and bam, you have that new library too.
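The principle can be sketched in a few lines (a toy model with hypothetical package names and versions, not how Composer actually resolves anything):

```python
# The shared list of required packages (think composer.json / Gemfile),
# committed to the repo so the whole team sees the same list.
required = {"devise": "4.2", "pg": "0.19"}

# What's actually on your machine right now.
installed = {"pg": "0.19"}

# `composer install` / `bundle install` boils down to: fetch whatever's missing.
missing = {name: ver for name, ver in required.items() if installed.get(name) != ver}
print(missing)  # → {'devise': '4.2'}
```

Your colleague adds a line to the list, you run one command, and your local copy catches up.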

Ideally though, you're not committing that package to your Git repo because that's “3rd party code”. It bloats your Git repo which is just a bad smell, but it's also code that you don't own and should never really touch. What should be committed to your Git repo is just your code. At deploy time, just run composer install and you're done, but you need to have a Thing that will be able to execute composer install.

There is now an extra piece of gear involved in your development → production lifecycle.

Continuous Integration

That extra piece of gear is what's commonly referred to as a “build pipeline” or a “continuous integration layer”. It's essentially just a process that will execute your instructions for how to transform your development codebase into your production codebase or artifact.
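At its simplest, that really is all it is – a list of steps executed in order, stopping at the first failure. A toy sketch (the step commands here are placeholders; real pipelines run your Sass compile, your JS bundling, your composer install):

```python
import subprocess

# Hypothetical build steps, the kind you'd otherwise run by hand before deploying
STEPS = [
    "echo 'compiling sass'",
    "echo 'bundling js'",
    "echo 'installing dependencies'",
]

def run_pipeline(steps):
    """Run each step in order; stop at the first non-zero exit code."""
    for step in steps:
        result = subprocess.run(step, shell=True)
        if result.returncode != 0:
            return False
    return True

print(run_pipeline(STEPS))  # → True if every step exits 0
```

Everything from a Bash script on your laptop to Jenkins is, at heart, a fancier version of this loop.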

There are plenty of players in the space, starting with a simple Bash script on the user's laptop that runs before pushing to Git. More involved and flexible options include Jenkins, Travis, and CircleCI. GitLab has a built-in CI pipeline; it's part of why they will ultimately win over GitHub.

The problem with all these tools is that they're yet more tools – you have to pay for them, maintain them, learn how to use them, deal with API changes, etc. Platform.sh has this stuff all built in, and we're the only PaaS vendor that I know of that has this level of continuous integration built right in, in the form of our build and deploy hooks.

We have shortcuts for Composer based projects, and full support for adding any number of build time dependencies (probably a whole nother post's worth of material to discuss). Most other PaaS vendors make you buy CircleCI and wire the stuff together in order to achieve the same level of cushy, modern developer experience that we offer, and that's where I've run out of steam on this post. If you got questions, ask em.

#generaldevelopment #platformsh

The internet is hard these days.

It started simply enough – for instance, all you really needed was a Geocities account and some initiative to learn HTML and you could have your own place to put whatever you wanted and make it available to the entire world. From such simple seeds, complex structures did grow.

Geocities was permanently shut down in 2009, at once both a tragedy for the loss of so much content, so much history, and yet also a wake-up call for so many of us that we needed to have control over our own content, applications, and businesses. Many of us chose to host our own websites so that a seemingly arbitrary decision from some faceless corporate power couldn't upend overnight what we'd created over years. But that decision also made things a little more complicated.

At that point all you really needed was a web hosting account somewhere with Apache HTTPD installed as a web server. Then you could edit and upload your HTML just as before, only this time it couldn't be taken away from you because you were (more) in control of the setting.

Somewhere along the way you were likely introduced to something like Wordpress or Drupal or Ruby on Rails, which were all essentially frontends to some kind of database, and that database is where you would store your content. This was a wonderful development, not only enabling non-technical users to publish content to the web without knowing anything about HTML or FTP, but also for small businesses to be able to create their first eCommerce stores online and take advantage of an entire global market. Again, this was the march of technological progress creating new opportunities for primitive man to make use of highly advanced tools without having a Computer Science degree.

But, as the saying goes, with great power comes great responsibility. The responsibility in this case is that of having to set up and maintain your own web hosting infrastructure. Some of us like this kind of work – setting up and managing fleets of servers with all manner of different pieces of software on them to serve the world's internet needs – but some of us really just like writing code and building websites and applications.

Platform.sh is a new breed of hosting service, and was created expressly for this second group of technologists.

Platform.sh gives you an incredibly flexible set of tools with which you can build and deploy a huge range of different types of applications to the world with the click of a button or a push to a Git repo. Platform.sh currently has support for PHP, Ruby, Python, and Node.js with other runtimes like Go and Java either in public beta or in planning mode internally.

Platform.sh was also architected from the ground up to fulfill the promise of Git as a codebase management tool. No longer are your working feature branches trapped on your machine or in a remote repo with no context to make them live (and testable) for your teammates. Platform.sh will provision a fully functional, completely segregated hosting environment for each one of your Git branches, with all of the data and uploaded files that your app's codebase needs to be a fully functioning application.

Lastly, Platform.sh was designed to grow with your project as your project's needs evolve. The days of filing support tickets to have a PHP extension or a new database server installed are effectively over. Not only will we provision all of your application's software dependencies – Redis, MySQL, Elasticsearch, and many others – but you can choose from many different versions of each of these dependencies. Want to see how ruby:2.4 or python:3.6 or postgresql:9.6 or php:7.2-rc improves your app's performance? We always endeavor to provide the latest versions of each so that upgrading the underlying software on which your application runs is as simple and painless as changing a line of configuration.

If this sounds like something that might take some of the tedium out of your development day or possibly increase the velocity with which your business can bring new ideas and features to your customers, then please read on. In the coming chapters we'll walk you through getting started with Platform.sh by stepping you through setting up your first project and deploying a simple app with a few commands. After that we'll dive deep into more advanced topics such as:

  • Configuring your project with YAML
    • routes.yaml
    • services.yaml
    • .platform.app.yaml or “app yaml”
  • Managing and interacting with various administrative functions of your project both via the user interface as well as the Platform CLI
    • branching
    • merging
    • backups
  • How all of this seeming magic works under the hood
    • containers
    • environment variables
    • copy on write

So, welcome to The Little Platform.sh Book! We sincerely hope to make internet-ing a little bit easier for you and your team.

#generaldevelopment #platformsh

Hello, and welcome to “Platform.sh from Scratch”. In this prologue to the series, I'll go over some of the very highest level concepts of Platform.sh so that you'll have a clearer understanding of what this product is and why it came to be.

Platform.sh is a “Platform as a Service”, commonly referred to in this age of acronyms as a “PaaS”. The platform that we provide is essentially a suite of development and hosting tools to make developing software applications a smoother end-to-end process. In order to understand what this means though, I'm going to have to go into some detail in this first sidebar. Skip this if you're comfortable with this post so far.


[!abstract]

PaaS

Everyone has heard of Salesforce. Salesforce has come to be the poster child for what is now referred to as “SaaS” – software as a service. Prior to the SaaS era, if you wanted a piece of software, be it a video game or Quickbooks or anything else, you had to drive to a store and buy a box with some disks in it. Once the internet reached a certain level of market penetration into people's homes, though, those stores went out of business. This is an obvious evolution in hindsight. SaaS is a high level thing – it's a runnable piece of software that you access over the internet via a URL. You might be able to modify or configure it a little bit, but it will never be your entire business. It's not your product. It's someone else's, and will likely play some fractional part in your overall business plan.

Almost everyone by this point has heard of Amazon Web Services – AWS. AWS is basically what people are talking about when they say “The Cloud”. AWS is a suite of products that emerged from Amazon when they figured out that they needed a huge amount of datacenter capacity to be able to withstand massive retail events like “Black Friday” and “Cyber Monday”, and that for most of the rest of the year they had tons of excess capacity sitting around draining money from their wallet. What to do with all that excess capacity? Sell it to someone else.

This relatively simple premise has evolved over the last 10-12 years into numerous products from S3 (basically a giant, limitless hard drive in the sky) to EC2 (basically a giant, limitless hosting server in the sky) to Redshift (basically a giant, limitless database that can be used for data warehousing) to SES (a simple service that sends emails) to an ever growing host of other services that always seems to come out just before your start-up figures out that it needs them.

AWS and “the cloud” in general is often given the acronym “IaaS” – infrastructure as a service. They're selling you the low level hardware abstractions that you can assemble into an infrastructure on which to run your software and by extension your company. It requires a decent bit of specialized knowledge of how to use the individual pieces as well as how to plumb them together, but for all intents and purposes it is infinitely flexible. It's this level that has had most of my interest for the past few years.

In the middle of these two is what's called “platform as a service” – PaaS. This is what Platform.sh is – a suite of software and hosting services that lets you efficiently build and develop your software application, and then deploy your software application to a hosting environment that doesn't require as much specialized knowledge on how to plumb all the pieces together. Nor does the hosting environment require you – and this is a most important detail – to set up monitoring and alerting for if something goes wrong in the public environment.

The PaaS takes elements of both IaaS and SaaS to allow you to build your software product but not have to hire an extra person just to know the low level server business.

So, back to the program. The development tool set of Platform.sh is entirely based around Git. Just in case the reader is not already familiar with Git, I should explain this a little bit.


[!abstract]

Git

Software projects are typically composed of lots of files. If you want to add a new feature, you might be required to make changes in more than one of those files. Of course, before you get started you'll want to make some kind of backup just in case. If it turns out that the change was buggy or unneeded and you want to revert back to a previous state, you'd just restore those few files back to their previous versions.

What if, however, you're working with a bunch of different people and more than one person is working on that change (an utterly common scenario)? How do you manage those backups between all those people? Saving copies of files is basically impossible to manage after a very short while, so out of this need SCM (source code management) was born. It's been through several different iterations, and the SCM tool currently leading the market is called Git.

Git is pretty cool. It basically takes snapshots of your entire project whenever you tell it to. It then keeps track of all those snapshots and lets you share those snapshots among a team of developers. Any snapshot can be reverted, and you can see the full history of every change to the codebase so you can keep track of “what happened when”. But wait! There's more!

This is not an exclusive feature of Git, but it has a feature called “branching”. Branching is intuitively named, and is basically the concept of taking a specific snapshot and making changes based off of that one snapshot while other work continues on down the main code line. This is the recommended way to work if you're going to make any kind of significant change to the software, and this method of working allows you to keep the main code line (almost always referred to as the “master branch”) in 100% working order. It can be thought of as having a furniture workshop away from your house where you can work and keep the house clean for company to come over at any moment, as opposed to working in the house and risking having a wreck to present should company decide to drop by.

In essence branching is making a complete copy of your project at a point in time that you can hack on all you like without disturbing anyone else. If and when the change is ready, you “merge” the code back in to the master branch, test it out to make sure everything is still groovy and then you can release the feature or bug fix to the public.

gitGraph
   commit
   commit
   branch develop
   checkout develop
   commit
   commit
   checkout main
   merge develop
   commit
   commit
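The snapshot-plus-pointer idea above can be caricatured in a few lines of Python (a toy model, nothing like Git's actual storage format):

```python
commits = {}     # commit id -> (snapshot, parent commit id)
branches = {}    # branch name -> commit id it currently points at
next_id = 0

def commit(snapshot, branch):
    """Record a snapshot and advance the branch pointer to it."""
    global next_id
    next_id += 1
    commits[next_id] = (snapshot, branches.get(branch))
    branches[branch] = next_id

commit({"index.html": "v1"}, "master")
commit({"index.html": "v2"}, "master")

# Branching is just copying a pointer – nearly free
branches["feature"] = branches["master"]
commit({"index.html": "v2", "app.js": "wip"}, "feature")

# Work on the feature branch never disturbed master
print(commits[branches["master"]][0])  # → {'index.html': 'v2'}
```

This is why branches in Git are so cheap: nothing is copied, a new name just starts pointing at an existing snapshot.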

You can read more about the super basics here if you wish. For now, all you really need to know is that Git

  • Makes it easier to develop software as a team
  • Makes it very cheap and easy to try out new features without breaking anything
  • Makes it easier to manage changes to your software and to revert back to a known non-broken state

update: Hey look, a really great post explaining all this better than I did.


Platform.sh has taken this branching and merging workflow and extended it out into the entire hardware stack. When you're building a software project of any size, there are considerations beyond just the code your team is writing.

Most applications of any size connect to some kind of database in the background; this is where they save “data stuff”. User uploaded images are a very common thing in the web app world, so if those images aren't there the app will look busted.

You can branch your code all you like, but you need these other supporting resources to really do your job. Platform.sh makes branches not just of your code, but the entire infrastructure that your project runs on. This allows you to use the common branching/merging workflow with the complete support of everything else that your application depends on.

This may seem like an obvious feature, since how can you develop a new feature without being able to run it (?), but no other service that I know of actually does this. A branch in Git triggers (for all intents) a complete copy of your production site without requiring you to set up any new servers, copy databases over, copy images and everything else, etc, etc, etc. It's a significant hassle to do all this stuff, trust me, and it slows the team down every time you have to do it. Removing this need removes a major friction point in the workflow for building new features on your software product.

But wait! There's more!!!

This is where it starts getting really, really good. In case you're not aware, there's a website called GitHub. It's where a whole lot of folks have decided to host their “git repos” – repo being short for “repository”, which is basically that series of snapshots of the state of your project/codebase back to the beginning of time. This is the repo for this blog – https://github.com/JGrubb/django-blog, and here's some of the code that just ran to generate this page you're reading – https://github.com/JGrubb/django-blog/blob/master/blog/views.py#L26-L33. Pretty cool, right? And if I were working on a project with a buddy, we could both use this same repo and work on the same project, whether I'm in Germany or New Jersey or wherever. I can pull his changes over and he can pull mine and this is basically how open source software gets written these days.

The same workflow applies though – if you want to make a new feature or even if you just want to fix a bug, you'd make a new branch and do your work and then “submit a merge request”. This basically pings the person who runs the project and says “hey, I would like to suggest making this change. Here's the code I'm changing, maybe you could look it over and if you agree with this change you can merge it in”. By way of an example, here's a list of “pull (aka merge) requests” for the codebase that comprises the documentation for Platform.

Again, this is how software gets written and it's pretty mind blowing if you think about it. We software developers are so used to it our minds cease to be amazed, but not because it's not amazing. I mean, currently participating in that list of PRs are folks from France, Chicago, Hong Kong, the UK, and so on. Amazing. It is also, however, a pain in the ass.

It's a pain in the ass because it's typically impossible to tell if something works or not just by looking at the code, so you have to pull their changes over to your computer and test them out somehow. I bet you can see where this is going! Platform has a GitHub integration (BitBucket too) that will automatically build a working version of any merge request that someone opens against your project. That lets you go visit a working copy of the project and test it out without having to do a thing. Now, I don't care how long you've been doing this, that is mind blowing. For example, here's Ori's (currently work-in-progress) PR for adding the Ruby runtime documentation – https://github.com/platformsh/platformsh-docs/pull/339. If you click the “show all checks” link down toward the bottom, it expands with a little link “details”. That link takes you to a complete copy of the documentation with Ori's change added to it, so you can read it like you normally would, rather than reviewing a “diff”. It's the future now!

What this means in the wider scope is that the time you used to spend setting things up just to test out a new idea, then tearing them down once the tests pass, is time that you don't have to waste anymore. You can test changes out and keep right on moving.

This GitHub integration is only one of the really cool and unique features that Platform provides, but this post has gotten absurdly long already. Fortunately, this is intended to be the prologue to this series, so I'll touch on as many of those features as I can as the series progresses.

#generaldevelopment #devops #theory #platformsh

So this is it, my week in between my old job and my new job. And I'm bored out of my mind. So I'm going to do something I've never done before – write a blog post on my phone. We'll see how this goes.


I was texting with one of my former coworkers a little earlier today. He was picking my brain about how the AWS command line tools work and so I was explaining some things to him in too much detail. I've noticed this thing that I have where I want to explain the why of things and not just give a one liner that can be copied and pasted to accomplish a job. If this blog has any regular readers maybe you've noticed this as well. Whatever.

So he was kicking some CLI one liners to me to look at, and in one of them he was providing incorrect arguments to one of the options. I referred him to the excellent documentation page on this particular command and I think he must've gotten it sorted out after that because I didn't hear back from him after. I realized consciously something that I guess I've been doing a lot over the last few years – reading a lot of documentation, for fun.

I think one of the common modes to be in when you're reading documentation is that of trying to figure out a solution to a specific problem. I'm trying to figure out how to upload some files from my laptop to S3, so I'm researching the page for the correct arguments to pass. I'm trying to figure out how to avoid the N+1 problem on the blog listing page, so I'm reading up on which methods are available in the Django ORM. This is fine obviously, but I think what really separates senior devs from non-senior devs is aimless wanderings through the documentation for a project.

It's in these wanderings that you discover what a tool can do before you actually need it to. When the need finally does arrive, it's much less of an interruption to your workflow to look up the correct syntax for a feature that you already know exists. Otherwise you're stopping productive work for X amount of time to see if your concept for the solution has some corresponding feature in said tool, and by that time your flow is shot.

So when admonished to RTFM, take it as a passive aggressive offering of really good advice.

#generaldevelopment

I'm starting a new gig in a couple weeks. I found out about the existence of the gig in the first place because I follow the guy who will be my new boss on Twitter, since he's been around the Drupal scene a very long time and following the leaders in your industry is basically one of the core things that Twitter is *for*.

You don't need to necessarily know every single leader in every single lang, but if you work on the web and want to move your career forward, you'd do well to know who most of these people are, in absolutely no order. This is absolutely (and hopefully obviously) not an exhaustive list, more of a starter pack.

  • Dries Buytaert
  • Taylor Otwell
  • Fabien Potencier
  • Guido van Rossum
  • Jacob Kaplan-Moss
  • Kenneth Reitz
  • Matz
  • David Heinemeier Hansson
  • Rich Hickey
  • Brendan Eich

If you're a Drupal dev, you should add these to the list

  • Larry Garfield
  • Robert Douglass
  • Peter Wolanin
  • Angie Byron
  • Jeff Eaton

Getting to know not just the tools out there but the folks who made them and their reasons for making them can really help you decide if a given tool was created for a use case like yours.

#generaldevelopment

I work with a guy. He's incredibly smart. He's the seniormost developer here, and if you need to learn something new and get something large done, he's the guy to do it. We basically dropped him off in the AWS jungle and told him to learn Hadoop and the entire Hadoop ecosystem for a data warehouse project and he did it.

I work with another guy. He's also incredibly smart. But he asks me for the answer before attempting to find it on his own more often than not. He's got a point when he says “it's a lot faster for me to just ask you rather than spend time trying to find it on my own”, because he's here to do a job after all. I get that. But the best analogy I can come up with is a spin on the old adage -

You can give a man a fish, and he eats for a day. You can teach a man to fish and he eats for a lifetime.

There's a third kind of person, though – the person who goes out and finds out about fishing on their own and then teaches themselves how to fish. This person will be your boss, and will always be employed.

#generaldevelopment #life

Problemspace

I want to be able to link a set of posts together in an order. If there is a next post relative to the one I'm on, I want a button to show up that says “next post” and links to it. If there is a previous post relative to the one that I'm on, I want a button that says “previous post” and links back to it. Pretty simple, conceptually. Basically I want to reproduce parts of the Drupal book.module as minimally as possible.

So my first naive attempt was to add 2 ForeignKey fields to the Post model – “previous” and “next”.

from django.db import models

class Post(models.Model):

	title = models.CharField(max_length=255)
	body = models.TextField()
	summary = models.TextField(null=True, blank=True)
	slug = models.SlugField(max_length=255)
	pub_date = models.DateTimeField('Published at')
	published = models.BooleanField()
	tags = models.ManyToManyField(Tag)  # Tag model defined elsewhere
	created = models.DateTimeField(auto_now_add=True)
	updated = models.DateTimeField(auto_now=True)
	previous_post = models.ForeignKey(
		'self',
		related_name='next_of',  # must differ from the field name or Django raises a clash error
		blank=True,
		null=True,
		on_delete=models.SET_NULL  # on_delete needs a callable, not None
	)
	next_post = models.ForeignKey(
		'self',
		related_name='previous_of',
		blank=True,
		null=True,
		on_delete=models.SET_NULL
	)

This worked on the front end but immediately raised a stink alarm, for a couple of reasons.

  • You'd have to go and save this info twice for it to really work. Once on the current post and again on the referred post to link it back. == Workflow suck

  • The truth about this ordering would be stored in two places, so it'd be really easy to mess something up and get out of sync.

This is essentially a doubly-linked list if you're keeping score, with the attendant maintenance problems.

So I thought to perhaps override the save() method in order to hook into the operation and automatically populate the correct field on the referred item, but then of course, I'd have to do all kinds of gymnastics to watch for if that field were to be removed at some point and remove the corresponding field on the referred item, etc. I mean, it's a blog who gives a shit, but I've been doing this for long enough now that I can't help myself.

Another option in this same vein is to use the Django “signals” subsystem to hook into the same functionality, but the smell remains.

After coming home from DrupalCon it occurred to me that really all I need is the one pointer, since I should be able to derive the pointer back. I just had to figure out how to do it...

This is a pretty obvious use case – automatically deriving any pointers back to the current item. It just requires one extra DB query to ask “give me any items where the previous_post_id is this item's id”.

The key is the related_name argument to the field.

I think this is automatically set for a normal ForeignKey field, but on models where the foreign key points back to the same model it's required. Judging from the docs, I was trying all manner of post.post_set, etc but it's actually just post.previous_post, which is counter-intuitive since what you're actually getting back from that is the “next” post. I chose to keep the “previous” field since you could just add the previous post as you're authoring the current one.
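To make the “one extra query” concrete, here's a toy version in plain Python (the posts are made up; in Django the equivalent would be the reverse accessor described above, or an explicit Post.objects.filter(previous=post).first()):

```python
# Each post stores only `previous_id`; the "next" post is derived by
# finding the post whose previous_id points back at this one.
posts = [
    {"id": 1, "title": "part one", "previous_id": None},
    {"id": 2, "title": "part two", "previous_id": 1},
    {"id": 3, "title": "part three", "previous_id": 2},
]

def next_post(post_id):
    """Return the post whose previous_id points at post_id, if any."""
    return next((p for p in posts if p["previous_id"] == post_id), None)

print(next_post(1)["title"])  # → part two
print(next_post(3))           # → None (no next post yet)
```

The truth about the ordering lives in one place, so there's nothing to get out of sync.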

Current post model looks like this —

from django.db import models

class Post(models.Model):

	title = models.CharField(max_length=255)
	body = models.TextField()
	summary = models.TextField(null=True, blank=True)
	slug = models.SlugField(max_length=255)
	pub_date = models.DateTimeField('Published at')
	published = models.BooleanField()
	tags = models.ManyToManyField(Tag)  # Tag model defined elsewhere
	created = models.DateTimeField(auto_now_add=True)
	updated = models.DateTimeField(auto_now=True)
	previous = models.OneToOneField(
		'self',
		related_name='previous_post',
		blank=True,
		null=True,
		on_delete=models.SET_NULL  # on_delete needs a callable, not None
	)

And the prev/next fields look like this —

{% if post.previous %}
	[← previous: {{ post.previous.title }}]({% url … %})
{% endif %}
{% with next_post=post.previous_post %}
	{% if next_post %}
		[next: {{ next_post.title }} →]({% url … %})
	{% endif %}
{% endwith %}

note

This might not technically be a linked list in the strictest sense, since a singly-linked list has pointers to the next node in the chain. I've implemented it here as a “previous” pointer, since it makes more sense in the edit workflow. Since it makes more sense, hopefully we'll make more cents!

Stay tuned for the next episode where I decide that I'd like to have a Table of Contents and rip this whole thing out and do it over again.

#generaldevelopment #django

So here it is. The last version of this blog – a Rails frontend to a Postgres backend – actually stood for almost 2 and a half years. I think that's probably a record.

In keeping with my decided new theme for this blog however, I've decided to rewrite the thing in Django. Not that you can't google it yourself, but Django is (at a high level) basically the Python version of Rails. Actually, it's basically the Python version of every MVC web framework. It's been around for 10 years, so it is far from the hot-new-thing. I've finally been doing this for long enough that I shy away from the hot-new-thing and actively seek out boring, tested solutions to problems.

At work we've begun a small project that we were targeting to build on Drupal 8. Faced with the timeframe, the relative lack of basic modules for building Drupal 8 sites, and the learning curve for the code we'd inevitably have to write on our own, I pitched the idea to my team of trying something completely different. I prefaced it with “this is a terrible idea, so raise your hand at any point”, but surprisingly they were all amenable. We all spent a day going through the amazing tutorial and the amazing documentation, and they were still on board. So I decided to rebuild this blog to take the training wheels off and give us all some reference code for some of the simple features that weren't walked through in the tutorial – taxonomy, sitemaps, extending templates, etc.

Amazingly it took me all of 4 hours to rebuild the whole thing and migrate the data from one PG schema into the one that Django wants to use. Django is even easier to use than Rails – a fact that blew my mind once I started playing with it.

The deployment story however, is a shit show. I spent as many days trying to get this thing up on a Digital Ocean server as I spent hours building the application in the first place. I'm hoping to find that there is an easier, more modern means for serving Python apps in 2016 after some more digging.

Anyway, thanks for stopping by!

#generaldevelopment #python #django

I remember being very confused by this one early on. There were boatloads of tutorials on how to change your $PATH, but what that even means in the first place I just kinda had to figure out along the way. It's actually pretty simple. Here's my attempt at explaining it.

If you're coming from a Windows background, and you were in the habit of being really fussy about where you installed software on the hard drive, you may have just known how to fire up any old piece of software on your system. You navigated to the application in Windows Explorer and double clicked on it. It was really simple. That icon that you actually clicked on was the “executable”, which is to say the file that starts the whole show.

Unix, Linux, and Mac systems also have executables. On a Mac, it's (represented by) the icon you click on to start the app. When you start getting deeper into development and start using the command line more, you're eventually going to come across some installation instructions that advise you to “update your path” for some reason. They usually give you a copy and paste thing to go along with it. But what does it mean?

Let me give you an example. Here's my path on this laptop right here —

$ echo $PATH
/usr/local/heroku/bin:/Users/jgrubb/.rbenv/bin:/Users/jgrubb/.rbenv/shims:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/X11/bin:/Users/jgrubb/.composer/vendor/bin

Now, never mind that there's a ton of garbage in there, and why is Heroku at the beginning of the path like that? I don't even remember doing that. Anyway, when you first start your computer up, or when you first start a shell session, your environment fires up and loads a bunch of configs. One of the configs that it loads is the list of places to look for the aforementioned executables.

In a nutshell, your computer is going to look down that path from left to right. The different locations are separated by :, so that's a list of directories that will be scanned to see whether “that thing” is installed there. For instance – the Mac comes preinstalled with Vim and Ruby. By default, stuff that ships with the OS lives in /usr/bin. But if you want a more recent version of Ruby, you might install it into /usr/local/bin (this is where Homebrew puts much of its stuff). If your path did not have /usr/local/bin before /usr/bin, you'd still be executing the system version of Ruby instead of the one you want. It's really simple, but again – it took me a while.
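That left-to-right scan is easy to model. Here's a rough Python sketch of what the shell does when you type a command (`resolve` is my name for it, not a real API):

```python
import os

def resolve(command, path):
    """Return the first matching executable, scanning PATH left to right."""
    for directory in path.split(":"):
        candidate = os.path.join(directory, command)
        # An entry wins if the file exists there and is executable
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None  # "command not found"
```

With /usr/local/bin ahead of /usr/bin, the Homebrew ruby wins; flip the order and you get the system one again.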

So, how do you change it? Presuming you use Bash for a shell, you probably have a file called ~/.bashrc or possibly ~/.bash_profile. If you don't, you can safely create either of those files and put in a line like this —

export PATH=/usr/local/bin:/or/whatever/usually/bin:$PATH

This just says “hey, whatever my path is now, add those two directories to the beginning of it, and then assign that to be my new $PATH”.
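The same prepend works from inside Python, for anything you launch from that process (the directory names here are illustrative, matching the bash line above):

```python
import os

# Prepend two directories to whatever PATH already is; child processes
# started from here (subprocess, os.system, ...) inherit the new value.
os.environ["PATH"] = ":".join([
    "/usr/local/bin",
    "/or/whatever/usually/bin",
    os.environ.get("PATH", ""),
])
```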

Questions?

#devops #generaldevelopment