Ignored By Dinosaurs 🦕

Easy Markdown with Syntax Highlighting, PHP Edition

Hi there, and welcome back to this 14th installment of “I rewrote my blog in another framework that I'm interested in learning”, this time in Laravel. The wrinkle we'll be exploring today is that, in contrast to Python (the last version was in Django), PHP's library story is a bit more sparse for this exact use case. However, I'm completely pleased with the outcome, so let's get busy!

PHP Markdown

Google that term and you'll find this – https://michelf.ca/projects/php-markdown/. This appears to be the most robust and well-maintained Markdown parser for PHP, so that's where I started. It's quite simple to add to a Laravel project – composer require michelf/php-markdown – and then (for the purposes of syntax highlighting) you'll want to use the MarkdownExtra class. Here's the Laravel code for rendering the article body that you're reading right now -

public function rendered_body() {
	$parser = new MarkdownExtra();
	$parser->code_class_prefix = "language-";
	return $parser->transform($this->body);
}

Pretty darn simple. The only option set here, code_class_prefix, will be explained shortly.
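On the template side, the rendered HTML can then be echoed unescaped. A minimal sketch, assuming a Blade view at a hypothetical path and a $post model instance – {!! !!} is Blade's raw-output syntax, appropriate here because the markup comes from our own parser rather than user input:

{{-- resources/views/post.blade.php (hypothetical path) --}}
<article>
  {!! $post->rendered_body() !!}
</article>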

Syntax Highlighting

In contrast to my previous post on the matter, the PHP landscape does *not* have the most robust syntax highlighting parser in the universe at its disposal. After casting about in vain for a pure PHP solution, I had one of those “I wonder if there's a JavaScript lib for this” moments. Turns out there are a couple.

Prism.js is the lib I chose, primarily because of the well-known publications name-dropped on its front page, and the fact that it ships my beloved Twilight theme right out of the box.

Installation was straightforward and took about 3 minutes following the instructions on the website.

The only trick is that by default, the Markdown parser wraps its code blocks with a class of lang-$language, and we need it to be language-$language for prism to correctly work its magic. Luckily this is exactly what that configuration item is for above.
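To make the mismatch concrete, here's a toy sketch of that class rename using plain string manipulation – this isn't the parser itself, just an illustration of the before/after markup:

```php
<?php
// What a default Markdown parser emits for a fenced "php" code block...
$default = '<pre><code class="lang-php">echo 1;</code></pre>';

// ...versus the class name Prism scans for. Setting
// code_class_prefix = "language-" makes MarkdownExtra emit the latter.
$prismReady = str_replace('class="lang-', 'class="language-', $default);

echo $prismReady; // <pre><code class="language-php">echo 1;</code></pre>
```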

Conclusion

So that's it! Literally 10 minutes worth of work between “I wonder if there's a JS highlighter that I can use instead of trying to do this in pure PHP” and having this up and running.

#laravel #php

Hello, and welcome back to Platform.sh from scratch. In this post we'll be reconfiguring your Laravel app that we've been working on in the previous posts to use Redis as a cache and session store, rather than the default file store. We'll also install the Platform CLI and use it to SSH into our application container and get a feel for the layout of the filesystem after it's deployed to its environment.

But first, I'd like to have a brief chat about Git...


Using the tools the way they're meant to be used

Ok, we've gotten this far and we're feeling good about life, but we aren't really doing anything that mindblowing yet. We've spent two posts configuring an app to run on a new hosting vendor, whooptidoo. Now, don't get me wrong – we specified our entire project's infrastructure in code. We are free to change around our project's infrastructure however we see fit, without having to file a ticket and wait for support to change it for us. And yet, we're still working on the Git branch that represents the production state of our website, aka “Master”.

Before we continue on, let's put a little insurance in place, courtesy of Git and Platform.sh.

Go to your project admin screen and click the “Branch” button, which is the orange one in the top right. Name this new branch “dev”, or really whatever you prefer.

Now say this out loud -

  • I will never push straight to master again
  • I will never push straight to master again
  • I will never push straight to master again

I'm serious. This is important. You may have worked with Git branches before, and you might be familiar with the stress-saving benefits of using them, but you've probably never worked with a hosting vendor who makes it so dead simple to really use them in your day-to-day development work. In fact, when I first started this job I told people that I worked for a hosting vendor. Now that I understand the power of the tools that we provide, I say

Platform.sh is a software company that builds tools to make your job as a developer or web application owner easier and less stressful. We also happen to host the sites with which you use our tools.

Now that you've created that branch at Platform.sh, you have a byte-for-byte copy of your master environment, complete with a web-accessible URL. Any work that you do from now on will land in that dev branch before it gets merged into master. In this way you'll be able to fully QA and test new changes before deploying them to production.

This branch only exists at Platform.sh for now, so create and checkout a local dev branch to continue on.

git checkout -b dev

A new Redis service!

So far we haven't actually built any logic into this application, nor have we even activated Laravel's built in user authentication feature, so let's go ahead and do that. Following along with the Laravel docs, run this artisan command locally to scaffold out the files that are required.

php artisan make:auth

You can run the migration locally with php artisan migrate but you'll also want to add this to the bottom of your .platform.app.yaml file -

hooks:
  deploy: |
    php artisan migrate --force

Per the previous post, this will run the database migrations for you when you deploy your app on Platform.sh. At this point you can add, commit, and push to Platform.sh and experience the joy of having bona fide user auth in just a few minutes. Thanks Taylor!

Now that you've committed that, let's head back over to the config folder and switch from using the default file session store in favor of the Redis store. At the top of config/session.php, change the SESSION_DRIVER setting from file to redis. As long as we're at it, let's also go to config/cache.php and change the default CACHE_DRIVER setting to redis as well. Now let's set up your app to use Redis.
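For reference, the two changed lines end up looking something like this (a sketch of the stock Laravel config shape; only the default fallback value changes):

// config/session.php
'driver' => env('SESSION_DRIVER', 'redis'),

// config/cache.php
'default' => env('CACHE_DRIVER', 'redis'),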

In your .platform/services.yaml file you're going to add a new Redis service -

rediscache:
  type: redis:3.0

and in your .platform.app.yaml file we're going to add that new service to the relationships section -

relationships:
  database: "pgsql:postgresql"
  redis: "rediscache:redis" # this is new!

This is all that's required on our end to add a new service to your project, but you'll need to “enhance” your app just a bit to make use of it. In the previous post we added to the top of the config/database.php file to enable your app to find the Postgres database that we're using. That file also contains the configuration for Redis, so go there now and change this -

$config = new Platformsh\ConfigReader\Config();

if ($config->isAvailable()){
  $pltrels = $config->relationships;
  $database = $pltrels['database'][0];
  putenv("DB_CONNECTION={$database['scheme']}");
  putenv("DB_HOST={$database['host']}");
  putenv("DB_PORT={$database['port']}");
  putenv("DB_DATABASE={$database['path']}");
  putenv("DB_USERNAME={$database['username']}");
  putenv("DB_PASSWORD={$database['password']}");
}

to this -

$config = new Platformsh\ConfigReader\Config();

if ($config->isAvailable()){
  $pltrels = $config->relationships;
  $database = $pltrels['database'][0];
  putenv("DB_CONNECTION={$database['scheme']}");
  putenv("DB_HOST={$database['host']}");
  putenv("DB_PORT={$database['port']}");
  putenv("DB_DATABASE={$database['path']}");
  putenv("DB_USERNAME={$database['username']}");
  putenv("DB_PASSWORD={$database['password']}");
  if (isset($pltrels['redis'])) {
    $redis = $pltrels['redis'][0];
    putenv("REDIS_HOST={$redis['host']}");
    putenv("REDIS_PORT={$redis['port']}");
  }
}

That is all that's required to enable your app to be able to use the Redis service in your Platform.sh environment. Add, commit, and push!

git push platform dev

While that's building, let's install the Platform CLI.


The Platform CLI

As the docs say, “the CLI is the official tool to use and manage your Platform.sh projects directly from your terminal. Anything you can do within the Web Interface can be done with the CLI.” Fun fact: the project management interface is actually an AngularJS application, and both it and the CLI interact with the same set of APIs on the backend to manage your project. Almost anything that you can do from the UI you can also do from the CLI, and vice versa.

Follow the instructions in this section to install the CLI, and do make sure you read the rest of that docs page for some more background information.

Logs!

Let's use the newly installed CLI to check out some logs, since logging is crucial to knowing what's going on inside not just your application but also the environment in which it's running. Running platform logs will give you several options for which log you'd like to inspect -

> platform logs
Enter a number to choose a log:
 [0] access
 [1] deploy
 [2] error
 [3] php.access
 [4] php

Let's check out the deploy.log and see what goes on in there -

[2016-10-03 17:23:01.523855] Launching hook 'php artisan migrate --force'.

Migration table created successfully.
Migrated: 2014_10_12_000000_create_users_table
Migrated: 2014_10_12_100000_create_password_resets_table

So the deploy log is simply the stdout of whatever you have in the hooks.deploy section of your .platform.app.yaml file – pretty handy for debugging as you're building up new steps. By default, the platform logs command will just cat the entire file to your screen, but you can also pass the name of the log you want to access as an argument to the command, and add the --tail flag if you want to follow the log, like so -

> platform logs --help
Command: environment:logs
Aliases: log
Description: Read an environment's logs

Usage:
 platform environment:logs [--lines LINES] [--tail] [-p|--project PROJECT] [--host HOST] [-e|--environment ENVIRONMENT] [-A|--app APP] [--] [<type>]

Arguments:
 type The log type, e.g. "access" or "error"

Options:
 --lines=LINES The number of lines to show [default: 100]
 --tail Continuously tail the log
 -p, --project=PROJECT The project ID
 --host=HOST The project's API hostname
 -e, --environment=ENVIRONMENT The environment ID
 -A, --app=APP The remote application name
 -h, --help Display this help message
 -q, --quiet Do not output any message
 -V, --version Display this application version
 -y, --yes Answer "yes" to all prompts; disable interaction
 -n, --no Answer "no" to all prompts
 -v|vv|vvv, --verbose Increase the verbosity of messages

Examples:
 Display a choice of logs that can be read:
 platform environment:logs

 Read the deploy log:
 platform environment:logs deploy

 Read the access log continuously:
 platform environment:logs access --tail

 Read the last 500 lines of the cron log:
 platform environment:logs cron --lines 500

Merging

So your dev branch should be done building by now. Check that everything is working the way you expect, and if it is, let's merge the dev branch into master. This will constitute a production deployment. You can either click the “Merge” button in your project admin UI, or you can run this from the CLI – platform merge. This will give you some interactive output so you can confirm that you're merging into the environment that you want.

One last trick for now – run platform ssh from the root of your project. Sure enough, this will SSH you into your application's PHP container, so you can get a feel for what goes on in there. A few tips -

  • The root of the application will be /app and the user will be web.
  • Those very same logs can be found at /var/log, just like normal!
  • You can check out the generated Nginx config file at the usual location as well – /etc/nginx/nginx.conf. This can be *very* useful for debugging complex configurations in your .platform.app.yaml file.
  • You can get a list of running processes the same as normal too – top. You'll see that there's not really anything going on in there beyond what your app needs to run, since OS level processes are not running in your LXC container.
  • Every app container has a Java executable. There be dragons, but you could theoretically whip up some fairly complex setups with Java dependencies if you ever needed to.

That concludes this post, and the series! We'll dive into other features, but with what you've learned in the past 3 posts you should have about 90% of what you need to orient yourself within our product.

#platformsh #laravel

Hello (!) and welcome back to Platform.sh from scratch. In this post we'll learn about how to set up the Laravel app from the previous post to hook in to various services on Platform, starting with a database connection and moving on to using Redis as a cache and session store. Along the way we'll visit Platform.sh's “environment variables” feature, and we'll set up our first fully functioning deploy hook.

Prerequisites – go through the previous post and get that far...

Let's get started!


So the first step is to add a database to your services.yaml file. Let's choose PostgreSQL, which is my personal preference among open source databases these days (mostly because it hasn't been bought by Oracle and subsequently forked). Add this to your services.yaml file, which should currently be empty.

# adds a Postgres 9.3 service to your project
# and gives it about a gigabyte of disk space
pgsql:
  type: postgresql:9.3
  disk: 1024

And in your .platform.app.yaml add this anywhere -

# This is how you define service relationships in your application
# I personally think this should've been named "services" but such is life
relationships:
  database: "pgsql:postgresql"

As you can see, setting up your project to provision new services is super easy, and as the platform matures we'll likely support several versions of any given piece of software. This will someday allow users to easily test upgrading something like a database to a new major version in another branch, without worrying about the usual hassles.

Now we need to set up our application to use these new services. This is fairly straightforward, but feels a little strange the first time so I'll walk you through the general algorithm that you'll use no matter what the framework or language you're using.

Platform.sh encodes many key pieces of information about your application into OS environment variables. If you SSH into your app container you can echo $PLATFORM_APPLICATION and get back a long string that is, in essence, a base64-encoded JSON representation of your .platform.app.yaml file. Same with $PLATFORM_ROUTES. This is how we store metadata about your application, and you'll make use of these variables to establish database connections as well. The basic algorithm for finding DB connection info is

  • read $PLATFORM_RELATIONSHIPS and base64-decode it into a JSON string.
  • parse that JSON string into an object and use the attributes of that object to set the connection info.
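The two steps above can be sketched in a few lines of plain PHP (the payload below is a made-up example, not real connection data):

```php
<?php
// Simulate the environment variable Platform.sh would set.
$sample = base64_encode(json_encode([
    'database' => [['host' => 'database.internal', 'port' => 5432]],
]));
putenv("PLATFORM_RELATIONSHIPS=$sample");

// Step 1: read and base64-decode into a JSON string.
$json = base64_decode(getenv('PLATFORM_RELATIONSHIPS'));

// Step 2: parse the JSON and pull out the connection attributes.
$relationships = json_decode($json, true);
$database = $relationships['database'][0];

echo $database['host']; // database.internal
```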

So with that in mind, let's get your Laravel app wired up.


First let's make use of a nice little composer package that Platform.sh has authored in order to simplify this bit. Run composer require platformsh/config-reader and install this package. Next let's head to config/database.php and add this little snippet to the top -

$config = new Platformsh\ConfigReader\Config();

if ($config->isAvailable()){
  $pltrels = $config->relationships;
  $database = $pltrels['database'][0];
  putenv("DB_CONNECTION={$database['scheme']}");
  putenv("DB_HOST={$database['host']}");
  putenv("DB_PORT={$database['port']}");
  putenv("DB_DATABASE={$database['path']}");
  putenv("DB_USERNAME={$database['username']}");
  putenv("DB_PASSWORD={$database['password']}");
}

You can read the source of the composer package yourself, but essentially the $config instance has properties for each of the encoded environment variables that Platform.sh adds to your environments. In the case of the relationships attribute, you might have several databases defined, so it's a simple matter of digging into that property to pull the values out.

In the case of Laravel, it makes use of the vlucas/phpdotenv package to read certain settings out of environment variables, so it's really just a matter of translating the nested variables that Platform.sh provides into what Laravel is already expecting.

I suspect it would be astonishingly easy for someone to come up with a drop-in “Laravel helper” package that would set all this up automatically, but I haven't gotten that far just yet.

With this much code, your app is now ready to connect to the database in whichever environment you'll be deploying so go ahead and commit this.

git add . && git commit -m "adding platform db config"

There's one more blocker for your Laravel app that you'll need to take care of before you're really up and running, and that's the need for the APP_KEY variable to be defined. Time to learn about Platform.sh's environment variables feature!


Platform.sh Environment Variables

You're likely familiar with the situation where you need to make use of some “privileged” data in your application that you don't want to store in Git. Or perhaps you have different settings for certain things in development than you do in production – a DEBUG flag or something like that. The usual solution for these cases is to use OS environment variables (just like we do). We provide a feature for easily setting variables that you can read in your environments, so let's set one up for the Laravel APP_KEY.

Head to your project's admin page and click on “configure environment”, then “Variables”. The simplest thing to do is pull the APP_KEY out of your local .env file, so click “add variable”, give it a name of “APP_KEY”, and put the value in the “value” field. This will trigger a redeployment of your application.

Last step is to add some code in to your application to read those variables back out. Add this to the top of your config/app.php file -


$config = new \Platformsh\ConfigReader\Config();

if($config->isAvailable()){
  foreach($config->variables as $k => $v) {
    putenv("$k=$v");
  }
}

Provided you've already set up that APP_KEY variable, it'll be read further down that file and you're good to go, so let's commit and push to Platform.

git add .
git commit -m "adding platform environment variables config"
git push platform master

This post is already a bit long, so I'll just touch on one last point quickly. Now that you have this database all hooked up and ready to go, you're likely going to want to use it! Should you want to automate the process of applying your database migrations, you can do that in a simple deploy hook.

hooks:
  deploy: |
    php artisan migrate --force

The --force flag sounds scary, but all it does is disable the command's interactive confirmation prompt, which of course you aren't around to answer during an automated deploy. The other option is to SSH into the app server after deployment and run the command yourself, which I'll demonstrate in the next post.

#platformsh #laravel

Hi there and welcome back to Platform.sh from scratch. In this post we'll convert a Laravel app for use on Platform and learn a few tricks that will hopefully inform converting any app for use on Platform.


Step 1 is to get going with a new Laravel app, so follow the instructions on installing Laravel and setting up a new project. Initialize a git repo, add a new platform, and add the Platform.sh git remote to your local repo. All of this is documented in the previous post.

Now, at this point you can try and push code to us but we'll reject it because you don't have any of the Platform config files in place. Let's use the exact same routes file as the previous Silex project.

# .platform/routes.yaml
"http://www.{default}/":
  type: upstream
  upstream: "app:http"
"http://{default}/":
  type: redirect
  to: "http://www.{default}/"

So that's step one. We'll get to the services.yaml file in just a minute, but let's go ahead and stub it out – touch .platform/services.yaml.

Now let's get to work on the .platform.app.yaml file, which will define what your new Laravel app will need to run. One of the key differences between this application and the previous one is that for this one we're actually going to need some writable disk space. Laravel expects a few directories to be present (and to be writable by the web user) in order to write logs and caches and such. We glossed over that bit in the previous post, so I'll now take a moment to talk about Platform.sh's read-only filesystem.


[!info]

Aside – the read-only filesystem

Platform.sh, like some other cloud PaaS providers, utilizes a read-only filesystem. When your application is deployed, we package up a snapshot of your application code and mount it into its environment. This means that the days of editing code directly on the server, or of FTPing code up to the server, are effectively over.

All of your app's code must be in Git in order to be deployed, which has quite a few advantages – not least of which is accountability for who did what, and when, to your codebase. Of course, you likely need some part of your filesystem to be writable for logs and file uploads, so we take care of that for you, but first let me expand on the benefits of going read-only.

Benefit one – consistency

As we've discussed, Platform.sh's entire workflow is built around Git. This means that each commit has a SHA hash that uniquely identifies it within your project. If you cut a new Git branch from another, the new branch's tip commit will have the same SHA hash as the branch you cut it from.

~/work/php/magento-platform-sh (master 93783b2)   # <- SHA hash of this commit
$> git checkout -b test_branch
Switched to a new branch 'test_branch'

~/work/php/magento-platform-sh (test_branch 93783b2)   # <- same SHA hash

Platform.sh sees that the two SHA hashes are the same and doesn't bother building your new environment's codebase again, since it's already built it. It just uses the already-packaged code snapshot from the original branch and creates an environment around it. This saves time, but where it really shines is in the opposite direction – on merge. When merging a feature branch into a long-running develop or master branch, Platform.sh sees that the code snapshot has already been built and deploys that snapshot into your master environment.

What this means is that you are 100% guaranteed that the code being deployed into your master environment is precisely the same code that you just tested out in your feature branch. Nobody snuck anything new in there via FTP or by editing directly on the server, so you can be confident in your deployments. This leads neatly into benefit two...

Benefit two – security

As you're likely aware, there is a large class of exploits, particularly in PHP web apps, that takes advantage of the fact that a great many of those apps allow write access to files that the web server will execute. This means that nefarious users can sometimes find security holes that allow them to upload executable PHP files to the server and then use those files to gain "elevated privileges", another way of saying "hack your server". With a read-only filesystem, many of those exploits are blocked before they can even happen.

Lastly, of course you need to write some files

So yes, your web app likely has something in there that needs to be writable. It might be for uploads, or for caching, or for logs. We've got you covered, but you have to specify which directories to make writable in your `.platform.app.yaml` config file.

So I'm just going to drop the `.platform.app.yaml` in here and explain it bit by bit.

# the name of this particular app, remember that we allow you
# to create a project out of 1 or more apps, so this gives our
# Laravel app a name...
name: app
type: php:7.0 # SSIA
build:
  # Same as before, this alerts our system to look for a composer.json
  # or composer.lock file and install the dependencies defined therein
  flavor: composer
# basic web configuration for this particular app. Laravel apps have a
# "public" folder that serves as the web docroot.
web:
  locations:
    "/":
      root: "public"
      index:
        - index.php
      allow: true
      passthru: "/index.php"
# How much disk space to allot to this app. 
disk: 2048
# This is where you define your writable file system paths. the keys are the
# paths in your app that need to be writable/uploadable. The values are always
# going to be named "shared:files/$WHATEVER_HERE", where "WHATEVER_HERE" can be
# any arbitrary identifier.
mounts:
  # Laravel uses a directory off the root called "storage" for logs and cache.
  "/storage/app/public": "shared:files/storage/app/public"
  "/storage/framework/views": "shared:files/storage/framework/views"
  "/storage/framework/sessions": "shared:files/storage/framework/sessions"
  "/storage/framework/cache": "shared:files/storage/framework/cache"
  "/storage/logs": "shared:files/storage/logs"
  # And another cache directory here.
  "/bootstrap/cache": "shared:files/bootstrap_cache"

So this brings us to a decision point. Inside the storage directory are nested a few more directories. Laravel sets these directories up for you and then drops a .gitignore inside each of them. This is handy, but presents a small challenge to Platform.sh. Any directories that you declare as writable (or mountable) will be emptied out on the first build and deploy. This means that those nested directories that exist in your git repo will be deleted and you'll be left with an empty storage directory.

This will cause you some headaches when you try to deploy your Laravel app. There are two solutions to this – either add each directory that needs to be writable inside of storage to the mounts directive or recreate them in a “deploy hook” instead. We'll go with option A but I want to introduce you to hooks, so here's what option B looks like -

# .platform.app.yaml, after all the rest ...
mounts:
  # Laravel uses a directory off the root called "storage" for logs and cache.
  "/storage": "shared:files/storage"
  # And another cache directory here.
  "/bootstrap/cache": "shared:files/bootstrap_cache"
hooks:
  deploy: |
    mkdir -p storage/app/public
    mkdir -p storage/framework/views
    mkdir -p storage/framework/sessions
    mkdir -p storage/framework/cache
    mkdir -p storage/logs

This second method took quite a bit of trial and error to figure out, and either method is valid. It would've taken me a bit of trial and error to land on option A as well, since I'm not very familiar with Laravel.

Now I'll take a moment to explain what hooks are.

Aside – hooks

Hooks are what they sound like – commands that will execute at certain points in the deployment lifecycle. In this case, we have two – build and deploy.

Build hooks run while your app is being packaged up, before it's sent to the application's environment. The filesystem is still writable at this point, so if you need to make any modifications to the file structure of your project, this is your chance to do so. Since your project doesn't have an environment yet, it doesn't yet have access to the various services that you've declared in services.yaml. So, no database is available at this point.

Deploy hooks run after your project has been mounted into the app environment. The versioned file system is no longer writable, but you do have access to your services at this point. This is typically when you do things like migrate databases or clear caches.

So you might notice that we're performing those mkdir calls in the deploy hook, which seems to contradict what I just said. However, those directories are being created inside of what you've declared as a writable directory in the mounts directive, so nothing breaks and indeed you have access to that mounted directory, where you wouldn't have had access in the build hook.
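To summarize the two hook types in one sketch (the build command is a hypothetical placeholder; the deploy lines mirror option B above):

hooks:
  # Build: filesystem writable, but no services (e.g. database) available yet.
  build: |
    echo "nothing to build yet"
  # Deploy: code is read-only, but mounts are writable and services are up.
  deploy: |
    mkdir -p storage/logs       # works because /storage is a mount
    php artisan migrate --force # works because the database is reachable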

Here's the full .platform.app.yaml at this point, for reference.

name: app
type: php:7.0
build:
  flavor: composer
disk: 2048
web:
  locations:
    "/":
      root: "public"
      index:
        - index.php
      allow: true
      passthru: "/index.php"
mounts:
  # Laravel uses a directory off the root called "storage" for logs and cache.
  "/storage/app/public": "shared:files/storage/app/public"
  "/storage/framework/views": "shared:files/storage/framework/views"
  "/storage/framework/sessions": "shared:files/storage/framework/sessions"
  "/storage/framework/cache": "shared:files/storage/framework/cache"
  "/storage/logs": "shared:files/storage/logs"
  # And another cache directory here.
  "/bootstrap/cache": "shared:files/bootstrap_cache"

So save that, commit it to Git, and push it to your Platform git remote, and you should be on your way. You can find the full repository here on GitHub. The most important difference between that repo and this project is that we have yet to set this project up with a database connection, which we'll do in the next post.

#platformsh #laravel

Hi there and welcome back to Platform from scratch. Today we're going to take a very simple Laravel application and hook it up to Postgres as its backend database.

The complete example application can be found here – https://github.com/JGrubb/platformsh-laravel-example

The very first step of this will be to add in the appropriate .platform/services.yaml file. This file was left intentionally empty in the setup for this Laravel application, as we didn't have a need for a working database and were just getting our app set up and running. Now however, we're going to add a very simple configuration into services.yaml.

# This is the "name" of this service and can be any arbitrary string.
# You could name this "foo" and that's the name you'd use in your
# .platform.app.yaml file in the next step.
pgsql:
  # This is the actual service you'll be using.
  type: postgresql:9.3
  # How much space you want to give this in megabytes. 
  # 1 gig will get us going
  disk: 1024

Over in .platform.app.yaml we'll add in the relationships section to the file.

relationships:
  # This config takes the form of "relationship_name: service_name:driver"
  #
  # Instead of "database", you could also call this "bar" and that's how it'll
  # show up in the $PLATFORM_RELATIONSHIPS environment variable. I'll show
  # you how that manifests itself in just a moment.
  #
  # "pgsql" in this case is the name you gave it in services.yaml and
  # "postgresql" is the part that you can't just arbitrarily name. That
  # is the hint to our container build system that you need a PG database
  # service.
  database: "pgsql:postgresql"

Commit this and push it up to your platform remote. This will trigger a rebuild of your application, and when that's done let's SSH into your environment with the Platform CLI – $ platform ssh.

Once you're in, try this – echo $PLATFORM_RELATIONSHIPS. You'll get a big base64 encoded string as a result. This is because you can't set complex objects or even JSON as an environment variable, so let's decode that by piping it to base64 --decode – echo $PLATFORM_RELATIONSHIPS | base64 --decode

This should give you back something like

{"database": [{"username": "main", "password": "main", "ip": "250.0.96.171", "host": "database.internal", "query": {"is_master": true}, "path": "main", "scheme": "pgsql", "port": 5432}]}

Sidebar

Just for the fun of it, let's try this in the relationships section -

relationships:
  dark_chocolate: "pgsql:postgresql"

Sure enough —

web@4ikq2xigwlw5s-master--app:~$ echo $PLATFORM_RELATIONSHIPS | base64 --decode
{"dark_chocolate": [{"username": "main", "password": "main", "ip": "250.0.96.171", "host": "big_daddy.internal", "query": {"is_master": true}, "path": "main", "scheme": "pgsql", "port": 5432}]}

So the basic gist of how you establish a connection to any kind of service that you set up in services.yaml should now be clearer than it was before, and we'll now set about adding the code to our Laravel app that will make use of these environment variables.

// at the top of config/database.php. This will decode the base64
// encoded envvar and expand it into the variables that Laravel is
// expecting.
if ($relationships = getenv('PLATFORM_RELATIONSHIPS')){
  $pltrels = json_decode(base64_decode($relationships), TRUE);
  $database = $pltrels['database'][0];
  putenv("DB_CONNECTION={$database['scheme']}");
  putenv("DB_HOST={$database['host']}");
  putenv("DB_PORT={$database['port']}");
  putenv("DB_DATABASE={$database['path']}");
  putenv("DB_USERNAME={$database['username']}");
  putenv("DB_PASSWORD={$database['password']}");
}

This particular piece of code is not Postgres specific, and in fact will work just fine with MySQL as well. The beauties of abstraction...

The final step in this process is optional, but if you want to have artisan migrate the database on deploy rather than logging into the server and running it manually you'd add this to the bottom of your hooks.deploy in .platform.app.yaml —

hooks:
  deploy:
    # other commands
    php artisan migrate --force

The --force flag will allow migrate to run in a “production” environment.

There is one final step to be aware of, and that's that the pdo_pgsql extension is not enabled by default in the PHP containers. You'll need to add this somewhere in .platform.app.yaml -

runtime:
  extensions:
    - pdo_pgsql

If you were using MySQL, this step would not be needed as pdo_mysql is enabled by default. Indeed, if you're using Postgres, you can disable the MySQL extension if you wish -

runtime:
  extensions:
    - pdo_pgsql
  disabled_extensions:
    - pdo_mysql

For reference, here's the complete .platform.app.yaml —

name: app
type: php:5.6
runtime:
  extensions:
    - pdo_pgsql
build:
  flavor: composer
disk: 2048
web:
  locations:
    "/":
      root: "public"
      index:
        - index.php
      allow: true
      passthru: "/index.php"
mounts:
  "/storage": "shared:files/storage"
  "/bootstrap/cache": "shared:files/bootstrap_cache"
relationships:
  database: "pgsql:postgresql"
hooks:
  deploy: |
    mkdir -p storage/app/public
    mkdir -p storage/framework/views
    mkdir -p storage/framework/sessions
    mkdir -p storage/framework/cache
    mkdir -p storage/logs
    php artisan migrate --force

#platformsh #postgres #laravel

Hello, and welcome to “Platform.sh from Scratch”. In this prologue to the series, I'll go over some of the very highest level concepts of Platform.sh so that you'll have a clearer understanding of what this product is and why it came to be.

Platform.sh is a “Platform as a Service”, commonly referred to in this age of acronyms as a “PaaS”. The platform that we provide is essentially a suite of development and hosting tools that makes developing software applications a smoother end-to-end process. In order to understand what this means, though, I'm going to have to go into some detail in this first sidebar. Skip it if you're already comfortable with these concepts.


[!abstract]

PaaS

Everyone has heard of Salesforce. Salesforce has come to be the poster child for what is now referred to as “SaaS” – Software as a Service. Prior to the SaaS era, if you wanted a piece of software, be it a video game or Quickbooks or anything else, you had to drive to a store and buy a box with some disks in it. Once the internet reached a level of market penetration into people's homes, though, those stores went out of business. This is an obvious evolution in hindsight. SaaS is a high level thing – it's a runnable piece of software that you access over the internet via a URL. You might be able to modify or configure it a little bit, but it will never be your entire business. It's not your product. It's someone else's, and it will likely play some fractional part in your overall business plan.

Almost everyone by this point has heard of Amazon Web Services – AWS. AWS is basically what people are talking about when they say “The Cloud”. AWS is a suite of products that emerged from Amazon when they figured out that they needed a huge amount of datacenter capacity to be able to withstand massive retail events like “Black Friday” and “Cyber Monday”, and that for most of the rest of the year they had tons of excess capacity sitting around draining money from their wallet. What to do with all that excess capacity? Sell it to someone else.

This relatively simple premise has evolved over the last 10-12 years into numerous products from S3 (basically a giant, limitless hard drive in the sky) to EC2 (basically a giant, limitless hosting server in the sky) to Redshift (basically a giant, limitless database that can be used for data warehousing) to SES (a simple service that sends emails) to an ever growing host of other services that always seems to come out just before your start-up figures out that it needs them.

AWS and “the cloud” in general is often given the acronym “IaaS” – infrastructure as a service. They're selling you the low level hardware abstractions that you can assemble into an infrastructure on which to run your software and by extension your company. It requires a decent bit of specialized knowledge for how to use the individual pieces as well as how to plumb them together, but for all intents is infinitely flexible. It's this level that has had most of my interest for the past few years.

In the middle of these two is what's called “Platform as a Service” – PaaS. This is what Platform.sh is – a suite of software and hosting services that lets you efficiently build and develop your software application, and then deploy it to a hosting environment that doesn't require as much specialized knowledge about how to plumb all the pieces together. Nor does the hosting environment require you – and this is a most important detail – to set up monitoring and alerting in case something goes wrong in the public environment.

The PaaS takes elements of both IaaS and SaaS to allow you to build your software product but not have to hire an extra person just to know the low level server business.

So, back to the program. The development tool set of Platform.sh is entirely based around Git. Just in case the reader is not already familiar with Git, I should explain this a little bit.


[!abstract]

Git

Software projects are typically composed of lots of files. If you want to add a new feature, you might be required to make changes in more than one of those files. Of course, before you get started you'll want to make some kind of backup just in case. If it turns out that the change was buggy or unneeded and you want to revert back to a previous state, you'd just restore those few files back to their previous versions.

What if, however, you're working with a bunch of different people and more than one person is working on that change (an utterly common scenario)? How do you manage those backups between all those people? Saving copies of files becomes impossible to manage after a very short while, so out of this need SCM (source code management) was born. SCM has been through several iterations by now, and the version leading the market today is called Git.

Git is pretty cool. It basically takes snapshots of your entire project whenever you tell it to. It then keeps track of all those snapshots and lets you share those snapshots among a team of developers. Any snapshot can be reverted, and you can see the full history of every change to the codebase so you can keep track of “what happened when”. But wait! There's more!

This is not an exclusive feature of Git, but it has a feature called “branching”. Branching is intuitively named, and is basically the concept of taking a specific snapshot and making changes based off of that one snapshot while other work continues on down the main code line. This is the recommended way to work if you're going to make any kind of significant change to the software, and this method of working allows you to keep the main code line (almost always referred to as the “master branch”) in 100% working order. It can be thought of as having a furniture workshop away from your house where you can work and keep the house clean for company to come over at any moment, as opposed to working in the house and risking having a wreck to present should company decide to drop by.

In essence branching is making a complete copy of your project at a point in time that you can hack on all you like without disturbing anyone else. If and when the change is ready, you “merge” the code back in to the master branch, test it out to make sure everything is still groovy and then you can release the feature or bug fix to the public.

gitGraph
   commit
   commit
   branch develop
   checkout develop
   commit
   commit
   checkout main
   merge develop
   commit
   commit
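The branch-and-merge cycle in the diagram above takes only a handful of commands. Here's a minimal sketch you can run in a scratch directory (the branch and file names are made up, and `git init -b` assumes git 2.28 or newer) —

```shell
# Set up a throwaway repo with a throwaway identity
repo=$(mktemp -d) && cd "$repo"
git init -qb master
g() { git -c user.email=demo@example.com -c user.name=demo "$@"; }

echo "v1" > app.txt && git add app.txt && g commit -qm "initial snapshot"
git checkout -qb new-feature           # branch off the current snapshot
echo "v2" > app.txt && g commit -qam "work on the feature"
git checkout -q master                 # app.txt still says "v1" here
g merge -q new-feature                 # bring the finished work back in
cat app.txt                            # now says "v2"
```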

You can read more about the super basics here if you wish. For now, all you really need to know is that Git

  • Makes it easier to develop software as a team
  • Makes it very cheap and easy to try out new features without breaking anything
  • Makes it easier to manage changes to your software and to revert back to a known non-broken state

update: Hey look, a really great post explaining all this better than I did.


Platform.sh has taken this branching and merging workflow and extended it out into the entire hardware stack. When you're building a software project of any size, there are considerations beyond just the code your team is writing.

Most applications of any size connect to some kind of database in the background; this is where they save their “data stuff”. User uploaded images are a very common thing in the web app world, so if those images aren't there the app will look busted.

You can branch your code all you like, but you need these other supporting resources to really do your job. Platform.sh makes branches not just of your code, but the entire infrastructure that your project runs on. This allows you to use the common branching/merging workflow with the complete support of everything else that your application depends on.

This may seem like an obvious feature – how can you develop a new feature without being able to run it? – but no other service that I know of actually does this. A branch in Git triggers (for all intents) a complete copy of your production site without requiring you to set up any new servers, copy databases over, copy images and everything else, etc, etc, etc. It's a significant hassle to do all this stuff, trust me, and it slows the team down every time you have to do it. Removing this need removes a major friction point in the workflow for building new features on your software product.

But wait! There's more!!!

This is where it starts getting really, really good. In case you're not aware, there's a website called GitHub. It's where a whole lot of folks have decided to host their “git repos” – repo being short for “repository”, which is basically that series of snapshots of the state of your project/codebase back to the beginning of time. This is the repo for this blog – https://github.com/JGrubb/django-blog, and here's some of the code that just ran to generate this page you're reading – https://github.com/JGrubb/django-blog/blob/master/blog/views.py#L26-L33. Pretty cool, right? And if I were working on a project with a buddy, we could both use this same repo and work on the same project, whether I'm in Germany or New Jersey or wherever. I can pull his changes over and he can pull mine and this is basically how open source software gets written these days.

The same workflow applies though – if you want to make a new feature or even if you just want to fix a bug, you'd make a new branch and do your work and then “submit a merge request”. This basically pings the person who runs the project and says “hey, I would like to suggest making this change. Here's the code I'm changing, maybe you could look it over and if you agree with this change you can merge it in”. By way of an example, here's a list of “pull (aka merge) requests” for the codebase that comprises the documentation for Platform.

Again, this is how software gets written and it's pretty mind blowing if you think about it. We software developers are so used to it our minds cease to be amazed, but not because it's not amazing. I mean, currently participating in that list of PRs are folks from France, Chicago, Hong Kong, the UK, and so on. Amazing. It is also, however, a pain in the ass.

It's a pain in the ass because it's typically impossible to tell if something works or not just by looking at the code, so you have to pull their changes over to your computer and test them out somehow. I bet you can see where this is going! Platform has a GitHub integration (BitBucket too) that will automatically build a working version of any merge request that someone opens against your project. That lets you go visit a working copy of the project and test it out without having to do a thing. Now, I don't care how long you've been doing this, that is mind blowing. For example, here's Ori's (currently work in progress) PR for adding the Ruby runtime documentation – https://github.com/platformsh/platformsh-docs/pull/339. If you click the “show all checks” link down toward the bottom, it expands with a little “details” link. That link takes you to a complete copy of the documentation with Ori's change added to it, so you can read it like you normally would, rather than reviewing a “diff”. It's the future now!

What this means in the wider scope is that the time you used to spend setting things up to test a new idea, only to tear them down once the tests pass, is time you no longer have to waste. You can test changes out and keep right on moving.

This GitHub integration is only one of the really cool and unique features that Platform provides, but this post has gotten absurdly long already. Fortunately, this is intended to be the prologue to this series, so I'll touch on as many of those features as I can as the series progresses.

#generaldevelopment #devops #theory #platformsh

Hello, and welcome back to “Platform.sh from scratch”, see also the first post in the series. The goal here will be to augment the official documentation with a short tutorial that shows how to set up a project for proper functioning on Platform.sh. We'll dive into the “why” as little as possible here. For now let's dive straight into the “how”.

We're going to start with a very basic application, the example Silex app on the front page of the Silex website. This will be a standard Composer based project, so we'll need a composer.json file to start with.

The project structure will look like this —

jgrubb in ~/play/php/silex-test on master λ tree -I vendor
.
├── app
│   └── index.php
├── composer.json
└── composer.lock

The composer.json file can be created by running composer require silex/silex, or you can just copy this into composer.json at the root of your project directory —

{
  "require": {
    "silex/silex": "^2.0"
  }
}

Run a quick composer install and the rest of the dependencies will be pulled down and placed into the standard vendor directory. We're going to be using Git here, and in general you don't want to version 3rd party dependencies like those that Composer downloads. Let's create a .gitignore file and add the vendor directory to it.

echo "vendor" >> .gitignore

The entirety of the application codebase looks like this —

// in app/index.php
require_once __DIR__.'/../vendor/autoload.php';

$app = new Silex\Application();

$app->get('/hello/{name}', function($name) use($app) {
  return 'Hello '.$app->escape($name);
});

$app->run();

Very simply, all this app does is add a route that responds to requests along the path of hello/{whatever}. As long as you've used the same directory structure, you can cd into the app directory and run php -S 127.0.0.1:8080 index.php (passing index.php as the router script so every request reaches the app) to fire up the local PHP webserver, and then head to localhost:8080/hello/user.

If all is working as expected, let's head to Platform and get this thing ready for the development process.


Navigate to the platform website and register a new account. You have 1 month to (freely) evaluate whether or not Platform fits your needs, so let's get going. I'm assuming that you can find your way through the registration and login workflow and find yourself back on the “your account” page, so let's add your SSH public key to your account – that will be all for configuring your account for now. Under “account settings” –> SSH keys you can add a public key.

[!info] Sidebar – SSH public keys

This is required for one main reason – it allows us (Platform) to securely authenticate you when you push code to a project. This is standard for most public code repos (GitHub, Bitbucket), and we use this method as well. A massive side benefit of this workflow is that it allows any Platform account holder to invite any other Platform account holder to their project. This means that agencies can invite developers to projects, users can invite other developers to help with their projects, and the overall friction of matters of authentication and authorization on projects is reduced to virtually zero. You won't likely notice this benefit until a little later on, but when you want to share a project (or even a specific branch of a project) with another user no new account/password/validation workflow is required, and work can begin immediately.

Once that's done, let's go back to the main account page and “create a new platform”.

To me this workflow is pretty self explanatory and the defaults are the correct settings for now, so select which region you'd like your project to be hosted in and get through the checkout workflow. Like I said, nothing will be charged for a month, so have no fear. I have to go through this flow as well, and I work here...

After you get through that flow, you'll be dropped into the “projects” admin of Platform.sh. This is where you'll likely be spending the vast majority of your time.

Name your app (I'm creatively naming mine “Example Project”), and then for this project you'll want to choose to “import your existing code”. This option will present you with a URL for a git repo to which you'll be pushing your code. Now is the time to initialize a git repo in your codebase.

git init


[!info] Sidebar – infrastructure as code

We'll get into the mechanics later, but this would be a good opportunity to explain the overall ethos of Platform.sh which is that you are going to be specifying your infrastructure – that is the underlying software systems on which your project will be running (MySQL, Redis, etc) – in code. You'll be able to manipulate the infrastructure required to run your project in the same way that you manipulate the behavior of your app through writing code, and you'll push these hardware requirements to us in the form of code.


Run git add ., which will add all 4 files in your project to git, and then git commit -m "init commit" to commit your code. After that you'll want to cruise back over to the Platform project admin screen and copy the lines under “Push an existing repository on the command line” and drop them into your terminal. This will add the Platform git server as a remote for your project, so now you have somewhere to push your code. We're almost there!

But we're not totally there yet – there's one more step. You need to tell Platform what your project needs to run, or you won't be able to push your code up to us.

All Platform hosted projects require 3 files – .platform/routes.yaml, .platform/services.yaml and .platform.app.yaml. The routes file is just that – it's sort of like a front controller for your entire project. What this means in practice is that you can have multiple applications running in your project (under different paths), but for now you really just need to route any request to your little example application.

This is a nice standard starting point for any given PHP project, so place this in .platform/routes.yaml —

"http://www.{default}/":
 type: upstream
 upstream: "app:http"
"http://{default}/":
 type: redirect
 to: "http://www.{default}/"

No, this is not the most beautiful file, but all you really need to know about this is that all URLs that enter this project will either have a base URL of

  • www.whatever.foo and will be routed to your codebase, or they'll be
  • whatever.foo and will be redirected to www. See step 1.

Another file that you need to have in place is the .platform.app.yaml file, which is a file that describes the high level requirements of this application. The most bare bones file is all that we need and it'll look like this —

# The name param is linked to the "upstream" parameter in
# routes.yaml. If you called the app "foo", then the
# upstream parameter would look like `upstream: "foo:http"`
name: app
# The "type" parameter takes the form "language:version".
# This could be `python:3.5` for example
type: php:5.6
# Look for a composer.lock (or composer.json) and download
# the listed dependencies
build:
 flavor: composer
# How much disk space will this app need? This is primarily used for
# user uploaded assets, so for this application you don't really need
# anything here, 256 would be fine. You can always grow
# this later, so this is a safe starting point. (in MB)
disk: 2048
# Now that a request has gotten this far, how do you want
# it handled? We'll go into more detail about these params
# in a later post. This section can be thought of as
# somewhat analogous to an Apache or Nginx config file.
web:
 locations:
 "/":
 root: "app"
 passthru: "/index.php"
 index:
 - index.php
 allow: true

There is more information on the documentation website about this file, and it's all worth your time.

The services file will define what other services (this is where MySQL comes in) your app depends on but since we don't need any yet, this can remain empty. It *does* need to be there however, or you won't be allowed to push your project, so for now just create an empty file – touch .platform/services.yaml.

Your project's file layout should now look like this (excluding git stuff) —

├── .platform
│   └── routes.yaml
│   └── services.yaml
├── .platform.app.yaml
├── app
│   └── index.php
├── composer.json
└── composer.lock

So with those three files in place, add them and commit them to your git repo —

git add . && git commit -m "adding platform config"

git push platform master and you are off! If you are still looking at the dialog screen in the Platform project admin, you can click “continue” now and you will (or will shortly) see a log screen of all the relevant activity for this project – git commits and the project creation before that.


After this and every successful git push to Platform those files will be analyzed to make sure that the infrastructure that your project requires and the infrastructure that is available to that project are in line. If something has changed or if this is the first time you've pushed code to this project, the environment will need to be set up with the services that are expected. This takes a moment, and then your code will be mounted into the environment. At this point you'll have a running project that you can visit by going to the project admin dashboard and following the “access site” link near the top of the page.

This concludes this step! It may seem like a lot to get a 5 line PHP project going, but think of what you *didn't* just have to do – spin up a server, set up a shell environment that feels home-y on that server, set up a LAMP stack, set up a build process for getting your code onto the server in a defined, runnable state, fuss with DNS or local host entries. We haven't even touched on the aspects of Platform that will completely blow your mind, so stay tuned.

#platformsh

So this is it, my week in between my old job and my new job. And I'm bored out of my mind. So I'm going to do something I've never done before – write a blog post on my phone. We'll see how this goes.


I was texting with one of my former coworkers a little earlier today. He was picking my brain about how the AWS command line tools work, so I was explaining some things to him in too much detail. I've noticed this thing that I have where I want to explain the why of things and not just give a one liner that can be copied and pasted to accomplish a job. If this blog has any regular readers, maybe you've noticed this as well. Whatever.

So he was kicking some cli one liners to me to look at, and in one of them he was providing incorrect arguments to one of the options. I referred him to the excellent documentation page on this particular command, and I think he must've gotten it sorted out after that because I didn't hear back from him. I realized consciously something that I guess I've been doing a lot over the last few years – reading a lot of documentation, for fun.

I think one of the common modes to be in when you're reading documentation is that of trying to figure out a solution to a specific problem. I'm trying to figure out how to upload some files from my laptop to S3, so I'm scanning the page for the correct arguments to pass. I'm trying to figure out how to avoid the N+1 problem on the blog listing page, so I'm reading up on which methods are available in the Django ORM. This is fine, obviously, but I think what really separates senior devs from non-senior devs is the aimless wandering through the documentation for a project.

It's in these wanderings that you discover what a tool can do before you actually need it to. When the need finally does arrive, it's much less of an interruption to your workflow to look up the correct syntax for a feature that you already know exists. Otherwise you're stopping productive work for X amount of time to see if your concept of the solution has some corresponding feature in said tool, and by that time your flow is shot.

So when admonished to RTFM, take it as a passive aggressive offering of really good advice.

#generaldevelopment

I'm starting a new gig in a couple weeks. I found out about the existence of the gig in the first place because I follow the guy who will be my new boss on Twitter, since he's been around the Drupal scene a very long time, and following the leaders in your industry is basically one of the core things that Twitter is *for*.

You don't need to necessarily know every single leader in every single lang, but if you work on the web and want to move your career forward, you'd do well to know who most of these people are, in absolutely no order. This is absolutely (and hopefully obviously) not an exhaustive list, more of a starter pack.

  • Dries Buytaert
  • Taylor Otwell
  • Fabien Potencier
  • Guido van Rossum
  • Jacob Kaplan-Moss
  • Kenneth Reitz
  • Matz
  • David Heinemeier Hansson
  • Rich Hickey
  • Brendan Eich

If you're a Drupal dev, you should add these to the list

  • Larry Garfield
  • Robert Douglass
  • Peter Wolanin
  • Angie Byron
  • Jeff Eaton

Getting to know not just the tools out there but the folks who made them and their reasons for making them can really help you decide if a given tool was created for a use case like yours.

#generaldevelopment

Problemspace

We recently migrated from using a local Drupal filesystem (Gluster) to using S3 to house our uploaded site assets. This was relatively simple, and killed at least two birds for us, metaphorically speaking. Some of my findings are chronicled in the previous post linked above.

We are loving that we don't have to worry about syncing files between environments anymore, which means that when we are developing a site locally, the image sources are all pointing to their S3 URLs and so everything Just Works. The only tiny problem is that if anyone needs to upload an image in development it goes up to the same production S3 bucket. Obviously this costs us next to nothing, but it bothers my sense of cleanliness.

Solutionspace

S3 has a “lifecycle management” feature that will let you Do Stuff with your bucket assets. “Do Stuff” means things like deleting assets after a certain period, or moving them to another “storage class”, which is not in the scope of this post...

The limitations of their lifecycle management are a major bummer. Rules can only be applied to prefixes (“directories”) within a bucket or to entire buckets themselves. They cannot (simply) be applied to individual objects. If they could, then the fix would be simple – have a hook on file uploads that adds a “delete-after” header to objects that are uploaded from anything but the production environment.
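For concreteness, a prefix-scoped rule is about as granular as S3 lifecycle management gets. Here's a sketch of such a rule; the rule ID, bucket, and prefix are invented for illustration, and you'd attach it with the aws CLI as shown in the commented-out line —

```shell
# The closest S3 gets: expire everything under a prefix, never one object.
# (Rule ID, bucket name, and prefix below are invented for illustration.)
cat > lifecycle.json <<'EOF'
{
  "Rules": [{
    "ID": "expire-dev-uploads",
    "Filter": {"Prefix": "dev-uploads/"},
    "Status": "Enabled",
    "Expiration": {"Days": 7}
  }]
}
EOF
# Attach it (requires the aws CLI and credentials):
# aws s3api put-bucket-lifecycle-configuration \
#   --bucket my-site-assets --lifecycle-configuration file://lifecycle.json
```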

I'm starting a new job in a few weeks, and they use Ceph for managing network files. I haven't even gotten on board with this gig yet, so I don't know what's going on behind the scenes, but Ceph does have this individual object expiration feature, at least according to their bug tracker. I'm wondering if this can be brought to bear on this issue, because once you don't have to move or copy files between environments anymore, going back feels kind of anachronistic.

#devops