Ignored By Dinosaurs 🦕

platformsh

WTF is “continuous integration”??

So, back in the old days, building websites was simple. You had plenty of options for how to build one, but me personally, I was totally fine with WordPress and Drupal. They didn't force me to know what was going on under the hood to be productive, and they provided a way for me to learn that stuff along the way while feeding my family. I learned CSS on the job while building the first couple of things I ever built.

Cascading Style Sheets (CSS) is a stylesheet language used to describe the presentation of a document written in HTML or XML (including XML dialects such as SVG, MathML or XHTML). CSS describes how elements should be rendered on screen, on paper, in speech, or on other media. – https://developer.mozilla.org/en-US/docs/Web/CSS

Being a curious and hungry developer, I soon reached for a tool I saw discussed called Sass, something called a “CSS preprocessor”. Basically, it was a computer language that looked like CSS and generated CSS. It allowed you to reuse bits and pieces of styles in ways that stock CSS did not at the time.

It was a tool that made your life as a front end developer easier, but there was a catch – now you had to process that Sass into CSS at some point along the way. There was now an extra piece of gear involved in your development → production lifecycle.

Another example

Frontend development is insanely complex these days. Plain old JavaScript that browsers can digest and execute has exploded like a million suns into various dialects, innumerable frameworks, and more tooling than any sane developer can keep track of. Gone are the days of including jQuery (and my interest in frontend dev along with it).

This (arguably) helps developers be more productive and provide high-quality experiences to their audience, but there is much, much more to think about in terms of how to deliver those experiences to the end user. Yes, my marketing speak is rather loathsome.

By “deliver” I mean the nuts and bolts – all that JavaScript has to go over a network connection from a server in a rack to a user's browser in order to execute and Do Stuff. You don't want your user to have to download 500,000 JS files weighing 40MB just to load your website, so you do a bunch of stuff to optimize that:

  • you smash all those files into one file; this cuts down on the number of requests, because requests cost time
  • you compress that file into something smaller using gzip or the like; this cuts down on the weight of the code traveling over the wire, because that costs time too
  • you might have to compile your chosen dialect of JS into “vanilla” JS that the browser can actually execute.
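By hand, the first two of those steps might look something like this (a toy sketch – the file names and contents here are made up stand-ins for a real project):

```shell
# two (hypothetical) source files standing in for your real JS
printf 'console.log("a");\n' > src-a.js
printf 'console.log("b");\n' > src-b.js

# step 1: smash them into one file (fewer requests)
cat src-a.js src-b.js > bundle.js

# step 2: gzip the bundle (less weight over the wire); -k keeps the original
gzip -kf bundle.js
```

In practice a real server handles the gzip part on the fly, but the principle is the same.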

And you're not going to want to do this by hand every time, so you reach for a tool that does it all for you. There is now an extra piece of gear involved in your development → production lifecycle.

Just one more

Package managers. They are a wonderful development, and I'm talking about RubyGems, Composer, Pip, and the like. They allow the OSS community to write, publish, and use bits of code (libraries) that other people have written so you don't have to reinvent the same wheel every time you want to add a login form to your site. Just gem install devise and bam, your Rails site has user login forms, password resets, and a ton of other functionality that you didn't have to build yourself.

All of these package managers operate on a similar principle – they have a list of the packages required for a given project and when one of your colleagues adds a new one, it gets added to the list. All you have to do to add it to your local working copy is composer install and bam, you have that new library too.
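In Composer's case, that list lives in composer.json. A minimal (made-up) example might look like this – the package name and version constraint are just for illustration:

```json
{
    "require": {
        "monolog/monolog": "^1.0"
    }
}
```

composer install reads this file (and the lockfile, if present) and pulls everything down into vendor/.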

Ideally, though, you're not committing those packages to your Git repo, because that's third-party code. It bloats your Git repo, which is just a bad smell, but it's also code that you don't own and should never really touch. What should be committed to your Git repo is just your code. At deploy time, just run composer install and you're done – but you need to have a Thing that will be able to execute composer install.
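Keeping that third-party code out of your repo is usually just a line or two in .gitignore (assuming Composer's default vendor directory):

```
# .gitignore
/vendor/
```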

There is now an extra piece of gear involved in your development → production lifecycle.

Continuous Integration

That extra piece of gear is what's commonly referred to as a “build pipeline” or a “continuous integration layer”. It's essentially just a process that will execute your instructions for how to transform your development codebase into your production codebase or artifact.

There are plenty of players in the space, starting with a simple Bash script on the user's laptop before pushing to Git. More involved and flexible options include Jenkins, Travis, or CircleCI. GitLab has a built-in CI pipeline; it's part of why they will ultimately win over GitHub.
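That laptop-Bash-script version of a pipeline might be nothing more than this – every step here is a hypothetical placeholder (the echo stands in for whatever your real compile/bundle/test commands are):

```shell
#!/usr/bin/env bash
# naive-ci.sh – run the whole build before pushing; bail on the first failure
set -euo pipefail

# keep a record of each run alongside printing to the terminal
log() { echo "$1" | tee -a build.log; }

log "compiling Sass..."       # e.g. sass styles/main.scss public/main.css
log "bundling JavaScript..."  # e.g. cat src/*.js > public/app.js
log "running tests..."        # e.g. phpunit
log "build OK – safe to push"
```

Every “real” CI tool is, at heart, a managed, shareable, repeatable version of this script.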

The problem with all these tools is that they're other tools – you have to pay for them, maintain them, learn how to use them, deal with API changes, etc. Platform.sh has this stuff all built in, and we're the only PaaS vendor that I know of that has this level of continuous integration built right in, in the form of our build and deploy hooks.

We have shortcuts for Composer based projects, and full support for adding any number of build time dependencies (probably a whole nother post's worth of material to discuss). Most other PaaS vendors make you buy CircleCI and wire the stuff together in order to achieve the same level of cushy, modern developer experience that we offer, and that's where I've run out of steam on this post. If you got questions, ask em.

#generaldevelopment #platformsh

The Book of Platform – 2.md

Getting started

In this chapter we'll walk through the process of setting up and deploying your very first project on Platform.sh. We'll go through the free trial and checkout process, clone a simple project from a public GitHub repository, and push it to your new Platform.sh project.

After that we'll dazzle you with how easy it is to set up development environments for various features with the click of a button. We'll make a couple of simple code changes and deploy them each separately to their own hosting environments and then merge everything back together to get it ready for a production release.

Step 1 – new project creation

The easiest way to get started is to take advantage of Platform.sh's free one-month trial. This will allow you to step through this onboarding tutorial free of charge, get familiar with the way we do things, and decide for yourself whether or not Platform.sh is a good fit for your team.

On the front page of Platform.sh's website you can find a link to a Free Trial, currently at the top right of the main navigation. Clicking through brings you to the account creation workflow. I personally prefer just creating an account on the site for whatever given service, but you're also free to take advantage of our Bitbucket, GitHub, or Google integrations for authentication.

#platformsh

The internet is hard these days.

It started simply enough – for instance, all you really needed was a Geocities account and some initiative to learn HTML, and you could have your own place to put whatever you wanted and make it available to the entire world. From such simple seeds, complex structures did grow.

Geocities was permanently shut down in 2009, at once both a tragedy for the loss of so much content, so much history, and yet also a wake-up call for so many of us that we needed to have control over our own content, applications, and businesses. Many of us chose to host our own websites so that a seemingly arbitrary decision from some faceless corporate power couldn't upend overnight what we'd created over years. But that decision also made things a little more complicated.

At that point all you really needed was a web hosting account somewhere with Apache HTTPD installed as a web server. Then you could edit and upload your HTML just as before, only this time it couldn't be taken away from you because you were (more) in control of the setting.

Somewhere along the way you were likely introduced to something like WordPress or Drupal or Ruby on Rails, which were all essentially frontends to some kind of database, and that database is where you would store your content. This was a wonderful development, not only enabling non-technical users to publish content to the web without knowing anything about HTML or FTP, but also for small businesses to be able to create their first eCommerce stores online and take advantage of an entire global market. Again, this was the march of technological progress creating new opportunities for primitive man to make use of highly advanced tools without having a Computer Science degree.

But, as the saying goes, with great power comes great responsibility. The responsibility in this case is that of having to set up and maintain your own web hosting infrastructure. Some of us like this kind of work – setting up and managing fleets of servers with all manner of different pieces of software on them to serve the world's internet needs – but some of us really just like writing code and building websites and applications.

Platform.sh is a new breed of hosting service, and was created expressly for this second group of technologists.

Platform.sh gives you an incredibly flexible set of tools with which you can build and deploy a huge range of different types of applications to the world with the click of a button or a push to a Git repo. Platform.sh currently has support for PHP, Ruby, Python, and Node.js with other runtimes like Go and Java either in public beta or in planning mode internally.

Platform.sh was also architected from the ground up to fulfill the promise of Git as a codebase management tool. No longer are your working feature branches trapped on your machine or in a remote repo with no context to make them live (and testable) for your teammates. Platform.sh will provision a fully functional, completely segregated hosting environment for each one of your Git branches, with all of the data and uploaded files that your app's codebase needs to be a fully functioning application.

Lastly, Platform.sh was designed to grow with your project as your project's needs evolve. The days of filing support tickets to have a PHP extension or a new database server installed are effectively over. Not only will we provision all of your application's software dependencies – Redis, MySQL, Elasticsearch, and many others – but you can choose from many different versions of each of these dependencies. Want to see how ruby:2.4 or python:3.6 or postgresql:9.6 or php:7.2-rc improves your app's performance? We always endeavor to provide the latest versions of each so that upgrading the underlying software on which your application runs is as simple and painless as changing a line of configuration.
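For instance, assuming your app currently runs on PHP 7.1, test-driving an upgrade really is a one-line change in .platform.app.yaml on a branch (the app name here is invented):

```yaml
# .platform.app.yaml
name: app
type: php:7.1   # change to php:7.2-rc on a branch to test-drive the upgrade
```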

If this sounds like something that might take some of the tedium out of your development day or possibly increase the velocity with which your business can bring new ideas and features to your customers then please, read on. In the coming chapters we'll walk you through getting started with Platform.sh by stepping you through setting up your first project and deploying a simple app with a few commands. After that we'll dive deep into more advanced topics such as -

  • Configuring your project with YAML
    • routes.yaml
    • services.yaml
    • .platform.app.yaml or “app yaml”
  • Managing and interacting with various administrative functions of your project both via the user interface as well as the Platform CLI
    • branching
    • merging
    • backups
  • How all of this seeming magic works under the hood
    • containers
    • environment variables
    • copy on write

So, welcome to The Little Platform.sh Book! We sincerely hope to make internet-ing a little bit easier for you and your team.

#generaldevelopment #platformsh

Problemspace

You've got a new site on Platform.sh that is basically at the end of its development stage, and you're preparing to go live. You've decided on Cloudflare to host your DNS. Cloudflare is a good choice for smaller sites, and I recommend it often. It has a few things going for it -

  • It has a free tier, which gives you pretty much everything you really need for a personal or small business site.
  • It has a very robust and modern global network.

One of the main features that a modern DNS provider needs to have in order to work with Platform.sh is something that's colloquially known as “CNAME Flattening”. This solves an age-old problem (in internet years) – being able to point your “root domain” to a domain name rather than an IP address. This post explains it better than I can, and that's not the point of this post anyway.

Another nice feature of Cloudflare is something they call “Flexible SSL”. SSL, HTTPS, TLS, they all mean basically the same thing – that the traffic from your user's browser to your website is being encrypted. This is very important in a practical sense if you're ever on a public wifi network, for example. Setting up SSL on your website can be a bit of a headache though, involving buying a cert, generating some rather arcane cryptographic keyfiles, installing them correctly on your server, etc.

Cloudflare offers a bit of a relief from this headache with “flexible SSL”. That means that your site can use Cloudflare's SSL cert to encrypt the traffic from user to Cloudflare (remember, cloudflare is sitting in between your users and your website's server). Traffic from Cloudflare to your website then travels unencrypted over plain old HTTP. This is “suboptimal”, but it does alleviate some of the attack vectors on your users.

Cloudflare's "Flexible SSL" option

User --HTTPS--> Cloudflare --HTTP--> "Origin" server (your website)

The other alternatives to this are either running your site unencrypted over HTTP or using “full SSL”, in which you have to install a cert at the origin in order to encrypt the traffic between Cloudflare and your website.

Cloudflare's "Full SSL" option

User --HTTPS--> Cloudflare --HTTPS--> "Origin" server (your website)

Getting there...

So you've been in development this whole time, you're using HTTPS and redirecting HTTP traffic to HTTPS like a good net citizen, and now it's time to go live. You figure you'll just skip setting up an SSL cert and use Cloudflare's Flexible option. You immediately run into a problem, though: a redirect loop. Here's why -

  • User requests your website over HTTP and gets redirected to HTTPS by your application
  • The HTTPS request travels from the user to Cloudflare, where it's decrypted and handed to your website over plain HTTP
  • Your application sees HTTP again and redirects again – repeat until the browser gives up and tells you you have a redirect loop.

You have two options at this point – either allow HTTP traffic as well so that traffic can flow from Cloudflare to your server without being redirected, or go full SSL.

Option A will not work very well in the end. Since your app thinks it's running over HTTP, all of the in-site generated links will point to the HTTP version of the pages, which means that as soon as someone clicks a link on your site, they'll be on HTTP. No good.

Solutionspace

Get a cert, install it, and go Full SSL.

Soon come

A post about using Let's Encrypt to set up certs on your Platform.sh project.

#devops #platformsh

Problemspace

You have a decent sized project and your deployments are taking a while. Platform.sh rebuilds your entire application from scratch with each git push, so in some cases the process of downloading all those 3rd party packages can take quite a while. We can and do manage local caches of some Composer packages due to our PHP heritage, which helps to make composer install a pretty snappy affair, but it's simply not possible to effectively do this with Node.js.

Compounding this problem for npm is the fact that npm's dependency graph – that is, the dependencies of your dependencies – has to be worked out every time you run npm install. This can lead to developers in your org installing different versions of packages, which will cause you problems.

Most other package managers overcome this with the use of a “lockfile”. A lockfile is a file that's generated when you run composer install for the first time, or bundle install if you're working in Ruby. A lockfile is the result of the dependency graph being worked out and the exact version of each package being pinned. This file is checked into Git, and each dev in the project gets the exact same versions of the packages required for the project.
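For the curious, a lockfile entry pins each package like this (a made-up yarn.lock excerpt, just to show the shape):

```
# yarn.lock (excerpt, invented for illustration)
left-pad@^1.1.0:
  version "1.1.3"
```

The loose range from package.json (^1.1.0) is recorded alongside the exact resolved version, so every install gets 1.1.3 until someone deliberately updates the lockfile.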

Solutionspace

I was listening to the most recent Laravel podcast over the weekend and they got to talking about a new quasi-npm-replacement that had just come out called Yarn.

Yarn aims to be an almost drop-in replacement for npm. There are a number of ways of installing it, but the simplest is just npm install -g yarn. My coworkers thought I was trolling them with that, but it makes perfect sense if you think about it.

The only other step is to run the yarn command locally, which will read your package.json, work out the dependency graph, and build up the Yarn lockfile – yarn.lock. Then commit that to Git, and let's rock and roll on your .platform.app.yaml. We're going to require Yarn in the global dependencies section -

dependencies:
  ruby:
    sass: "*" # not required, just assuming
  nodejs:
    gulp-cli: "*" # same here
    yarn: "*"

And then replace npm install in your hook.build with yarn install instead, like so -

hooks:
  build: |
    yarn install
    gulp default # for a Laravel project

This took my previously six-minute builds down to about one minute. In other words, the time it shaved off my build phase was longer than the time it took to completely move from npm to Yarn in the first place. The reason for this speed boost is that Yarn doesn't have to generate the dependency graph every single time (like npm does), since it has the lockfile, and Yarn downloads packages in parallel rather than whatever npm does, which must be one at a time.

If you're using npm install as part of your build step on Platform.sh, it's really a no-brainer. Check it out!

#javascript #platformsh

Hello, and welcome back to Platform.sh from scratch. In this post we'll be reconfiguring your Laravel app that we've been working on in the previous posts to use Redis as a cache and session store, rather than the default file store. We'll also install the Platform CLI and use it to SSH into our application container and get a feel for the layout of the filesystem after it's deployed to its environment.

But first, I'd like to have a brief chat about Git...


Using the tools the way they're meant to be used

Ok, we've gotten this far and we're feeling good about life, but we aren't really doing anything that mindblowing yet. We've spent two posts configuring an app to run on a new hosting vendor, whooptidoo. Now, don't get me wrong – we specified our entire project's infrastructure in code. We are free to change around our project's infrastructure however we see fit, without having to file a ticket and wait for support to change it for us. And yet, we're still working on the Git branch that represents the production state of our website, aka “Master”.

Before we continue on, let's put a little insurance in place, courtesy of Git and Platform.sh.

Go to your project admin screen and click the “Branch” button, which is the orange one in the top right. Name this new branch “dev”, or really whatever you prefer.

Now say this out loud -

  • I will never push straight to master again
  • I will never push straight to master again
  • I will never push straight to master again

I'm serious. This is important. You may have worked with Git branches before, and you might be familiar with the stress-saving benefits of using them, but you've probably never worked with a hosting vendor who makes it so dead simple to really use them in your day to day development work. In fact, when I first started this job I told people that I worked for a hosting vendor. Now that I understand the power of the tools that we provide, I say

Platform.sh is a software company that builds tools to make your job as a developer or web application owner easier and less stressful. We also happen to host the sites with which you use our tools.

Now that you've created that branch at Platform.sh you have a byte-for-byte copy of your master environment, complete with a web accessible URL. Any work that you do from now on will land in that dev branch before it gets merged into master. In this way you'll be able to fully QA and test out new changes before deploying them to production.

This branch only exists at Platform.sh for now, so create and checkout a local dev branch to continue on.

git checkout -b dev

A new Redis service!

So far we haven't actually built any logic into this application, nor have we even activated Laravel's built in user authentication feature, so let's go ahead and do that. Following along with the Laravel docs, run this artisan command locally to scaffold out the files that are required.

php artisan make:auth

You can run the migration locally with php artisan migrate but you'll also want to add this to the bottom of your .platform.app.yaml file -

hooks:
  deploy: |
    php artisan migrate --force

Per the previous post, this will run the database migrations for you when you deploy your app on Platform.sh. At this point you can add, commit, and push to Platform.sh and experience the joy of having bona fide user auth in just a few minutes. Thanks Taylor!

Now that you've committed that, let's head back over to the config folder and switch from using the default file session store in favor of the Redis store. At the top of config/session.php, change the SESSION_DRIVER setting from file to redis. As long as we're at it, let's also go to config/cache.php and change the default CACHE_DRIVER setting to redis as well. Now let's set up your app to use Redis.
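For reference, in a stock Laravel app those two settings read from env vars with a file-based fallback, so the changed lines might look like this (a sketch – the defaults in your app may differ):

```php
// config/session.php
'driver' => env('SESSION_DRIVER', 'redis'),

// config/cache.php
'default' => env('CACHE_DRIVER', 'redis'),
```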

In your .platform/services.yaml file you're going to add a new Redis service -

rediscache:
  type: redis:3.0

and in your .platform.app.yaml file we're going to add that new service to the relationships section -

relationships:
  database: "pgsql:postgresql"
  redis: "rediscache:redis" # this is new!

This is all that's required on our end to add a new service to your project, but you'll need to “enhance” your app just a bit to make use of it. In the previous post we added a snippet to the top of the config/database.php file to enable your app to find the Postgres database that we're using. That file also contains the configuration for Redis, so go there now and change this -

$config = new Platformsh\ConfigReader\Config();

if ($config->isAvailable()){
  $pltrels = $config->relationships;
  $database = $pltrels['database'][0];
  putenv("DB_CONNECTION={$database['scheme']}");
  putenv("DB_HOST={$database['host']}");
  putenv("DB_PORT={$database['port']}");
  putenv("DB_DATABASE={$database['path']}");
  putenv("DB_USERNAME={$database['username']}");
  putenv("DB_PASSWORD={$database['password']}");
}

to this -

$config = new Platformsh\ConfigReader\Config();

if ($config->isAvailable()){
  $pltrels = $config->relationships;
  $database = $pltrels['database'][0];
  putenv("DB_CONNECTION={$database['scheme']}");
  putenv("DB_HOST={$database['host']}");
  putenv("DB_PORT={$database['port']}");
  putenv("DB_DATABASE={$database['path']}");
  putenv("DB_USERNAME={$database['username']}");
  putenv("DB_PASSWORD={$database['password']}");
  if (isset($pltrels['redis'])) {
    $redis = $pltrels['redis'][0];
    putenv("REDIS_HOST={$redis['host']}");
    putenv("REDIS_PORT={$redis['port']}");
  }
}

That is all that's required to enable your app to be able to use the Redis service in your Platform.sh environment. Add, commit, and push!

git push platform dev

While that's building, let's install the Platform CLI.


The Platform CLI

As the docs say, “the CLI is the official tool to use and manage your Platform.sh projects directly from your terminal. Anything you can do within the Web Interface can be done with the CLI.” Fun fact, the project management interface is actually an AngularJS application, and both it and the CLI interact with the same set of APIs on the backend to manage your project. Almost anything that you can do from the UI you can also do from the CLI, and vice versa.

Follow the instructions in this section to install the CLI, and do make sure you read the rest of that docs page for some more background information.

Logs!

Let's use the newly installed CLI to check out some logs, since logging is crucial to knowing what's going on inside not just your application but also the environment in which it's running. Running platform logs will give you several options for which log you'd like to inspect -

> platform logs
Enter a number to choose a log:
 [0] access
 [1] deploy
 [2] error
 [3] php.access
 [4] php

Let's check out the deploy.log and see what goes on in there -

[2016-10-03 17:23:01.523855] Launching hook 'php artisan migrate --force'.

Migration table created successfully.
Migrated: 2014_10_12_000000_create_users_table
Migrated: 2014_10_12_100000_create_password_resets_table

So the deploy log is actually the stdout output from whatever you have in your .platform.app.yaml file in the hooks.deploy section. Pretty handy for debugging as you're building up new steps. By default, the platform logs command will just cat the entire file out to your screen, but you can also pass the name of the log that you want to access as an argument to the command, and pass the --tail flag if you want to tail the log, like so -

> platform logs --help
Command: environment:logs
Aliases: log
Description: Read an environment's logs

Usage:
 platform environment:logs [--lines LINES] [--tail] [-p|--project PROJECT] [--host HOST] [-e|--environment ENVIRONMENT] [-A|--app APP] [--] [<type>]

Arguments:
 type The log type, e.g. "access" or "error"

Options:
 --lines=LINES The number of lines to show [default: 100]
 --tail Continuously tail the log
 -p, --project=PROJECT The project ID
 --host=HOST The project's API hostname
 -e, --environment=ENVIRONMENT The environment ID
 -A, --app=APP The remote application name
 -h, --help Display this help message
 -q, --quiet Do not output any message
 -V, --version Display this application version
 -y, --yes Answer "yes" to all prompts; disable interaction
 -n, --no Answer "no" to all prompts
 -v|vv|vvv, --verbose Increase the verbosity of messages

Examples:
 Display a choice of logs that can be read:
 platform environment:logs

 Read the deploy log:
 platform environment:logs deploy

 Read the access log continuously:
 platform environment:logs access --tail

 Read the last 500 lines of the cron log:
 platform environment:logs cron --lines 500

Merging

So your dev branch should be done building by now. Check that everything is working the way that you expect, and if it is, let's merge the dev branch into master. This will constitute a production deployment. You can either click the “Merge” button in your project admin UI, or you can run this from the CLI – platform merge. This will give you some interactive output so you can confirm that you're merging into the environment that you want.

One last trick for now – run platform ssh from the root of your project. Sure enough, this will SSH you into your application's PHP container, so you can get a feel for what goes on in there. A few tips -

  • The root of the application will be /app and the user will be web.
  • Those very same logs can be found at /var/log, just like normal!
  • You can check out the generated Nginx config file at the usual location as well – /etc/nginx/nginx.conf. This can be *very* useful for debugging complex configurations in your .platform.app.yaml file.
  • You can get a list of running processes the same as normal too – top. You'll see that there's not really anything going on in there beyond what your app needs to run, since OS level processes are not running in your LXC container.
  • Every app container has a Java executable. There be dragons, but you could theoretically whip up some fairly complex setups with Java dependencies if you ever needed to.

That concludes this post, and the series! We'll dive into other features, but with what you've learned in the past 3 posts you should have about 90% of what you need to orient yourself within our product.

#platformsh #laravel

Hello (!) and welcome back to Platform.sh from scratch. In this post we'll learn about how to set up the Laravel app from the previous post to hook in to various services on Platform, starting with a database connection and moving on to using Redis as a cache and session store. Along the way we'll visit Platform.sh's “environment variables” feature, and we'll set up our first fully functioning deploy hook.

Prerequisites – go through the previous post and get that far...

Let's get started!


So the first step is to add a database to your services.yaml file. Let's choose PostgreSQL, which is my personal preference among open source databases these days (mostly due to the fact that it hasn't been bought by Oracle and subsequently forked). Add this to your services.yaml file, which should currently be empty.

# adds a Postgres 9.3 service to your project
# and gives it about a gigabyte of disk space
pgsql:
  type: postgresql:9.3
  disk: 1024

And in your .platform.app.yaml add this anywhere -

# This is how you define service relationships in your application
# I personally think this should've been named "services" but such is life
relationships:
  database: "pgsql:postgresql"

As you can see, setting up your project to provision new services is super easy and as the platform matures we'll likely support several versions of any given piece of software. This will someday allow users to easily test out upgrading something like a database to a new major version in another branch without worrying about the usual hassles.

Now we need to set up our application to use these new services. This is fairly straightforward, but feels a little strange the first time so I'll walk you through the general algorithm that you'll use no matter what the framework or language you're using.

Platform.sh encodes many key pieces of information about your application into OS environment variables. If you SSH into your app container you can echo $PLATFORM_APPLICATION and get back a long string that's in essence the base64 encoded version of your .platform.app.yaml file. Same with $PLATFORM_ROUTES. This is how we store metadata about your application, and you'll make use of these variables as well to establish database connections. The basic algorithm for finding DB connection info is

  • read and decode $PLATFORM_RELATIONSHIPS into a JSON string.
  • parse that JSON string into an object and use the attributes of that object to set the connection info
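Those two steps can be eyeballed from any terminal with a quick decode. Here we fake the variable so the snippet runs anywhere – the real one is set for you inside your Platform.sh container, and its contents will be richer than this invented example:

```shell
# stand-in for the real variable, which Platform.sh sets inside your container
PLATFORM_RELATIONSHIPS=$(printf '{"database":[{"host":"db.internal","port":5432}]}' | base64)

# step 1 + step 2: decode, then eyeball (or parse) the JSON
echo "$PLATFORM_RELATIONSHIPS" | base64 --decode
```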

So with that in mind, let's get your Laravel app wired up.


First let's make use of a nice little composer package that Platform.sh has authored in order to simplify this bit. Run composer require platformsh/config-reader and install this package. Next let's head to config/database.php and add this little snippet to the top -

php // <- leave that out, it's for syntax highlighting only
$config = new Platformsh\ConfigReader\Config();

if ($config->isAvailable()){
  $pltrels = $config->relationships;
  $database = $pltrels['database'][0];
  putenv("DB_CONNECTION={$database['scheme']}");
  putenv("DB_HOST={$database['host']}");
  putenv("DB_PORT={$database['port']}");
  putenv("DB_DATABASE={$database['path']}");
  putenv("DB_USERNAME={$database['username']}");
  putenv("DB_PASSWORD={$database['password']}");
}

You can read the source of the Composer package yourself, but essentially the $config instance has properties for each of the encoded environment variables that Platform.sh adds to your environments. In the case of the relationships attribute, you might have several databases defined, so it's a simple matter of digging into that property to pull the values out.

Laravel uses the vlucas/phpdotenv package to read certain settings out of environment variables, so it's really just a matter of translating the nested variables that Platform.sh provides into what Laravel is already expecting.

I suspect it would be astonishingly easy for someone to come up with a drop-in “Laravel helper” package that would set all this up automatically, but I haven't gotten that far just yet.

With this much code, your app is now ready to connect to the database in whichever environment you'll be deploying so go ahead and commit this.

git add . && git commit -m "adding platform db config"

There's one more blocker for your Laravel app that you'll need to take care of before you're really up and running, and that's the need for the APP_KEY variable to be defined. Time to learn about Platform.sh's environment variables feature!


Platform.sh Environment Variables

You're likely familiar with the situation where you need to make use of some “privileged” data in your application and you don't want to store that data in Git. Or perhaps you have different settings in development than you do in production – a DEBUG flag or something like that. The usual solution for these cases is to use OS environment variables (just like we do). We provide a feature for easily setting variables that you can read in your environments, so let's set one up for Laravel's APP_KEY.

Head to your project's admin page and click on “configure environment”, then “Variables”. The simplest thing to do is pull the APP_KEY out of your local .env file, so click “add variable”, give it a name of “APP_KEY”, and put the value in the “value” field. This will trigger a redeployment of your application.

Last step is to add some code in to your application to read those variables back out. Add this to the top of your config/app.php file -


$config = new \Platformsh\ConfigReader\Config();

if ($config->isAvailable()) {
  foreach($config->variables as $k => $v) {
    putenv("$k=$v");
  }
}

Provided you've already set up that APP_KEY variable, it'll be read further down that file and you're good to go, so let's commit and push to Platform.

git add .
git commit -m "adding platform environment variables config"
git push platform master

So this post is already a bit long so I'll just touch on one last point quickly. Now that you have this database all hooked up and ready to go, you're likely going to want to use it! Should you want to automate the process of applying your database migrations, you can do that in a simple deploy hook.

hooks:
  deploy: |
    php artisan migrate --force

The --force flag sounds scary, but all it does is disable the command's interactive confirmation prompt, which of course you're unable to answer during an automated deployment. The other option is to SSH into the app server after deployment and run the command yourself, which I'll demonstrate in the next post.

#platformsh #laravel

Hi there and welcome back to Platform.sh from scratch. In this post we'll convert a Laravel app for use on Platform and learn a few tricks that will hopefully inform converting any app for use on Platform.


Step 1 is to get going with a new Laravel app, so follow the instructions on installing Laravel and setting up a new project. Initialize a git repo, add a new platform, and add the Platform.sh git remote to your local repo. All of this is documented in the previous post.

Now, at this point you can try and push code to us but we'll reject it because you don't have any of the Platform config files in place. Let's use the exact same routes file as the previous Silex project.

# .platform/routes.yaml
"http://www.{default}/":
  type: upstream
  upstream: "app:http"
"http://{default}/":
  type: redirect
  to: "http://www.{default}/"

So that's step one. We'll get to the services.yaml file in just a minute, but let's go ahead and stub it out – touch .platform/services.yaml.

Now let's get to work on the .platform.app.yaml file, which will define what your new Laravel app will need to run. One of the key differences between this application and the previous one is that for this one we're actually going to need some writable disk space. Laravel expects a few directories to be present (and to be writable by the web user) in order to write logs and caches and such. We glossed over that bit in the previous post, so I'll now take a moment to talk about Platform.sh's read-only filesystem.


[!info]

Aside – the read-only filesystem

Platform.sh, like some other cloud PaaS providers, utilizes a read-only filesystem. When your application is deployed, we package up a snapshot of your application code and mount it into its environment. This means that the days of being able to edit code directly on the server, or of being able to FTP code up to the server, are effectively over.

All of your app's code must be in Git in order to be deployed, which has quite a few advantages – not least of these is accountability for who did what, and when, to your codebase. Of course, you likely need some part of your filesystem to be writable for logs and file uploads, so we take care of that for you, but first let me expand on the benefits of going read-only.

Benefit one – consistency

As we've discussed, Platform.sh's entire workflow is built around Git. This means that each commit has a SHA hash that uniquely identifies it within your project. If you cut a new branch from an existing one, the new branch points at the same commit – the same SHA hash – as the branch you cut it from.

~~~
~/work/php/magento-platform-sh (master 93783b2)   # <- SHA hash of this commit
$> git checkout -b test_branch
Switched to a new branch 'test_branch'

~/work/php/magento-platform-sh (test_branch 93783b2)   # <- same SHA hash
~~~

Platform.sh sees that the two SHA hashes are the same and doesn't bother building your new environment's codebase again, since it's already built it. It just uses the already-packaged code snapshot from the original branch and creates an environment around it. This saves time, but where it really shines is when it heads in the opposite direction – on merge. When merging a feature branch into a long-running develop or master branch, Platform.sh sees that the code snapshot has already been built and deploys that into your master environment.

What this means is that you are 100% guaranteed that the code being deployed into your master environment is precisely the same code that you just tested out in your feature branch. Nobody snuck anything new in there via FTP or by editing directly on the server, so you can be confident in your deployments. This leads neatly into benefit two...

Benefit two – security

As you're likely aware, there is a large class of exploits, particularly in PHP web apps, that take advantage of the fact that a great many of them allow write access to files that the web server will execute. This means that nefarious users can sometimes find security holes that allow them to upload executable PHP files to the server and then use those files to gain "elevated privileges", another way of saying "hack your server". With a read-only filesystem, many of those exploits are blocked before they can even happen.

Lastly, of course you need to write some files

So yes, your web app likely has something in there that needs to be writable. It might be for uploads, or caching, or logs. We've got you covered, but you have to specify which directories to make writable in your `.platform.app.yaml` config file.

So I'm just going to drop the `.platform.app.yaml` in here and explain it bit by bit.

~~~yaml
# the name of this particular app, remember that we allow you
# to create a project out of 1 or more apps, so this gives our
# Laravel app a name...
name: app
type: php:7.0 # the language runtime and its version
build:
  # Same as before, this alerts our system to look for a composer.json
  # or composer.lock file and install the dependencies defined therein
  flavor: composer
# basic web configuration for this particular app. Laravel apps have a
# "public" folder that serves as the web docroot.
web:
  locations:
    "/":
      root: "public"
      index:
        - index.php
      allow: true
      passthru: "/index.php"
# How much disk space to allot to this app. 
disk: 2048
# This is where you define your writable file system paths. the keys are the
# paths in your app that need to be writable/uploadable. The values are always
# going to be named "shared:files/$WHATEVER_HERE", where "WHATEVER_HERE" can be
# any arbitrary identifier.
mounts:
  # Laravel uses a directory off the root called "storage" for logs and cache.
  "/storage/app/public": "shared:files/storage/app/public"
  "/storage/framework/views": "shared:files/storage/framework/views"
  "/storage/framework/sessions": "shared:files/storage/framework/sessions"
  "/storage/framework/cache": "shared:files/storage/framework/cache"
  "/storage/logs": "shared:files/storage/logs"
  # And another cache directory here.
  "/bootstrap/cache": "shared:files/bootstrap_cache"
~~~

So this brings us to a decision point. Inside the storage directory are a few more nested directories. Laravel sets these directories up for you and then drops a .gitignore inside each of them. This is handy, but presents a small challenge on Platform.sh: any directories that you declare as writable (mounts) will be emptied out on the first build and deploy. This means that those nested directories that exist in your git repo will be deleted and you'll be left with an empty storage directory.

This will cause you some headaches when you try to deploy your Laravel app. There are two solutions to this – either add each directory that needs to be writable inside of storage to the mounts directive or recreate them in a “deploy hook” instead. We'll go with option A but I want to introduce you to hooks, so here's what option B looks like -

# .platform.app.yaml, after all the rest ...
mounts:
  # Laravel uses a directory off the root called "storage" for logs and cache.
  "/storage": "shared:files/storage"
  # And another cache directory here.
  "/bootstrap/cache": "shared:files/bootstrap_cache"
hooks:
  deploy: |
    mkdir -p storage/app/public
    mkdir -p storage/framework/views
    mkdir -p storage/framework/sessions
    mkdir -p storage/framework/cache
    mkdir -p storage/logs

This second method took quite a bit of trial and error to figure out, and either method is valid. It would have taken me a bit of trial and error to arrive at option A as well, since I'm not very familiar with Laravel.

Now I'll take a moment to explain what hooks are.

Aside – hooks

Hooks are what they sound like – commands that will execute at certain points in the deployment lifecycle. In this case, we have two – build and deploy.

Build hooks run while your app is being packaged up, before it's sent to the application's environment. The filesystem is still writable at this point, so if you need to make any modifications to the file structure of your project, this is your chance to do so. Since your project doesn't have an environment yet, it doesn't yet have access to the various services that you've declared in services.yaml. So, no database is available at this point.

Deploy hooks run after your project has been mounted into the app environment. The versioned file system is no longer writable, but you do have access to your services at this point. This is typically when you do things like migrate databases or clear caches.

So you might notice that we're performing those mkdir calls in the deploy hook, which seems to contradict what I just said. However, those directories are being created inside of what you've declared as a writable directory in the mounts directive, so nothing breaks and indeed you have access to that mounted directory, where you wouldn't have had access in the build hook.
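As a minimal sketch of the two hook types side by side (the build step shown is a hypothetical placeholder – substitute whatever your app actually needs):

```yaml
hooks:
  # build: runs while the app is being packaged. The filesystem is still
  # writable, but no services (e.g. your database) are reachable yet.
  build: |
    set -e
    echo "compile assets, download tools, rearrange files here"
  # deploy: runs after the environment is mounted. Services are up, but
  # only the paths declared under "mounts" are writable.
  deploy: |
    set -e
    mkdir -p storage/logs
```

The `set -e` lines make each hook abort on the first failing command, which saves you from half-finished deployments.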

Here's the full .platform.app.yaml at this point, for reference.

name: app
type: php:7.0
build:
  flavor: composer
disk: 2048
web:
  locations:
    "/":
      root: "public"
      index:
        - index.php
      allow: true
      passthru: "/index.php"
mounts:
  # Laravel uses a directory off the root called "storage" for logs and cache.
  "/storage/app/public": "shared:files/storage/app/public"
  "/storage/framework/views": "shared:files/storage/framework/views"
  "/storage/framework/sessions": "shared:files/storage/framework/sessions"
  "/storage/framework/cache": "shared:files/storage/framework/cache"
  "/storage/logs": "shared:files/storage/logs"
  # And another cache directory here.
  "/bootstrap/cache": "shared:files/bootstrap_cache"

So save that, commit it to git, and push it to your Platform git remote and you should be on your way. You can find the full repository here on GitHub. The most important difference between that repo and this project is that we have yet to set this project up with a database connection, which we'll do in the next post.

#platformsh #laravel

Hi there and welcome back to Platform.sh from scratch. Today we're going to wire up a very simple Laravel application to use Postgres as its backend database.

The complete example application can be found here – https://github.com/JGrubb/platformsh-laravel-example

The very first step of this will be to add in the appropriate .platform/services.yaml file. This file was left intentionally empty in the setup for this Laravel application, as we didn't have a need for a working database and were just getting our app set up and running. Now however, we're going to add a very simple configuration into services.yaml.

# This is the "name" of this service and can be any arbitrary string.
# You could name this "foo" and that's the name you'd use in your
# .platform.app.yaml file in the next step.
pgsql:
  # This is the actual service you'll be using.
  type: postgresql:9.3
  # How much space you want to give this in megabytes. 
  # 1 gig will get us going
  disk: 1024

Over in .platform.app.yaml we'll add in the relationships section to the file.

relationships:
  # This config takes the form of "relationship_name: service_name:driver"
  #
  # Instead of "database", you could also call this "bar" and that's how it'll
  # show up in the $PLATFORM_RELATIONSHIPS environment variable. I'll show
  # you how that manifests itself in just a moment.
  #
  # "pgsql" in this case is the name you gave it in services.yaml and
  # "postgresql" is the part that you can't just arbitrarily name. That
  # is the hint to our container build system that you need a PG database
  # service.
  database: "pgsql:postgresql"

Commit this and push it up to your platform remote. This will trigger a rebuild of your application, and when that's done let's SSH into your environment with the Platform CLI – $ platform ssh.

Once you're in, try this – echo $PLATFORM_RELATIONSHIPS. You'll get a big base64 encoded string as a result, because environment variables can only hold flat strings – complex, nested data gets base64 encoded to keep it safe to pass around. Let's decode it by piping it to base64 --decode – echo $PLATFORM_RELATIONSHIPS | base64 --decode

This should give you back something like

{"database": [{"username": "main", "password": "main", "ip": "250.0.96.171", "host": "database.internal", "query": {"is_master": true}, "path": "main", "scheme": "pgsql", "port": 5432}]}

Aside – naming relationships

Just for the fun of it, let's try this in the relationships section -

relationships:
  dark_chocolate: "pgsql:postgresql"

Sure enough —

web@4ikq2xigwlw5s-master--app:~$ echo $PLATFORM_RELATIONSHIPS | base64 --decode
{"dark_chocolate": [{"username": "main", "password": "main", "ip": "250.0.96.171", "host": "big_daddy.internal", "query": {"is_master": true}, "path": "main", "scheme": "pgsql", "port": 5432}]}

So the basic gist of how you establish a connection to any kind of service that you set up in services.yaml should now be clearer than it was before, and we'll now set about adding the code to our Laravel app that will make use of these environment variables.

// at the top of config/database.php. This will decode the base64
// encoded envvar and expand it into the variables that Laravel is
// expecting.
if ($relationships = getenv('PLATFORM_RELATIONSHIPS')){
  $pltrels = json_decode(base64_decode($relationships), TRUE);
  $database = $pltrels['database'][0];
  putenv("DB_CONNECTION={$database['scheme']}");
  putenv("DB_HOST={$database['host']}");
  putenv("DB_PORT={$database['port']}");
  putenv("DB_DATABASE={$database['path']}");
  putenv("DB_USERNAME={$database['username']}");
  putenv("DB_PASSWORD={$database['password']}");
}

This particular piece of code is not Postgres specific, and in fact will work just fine with MySQL as well. The beauties of abstraction...

The final step in this process is optional, but if you want to have artisan migrate the database on deploy rather than logging into the server and running it manually you'd add this to the bottom of your hooks.deploy in .platform.app.yaml —

hooks:
  deploy: |
    # other commands
    php artisan migrate --force

The --force flag will allow migrate to run in a “production” environment.

There is one final step to be aware of, and that's that the pdo_pgsql extension is not enabled by default in the PHP containers. You'll need to add this somewhere in .platform.app.yaml -

runtime:
  extensions:
    - pdo_pgsql

If you were using MySQL, this step would not be needed as pdo_mysql is enabled by default. Indeed, if you're using Postgres, you can disable the MySQL extension if you wish -

runtime:
  extensions:
    - pdo_pgsql
  disabled_extensions:
    - pdo_mysql

For reference, here's the complete .platform.app.yaml —

name: app
type: php:5.6
runtime:
  extensions:
    - pdo_pgsql
build:
  flavor: composer
disk: 2048
web:
  locations:
    "/":
      root: "public"
      index:
        - index.php
      allow: true
      passthru: "/index.php"
mounts:
  "/storage": "shared:files/storage"
  "/bootstrap/cache": "shared:files/bootstrap_cache"
relationships:
  database: "pgsql:postgresql"
hooks:
  deploy: |
    mkdir -p storage/app/public
    mkdir -p storage/framework/views
    mkdir -p storage/framework/sessions
    mkdir -p storage/framework/cache
    mkdir -p storage/logs
    php artisan migrate --force

#platformsh #postgres #laravel

Hello, and welcome to “Platform.sh from Scratch”. In this prologue to the series, I'll go over some of the very highest level concepts of Platform.sh so that you'll have a clearer understanding of what this product is and why it came to be.

Platform.sh is a “Platform as a Service”, commonly referred to in this age of acronyms as a “PaaS”. The platform that we provide is essentially a suite of development and hosting tools to make developing software applications a smoother end-to-end process. In order to understand what this means though, I'm going to have to go into some detail in this first sidebar. Skip this if you're comfortable with this post so far.


[!abstract]

PaaS

Everyone has heard of Salesforce. Salesforce has come to be the poster child for what is now referred to as “SaaS” – software as a service. Prior to the SaaS era, if you wanted a piece of software, be it a video game or Quickbooks or anything else, you had to drive to a store and buy a box with some disks in it. Once the internet reached a sufficient level of penetration into people's homes, though, those stores went out of business – an obvious evolution in hindsight. SaaS is a high level thing: a runnable piece of software that you access over the internet via a URL. You might be able to modify or configure it a little bit, but it will never be your entire business. It's not your product. It's someone else's, and it will likely play some fractional part in your overall business plan.

Almost everyone by this point has heard of Amazon Web Services – AWS. AWS is basically what people are talking about when they say “The Cloud”. AWS is a suite of products that emerged from Amazon when they figured out that they needed a huge amount of datacenter capacity to be able to withstand massive retail events like “Black Friday” and “Cyber Monday”, and that for most of the rest of the year they had tons of excess capacity sitting around draining money from their wallet. What to do with all that excess capacity? Sell it to someone else.

This relatively simple premise has evolved over the last 10-12 years into numerous products from S3 (basically a giant, limitless hard drive in the sky) to EC2 (basically a giant, limitless hosting server in the sky) to Redshift (basically a giant, limitless database that can be used for data warehousing) to SES (a simple service that sends emails) to an ever growing host of other services that always seems to come out just before your start-up figures out that it needs them.

AWS and “the cloud” in general are often given the acronym “IaaS” – infrastructure as a service. They're selling you the low level hardware abstractions that you can assemble into an infrastructure on which to run your software, and by extension your company. It requires a decent bit of specialized knowledge of how to use the individual pieces, as well as how to plumb them together, but for all intents and purposes it's infinitely flexible. It's this level that has had most of my interest for the past few years.

In the middle of these two is what's called “platform as a service” – PaaS. This is what Platform.sh is – a suite of software and hosting services that lets you efficiently build and develop your software application, and then deploy it to a hosting environment that doesn't require as much specialized knowledge about how to plumb all the pieces together. Nor does the hosting environment require you – and this is a most important detail – to set up monitoring and alerting in case something goes wrong in the production environment.

The PaaS takes elements of both IaaS and SaaS to allow you to build your software product but not have to hire an extra person just to know the low level server business.

So, back to the program. The development tool set of Platform.sh is entirely based around Git. Just in case the reader is not already familiar with Git, I should explain this a little bit.


[!abstract]

Git

Software projects are typically composed of lots of files. If you want to add a new feature, you might be required to make changes in more than one of those files. Of course, before you get started you'll want to make some kind of backup just in case. If it turns out that the change was buggy or unneeded and you want to revert back to a previous state, you'd just restore those few files back to their previous versions.

What if, however, you're working with a bunch of different people and more than one person is working on that change (an utterly common scenario)? How do you manage those backups between all those people? Saving copies of files becomes impossible to manage after a very short while, so out of this need SCM (source code management) was born. SCM has been through several iterations, and the tool currently leading the market is called Git.

Git is pretty cool. It basically takes snapshots of your entire project whenever you tell it to. It then keeps track of all those snapshots and lets you share those snapshots among a team of developers. Any snapshot can be reverted, and you can see the full history of every change to the codebase so you can keep track of “what happened when”. But wait! There's more!

Git also has a feature called “branching” (not exclusive to Git, but central to it). Branching is intuitively named: it's the concept of taking a specific snapshot and making changes based off of that one snapshot while other work continues on down the main code line. This is the recommended way to work if you're going to make any kind of significant change to the software, and it allows you to keep the main code line (almost always referred to as the “master branch”) in 100% working order. It can be thought of as having a furniture workshop away from your house where you can work and keep the house clean for company to come over at any moment, as opposed to working in the house and risking having a wreck to present should company decide to drop by.

In essence branching is making a complete copy of your project at a point in time that you can hack on all you like without disturbing anyone else. If and when the change is ready, you “merge” the code back in to the master branch, test it out to make sure everything is still groovy and then you can release the feature or bug fix to the public.

~~~mermaid
gitGraph
   commit
   commit
   branch develop
   checkout develop
   commit
   commit
   checkout main
   merge develop
   commit
   commit
~~~
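You can try the branch-and-merge cycle yourself in a throwaway repo in under a minute (a sketch – the repo name `demo` and the commit messages are arbitrary, and `--allow-empty` just saves us from needing real files):

```shell
git init demo
git -C demo config user.email "you@example.com"   # local identity for the demo repo
git -C demo config user.name "You"
git -C demo commit --allow-empty -m "initial commit"
base=$(git -C demo branch --show-current)         # "master" or "main", depending on your git
git -C demo checkout -b feature                   # cut a branch: a cheap point-in-time copy
git -C demo commit --allow-empty -m "feature work"
git -C demo checkout "$base"                      # back to the main line...
git -C demo merge feature                         # ...and fold the feature in
git -C demo log --oneline                         # both commits are now in the history
```

Notice how cheap the whole round trip is – that cheapness is exactly what makes the branch-per-feature workflow practical.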

You can read more about the super basics here if you wish. For now, all you really need to know is that Git

  • Makes it easier to develop software as a team
  • Makes it very cheap and easy to try out new features without breaking anything
  • Makes it easier to manage changes to your software and to revert back to a known non-broken state

update: Hey look, a really great post explaining all this better than I did.


Platform.sh has taken this branching and merging workflow and extended it out into the entire hardware stack. When you're building a software project of any size, there are considerations beyond just the code your team is writing.

Most applications of any size connect to some kind of database in the background – this is where they save “data stuff”. User-uploaded images are a very common thing in the web app world, so if those images aren't there the app will look busted.

You can branch your code all you like, but you need these other supporting resources to really do your job. Platform.sh makes branches not just of your code, but the entire infrastructure that your project runs on. This allows you to use the common branching/merging workflow with the complete support of everything else that your application depends on.

This may seem like an obvious feature, since how can you develop a new feature without being able to run it (?), but no other service that I know of actually does this. A branch in Git triggers (for all intents and purposes) a complete copy of your production site without requiring you to set up any new servers, copy databases over, copy images, etc, etc, etc. It's a significant hassle to do all this stuff, trust me, and it slows the team down every time you have to do it. Removing this need removes a major friction point in the workflow for building new features on your software product.

But wait! There's more!!!

This is where it starts getting really, really good. In case you're not aware, there's a website called GitHub. It's where a whole lot of folks have decided to host their “git repos” – repo being short for “repository”, which is basically that series of snapshots of the state of your project/codebase back to the beginning of time. This is the repo for this blog – https://github.com/JGrubb/django-blog, and here's some of the code that just ran to generate this page you're reading – https://github.com/JGrubb/django-blog/blob/master/blog/views.py#L26-L33. Pretty cool, right? And if I were working on a project with a buddy, we could both use this same repo and work on the same project, whether I'm in Germany or New Jersey or wherever. I can pull his changes over and he can pull mine and this is basically how open source software gets written these days.

The same workflow applies though – if you want to make a new feature or even if you just want to fix a bug, you'd make a new branch and do your work and then “submit a merge request”. This basically pings the person who runs the project and says “hey, I would like to suggest making this change. Here's the code I'm changing, maybe you could look it over and if you agree with this change you can merge it in”. By way of an example, here's a list of “pull (aka merge) requests” for the codebase that comprises the documentation for Platform.

Again, this is how software gets written and it's pretty mind blowing if you think about it. We software developers are so used to it our minds cease to be amazed, but not because it's not amazing. I mean, currently participating in that list of PRs are folks from France, Chicago, Hong Kong, the UK, and so on. Amazing. It is also, however, a pain in the ass.

It's a pain in the ass because it's typically impossible to tell whether something works just by looking at the code, so you have to pull the changes over to your computer and test them out somehow. I bet you can see where this is going! Platform has a GitHub integration (BitBucket too) that will automatically build a working version of any merge request that someone opens against your project. That lets you go visit a working copy of the project and test it out without having to do a thing. Now, I don't care how long you've been doing this, that is mind blowing. For example, here's Ori's (currently work in progress) PR for adding the Ruby runtime documentation – https://github.com/platformsh/platformsh-docs/pull/339. If you click the “show all checks” link down toward the bottom, it expands with a little “details” link. That link takes you to a complete copy of the documentation with Ori's change added to it, so you can read it like you normally would, rather than reviewing a “diff”. It's the future now!

What this means in the wider scope is that the time you used to spend setting things up to test a new idea – only to tear them down again once the tests pass – is time you no longer have to waste. You can test changes out and keep right on moving.

This GitHub integration is only one of the really cool and unique features that Platform provides, but this post has gotten absurdly long already. Fortunately, this is intended to be the prologue to this series, so I'll touch on as many of those features as I can as the series progresses.

#generaldevelopment #devops #theory #platformsh