Over the last few years, Phylos has built a significant part of its web services on the Laravel PHP framework. Alongside this great framework is a service called Forge: create a Forge account, hand it some keys to a cloud platform, and it will provision the perfect instance for serving a Laravel app, handling deploys, migrations, and so on. Forge is a fantastic service at a great price point for what it provides. However, we’ve grown the team in the last year, both in membership and scope, and we needed a new service architecture to match.
We run many services here at Phylos, both internally and externally, in addition to our Laravel app. Stay tuned for future blog posts on the myriad languages and libraries we use to power our web services and research.
This first part focuses on image and nginx configuration for running our services via Docker Compose on a developer’s laptop. Part 2 dives into how those same images flow through our devops pipeline into production. One goal of this setup is to handle Laravel app requests only under a specific path, leaving other paths free to proxy to other services. Another goal is to build images that will promote cleanly to running on Amazon ECS clusters. Feel free to clone the example repo and follow along.
A Common Base
An immediately identified need was to consolidate development environments across the team: it should be easy for everyone to use the same versions of things. A quick survey revealed quite the variety, even amongst our small team. Between our Linux and Mac users, there were hodgepodge installs of Homestead, Valet, Homebrew, and apt packages. Time for some regrouping at the source level.
Docker Compose is a great way to bring the repeatable, common environment that we were looking for to each of our developers. For this, we must have images to specify and run via our compose YAML.
In determining which base image to build from, we looked at a number of options. There is a project named Laradock which seeks to provide container images for anything and everything a Laravel developer might need. After investigating Laradock and some others, we settled on building our own images from Alpine bases. This took a bit more of an initial time investment, but gave us the best results in the interim, with clearer paths forward. Have a look through the example Dockerfile:
FROM php:7.2-fpm-alpine

# replace www-data(82:82) with www-data(1000:1000)
ARG NAME=www-data
ENV NAME ${NAME}
RUN deluser www-data && \
    adduser -s /bin/sh -D -u 1000 -g '' ${NAME} ${NAME} && \
    chown -R ${NAME}:${NAME} /home/${NAME}

# needed packages
RUN apk add --no-cache autoconf g++ libtool make pcre-dev \
    postgresql-dev \
    zlib-dev

# install required php extensions
RUN docker-php-ext-install bcmath \
    iconv \
    opcache \
    pdo pdo_pgsql pgsql \
    zip

# opcache config
COPY opcache-recommended.ini /usr/local/etc/php/conf.d/

# laravel config
COPY laravel.ini /usr/local/etc/php/conf.d/
COPY xlaravel.pool.conf /usr/local/etc/php-fpm.d/

# clean
RUN apk del autoconf g++ libtool make pcre-dev
RUN rm -rf /tmp/* /var/cache/apk/*

# no USER is needed, since php will setuid
WORKDIR /var/www
CMD ["php-fpm"]
EXPOSE 9000
Nginx Server
We needed a server to do all this proxying: nginx. Full configuration of nginx is beyond the scope of this article, but let’s talk about a couple key things we found that might help others on a similar path.
We’re going to run the Laravel app (hereafter referred to as “example”) via a PHP 7.x FPM Alpine-based image. We also need to run separate services at different paths, so we limit the Laravel app to handling anything under /example. We’ll front this with an nginx-alpine-based container that talks to PHP via FastCGI over a TCP port. By default, Docker Compose builds a network for all specified services to run in and provides name resolution within it, so the upstream can be configured simply by service name: “example_php_fpm”.
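As a rough sketch of what this looks like in compose form (the build paths and published port here are illustrative; see the example repo for the real layout):

version: "3.7"
services:

  example_php_fpm:
    build: ./docker/php-fpm    # the FPM image from the Dockerfile above

  nginx:
    build: ./docker/nginx      # nginx-alpine plus the server config below
    ports:
      - "8080:80"
    depends_on:
      - example_php_fpm

Because both services share the default compose network, nginx can reach PHP at example_php_fpm:9000, which is exactly what the location blocks below rely on.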
location /example {
    root /var/www/example/public;
    try_files $uri @example;
}

location /_dusk {
    root /var/www/app/example/public;
    try_files $uri @example;
}

location @example {
    fastcgi_pass example_php_fpm:9000;
}
Note the location section for /_dusk. This lets the Dusk browser tests use testing-specific routes that are defined solely for browser testing. We do not include this location block in production, but it may be required for running browser tests, depending on your app.
Since we want nginx to serve static files directly, we volume mount the Laravel app’s source tree into nginx:/var/www/example, then configure nginx’s root to serve from /var/www/example/public. Note that because we only handle Laravel requests under the /example path, that path is appended to the document root by nginx, so static files need to exist in /var/www/example/public/example. See the path of “dog.png” and how it is referenced in the HTML. The separate named-location block handles all the proxying to PHP, and static files get served directly.
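To make the mapping concrete (the exact asset path is illustrative), a request for the dog image resolves like this:

request URI:   /example/dog.png
nginx root:    /var/www/example/public
file on disk:  /var/www/example/public/example/dog.png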
HTTPS Gotcha
We did not discover the need for these directives until we reached production. We terminate TLS at the load balancer, so nginx never handles TLS itself; Laravel, however, always wants to redirect to HTTPS in production, so we had to add this to the production FastCGI params:
# TLS terminates at the ALB so force PHP to HTTPS
fastcgi_param REQUEST_SCHEME "https";
fastcgi_param HTTPS "on";
Source Editing, Migrations, and Testing
In the course of normal local development, we want to edit our source and see the results as quickly as possible. To do this, we volume-mount our source tree into the PHP/FPM container as well as the nginx container, so changes are visible to the PHP interpreter as soon as they are made.
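In compose terms, that is just a pair of bind mounts; the host path below is illustrative:

services:
  example_php_fpm:
    volumes:
      - ./example:/var/www/example   # PHP sees source edits immediately
  nginx:
    volumes:
      - ./example:/var/www/example   # nginx serves static files from the same tree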
To run artisan tasks such as migrations, and to run unit and browser tests, we build a separate PHP/CLI container, based on the PHP 7.x CLI Alpine image. This container has everything it needs to run all the artisan tasks and acts as a base for our Queue and Schedule images. In production, this particular image is run only to handle migrations.
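A minimal sketch of such a CLI image (abbreviated here; the real image installs the same extensions and config as the FPM image above):

FROM php:7.2-cli-alpine

# build deps plus libpq headers, mirroring the FPM image's extension set
RUN apk add --no-cache --virtual .build-deps autoconf g++ libtool make pcre-dev && \
    apk add --no-cache postgresql-dev && \
    docker-php-ext-install bcmath pdo pdo_pgsql pgsql && \
    apk del .build-deps

WORKDIR /var/www

With the source tree mounted at /var/www, a migration becomes something like docker-compose run --rm example_php_cli php artisan migrate (the service name here is hypothetical).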
Following the one main process per container best practice, our Queue image runs supervisord, configured to maintain an artisan queue:work process. Our Schedule image runs crond, configured to launch artisan schedule:run every minute. None of these CLI-based containers are included in the example.
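For reference, the queue worker’s supervisord program and the schedule cron entry follow the standard Laravel patterns; paths and options here are illustrative:

[program:queue-worker]
command=php /var/www/example/artisan queue:work --sleep=3 --tries=3
autostart=true
autorestart=true
user=www-data

and the crontab entry for the Schedule image:

* * * * * cd /var/www/example && php artisan schedule:run >> /dev/null 2>&1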
To be continued…
This wraps up a high-level discussion of our approach to using Docker containers to give our team a homogeneous development environment to run locally. Stay tuned for Part 2 to read about how we promote these exact images into production so our stacks remain as similar as possible.