New and shiny vs Good old software

«It depends.» That may be the most used sentence when evaluating the right technology for a given challenge. Choosing the right technology is not a trivial problem. When the needs are poorly defined, or not defined at all, it may not even be possible to suggest a good solution. But even when the problem is broken down into simple technical challenges, multiple factors influence the implementation of the solution. Budget constraints, internal knowledge, timelines, resourcing issues, corporate policies, integration with existing systems, technical feasibility, and a long list of other concerns will shape the final decision.

Leaving these other—extremely valid—considerations aside, let’s focus on the technical part. Oftentimes you have different frameworks, applications, libraries, SaaS offerings, and more that can provide a solution with different degrees of success. The sheer size of this list of options can complicate the decision. It is common practice to check what everyone else is doing. Doing market research is a good first step, but it should not determine the final decision.

Many times when a new technology arises, there is media hype. All of a sudden, many computer science blogs and news sites are flooded with posts explaining the merits of this groundbreaking new solution. Oftentimes, these communications are very passionate and contagious, and that’s good! Their goal is to get other people to try the new tools. Information flows rapidly and allows people who are solving that particular problem to assess whether they want to try a new direction.

This situation can result in what I call the shiny effect. I admit that I have been blinded by it in the past. I have spent large amounts of time learning frameworks that turned out to be great for only a narrow set of use cases. That has made me cautious about how I approach new tools, even though I still get excited about them.

One widely used argument is based on authority. (If all these multi-million dollar organizations are using this solution for the exact same problem I have, then I should do the same. After all, they would never choose to implement it without careful analysis.) Resorting to the argument from authority will not always lead you to a reliable answer. The market trend will not always be accurate; there are multiple examples of that.

I remember how, in the not so distant past, NoSQL databases were postulated by some as the future; some went so far as to say that SQL was dead. There was a time when Apache was to be completely usurped by other alternatives. The LAMP stack was supposed to be replaced by the MEAN stack without leaving a trace. SOAP and RPC were to disappear because of REST, while REST is now supposedly irrelevant because of GraphQL. And by now, no one was supposed to be using Basic Auth anymore. It goes without saying that the opposite happened as well: prophecies that emerging solutions were doomed to disappear were never fulfilled either.

None of that happened. All the tools and technologies listed above have found a way to coexist and share the spectrum of solutions. Even when they seemed to overlap at the beginning, in most cases the emerging alternatives ended up being a better solution only for a subset of scenarios. That is a very big win for everyone: we end up with two different tools that are very good at solving similar problems. Even if you can build a search backend in MySQL, you are probably going to have a better experience if you do it with Solr, for instance.

Disregarding well-proven technological solutions with prolific ecosystems is dangerous. Failing to learn about new solutions that may prove to be better at solving your problem is just as dangerous. We really need to take the time to understand the problem that the new solution is fixing before jumping into it with both feet. Carey Flichel attributes this, amongst other causes, to boredom and lack of understanding in Why Developers Keep Making Bad Technology Choices.

It’s no surprise then that we want to try something new, even if we’ve adequately solved a problem before. We are natural puzzle solvers, and sometimes you just want to try a new puzzle.

Choosing the right solution often involves checking what everyone else is doing, and then analyzing the problem for yourself while taking all the options into consideration. You must not trust new solutions just because they're new any more than you trust old solutions just because they're old. Instead, zero in on the problem to be solved and find the right solution regardless of the buzz. Keep every technology solution on the table until you understand the nuances of the problem space, and let that be your guiding light. Being aware of all the technologies involved, and knowing which is the best choice, takes time and a lot of research. Many times this will require a software architect to guide you.

Javascript aggregation in Drupal 7


What is it? Why should we care?

Javascript aggregation in Drupal is just what it sounds like: it aggregates Javascript files that are added during a page request into a single file. Modules and themes add Javascript using Drupal’s API, and the Javascript aggregation system takes care of aggregating all of that Javascript into one or more files. Drupal does this in order to cut down on the number of HTTP requests needed to load a page. Fewer HTTP requests is generally better for front-end performance.

In this article we’ll take a look at the API for adding Javascript, paying specific attention to the options affecting aggregation in order to make best use of the system. We’ll also take a look at some common pitfalls to look out for and how you can avoid them using the Advanced Aggregation (AdvAgg) module. This article focuses on Drupal 7; however, much of what we’ll cover applies equally to Drupal 8, and we’ll also look specifically at what’s changed there.

If fewer is better, why multiple aggregates?

Before we dig in, I said that fewer HTTP requests is generally better for front-end performance. You may have noticed that Drupal creates more than one Javascript aggregate for each page request. Why not a single aggregate? The reason Drupal does this is to take advantage of browser caching. If you had a single aggregate with all the Javascript on the page, any difference in the aggregate from page to page would cause the browser to download a different aggregate with a lot of the same code. This situation arises when Javascript is conditionally added to the page, as is the case with many modules and themes. Ideally, we would group Javascript that appears on every page apart from conditionally added Javascript. That way the browser would download the "every page" Javascript aggregate once and subsequently load it from cache on other page requests on your website. Drupal’s Javascript aggregation attempts to give you the ability to do exactly that, striking a balance between making fewer HTTP requests and leveraging browser caching.

The Basics

Let’s take a step back and briefly go over how to use Javascript aggregation in Drupal 7. You can turn on Javascript aggregation by going to Site Configuration > Development > Performance in your Drupal admin. Check off "Aggregate Javascript files" and Save. That’s it. Drupal will start aggregating module and theme Javascript from there on out.

Using the API

Once you’ve enabled Javascript aggregation, great! You’re on your way to sweet, sweet performance gains. But you’ve got modules and themes to build, and Javascript to go along with them. How do you make use of Javascript aggregation provided by Drupal? What kind of control do you have over the process?

There are a couple of API entry points for adding Javascript to the page. Most API entry points have parameters that let you tell Drupal how to aggregate your Javascript. Let’s look at each in turn.


drupal_add_js

This is a loaded function that is used to add a Javascript file, externally hosted Javascript, a setting, or inline Javascript to the page. When it comes to Javascript aggregation, only Javascript files are aggregated by Drupal, so we’re going to focus on adding files. The other types (inline and external) do have important impacts on Javascript aggregation though, which we’ll get into later.
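As a quick sketch of the three forms (the module name and file paths here are hypothetical), adding Javascript with explicit options looks like this:

```php
<?php
// Add a file with explicit aggregation-related options. Only this form
// participates in aggregation.
drupal_add_js(drupal_get_path('module', 'mymodule') . '/js/mymodule.js', array(
  'type' => 'file',        // 'file' is the only type Drupal aggregates.
  'scope' => 'header',     // 'header' or 'footer'.
  'group' => JS_DEFAULT,   // JS_LIBRARY, JS_DEFAULT or JS_THEME.
  'every_page' => FALSE,   // TRUE only if added on every page request.
));

// Inline and external Javascript are never aggregated:
drupal_add_js('jQuery(document).ready(function () { /* ... */ });', 'inline');
drupal_add_js('https://example.com/widget.js', 'external');
```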


drupal_add_library / hook_library

Drupal also provides a mechanism to register Javascript/CSS "libraries" associated with a module. This is nice because you can specify the options to pass to drupal_add_js in one place. That gives you the flexibility to call drupal_add_library in multiple places, without having to repeat the same options. The other nice thing is that you can specify dependencies for your library and Drupal will take care of resolving those dependencies and putting them in the right order for you.

I like to register every script in my module as a library, and only add scripts using drupal_add_library (or the corresponding #attached method, see below). This way I’m clearly expressing any dependencies and in the case where my script is added in more than one place, the options I give the aggregation system are all in one place in case I need to make a change. It’s also a nice way to safely deprecate scripts. If all the dependencies are spelled out, you can be assured that removing a script won’t break things.

Instructions for using hook_library and drupal_add_library are well documented. However, one important thing to note is the third parameter to drupal_add_library, every_page, which is used to help optimize aggregation. At registration time in hook_library, you typically don’t know if a library will be used on every page request or not. Especially if you’re registering libraries for other module authors to use. This is why drupal_add_library has an every_page parameter. If you’re adding a library unconditionally on every page, be sure to set every_page parameter to TRUE so that Drupal will group your script into a stable aggregate with other scripts that are added to every page. We’ll discuss every_page in more detail below.
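As a sketch of the pattern (module, library, and file names are hypothetical), a hook_library implementation and the matching drupal_add_library call might look like:

```php
<?php
/**
 * Implements hook_library().
 */
function mymodule_library() {
  $libraries['mymodule.behaviors'] = array(
    'title' => 'My module behaviors',
    'version' => '1.0',
    'js' => array(
      drupal_get_path('module', 'mymodule') . '/js/behaviors.js' => array(
        'group' => JS_DEFAULT,
      ),
    ),
    // Drupal resolves dependencies and orders scripts for you.
    'dependencies' => array(
      array('system', 'jquery.once'),
    ),
  );
  return $libraries;
}

// Elsewhere, e.g. in hook_page_build(). The third argument is every_page:
// TRUE here because this library is added unconditionally on every page.
drupal_add_library('mymodule', 'mymodule.behaviors', TRUE);
```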


#attached

#attached is a render array property that lets you add Javascript, CSS, libraries, and arbitrary data to the page. #attached is the preferred approach for adding Javascript in Drupal. In fact, drupal_add_js and drupal_add_library are gone in Drupal 8 in favor of #attached. #attached is nice since it is part of a render array and can be easily altered by other modules and themes. It’s also essential for places where render arrays are cached. With caching in play, functions that build out a render array are bypassed in favor of a cached copy. In that scenario, you need to have your Javascript addition included in #attached on the render array, because a call to drupal_add_js/drupal_add_library in your builder function would otherwise be missed.

Drupal developers often fall back on drupal_add_js for lack of a render array in context. It’s common to add Javascript in places like hook_init, preprocess and process functions, where there is no render array available to add your Javascript via #attached. In these scenarios, consider whether your Javascript is more appropriately added via a hook_page_build, or hook_node_view, where there is a render array available to add your script via #attached. Not only will you begin to align yourself with the Drupal 8 way of doing things, but you open yourself up to being able to use the render_cache module, which can gain you some performance. See the documentation for drupal_process_attached for more information.
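A minimal sketch of that approach, using hook_page_build (the module and library names are hypothetical):

```php
<?php
/**
 * Implements hook_page_build().
 *
 * Adds Javascript via #attached on the render array instead of calling
 * drupal_add_js()/drupal_add_library() directly.
 */
function mymodule_page_build(&$page) {
  $page['page_bottom']['mymodule'] = array(
    '#attached' => array(
      // Attach a registered library (resolves its dependencies too).
      'library' => array(
        array('mymodule', 'mymodule.behaviors'),
      ),
      // Or attach a file directly, with aggregation options.
      'js' => array(
        drupal_get_path('module', 'mymodule') . '/js/extra.js' => array(
          'every_page' => TRUE,
        ),
      ),
    ),
  );
}
```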

Controlling aggregation

Let’s take a close look at the parameters affecting Javascript aggregation. Drupal will create an aggregate for each unique combination of the scope, group, and every_page parameters, so it’s important to understand what each of these mean.


scope

The possible values for scope are ‘header’ and ‘footer’. A scope of ‘header’ will output the script inside the <head> tag; a scope of ‘footer’ will output the script just before the closing </body> tag.


group

The ‘group’ option takes any integer as a value, although you should stick to the constants JS_LIBRARY, JS_DEFAULT and JS_THEME to avoid excessive numbers of aggregates and to follow convention.


every_page

The ‘every_page’ option expects a boolean. This is a way to tell the aggregation system that your script is guaranteed to be included on every page. This flag is commonly overlooked, but it is very important. Any scripts that are added on every page should be included in a stable aggregate group: one that is the same from page to page, so that the browser can make good use of caching. The every_page parameter is used for exactly that purpose. Miss out on using every_page and your script is apt to be lumped into a volatile aggregate that changes from page to page, forcing the browser to download your code anew on each page request.
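Putting the three together (file paths are illustrative), scripts only share an aggregate when all three values match:

```php
<?php
// These two share scope, group and every_page, so they land in the same
// stable aggregate.
drupal_add_js('a.js', array('scope' => 'header', 'group' => JS_DEFAULT, 'every_page' => TRUE));
drupal_add_js('b.js', array('scope' => 'header', 'group' => JS_DEFAULT, 'every_page' => TRUE));

// Same scope and group, but conditionally added: this starts a separate,
// more volatile aggregate.
drupal_add_js('c.js', array('scope' => 'header', 'group' => JS_DEFAULT, 'every_page' => FALSE));
```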

What can go wrong

There are a few problems to watch out for when it comes to Javascript aggregation. Let’s look at some in turn.

Potential number of aggregates

The astute reader will have noticed that there is the potential for a lot of aggregates to be created. When you combine the scope, group, and every_page options, the number of aggregates per page can get out of control if you’re not careful. With two values for scope, three for group, and two for every_page, there is the potential to climb to twelve aggregates on a page. Twelve seems a little out of control. Sure, we want to leverage browser caching, but that many HTTP requests is concerning. If your pages are producing a high number of aggregates, it’s likely that the modules and themes adding Javascript are not using the API as well as they could.

Misuse of Groups

As was mentioned, JS_LIBRARY, JS_DEFAULT and JS_THEME are default constants that ship with Drupal to group Javascript. When adding Javascript, modules and themes should stick to these groups. As soon as you deviate, you introduce a new aggregate. Often when people deviate from the default groups, it’s to ensure ordering. Maybe you need your script to be the very first script on the page, so you set your group to JS_LIBRARY - 100. Instead, use the ‘weight’ option to manage ordering. Set your group to JS_LIBRARY and set the weight very low to ensure you’re first in the JS_LIBRARY aggregate. Another issue I see is themes adding scripts explicitly under the JS_THEME group. This is logical; after all, it is Javascript added by the theme! However, it’s better to make use of hook_library, declare your dependencies, and let the group default to JS_DEFAULT. The order you need is preserved by declaring your dependencies, and you avoid creating a new aggregate unnecessarily.
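A sketch of the ordering fix ($path stands in for your script's path):

```php
<?php
// Avoid: a custom group value creates a whole new aggregate.
drupal_add_js($path, array('group' => JS_LIBRARY - 100));

// Better: stay inside the standard JS_LIBRARY group and order with weight.
drupal_add_js($path, array('group' => JS_LIBRARY, 'weight' => -100));
```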

Inline and External Scripts

Extra care should be taken when adding external and inline scripts. drupal_add_js and friends preserve order well: they track the order in which the function is called for all the scripts added to the page and respect it. That’s all well and good. However, if an inline or external script is added between file scripts, it can split the aggregate. For example, if you add three scripts (two file scripts and one external), all with the same scope, group, and every_page values, you might think that Drupal would aggregate the two file scripts and output two script tags total. However, if drupal_add_js gets called for a file, then the external, then the other file, you end up with two aggregates generated with the external script in between, for a total of three script tags. This is probably not what you want. In this case it’s best to specify a high or low weight value for the external script so it sits at the top or bottom of the aggregate and doesn’t end up splitting it. The same goes for inline JS.
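Concretely (file names and URL are illustrative), the split and the weight-based fix look like this:

```php
<?php
// Called in this order with identical scope/group/every_page, Drupal emits
// three script tags: aggregate(first.js), the external script,
// aggregate(second.js).
drupal_add_js('first.js');
drupal_add_js('https://example.com/widget.js', 'external');
drupal_add_js('second.js');

// Pushing the external script to an edge with a weight keeps the two file
// scripts together in a single aggregate.
drupal_add_js('https://example.com/widget.js', array(
  'type' => 'external',
  'weight' => -50,
));
```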

Scripts added in a different order

I alluded to this in the last section, but certain situations can arise where scripts get added to an aggregate in a different order from one request to the next. This can happen for any number of reasons, but because Drupal tracks the call order of drupal_add_js, you end up with a different order for the same scripts in a group, and thus a different aggregate. Sadly, the same code will sit in two aggregates on the server with slightly different source order; otherwise they could have been exactly equal, produced a single aggregate, and thus been cached by the browser from page to page. The solution in this case is to use weight values to ensure the same order within an aggregate from page to page. It’s not ideal, because you don’t want to have to set weights on every hook_library / drupal_add_js call. I’d recommend handling it on a case by case basis.

Best Practices

Clearly, there is a lot that can go wrong, or at least end up taking you to a place that is sub-optimal. Considering all that we’ve covered, I’ve come up with a list of best practices to follow:

Always use the API, never ‘shoehorn’ scripts into the page

A common problem is running into modules and themes that sidestep the API to add their Javascript to the page. Common methods include using drupal_add_html_head to add script to the header, using hook_page_alter and appending script markup to $variables[‘page_bottom’], or otherwise outputting a raw <script> tag in the page markup.

Use hook_library

hook_library is a great tool. Use it for any and all Javascript you output (inline, external, file). It centralizes all of the options for each script so you don’t have to repeat them, and best of all, you can declare dependencies for your Javascript.

Use every_page when appropriate

The every_page option signals to the aggregation system that your script will be added to all pages on the site. If your script qualifies, make sure you’re setting every_page = TRUE. This puts your script into a "stable" aggregate that is cached and reused often.


Advanced Aggregation

Advanced CSS/JS Aggregation (AdvAgg) is a contributed module that replaces Drupal’s built-in aggregation system with its own, making many improvements and adding additional features along the way. The way that you add scripts is the same, but behind the scenes, how the groups are generated and aggregated into their respective aggregates is different. AdvAgg also attempts to overcome many of the scenarios where core aggregation can go wrong that I listed above. I won’t cover all of what AdvAgg has to offer; it has an impressive scope that would warrant its own article or more. Instead I’ll touch on how it can solve some of the problems we listed above, as well as some other neat features.

Out of the box the core AdvAgg module supplies a handful of transparent backend improvements. One of those is stampede protection. After a code release with JS/CSS changes, there is a potential that multiple requests for the same page will all trigger the calculation and writing of the same new aggregates, duplicating work. On high traffic sites, this can lead to deadlocks which can be detrimental to performance. AdvAgg implements locking so that only the first process will perform the work of calculating and writing aggregates. AdvAgg also employs smarter caching strategies so the work of calculating and writing aggregates is done as infrequently as possible, and only when there is a change to the source files. These are nice improvements, but there is great power to behold in AdvAgg’s submodules.

AdvAgg Compress Javascript

AdvAgg comes with two submodules to enhance JS and CSS compression. They each provide a pluggable way to have a compressor library act on each aggregate before it’s saved to disk. There are a few options for JS compression; JSqueeze is a great option that will work out of the box. However, if you have the flexibility on your server to install the JSMIN C extension, it’s slightly more performant.

AdvAgg Modifier

AdvAgg Modifier is another submodule that ships with AdvAgg, and here is where we get into solving some of the problems we listed earlier. Let’s explore some of the more interesting options that are made available by AdvAgg Modifier:

Optimize Javascript Ordering

There are a couple of options under this fieldset that seek to solve some of the problems laid out above. First up, "Move Javascript added by drupal_add_html_head() into drupal_add_js()" does just what it says, correcting the errant use of the API by module/theme authors. Doing this allows the Javascript in question to participate in Javascript aggregation, and potentially removes an HTTP request if it was a file that was being added. Next, “Move all external scripts to the top of the execution order” and “Move all inline scripts to the bottom of the execution order” are both attempts at resolving the issue of splitting the aggregate. This more or less assumes that external files are library-esque and that inline Javascript is meant to run after all other Javascript; that may or may not be the case depending on what you’re doing, and you should definitely use caution. That said, if it works for you, it could be a simple way to avoid splitting the aggregate without the manual work of adjusting weights.

Move JS to the footer

As we discussed, you typically add Javascript to one of two scopes: header or footer. A common performance optimization is to move all the Javascript to the footer. AdvAgg gives you a couple of choices in how you do that, ranging from some to all Javascript moved to the footer. This can easily break your site however, and you definitely need to test and use caution here. The reason, of course, is that there could be other scripts that aren’t declaring their dependencies properly, and/or are ‘shoehorning’ their Javascript into the page, that depend on a dependency existing in the header. These will need to be fixed on a case by case basis (submit patches to contrib modules!). If you still have Javascript that really does need to be in the header, AdvAgg extends the options for drupal_add_js and friends to include a boolean scope_lock that, when set to TRUE, will prevent AdvAgg from moving the script to the footer. There always seems to be some stubborn ads or analytics provider that insists on presence in the header, and is backed by the business, forcing you to make at least one of these exceptions.

Another related option is "Put a wrapper around inline JS if it was added in the content section incorrectly". This is an attempt to resolve the problem of inline Javascript in the body depending on things being in the header. AdvAgg will wrap any inline Javascript in a timer that waits on jQuery and Drupal to be defined before running the inline code. It uses a regular expression to search for the scripts, which, while neat, feels a little brittle. There is an alternate option to search for the scripts using xpath instead of regex, which could be more reliable. If you try it, do so with care, and test first.
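A sketch of pinning a header script with AdvAgg's scope_lock option (the URL is hypothetical):

```php
<?php
// With AdvAgg's "move JS to the footer" option enabled, scope_lock keeps a
// script that genuinely must stay in the header from being relocated.
drupal_add_js('https://ads.example.com/loader.js', array(
  'type' => 'external',
  'scope' => 'header',
  'scope_lock' => TRUE,  // AdvAgg-specific option; ignored by core.
));
```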

Drupal 8

Not a lot is different when it comes to Javascript aggregation in D8, but there have been a few notable changes. First, the aggregation system has been refactored under the hood to be pluggable. You can now cleanly swap out and/or add to the aggregation system, whereas in Drupal 7 it was a difficult and messy affair to do so. Modules like AdvAgg should have a relatively easier time implementing their added functionality. Other improvements to the aggregation system that didn’t quite make the D8 release will now have a clearer path to implementation for the same reason. In terms of its function and the way in which Javascript is grouped for aggregation, everything is as it was in Drupal 7.

The second change centres around adding Javascript to the page. drupal_add_js and drupal_add_library have been removed in Drupal 8 in favor of #attached. The primary motivation is to improve cacheability. With Javascript consistently added using #attached, it’s possible to do better render array caching. The classic example being that if you were using drupal_add_js in your form builder function, your Javascript would be missed if the cached render array was used and the builder function skipped. Caching aside, it’s nice that there is a single consistent way to add Javascript. Also related to this change, hook_library definitions have gone the way of the dodo. In their place, *.libraries.yml files are now used to define your Javascript and CSS assets as libraries, continuing the Drupal 8 theme of replacing info hooks with YAML files.

A third somewhat related change that I’ll point out is that Drupal core no longer automatically adds jQuery, Drupal.js and Drupal.settings (drupalSettings in Drupal 8). jQuery still ships with core, but if you need it, you have to declare it as a dependency in your *.libraries.yml file. There have also been steps taken to reduce Drupal core’s reliance on jQuery. This isn’t directly related to Javascript aggregation per se, but it’s a big change nonetheless that you’ll have to keep in mind if you’re used to writing Javascript in Drupal 7 and below. It’s a welcome improvement with potential for some nice front end performance gains. For more information and specific implementation details, check out the guide for adding Javascript and CSS from a theme and adding Javascript and CSS from a module.
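As a minimal sketch (module, library, and file names are hypothetical), a Drupal 8 library definition declares its jQuery dependency explicitly:

```yaml
# mymodule.libraries.yml
behaviors:
  version: 1.x
  js:
    js/behaviors.js: {}
  dependencies:
    - core/jquery
    - core/drupal
```

A render array would then attach it with $build['#attached']['library'][] = 'mymodule/behaviors';.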

HTTP 2.0

Any article about aggregating Javascript assets these days deserves a disclaimer regarding HTTP 2.0. For the uninitiated, HTTP 2.0 is the new version of the HTTP protocol that while maintaining all the semantics of HTTP 1.1, significantly changes the implementation details. The most noted change in a nutshell is that instead of opening a bunch of TCP connections to download all the resources you need from a single origin, HTTP 2.0 opens a single connection per origin and multiplexes resources on that connection to achieve parallelism. There is a lot of promise here because HTTP 2.0 enables you to minimize the number of TCP connections your browser has to open, while still downloading assets in parallel and individually caching assets. That means that you could be better off not aggregating your assets at all, allowing the browser to maximize caching efficiency without incurring the cost of multiple open TCP connections. In practice, the jury is still out on whether no aggregation at all is a good idea. It turns out that compression doesn’t do as well on a collection of individual small files as it does on aggregated large files. It’s likely that some level of aggregation will continue to be used in practice. However as hosts more widely support HTTP 2.0 and old browsers fall off, this will become a larger area of interest.


Whew, we’ve come to the end at last. My purpose was to go over all of the nuances of Javascript aggregation in Drupal. It’s an important topic when it comes to front-end performance, and one that deserves careful consideration when you're trying to eke every last bit of performance out of your site. With Drupal 8 and HTTP 2.0 now a reality, it’ll be interesting to watch as things evolve for the better.

The Uncomplicated Firewall

Firewalls are a tool that most web developers only deal with when sites are down or something is broken. Firewalls aren’t fun, and it’s easy to ignore them entirely on smaller projects.

Part of why firewalls are complicated is that what we think of as a "firewall" on a typical Linux or BSD server is responsible for much more than just blocking access to services. Firewalls (like iptables, nftables, or pf) manage filtering inbound and outbound traffic, network address translation (NAT), Quality of Service (QoS), and more. Most firewalls have an understandably complex configuration to support all of this functionality. Since firewalls are dealing with network traffic, it’s relatively easy to lock yourself out of a server by blocking SSH by mistake.

In the desktop operating system world, there has been great success in the "application" firewall paradigm. When I load a multiplayer game, I don’t care about the minutiae of ports and protocols - just that I want to allow that game to host a server. Windows, OS X, and Ubuntu all support application firewalls where applications describe what ports and protocols they need open. The user can then block access to those applications if they want.

Uncomplicated Firewall (ufw) is shipped by default with Ubuntu, but like OS X (and unlike Windows) it is not turned on automatically. With a few simple commands we can get it running, allow access to services like Apache, and even add custom services like MariaDB that don’t ship with a ufw profile. UFW is also available for other Linux distributions, though they may have their own preferred firewall tool.

Before you start

Locking yourself out of a system is a pain to deal with, whether it’s lugging a keyboard and monitor to your closet or opening a support ticket. Before testing out a firewall, make sure you have some way to get into the server should you lock yourself out. In my case, I’m using a LAMP vagrant box, so I can either attach the Virtualbox GUI with a console, or use vagrant destroy / vagrant up to start clean. With remote servers, console access is often available through a management web interface or a "recovery" SSH server like Linode’s Lish.

It’s good to run a scan on a server before you set up a firewall, so you know what is initially being exposed. Many services will bind to ‘localhost’ by default, so even though they are listening on a network port they can’t be accessed from external systems. I like to use nmap (which is available in every package manager) to run port scans.

$ nmap
Starting Nmap 6.40 ( ) at 2015-09-02 13:16 EDT
Nmap scan report for trusty-lamp.lan (
Host is up (0.0045s latency).
Not shown: 996 closed ports
PORT     STATE SERVICE
22/tcp   open  ssh
80/tcp   open  http
111/tcp  open  rpcbind
3306/tcp open  mysql
Nmap done: 1 IP address (1 host up) scanned in 0.23 seconds

Listening for SSH and HTTP connections makes sense, but we probably don’t need rpcbind (for NFS) or MySQL to be exposed.

Turning on the firewall

The first step is to tell UFW to allow SSH access:

$ sudo ufw app list
Available applications:
  Apache
  Apache Full
  Apache Secure
  OpenSSH
$ sudo ufw allow openssh
Rules updated
Rules updated (v6)
$ sudo ufw enable
Command may disrupt existing ssh connections. Proceed with operation (y|n)? y
Firewall is active and enabled on system startup
$ sudo ufw status
Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)

Test to make sure the SSH rule is working by opening a new terminal window and ssh’ing to your server. If it doesn’t work, run sudo ufw disable and see if you have some other firewall configuration that’s conflicting with UFW. Let’s scan our server again now that the firewall is up:

$ nmap
Starting Nmap 6.40 ( ) at 2015-09-02 13:31 EDT
Note: Host seems down. If it is really up, but blocking our ping probes, try -Pn
Nmap done: 1 IP address (0 hosts up) scanned in 3.07 seconds

UFW is blocking pings by default. We need to run nmap with -Pn so it blindly checks ports.

$ nmap -Pn
Starting Nmap 6.40 ( ) at 2015-09-02 13:32 EDT
Nmap scan report for trusty-lamp.lan (
Host is up (0.00070s latency).
Not shown: 999 filtered ports
PORT   STATE SERVICE
22/tcp open  ssh
Nmap done: 1 IP address (1 host up) scanned in 6.59 seconds

Excellent! We’ve blocked access to everything but SSH. Now, let’s open up Apache.

$ sudo ufw allow apache
Rule added
Rule added (v6)

You should now be able to access Apache on port 80. If you need SSL, allow "apache secure" as well, or just use the “apache full” profile. You’ll need quotes around the application name because of the space.

To remove a rule, prefix the entire rule you created with "delete". To remove the Apache rule we just created, run sudo ufw delete allow apache.
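If you don't remember the exact rule you created, ufw can also list rules by index and delete by number; a short sketch (the rule list shown is illustrative, and ufw will ask you to confirm the deletion):

```shell
$ sudo ufw status numbered
Status: active

     To                         Action      From
     --                         ------      ----
[ 1] OpenSSH                    ALLOW IN    Anywhere
[ 2] Apache                     ALLOW IN    Anywhere
$ sudo ufw delete 2
```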

Blocking services

UFW operates in a "default deny" mode, where incoming traffic is denied and outgoing traffic is allowed. To operate in a “default allow” mode, run sudo ufw default allow. After running this, perhaps you don’t want Apache to be able to listen for requests, and only want to allow access from localhost. Using ufw, we can deny access to the service:

$ sudo ufw deny apache
Rule updated
Rule updated (v6)

You can also use "reject" rules, which tell a client that the service is blocked. Deny forces the connection to timeout, not telling an attacker that a service exists. In general, you should always use deny rules over reject rules, and default deny over default allow.
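As a sketch, here is what the two rule styles and the explicit default policies look like side by side; these use standard ufw syntax, but run them on a test machine first, since output and existing rules will vary:

```shell
# Default policies: silently drop unsolicited incoming traffic,
# allow all outgoing traffic.
sudo ufw default deny incoming
sudo ufw default allow outgoing

# A reject rule actively refuses the connection, so the client gets an
# immediate error; a deny rule silently drops packets, so the client
# times out without learning that the service exists.
sudo ufw reject apache
sudo ufw deny apache
```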

Address and interface rules

UFW lets you add conditions to the application profiles it ships with. For example, say you are running Apache for an intranet, and have OpenVPN set up for employees to securely connect to the office network. If your office network is connected on eth1, and the VPN on tun0, you can grant access to both of those interfaces while denying access to the general public connected on eth0:

$ sudo ufw allow in on eth1 to any app apache
$ sudo ufw allow in on tun0 to any app apache

To use IP address ranges instead of interface names, replace "in on <interface>" with "from <address>".
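For instance, to grant Apache access by address rather than by interface (192.168.1.0/24 here is a made-up example range; substitute your own network):

```shell
# Allow only the hypothetical office subnet to reach Apache.
sudo ufw allow from 192.168.1.0/24 to any app apache
```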

Custom applications

While UFW lets you work directly with ports and protocols, this can be complicated to read over time. Is it Varnish, Apache, or Nginx that’s running on port 8443? With custom application profiles, you can easily specify ports and protocols for your own custom applications, or those that don’t ship with UFW profiles.

Remember up above when we saw MySQL (well, MariaDB in this case) listening on port 3306? Let’s open that up for remote access.

Pull up a terminal and browse to /etc/ufw/applications.d. This directory contains simple INI files. For example, openssh-server contains:

[OpenSSH]
title=Secure shell server, an rshd replacement
description=OpenSSH is a free implementation of the Secure Shell protocol.
ports=22/tcp

We can create a mariadb profile ourselves to work with the database port.

[MariaDB]
title=MariaDB database server
description=MariaDB is a MySQL-compatible database server.
ports=3306/tcp

$ sudo ufw app list
Available applications:
  Apache
  Apache Full
  Apache Secure
  MariaDB
  OpenSSH
$ sudo ufw allow from to any app mariadb

You should now be able to access the database from any address on your local network.

Debugging and backup

Debugging firewall problems can be very difficult, but UFW has a simple logging framework that makes it easy to see why traffic is blocked. To turn on logging, start with sudo ufw logging medium. Logs will be written to /var/log/ufw.log. Here’s a UFW BLOCK line where Apache has not been allowed through the firewall:

Jan 5 18:14:50 trusty-lamp kernel: [ 3165.091697] [UFW BLOCK] IN=eth2 OUT= MAC=08:00:27:a1:a3:c5:00:1e:8c:e3:b6:38:08:00 SRC= DST= LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=65499 DF PROTO=TCP SPT=41557 DPT=80 WINDOW=29200 RES=0x00 SYN URGP=0

From this, we can see all of the information about the source of the request as well as the destination. When you can’t access a service, this logging makes it easy to see whether it’s the firewall or something else causing problems. Higher logging levels can use a large amount of disk space and IO, so when you’re not debugging, it’s recommended to set logging to low or off.

Once you have everything configured to your liking, you might discover that there isn’t anything in /etc with your rules configured. That’s because ufw actually stores its rules in /lib/ufw. If you look at /lib/ufw/user.rules, you’ll see iptables configurations for everything you’ve set. In fact, UFW supports custom iptables rules too if you have one or two rules that are just too complex for UFW.

For server backups, make sure to include the /lib/ufw directory. I like to create a symlink from /etc/ufw/user-rules to /lib/ufw. That way, it’s easy to remember where on disk the rules are stored.
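Creating that symlink is a one-liner; /etc/ufw/user-rules is just the name I happen to use, not anything UFW requires:

```shell
# Point a reminder symlink in /etc at the directory where UFW
# actually stores its generated rules.
sudo ln -s /lib/ufw /etc/ufw/user-rules
```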

Next steps

Controlling inbound traffic is a great first step, but controlling outbound traffic is better. For example, if your server doesn’t send email, you could prevent some hacks from being able to reach mail servers on port 25. If your server has many shell users, you can prevent them from running servers without being approved first. What other security tools are good for individual and small server deployments? Let me know in the comments!
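As a sketch, blocking outbound mail traffic like that uses ufw’s "out" direction with the port 25 example mentioned above:

```shell
# Deny all outgoing connections to TCP port 25 (SMTP) from this host.
sudo ufw deny out 25/tcp
```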

Sharing Breakpoints Between Drupal 8 and Sass

Among the many great new features in Drupal 8 is the Breakpoint module. This module allows you to define sets of media queries that can be used in a Drupal site. The most notable place that Drupal core uses breakpoints is the Responsive Image module.

To add and configure breakpoints, you create a file named *theme-name*.breakpoints.yml in the root directory of your theme, replacing *theme-name* with the theme’s machine name. Modules can also have *module-name*.breakpoints.yml configuration files at their root. In this file you define each breakpoint by these properties:

  • label - the human readable name of the breakpoint
  • mediaQuery - a valid @media query
  • weight - where the breakpoint is ordered in relation to other breakpoints: the breakpoint targeting the smallest viewport size should have a smaller weight, and larger breakpoints should have larger weight values
  • multipliers - the ratio between the physical pixel size of the active device and the device-independent pixel size

Exposing all this information to Drupal from a single file is great. But having this file creates a problem. The breakpoints defined here are not exposed to the Sass code used to style our site.

A solution

One best practice in software development is avoiding repetition whenever possible, often called keeping things DRY (Don’t Repeat Yourself). To apply the DRY method to the breakpoints.yml file and Sass, I wrote drupal-sass-breakpoints. This is an Eyeglass module that allows importing breakpoints from our theme’s breakpoints.yml into our Sass code.

In this article we are going to set up drupal-sass-breakpoints in a minimal Drupal 8 theme. If you would like an introduction to creating a Drupal 8 theme, I recommend going through John Hannah’s article on Drupal 8 Theming Fundamentals. For this article's example (neelix), we are going to start with the following files:

├── scss
│   └── style.scss
├── css
├── neelix.libraries.yml
├── neelix.info.yml
├── neelix.breakpoints.yml

This gives us a directory for our Sass, a directory for the compiled CSS, a file to declare a library for our CSS, an *.info.yml file, and our breakpoints.yml file. Inside the neelix.breakpoints.yml file add the following:

neelix.small:
  label: small
  mediaQuery: ''
  weight: 0
  multipliers:
    - 1x
neelix.medium:
  label: medium
  mediaQuery: 'screen and (min-width: 560px)'
  weight: 1
  multipliers:
    - 1x
neelix.wide:
  label: wide
  mediaQuery: 'screen and (min-width: 600px)'
  weight: 2
  multipliers:
    - 1x
neelix.xwide:
  label: xwide
  mediaQuery: 'screen and (min-width: 860px)'
  weight: 3
  multipliers:
    - 1x

Now that we have a starting point for the new theme and some breakpoints in place, we need to install drupal-sass-breakpoints. For the example here, we are going to use Grunt to compile Sass. But it is important to mention that drupal-sass-breakpoints can work with Gulp or any other task runner that supports Eyeglass modules. To get started with Grunt, we first need to set up dependencies. This can be done by defining the dependencies in a file named package.json in the root directory of the neelix theme:

{
  "devDependencies": {
    "breakpoint-sass": "^2.6.1",
    "drupal-sass-breakpoints": "^1.0.1",
    "eyeglass": "^0.7.1",
    "grunt": "^0.4.5",
    "grunt-contrib-watch": "^0.6.1",
    "grunt-sass": "^1.0.0"
  }
}

Then run npm install from the command line while in the theme’s root directory.

This installs all the necessary dependencies we need, which were defined in package.json. Next, we need to define Grunt tasks. Do so by adding the following in a Gruntfile.js file in the root directory of the theme:

'use strict';

var eyeglass = require("eyeglass");

module.exports = function(grunt) {
  grunt.initConfig({
    watch: {
      sass: {
        files: ['scss/*.{scss,sass}', 'Gruntfile.js', 'neelix.breakpoints.yml'],
        tasks: ['sass:dist']
      }
    },
    sass: {
      options: eyeglass.decorate({
        outputStyle: 'expanded'
      }),
      dist: {
        files: {
          'css/style.css': 'scss/style.scss'
        }
      }
    }
  });

  grunt.loadNpmTasks('grunt-sass');
  grunt.loadNpmTasks('grunt-contrib-watch');

  grunt.registerTask('default', ['sass:dist', 'watch']);
};

This creates all that we need to compile Sass with Eyeglass. Now we can set up a test case in our scss/style.scss file by adding the following:

@import "drupal-sass-breakpoints";

body {
  @include media('medium') {
    content: "Styles that apply to our 'medium' breakpoint";
  }
}

In the above example we are using the media() mixin, which drupal-sass-breakpoints adds. The argument we pass to it is the label of the media query we are targeting (medium). Now compile the Sass by running the default Grunt task from the root directory of our theme:

$ grunt
Now check out the compiled results inside css/style.css:

@media (min-width: 560px) {
  body {
    content: "Styles that apply to our 'medium' breakpoint";
  }
}

That is it! We now have Sass importing the media queries from our theme’s breakpoints.yml file.

What about srcset and sizes?

If you find you're overwhelmed by the number of breakpoints in your theme, remember that there are many instances where you only need to use srcset for responsive image styles. The srcset attribute is particularly useful in conjunction with the sizes attribute. I recommend this article on CSS-Tricks, which outlines the scenarios that srcset and sizes are best for. In these cases, it may not be necessary to define all of our breakpoints in the .breakpoints.yml file.


Do you have any feature requests or suggestions for improving drupal-sass-breakpoints? Feel free to contribute on GitHub.

You can find a fully working example of this article here. Happy styling!

My Four Kickstarter Mistakes Can Improve Your Software Projects

So I wrote a children’s book. To get it printed and published, I launched a Kickstarter that was successful thanks to the many people who believed in the idea and committed with their wallets.

But the campaign succeeded in spite of myself. I made a lot of mistakes. There are many things I would do differently if I were to launch it again, and will do differently for the next campaign. As I reflected on the mistakes and lessons learned, however, there was commonality with general best practices for implementing a project, and in particular, software projects.

So here are my mistakes (at least, the ones I know about) with the resulting lessons learned.

1. Rushing the Launch

Many of the mistakes that follow sprouted from impatience. That’s why this is first. If you don’t make this mistake, you’ll save yourself a lot of headaches.

There is such a thing as analysis paralysis, or fear of launching because you are scared to fail. But that was not my problem. My problem was pure impatience. I had worked on the book for months, on both editing and requisitioning sample illustrations. I had spent almost another month crafting the kickstarter page and putting together the video. By the time the video was done, I just wanted to get it out there.

However, I would have benefited from sitting down and doing more intentional research, and pulling in some advice from people I trust. I could have tested the video to see if it spurred people to action. I could have looked for the ideal time to launch a children’s book Kickstarter, and then actually waited. I could have spent some time and effort collecting a list of media contacts and prepared a message to send them on launch day. I could have set up a social media campaign to capture email addresses and test initial copy. I could have...done so many other things.

Instead of having a methodical plan to follow, I just started throwing things at the wall to see what stuck, desperate to try anything. It caused some unnecessary stress and worry, especially when the momentum started to slow down after the first few days of launch. I knew I hadn’t done everything I could to ensure a better chance at success, and at that point, there was less and less that I could do about it.


Don’t be haphazard with your launch. If you feel sick of working on something and just want to get it out...stop. Think. Sleep on it. Your whims and emotional health are not the most important factors in a launch strategy. Take time to plan the launch and the weeks following it. Be intentional about establishing a calendar that makes sense, and do your best to stick to it.

2. Underestimating Time and Cost Involved

The consequences of this mistake can be severe. If the funding of your Kickstarter doesn’t cover all of your costs, the remainder comes directly from your own pocket. If you can’t afford the overage, you risk breaking your promises to your backers. Or a potential lawsuit.

It is critical that you count the cost and attempt to measure possible risk. An unfunded, failed campaign is disappointing. Possibly heartbreaking. Worse, though, is a campaign that leaves you burned out, destitute, with the residue of broken promises polluting your wake.

Some things I underestimated or ignored:

  • The time required to finish illustration. I originally wanted to ship books to backers by April 2015, but ended up over three months late. This was no fault of the illustrator; rather, I forgot about my own nitpicking tendencies and the vast amount of back-and-forth communication they would require.
  • The time required to manage the campaign, both during and after. Posting updates, reaching out to blogs, the packing and shipping of the products (so many trips to the post office. So, so many.) Since I wasn’t paying myself anything for labor, this didn’t have a direct impact on costs, but it did have a negative impact on timelines.
  • The price of international shipping. This is always more expensive than you think, and it varies heavily from carrier to carrier and country to country. Total package weight and size matter much more than for domestic shipping, and a slight increase can mean a big bump in cost. I literally lost money on every single international shipment, even the ones to Canada. If you think you are charging enough for international shipping, you probably aren’t.
  • The cost of shipping materials. It turns out that a 9.5x9.5 book needs larger envelopes than a book that is just one inch narrower. Packaging a print so it doesn’t bend during shipment requires some extra materials. And on and on. The small things add up.
  • The cost of marketing. I found some niche newsletters to advertise in. Did I plan and budget for these ads? Nope.

I ended up spending a decent amount of my own money to cover the extra costs. Since it was a passion project for me, and the additional amount needed was not astronomical, it wasn’t that big of a deal...but it could have been a disaster.


Measure twice, cut once. Estimation is hard, and if you are at the mercy of other companies whose prices can change, be sure you under-promise so you set yourself up to over-deliver. Make sure you are accounting for as much as possible, and then add some more to your estimation to take into account possible risks and other uncertainties.

3. Rewards Didn’t Align With What People Actually Wanted

I thought I knew what potential backers would want. I myself was a connoisseur of children’s books, and so I had a lofty view of my own opinion. And while my desires intersected somewhat with what people wanted, I missed the mark badly on many rewards. This was either because the reward itself was undesirable, the price for the reward was too much, or a combination of the two.

For example, an original sketch from the illustrator was a reward that some people obviously wanted, but at only three takers, the price point probably kept many from claiming it. Instead of setting it at a round number that “felt” right, I should have taken the time to see if the numbers worked for a lower price point.

Likewise, I didn’t really highlight signed books as a potential benefit. This was me being oblivious and living inside my head too much. I personally don’t value signed books that much, even from my favorite authors, and so I didn’t expect others to care about my own sloppy signature. But that was stupid. Nearly every other book project on Kickstarter offers signed copies, and I should have taken the obvious hint.

While I did take time to see what rewards other successful campaigns had offered, I didn’t open myself up to learning everything I could. I had already chiseled some ideas in stone, as if they were footnotes to the Ten Commandments.


Is your project actually offering what people want? Does your marketing message accurately reflect the benefits that your target users are looking for? Remember, most of the time, you are not your target audience. Do not assume you have the authoritative opinion on your product or service. Get some empathy, talk to people who aren’t you, and don’t ignore what other successful projects (in the same domain space) might have in common. Perhaps do some organized user research. Be prepared to change (or kill) your darlings.

4. Not Properly Determining a Minimum Viable Product

I aimed too high. If not for generous friends, family, and coworkers, I would have turned out like Icarus, with melted wings pushing me down toward Poseidon's ambivalent embrace. Or like something much less melodramatic, but equally disastrous.

If I was going to publish a book, I was going to do it right. The best materials, the best hardcover binding, book dimensions usually reserved for a ping pong table. Oh, and a dust jacket, of course. At one point, I’m sure I even entertained the idea of gold foil on the cover with blinking lights, or some such nonsense. Thankfully, the realities of book printing held me in check.

In my mind, this premium, sewn-bound, 9.5x9.5 hardcover book was the minimum product. I assumed that people would be more likely to back the campaign if they were getting something that could be showcased. That might have been true for some. For the most part, however, I projected my own excitement and gilded assumptions, and it hampered the campaign.

What should have been my minimum viable product? Something much cheaper to print, for starters. An 8x8 softcover book would have cut the print cost by at least 35%. This would have allowed me to set my campaign goal much lower, and have a much lower-priced reward tier where a backer would still get a physical copy.

I could have still offered the premium upgrades, but as stretch goals. If the campaign reached a certain amount, the softcover would be upgraded to a hardcover. Another could have added the dust jackets. I suspect I would have had a higher total, with more backers, and would have still been able to deliver a premium hardcover at the end.

But even if not, more people would have had my book in their hands, even if it was a more mundane softcover version. Instead, I outpriced my market a bit, and discouraged impulse backers.


Make sure your Minimum Viable Product is actually your Minimum Viable Product. Are the features you think of as “minimal” actually required for people to extract value from your product or service? Be brutally honest over what are “nice-to-haves.” Otherwise, you might spend time and money chasing the wrong things, limiting both future growth and agility. Extra features and luxuries can be added later, once there is more demand.


Successful projects are hard to bring about. They don’t happen by accident, though you may get more luck than you deserve, like I did. Any one of the above mistakes has the potential to wreck a project, to leave the ruins scattered into the well-populated landfill of failed intentions. I hope some of my experiences can help you avoid some of these mistakes in the future.

Have you made a project management mistake that resulted in a valuable lesson? Let us know!

Rebuilding POP in D8

Outside my Drupal and Lullabot life, I help my girlfriend Nicole run a non-profit called Pinball Outreach Project, POP for short. POP brings pinball to kids by taking games to children’s hospitals, as well as working with organizations like Children’s Cancer Association, Girls Make Games, and Big Brothers Big Sisters to bring pinball to their events or host kids at our pinball location here in Portland.

The POP website was built in WordPress and has served us well for three years, but as we have become more active and our needs have expanded, it has started to show its age. There are several parts of the site that are difficult to update because they are hardcoded in templates, it relies on some paid components to keep running, and the theme hides a lot of basic functionality that would be nice to have access to. On the plus side, it is a simple, clean design that is fully responsive, so navigating the site is pretty easy.

We wanted to build a new site that kept the design elements we liked, but expanded the functionality to make the site easier to update and more flexible to allow for future growth and needs. Given my expertise, Drupal seemed like the obvious choice, and given where we're at in the release cycle I thought, "Why not do it in Drupal 8?" It would give me some real hands-on experience, and hopefully we'll end up with a modern tool that can help us grow into the future.

Thus began our odyssey, a new website for a small non-profit in Drupal 8. In the coming weeks I will outline this process from the standpoint of a Drupal 7 expert experiencing a real site build with D8 for the first time. While it is true that I have more D8 experience than many due to my role as lead of the Configuration Management Initiative, that experience is a couple years old and almost entirely involved backend code and architecture. The site building and theming changes are completely new to me, and in many cases I don't know what I don't know. In this way, I feel I am like many of you reading this who are also about to embark on this journey. Let’s discover the new stuff in Drupal 8 together, and we can all learn something along the way.

About the new site

Before we get into the build, let’s look into what we're building. The POP site has several high priority functions that we need to address:

  • Provide information about the organization and its mission.
  • Publicize upcoming events, as well as wrapup information and photos about events that have already happened.
  • Provide news about other POP happenings.
  • Provide information related to POP HQ, our location here in Portland (hours, league, party rental, etc.)
  • Allow users to get information about our current needs and donate.
  • Enable users to interact with us through social media and our newsletter.

For better or worse, we don't have a ton of design or UX resources at hand, so our goal is to shoot for a simple information architecture and wrap a super clean theme around it. We're going to start with some very basic wireframes, prototype the site, then head into the design/theming phase. Now, I know what you're thinking: if a client came to me with this plan, I would scoff and laugh too. Nevertheless, remember that most clients are also extremely particular about their design, UX, and branding. From our perspective, we are far more interested in getting the functionality we need in place and the information we have up front to our site users. Beyond that, a theme that is reasonably equivalent to what we have now is perfectly fine. We are going in knowing exactly what our limitations and expertise are, and we are willing to work within those constraints.

The one thing we want from a new design, that we don't have now, is to put our imagery much more front and center. Pinball is a hobby that has some fantastic visuals, and our events pair that up with kids having fun which makes it all the more appealing. We should find a way to put that in front of people as much as possible.

Beyond that, I picture a pretty straightforward site with about a half dozen of your usual content types (page, article, event, promo block, maybe images and galleries.) The site will be simple enough that I was figuring we could build it with Context to manage block placement and an assortment of Views. Add on a few custom blocks and I figured we would be most of the way there. So let’s see how that played out from day one.

Getting started

Getting Drupal 8 installed was not much different than it was in D7, so I won't spend a ton of time dwelling on that (although it was really nice that I could let the installer create my new database for me, thanks Angie!) If you'd like to read more about installation, I recommend Karen Stevenson's recent article on using Composer to install D8.

The first thing I wanted to check once I got Drupal installed was the state of a few contributed modules, and first on that list was Context, as I was planning on having it serve as the main organizational paradigm for the site. I was pretty surprised to see that a D8 port of Context had not even been started, much less in any usable state. I mentioned this on IRC, and someone said that perhaps the core block enhancements in D8 might meet my needs. Say what? Blocks are actually useful now? Hard to believe, but it is true! There are two main enhancements to blocks in D8 that make them super-powered over D7.


Let’s say you have a block, and you want to place it in the sidebar in some contexts, and in the triptych in others. A huge problem in past versions of Drupal is that blocks can only be placed in one region per theme. If you want it in two different regions based on different visibility rules, you have to have two blocks. This limitation alone was a huge driver towards Context for many sites. In D8 blocks can be placed as many times as needed! This addition is a huge step up for blocks.

Drupal blocks being displayed in multiple places on a single page.

Blocks are entities

Blocks are full-blown entities in D8. This means they have a ton of functionality they never had before.

Just like with content types, blocks also now have custom types. This means you can set up unique blocks with their own sets of fields, and use them for their own specific purposes. We can have the promo block type described above, but then also have one we use like a nodequeue, and then one that is just a big HTML block which we can use to embed something like a Mailchimp signup form or other non-fielded content. 

Drupal 8 block type administration screen.

Blocks also now have fields. On the surface this can sound like a case of over-engineering something that is supposed to be simple, but this actually has enormous utility. For instance, one of the things we often find ourselves doing on modern sites is creating a "Promo" content type which is not much more than a title, text, a photo and a link promoting another piece of content. 

We can now do that with core blocks by creating a Promo block type, and creating placement rules around where it appears. Since we also have instancing, we could have it in the sidebar on some content, in the content area on other content, and in the footer on the home page.

Drupal blocks being displayed in multiple places on a single page.

Fields on blocks can also allow us to replicate some simple nodequeue-like behaviors. Create a block with a multi-instance entity reference field and a title. You can now choose a set of entities for the block to display, and place that block using the normal core block placement rules. This could be expanded on in a lot of different ways, the combination of block placement with fields provides a ton of flexibility which we never had before.

Finally, having fields means that you can have blocks with efficiently “chunked” data, rather than just a big text field you dump HTML into. This has enormous implications for accessibility and for building more robust, device-independent layouts.

Block Visibility Groups

While it’s not a part of core, the Block Visibility Groups module can help expand and organize the core blocks functionality even further. Block Visibility Groups adds the concept of a group of blocks, which basically act as if they were one block. Say you need three blocks in the sidebar, but all of them appear at the same path, and only to authenticated users. You can add them all to the visibility group, and then apply the rules to the group. Down the road, if those rules change, you can change them in one place. It also organizes the blocks page better, limiting the number of individual entries you have to deal with.

Administration screen for the Block Visibility Group module.

Conclusion

Fields and instancing, when combined with the existing block visibility rules and contrib modules like Block Visibility Groups, make blocks in D8 a killer feature. Based on my pretty basic needs, I no longer need to worry about whether or not Context is ported, because I'm pretty sure I can do everything I want with core. On top of all that, all my config for these blocks is now deployable through the Configuration Management System, which will be the topic of our next article. 

Special thanks go out to larowlan, timplunkett, and eclipsegc for pushing through the major patches that made blocks become super useful in Drupal 8!

A Front-end JavaScript Framework in Drupal Core?

Matt & Mike talk with Acquia's Preston So, Chapter Three's Drew Bolles, and Lullabot's own Sally Young on the possibility of a front-end JavaScript framework in Drupal core, and what that would look like.

The Genesis Of Lullabot

On January 1st, 2006, Matt Westgate and I announced our new company by launching a website at That was 10 years ago.

Matt and I got to know each other very slowly over the course of 2005. I was building my first Drupal site and we met through Actually, my very first post on was an interaction with Matt. On February 14th of that year, I'd found a bug in Matt's then-popular eCommerce module and I'd posted an issue trying to track down the error I was getting. Matt's response, our first interaction, posted one week later, was "This has been fixed. Thanks."

My first Drupal project used many of Matt's Drupal modules. It was an overwhelming and awful project. My wife (a designer) and I had vastly underestimated and underbid the complex web site which would be my first experience with Drupal. At the time, the Drupal community had big dreams, but the project was still in its early years. I found that much of the functionality that I needed for my project wasn't really mature yet. Most of Drupal's documentation existed only as comments in the code itself. There were no books about Drupal. There were no podcasts. There were no conferences or workshops. There were no companies that I could go to for practical help or advice. I thought my first Drupal project would take me 2 or 3 months. It ended up taking almost a year.

I posted desperately in the Drupal IRC channels and on the forums. Many of my questions were very basic. These questions posted in the #drupal IRC were simply met with links to or I was being told to go "read the fucking manual." It was painful. I felt ashamed. I felt unqualified, lost, and hopeless.

I decided to reach out to the friendly guy whose modules were being used all throughout my project. He had committed a bunch of my code to his modules and he always seemed upbeat and grateful about it. Even in that first interaction, he was upbeat: "This has been fixed." as well as thankful: "Thanks." He'd also amicably answered a few of my forum questions and he obviously knew his stuff. I sent Matt an IRC message and offered to pay him to get on the phone with me and answer my Drupal questions. He was a bit taken aback. In 2005, few of Drupal's core developers were getting paid for their Drupal work. I think we settled on $40/hr and sometime early in the summer, I spoke to Matt on the phone for the first time. He lived in Ames, Iowa. I lived near Providence, RI. We became internet buddies, having long conversations over AOL Instant Messenger about Drupal and life. He was friendly and knowledgeable. He was also curious and inquisitive with a positive attitude. There weren't many things that seemed to daunt him. My state of hopelessness moved to one of hope and gratitude.

Prior to 2005, my career success was in the music industry. It was amazing to see so many talented developers working together and contributing such amazing software… for free! In the same way that I was inspired by other musicians' music, I was inspired by Drupal's developers' code. Although I was overwhelmed, I was also inspired. I couldn't understand why there weren't crowds of people cheering when they released a new module. I was so thankful to Matt for his help and generosity. I told him, "You've got an amazing talent. I'm going to make this up to you one day. We're going to do something amazing together. I promise you." I imagined Matt rolling his eyes, but I was intent on following through on my promise.

In an effort to finish this difficult project, I'd completely immersed myself in Drupal and the Drupal community with a tremendous amount of gratitude and hope. I contributed many Drupal modules over the next few years. My overwhelming first Drupal project began to wind down around October and I started telling Matt that we should start a company together. Drupal was so powerful, but no one was out there acting as an advocate, providing practical information or consulting services to help companies harness its power.

Around the same time, friends at Adaptive Path in San Francisco offered me a project for an up and coming film production company called Participant Productions (now Participant Media). They were about to release a new film called "An Inconvenient Truth" and they wanted to create a community action website where they could inspire visitors to take action on various social causes. Adaptive Path thought Drupal might be a good match and they thought of me. I said I'd do it, but only if we could get Matt to help.

I first met Matt Westgate in-person at the San Francisco airport. It was awkward. We'd been talking over the web and on the phone for almost 6 months and it was really weird to be together, in person. Also, he had flown to San Francisco in November with no clothing heavier than a t-shirt. We stopped at Old Navy on Market Street and bought Matt a sweatshirt. We sat in the Adaptive Path offices and coded up most of the site in about a week. We had flow. Adaptive Path was one of the most prestigious web consulting firms at the time and several of their founders, upon seeing our accomplishment, said we really should start a company doing Drupal. That was all the encouragement we needed.

Matt finally decided to leave the security of his university job some time in December. We spent a lot of time going back and forth trying to name our new company. With Matt's background as a Zen meditator and my background as a musician, we liked the combination of the calm music of "lullaby" with "robot", because we were doing technical work and also, robots are just cool. It also combined the organic and inorganic in a nice way, reminding us that it's important to remember the squishy stuff even while we're working with ones and zeroes.

In our first blog post, I wrote: "It is often said that open source software is 'made by developers for developers'. We hope to break down some of those walls and provide an entry for less technical users. We hope to break through the cacophony, and provide clear and understandable information and guidance."

We created Lullabot because we didn't want others to have the frustrating first experience that I had with Drupal. We wanted to make Drupal more accessible to a wider group of web developers. Over our first year as a company, we started the first Drupal podcast, we taught the first Drupal workshops, and Matt co-wrote the first Drupal book, Pro Drupal Development, which became required reading for aspiring Drupal developers. We also built some of the first household-name Drupal sites and helped to move Drupal toward the tipping point of acceptance and adoption.

While we started Lullabot with a lot of energy, hope, and competitive spirit, it's pretty safe to say that the company has been more successful than either of us had dared hope in our wildest dreams. We've been blessed to work on some truly amazing projects with some amazing clients. We've hired and worked with the best talent in the business. We've tried to infuse Lullabot with the sense of gratitude, hope, and excitement that we had back in 2006. We've built a stellar reputation for doing superlative work and we're constantly astounded by our opportunities and success.

In part 2, I'll talk about some of those successes and growing the company over the past 10 years.

Lullabot Celebrates 10 Years Today

On January 1st, 2006, Matt Westgate and I announced our new company by launching the Lullabot website. That was 10 years ago today.

Over the past 10 years, we've worked with a long list of amazing clients on a long list of amazing projects. We've run workshops, and we've recorded podcasts and training videos. We've run conferences. We've written books. We've written and contributed a lot of Drupal code, modules, and themes. We printed up a ton of Lullabot t-shirts and handed out temporary tattoos (which people seem to promptly affix to their children). We've had legendary parties. We've met lots of great people. We've trained thousands of Drupal developers. We launched Drupalize.Me and grew it into its own company. We've delved deeply into decoupled Drupal sites and built expertise in React, Node.js, and mobile and connected-television development. We received the highest rating on Clutch's worldwide list of development companies. We created a video delivery platform called Videola (which kind of flopped… but we learned a lot!). We created a cool business-card app called Shoot. More recently, we've launched a new product for website testing called Tugboat.

Most importantly: we've built an amazing team of talented developers, designers, project managers, management, and admin staff. We've grown to over 60 people and we still feel like we've got much of the culture, participation, and collaborative spirit that marked our early years. Our team members are dedicated, happy, and hardworking, and they encourage each other to grow and become the best of themselves. I am so grateful to these people for helping us build Lullabot. We feel a tremendous responsibility to our team and we are dedicated to being as good to them as they've been to us.

Over the next few weeks, we'll be posting a series of articles talking about Lullabot's conception and growth over the past 10 years, the technological changes that we've seen, changes in Drupal and the web/mobile, and what it's been like to work as a distributed company.

We'll keep this post short today though, because we know you're probably tired and/or hung-over from ringing in the new year. But check back on Monday as The Lullabot Anniversary Series begins.

Drupal 8 Theming--Twig and Responsive Images

Matt & Mike talk about the new things a Drupal themer will find in Drupal 8, including Twig templates and responsive images. They're joined by Lullabot front-end developers Marc Drummond and Wes Ruvalcaba.

The New Drupal 8 Configuration System

Matt & Mike talk to Alex Pott, Matthew Tift, and Greg Dunlap about all things Drupal Configuration Management and their experiences as owners of Drupal 8's Configuration Management Initiative known as CMI.

The Drupal 8 Developer Experience

Matt & Mike talk to Lullabot's Director of Technology Karen Stevenson and Senior Architect Andrew Berry about their experiences using Drupal 8 in client work.

Suggestions for Avoiding the Workday Motivation Melt Down

Because Lullabot is a distributed company, I interact and communicate with my co-workers a bit differently than some in traditional, physical offices may. We kick-off projects with on-sites, in-person workshops and the like, but for much of the life of a project, I’m not physically in the same room as the rest of the team. We communicate and collaborate a lot on the phone, in Google Hangouts, and in Slack. While working with a distributed team can, at times, really help with productivity and give a great sense of autonomy, there are also times when you feel unmotivated and you can't rely on the energy of the people in the room with you to get you going. After a year and a half, I can finally say I’ve found myself in a good rhythm and feel like I’ve learned how to work efficiently and effectively in a way that keeps me motivated and focused throughout the day.  Below are just a few ideas and tips that have helped me maximize my productivity, creativity and enjoyment of my work.

Personalize your routine

Don’t be afraid to experiment and find a daily routine that fits you best. You can lean into your natural rhythms and have the flexibility to work during the time of day you’re most creative and productive. It’s okay to work outside of the 9-5 time box. Just make sure you’re still able to make all the necessary meetings and are in touch with your team. At Lullabot, we use Slack to stay connected with team members and projects when working odd hours.

Avoid multitasking

It can be tempting to multitask, especially if you're working from home, but washing the dishes while trying to run a client call is just a bad idea. Multitasking involves context switching, which quickly depletes energy and can lead to exhaustion. You can actually get more done by focusing on one task at a time. One of the great things about working for Lullabot is that we're usually assigned to a single project for a duration of time. Because of that narrow focus, I've noticed that I often produce better quality work within a shorter amount of time. Creating a task checklist can also help you avoid distractions and multitasking and keep you focused throughout the day. I keep Evernote open throughout the day to capture and search notes, so it makes sense for me to also use it as a tool for creating task lists; I've also heard great things about Wunderlist and Todoist.

Create a dedicated space for work time

The boundaries of work and personal time can be very easily blurred when working from home. Creating a dedicated space for work allows you to mentally shift from work time to personal time when the work day is complete. Don’t have an entire room to dedicate to an office? A small area in the corner of a bedroom or dining room will do. Using a notification system such as a post-it-note on the door or a do not disturb sign can let family members or significant others know when you can or can’t be interrupted. When your work day is done, performing routine activities such as making dinner or going for an end of the day walk can help you mentally wind down the work day.

Complete tasks away from the computer

It’s good to keep in mind that the computer is only one of many tools at your disposal, and that you should work in other mediums whenever possible. Stepping away from the computer and using a different tool to complete a task can be mentally refreshing, encourage exploration, and help reset your energy for a new task. As a design team, we often take this approach and sketch out ideas on paper before working on the computer.

Take breaks

Taking small breaks throughout the work day can help keep you motivated and focused. Short walks or doing small tasks like emptying the dishwasher can help clear your mind when you’re working on a tough problem. Shifting your focus to simple tasks during breaks can help reset your mind and inspire new solutions. It’s what designers like Cameron Moll refer to as “creative pause.” Sometimes taking a break can slip your mind when you’re secluded and are in the zone. Setting an alert that goes off during certain parts of the day can help remind you to get up, stretch and walk away for a bit.

Switch up your routine

Routines are great, but too much repetition can be boring and reduce your motivation. If you can’t seem to focus on a task, don’t be afraid to change things up. It can be something as small as removing yourself from your home office and working at a coffee shop, or moving to a standing desk for part of your day.

Stay connected

It’s important that you feel connected to your team and the work that you do. Feeling isolated can interfere with your motivation and focus, and the lack of personal connection can make you feel less accountable when working on a team. If you’re feeling disconnected, don’t be afraid to reach out to coworkers for a quick non-work-related chat. At Lullabot, several co-workers join a morning coffee or afternoon lunch Hangout. You can also reach out into the community and join local meet-ups if you’re itching to talk shop in person with someone.

Hopefully, by experimenting with a couple of these suggestions, you can more consistently maintain your motivation and focus and have productive, rewarding workdays. Have other suggestions to add to this list? I’d love to hear what helps you stay motivated throughout your workday.

The Lullabot Podcast is back! Drupal 8! The Past! The Future!

Drupal 8 is here! The Lullabot Podcast is back! It's an exciting time to be alive. We talk about where we've been, before we look ahead to see where we're going. Oh, and we want to hear from you. Leave us listener feedback at 1-877-LULLABOT x789.

Creating a Custom Filter in Drupal 8

Do you want to make it easier for editors to insert a block of HTML by just including a short token? Maybe you want to add some custom Javascript or CSS, but only for content that contains a certain pattern. Or, maybe you want to filter out certain words that site visitors will find offensive.

In this article, we will create a custom filter for Drupal 8, one that replaces a pattern and adds the required CSS to the page. We’ll also add an option for the filter that users can toggle.

What are Drupal Filters and Text Formats?

Drupal allows you to create text formats, which are groupings of filters. Filters can serve several purposes, but one of the main use cases is to limit what HTML can be placed into content. This helps keep the site secure and prevents editors from potentially breaking the layout.

One of the default text formats is “Basic HTML.” When you configure this format by going to admin/config/content/formats/manage/basic_html and scrolling down a bit, you can see all of the enabled filters for it.

Each filter can have optional settings. For example, you can view the options form for the “Limit allowed HTML tags” filter by scrolling down a bit more.

How do you create your own filters to enable on text formats, and how do you make them configurable?

Frame out the Module

First we create a ‘celebrate’ module folder and then our file.

name: Celebrate
description: Custom filter to replace a celebrate token.
type: module
package: custom
core: 8.x

A custom filter is a type of plugin, so we will need to create the proper folder structure to adhere to PSR-4 standards. Our folder structure will be celebrate/src/Plugin/Filter.

In the Filter folder, create a file named FilterCelebrate.php. Add the proper namespace for our file and pull in the FilterBase class so we can extend it.

Our file looks like this so far:

namespace Drupal\celebrate\Plugin\Filter;

use Drupal\filter\Plugin\FilterBase;

class FilterCelebrate extends FilterBase {
}

FilterBase is an abstract class that implements the FilterInterface, taking care of most of the mundane setup and configuration. The only function we are required to implement in our own filter class is process(). According to the FilterInterface::process documentation, the function must return the filtered text, wrapped in a FilterProcessResult object. This means we need to put another use statement in our file.

This may seem onerous. Why can’t we just return the text itself? Why do we need another object? There are good reasons, and they will become clear later when we add some more advanced behavior. For now, to keep our IDE or code editor from yelling at us, we’ll put in a passthrough implementation as a placeholder, one that does no transformations on the text.

Here is our updated code:

namespace Drupal\celebrate\Plugin\Filter;

use Drupal\filter\FilterProcessResult;
use Drupal\filter\Plugin\FilterBase;

class FilterCelebrate extends FilterBase {

  public function process($text, $langcode) {
    return new FilterProcessResult($text);
  }

}

Get Drupal to Discover our Filter

Since a filter type is a plugin, Drupal needs us to add an annotation to our class so it knows exactly what it needs to do with our code. Annotations are comments placed before the class definition, arranged in a certain format.

The file, with our annotation, will look like this:

namespace Drupal\celebrate\Plugin\Filter;

use Drupal\filter\FilterProcessResult;
use Drupal\filter\Plugin\FilterBase;

/**
 * @Filter(
 *   id = "filter_celebrate",
 *   title = @Translation("Celebrate Filter"),
 *   description = @Translation("Help this text format celebrate good times!"),
 *   type = Drupal\filter\Plugin\FilterInterface::TYPE_MARKUP_LANGUAGE,
 * )
 */
class FilterCelebrate extends FilterBase {

  public function process($text, $langcode) {
    return new FilterProcessResult($text);
  }

}

Every plugin declaration needs an id, so we give it a reasonable one. Title and description are what will be shown on the admin screens. After we enable the module, you should see something like this on the screen for a text format:

The “type” in the annotation needs a bit more of an explanation. This is a classification for the purpose of the filter, and there are a few constants that help us populate the property. From the documentation, we have the following options:

  • FilterInterface::TYPE_HTML_RESTRICTOR: HTML tag and attribute restricting filters.
  • FilterInterface::TYPE_MARKUP_LANGUAGE: Non-HTML markup language filters that generate HTML.
  • FilterInterface::TYPE_TRANSFORM_IRREVERSIBLE: Irreversible transformation filters.
  • FilterInterface::TYPE_TRANSFORM_REVERSIBLE: Reversible transformation filters.

For our purposes, we plan on taking a bit of non-HTML markup and turning it into HTML, so the second classification fits.

There are a few more optional properties for Filter annotations, and they can be found in the FilterInterface documentation.
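As a sketch of what those optional properties look like in practice (the weight and default-settings values below are illustrative, not part of the original module), an annotation might also declare a weight to influence default filter ordering and seed $this->settings with defaults:

```php
/**
 * @Filter(
 *   id = "filter_celebrate",
 *   title = @Translation("Celebrate Filter"),
 *   type = Drupal\filter\Plugin\FilterInterface::TYPE_MARKUP_LANGUAGE,
 *   weight = -10,
 *   settings = {
 *     "celebrate_invitation" = FALSE
 *   }
 * )
 */
```

The settings key gives the filter sensible defaults before an administrator has ever saved the text format's configuration form.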

Adding Basic Text Processing

For this filter, we want to replace every instance of the token “[celebrate]” with the HTML snippet '<span class="celebrate-filter">Good Times!</span>'. To do that, we add some code to our FilterCelebrate::process function.

public function process($text, $langcode) {
  $replace = '<span class="celebrate-filter">' . $this->t('Good Times!') . '</span>';
  $new_text = str_replace('[celebrate]', $replace, $text);
  return new FilterProcessResult($new_text);
}

Enable the Celebrate filter on the Basic HTML text format, and create some test content that contains the [celebrate] token. You should see it replaced by the HTML snippet defined above. If not, check to make sure the field uses the Basic HTML text format.

Adding a Settings Form for the Filter

Now we want the user to be able to toggle an option for this filter. To do that, we need to define a settings form by overriding the settingsForm() method in our class.

We add the following code to our class to define a form array for our filter:

public function settingsForm(array $form, FormStateInterface $form_state) {
  $form['celebrate_invitation'] = array(
    '#type' => 'checkbox',
    '#title' => $this->t('Show Invitation?'),
    '#default_value' => $this->settings['celebrate_invitation'],
    '#description' => $this->t('Display a short invitation after the default text.'),
  );
  return $form;
}

For more details on using the Form API to define a form array, check out the Form API documentation. If you have created or altered forms in Drupal 7, the convention should be familiar.

If we reload our text format admin page after adding this function, we’ll get an error:

Fatal error: Declaration of Drupal\celebrate\Plugin\Filter\FilterCelebrate::settingsForm() must be compatible with Drupal\filter\Plugin\FilterInterface::settingsForm(array $form, Drupal\Core\Form\FormStateInterface $form_state)

Essentially, PHP doesn’t know what FormStateInterface is, as referenced in our settingsForm() method signature. We need to either use the fully qualified namespace in the method definition, or add another use statement. For our example, we’ll add another use statement to the top of our FilterCelebrate.php file.

use Drupal\Core\Form\FormStateInterface;

Now we can see our settings form in action.

To get access to these settings in our class, we can call $this->settings['celebrate_invitation'].

Our process method now looks like this:

public function process($text, $langcode) {
  $invitation = $this->settings['celebrate_invitation'] ? ' Come on!' : '';
  $replace = '<span class="celebrate-filter">' . $this->t('Good Times!') . $invitation . '</span>';
  $new_text = str_replace('[celebrate]', $replace, $text);
  return new FilterProcessResult($new_text);
}

Now, if the “Show Invitation?” setting is checked, the text “Come on!” is added to the end of the replacement text.
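Stripped of the Drupal plumbing, the transformation itself is plain string work. Here is a standalone sketch, with no Drupal APIs: the hypothetical celebrate_process() helper and its $settings array stand in for the plugin and $this->settings (and the translation call is omitted):

```php
<?php

// Stand-in for FilterCelebrate::process(); $settings plays the role
// of the plugin's $this->settings. No Drupal APIs involved.
function celebrate_process(string $text, array $settings): string {
  // The optional invitation toggled by the settings form.
  $invitation = !empty($settings['celebrate_invitation']) ? ' Come on!' : '';
  // Replace every [celebrate] token with the HTML snippet.
  $replace = '<span class="celebrate-filter">Good Times!' . $invitation . '</span>';
  return str_replace('[celebrate]', $replace, $text);
}

echo celebrate_process('Party time! [celebrate]', ['celebrate_invitation' => TRUE]);
// Prints: Party time! <span class="celebrate-filter">Good Times! Come on!</span>
```

Keeping the logic this small is also what makes the real plugin easy to unit test.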

Adding CSS to the Page When the Filter is Applied

But now we want to add a shaking CSS animation to the replacement text on hover, because we want to celebrate like it's 1999. The CSS should only be loaded when the filter is being used. This is where the additional properties of the FilterProcessResult object come into play.

First, we’ll create a CSS file in the root of our module folder called “celebrate.theme.css”. The following CSS is everything we need to enable a shaking effect on hover:

.celebrate-filter {
  background-color: #000066;
  padding: 10px 5px;
  color: #fff;
}

.celebrate-filter:hover {
  animation: shake .3s ease-in-out infinite;
  background-color: #ff0000;
}

@keyframes shake {
  0% { transform: translateX(0); }
  20% { transform: translateX(-6px); }
  40% { transform: translateX(6px); }
  60% { transform: translateX(-6px); }
  80% { transform: translateX(6px); }
  100% { transform: translateX(0); }
}

In order to attach our CSS file to the FilterProcessResult, it needs to be declared as a library. Create another file in the module root called “celebrate.libraries.yml” with the following text:

celebrate-shake:
  version: 1.x
  css:
    theme:
      celebrate.theme.css: {}

This defines a library called “celebrate-shake” that includes a CSS file. Multiple CSS and/or Javascript files can be included in a single library. For more details, see the documentation on defining a library.
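For instance, a hypothetical library that bundles both CSS and JavaScript, plus a dependency on core's jQuery, could be declared like this (the file names here are made up for illustration):

```yaml
celebrate-extras:
  version: 1.x
  css:
    theme:
      css/celebrate.extras.css: {}
  js:
    js/celebrate.extras.js: {}
  dependencies:
    - core/jquery
```

Declaring dependencies this way lets Drupal load jQuery (and anything else the library needs) in the correct order, and only on pages where the library is attached.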

Now that we have defined a library, we can add it to the page whenever our filter is being applied. We use the setAttachments() method of the FilterProcessResult object to add our library so our process function will look like this:

public function process($text, $langcode) {
  $invitation = $this->settings['celebrate_invitation'] ? ' Come on!' : '';
  $replace = '<span class="celebrate-filter">' . $this->t('Good Times!') . $invitation . '</span>';
  $new_text = str_replace('[celebrate]', $replace, $text);
  $result = new FilterProcessResult($new_text);
  $result->setAttachments(array(
    'library' => array('celebrate/celebrate-shake'),
  ));
  return $result;
}

You will notice that we use the identifier “celebrate/celebrate-shake” to refer to our new library. The first half of the identifier is our module name and the second half is the library name itself. This is to help prevent name conflicts and collisions.

And as an added bonus, other modules will also be able to use our celebrate library.

You can download the full Celebrate Filter module here.

Remember, since filters can be mixed and stacked on top of each other, a good filter will do one thing really well. Keep the use case compact and discrete. If you find your filter code getting long, full of exceptions and workarounds, with more and more options creeping into the settings form, it might be a good idea to step back and see if it makes sense to create more than one custom filter.

Configuration Management in Drupal 8: The Key Concepts

While the new configuration system in Drupal 8 strives to make the process of exporting and importing site configuration feel almost effortless, immensely complex logic facilitates this process. Over the past five years, the entire configuration system code was written and rewritten multiple times, and we think we got much of it right in its present form. As a result of this work, it is now possible to store configuration data in a consistent manner and to manage changes to configuration. Although we made every attempt to document how and why decisions were made – and to always update issue queues, documentation, and change notices – it is not reasonable to expect everyone to read all of this material. But I did, and in this post I try to distill years of thinking, discussions, issue summaries, code sprints, and code to ease your transition to Drupal 8.

In this article I highlight nine concepts that are key to understanding the configuration system. This article is light on details and heavy on links to additional resources.

  1. It is called the “configuration system.” The Configuration Management Initiative (CMI) is, by most reasonable measures, feature complete. The number of CMI critical issues was reduced to zero back in the spring and the #drupal-cmi IRC channel has been very quiet over the past few months. Drupal now has both a content management system and a configuration management system, but only the former should be called a “CMS.” While it is tempting to treat “CMI” as an orphaned initialism, like AARP or SAT, we want to avoid confusion. Our preferred phrase to describe the result of CMI is “configuration system.” This is the phrase we use in the issue queue and the configuration system documentation.

  2. DEV ➞ PROD. Perhaps the most important concept to understand is that the configuration system is designed to optimize the process of moving configuration between instances of the same site. It is not intended for moving configuration from one arbitrary site to another. In order to move configuration data, the site and the import files must have matching values for UUID in the configuration item. In other words, additional environments should initially be set up as clones of the site. We did not, for instance, set out to facilitate exporting configuration from one site and importing it into an entirely unrelated one.
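To make that UUID requirement concrete, here is a sketch of an exported file; the uuid value (hypothetical here) must match the target site's, or the import will be refused:

```yaml
uuid: 7f1d6a2e-4c1b-4b6e-9a2d-0c5f3e8a1b2d
name: 'My Site'
page:
  front: /node
```

This is why cloning the site (database and all) is the supported way to create additional environments: the clone inherits the same site UUID.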

  3. The configuration system is highly configurable. Out of the box the configuration system stores configuration data in the database. However, it allows websites to easily switch to file-based storage, MongoDB, Redis, or another favorite key/value store. In fact, there is a growing ecosystem of modules related to the configuration system, such as Configuration Update, Configuration Tools, Configuration Synchronizer, and Configuration Development.

  4. There is no “recommended” workflow. The configuration system is quite flexible and we can imagine multiple workflows. On one end of the spectrum, we expect some small sites will not ever use the configuration manager module to import and export configuration. For the sites that utilize the full capabilities of the configuration system, one key question they will need to answer regards the role that site administrators will play in managing configuration. I suspect many sites will disable configuration forms on their production sites – perhaps using modules like Configuration Read-Only Mode – and make all configuration changes in their version control system.

  5. Sites, not modules, own configuration. When a module is installed, and the configuration system imports the configuration data from the module’s config/install directory (and perhaps the config/optional directory), the configuration system assumes the site owner is now in control of the configuration data. This is a contentious point to some developers because module maintainers will need to use update hooks rather than making simple changes to their configuration. Changing the files in a module’s config/install directory after the module has been installed will have no effect on the site.

  6. Developers will still use Features. The Features module in Drupal 8 changes how the configuration system works to allow modules to control their configuration. Mike Potter, Nedjo Rogers, and others have been making Features in Drupal 8 do the kinds of things Features was originally intended to do, which is to bundle functionality, such as a “photo gallery feature.” The configuration system makes the work of the Features module maintainers exponentially easier and as a result, we all expect using Features to be more enjoyable in Drupal 8 than it was in Drupal 7.

  7. There are two kinds of configuration in Drupal 8: simple configuration and configuration entities. Simple configuration stores basic values, such as booleans, integers, or text. Simple configuration has exactly one copy or version, and is somewhat similar to using variable_get() and variable_set() in Drupal 7. Configuration entities enable the creation of zero or more items and are far more complex. Examples of configuration entities include views, content types, and image styles.
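The difference shows up directly in the API. A brief sketch, which assumes a bootstrapped Drupal 8 site with core's system and image modules (it is not a standalone script):

```php
// Simple configuration: a single, named bag of values.
$site_name = \Drupal::config('')->get('name');

// Configuration entity: zero or more items, each loaded by ID.
$style = \Drupal\image\Entity\ImageStyle::load('thumbnail');
$derivative_url = $style->buildUrl('public://photo.jpg');
```

Simple configuration is addressed by name and key; configuration entities behave like any other entity, with IDs, CRUD operations, and their own classes.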

  8. Multilingual needs drove many decisions. Many of the features of the configuration system exist to support multilingual sites. For example, the primary reason schema files were introduced was for multilingual support. And many of the benefits to enabling multilingual functionality resulted in enhancements that had much wider benefits. The multilingual initiative was perhaps the best organized and documented Drupal 8 initiative and their initiative website contains extensive information and documentation.

  9. Configuration can still be overridden in settings.php. The $config variable in the settings.php file provides a mechanism for overriding configuration data. This is called the configuration override system. Overrides in settings.php take precedence over values provided by modules. This is a good method for storing sensitive data that should not be stored in the database. Note, however, that the values in the active configuration – not the values from settings.php – are displayed on configuration forms. Of course, this behavior can be modified to match expected workflows. For example, some site administrators will want the configuration forms to indicate when form values are overridden in settings.php.
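Concretely, an override in settings.php is just an entry in the $config array. The configuration name and key below are core's real site-name settings; the value is illustrative:

```php
// settings.php: override the site name for this environment only.
// Takes precedence over active configuration, but configuration
// forms will still display the active (non-overridden) value.
$config['']['name'] = 'Production Site';
```

The same pattern works for API keys and other sensitive values that belong in the environment rather than in the exported configuration.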

If you want more information about the configuration system, the best place to start is the Configuration API page on It contains numerous links to additional documentation. Additionally, Alex Pott, my fellow configuration system co-maintainer, wrote a series of blog posts concerning the “Principles of Configuration Management” that I enthusiastically recommend.

I hope you will agree that the configuration system is one of the more exciting features of Drupal 8.

This article benefitted from helpful conversations and reviews by Tim Plunkett, Jennifer Hodgdon, Andrew Berry, and Juampy NR.

Five-Fifteens: A Simple Way to Keep Information Flowing Across Teams

Five minutes to read, fifteen minutes to write.  

A five-fifteen is a communication tool that makes the task of reporting upwards a quick, painless, and easy thing to do. The basic idea is to sit down and write the answers to just enough questions that it would only take you fifteen minutes to write and someone else five minutes to read.

This is a process we encourage everyone to do at Lullabot. It gives your direct report or entire team – depending on who you want to send it to – an idea of how your week is going and the challenges that you may be facing so that they have a chance to empathize and try to help in any way they can. It gives you a chance to speak your mind about your work, your life, and to give your direct report the all-important feedback that they need to hear in order to better help you in your life and in your work.

We’ve written a tool to help with the creation of these letters. You can use it by following the few short steps that will help you open that line of communication. It uses your default mail client and runs entirely client side, so no record of the emails is stored in our system. It’s a completely open source tool, so feel free to send us pull requests or fork it for your own purposes on GitHub.

What’s your favorite way to communicate valuable information to your direct report or team?

Goodbye Drush Make, Hello Composer!

I’ve built and rebuilt many demo Drupal 8 sites while trying out new D8 modules and themes and experimenting with new functionality like migrations. After installing D8 manually from scratch so many times, I decided to sit down and figure out how to build a Drupal site using Composer to make it easier. The process is actually very handy, much like the way we’ve used Drush Make in the past: you don’t store all the core and contributed module code in your repository; you just record which modules and versions you’re using and pull them in dynamically.

I was a little worried about changing the process I’ve used for a long time, but my worries were for nothing. Anyone who’s used to Drush would probably find it pretty easy to get this up and running. 

TLDR: How to go from an empty directory to a fully functional Drupal site in two command lines:

sudo composer create-project drupal-composer/drupal-project:~8.0 drupal --stability dev --no-interaction
cd drupal/web
../vendor/bin/drush site-install --db-url=mysql://{username}:{password}@localhost/{database}

Install Composer

Let's talk through the whole process, step by step. The first step is to install Composer on your local system. See getcomposer.org for more information about installing Composer.

Set Up A Project With Composer

To create a new Drupal project using Composer, type the following on the command line, where /var/drupal is the desired code location:

cd /var
sudo composer create-project drupal-composer/drupal-project:~8.0 drupal --stability dev --no-interaction

The packaging process downloads all the core modules, Devel, Drush, and Drupal Console, and then moves all the Drupal code into a ‘web’ subdirectory. It also moves the vendor directory outside of the web root. The new file structure will look roughly like this:

drupal/
  composer.json
  vendor/
  web/
    core/
    modules/contrib/
    profiles/contrib/
    themes/contrib/
    sites/

You will end up with a composer.json file at the base of the project that might look like the following. You can see the beginning of the module list in the ‘require’ section, and that Drush and Drupal Console are included by default. You can also see rules that move contributed modules into ‘/contrib’ subfolders as they’re downloaded.

{
  "name": "drupal-composer/drupal-project",
  "description": "Project template for Drupal 8 projects with composer",
  "type": "project",
  "license": "GPL-2.0+",
  "authors": [
    {
      "name": "",
      "role": ""
    }
  ],
  "repositories": [
    {
      "type": "composer",
      "url": ""
    }
  ],
  "require": {
    "composer/installers": "^1.0.20",
    "drupal/core": "8.0.*",
    "drush/drush": "8.*",
    "drupal/console": "~0.8"
  },
  "minimum-stability": "dev",
  "prefer-stable": true,
  "scripts": {
    "post-install-cmd": "scripts/composer/"
  },
  "extra": {
    "installer-paths": {
      "web/core": ["type:drupal-core"],
      "web/modules/contrib/{$name}": ["type:drupal-module"],
      "web/profiles/contrib/{$name}": ["type:drupal-profile"],
      "web/themes/contrib/{$name}": ["type:drupal-theme"],
      "web/drush/commands/{$name}": ["type:drupal-drush"]
    }
  }
}

That site organization comes from the drupal-composer/drupal-project template. A file there describes the process for doing things like updating core. The contributed modules come from Packagist rather than directly from drupal.org. That’s because the current Drupal versioning scheme doesn’t qualify as the semantic versioning Composer needs. There is an ongoing discussion about how to fix that.

Install Drupal

The right version of Drush for Drupal 8 comes built into this package. If you have an empty database you can then install Drupal using the Drush version in the package:

cd drupal/web
../vendor/bin/drush site-install --db-url=mysql://{username}:{password}@localhost/{database}

If you don’t do the installation with Drush you can do it manually, but the Drush installation handles all this for you. The manual process for installing Drupal 8 is:

  • Copy default.settings.php to settings.php and unprotect it
  • Copy default.services.yml to services.yml and unprotect it
  • Create sites/default/files and unprotect it
  • Navigate to EXAMPLE.COM/install to provide the database credentials and follow the instructions.
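The steps above can be sketched as shell commands. This is only an illustration — a scratch directory stands in for a real Drupal checkout, and ‘unprotect’ is interpreted as loosening file permissions:

```shell
# Mock up a Drupal web root so the steps can run anywhere (illustration only).
DRUPAL=$(mktemp -d)
mkdir -p "$DRUPAL/sites/default"
touch "$DRUPAL/sites/default/default.settings.php"

# The manual installation steps themselves:
cd "$DRUPAL/sites/default"
cp default.settings.php settings.php
chmod 666 settings.php   # "unprotect" so the installer can write to it
mkdir -p files
chmod 777 files          # the installer writes uploaded files here
```

Once the browser-based installation finishes, Drupal expects these permissions to be tightened back up.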
Add Contributed Modules From Packagist

Adding contributed modules is done a little differently. Instead of adding modules using drush dl, add additional modules by running composer commands from the Drupal root:

composer require drupal/migrate_upgrade 8.*@dev
composer require drupal/migrate_plus 8.*@dev

As you go, each module will be downloaded from Packagist and composer.json will be updated to add this module to the module list. You can peek into the composer.json file at the root of the project and see the ‘require’ list evolving.
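For instance, after requiring the two migrate modules, the ‘require’ section of composer.json would grow to something like this (version constraints assumed):

```json
{
  "require": {
    "composer/installers": "^1.0.20",
    "drupal/core": "8.0.*",
    "drush/drush": "8.*",
    "drupal/console": "~0.8",
    "drupal/migrate_upgrade": "8.*@dev",
    "drupal/migrate_plus": "8.*@dev"
  }
}
```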

Repeat until all desired contributed modules have been added. The composer.json file will then become the equivalent of a Drush make file, with documentation of all your modules.

For even more parity with Drush Make, you can add external libraries to your composer.json as well, and, with a plugin, you can also add patches to it. See the drupal-composer/drupal-project documentation for more details about all these options.
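As a sketch, a patched composer.json could look like the following — cweagans/composer-patches is the real plugin name, but the issue description and patch URL here are placeholders:

```json
{
  "require": {
    "cweagans/composer-patches": "~1.0"
  },
  "extra": {
    "patches": {
      "drupal/core": {
        "Placeholder issue description": "https://www.drupal.org/files/issues/placeholder.patch"
      }
    }
  }
}
```

With the plugin installed, Composer applies the listed patches each time the patched package is installed or updated.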

Commit Files to the Repo

Commit the composer.json changes to the repo. The files downloaded by Composer do not need to be added to the repo. You’ll see a .gitignore file that keeps them out (this was added as a part of the composer packaging). Only composer.json, .gitignore and the /sites subdirectory (except /sites/default/files) will be stored in the git repository.

.gitignore:

# Ignore directories generated by Composer
vendor
web/core
web/modules/contrib
web/themes/contrib
web/profiles/contrib

# Ignore Drupal's file directory
web/sites/default/files

Update Files

To update the files any time they might have changed, navigate to the Drupal root on the command line and run:

composer update

Add additional Drupal contributed modules, libraries, and themes at any time from the Drupal root with the same command used earlier:

composer require drupal/module_name 8.*@dev

That will add another line to the composer.json file for the new module. Then the change to composer.json needs to be committed and pushed to the repository. Other installations will pick this change up the next time they do git pull, and they will get the new module when they run composer update.

The composer update command should be run after any git pull or git fetch. So the standard routine for updating a repository might be:

git pull
composer update
drush updb
...

New Checkout

The process for a new checkout of this repository on another machine would simply be to clone the repository, then cd into it and run the following, which will then download all the required modules, files, and libraries:

composer install

That’s It

So that’s it. I was a little daunted at first but it turns out to be pretty easy to manage.  You can use the same process on a Drupal 7 site, with a few slight modifications.

Obviously the above process describes using the development versions of everything, since Drupal 8 is still in flux. As it stabilizes you’ll want to switch from using 8.*@dev to identifying specific stable releases for core and contributed modules.

See the links below for more information: