Suggestions for Avoiding the Workday Motivation Melt Down

Because Lullabot is a distributed company, I interact and communicate with my co-workers a bit differently than some in traditional, physical offices may. We kick-off projects with on-sites, in-person workshops and the like, but for much of the life of a project, I’m not physically in the same room as the rest of the team. We communicate and collaborate a lot on the phone, in Google Hangouts, and in Slack. While working with a distributed team can, at times, really help with productivity and give a great sense of autonomy, there are also times when you feel unmotivated and you can't rely on the energy of the people in the room with you to get you going. After a year and a half, I can finally say I’ve found myself in a good rhythm and feel like I’ve learned how to work efficiently and effectively in a way that keeps me motivated and focused throughout the day.  Below are just a few ideas and tips that have helped me maximize my productivity, creativity and enjoyment of my work.

Personalize your routine

Don’t be afraid to experiment and find a daily routine that fits you best. You can lean into your natural rhythms and have the flexibility to work during the time of day you’re most creative and productive. It’s okay to work outside of the 9-5 time box. Just make sure you’re still able to make all the necessary meetings and are in touch with your team. At Lullabot, we use Slack to stay connected with team members and projects when working odd hours.

Avoid Multitasking I will have Evernote open throughout the day to help me capture and search notes, so it makes sense for me to also use it as a tool for creating task lists. I’ve also heard great things about Wunderlist,, and todoist. It can be tempting to multitask, especially if you’re working from home. Washing the dishes and trying to run a client call at the same time is just a bad idea. Multitasking involves context switching, which often quickly depletes energy  and can lead to exhaustion. You actually can get more done if you focus on one task at a time. One of the great things about working for Lullabot is that we’re usually assigned to a single project for a duration of time. Because of the narrow focus, I’ve noticed that I often produce better quality work within a shorter amount of time. Creating a task checklist can also help you avoid distractions and multitasking and keep you focused throughout the day.

Create a dedicated space for work time

The boundaries of work and personal time can be very easily blurred when working from home. Creating a dedicated space for work allows you to mentally shift from work time to personal time when the work day is complete. Don’t have an entire room to dedicate to an office? A small area in the corner of a bedroom or dining room will do. Using a notification system such as a post-it-note on the door or a do not disturb sign can let family members or significant others know when you can or can’t be interrupted. When your work day is done, performing routine activities such as making dinner or going for an end of the day walk can help you mentally wind down the work day.

Complete tasks away from the computer

It’s good to keep in mind that the computer is only one of many tools at your disposal, and that you should whenever possible work in other mediums. Stepping away from the computer and using a different tool to complete a task can be mentally refreshing, encourage exploration and can help reset energy for a new task.  As a design team, we often take this approach and sketch out ideas on paper whenever possible before working on the computer.

Take breaks

Taking small breaks throughout the work day can help keep you motivated and focused. Short walks or doing small tasks like emptying the dishwasher can help clear your mind when you’re working on a tough problem. Shifting your focus to simple tasks during breaks can help reset your mind and inspire new solutions. It’s what designers like Cameron Moll refer to as “creative pause.” Sometimes taking a break can slip your mind when you’re secluded and are in the zone. Setting an alert that goes off during certain parts of the day can help remind you to get up, stretch and walk away for a bit.

Switch up your routine

Routines are great, but too much repetition can be boring and reduce your motivation. If you can’t seem to focus on a task, don’t be afraid to change things up. It can be something as small as removing yourself from your home office and working at a coffee shop, or moving to a standing desk for part of your day.

Stay connected

It’s important that you feel connected to your team and the work that you do. Feeling isolated can interfere with your motivation and focus, and the lack of personal connection can make you feel less accountable when working on a team. If you’re feeling disconnected,  don’t be afraid to reach out to coworkers for a quick non work-related chat. At Lullabot, we have several co-workers that join a morning coffee or afternoon lunch Hangout. You can also reach out into the community and join local meet-ups if you’re itching to talk shop in person with someone.

Hopefully by experimenting with a couple of these suggestions, you can more consistently maintain your motivation, focus and have productive, rewarding work days. Have other suggestions to add to this list? I’d love to hear what helps you to stay motivated throughout your workday.  

The Lullabot Podcast is back! Drupal 8! The Past! The Future!

Drupal 8 is here! The Lullabot Podcast is back! It's an exciting time to be alive. We talk about where we've been, before we look ahead to see where we're going. Oh, and we want to hear from you. Leave us listener feedback at 1-877-LULLABOT x789.

Creating a Custom Filter in Drupal 8

Do you want to make it easier for editors to insert a block of HTML by just including a short token? Maybe you want to add some custom Javascript or CSS, but only for content that contains a certain pattern. Or, maybe you want to filter out certain words that site visitors will find offensive.

In this article, we will create a custom filter for Drupal 8, one that replaces a pattern and adds the required CSS to the page. We’ll also add an option for the filter that users can toggle.

What are Drupal Filters and Text Formats?

Drupal allows you to create text formats, which are groupings of filters. Filters can serve several purposes, but one of the main use cases is to limit what HTML can be placed into content. This helps keeps the site more secure, and prevents editors from potentially breaking the layout.

One of the default text formats is “Basic HTML.” When you configure this format by going to admin/config/content/formats/manage/basic_html and scrolling down a bit, you can see all of the enabled filters for it.

Each filter can have optional settings. For example, you can view the options form for the “Limit allowed HTML tags” filter by scrolling down a bit more.

How do you create your own filters to enable on text formats, and how do you make them configurable?

Frame out the Module

First we create a ‘celebrate’ module folder and then our file.

name: Celebrate description: Custom filter to replace a celebrate token. type: module package: custom core: 8.x

A custom filter is a type of plugin, so we will need to create the proper folder structure to adhere to PSR-4 standards. Our folder structure will be celebrate/src/Plugin/Filter.

In the Filter folder, create a file named FilterCelebrate.php. Add the proper namespace for our file and pull in the FilterBase class so we can extend it.

Our file looks like this so far:

namespace Drupal\celebrate\Plugin\Filter; use Drupal\filter\Plugin\FilterBase; class FilterCelebrate extends FilterBase { }

FilterBase is an abstract class that implements the FilterInterface, taking care of most of the mundane setup and configuration. The only function we are required to implement in our own filter class is process(). According to the FilterInterface::process documentation, the function must return the filtered text, wrapped in a FilterProcessResult object. This means we need to put another use statement in our file.

This may seem onerous. Why can’t we just return the text itself? Why do we need another object? This will become clear later, when we need to add some more advanced use cases. There are good reasons for it. For now, to prevent our IDE or code editor from yelling at us, we’ll  put a passthrough function as a placeholder, which does no transformations on the text.

Here is our updated code:

namespace Drupal\celebrate\Plugin\Filter; use Drupal\filter\FilterProcessResult; use Drupal\filter\Plugin\FilterBase; class FilterCelebrate extends FilterBase { public function process($text, $langcode) { return new FilterProcessResult($text); } } Get Drupal to Discover our Filter

Since a filter type is a plugin, Drupal needs us to add an annotation to our class so it knows exactly what its needs to do with our code. Annotations are comments placed before the class definition, arranged in a certain format.

The file, with our annotation, will look like this:

namespace Drupal\celebrate\Plugin\Filter; use Drupal\filter\FilterProcessResult; use Drupal\filter\Plugin\FilterBase; /** * @Filter( * id = "filter_celebrate", * title = @Translation("Celebrate Filter"), * description = @Translation("Help this text format celebrate good times!"), * type = Drupal\filter\Plugin\FilterInterface::TYPE_MARKUP_LANGUAGE, * ) */ class FilterCelebrate extends FilterBase { public function process($text, $langcode) { return new FilterProcessResult($text); } }

Every plugin declaration needs an id, so we give it a reasonable one. Title and description are what will be shown on the admin screens. After we enable the module, you should see something like this on the screen for a text format:

The “type” in the annotation needs a bit more of an explanation. This is a classification for the purpose of the filter, and there are a few constants that help us populate the property. From the documentation, we have the following options:

  • FilterInterface::TYPE_HTML_RESTRICTOR: HTML tag and attribute restricting filters.
  • FilterInterface::TYPE_MARKUP_LANGUAGE: Non-HTML markup language filters that generate HTML.
  • FilterInterface::TYPE_TRANSFORM_IRREVERSIBLE: Irreversible transformation filters.
  • FilterInterface::TYPE_TRANSFORM_REVERSIBLE: Reversible transformation filters.

For our purposes, we plan on taking a bit of non-HTML markup and turning it into HTML, so the second classification fits.

There are a few more optional properties for Filter annotations, and they can be found in the FilterInterface documentation.

Adding Basic Text Processing

For this filter, we want to replace every instance of the token “[celebrate]” with the HTML snippet “<span class=”celebrate-filter”>Good Times!</span>”. To do that, we add some code to our FilterCelebrate::process function.

public function process($text, $langcode) { $replace = '<span class="celebrate-filter">’ . $this->t(‘Good Times!’) . ‘</span>'; $new_text = str_replace('[celebrate]', $replace, $text); return new FilterProcessResult($new_text); }

Enable the Celebrate filter for the Basic HTML content filter, and create some test content that contains the [celebrate] token. You should see it replaced by the HTML snippet defined above. If not, check to make sure the field has the Basic HTML filter applied.

Adding a Settings Form for the Filter

But we want the user to be able to toggle an option regarding this filter. To do that, we need to define a settings form by overriding the settingsForm() method for our class.

We add the following code to our class to define a form array for our filter:

public function settingsForm(array $form, FormStateInterface $form_state) { $form['celebrate_invitation'] = array( '#type' => 'checkbox', '#title' => $this->t('Show Invitation?'), '#default_value' => $this->settings['celebrate_invitation'], '#description' => $this->t('Display a short invitation after the default text.'), ); return $form; }

For more details on using the Form API to define a form array, check out the Form API Documentation. If you have created for altered forms in Drupal 7, the convention should be familiar.

If we reload our text format admin page after adding this function, we’ll get an error:

Fatal error: Declaration of Drupal\celebrate\Plugin\Filter\FilterCelebrate::settingsForm() must be compatible with Drupal\filter\Plugin\FilterInterface::settingsForm(array $form, Drupal\Core\Form\FormStateInterface $form_state)

Essentially, it doesn’t know what a FormStateInterface, as designated in our settingsForm() method. We need to either add the full PSR-4 namespace to the method definition, or add another use statement. For our example, we’ll add another use statement to the top of our FilterCelebrate.php file.

use Drupal\Core\Form\FormStateInterface;

Now we can see our settings form in action.

To get access to these settings in our class, we can call $this->settings[‘celebrate_invitation’].

Our process method now looks like this:

public function process($text, $langcode) { $invitation = $this->settings['celebrate_invitation'] ? ' Come on!' : ''; $replace = '<span class="celebrate-filter">’ . $this->t(‘Good Times!' . $invitation) . ' </span>'; $new_text = str_replace('[celebrate]', $replace, $text); return new FilterProcessResult($new_text); }

Now, if the “Show Invitation?” setting is checked, the text “Come on!” is added to the end of the replacement text.

Adding CSS to the Page When the Filter is Applied

But now we want to add a shaking CSS animation to the replacement text on hover, because we want to celebrate like it's 1999. The CSS should only be loaded when the filter is being used. This is where the additional properties of the FilterProcessResult object come into play.

First, we’ll create a CSS file in the root of our module folder called “celebrate.theme.css”. The following CSS is everything we need to enable a shaking effect on hover:

.celebrate-filter { background-color: #000066; padding: 10px 5px; color: #fff; } .celebrate-filter:hover { animation: shake .3s ease-in-out infinite; background-color: #ff0000; } @keyframes shake { 0% { transform: translateX(0); } 20% { transform: translateX(-6px); } 40% { transform: translateX(6px); } 60% { transform: translateX(-6px); } 80% { transform: translateX(6px); } 100% { transform: translateX(0); } }

In order to attach our CSS file to the FilterProcessResult, it needs to be declared as a library. Create another file in the module root called “celebrate.libraries.yml” with the following text:

celebrate-shake: version: 1.x css: theme: celebrate.theme.css: {}

This defines a library called “celebrate-shake” that includes a CSS file. Multiple CSS and/or Javascript files can be included in a single library. For more details, see the documentation on defining a library.

Now that we have defined a library, we can add it to the page whenever our filter is being applied. We use the setAttachments() method of the FilterProcessResult object to add our library so our process function will look like this:

public function process($text, $langcode) { $invitation = $this->settings['celebrate_invitation'] ? ' Come on!' : ''; $replace = '<span class="celebrate-filter">’ . $this->t(‘Good Times!' . $invitation) . ' </span>'; $new_text = str_replace('[celebrate]', $replace, $text); $result = new FilterProcessResult($new_text); $result->setAttachments(array( 'library' => array('celebrate/celebrate-shake'), )); return $result; }

You will notice that we use the identifier “celebrate/celebrate-shake” to refer to our new library. The first half of the identifier is our module name and the second half is the library name itself. This is to help prevent name conflicts and collisions.

And as an added bonus, other modules will also be able to use our celebrate library.

You can download the full Celebrate Filter module here.

Remember, since filters can be mixed and stacked on top of each other, a good filter will do one thing really well. Keep the use case compact and discreet. If you find your filter code getting long, having to write a lot of exceptions, work around assumptions, and adding more and more options to the settings form, it might be a good idea to step back and see if it makes sense to create more than one type of custom filter.

Configuration Management in Drupal 8: The Key Concepts

While the new configuration system in Drupal 8 strives to make the process of exporting and importing site configuration feel almost effortless, immensely complex logic facilitates this process. Over the past five years, the entire configuration system code was written and rewritten multiple times, and we think we got much of it right in its present form. As a result of this work, it is now possible to store configuration data in a consistent manner and to manage changes to configuration. Although we made every attempt to document how and why decisions were made – and to always update issue queues, documentation, and change notices – it is not reasonable to expect everyone to read all of this material. But I did, and in this post I try to distill years of thinking, discussions, issue summaries, code sprints, and code to ease your transition to Drupal 8.

In this article I highlight nine concepts that are key to understanding the configuration system. This article is light on details and heavy on links to additional resources.

  1. It is called the “configuration system.” The Configuration Management Initiative (CMI) is, by most reasonable measures, feature complete. The number of CMI critical issues was reduced to zero back in the Spring and the #drupal-cmi IRC channel has been very quiet over the past few months. Drupal now has a functional configuration management system, but we only should call the former a CMS. While it is tempting to think of “CMI” as an orphaned initialism, like AARP or SAT, we aspire to avoid confusion. Our preferred phrase to describe the result of CMI is “configuration system.” This is the phrase we use in the issue queue and the configuration system documentation.

  2. DEV ➞ PROD. Perhaps the most important concept to understand is that the configuration system is designed to optimize the process of moving configuration between instances of the same site. It is not intended to allow exporting the configuration from one site to another. In order to move configuration data, the site and import files must have matching values for UUID in the configuration item. In other words, additional environments should initially be set up as clones of the site. We did not, for instance, hope to facilitate exporting configuration from and importing it into

  3. The configuration system is highly configurable. Out of the box the configuration system stores configuration data in the database. However, it allows websites to easily switch to file-based storage, MongoDB, Redis, or another favorite key/value store. In fact, there is a growing ecosystem of modules related to the configuration system, such as Configuration Update, Configuration Tools, Configuration Synchronizer, and Configuration Development.

  4. There is no “recommended” workflow. The configuration system is quite flexible and we can imagine multiple workflows. On one end of the spectrum, we expect some small sites will not ever use the configuration manager module to import and export configuration. For the sites that utilize the full capabilities of the configuration system, one key question they will need to answer regards the role that site administrators will play in managing configuration. I suspect many sites will disable configuration forms on their production sites – perhaps using modules like Configuration Read-Only Mode – and make all configuration changes in their version control system.

  5. Sites, not modules, own configuration. When a module is installed, and the configuration system imports the configuration data from the module’s config/install directory (and perhaps the config/optional directory), the configuration system assumes the site owner is now in control of the configuration data. This is a contentious point to some developers because module maintainers will need to use update hooks rather than making simple changes to their configuration. Changing the files in a module’s config/install directory after the module has been installed will have no effect on the site.

  6. Developers will still use Features. The Features module in Drupal 8 changes how the configuration system works to allow modules to control their configuration. Mike Potter, Nedjo Rogers, and others have been making Features in Drupal 8 do the kinds of things Features was originally intended to do, which is to bundle functionality, such as a “photo gallery feature.” The configuration system makes the work of the Features module maintainers exponentially easier and as a result, we all expect using Features to be more enjoyable in Drupal 8 than it was in Drupal 7.

  7. There are two kinds of configuration in Drupal 8: simple configuration and configuration entities. Simple configuration stores basic configuration, such as boolean values, integers, or texts. Simple configuration has exactly one copy or version, and is somewhat similar to using variable_get() and variable_set() in Drupal 7. Configuration entities, like content types, enable the creation of zero or more items and are far more complex. Examples of configuration entities include views, content types, and image styles.

  8. Multilingual needs drove many decisions. Many of the features of the configuration system exist to support multilingual sites. For example, the primary reason schema files were introduced was for multilingual support. And many of the benefits to enabling multilingual functionality resulted in enhancements that had much wider benefits. The multilingual initiative was perhaps the best organized and documented Drupal 8 initiative and their initiative website contains extensive information and documentation.

  9. Configuration can still be overridden in settings.php. The $config variable in the settings.php file provides a mechanism for overriding configuration data. This is called the configuration override system. Overrides in settings.php take precedence over values provided by modules. This is a good method for storing sensitive data that should not be stored in the database. Note, however, that the values in the active configuration – not the values from settings.php – are displayed on configuration forms. Of course, this behavior can be modified to match expected workflows. For example, some site administrators will want the configuration forms to indicate when form values are overridden in settings.php.

If you want more information about the configuration system, the best place to start is the Configuration API page on It contains numerous links to additional documentation. Additionally, Alex Pott, my fellow configuration system co-maintainer, wrote a series of blog posts concerning the “Principles of Configuration Management” that I enthusiastically recommend.

I hope you will agree that the configuration system is one of the more exciting features of Drupal 8.

This article benefitted from helpful conversations and reviews by Tim Plunkett, Jennifer Hodgdon, Andrew Berry, and Juampy NR.

Five-Fifteens: A Simple Way to Keep Information Flowing Across Teams

Five minutes to read, fifteen minutes to write.  

A five-fifteen is a communication tool that makes the task of reporting upwards a quick, painless, and easy thing to do. The basic idea is to sit down and write the answers to just enough questions that it would only take you fifteen minutes to write and someone else five minutes to read.

This is a process we encourage everyone to do at Lullabot. It gives your direct report or entire team – depending on who you want to send it to – an idea of how your week is going and the challenges that you may be facing so that they have a chance to empathize and try to help in any way they can. It gives you a chance to speak your mind about your work, your life, and to give your direct report the all-important feedback that they need to hear in order to better help you in your life and in your work.

We’ve written a tool to help the creation of these letters. You can use it by going to and following the few short steps that will help you open that line of communciation. It uses your default mail client and is totally client side, so no record of the emails are stored in our system. It’s a completely open source tool, so feel free to send us pull requests or fork it for your own purposes from GitHub:

What’s your favorite way to communicate valuable information to your direct report or team?

Goodbye Drush Make, Hello Composer!

I’ve built and rebuilt many demo Drupal 8 sites while trying out new D8 modules and themes and experimenting with new functionality like migrations. After installing D8 manually from scratch so many times, I decided to sit down and figure out how to build a Drupal site using Composer to make it easier. The process is actually very handy, sort of the way we’ve used Drush Make in the past, where you don’t actually store all the core and contributed module code in your repository, you just record which modules and versions you’re using and pull them in dynamically.

I was a little worried about changing the process I’ve used for a long time, but my worries were for nothing. Anyone who’s used to Drush would probably find it pretty easy to get this up and running. 

TLDR: How to go from an empty directory to a fully functional Drupal site in two command lines:

sudo composer create-project drupal-composer/drupal-project:~8.0 drupal --stability dev --no-interaction cd drupal/web ../vendor/bin/drush site-install --db-url=mysql://{username}:{password}@localhost/{database} Install Composer

Let's talk through the whole process, step by step. The first step is to install Composer on your local system. See for more information about installing Composer.

Set Up A Project With Composer

To create a new Drupal project using Composer, type the following on the command line, where /var/drupal is the desired code location:

cd /var sudo composer create-project drupal-composer/drupal-project:~8.0 drupal --stability dev --no-interaction

The packaging process downloads all the core modules, Devel, Drush and Drush Console, and then moves all the Drupal code into a ‘web’ subdirectory. It also moves the vendor directory outside of the web root. The new file structure will look like this:

You will end up with a composer.json file at the base of the project that might look like the following. You can see the beginning of the module list in the ‘require’ section, and that Drush and Drupal Console are included by default. You can also see rules that move contributed modules into ‘/contrib’ subfolders as they’re downloaded.

{ "name": "drupal-composer/drupal-project", "description": "Project template for Drupal 8 projects with composer", "type": "project", "license": "GPL-2.0+", "authors": [ { "name": "", "role": "" } ], "repositories": [ { "type": "composer", "url": "" } ], "require": { "composer/installers": "^1.0.20", "drupal/core": "8.0.*", "drush/drush": "8.*", "drupal/console": "~0.8", }, "minimum-stability": "dev", "prefer-stable": true, "scripts": { "post-install-cmd": "scripts/composer/" }, "extra": { "installer-paths": { "web/core": ["type:drupal-core"], "web/modules/contrib/{$name}": ["type:drupal-module"], "web/profiles/contrib/{$name}": ["type:drupal-profile"], "web/themes/contrib/{$name}": ["type:drupal-theme"], "web/drush/commands/{$name}": ["type:drupal-drush"] } } }

That site organization comes from A file there describes the process for doing things like updating core. The contributed modules are coming from Packagist rather than directly from That’s because the current Drupal versioning system doesn’t qualify as the semantic versioning the system needs. There is an ongoing discussion about how to fix that.

Install Drupal

The right version of Drush for Drupal 8 comes built into this package. If you have an empty database you can then install Drupal using the Drush version in the package:

cd drupal/web ../vendor/bin/drush site-install --db-url=mysql://{username}:{password}@localhost/{database}

If you don’t do the installation with Drush you can do it manually, but the Drush installation handles all this for you. The manual process for installing Drupal 8 is:

  • Copy default.settings.php to settings.php and unprotect it
  • Copy default.license.yml to license.yml and unprotect it
  • Create sites/files and unprotect it
  • Navigate to EXAMPLE.COM/install to provide the database credentials and follow the instructions.
Add Contributed Modules From Packagist

Adding contributed modules is done a little differently. Instead of adding modules using drush dl, add additional modules by running composer commands from the Drupal root:

composer require drupal/migrate_upgrade 8.1.*@dev composer require drupal/migrate_plus 8.1.*@dev

As you go, each module will be downloaded from Packagist and composer.json will be updated to add this module to the module list. You can peek into the composer.json file at the root of the project and see the ‘require’ list evolving.

Repeat until all desired contributed modules have been added. The composer.json file will then become the equivalent of a Drush make file, with documentation of all your modules.

For even more parity with Drush Make, you can add external libraries to your composer.json as well, and, with a plugin, you can also add patches to it. See more details about all these options at

Commit Files to the Repo

Commit the composer.json changes to the repo. The files downloaded by Composer do not need to be added to the repo. You’ll see a .gitignore file that keeps them out (this was added as a part of the composer packaging). Only composer.json, .gitignore and the /sites subdirectory (except /sites/default/files) will be stored in the git repository.

.gitignore # Ignore directories generated by Composer vendor web/core web/modules/contrib web/themes/contrib web/profiles/contrib # Ignore Drupal's file directory web/sites/default/files Update Files

To update the files any time they might have changed, navigate to the Drupal root on the command line and run:

composer update

Add additional Drupal contributed modules, libraries, and themes at any time from the Drupal root with the same command used earlier:

composer require drupal/module_name 8.1.*@dev

That will add another line to the composer.json file for the new module. Then the change to composer.json needs to be committed and pushed to the repository. Other installations will pick this change up the next time they do git pull, and they will get the new module when they run composer update.

The composer update command should be run after any git pull or git fetch. So the standard routine for updating a repository might be:

git pull composer update drush updb ... New Checkout

The process for a new checkout of this repository on another machine would simply be to clone the repository, then cd into it and run the following, which will then download all the required modules, files, and libraries:

composer install That’s It

So that’s it. I was a little daunted at first but it turns out to be pretty easy to manage.  You can use the same process on a Drupal 7 site, with a few slight modifications.

Obviously the above process describes using the development versions of everything, since Drupal 8 is still in flux. As it stabilizes you’ll want to switch from using 8.1.*@dev to identifying specific stable releases for core and contributed modules.

See the links below for more information:

Why and how we migrated comments to Disqus

When we started working on the latest relaunch of, we wanted to decouple the front end from the back end. While evaluating how to migrate comments, we realized the following:

  • User navigation at is anonymous so Drupal had nothing to do with comments apart from storing them in the database.
  • Even though we were using Mollom, we were still getting spam comments.

After evaluating a few third party commenting tools, we chose Disqus for its moderation and auto-spam detection system. This article details how we exported a large amount of Drupal comments into a Disqus account. The process was tedious and it required us to do some debugging and polishing so in order to make things easier for everyone, and for those reasons it's worth documenting the steps we took.

Here is the list of steps that we will follow:

  1. Creating an account at Disqus.
  2. Finding a way to export Drupal comments for Disqus.
  3. Installing and configuring Disqus Migrate module.
  4. Exporting comments.
  5. Uploading comments to Disqus.
  6. Verifying the setup.
1. Creating an account at Disqus

In order to embed Disqus threads in our website, we need a Disqus account and a Disqus site. Once we have created an account we can open and register here our site:

Next, at the General tab, we need to set the following options:

At the top of this form we see that our site's shortname is lullabot. We will use this parameter later at the Drupal administration to identify ourselves against Disqus. Once we are done with this form we will move to the Advanced tab, where we can specify which domains are allowed to load our site's comments:

This setting will save you a lot of headaches as it prevents Disqus from rendering the Discussion Widget if the current hostname does not match. This in turn precludes developers or testers from creating new threads from development domains such as or lullabot.local, or even worse: posting test comments that would appear later in the live site. Instead, they will see the following message:

Tip: If you need to test the Disqus Discussion Widget locally, you can edit your Hosts file in order to fake the trusted domain.

2. Finding a way to export Drupal comments for Disqus

Disqus can import comments into an XML file following the WXR (WordPress eXtended RSS) format. Here is the Disqus interface to upload the file:

What we need is a tool in Drupal to export comments into a single WXR file. By looking at the Disqus module description, exporting comments was implemented for the Drupal 6 version, but never got completed for Drupal 7:

By searching further I found a sandbox called Disqus Migrate which is a port of the Drupal 6 sub-module for Drupal 7. The module's description did not look promising at all: a sandbox project, marked as Unsupported, encouraging not to use with real accounts, and other warnings: 

I was wrong though, as the module proved to be a solid project that simply needed some polishing. In fact, during the process I submitted a few patches to the module's issue queue and promoted it to a full project.

3. Installing and configuring Disqus Migrate module

Installing sandbox projects is not among best practices in Drupal but at Lullabot sometimes we discover sandbox projects that are mature enough to become full projects. In these cases, we help module maintainers to fix existing bugs and then we encourage them to promote the sandbox to a full project. 

I glanced through the module's source code at and my first impression was positive. I did not see any security holes and the code was easy to understand. Next, I downloaded and installed the module in my local instance of by entering the following commands:

git clone --branch 7.x-1.x sites/all/modules/sandbox/disqus_migrate drush en disqus_migrate

Disqus Migrate leverages the Disqus Drupal module by adding a tab called Migrate at admin/config/services/disqus. At the top of this administration form we must set the shortname, which we saw before at the Disqus admin panel. If we are going to use Drupal to render the Disqus widget, then we need a set of keys to connect with the Disqus API. Once you have registered your application there, you can enter the keys at the form like so:

Unless we are exporting comments from the production environment (which we don't recommend), we need to override the base URL so each thread will have the trusted domain. Here is the field where we can set this:

4. Exporting comments

Now we are ready to export comments. My local environment has a recent copy of the production environment's database. Disqus Migrate adds a task at admin/content/comment where we can export comments:

After confirming, we get all comments in an XML file:

Here is how the XML file looks:

In the above XML file, there is a <comment> entry for each comment which belongs to a thread. Each thread has a unique identifier which the Disqus Migrate module sets to the node path, such as node/30. Now we are ready to upload comments to Disqus.

5. Uploading comments to Disqus

Here is a screenshot of the Import tool in the Disqus administration. We have selected the XML file and after clicking Upload and import, the file will be uploaded and processed by Disqus.

That's all, after a few minutes we should see comments at the Comments tab:

If you have issues uploading comments to Disqus, then have a look at these troubleshooting tips. In our case, we used xmllint to discover bugs in the the XML file and then we submitted a few patches to the Disqus Migrate's issue queue to fix them.

6. Verifying the setup and wrapping up

If you are using Drupal to render the front end and you filled out the administration form that we saw at Installing and configuring Disqus Migrate module, Disqus comments should load automatically for your site. has a decoupled front end powered by React so we wrote a React component which, given a thread title and identifer, renders the Disqus widget with its comments. You can see it in action below this article. Try it out and leave us a comment with your thoughts.

Image credit:

A Tale of Two Base Themes in Drupal 8 core

Hi, my name is Marc, and I am a markup nerd.

My skin crawls when I see HTML filled with divs tucked inside each other over and over again like a Russian nesting doll. My shoulders tense up when I see class names spilling out of a class attribute like clowns in a circus car. Conversely, my heart sings at the sight of cleanly crafted HTML that uses just the right elements to describe its contents, with sensible BEM-style classes that simplify styling and clarify how the markup has been structured.

Because I prefer being very particular about markup, I’ve been known to develop an eye twitch from time to time while theming Drupal sites.

In 2013 I started contributing fairly regularly to the development of Drupal 8. Working at Lullabot since September 2014 has made it possible to devote even more of my time to working on Drupal 8 front-end improvements. There’s a great team of developers around the world focused on improving the theming experience in Drupal 8, and it’s been a joy collaborating with them. I’m very excited about how we’ve worked together to make Drupal 8 better for front-end developers and themers.

Thanks to those efforts, I’m looking forward to having much better tools for carefully crafting markup in Drupal 8.

Making markup shine in Drupal 8

The new templating engine in Drupal 8, Twig, makes it easier to change markup without knowing the ins and outs of PHP. Most of the classes in Drupal are now added in the Twig templates themselves, rather than being buried deep in preprocessor and theme functions. Better yet, by turning on Twig Debug in the services.yml file in your sites/default folder, you can see which templates are generating each bit of markup when you view source. You’ll find which template file is currently in use along with suggestions for filenames you can use in your theme to override that template.

The markup itself has been greatly improved as well. Contributors worked to improve template logic so that only necessary divs or other elements are generated: yes, there are fewer wrapper divs in Drupal 8! In previous versions of Drupal, field markup introduced wrapper after wrapper, often unnecessarily. Now if you have only one value and no label for a field...there’s only one div.

Two base themes in Drupal 8: Classy and Stable

In general, your theme has the final say on Drupal’s markup, CSS, and JS. Themes can override templates, CSS, and JS that come from core modules, contributed modules and base theme. Themes can have parent-child relationships, with the parent theme serving as the base theme for a sub-theme. A sub-theme inherits templates, CSS, and JS from its base theme. Themes can also have grandparent themes or even great-grandparent themes: we’ll talk about chaining together contrib themes with core themes in just a bit.

Drupal’s markup comes in two flavors, depending on what you choose as your base theme.

  • If you want to start with markup with sensible classes you can use as styling hooks, then you can use Classy as your base theme. This is similar to the approach used with Drupal 7’s default markup, but with classes and markup significantly improved.
  • However, if you want markup that is even more lean, you can use Stable as your base theme; in fact that’s the default. Stable still has some classes needed for front-end UI components like the toolbar and contextual links, but in general has far fewer classes than Classy, and even fewer wrapper elements.

By providing two flavors of markup with the Stable and Classy base themes, Drupal 8 allows you to choose how you want to approach markup changes:

  • Tweak the classes Drupal provides as a starting point (Classy),
  • Only add classes where you think they are essential (Stable).

If you prefer to start from scratch in order to use highly customized markup patterns, Stable may be the right base theme for you. This approach can be useful when implementing a framework like Bootstrap or Foundation that relies upon very particular markup patterns.

Field markup: Stable vs Classy

Let’s look at one example of how markup can differ between Stable and Classy.

Here’s Classy field markup when you have a label and multiple field values:

<div class="field field--name-field-favorite-horses field--type-string field--label-above"> <div class="field__label">Favorite horses</div> <div class='field__items'> <div class="field__item">Black Stallion</div> <div class="field__item">Shadowfax</div> <div class="field__item">Hidalgo</div> </div> </div>

In contrast, here’s the same markup when using Stable:

<div> <div>Favorite horses</div> <div> <div>Black Stallion</div> <div>Shadowfax</div> <div>Hidalgo</div> </div> </div>

Just for fun, look how much slimmer Classy’s markup is when only one value is allowed on a field, and the label is hidden:

<div class="field field--name-field-best-horse field--type-string field--label-hidden field__item">Mister Ed</div>

That markup gets even leaner with Stable:

<div>Mister Ed</div>

To see how far we've come, here's how that markup would look by default in Drupal 7:

<div class="field field-name-field-best-horse field-type-text field-label-hidden"> <div class="field-items"> <div class="field-item even">Mister Ed</div> </div> </div>

That’s a lot more markup than you really need when you know there will only ever be one value for a field.

From this example, you can see that Stable is a starting point. You would want to customize Stable’s markup in your theme to add sensible classes. Writing CSS using only HTML elements in your selectors is painful. However, with Stable you can build up only the classes you need, rather than spending time evaluating which classes in Classy should be retained.

Keeping Core markup reliable

One of the primary purposes of the Stable theme, which is new to core, is to provide a backwards compatibility layer for Drupal’s core markup, CSS, and JS. Those will all be locked within the Stable theme as of Drupal 8 RC1. You can rely upon Stable as your default base theme without worry that the markup will change on you during the Drupal 8 cycle.

The Classy theme will also provide reliable markup, CSS and JS, most likely by using Stable as its base theme.

So whether you choose Stable or Classy as your base theme, you can also rely on its markup whether you have Drupal 8.0.5 or Drupal 8.3.4 installed. Both Stable and Classy will still receive bug fixes—for example, if the wrong variable is being printed by a template. The goal is to make these fixes in a way that ensures backwards compatibility.

Drupal 8 also ships with two other themes, Bartik and Seven.

  • Bartik is the default theme when you install Drupal 8 and demonstrates one possible front-end implementation.
  • Seven serves as the default admin theme.

Both have been improved in Drupal 8 and use Classy as their base theme. Bartik and Seven can also continue to change throughout Drupal 8, so you wouldn’t want to use Bartik or Seven as a base theme. Bartik or Seven’s markup could be tweaked in 8.1 or 8.2, which could break your site’s appearance if you use Bartik or Seven as a base for your theme.

In the meantime, during the Drupal 8 cycle, core markup can continue to evolve. Contributors can continue to clean things up so that we don’t need to spend nearly as much time working on tidying our markup when work on Drupal 9 begins.

How to define the base theme in .info.yml

Many of a theme’s basic settings are defined in a file called within your theme folder, where MYTHEME is replaced with the name of your theme. This is similar to the .info file in Drupal 8.

If you want to use Classy as your theme, add the following in your theme’s .info.yml file:

base theme: classy

If you want Stable as your base theme, you do not need to add a base theme setting to your theme’s .info.yml. If no base theme is declared, by default your base theme will be set to Stable.

If you don’t want a base theme at all, add this to your theme’s .info.yml file:

base theme: false

That would mean your theme would use Drupal 8’s core templates, CSS, and JS directly, rather than passing through Stable or Classy first. That’s a risky strategy. If something changes in core, you might need to update your theme accordingly.

Using Classy or Stable as your base theme is a more reliable way to ensure the stability of your theme.

Chaining base themes together

Contrib themes can also choose to use Classy or Stable as their base theme, or no base theme at all. That’s right, a base theme like Zen can have Stable or Classy as its base theme. A sub-theme of Zen would have Zen as its base theme, but would also inherit from Zen’s base theme, whether that’s Classy or Stable. So while you might end up using a contrib theme as a base for your theme, ultimately that contrib theme will likely trace back to one of the two base themes in core, Classy or Stable.

When you evaluate base themes like Zen or Omega or Adaptive Theme, make sure to check their info.yml file for the base theme setting. If you see base theme: false, you may want to be wary as core markup will surely change. If you see Classy or no base theme setting at all, you’ll have an idea what kind of markup (and stability) to expect from that contrib theme.

To pull together what we've learned so far, this graphic created by Morten DK shows how Drupal's core themes interact with contrib themes or themes you might develop:

How Drupal 8 themes relate to each other

This is a good look at how things stand in October 2015, with Drupal 8 RC1 recently released. As more and more people try out Drupal 8 and build themes, we'll continue to develop a better understanding of how best to set up contrib and custom themes.

Learning more

You can learn more about theming in Drupal 8 by checking out John Hannah’s great articles on the subject, Drupal 8 Theming Fundamentals Part I and Part II

I’m definitely looking forward to working with markup in Drupal 8, more so than I ever have before. Hopefully you will too!

What Happened to Hook_Menu in Drupal 8?

In Drupal 7 and earlier versions hook_menu has been the Swiss Army knife of hooks. It does a little bit of everything: page paths, menu callbacks, tabs and local tasks, contextual links, access control, arguments and parameters, form callbacks, and on top of all that it even sets up menu items! In my book it’s probably the most-used hook of all. I don’t know if I’ve ever written a module that didn’t implement hook_menu.

But things have changed In Drupal 8. Hook_menu is gone and now all these tasks are managed separately using a system of YAML files that provide metadata about each item and corresponding PHP classes that provide the underlying logic.

The new system makes lots of sense, but figuring out how to make the switch can be confusing. To make things worse, the API has changed a few times over the long cycle of Drupal 8 development, so there is documentation out in the wild that is now incorrect. This article explains how things work now, and it shouldn't change any more.

I’m going to list some of the situations I ran into while porting a custom module to Drupal 8 and show before and after code examples of what happened to my old hook_menu items.

Custom Pages

One of the simplest uses of hook_menu is to set up a custom page at a given path. You'd use this for a classic "Hello World" module. In Drupal 8, paths are managed using a MODULE.routing.yml file to describe each path (or ‘route’) and a corresponding controller class that extends a base controller, which contains the logic of what happens on that path. Each controller class lives in its own file, where the file is named to match the class name. This controller logic might have lived in a separate file in Drupal 7.

In Drupal 7 the code might look like this:

function example_menu() { $items = array(); $items['main'] = array( 'title' => 'Main Page', 'page callback' => example_main_page', 'access arguments' => array('access content'), 'type' => MENU_NORMAL_ITEM, 'file' => '' ); return $items; } function example_main_page() { return t(‘Something goes here’); }

In Drupal 8 we put the route information into a file called MODULE.routing.yml. Routes have names that don’t necessary have anything to do with their paths. They are just unique identifiers. They should be prefixed with your module name to avoid name clashes. You may see documentation that talks about using _content or _form instead of _controller in this YAML file, but that was later changed. You should now always use _controller to identify the related controller.

example.main_page_controller: path: '/main' defaults: _controller: '\Drupal\example\Controller\MainPageController::mainPage' _title: 'Main Page' requirements: _permission: 'access content'

Note that we now use a preceding slash on paths! In Drupal 7 the path would have been main, and in Drupal 8 it is /main! I keep forgetting that and it is a common source of problems as I make the transition. It’s the first thing to check if your new code isn’t working!

The page callback goes into a controller class. In this example the controller class is named MainPageController.php, and is located at MODULE/src/Controller/MainPageController.php. The file name should match the class name of the controller, and all your module’s controllers should be in that /src/Controller directory. That location is dictated by the PSR-4 standard that Drupal has adopted. Basically, anything that is located in the expected place in the ‘/src’ directory will be autoloaded when needed without using module_load_include() or listing file locations in the .info file, as we had to do in Drupal 7.

The method used inside the controller to manage this route can have any name, mainPage is an arbitrary choice for the method in this example. The method used in the controller file should match the YAML file, where it is described as CLASS_NAME::METHOD. Note that the Contains line in the class @file documentation matches the _controller entry in the YAML file above.

A controller can manage one or more routes, as long as each has a method for its callback and its own entry in the YAML file. For instance, the core nodeController manages four of the routes listed in node.routing.yml.

The controller should always return a render array, not text or HTML, another change from Drupal 7.

Translation is available within the controller as $this->t() instead of t(). This works because ControllerBase has added the StringTranslationTrait. There's a good article about how PHP Traits like translation work in Drupal 8 on Drupalize.Me.

/** * @file * Contains \Drupal\example\Controller\MainPageController. */ namespace Drupal\example\Controller; use Drupal\Core\Controller\ControllerBase; class MainPageController extends ControllerBase { public function mainPage() { return [ '#markup' => $this->t('Something goes here!'), ]; } Paths With Arguments

Some paths need additional arguments or parameters. If my page had a couple extra parameters it would look like this in Drupal 7:

function example_menu() { $items = array(); $items[‘main/%/%’] = array( 'title' => 'Main Page', 'page callback' => 'example_main_page', ‘page arguments’ => array(1, 2), 'access arguments' => array('access content'), 'type' => MENU_NORMAL_ITEM, ); return $items; } function example_main_page($first, $second) { return t(‘Something goes here’); }

In Drupal 8 the YAML file would be adjusted to look like this (adding the parameters to the path):

example.main_page_controller: path: '/main/{first}/{second}' defaults: _controller: '\Drupal\example\Controller\MainPageController::mainPage' _title: 'Main Page’ requirements: _permission: 'access content'

The controller then looks like this (showing the parameters in the function signature)::

/** * @file * Contains \Drupal\example\Controller\MainPageController. */ namespace Drupal\example\Controller; use Drupal\Core\Controller\ControllerBase; class MainPageController extends ControllerBase { public function mainPage($first, $second) { // Do something with $first and $second. return [ '#markup => $this->t('Something goes here!'), ]; } }

Obviously anything in the path could be altered by a user so you’ll want to test for valid values and otherwise ensure that these values are safe to use. I can’t tell if the system does any sanitization of these values or if this is a straight pass-through of whatever is in the url, so I’d probably assume that I need to type hint and sanitize these values as necessary for my code to work.

Paths With Optional Arguments

The above code will work correctly only for that specific path, with both parameters. Neither the path /main, nor /main/first will work, only /main/first/second. If you want the parameters to be optional, so /main, /main/first, and /main/first/second are all valid paths, you need to make some changes to the YAML file.

By adding the arguments to the defaults section you are telling the controller to treat the base path as the main route and the two additional parameters as path alternatives. You are also setting the default value for the parameters. The empty value says they are optional, or you could give them a fixed default value to be used if they are not present in the url.

example.main_page_controller: path: '/main/{first}/{second}' defaults: _controller: '\Drupal\example\Controller\MainPageController::mainPage' _title: 'Main Page' first: '' second: '' requirements: _permission: 'access content' Restricting Parameters

Once you set up parameters you probably should also provide information about what values will be allowed for them. You can do this by adding some more information to the YAML file. The example below indicates that $first can only contain the values ‘Y’ or ‘N’, and $second must be a number. Any parameters that don’t match these rules will return a 404. Basically the code is expecting to evaluate a regular expression to determine if the path is valid.

See Symfony documentation for lots more information about configuring routes and route requirements.

example.main_page_controller: path: '/main/{first}/{second}' defaults: _controller: '\Drupal\example\Controller\MainPageController::mainPage' _title: 'Main Page' first: '' second: '' requirements: _permission: 'access content' first: Y|N second: \d+ Entity Parameters

As in Drupal 7, when creating a route that has an entity id you can set it up so the system will automatically pass the entity object to the callback instead of just the id. This is called ‘upcasting’. In Drupal 7 we did this by using %node instead of %. In Drupal 8 you just need to use the name of the entity type as the parameter name, for instance {node} or {user}.

example.main_page_controller: path: '/node/{node}' defaults: _controller: '\Drupal\example\Controller\MainPageController::mainPage' _title: 'Node Page' requirements: _permission: 'access content'

This upcasting only happens if you have type-hinted the entity object in your controller parameter. Otherwise it will simply be the value of the parameter.

JSON Callbacks

All the above code will create HTML at the specified path. Your render array will be converted to HTML automatically by the system. But what if you wanted that path to display JSON instead? I had trouble finding any documentation about how to do that. There is some old documentation that indicates you need to add _format: json to the YAML file in the requirements section, but that is not required unless you want to provide alternate formats at the same path.

Create the array of values you want to return and then return it as a JsonResponse object. Be sure to add ”use Symfony\Component\HttpFoundation\JsonResponse” at the top of your class so it will be available.

/** * @file * Contains \Drupal\example\Controller\MainPageController. */ namespace Drupal\example\Controller; use Drupal\Core\Controller\ControllerBase; use Symfony\Component\HttpFoundation\JsonResponse; class MainPageController extends ControllerBase { public function mainPage() { $return = array(); // Create key/value array. return new JsonResponse($return); } } Access Control

Hook_menu() also manages access control. Access control is now handled by the MODULE.routing.yml file. There are various ways to control access:

Allow access by anyone to this path:

example.main_page_controller: path: '/main' requirements: _access: 'TRUE'

Limit access to users with ‘access content’ permission:

example.main_page_controller: path: '/main' requirements: _permission: 'access content'

Limit access to users with the ‘admin’ role:

example.main_page_controller: path: '/main' requirements: _role: 'admin'

Limit access to users who have ‘edit’ permission on an entity (when the entity is provided in the path):

example.main_page_controller: path: '/node/{node}' requirements: _entity_access: 'node.edit'

See documentation for more details about setting up access control in your MODULE.routing.yml file.


So what if a route already exists (created by core or some other module) and you want to alter something about it? In Drupal 7 that is done with hook_menu_alter, but that hook is also removed in Drupal 8. It’s a little more complicated now. The simplest example in core I could find was in the Node module, which is altering a route created by the System module.

A class file at MODULE/src/Routing/CLASSNAME.php extends RouteSubscriberBase and looks like the following. It finds the route it wants to alter using the alterRoutes() method and changes it as necessary. You can see that the values that are being altered map to lines in the original MODULE.routing.yml file for this entry.

/** * @file * Contains \Drupal\node\Routing\RouteSubscriber. */ namespace Drupal\node\Routing; use Drupal\Core\Routing\RouteSubscriberBase; use Symfony\Component\Routing\RouteCollection; /** * Listens to the dynamic route events. */ class RouteSubscriber extends RouteSubscriberBase { /** * {@inheritdoc} */ protected function alterRoutes(RouteCollection $collection) { // As nodes are the primary type of content, the node listing should be // easily available. In order to do that, override admin/content to show // a node listing instead of the path's child links. $route = $collection->get('system.admin_content'); if ($route) { $route->setDefaults(array( '_title' => 'Content', '_entity_list' => 'node', )); $route->setRequirements(array( '_permission' => 'access content overview', )); } } }

To wire up the menu_alter there is also a file with an entry that points to the class that does the work:

services: node.route_subscriber: class: Drupal\node\Routing\RouteSubscriber tags: - { name: event_subscriber }

Many core modules put their RouteSubscriber class in a different location: MODULE/src/EventSubscriber/CLASSNAME.php instead of MODULE/src/Routing/CLASSNAME.php. I haven’t been able to figure out why you would use one location over the other.

Altering routes and creating dynamic routes are complicated topics that are really beyond the scope of this article. There are more complex examples in the Field UI and Views modules in core.

And More!

And these are still only some of the things that are done in hook_menu in Drupal 7 that need to be transformed to Drupal 8. Hook_menu is also used for creating menu items, local tasks (tabs), contextual links, and form callbacks. I’ll dive into the Drupal 8 versions of some of those in a later article.

More information about this topic:

Better, then Bigger: Cultivating the Drupal Community

Each year at the largest Drupal conferences in the world, Dries Buytaert, the creator and project lead of Drupal, presents keynotes about the current “State of Drupal.” These events are well known in the Drupal community as “Driesnotes” (Dries, obviously influenced by Steve Jobs, has quoted Jobs in his keynotes and has even ended multiple Driesnotes with “one more thing,” much like a Stevenote). Frequently, Dries will clarify ideas from his keynotes on his personal blog, which was the case in a post from last year titled “Scaling Open Source communities.” While his title suggests a focus on open source software, his more immediate ambition is scaling Drupal. Indeed, Dries conflates “Drupal” with “Open Source” in his article, concluding, “we can scale Drupal development to new heights and with that, increase Open Source’s impact on the world.” Dries would like to grow Open Source (he likes to capitalize these words) by growing Drupal.

It was certainly not the first discussion about scaling the Drupal community, but when Dries first made his case for “scaling” in Amsterdam in 2014, many seasoned Drupalers immediately realized this was not a typical Driesnote. Dries referenced a variety of economic theories, covering topics such as “public goods,” the “free rider problem,” “self-interest theory,” “collective action theory,” “selective benefits,” and “privileged groups.” He was not talking about the average number of times Drupal had been downloaded each day or charting the number of contributed modules, as he often did in previous “states of Drupal” talks. Dries was engaging in analysis. He warned the audience that he had been reading “lots of economic papers,” admitted that “there’s an academic hidden inside me,” and pleaded, “please don’t fall asleep.” That such a talk required these disclaimers revealed the level of patience our community typically has had for academically-oriented analysis. Dries proceeded in his keynote (but not his blog post) to cite peer-reviewed articles from the economist Paul Samuelson and the American ecologist Garrett Hardin, and he extrapolated ideas from economist Mancur Olson’s well-known 1965 book, The Logic of Collective Action. Rather than just see Dries presenting, the audience witnessed Dr. Buytaert historicizing. The British journalist Will Self once remarked, “Visionaries, notoriously, are quite free from ratiocination and devoid of insight.” With his new ideas based on economic theories, Dries contested the stereotype. The reaction from the community was generally positive, with his talk garnering such accolades as “historic” and “the best Dries keynote ever.”

This Driesnote signaled a more nuanced critique from an entrepreneur more accustomed to discussing books about Drupal than books from academic presses. More important, since Dries started promoting his ideas about economic theory, some of what he suggested has become reality. It probably comes as little surprise that this “benevolent dictator” can get things done. On his blog and in his talks he suggested various changes to, such as improved organizational profile pages featuring more statistics and adding corporate attributions to individual commit credits in Drupal code (a topic he had blogged about previously). He offered other concrete suggestions that might very well still be in the works, including new advertising opportunities on in exchange for fixing bugs, the opportunity for organizations to get better visibility on the job board, and the ability to sort the marketplace page by contributions rather than just alphabetically. All of his suggested improvements were technical in nature, and ostensibly designed to benefit organizations.

To his great credit, Dries maintains an openness to other ideas. In his Amsterdam keynote, Dries said, “these are not final solutions. These are just ideas, and I hope they will be starting points for discussion” (34:53). He said, “this is just me brainstorming” (43:49), and that we should keep working at building our community, “even if it takes us years to get it right” (47:27). Accordingly, here I try to adopt a constructivist approach that adds to – rather than subtracts from – what Dries suggested. I am here to assemble, not debunk. I bring the same attitude that I aspire to when dealing with any of my colleagues, which is to assume positive motivation – I assume that Dries, like my co-workers, has good intentions (others of Dries’s critics seem to forget this). Like Dries, I care deeply about the Drupal community and I would like to understand more about the problems we face, what Drupal means, and how various changes might affect our community dynamics. In the remainder of this article, I will spend most of my effort dissecting Dries’s suggestions, the logic behind them, and how they compare to the theories of the economic theorists he cites. Finally, I will offer a few of my own suggestions. I believe that we will be most successful not merely by convincing more people to work with us through technological manipulations, but instead by focusing on improving interactions within the community and a goal of cultivating social solidarity. In other words, I will argue that instead of using technology to grow our community, we should focus our efforts on adjusting our culture in order to improve our technology.

What Is the Problem?

Before we can discuss solutions, we should consider the problems that need solving. Dries mentions generalized goals of attracting “more contributors” to the Drupal project in order to “try more things and do things better and faster,” without interrogating what “better” means or why “faster” should be a goal. His solutions seem to suggest that we should lure organizations to get more involved by hiring Drupal core developers, although Dries admits that “hiring Drupal talent is hard.” That Dries does not make explicit the benefits of growing the community beyond increasing our capacity to do things “better and faster” indicates that he understands the problem to be obvious. But is the problem actually that straightforward? Does bigger mean better? Should we consider goals beyond growing the community?

Evgeny Morozov, a rigorous thinker with a combative style, would label Dries’s approach “solutionism.” In To Save Everything, Click Here: The Folly of Technological Solutionism, Morozov writes, “Solutionism presumes rather than investigates the problems that it is trying to solve” (6). Morozov is frustrated by the prevalence of solutionism in technology debates and he dislikes any debate that presupposes the inherent worth of technologies such as “the Internet” (nowadays, Morozov always puts “the Internet” in scare quotes) or Open Source. I agree, and for our purposes, we should not assume that scaling open source, or Drupal, is a venture of unquestionable worth. Do we grow Drupal for social reasons? Are we politically motivated? Is this activism? Is this a philosophical debate? Or should we all just assume economic motivations? Whatever the impetus, I feel that when we talk about growing Drupal, we should not approach this activity as one with absolute value.

One potential benefit might be to increase business. Perhaps Dries feels it unnecessary to explain his motives. Dries is not just a developer, he’s also a successful entrepreneur. Dries discusses his ideas about “scaling” on his blog, which is also the place where he posts his annual “retrospectives” about his Drupal company, Acquia. In other words, Dries uses his blog not just to share personal information and news related to the Drupal project, he also uses his blog for business. So it seems quite probable that he wants to do more than grow the community, and that his goal is also to grow his company. Dries has fully committed himself to Drupal, and as the value of the Drupal software increases, so does the value of his Drupal company. One can hardly fault someone who has to answer to investors and who seeks to take his company public.

Another possibility is that Dries needs to defend his company. Dries is keenly aware that Acquia contributes disproportionately more to the Drupal project than any other company, and he understandably seeks to change this situation. Indeed, multiple times during his presentation Dries discusses the ratio of contributors. Dries says that it is “all about this ratio” (26:29 minutes into his talk) and that changing the ratio “will fundamentally change the dynamics of the community” (26:40). I agree with the latter part of his suggestion in that growing the community beyond Acquia will ease the “exploitation” of Acquia. While “exploitation” may seem a bit strong in this context, I borrow this word from one of Dries’s primary informants, Mancur Olson, who uses it repeatedly in The Logic of Collective Action. Olson believes there exists a “systematic tendency for ‘exploitation’ of the great by the small” (29). So applying Olson’s idea to Dries’s subject, we could understand why Acquia – run by the founder of Drupal, offering Drupal services, and employing more Drupal contributors than any other organization – has to carry the most weight. We should not feel too bad, however, because while it may be that Acquia contributes disproportionately to Drupal, it is also true that Acquia benefits disproportionately as Drupal gets better. Arguably, Dries and his company have the most to gain when others participate in Drupal.

While Acquia grows with Drupal, there are certainly many others in the Drupal community that stand to benefit as well, especially the many other Drupal “agencies” (including Lullabot, where I work) as well as Acquia’s many partners. Dries writes, “my company Acquia currently employs the most full-time contributors to Drupal but does not receive any exclusive benefits in terms of monetizing Drupal. While Acquia does accrue some value from hiring the Drupal contributors that it does, this is something any company can do.” Certainly another part of Dries’s project is to entice Drupal agencies to contribute. But doesn’t this happen already? Don’t agencies understand that the prospect of scaling Drupal will lead to more clients and that it is in their best interest to contribute to Drupal? This topic of individuals contributing to groups is, in fact, one of the main subjects in Olson’s book, with his main point being that as groups get larger, rational individuals are less likely to participate. Olson’s thesis is that in large groups, “rational, self-interested individuals will not act to achieve their common or group interests” (2). Consequently, it is not difficult to understand why Dries was drawn to Olson’s theories – Olson’s study offers multiple perspectives on why individuals do not contribute to large groups, which Dries can use to help understand why all of these Drupal agencies do not fund as many Drupal core developers as Acquia. Even better, it offers multiple ideas about how to entice these agencies to help make Drupal “better and faster.”

While it would seem that Dries is focused primarily on growing Acquia and other Drupal businesses, he would also like to attract individuals. He later clarified his goal in a comment of a curious blog post, maintaining that he “proposed ways to increase the social capital of both individuals and organizations.” His most immediate goal is not, in fact, to “scale Open Source.” Rather, he seeks to encourage individuals and organizations to contribute to Drupal. And from Olson, Dries learns more methods for coercion, another term that Olson uses frequently in his book. Olson believes that members of a group will not “act to advance their common or group objectives unless there is coercion to force them to do so, or unless some separate incentive, distinct from the achievement of the common or group interest, is offered to the members of the group individually on the condition that they help bear the costs or burdens involved in the achievement of the group objectives” (2). Olson talks at length about various types of incentives – social, selective, economic, etc. – that would make participation in a group more rational.

It can be quite tricky to grok the motivations of the organizations and individuals that contribute to the Drupal project. Olson focuses primarily on individuals who are rational and self-interested. Olson’s subjects are individuals that “rationally seek to maximize their personal welfare” (2). In a similar manner, Dries believes “modern economics suggest that both individuals and organizations tend to act in their own self-interest,” even as he admits that contributions to Drupal are “often described as altruistic.” Dries’s discussion of “self-interest” hints at the difficulties in characterizing the motivations for participating in our community, and the need for subtlety. Especially with Drupal agencies, it can be difficult to generalize motivations. For example, I was recently talking with a senior member of the Drupal community, and a former Lullabot employee, who described Lullabot as a “lifestyle company” that seemingly puts the needs of its employees ahead of profit, and that is extremely selective when evaluating potential projects. His description of Lullabot feels apt, and by no means exclusive to Lullabot. Four Kitchens takes a similar approach with its employees by striving to cultivate a “culture of empowerment.” Think Shout goes a step further, having recently become a B Corp, which is a type of for-profit company that is required to make a positive impact on society and the environment. Or consider Enjoy Creativity, a nonprofit organization – required to act for the public good – that builds Drupal sites for churches and ministries. These kinds of Drupal agencies seem motivated by goals that are different from – if not in conflict with – traditionally capitalist goals where “the common good” is, as Ayn Rand put it, “merely a secondary consequence.” We might conclude that sometimes we just help others for no rational reason, and that, as Nietzsche famously observed, “in everything one thing is impossible: rationality.” Or we might adopt a more optimistic view, as Dries did in The Next Web, and conclude that “capitalism is starting to become more collaborative rather than centered around individual ownership.”

So it seems there exists a wide variety of potential problems that need solving, although few of them feel urgent. First, Acquia has too much control. One of our goals should be to ensure that Drupal is understood as institutionally independent and that no single company dictates its future. I agree with Dries in that we should work to change the ratio of developers contributing code to Drupal. Second, we should ensure that Drupal continues to welcome a wide variety of individuals and organizations, both those that have the resources to contribute to core and those that do not. Drupal must not be construed as something only for business. After all, we do not know if Drupal will survive without individual contributors. Finally, we should strive to adapt to change and continue to make decisions that are understood as welcoming as well as benefiting a broader community. It is fine to desire more individual and organization participation, but not if that means alienating significant groups within the community. In other words, rather than asking if a change will grow the Drupal project, we should ask if it will improve the Drupal project.

Drupal: Public Good or Collective Good?

Perhaps even more confounding than articulating Drupal’s problems is the task of determining what Drupal is. Previously I explored the “cultural construction of Drupal” and various narratives about Drupal in our community. Dries offers yet another narrative when he states clearly his belief that “Open Source projects are public goods.” He arrives at this conclusion because he feels that open source meets the two relevant criteria of “non-excludability” (“it is impossible to prevent anyone from consuming that good”) and “non-rivalry” (“consumption of this good by anyone does not reduce the benefits available to others”). Again, this is Dries borrowing from economic theory, and on the surface this seems like a useful way of thinking about Drupal, as well as free software.

Dries’s use of the term “public good” is not problematic in that a large body of research uses this term. However, returning to Olson, it seems unlikely that he would call Drupal a “public good” and that he would have instead used the term “collective good” or “common good.” Olson characterizes public goods as government goods. I arrived at this conclusion not only because Olson based his research on Paul Samuelson’s work – whose essay mentioned “collective,” not “public,” goods – or because the title of Olson’s book is the Logic of Collective Action, but also because Olson made statements like this in his book: “The common or collective benefits provided by governments are usually called ‘public goods’ by economists” (14, emphasis added). Olson was actually quite specific about this distinction between “public” and “collective” goods: “A state is first of all an organization that provides public goods for its members, the citizens; and other types of organizations similarly provide collective goods for their members” (15). Even so, Dries very clearly compared Drupal to other public goods that eventually became the purview of the government – on one slide he placed Drupal alongside roads, schools, parks, streetlights, and defense. He was clear that each of these goods went from “invention” to “product” to “utility,” and that each was controlled by “volunteers,” then “business,” then “government.”

While Dries certainly was not suggesting that the government take over control of Drupal, it seems a curious choice to compare Drupal to government projects. It makes for an interesting thought experiment to consider what happens when we understand Drupal as a public good, controlled by the government. Olson’s study, after all, concerns groups (representing individuals) that work for common interests. So to begin, it would make sense that the Drupal Association – a nonprofit “dedicated to helping the open-source Drupal CMS project flourish” – would be the group that represents we, the Drupal community. Partially akin to one of Olson’s labor unions, the Drupal Association works for a common interest. But what branch of the government would control Drupal? Would this also be the Drupal Association, with Dries as its president, moving under the purview of the government? Or would the government hire the core committers, with Dries still in his role as the benevolent dictator? But enough of that. One could (and should) object that comparing Drupal to the government is obtuse, and that it suggests a kind of economic determinism in which Drupal would become a government utility, which is clearly not anyone’s goal. I would agree, and while this may seem silly to construct a building with the lights on and nobody home, it helps to reveal what we already know – we do not actually want the government involved in Drupal. Many people in our community, including Dries, do not really consider Drupal to be like other government projects. Drupal is our project. We make it what we want and we do not want to (nor can we, really) hand the keys over to the government.

So what happens when we shift the focus to free software as a “collective” good? To be clear, this use of “collective” does not signify “the collective life of humanity” (as Philip Gilbert Hamerton once put it), but rather a group of individuals acting together. Conceiving of Drupal as a “collective” rather than a “public” can be helpful for a variety of purposes. For one, it helps to explain why Holly Ross, the executive director of the Drupal Association, talks openly and thoughtfully about why she is starting to question whether the most appropriate tax classification for the Drupal Association is 501c3 – an organization that exists for the public good – or if it should more appropriately be classified as a 501c6, a trade organization whose purpose is to grow the businesses that support it. While I was quite taken aback when she admitted this to me, I can understand the thesis. It seems quite likely that our community is moving away from the notion of Drupal as something for the public and instead something for our collective. The internal deliberations of the Drupal Association are yet another indication that our group is gradually becoming more business focused.

In the end, it does not especially matter if Drupal is a public good or a collective good if our focus is on improving the Drupal project. Our group, like the large organizations that Olson analyzes, is growing not just in members and contributors, but also in complexity of problems. We have a wide variety of ways to understand our community and its corresponding problems. A growing percentage of our membership is both self-interested and economically motivated, while other factions lean toward the selfless or the seemingly irrational. How one understands our community, and the problems that need solving, greatly informs how we go about finding solutions.

The Trouble with Technical Fixes

Dries likes to fix problems with technology because, like countless entrepreneurs before him, Dries has great faith in technology. He writes, “We truly live in miraculous times. Open Source is at the core of the largest organizations in the world. Open Source is changing lives in emerging countries. Open Source has changed the tide of governments around the world.” Talk of “miraculous times” is a bold assertion. It’s also an example of an attitude that Morozov, that pugnacious and insightful technology critic, describes in his book, To Save Everything, Click Here, as “epochalism”, or “to believe one is living in truly exceptional times” (36). The problem with this attitude, Morozov claims, is that it leads to unhealthy beliefs about technology. What Dries claims for Open Source is quite similar to what others had envisioned for the telegraph, radio, telephone, television, personal computers, and countless other technologies, which Morozov takes up in his book, The Net Delusion. Morozov cites a bevy of ideas with eerily familiar conjecture, some of which are worth noting here. For example, in 1858, a New Englander editorial proclaimed: “The telegraph binds together by a vital cord all the nations of the earth” (276). In 1921 the president of GE predicted that radio would be “a means for general and perpetual peace on earth” (278). And just a few years later, the New York Times critic Orrin Dunlap would foresee that “Television will usher in a new era of friendly intercourse between the nations of the earth” (280). Fast forward to 2014 and we read Dries making similar prognostications about open source changing organizations and governments. This belief in technology entices us into using it for new purposes.

Dries’s choice of a technical solution is confusing. In addition to works by Samuelson and Olson, Dries cites in his keynote a well-known article by Garrett Hardin titled “Tragedy of the Commons” (20:45). Dries is vague about how he understands this article (he accidentally calls it a “book”), which makes it all the more curious why he would mention it. The epigraph of the article reads, “The population problem has no technical solution; it requires a fundamental extension in morality” (emphasis added). What is more, Hardin’s first sentence contains the following quotes from Weisner and York: “It is our considered professional judgment that this dilemma has no technical solution. If the great powers continue to look for solutions in the area of science and technology only, the result will be to worsen the situation” (1243). Hardin was bullish in his suspicion of “technical solutions,” reiterating his position four years later in the preface to his 1972 book, Exploring New Ethics for Survival: “For too long have we supposed that technology would solve the ‘population problem.’ It won’t.” Like Morozov, Hardin is suspect of technical fixes to complex problems. Since Hardin’s essay focused on “a class of human problems” that he described as “no technical solution problems,” perhaps there was another aspect that Dries found helpful.

Hardin, who contends “it takes courage to assert that a desired technical solution is not possible” (1243), had agonized over how to convey his conclusions. Hardin also believes that we cannot succeed by appealing to conscience or by making people feel guilty. Hardin, like Olson, speaks of “coercion” to counteract the effects of self-interest. Hardin recommends that we cannot appeal to a person’s “sense of responsibility” but rather that in order to make changes we instead need a “mutually agreed upon coercion,” and that it may require infringing on personal liberties. He talks of the need to take away freedoms for the common good. Thirty years later, in 1998, Hardin would describe the ideas he presented in his earlier essay as his “first attempt at interdisciplinary analysis.” He felt that he was trying to solve a problem so large – the human overpopulation problem – that he could not employ simple, technological fixes, and that it would be necessary to draw on conclusions derived from multiple disciplines. People, not computers, would have to work together.

Moreover, there are pitfalls with technological fixes beyond what Hardin construes (and again, I draw inspiration from Morozov and others). For example, introducing technological fixes can irritate existing social conflicts. Organizations that have long flourished in the Drupal community might be embarrassed by the new profile pages and might be less inclined to contribute, not more. Technological fixes can also distract, or act as mechanisms for denying the existence of deeper social problems – higher listings on the marketplace page, for example, will not distract individuals and organizations that are upset by Acquia’s sales techniques or who have concerns about its influence on Drupal Association webinars. When technological fixes do not work, they can have the effect of making us think that we just need a different technological fix. Dries seems to express just this attitude when he writes, “There are plenty of technical challenges ahead of us that we need to work on, fun ideas that we should experiment with, and more.” If these are intellectually challenging problems that require serious discussions, and not just “fun ideas,” we will never get to the point of solving our problems.

Perhaps the most troublesome trait of technological fixes is when they close down thoughtful contributions by people with knowledge about addressing social and political problems. Dries broaches the topic of “social capital” in his Amsterdam keynote (28:24), saying, “this is where we are good.” But he follows that up, suggesting that “altruism” and “social capital” are not scalable (31:37) and that these are not solutions for Drupal (31:55). Why close down discussion of these topics and abandon ideals that have served the community so well? What would happen if, rather than discarding these modes of investigation, we dug deeper to find alternative answers? What if the solution to scaling Drupal lies not with technology? What if, rather than use technology to change our community and culture we reverse our efforts and instead focus on making adjustments to our culture in order to improve our technology? Or perhaps we should only consider technological fixes that support broader efforts to improve the Drupal project rather than simply grow it? As the German philosopher Martin Heidegger put it, “the essence of technology is by no means anything technological.”

Sprouting Social Solidarity

When I suggest we need to look beyond technical solutions, this is not necessarily contra Dries. In another, much shorter blog post on “Open Source and social capital” – posted less than a month before his post about “scaling Open Source” – Dries concluded, “social capital is a big deal; it is worth understanding, worth talking about, and worth investing in. It is key to achieving personal success, business success and even happiness.” Plus, Dries has written about “fostering inclusivity and diversity” on his blog. Like Dries, I do not believe that there is only one way to grow the Drupal community. We can use technology to support our broader goals, so discarding all technological fixes is not my objective. Rather, I am suggesting an approach to cultivating our community that mirrors how we make changes to Drupal code – we carefully consider how each change will improve the overall project, never assuming that more automatically means better.

What is more compelling to me than technological fixes is to examine how Drupal and cultures around the globe shape each other, and how we can create more situations where more individuals make the choice to start participating in our community. This mode of investigation requires a multidisciplinary approach, a broader understanding not just of economic transactions, but also human interactions. I agree with Lars Udéhn’s assessment that “Olson’s theory of collective action has proved inadequate and must be replaced by a theory assuming mixed motivations” (239). The last time I checked, Drupal’s unofficial slogan is not “come for the code, stay for the economy” – it’s about community, and that is where I believe we should concentrate our efforts.

While I found Dries’s turn to analysis refreshing, I also question that his informants offer the most helpful of ideas. Hardin changed his mind many times over his career, a fact he readily admits, so it would seem reasonable to explore his later ideas. Samuelson’s article, with its thick prose and mathematical formulas, feels quite unrelated to Drupal. It seems reasonable that someone like Dries, with all his responsibilities, should not be compelled to trace the history of theories of rationalism and social action from Aristotle to Descartes to Kant to Max Weber and beyond – especially not in a single Driesnote. If only we did not have those pesky clients to help and 529s to fund, we could explore more of these ideas. All of this is but a reminder that Dries’s sources are certainly not the last words on these subjects.

Not to pick on Olson, but it also does not seem as though he considered the full force of social solidarity to Marx’s thinking about motivation. In his Economic and Philosophical Manuscripts, Marx writes of workers who get together to further their shared goal, “but at the same time, they acquire a new need – the need for society – and what appears as a means had become an end.” For Marx, building relationships was another form of production, and social solidarity was a key component for bringing about change (for more detailed critiques of Olson’s interpretation of Marx, see, for example, Gomberg or Booth below). Likewise, social solidarity is a significant force in the Drupal community. In my local Drupal community we not only have a monthly “user group” meeting, but every month we also have a “jam session” (coder meetup), community “lab hours,” and a social meetup at a bar. Many individuals in our community help organize the Twin Cities DrupalCamp, attend the nearby DrupalCorn or DrupalCamp Midwest, and travel to the annual North American DrupalCon (DrupalCons, organized by the Drupal Association, are the largest Drupal conferences in the world). The people we interact with become important to our lives – not just collaborators, but friends. “It is not the consciousness of men that determines their existence,” Marx wrote in A Contribution to the Critique of Political Economy, “but their social existence that determines their consciousness.”

We, as a community, would benefit from questioning our own unexamined beliefs, no matter what discomfort it may cause. We should continue to ask if successful programs like D8 Accelerate – a project that funds Drupal core development through grants – truly benefit our community, or if they might foster a collective motivated primarily by money. The way Drupal production is organized affects our understanding of it, and how we choose to coerce individuals matters. While many would prefer economic incentives and hard science over humanities, some in our community are marginalized and brushed aside by such priorities. Perhaps we will determine that it is in our best interest to ensure that our community exists more for the public than for our collective. It could be that programs like D8 Accelerate negatively affect solidarity.

If IRC and issue queues online beget lively debates at conferences and code sprints in person, we should continue to examine each of those interactions. For instance, I agree with Larry Garfield when he writes, “The new contributor first commit is one of #DrupalCon’s most important rituals.” At the end of a week-long conference, the community code sprint occurs on the final day. During this sprint, veteran Drupalers train new contributors about the peculiarities of contributing code to the Drupal code base. Near the end of the day, one or more lucky individuals are picked to go in front of everyone else where Dries commits that person’s contribution. Personally, it was this event that got me hooked on the Drupal community. This symbolic act welcomes people to our community, demonstrates their worth, and gives future contributors some extra motivation as they work toward finding problems to solve.

Moreover, we should promote a wide variety of events, encouraging more meetups, social events, and quasi-productive gatherings where code and conversation flow freely. The Drupal Association has already made steps in the right direction when they announced the results of their survey and their resulting “new approach to community at DrupalCon.” The community theme in this announcement was comprehensive: “Community Keynote,” “Community Kickoff,” “Community BoFs,” “Community Training,” and “Community Sprints.” One could argue that the Drupal Association is bringing these activities back, and that previously they did not require the “community” prefix.

Finally, I hope to see more thoughtful writing from our community about our community. The complexity of our community makes this a difficult task for an outsider. In addition to recommendations about “how to configure a View” and “how to make a page load faster” on Planet Drupal, many of us would like to know how other local Drupal communities work. What has been successful? How do they grow their membership? What does it mean to grow membership? The problem is not that we never discuss these issues, it is that we tend not to interrogate these issues more thoroughly “in print.” Drupal Watchdog is a step in the right direction, with its slightly longer form articles that allow the community to share their ideas in a more considered manner than a traditional blog post. While sharing ideas is nice, it can be even more helpful to share our ideas after they have been improved by an editor. While there are many issues of Drupal Watchdog that contain content that I find less engaging, I am glad that it allows for a wider range of voices.

All too often we incorrectly describe Drupal as a means to an end, detached from a political agenda. We forget that organizations use Drupal not only as a tool, or even because of the community. Some organizations, such as the Free Software Foundation, clearly choose their software, including Drupal, for philosophical reasons first. Or consider the American Booksellers Association (ABA), an organization engaged in political and trade-related efforts geared toward helping independently owned bookstores. The ABA’s hundreds of Drupal websites represent just one component of their larger political project. Like so many nonprofits, the ABA has a staff of passionate individuals dedicated to the cause, and their conception of Drupal must not conflict with their ideals. Consequently, I would like to see more posts on Planet Drupal that test the boundaries of the guidelines, which discourage “posts that don’t provide valuable, actionable content.” It would be nice to see more thoughtful articles that discuss political agendas and activities, and then describe how Drupal supports those activities. Countless people are inspired to use Drupal for reasons that have nothing to do with technology, and we should consider encouraging more of these stories.

While I have many other ideas that I am tempted to suggest here, those ideas are more properly topics for another article. That said, I think we can certainly benefit from studying other free software communities. When I was sitting in the audience for the DriesNote at DrupalCon Los Angeles in May, I suggested on Twitter that it “sounds like @Dries gets lots of inspiration from proprietary products (Pinterest, Pandora), rather than from other free software.” Dries later saw my tweet and clicked the “Favorite” button. I think we would benefit not just from discussing other free software projects, but also interrogating the thinking about them. The kind of scholarship that I have found most illuminating is not that of economists, but rather work like Gabriella Coleman’s anthropological studies of the Debian community and groups associated with Anonymous, as well as Christopher Kelty’s ethnographic research into free software. There is a great deal to be gained by considering our ideas about Drupal in light of what we know about the Linux community, the Fedora project, OpenStack, and other large free software communities, while acknowledging that the Drupal community is complex and that there are no easy answers or solutions.

The distinguished literary theorist Terry Eagleton has remarked, “most people are too preoccupied with keeping themselves afloat to bother with visions of the future. Social disruption, understandably enough, is not something most men and women are eager to embrace” (194). I understand why many in our community would not be quick to embrace any sort of radical change, but I also think it’s important that we talk about these issues. We cannot offload all of our problem solving to technology. To change what we think, we must change what we do. Making the case that the Drupal project should focus on its community and culture might seem less exciting the innovative technical solutions, but I hope to have highlighted just a few of the approaches to understanding our community that could prove beneficial, and that we should be careful as we consider which of them to adopt. Dries, in his recent turn to historicizing, is on the right track, and I hope the conversation continues.

Works Cited

Booth, Douglas E. “Collective Action, Marx’s Class Theory, and the Union Movement.” Journal of Economic Issues 12 (1978): 163-185.

Coleman, Gabriella. Coding Freedom: The Ethics and Aesthetics of Hacking. Princeton: Princeton University Press, 2012.

Coleman, Gabriella. Hacker, Hoaxer, Whistleblower, Spy: The Many Faces of Anonymous. New York: Verso, 2014.

Eagleton, Terry. Why Marx Was Right. New Haven: Yale University Press, 2011.

Gomberg, Paul. “Marxism and Rationality.” American Philosophical Quarterly 26 (1989): 53-62.

Hamerton, Philip Gilbert. The Intellectual Life. New York: Macmillan, 1875.

Hardin, Garrett. Exploring New Ethics for Survival: The Voyage of the Spaceship Beagle. New York: Viking Press, 1972.

Hardin, Garrett. “Extensions of ‘The Tragedy of the Commons.’” Science 280 (1998): 682-683.

Hardin, Garrett. “The Tragedy of the Commons.” Science 162 (1968): 1243-1248.

Heidegger, Martin. The Question Concerning Technology and Other Essays. New York: Garland, 1977.

Kelty, Christopher. Two Bits: The Cultural Significance of Free Software. Durham: Duke University Press, 2008.

Morozov, Evgeny. The Net Delusion: The Dark Side of Internet Freedom. New York: PublicAffairs, 2011.

Morozov, Evgeny. To Save Everything, Click Here: The Folly of Technological Solutionism. New York: Public Affairs, 2013.

Nietzsche, Friedrich. The Portable Nietzsche. Translated by Walter Kaufmann. New York: Penguin Books, 1977.

Olson, Mancur. The Logic of Collective Action: Public Goods and the Theory of Groups. Cambridge, Mass.: Harvard University Press, 1971.

Rand, Ayn. Capitalism, the Unknown Ideal. New York: New American Library, 1966.

Samuelson, Paul. “The Pure Theory of Public Expenditure.” The Review of Economics and Statistics 36 (1954): 387-389.

Self, Will. How the Dead Live. New York: Grove Press, 2000.

Udéhn, Lars. “Twenty-Five Years with ‘The Logic of Collective Action.’” Acta Sociologica 36 (1993): 239-261.

Processing expensive back-end operations

During the life cycle of a Drupal project there are many situations when you need to do expensive operations. Examples of these are: populating a newly created field for thousands of entities, calling a webservice multiple times, downloading all of the images referenced by a certain text field, etc.

In this article, I will explain how you can organize these operations in order to avoid the pitfalls related to them. I created a GitHub repository with the code of every step in this article.

You will see how we can use update hooks, drush commands or Drupal queues to solve this. Depending on the situation you’ll learn to use one or the other.


The UX team at the Great Shows TV channel has come up with an idea to improve the user experience on their Drupal website. They are partnering with Nuts For TV, an online database with lots of reviews of TV show episodes, fan art, etc. The idea is that whenever an episode is created –or updated– in the Great Shows website, all the information available will be downloaded from a specific URL stored in Drupal fields. Also, they have gone ahead and manually updated all of the existing episode nodes to add the URL to the new field_third_party_uri field.

Your job as the lead back-end developer at Great Shows is to import the episode information from Nuts For TV. After writing the preceptive hook_entity_presave that will call _bg_process_perform_expensive_task, you end up with hundreds of old episode nodes that need to be processed. Your first approach may be to write an update hook to loop through the episode content and run _bg_process_perform_expensive_task.

The example repo focuses on the strategies to deal with massive operations. All the code samples are written for educational purposes, and not for their direct use.

Time expensive operations are many times expensive in memory resources as well. You want to avoid the update hook to fail because the available memory has been exhausted.

Do not run out of memory

With an update hook you can have the code deployed to every environment and run database updates as part of your deploy process. This is the approach taken in the first step in the example repo. You will take the entities that have the field_third_party_uri attached to them and process them with _bg_process_perform_expensive_task.

/** * Update from a remote 3rd party web service. */ function bg_process_update_7100() { // All of the entities that need to be updated contain the field. $field_info = field_info_field(FIELD_URI); // $field_info['bundles'] contains information about the entities and bundles // that have this particular field attached to them. $entity_list = array(); // Populate $entity_list // Something like: // $entity_list = array( // array('entity_type' => 'node', 'entity_id' => 123), // array('entity_type' => 'node', 'entity_id' => 45), // ); // Here is where we process all of the items: $succeeded = $errored = 0; foreach ($entity_list as $entity_item) { $success = _bg_process_perform_expensive_task($entity_item['entity_type'], $entity_item['entity_id']); $success ? $succeeded++ : $errored++; } return t('@succeeded entities were processed correctly. @errored entities failed.', array( '@succeeded' => $sandbox['succeeded'], '@errored' => $sandbox['errored'], )); }

This is when you realize that the update hook never completes due to memory issues. Even if it completes in your local machine, there is no way to guarantee that it will finish in all of the environments in which it needs to be deployed. You can solve this using batch update hooks. So that's what we are going to do in Step 2.

Running updates in batches

There is no exact way of telling when you will need to perform your updates in batches, but if you answer any of these questions with a yes, then you should do batches:

  • Did the single update run out of memory in your local?
  • Did you wonder if the update was dead when running a single batch?
  • Are you loading/updating more than 20 entities at a time?

While these provide a good rule of thumb, every situation deserves to be evaluated separately.

When using batches, your episodes update hook will transform into:

/** * Update from a remote 3rd party web service. * * Take all the entities that have FIELD_URI attached to * them and perform the expensive operation on them. */ function bg_process_update_7100(&$sandbox) { // Generate the list of entities to update only once. if (empty($sandbox['entity_list'])) { // Size of the batch to process. $batch_size = 10; // All of the entities that need to be updated contain the field. $field_info = field_info_field(FIELD_URI); // $field_info['bundles'] contains information about the entities and bundles // that have this particular field attached to them. $entity_list = array(); foreach ($field_info['bundles'] as $entity_type => $bundles) { $query = new \EntityFieldQuery(); $results = $query ->entityCondition('entity_type', $entity_type) ->entityCondition('bundle', $bundles, 'IN') ->execute(); if (empty($results[$entity_type])) { continue; } // Add the ids with the entity type to the $entity_list array, that will be // processed later. $ids = array_keys($results[$entity_type]); $entity_list += array_map(function ($id) use ($entity_type) { return array( 'entity_type' => $entity_type, 'entity_id' => $id, ); }, $ids); } $sandbox['total'] = count($entity_list); $sandbox['entity_list'] = array_chunk($entity_list, $batch_size); $sandbox['succeeded'] = $sandbox['errored'] = $sandbox['processed_chunks'] = 0; } // At this point we have the $sandbox['entity_list'] array populated: // $entity_list = array( // array( // array('entity_type' => 'node', 'entity_id' => 123), // array('entity_type' => 'node', 'entity_id' => 45), // ), // array( // array('entity_type' => 'file', 'entity_id' => 98), // array('entity_type' => 'file', 'entity_id' => 640), // array('entity_type' => 'taxonomy_term', 'entity_id' => 74), // ), // ); // Here is where we process all of the items: $current_chunk = $sandbox['entity_list'][$sandbox['processed_chunks']]; foreach ($current_chunk as $entity_item) { $success = _bg_process_perform_expensive_task($entity_item['entity_type'], $entity_item['entity_id']); $success ? $sandbox['succeeded']++ : $sandbox['errored']++; } // Increment the number of processed chunks to see if we finished. $sandbox['processed_chunks']++; // When we have processed all of the chunks $sandbox['#finished'] will be 1. // Then the update runner will consider the job finished. $sandbox['#finished'] = $sandbox['processed_chunks'] / count($sandbox['entity_list']); return t('@succeeded entities were processed correctly. @errored entities failed.', array( '@succeeded' => $sandbox['succeeded'], '@errored' => $sandbox['errored'], )); }

Note how the $sandbox array will be shared among all the batch iterations. That is how you can detect that this is the first iteration –by doing empty($sandbox['entity_list'])– and how you signal Batch API that the update is done. The $sandbox is also used to keep track of the chunks that have been processed already.

By running your episode updates in batches your next release will be safer, since you will have decreased the chances of memory issues. At this point, you observe that this next release will take two extra hours because of these operations running as part of the deploy process. You decide that you will write a drush command that will take care of updating all your episodes, that will decouple the data import from the deploy process.

Writing a custom drush command

With a custom drush command you can run your operations in every environment, and you can do it at any time and as many times as you need. You have decided to create this drush command so Matt (the release manager at Great Shows) can run it as part of the production release. That way he can create a release plan that is not blocked by a 2 hours update hook.

Drush runs in your terminal, and that means that it will be running under PHP CLI. This allows you to have different configurations to run your drush commands, without affecting your web server. Thus, can set a very high memory limit for PHP CLI to run your expensive operations. Check out Karen Stevenson’s article to test your custom drush commands with different drush versions.

To create a drush command from our original update hook in Step 1 we just need to create the drush file and implement the following methods:

  • hook_drush_command declares the command name and options passed to it.
  • drush_{MODULE}_{COMMANDNAME}. This is the main callback function, the action will happen here.

This results in:

/** * Main command callback. * * @param string $field_name * The name of the field in the entities to process. */ function drush_bg_process_background_process($field_name = NULL) { if (!$field_name) { return; } // All of the entities that need to be updated contain the field. $field_info = field_info_field($field_name); $entity_list = array(); foreach ($field_info['bundles'] as $entity_type => $bundles) { // Some of the code has been omitted for brevity’s sake. See the example repo // for the complete code. // At this point we have the $entity_list array populated. // Something like: // $entity_list = array( // array('entity_type' => 'node', 'entity_id' => 123), // array('entity_type' => 'file', 'entity_id' => 98), // ); // Here is where we process all of the items: $succeeded = $errored = 0; foreach ($entity_list as $entity_item) { $success = _bg_process_perform_expensive_task($entity_item['entity_type'], $entity_item['entity_id']); $success ? $succeeded++ : $errored++; } }

Some of the code above has been omitted for brevity’s sake. Please look at the complete example.

After declaring the drush command there is almost no difference between the update hook in Step 1 and this drush command.

With this code in place, you will have to run drush background-process field_third_party_uri in an environment to be able to QA the updated episodes. Drush also introduces some additional flexibility.

As the dev lead for Great Shows, you know that even though you can configure PHP CLI separately, you still want to run your drush command in batches. That will save some resources and will not rely on the PHP memory_limit setting.

A batch drush command

The transition to a batch drush command is also straightforward. The callback for the command will be responsible for preparing the batches. A new function will be written to deal with every batch, which will be very similar to our old command callback.

Looking at the source code for the batch command you can see how drush_bg_process_background_process is reponsible for getting the array of pairs containing entity types and entity IDs for all of the entities that need to be updated. That array is then chunked, so every batch will only process one of the chunks.

The last step is creating the operations array. Every item in the array will describe what needs to be done for every batch. With the operations array populated we can set some extra properties to the batch, like a callback that runs after all batches, and a progress message.

The drush command to add the extra data to the episodes uses two helper functions in order to have more readable code. _drush_bg_callback_get_entity_list is a helper function that will find all of the episodes that need to be updated, and return the entity type and entity ID pairs. _drush_bg_callback_process_entity_batch will update the episodes in the batch.

It is common to need to run a callback on a list of entities in a batch drush command.  Entity Process Callback is a generic drush command that lets you select the entities to be updated and apply a specified callback function to them. With that you only need to write a function that takes an entity type and an entity object and pass the name of that function to drush epc node _my_custom_callback_function. For our example, all the drush code is simplified to:

/** * Helper function that performs an expensive operation for EPC. */ function _my_custom_callback_function($entity_type, $entity) { list($entity_id,,) = entity_extract_ids($entity_type, $entity); _bg_process_perform_expensive_task($entity_type, $entity_id); }

Running drush batch commands is a very powerful and flexible way of executing your expensive back-end operations. However, it will run all of the operations sequentially in a single run. If that becomes a problem you can leverage Drupal’s built-in queue system.

Drupal Queues

Sometimes you don’t care if your operations are executed immediately, you only need to execute the operations at some point in the near future. In those cases, you may use Drupal queues.

Instead of updating the episodes immediately, there will be an operation per episode waiting in the queue to be executed. Each one of those operations will update an episode. All of the episodes will be updated only when all of the queue items –the episode update operations– have been processed.

You will only need an update hook to insert a queue item to the queue with all the information for the episode to be updated later. First, create the new queue that will hold the episode information. Then, insert the entity type and entity ID in the queue.

At this point you have created a queue and inserted a bunch of entity type and ID pairs, but there is nothing that is processing those items. To fix that you need to implement hook_cron_queue_info so queue elements get processed during cron runs. The 'worker callback' key holds the function that is executed for every queue item. Since we have been inserting an array for the queue item, that is what _bg_process_execute_queue_item –your worker callback– will receive as an argument. All that your worker needs to do is to execute the expensive operation.

There are several ways to process your queue items.

  • Drupal core ships with the aforementioned cron processing. This is the basic method, and the one used by Great Shows for their episode updates.
  • Similar to that, drush comes with drush queue-list and drush queue-run {queue name} to trigger your cron queues manually.
  • Fellow Lullabot Dave Reid wrote Concurrent Queue to process your queue operations in parallel and decrease the execution time.
  • The Advanced Queue module will give you extra niceties for your queues.
  • Another alternative is Queue Runner. This daemon will be monitoring your queue to process the items as soon as possible.

There are probably even more ways to deal with the queue items that are not listed here.


In this article, we started with a very naive update hook to execute all of our expensive operations. Resource limitations made us turn that into a batch update hook. If you need to detach these operations from the release process, you can turn your update hooks into a drush command or a batch drush command. A good alternative to that is to use Drupal’s queue system to prepare your operations and execute them asynchronously in the (near) future.

Some tasks will be better suited for one approach than others. Do you use other strategies when dealing with this? Share them in the comments!

Between Releases: When Should I Adopt the Newest Version of Drupal?

Watching the Drupal release cycle ebb and flow reminds me of sitting on the beach as the waves roll in. There is Drupal 5! It’s getting closer and closer! Finally it crashes on the beach in a splash of glory. But immediately, and initially imperceptibly, it starts to recede, making way for Drupal 6. And so the cycle goes, Drupal 5 recedes and Drupal 6 rushes in. Drupal 6 is overcome by Drupal 7. And now, as I write this, we’re watching as Drupal 7 washes back and Drupal 8 towers over the beach.

Each new version of Drupal is a huge improvement on the one before. But each version also introduces uncertainties. Is all that new functionality necessary? Has Drupal core become ‘bloated’? Does it do too much (or too little)? Will it be performant? How much work will it take to implement? Is it still buggy? And, arguably, the most important question of all, when will the contributed modules we need catch up?

So when is Drupal “ready” for our clients? If clients want a new site in this between-releases period, do we build it on the solid, safe, predictable older release? Or jump in with the shiny, new, improved release that is just over the horizon, or just released? Or do we wait for the new version to mature further and delay building a new site until it’s ready?

We’ve dealt with these questions over and over through the years. Knowing when to embrace and build on a new major release requires careful consideration along several axis, and making the right decision can be the difference between success and failure.Here are the guidelines I use.

How Complex is the Site?

If the site is simple and can be built primarily with Drupal Core, than the shiny new version is likely a safe bet. Contributed modules may add nice features, but creating the site without many (or any) of them will mitigate your risk.

Each new Drupal release pulls into core some functionality that was previously only possible using contributed modules. Drupal 5 allowed you to create custom content types in the UI. Drupal 7 added custom fields to core. Drupal 8 brings Views into core. And every Drupal release makes some contributed modules obsolete. If the new core functionality is a good match for what the site needs, we’ll be able to build a new site without using (and waiting for) those contributed modules, which would be a good reason to build out on the frontier.

Correspondingly, if the site requires many contributed modules that are not included in core, we’ll have to wait for, and perhaps help port, those modules before we can use the new version. If we can’t wait or can’t help we may have no choice but to use the older version, or wait until contributed modules catch up.

How Tight is the Deadline?

It will probably take longer to build a site on a new version of Drupal that everyone is still getting familiar with than an older version that is well understood. It always takes a little longer to do things when using new processes as when repeating patterns you’ve used many times before.

Delays will also be introduced while waiting for related functionality to be ready. Perhaps there is a contributed module that solves a problem, but it hasn’t been ported yet, so we have to stop and help port it. Or there may not be any contributed module that does anything close to what we need, requiring us to plan and write custom code to solve the problem. Latent bugs in the code may emerge only when real world sites start to use the platform, and we might have to take time to help fix them.

In contrast, if we’re using the mature version of Drupal, odds are good that the bugs have been uncovered and there is code somewhere to do pretty much anything that needs to be done. It might be a contributed module, or a post with examples of how others solved the problem, or a gist or sandbox somewhere. Whatever the problem, someone somewhere probably has already run into it. And that code will either solve the problem, or at least provide a foundation for a custom solution, meaning less custom code.

Basically, if the deadline is a key consideration, stick with the tried and true, mature version of Drupal. There just may not be enough time to fix bugs and create custom code or port contributed modules.

How Flexible is the Budget?

This is a corollary to the previous question. For all the same reasons that a deadline might be missed, the budget may be affected. It takes more time (and money) to write custom code (or stop and port related contributed modules). So again, if budget is tight and inflexible, it might be a bad decision to roll out a site on a shiny new version of Drupal.

How Flexible is the Scope?

If we use the latest, greatest, version of Drupal, is the scope flexible enough to allow us to leverage the way the new code works out of the box? If not, if the requirements of the new site force us to bend Drupal to our will, no matter what, it will require custom code. If we build on a more mature version of Drupal we may have more existing modules and code examples to rely on for that custom functionality. If we build on the bleeding edge, we’ll be much more on our own.

Where is the Data Coming From?

If this is a new, from-scratch site, and there’s no need to migrate old data in, that would be a good use case for building this shiny new site with the latest, greatest version of Drupal.

But if there is an existing site, and we need to not only create a new site, but also migrate data from the old site to the new, the question of which version to use gets more complicated. If the source is another, older Drupal site, there will (eventually) be a supported method to get data from the old site to the new site. Even so, that may not be fully ready when the new version of Drupal is released. Drupal 8 uses Migrate module for data migration, but only the Drupal 6 to Drupal 8 migration path is complete, and that migration process will likely improve in future point releases. The upgrade path in previous versions of Drupal was often fraught with problems early on. It's something that never gets fully baked until the new version is in use and the upgrade process is tested over and over with complex, real-world sites. So the need to migrate data is another reason to use the older, more mature version of Drupal (or to wait until the new release is more mature).

How Important Is the New Hotness?

Every version of Drupal has a few things that just weren’t possible in previous versions. CMI (Configuration Management) in Drupal 8 provides a much more rational process for deploying code and configuration changes than Drupal 7 does. Drupal 7 requires banging your head against the limitations of the Features module, which in turn is hampered by the fact that Drupal 7 core just isn’t architected in a way that makes this easy. And Drupal 8 core has built-in support for functionality previously only possible by using one or more additional Services modules in Drupal 7.

If these new features are critical features, and if struggling to solve them in older versions has been a time sink or requires complex contributed modules, it makes sense to dive into the latest greatest version that has this new functionality built in.

How Long Should It Last?

A final question is how often the site gets re-built. If it is likely to be redesigned and re-architected every two or three years to keep it fresh, there should be little concern about rolling out on the older, mature version of Drupal. Drupal 7 will be supported until Drupal 9 is released, and that is likely to be a long time in the future. If it will be many years before there will be budget to re-build this site that might be a reason to build it on the latest version, delaying the project if necessary until the latest version is fully supported by contributed modules and potential problems have been worked out.

It’s Complicated!

The ideas above are just part of the thought process we go through in evaluating when to use which version of Drupal. It’s often a complex question with no black and white answers. But I take pride in our ability to use our long experience with Drupal to help clients determine the best path forward in these between-release periods.

5 Things You Can Do to Save Your Sanity When Updating to Photoshop & Illustrator 2015

After updating to Adobe CC 2015, there were some small changes that drove me completely nuts, especially in Photoshop and Illustrator. I spent an ungodly amount of time combing through preferences and tool settings (I even watched a video to relearn how to use the crop tool in Photoshop), and I think I finally have Photoshop and Illustrator set to the point where I can actually use them without pulling out my hair. In this article I provide some tips that hopefully spare you some frustration when using Photoshop or Illustrator CC 2015 for the first time.

Disabling the animated zoom functionality in Illustrator

Adobe’s new animated zoom feature is turned on by default in the new version of Adobe CC 2015. Luckily, they provide  a way to turn this off.  If you’re like me and find the animated zoom annoying, choppy, or worse;  slowing down your computer, then I would recommend disabling animated zoom. You can find it in Preferences under GPU Performance.

You can find GPU Performance under the Illustrator CC Preferences menu. Uncheck the Animated Zoom option. Disabling the GPU performance in Illustrator

If you’re experiencing issues with graphics and typography displaying in Illustrator, it could be related to the GPU performance. When you launch Illustrator for the first time, Illustrator will recommend that you turn this on if you have a compatible graphics card. Not knowing what it was at the time or what it did, I chose to follow Adobe’s recommendation and turned this on. After all, Adobe told me that I did have a compatible graphics card. This is what I saw when I opened my first file after the update.

The above is not what I expected to happen. After restarting Illustrator a couple of times and then restarting my computer without success, I realized that it may be a preference setting in Illustrator that’s gone rogue. I stumbled across the GPU Performance option and immediately realized that this was probably the culprit. I unchecked it and viola, issue solved.

One thing to note is that when you disable the GPU performance, you also disable the animated zoom. Yes! I love killing two birds with one stone. My issue was pretty severe, but there are other smaller graphics problems that can appear due to GPU performance including random lines appearing and disappearing when scrolling down a page or while using the hand tool to “swim” through designs. My suggestion is that if you’re experiencing any weird graphic anomalies, try turning off the GPU performance to see if the problem is resolved. You can find the option under Preferences.

You can find GPU Performance under the Illustrator CC Preferences menu item. Uncheck the GPU Performance option to disable it. This will also disable the Animated Zoom option. Enabling the resizing handles when clicking on an object in Illustrator

By default, Illustrator disables the resizing handles that appear when you click on an object on the artboard. I’d love to be able to understand the data behind this decision, because most designers I know use the bounding box handles to resize objects more than the actual transform tool. Good thing you can re-enable these fairly quickly with a shortcut or a menu item. Just press command+shift+B to toggle the shortcut for the bounding box, or you can find the Show Bounding Box option under the menu item View.

You can find the Show/Hide bounding box option under the View menu. Enabling classic crop setting when cropping in Adobe Photoshop

After upgrading Photoshop, I experienced some new issues with the crop tool, especially when trying to crop artboards. Moving and rotating the crop on artboards seemed unnatural and I’ve found that sometimes when resizing the crop, I also unintentionally rotated the canvas. I even watched a video to relearn how to use the crop tool. In this video they revealed the classic crop option, which I’ve been using ever since. The classic crop option has been around since the release of Adobe CC but I had completely forgotten the option was available, probably because it’s hidden away behind a nondescript settings icon that can be difficult to find. If the default crop tool drives you crazy and makes you want to throw stuff around the room, then I highly recommend using the classic crop tool option. It’s much more familiar and predictable in how it works.

To enable classic crop mode, click on the crop tool, and then click on the gear icon that appears along the top of the screen. Add a checkmark next to the Use Classic Mode option. I also enabled the Enable Crop Shield option to help with accuracy when cropping.

When using the crop tool, you can enable Classic Mode by clicking on the gear icon that appears along the the top and adding a checkmark next to Use Classic Mode. To increase accuracy when cropping, I would also suggest turning on Enable Crop Shield. Utilize the option to export individual layers in Photoshop

The Save for Web option has been labeled as legacy and seems like it’s being replaced with a general export. The shortcut command+option+shift+s still works for the save for web option, and there’s still a menu item for it, but I’ve embraced the new export option mostly because you can export individual layers. The option to export individual layers instead of using the slice tool has improved my workflow and I’ve found that in some cases, I can export assets much more quickly without worrying about creating multiple layers of slices to extract specific graphics from a design.

To export individual layers, select the layers you want to export in the layer panel and right click to reveal the menu. Choose the Export As option. To export multiple layers at once, shift+select or command+select the layers you want to export and then right click to reveal the menu. You can also select Quick Export as PNG if you don’t want to customize any of the settings before exporting.

To export assets on a single or multiple layers, right click the selected layers in the layers panel to reveal the Export As option. There’s still more to learn

Learning the ins and outs of Illustrator and Photoshop CC 2015 is an ongoing process, and I’m sure I’ll continue to find better solutions for utilizing tools, settings, and preferences to their fullest. These were just a few I found that help me the most. I suggest visiting Adobe’s Learn & Support site for more in depth tutorials and want to learn more about what’s new in Adobe CC 2015. If you have a suggestion on a tool setting, preference, or something that has helped streamline your process when working in Photoshop or Illustrator CC 2015, we’d love to hear it!

Processing forms in React

React is a JavaScript presentation library created by Facebook and used at AirBnB, Instagram, NetFlix, and PayPal, among others. It is structured in components, where each component is a class that contains everything it needs for its rendering. React is also isomorphic, meaning that its code can be executed by the server and the browser.

This article gives an overview of how our contact form works. In order to explain it, we have created a GitHub repository and a live example which renders the form and processes the submission. For clarity, we have skipped some aspects of real applications such as a routing system, client and server side form validation, and email submission.

React is not a framework. Therefore, we need a few extra technologies to power the form. Here is a description of each of them:

  • Express: a Node.js web application manager. It listens to requests for our application and returns a response.
  • JADE: a templating engine widely used within the Node.js community. It is used to render the main HTML of the application.
  • Grunt: a JavaScript task manager. We use it to run two tasks: transform React's JSX syntax into JavaScript through Babel and then package these files into a single JavaScript file through Browserify.

In the following sections we will open the form, fill it out, submit it and view the response. During this process, we will explain what happens in the browser and in the server on each interaction.

Bootstrapping and rendering the form

We start by opening the form located at with a web browser. Here is the response:

And here is what happened in the web server in order to render the response:

  1. Our Express application received a request and found a match at the following rule:

// Returns the contact form. app.get('/', function (req, res) { var ContactForm = React.renderToString(ContactFormFactory()); res.render('index', { Content: ContactForm }); });

  2. The rule above rendered the ContactForm component into a variable and passed it to the index.jade template, which has the following contents:

html head title!= React form example | Lullabot script(src='/js/react.js') link(href='https://some-path/bootstrap.min.css', rel='stylesheet') body.container #container.container!= Content script(src='/build/bundle.js')

  3. Express injected the form into the Content variable at the above template and then returned a complete HTML page back to the browser. At this point, we saw the form in the web browser.

  4. The web browser completed receiving /build/bundle.js and executed the following code contained there:

var React = require('react'); var ContactForm = require('./contact-form.jsx'); React.render(<ContactForm />, document.getElementById('container'));

The above snippet loaded the ContactForm component and rendered it. We already did this server side but we need to do it again client side in order to attach listeners to DOM events such as the form submission. This is the reason why React is isomorphic: it's code can be processed client side and server side, which solves the blank screen effect of Single Page Applications. In this example, at step 3 we received the whole form from the server instead of a blank screen. Isn't React neat?

Filling out the form and submitting it

We will use this snippet to fill out the form. Here is a screenshot of us running it in Chrome's Developer Tools:

Next, we will submit the form by clicking on the button at the bottom of the page. This will call a method in the ContactForm React component, which listens to the form submission event through the following code:

<form action="" onSubmit={this.handleSubmit}>

If you worked on web development a few years ago, then the above syntax may seem familiar. The old way of attaching handlers to DOM events was via HTML event attributes. This was straightforward but it had disadvantages such as polluting the HTML with JavaScript. Later on, Unobtrusive JavaScript became the standard so websites would separate HTML from JavaScript by attaching event listeners through jQuery.bind(). However, on large web applications it became difficult to find which callbacks were listening to a particular piece of HTML. React joins the best of both strategies because a) it lets us write event handlers in HTML event attributes and b) when the JSX code is transformed to JavaScript, it is taken out of the HTML and moved to a single event listener. React's approach is clear for developers and efficient for the web browser.

Updating the component's status

When we click on the submit button, the form will first show a Sending message and then, once we receive a response from the web server, it will update the message accordingly. We achieve this in React by chaining statuses. The following method makes the first state change:

handleSubmit: function (event) { event.preventDefault(); document.getElementById('heading').scrollIntoView(); this.setState({ type: 'info', message: 'Sending...' }, this.sendFormData); },

In React, the method this.setState() renders a component with new properties. Therefore, the following code in the render() method of the ContactForm component will behave differently than the first time we called it at the beginning of the article:

render: function() { if (this.state.type && this.state.message) { var classString = 'alert alert-' + this.state.type; var status = <div id="status" className={classString} ref="status"> {this.state.message} </div>; } return ( <div> <h1 id="heading">React contact form example: Tell us about your project</h1> {status}

When we rendered the form on server side, we did not set any state properties so React did not show a status message at the top of the form. Now we have, so React prints the following message:

Sending the form data and rendering a response

As soon as React shows the above Sending message on screen, it will call the method that will send the form data to the server: this.sendFormData(). We defined this transition in the following code:

this.setState({ type: 'info', message: 'Sending...' }, this.sendFormData);

This is how you chain statuses in React. In order to show a Sending status and then submit the form data, we provide an array with new properties for the render() method, and a callback to be executed once it finishes rendering. Here is a simplified version of the callback that sends the form data to the web server:

sendFormData: function () { // Fetch form values. var formData = { budget: React.findDOMNode(this.refs.budget).value, company: React.findDOMNode(, email: React.findDOMNode( }; // Send the form data. var xmlhttp = new XMLHttpRequest(); var _this = this; xmlhttp.onreadystatechange = function() { if (xmlhttp.readyState === 4) { var response = JSON.parse(xmlhttp.responseText); if (xmlhttp.status === 200 && response.status === 'OK') { _this.setState({ type: 'success', message: 'We have received your message and will get in touch shortly. Thanks!' }); } else { _this.setState({ type: 'danger', message: 'Sorry, there has been an error. Please try again later or send us an email at [email protected]' }); } } };'POST', 'send', true); xmlhttp.setRequestHeader('Content-type', 'application/x-www-form-urlencoded'); xmlhttp.send(this.requestBuildQueryString(formData)); },

The abobe code fetches the form values and then submits them. Depending on the response data, it shows a success or failure message by updating the component's state through this.setState(). Here is what we see on the web browser:

You may be surprised that we did not use jQuery to make the request. We don't need it. The native XMLHttpRequest object is available in the set of browsers that we support at and has everything that we need to make requests client side.

The following code in the Express application handles the form submission by returning a successful response:

// Processes the form submission.'/send', function (req, res) { return res.send({status: 'OK'}); });

In the real contact form at we grab the form data and send an email. If you are curious about how this works, you can find an example snippet at the repository.


Clarity and simplicity are the adjectives that come to mind when we think of React. Our ContactForm component has a few custom methods and relies on some of React's API methods to render and process the form. The key to React is that every time that we want to provide feedback to the user, we set the state of the component with new values, which causes the component (and the components within) to render again.

React's solves some of the challenges that we have experienced in front end web applications when working with other technologies. The ability to render the result of the first request on the server is mindblowing. We also like its declarative syntax, which makes it easy for new team members to understand a given component. Finally, its efficient event system saves us from attaching too many event listeners in a page.

Did this article spark your curiosity? Go and try the live example and fork the repository if you want to dive deeper into it. At Lullabot, we are very excited about React and are looking forward to your feedback.

A PHP Developer’s Guide to Caching Data in Drupal 7

If there’s one thing in programming that drives me up the wall, it’s patterns that I use once every few months, such that I almost remember what to do but inevitably forget some key detail. Lately, that has been when I’ve needed to cache data from remote web services. I end up searching for A Beginner's Guide to Caching Data in Drupal 7 and checking it’s examples against my code. That’s no fun at all.

After some searching for a different project, I found the Drupal Doctrine Cache project and thought "what if I could chain the static and Drupal cache calls automatically?" - and of course, it’s already done with Doctrine’s ChainCache class. ChainCache gives us a consistent API for all of the usual cache operations. The class takes an array of CacheProvider classes that can be used to cache data. When fetching data, it goes through them in order until it finds the object you’re looking for. As a developer, caching data in memory in a static cache is no different than caching it in MySQL, Redis, or anything else. On top of that, ChainCache handles saving and deleting entries through the entire chain automatically. If you update the database (and invalidate your cached data), you can clear the static and persistent caches with a simple $cache->delete(). In fact, as someone using the cache object directly, you might not even know that a static cache exists! For example, the ChainCache could be updated to also persist data in a local APC cache. Or, the persistent Drupal cache could be removed if it turned out not to improve performance. Calling code doesn't need to have any knowledge of these changes. All that matters is you can reliably save, fetch, and delete cached data with a consistent interface.

What does all this mean? If you’re already using Composer in your Drupal projects, you can easily use these classes to simplify any of your caching code. If you’re not using Composer, this makes a great (and simple) example of how you can start to use modern PHP libraries in your existing Drupal 7 project. Let’s see how this works.

Adding Drupal Doctrine Cache with Composer

The first step is to set up your module so that it requires the Drupal Doctrine Cache library. For modules that get posted on, I like to use Composer Manager since it will handle managing Composer libraries when different contributed modules are all using Composer on the same site. Here are the steps to set it up:

  1. Install Composer if you haven’t installed it yet.
  2. Create a Drupal module with an info file and a module file (I’ve put an example module in a sandbox).
  3. In the info file, depend on Composer Manager: dependencies[] = composer_manager
  4. Open up a terminal, and change to the module directory.
  5. Run composer init to create your initial composer.json file. For the package name, use drupal/my_module_name.
  6. When you get to the step to define dependencies (you can modify them later), add capgemini/drupal_doctrine_cache to require the library. You can add it later by editing composer.json or using composer require.

When you enable your module with Drush, Composer Manager will download the library automatically and put it in the vendor folder. For site implementations, it’s worth reading the Composer Manager documentation to learn how to configure folder paths and so on.

Using the CacheProvider for a Static and Persistent Cache

We’re now at the point where we can use all of the classes provided by the Drupal Doctrine Cache library and Doctrine Cache in our module. In a previous implementation, we might have had caching code like this:

function my_module_function() { $my_data = &drupal_static(__FUNCTION__); if (!isset($my_data)) { if ($cache = cache_get('my_module_data')) { $my_data = $cache->data; } else { // Do your expensive calculations here, and populate $my_data // with the correct stuff. cache_set('my_module_data', $my_data); } } return $my_data; }

We can now replace this code with the ChainCache class. While this is nearly the same amount of code as the previous version, I find it much easier to read and understand. One less level of nested if statements makes the code easier to debug. Best of all, to a junior or non-Drupal PHP developer, this code doesn’t contain any "Drupal magic" like drupal_static().

function my_module_function() { // We want this to be static so the ArrayCache() isn’t recreated on each function // call. In OOP code, make this a static class variable. static $cache; if (!$cache) { $cache = new ChainCache([new ArrayCache(), new DrupalDoctrineCache()]); } if ($cache->contains('my_module_data')) { $my_data = $cache->fetch('my_module_data'); } else { // Do your expensive calculations here. $my_data = 'a very hard string to generate.'; $cache->save('my_module_data', $my_data); } return $my_data; }

If the calling code needs to interact with the cache directly, it’s entirely reasonable to return the cache object itself, and document in the @returns tag that a CacheProvider is returned. Your specific caching configuration is safe and abstracted, avoiding sporadic drupal_static_reset() calls in your code.

Interfaces are the Future, and they’re Already Here

Even if you prefer the Drupal-specific version of this code, there’s something really interesting about how Doctrine, a small bit of glue code, and Drupal can now work together. The above is all possible because Doctrine ships a set of interfaces for caching, instead of just creating raw functions or concrete classes. Doctrine is helpful in providing many prebuilt cache implementations, but those can be swapped out for anything - including a thin wrapper around Drupal’s cache functions. You can even go the other way around, and tie Drupal’s cache system into something like Guzzle’s Cache Subscriber to cache HTTP requests in Drupal. By writing our code around interfaces instead of implementations, we let others extend our code in ways that are simply impossible with Drupal-7 style procedural programming. The rest of the PHP community is already operating this way, and Drupal 8 works this way as well.

Do you know about other PHP libraries that are great at working with Drupal 7’s core systems? Share them here by posting a comment below.

MozCon 2015 Recap, Part 2: Remarketing, Communities, and Analytics

In part 1 of the MozCon 2015 recap, I wrote about how Google is slowly killing SEO, and how that has led to a resurgence of on-site strategy and tactics that can help get you more traffic. SEO is dead. Long live SEO!

Talks touching on that topic were the most interesting and exciting of the conference. The other three topics that are worthy of note, however, offered practical advice that can be acted on immediately. Like before, I have given some highlights of the talks followed by my personal takeaways.


Duane Brown talked about delightful remarketing. Those aren’t two words that normally go together, but he made me a believer. This mainly consisted of some best practices to not annoy potential customers.

  • Set up a burn pixel, so you don’t keep marketing to someone who has already bought that product.
  • Look back window - how long will you continue to retarget them? He recommends no more than 3 days. That seems kind of short to me, but it’s something that warrants testing, as I’m sure it depends on target audience and product/service type.

He also mentioned Adwords Customizer, which was something new to me. You can set up a countdown, embedded right in your ad. This becomes more powerful when combined with remarketing.

Cara Harshman talked about online personalization and much of it relates to remarketing. How do you do it without being creepy?

She listed three parts of a framework to use as a guide when personalizing copy:

  1. Who to target
    • Contextual personalization - changing the text to match ad copy.
    • Demographic personalization - this is not just typical stuff like age and gender, but also where they are in the funnel. Enterprise or small business? Pre-sale or post sale? Adroll, for example, gives one a phone number and the other a link to a customer support site.
    • Behavioral personalization - what are they doing, or what are they more likely to do? Past purchasers are more likely to buy again, so perhaps show them higher margin items?
  2. What to show them - don’t get too detailed, or it just gets creepy.
  3. How to prioritize what to implement. Ask these questions:
    • What is the potential business impact?
    • What would be the technical effort to execute?
    • What are the requirements to sustain it?
Action Items for Remarketing
  1. Implement a burn pixel if you haven’t already.
  2. Limit how long you remarket to visitors. Begin some tests to see how long it typically takes customers to convert after the first visit. When there is a massive dropoff, that is probably your limit. Anything longer and you risk causing some burn out and bad will toward your brand.
  3. Segment your audience, but don’t slice them too thin. Use the personalization framework to help start the conversation.
  4. Start looking for additional ways to personalize that could have a big impact. Start with simple text changes with an aim to eventually go bigger. What is Code? gives a certificate of completion at the end.
Building and Maintaining Communities

Rich Millington offered some advice on building (or reinvigorating) online communities, even for brands with boring and mundane products. Many go the route of the big launch, which leads to nothing but a quick plummet. It pays off to think smaller and grow more organically. Often, all you need is 150 active members to reach critical mass, something that is self-sustaining.

Finding Your First Community Members

Where to start? Start small. Like any network, you start with your friends and people you already know. To get them to join, however, you need credibility or a founder who has that credibility. If you don’t have that credibility, don’t waste your time trying to start a community. Build your credibility instead. Create content, host events, interview experts - the typical things you would expect to do if you want to be known as a thought leader.

There are four ways to ensure people want to become involved in your community:

  1. Solve a problem they already know exists - one company targeted toward teachers could never get them engaged. Turns out, teachers just didn’t have the time, so instead they created a community for teachers to swap time-saving tips, and activity exploded.
  2. Seize an opportunity they are aware of - if your product is boring, like washing machines, come up with a new angle. Housework tips and hacks.
  3. Explore a passion they are curious about.
  4. Increase their status among their friends - exclusive clubs are popular for a reason.

Don’t focus so much on aesthetics and polish. Some of the most active communities are ugly and super simple. Just look at Reddit and HackerNews.

Onboarding New Members

Most platforms for communities encourage lurking. How do you break out of that and get people to participate faster? Most communities introduce new members the worst way possible: asking them to introduce themselves in some long thread, and asking them to complete their profile. Boring.

Instead persuade them to share their experience, opinion, or problem. One community sends new members an intro email where the main thrust is this: “Hi, glad you are here. We’d love your opinion on this discussion here:” Every few days, they rotate out the name and link they recommend to keep up with popular and interesting discussions.

Keep these starter questions in mind when onboarding new members:

  1. What are they doing? (Yammer)
  2. What are they thinking? (Facebook)
  3. What have they recently learned?
  4. What do they need help with?
Keeping Members Active and Engaged

After getting new members involved with their first thread, however, you need to keep them engaged. The speed of the response to the first thread is important. If new members get a response in the first 15 minutes to their question or opinion, they have an 87% chance of posting again. That number goes down the longer it stretches out. This is one reason why Ubercart, an ecommerce platform for Drupal, got so popular. The people behind it were very active on their support forums and made sure to answer every question quickly, even if they didn’t know the answer right away.

Gamify participation and show progress. Provide the sense that they are accomplishing something. Communities also need a clear map that members can follow that leads to them getting more autonomy and responsibility. You need moderators you can trust, and you need to show how people can grow into that position.

Finally, encourage friendships. Introduce them to similar people. These relationships they build will be the main reason they stay for the long term.

And whatever you do, don’t use a Facebook page for your community.

Action Items for Building Communities
  1. Do you have proper credibility in your sphere? If not, start making a plan on building that credibility, or make a list of people who could be your founding members that provides that credibility.
  2. Craft a welcome page and email that makes new members feel important.
  3. Make sure you set up a system so you are alerted when new people post their first topic, and make sure those people get a response as soon as possible.
Analytics Correcting for Dark Traffic

Marshall Simmonds discussed dark search and dark social, the traffic you get that had no referral strings or information. His people work with a vast network of publishers and can pull large amounts of traffic data. 160 sites in 68 categories, totaling 226 billion page views. What they found is that at least 18% of what gets labeled as “direct traffic” is not actually direct traffic. That’s just the bucket analytics providers throw stuff into when they can’t classify it any other way.

How do they know this? By looking closer at some of this so-called “direct traffic.” A lot of the pages had URLs three levels deep. Did someone really type that URL in by hand? They most likely were sent from somewhere. Either:

  • From a secure site
  • From inside an app
  • From incognito or private browsing
  • From somewhere with a new referrer string that isn’t recognized yet

He offered some simple heuristics to discover your dark traffic. First, finding direct visits that were actually sent from social networks.

  1. Aggregate everything classified as direct traffic
  2. Remove your homepage and major section fronts that might be bookmarked from the data set
  3. What you have left is probably dark social
  4. Filter for just new users
  5. Verify links against social campaigns
  6. What you have left is probably dark search

This is important for measuring ROI and having a true sense of where your traffic is coming from. You don’t want to be deceived into thinking something isn’t working.

Software updates also affect these numbers. When iOS/Android puts out an update, sometimes someone forgets to flip the referral string switch or something. Organic traffic drops. They did better with iOS 8, so they are learning. But whenever there are major software updates to an OS and browsers, monitor your analytics for major changes. That spike in direct traffic probably isn’t direct traffic.

Paid Search Informing Organic

Stephanie Wallace makes the argument that paid search and organic teams should not be so siloed. The separation happens because they often have some different goals and different skill set requirements, but there are some good benefits in bringing them together.

Organic traffic is more of a long term strategy, and as a result, it's hard to react to results. If you make any changes, you have to wait. And wait. And wait some more. But this is where paid search can be a great partner and fill in these gaps. It specializes in faster results. Some ways you can leverage paid campaigns to help your organic efforts are:

  1. Testing article titles and descriptions. A paid campaign can give you quick results on what title gets the highest CTR. This can be useful for content you have invested heavily in, and want to be sure you don’t sabotage your changes with some weak metadata. Also a good way to test content ideas before you start spinning your wheels.
  2. Identify content gaps that convert. Get a list of your content that was part of a conversion funnel, part of the path that turned a visitor into a customer. Add a secondary dimension of Paid Search to discover which keywords led people through that content. Do you have anything ranking for those keywords? If not, you now have a targeted list to focus your efforts on, one that you know has a high chance of increasing revenue.
  3. Conversion optimization. This may seem obvious, but your paid search landing pages aren’t the only pages that need optimization. Use a paid campaign to build up some data quickly for some of your organic landing pages, and act accordingly.
Beyond the Pageview

Adrian Vender argues that pageviews are not enough. We’re not good at tracking what the user is doing, how they are interacting with the content. A pageview does not equal a “success.” But event tracking is difficult and requires lots of javascript. And then when you’re tracking all this data, it goes into a big black hole where reports are confusing and hard to digest. So how do we do better?

First, use Google Tag Manager (or Tealium) to manage all the javascript. You won’t have to depend on your IT department or a developer to place your new code on the right page.

Second, start tracking more events. Don’t just track the pageview, but also fire an event when someone gets done reading the page. Fire an event for important navigation elements. Track outbound URLs. Track video views and progress. For really important interactions, you can even define user segments and build reports for just those users who, for example, clicked on that call-to-action button in your sidebar. Wouldn’t it be great to quantify that 20% of people, who read this one particular article to the very end, signed up for your email list?

And finally, learn more about your analytics package. You’re going to need to stretch your muscles to deal with all of this new data. He offers a good list to get you started if you’re using Google Analytics.

Action Items for Analytics
  1. Quantify your real direct traffic. Try and measure your dark traffic to ensure your decisions are made with more accurate data, and put measures in place to watch for sudden spikes in “direct traffic” whenever there are major software updates.
  2. If you run paid campaigns, use the paid keyword data to identify valuable content gaps that you aren’t ranking for.
  3. For cornerstone content, consider using paid campaigns to test headlines and descriptions for maximum impact.
  4. Implement Google Tag Manager, or an equivalent. Start tracking events.
  5. Segment your visitors based on critical interactions.
  6. Learn more about Google Analytics event tracking and building reports. Create shortcuts and custom dashboards for quick reference.
Wrapping Up

The more things change, the more they stay the same. The importance of on-site SEO has come back in a big way, and will continue to grow in importance. The influence of mobile cannot be overlooked. Google is disrupting themselves, accepting large cuts in revenue (mobile ad clicks are far less than desktop), to cater to mobile users. Smart marketers will pay attention.

That being said, Google has been wrong before, and they aren’t afraid to disseminate misinformation to accomplish an objective. For example, no one really saw a gain in traffic after switching to HTTPS, even though we were assured by Google that it would be a ranking signal. So be aware. Take things with a grain of salt. Do your own testing and research.

I’ll end with a final reading recommendation that will help you start to think more about the future. With Apple and Google set to release search APIs for their respective mobile operating systems, where apps can potentially serve their deep content to a device’s native search results, the target for marketers may be shifting faster than we think. So go read Emily Grossman’s article titled App Indexing & The New Frontier of SEO, and start preparing for new ways to reach people.

Pausing the Drupalize.Me Podcast

This week instead of a regular podcast episode, this is a very short update about the podcast itself. We will not be publishing any more episodes for the rest of 2015. We’ve had a great run over the last three years, after reviving the old Lullabot podcast as the new Drupalize.Me podcast. We’re taking a pause now so that we can take a step back and rethink the podcast. We’d like to reboot it as a more focused and useful tool for our listeners. If you have ideas or thoughts about what would be an amazing podcast that we can provide, please do let us know! You can look for the new podcast to return to the scene in early 2016.

Why Web Designers Should Moodboard

Moodboarding is a quick, lean way to document inspiration as you research a design project, and a great way to revisit this research phase as you progress further and further into the execution. Traditionally speaking, moodboards have served as a tried and true design artifact, assembled like a collage, full of inspirational imagery for a variety of designers. But interactive designers without this traditional background might overlook moodboards as the handy deliverable that they really are.


What am I talking about? The moodboard is not so much a digital client deliverable as it is an artifact of the design process. But I’ve found it to be such a useful internal tool that it is still a must-have in any process with market research.

A sample from a recent Lullabot moodboard; it incorporates inspiration across many disciplines, to help describe the tone we want to achieve with our brand.

In its most basic form, a moodboard is a layout you can assemble with any inspiration you wish to keep referencing as you work, which can be enhanced with hierarchy, notes, and other obsessive details. In a more modern design workflow, moodboards might fit in right between research (after you know the audience you're designing for) and style tiles (before you get designing).

A Disclaimer

I feel the need to add a little disclaimer here, about what our intentions are, or more specifically, aren’t. Our intention when collecting inspiration is never to blatantly copy others. This is why I love examples that are way beyond the realm of web design, because it’s so much easier to see the benefits and get away from this temptation. The goal with inspirational moodboards is to capture a particular brand essence (or positioning, or tone, or what have you), first to illustrate a concept you wish to achieve, and then to challenge you with the inspiring work that others have done.

Baby’s First Board

My first legitimate moodboard was during a college internship, after taking on a student project to redesign a particular rum brand. One of the first steps was creating a literal, well, board, by cutting and pasting magazine art onto a foam core board. It was pretty old school. This process was tedious, but still had its moments. Coworkers that would stop by the intern’s pen to drop off grunt work would find themselves lured in by the eye candy on the board, which set the stage for us to pitch our branding progress and get feedback. It also made refining my design direction easier. Keep in mind, that was quite some time ago, and I was not as experienced at communicating design intent back then. So instead, I had this visual artifact I could literally point at.

Online Moodboarding Today

Now, there are much better ways for UX designers to moodboard, and faster. The Lullabot designers utilize a Chrome extension, Creonomy Board, that screenshots sites with a variety of options, all of which help to speed up this process. When taking the screenshots, you can use keyboard shortcuts or the extension itself to select a particular image, crop to a selected area, save the visible part of the page, or capture the entire page. Or you can simply drag and drop already-saved images to populate your board.

Once you have your screen capture, you can create different boards for particular buckets, star your favorites and filter to see just those, and assign comments and tags. But my favorite feature is how Creonomy Board automatically saves the link back to where you took the screenshot. This is great when you want to dig a little deeper or refer back to the context of the inspiration for any reason, like to see an interaction on a live site.

Creonomy Board’s Chrome extension provides several options for quickly capturing website inspiration.

All that being said, using this extension is simply a personal preference. I’ve heard of designers having great success using other tools that automate and organize this type of collecting. Don’t forget about Pinterest, Pocket, and even Evernote as you define your own process around moodboarding.

See other tools the Lullabot design team uses in their UX process with 8 Tools for a Leaner Design Research Process.

It’s really not about the app you use, which is why I don’t want to get too far into the details of any particular product. We utilize Creonomy Board to help us speed up and even automate the process of cataloging inspiration, so that the team can focus on the exciting, creative aspect of the inspiration itself. Enough about how, let’s explore why, and get you thinking about all the ways you can use it while creating a website.

Explore Aesthetics

I think the ways to apply a moodboard to your process boil down to two simple approaches. The first is possibly the more traditional or expected, which is to gather inspiration and present a visual direction. It’s a great way to begin style exploration, to pull what others are doing and visualize brand values, before getting into your own sketches or style tiles.

It’s important to force yourself to get outside of the UI box here. It’s an easy habit to get into, to hunt for polished pixels on dribbble, but we need to push ourselves further. You can find thoughtful typography looking at print design, and discover beautiful color palettes from fashion designs, for example. Think about traditional art, fashion design, printed media like posters, books, and packaging, and anything else that might be atypical to a web designer’s feed. You can go one step further by creating and illustrating metaphors that describe your brand or website goals (i.e., "our site should have the sophistication of an Eames recliner, but still be fun and witty, like John Cleese"). 

The last time I was in New York, I found myself engrossed in the gorgeous window designs of the many fashion retail stores. In Soho, I stopped to take pictures of those that I found most interesting. Weeks later, I ended up pulling a color palette from a Lacoste window, and for a client in a completely different industry. You never know where your next idea might come from, so document anything you discover that gets your creative gears turning, lest you regret it later. I have a few moodboards that are random stuff like this (Random Street Inspiration, Random Site Inspiration, etc), for me to pull from later.

Inspiration can come from unexpected places. This fashion retail window display inspired a nautical branding project. Set the Tone

If you want to get really fancy, designers can refine this brain-dump to more closely plan the next phase of styling. Often after a discussion, things that don’t feel quite right will be removed, and other exploration might come up around something that was particularly interesting.

I think the biggest advantage of a moodboard is that it helps bring everyone to the same page, before going too far down any direction. You can quickly capture a few examples of the aesthetic and feel you hope to attain, share it with your internal design team or the entire project team, and eliminate–or at least, reduce–miscommunication before committing to a design direction.

Once the first pass of unconventional inspiration is complete, feel free to take another pass that is closer to home. Think about the sorts of things that would be important to include in a style tile, and pull examples that work well together. Things like typography, color, page layouts, photography, even unique interaction elements, are all fair game. If you do pull things for only  very specific reason (like a beautiful color palette, but the type is all wrong), consider how to label or tag your pulls to explain specifically what you wanted to capture.

Collect Research

OK, I’m going a little out of order here as far as a typical design process, but the second application for moodboards has only occurred to me fairly recently, as the Lullabot design team was working together in Creonomy Board.

Lullabot has a pretty fantastic research process–if I do say so myself, after working within processes that didn’t allow for the same time to prepare for the design phase. One big part of this process is conducting market research, to understand the space our client lives in and the competitors they face. This can be especially valuable to see how other companies are solving similar architecture problems, or to see the position their messaging takes, for example. But it’s easy to get lost in a sea of similar companies, and we can find ourselves asking “what was that one site about the product that does the thing?” just weeks later.

As I mentioned, taking a screenshot from a site using the Creonomy Board plugin (rather than simply dropping files in) lets you also capture the web url it came from. This offers a practical application for the moodboard: to quickly and visually catalog a whole mess of sites and their particular pages. It’s so handy to scroll down through the preview thumbnails later and immediately get a sense of the research, but still have the option to dig deeper into their original site.

Talk Amongst Yourselves

There! Now that you see how Lullabot uses moodboards within different parts of their process, I hope you feel empowered to go out and create your own. And I really hope these examples are helpful as you consider your own design process, and am excited to hear from you. If moodboarding is already a part of your skillset, what tools do you use? How do you use them within your own process?

Write Unit Tests for Your Drupal 7 Code (part 2)

This article is a continuation of Write Unit Tests for Your Drupal 7 Code (part 1), where I wrote about how important it is to have unit tests in your codebase and how you can start writing OOP code in Drupal 7 that can be tested with PHPUnit.

In this article I show you how this can be applied to a real life example.


I encourage you to start testing your code. Here are the most important points of the article:

Dependency Injection and Service Container

Jeremy Miller defines dependency injection as:

[...] In a nutshell, dependency injection just means that a given class or system is no longer responsible for instantiating their own dependencies.

In our MyClass we avoided instantiating CacheController by passing it through the constructor. This is a basic form of dependency injection. Acoording to Martin Fowler:

There are three main styles of dependency injection. The names I'm using for them are Constructor Injection, Setter Injection, and Interface Injection.

As long as you are injecting your dependencies, you will be able to swap those objects out with  their test doubles in your unit tests.

An effective way to pass objects into other objects is by using dependency injection via a service container. The service container will be in charge of giving the receiving class all the needed objects. Then, the receiving object will only need to get the service container. In our System Under Test (SUT), the service container will yield the actual objects, while in the unit test domain it will deliver mocked objects. Using a service container can be a little bit confusing at first, or even daunting, but it makes your API more stable and robust.

Using the service container, our example is changed to:

class MyClass implements MyClassInterface { // ... public function __construct(ContainerInterface $service_container) { $this->cacheController = $service_container->get('cache_controller'); $this->anotherService = $service_container->get('my_services.another_one'); } // ... public function myMethod() { $cache = $this->cacheController->cacheGet('cache_key'); // Here starts the logic we want to test. // ... } // ... }

Note that if you need to use a new service called 'my_services.another_one', the constructor signature remains unchanged. The services need to be declared separately in the service providers.

Dependency injection and service encapsulation is not only useful for mocking purposes, but also to help you to encapsulate your components –and services–. Borrowing, again, Jeremy Miller’s words:

Making sure that any new code that depends on undesirable legacy code uses Dependency Injection leaves an easier migration path to eliminate the legacy code later with all new code.

If you encapsulate your legacy dependencies you can ultimately write a new version and swap them out. Just like you do for your tests, but with the new implementation.

Just like with almost everything, there are several modules that will help you with these tasks:

  • Registry autoload will help you to structure your object oriented code by giving you autoloading if you follow the PSR-0 or PSR-4 standards.
  • Service container will provide you with a service container, with the added benefit that is very similar to the one that Drupal 8 will ship with.
  • XAutoload will give you both autoloading and a dependency injection container.

With these strategies, you will write code that can have it’s dependencies mocked. In the previous article I showed how to use fake classes or dummies for that. Now I want to show you how you can simplify that by using Mockery.

Mock your objects

Mockery is geared towards providing even more flexibility when creating mocks. Mockery is not tied to any test framework which makes it useful even if you decided to move away from PHPUnit.
In our previous example the test case would be:

// Called from the test case. $fake_cache_response = (object) array('data' => 1234); $cache_controller_fake = \Mockery::mock('CacheControllerInterface'); $cache_controller_fake->shouldReceive('cacheGet')->andReturn($fake_cache_response); $object = new MyClass($cache_controller_fake); $object->myMethod();

Here, I did not need to write a CacheControllerFake only for our test, I used Mockery instead.
PHPUnit comes with a great mock builder as well. Check its documentation to explore the possibilities. Sometimes you will want to use one or the other depending on how you want to mock your dependency, and the tools both frameworks offer. See the same example using PHPUnit instead of Mockery:

// Called from the test case. $fake_cache_response = (object) array('data' => 1234); $cache_controller_fake = $this ->getMockBuilder('CacheControllerInterface') ->getMock(); $cache_controller_fake->method('cacheGet')->willReturn($fake_cache_response); $object = new MyClass($cache_controller_fake); $object->myMethod();

Mocking your dependencies can be hard –but valuable– work. An alternative is to include the real dependency, if it does not break the test runner. The next section explains how to save some time using Drupal Unit Autoload.

Cutting corners

Sometimes writing tests for your code makes you realize that you need to use a class from another Drupal module. The first instinct would be, «no problem, let’s create a mock and inject it in place of the real object». That is a very good approach. However, it can be tedious to create and maintain all these mocks, especially for classes that don’t depend on a bootstrap. That code could just be required in your test case.

Since your unit test can be considered a standalone PHP script that executes some code –and makes some assertions– you could just use the require_once statement. This would include the code that contains the class definitions that your code needs. However, a better way of achieving this is by using Composer’s autoloader. An example composer.json in your tests directory would be:

{ "require-dev": { "phpunit/phpunit": "4.7.*", "mockery/mockery": "0.9.*" }, "autoload": { "psr-0": { "Drupal\\Component\\": "lib/" }, "psr-4": { "Drupal\\typed_entity\\": "../src/" } } }

With the previous example, your unit test script would know how to load any class in the Drupal\Component and Drupal\typed_entity namespaces. This will save you from writing test doubles for classes that you don’t have to mock.

At this point, you will be tempted to add classes from your module’s dependencies. The big problem is that every drupal module can be installed in a different location, so a simple ../../contrib/modulename will not do. That would only work for your installation, but not for others. This is one of the reasons why I wrote with Christian Lopez (penyaskito) the Drupal Unit Autoload. By adding Drupal Unit Autoload to your composer.json you can add references to Drupal core and other contributed modules. The following example speaks for itself:

{ "require-dev": { "phpunit/phpunit": "4.7.*", "mockery/mockery": "0.9.*", "mateu-aguilo-bosch/drupal-unit-autoload": "0.1.*" }, "autoload": { "psr-0": { "Drupal\\Component\\": "lib/", "Symfony\\": ["DRUPAL_CONTRIB<service_container>/lib"] }, "psr-4": { "Drupal\\typed_entity\\": "../src/", "Drupal\\service_container\\": ["DRUPAL_CONTRIB<service_container>/src"] }, "class-location": { "\\DrupalCacheInterface": "DRUPAL_ROOT/includes/", "\\ServiceContainer": "DRUPAL_CONTRIB<service_container>/lib/ServiceContainer.php" } } }

We added mateu-aguilo-bosch/drupal-unit-autoload to the testing setup, so we can include Drupal aware autoloading options to our composer.json.

Striving for the green

One of the most interesting possibilities that PHPUnit offers is code coverage. Code coverage is a measure used to describe the degree to which the methods are tested. Having a high coverage reduces the number of bugs in your code greatly. Moreover, adding new code with test coverage will help you ensure that you are not introducing any bugs along with the new code.

PhpStorm coverage integration.

A test harness with coverage information is a valuable tool to include in your CI tool. One way to execute all your PHPUnit cases is by adding a phpunit.xml file describing where the tests are located and other integrations. Running the phpunit command in that folder will execute all the tests.

Another good 3rd party service is coveralls. It will tell your CI tool how the coverage of your code will change with the pull request –or patch– in question; since Coveralls knows about most of the major CI tools almost no configuration is needed. Coveralls also provides a web UI to see what parts of the code are covered and the ones that are not. dashboard.

Write tests until you get 100% test coverage, or a satisfactory number. The higher the number the higher the confidence that the code is bug free.

Read the next section to see all these tools in action in contributed Drupal 7 module.

A real life example

I applied the tips of this article to the TypedEntity module. TypedEntity is a nice little module that helps you get your code organized around your Drupal entities and bundles, as first class PHP objects. This module will help you to change your mindset.
Make sure to check the contents of the tests/ directory. In there you will see real life examples of a composer.json and test cases. To run the tests follow these steps:

  1. Download the latest version of the module in your Drupal site.
  2. Navigate to the typed_entity module directory.
  3. Execute tests/ in your terminal. You will need to have composer installed to run them.

This module executes both PHPUnit and Simpletest tests on every commit using Travis CI configured through .travis.yml and phpunit.xml.

Another great example, this one with a very high test coverage, is the Amazon S3 module. A new release of this useful module was created recently by fellow Lullabot Andrew Berry, and he added unit tests!

Do you do things differently in your Drupal 7 modules? Leave your thoughts in the comments!

MozCon 2015 Recap, Part 1: The Resurgence of On-site SEO

This year, MozCon demonstrated that Google has effectively killed SEO, in the sense of getting more traffic to your site through natural search rankings.

Much of the conference fell back on some simplistic bromides around branding, social media, mobile, and the like, general digital marketing stuff that has been hashed and rehashed before, and in much greater depth. This content might be inspiring. It goes down like cotton candy and gives you a temporary sugar high. But for those who have any experience at all in the digital marketing arena, it just didn’t offer that much substance for you to sink your teeth into.

They also didn’t offer many practical steps to help smaller business owners compete in an increasingly uneven playing field. When a competitor can influence search rankings through a national TV campaign, what do you do? SEO has traditionally helped even the odds in this regard. There was some good, worthwhile content (which I’ll get to in a moment), but very little of it that could be called “SEO” anymore, and the lack of such practical content was noticeable. Based on conversations with attendees who have been coming for years, this was a major shift from previous iterations of the conference. Of course, the changing landscape does provide some interesting opportunities for those who are creative and tenacious enough to take advantage. As Wil Reynolds noted, VRBO was beat in 18 months by Airbnb. Airbnb ranks #9 for vacation rentals. VRBO still ranks #1 for the phrase, and it doesn’t matter.

But of course, killing SEO has been one of Google’s goals for a long time. Google wants you to buy ads. That’s how it gets most of its revenue. As long as there are known ways to raise your rankings in organic results, then your need for ad spend goes down.

There are still ways to increase your rankings and get more traffic naturally. On-site structure and best practices, link building, social signals, and user engagement still have their place. What’s missing is the ability to measure effectiveness. It has become a black box of mystery. Even if you follow Google’s own rules and recommendations 100%, to the letter, there is no guarantee that it will actually help you get more traffic. This adds uncertainty, which leads to paying more for organic rankings and less definable ROI, which leads to the grass over on the paid Google Adwords side looking so much greener.

It starts with the dreaded “not provided” referrer in analytics. Over 70% of all searches are encrypted and are classified as “not provided”, meaning that marketers are blindfolded most of the time when trying to discover how users are reaching their site (unless, of course, they pay for ads. Then the keyword data shows up just fine.)

Add to this to the various search algorithm updates and heavy-handed punishment Google lays down on some online publishers, and you have people scared for their lives, timid to try anything new that may cross the line. Those that have figured something out are not going to share it publicly anytime soon.

Even the best of the talks/topics at MozCon, that had clear examples to back up their assertions, had a large amount of uncertainty baked in. It’s like playing a game that generates random dungeon layouts each time. Everything is recognizable, the textures and colors familiar, but you still have to find your own way through the dark maze, and hope you stumble upon the treasure. It has made it much harder for the little guy to compete with big brands. And in many cases, it has sites competing against the Google giant itself for real estate in the search results.


There are no organic listings above the fold. Zero. This query shows nearly all ads.

So what can we do to increase visibility of our web real estate? There are four broad topics that were worthwhile and were covered in various talks, with some highlights and my personal takeaways. Part one will cover the first key topic: the resurgence of on-site SEO.

Onsite SEO

Onsite SEO used to be the only SEO. Your content structure and keywords. What you said about yourself is how your rankings were determined. Google came along and said that what other people said about you is more important, and backlinks and their anchor text were a key factor. With the work of Google’s search spam team and other initiatives like machine learning, the landscape has changed and morphed, and it looks like we are coming full circle.

We are now seeing a renaissance in the importance of onsite optimization, for two main reasons.

Reason 1: Google’s Knowledge Graph

Pete Meyers talked about disappearing organic rankings, with ads, shopping results, local listings, videos, and other Google properties taking up more and more space. Google does not want you to leave their ecosystem. You can see this further when Google just provides a direct answer to your question, solving your need immediately.


This is a great user experience; it saves the user clicks and time. Google’s Knowledge Graph powers these answers, and you see it in more detailed action if you type in a query that generates a knowledge card. Meyers used the space needle as an example, but for this article we’ll use the Empire State Building.


The data is structured, and you can play around. The answer it spit out is listed as the “Height.” We can play some quick Jeopardy to see if we can get it to return any other parts of that knowledge, and we find that it does this really well.


The problem is that it is editorially controlled, so it doesn’t scale. What happens when there is nothing to return from the structured data? We get a rich snippet that looks like a direct answer, but the text is actually pulled from the first ranking result.


But these rich snippets don’t always pull from the first organic ranking. Ranking does matter...but all you have to do is get on the first page of the search results. After that, Google determines from that limited list what content to feature. How does it determine which one to use? Based on the on-page semantics of the page. Your content can directly shape the results of the SERPs (search engine results page). And once your content is featured in the rich snippet, you effectively double your search real estate, and some have seen significant CTR increases for those queries.

Another example. In the image below, you’ll see CNN has claimed the rich snippet, even though they are fourth result.


What’s interesting is that mobile seems to be the driver of these rich snippets (and a lot of the other redesigns Google has been implementing). Do the same query on a smaller device, and the rich snippet fills up the space above the fold. There are also some hints that voice is driving these changes. When you ask a question, what does Google Now (or Siri) tell you? They need structured data to pull from, and in its absence, they need to fill in the gaps to help with scale.

Reason 2: Machine Learning and Semantics

Rand Fishkin’s talk on on-site SEO went through the evolution of Google’s search engine, one that had a bias against machine learning, and was dependent on a manually crafted algorithm. This has changed. Google now relies heavily on machine learning, where algorithms don’t just know the manually inputted factors for ranking, they come up with their own rules for what makes a good results page for a given query. The machine doesn’t have to be told what a cat is, for example. It can learn on its own, without human intervention.

This means that not even Google’s engineers know why a certain ranking decision has been made. They can get hints and clues, and the system can give them weights and data. But for the most part, the system is a black box. If Google doesn’t know, we certainly can’t know. We can continue to optimize for classic ranking inputs like backlinks, content, page load speed, etc. However, their effectiveness will continue to get harder and harder to measure. If we want to prepare better for the future, we’ll have to focus more on, and get better at, optimizing for user outputs. Did the searcher succeed in their task, and what signals that? CTR? Bounce rate? Return visits? Length and depth of visit?

Based on some experiments that have been run, the industry knows that Google takes into account relative CTR and engagement. If users land on your result and spend more time on your site versus other results for the same query, you have a vote in your favor. If users click the back button quickly after visiting your page, that doesn’t bode well for your rankings.

Rand offers five ways to start optimizing for this brave new world:

  1. Get above your rankings average CTR. Optimize title, description, and URL for click-throughs instead of keywords. Entice the click, but make sure you deliver with the content behind the headline This is an age where even the URL slug might need a copywriter’s touch. And since Google often tests new results, it could be worth it to publish additional unique content around the same topic, to get it just right.
  2. Have higher engagement than your neighbors for the same queries. Keep them on the page longer, make it easier for them to go deeper into your content and site, load your site fast, invest in fantastic UX, and don’t annoy your visitors. Offering some type of interaction is a good idea. Make it really hard for them to click that back button.
  3. Fill gaps in knowledge. Google can tell what phrases and topics are semantically related for a given space, and can use criteria to determine if a page will actually fulfill a user’s need. For example, if your page is about New York, but you never mention Brooklyn or Long Island, your page probably isn’t comprehensive. Likewise, articles on natural language processing are probably going to include phrases like “tokenization” and “parsing.”
  4. Earn more social shares, backlinks, and loyalty per visit. Google doesn’t just take shares, alone on an island, as a ranking factor. Raw shares and links don’t mean as much. If you earn them faster than your competition however, that’s a stronger signal. Your visits to shares ratio needs to be better. You should also aim to get more returning visitors over time.
  5. Fulfilling the searcher’s task, not just their query. If a large number of searchers keep ending up at a certain location, Google wants to get them to that destination faster without them having to go through a meandering path. If people searching for “best dog food” eventually stop their searching after they end up at Origen’s website, then Google might use that data to rank Origen higher for those queries. Based on Chrome’s privacy policy, they are definitely getting and storing this data. Likewise, some queries are more transactional, and results can be modified accordingly. Conversion rate optimization efforts might therefore help with search visibility.
Action Items for Onsite SEO
  1. What questions are your potential customers asking, questions whose answers can’t be editorially curated? What questions actually motivate some of the keywords you rank for on the first page? What are some gaps in content that are hard to editorially curate? Craft sections of your content on your ranked page so it better answers those questions, and fills in these gaps.
  2. Build deep content. Add value and insight, stuff that can’t be replaced by a simple, one-line answer.
  3. Watch one of Google’s top engineers, Jeff Dean, talk about Deep Learning for building intelligent computer systems.
  4. Give users a reason and a way to come back to your site. Build loyalty. Email newsletters and social profiles can help. If you know a good content path that builds loyalty in potential customers, consider setting up remarketing campaigns that guide them through this funnel. If they drop off, draw them back in at the appropriate place. This can also be done with an email newsletter series.
  5. Look for pages that could benefit from some kind of interactivity, adding value to the goal of the page and helping the user fulfill their need. A good example of this is the NY Times page on how family income relates to chances to go to college. Look for future opportunities. Another example is Seer Interactive’s Pinterest guide, where the checklists contain items you can actually…check.
  6. Ask yourself: can I create content that is 10 times better than anything else listed for this query?
  7. Matthew Brown, in his talk on insane content, gave several examples of companies that broke the mold on what content can be, raising engagement and building loyalty by shattering expectations. When working on content ideas, be sure and add the presentation as part of the discussions. How could you add some personalization?  How can you put the user more in charge of their reading experience? Some examples he showed:
    • Coder - Tinder but for code snippets. Swipe left if the code is bad, swipe right if the code is good.
    • Krugman Battles the Austerians.
    • What is Code? - this particular feature took on a second life when it was put on Github so changes could be recommended. A great way of extending life of an investment. It also adds some personalization in that it shows you how many visits and how long you’ve spent reading.
  8. Check some of your richer content from the past. Have you created any assets as part of your content that can live on their own? Maybe with just a little improvement?
  9. Define who a loyal user is. Is it someone who comes to the site twice a month? 5 times a month? Define other metrics of success for a piece of content. To measure it, you need to know what it is.
  10. Find content that has a high bounce rate and/or low time on page. Why are people leaving? Does it have a poor layout? Hard to read? Takes too long to load?
  11. Exchange your Google+ social buttons for WhatsApp buttons. This is based on a large pool of aggregate data that Marshall Simmonds and his team have analyzed. Simmonds said CNN saw big gains when they did this.

SEO is an exciting discipline, one that, by necessity, tries to be a synthesis of so many other disciplines. Where else are you going to be talking about content strategy, which then leads into the theories behind deep machine learning, and then onto fostering a better user experience?

In part 2 of the recap, we’ll go over remarketing, building online communities, and digging deeper into your analytics.