Replacing the Body Field in Drupal 8

The body field has been around since the beginning of Drupal time. Before you could create custom fields in core, and before custom entities were in core, there was a body field. As it exists now, the body field is a bit of a platypus. It's not exactly a text field like any other text field. It's two text fields in one (summary and body), with a lot of specialized behavior to allow you to show or hide it on the node form, and options to either create distinct summary text or deduce a summary by clipping off a certain number of characters from the beginning of the body.

The oddity of this field can create problems. The summary has no format of its own; it shares a format with the body. So you can't have a simple format for the summary and a more complex one for the body. The link to expose and hide the summary on the edit form is a little non-intuitive, especially since no other field behaves this way, so it's easy to miss the fact that there is a summary field there at all. And if you are relying on the truncated text for the summary, there's no easy way to see in the node form what the summary will end up looking like. You have to preview the node to tell.

I wanted to move away from using the legacy body field in favor of separate body and summary fields that behave in a more normal way, where each is a distinct field, with its own format and no unexpected behavior. I like the benefits of having two fields, with the additional granularity that provides. This article describes how I made this switch on one of my legacy sites.

Making the Switch

The first step was to add the new fields to the content types where they will be used. I just did this in the UI by going to admin > structure > types. I created two fields, one called field_description for the full body text and one called field_summary for the summary. My plan was for the summary field to be a truncated, plain text excerpt of the body that I could use in metatags and in AMP metadata, as well as on teasers. I updated the Manage Display and Manage Form Display data on each content type to display my new fields instead of the old body field on the node form and in all my view modes.

Once the new fields were created I wanted to get my old body/summary data copied over to my new fields. To do this I needed an update hook, and I used the Drupal 8 documentation on writing update hooks as a guide.

The instructions for update hooks recommend not using the normal API functions, like $node->save(), inside update hooks, and instead updating the database directly with SQL queries. But that would require understanding all the tables that need to be updated, which is much more complicated in Drupal 8 than it was in Drupal 7. In Drupal 7 each field has exactly two tables: one for the active values of the field and one for revision values. In Drupal 8 there are numerous tables that might be used, depending on whether you are using revisions and/or translations. There could be up to four tables that need to be updated for each individual field that is altered. On top of that, in Drupal 7 two fields with the same name were always stored in the same tables, but in Drupal 8 two fields with the same name might be in different tables, with each field stored in up to four tables for each type of entity the field exists on.
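To make the extra bookkeeping concrete, here is a small sketch (plain PHP, not a Drupal API; the helper function is made up) of the default naming convention Drupal 8 uses for a configurable field's dedicated tables:

```php
<?php
// Illustration only: the default names Drupal 8 gives a configurable
// field's dedicated data and revision tables, per entity type. This is a
// simplified sketch of the naming convention, not a Drupal API call.
function dedicated_field_tables(string $entity_type, string $field_name): array {
  return [
    'data' => $entity_type . '__' . $field_name,
    'revision' => $entity_type . '_revision__' . $field_name,
  ];
}

// The same field name on two entity types lives in different tables.
print implode("\n", dedicated_field_tables('node', 'body')) . "\n";
// node__body
// node_revision__body
print implode("\n", dedicated_field_tables('block_content', 'body')) . "\n";
// block_content__body
// block_content_revision__body
```

Base fields, revisions, and translations add further tables and columns beyond these two, which is exactly why updating the database by hand is risky.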

To avoid any chance of missing or misunderstanding which tables to update, I went ahead and used the $node->save() method in the update hook to ensure every table gets the right changes. That method is time-consuming and could easily time out for mass updates, so it was critical to run the updates in small batches. I then tested it to be sure the batches were small enough not to create a problem when the update ran.
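The batch bookkeeping that the $sandbox performs can be sketched in plain PHP (no Drupal APIs; function and variable names are made up). Drupal actually calls the update hook repeatedly with the same $sandbox until '#finished' reaches 1; the sketch collapses those repeated calls into a loop:

```php
<?php
// Sketch of $sandbox-style batching: walk a list of IDs in chunks of
// $range, tracking progress and the last-processed primary key, until
// the finished fraction reaches 1. Returns [items processed, passes].
function process_in_batches(array $ids, int $range): array {
  $sandbox = ['progress' => 0, 'current_pk' => 0, 'max' => count($ids)];
  $passes = 0;
  do {
    // Each pass only touches IDs above the last processed one.
    $batch = array_slice(
      array_filter($ids, fn($id) => $id > $sandbox['current_pk']),
      0, $range
    );
    foreach ($batch as $id) {
      // A real update hook would load and save the node here.
      $sandbox['progress']++;
      $sandbox['current_pk'] = $id;
    }
    $sandbox['#finished'] = $sandbox['max'] == 0
      ? 1 : $sandbox['progress'] / $sandbox['max'];
    $passes++;
  } while ($sandbox['#finished'] < 1);
  return [$sandbox['progress'], $passes];
}

print implode(',', process_in_batches([1, 2, 3, 4, 5, 6, 7], 3)) . "\n"; // 7,3
```

Seven nodes in chunks of three take three passes; in a real update, each pass is a separate request, which is what keeps any single request from timing out.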

The update hook ended up looking like this:

<?php

use Drupal\Core\Database\Database;

/**
 * Update new summary and description fields from body values.
 */
function custom_update_8001(&$sandbox) {
  // The content types to update.
  $bundles = ['article', 'news', 'book'];
  // The new field for the summary. Must already exist on these content types.
  $summary_field = 'field_summary';
  // The new field for the body. Must already exist on these content types.
  $body_field = 'field_description';
  // The number of nodes to update at once.
  $range = 5;

  if (!isset($sandbox['progress'])) {
    // This must be the first run. Initialize the sandbox.
    $sandbox['progress'] = 0;
    $sandbox['current_pk'] = 0;
    $sandbox['max'] = Database::getConnection()->query("SELECT COUNT(nid) FROM {node} WHERE type IN (:bundles[])", [':bundles[]' => $bundles])->fetchField();
  }

  // Update in chunks of $range.
  $storage = \Drupal::entityManager()->getStorage('node');
  $records = Database::getConnection()->select('node', 'n')
    ->fields('n', ['nid'])
    ->condition('type', $bundles, 'IN')
    ->condition('nid', $sandbox['current_pk'], '>')
    ->range(0, $range)
    ->orderBy('nid', 'ASC')
    ->execute();
  foreach ($records as $record) {
    $node = $storage->load($record->nid);
    // Get the body values if there is still a body field.
    if (isset($node->body)) {
      $body = $node->get('body')->value;
      $summary = $node->get('body')->summary;
      $format = $node->get('body')->format;
      // Copy the values to the new fields, being careful not to wipe out
      // other values that might be there.
      $updated = FALSE;
      if (empty($node->{$summary_field}->getValue()) && !empty($summary)) {
        $node->{$summary_field}->setValue(['value' => $summary, 'format' => $format]);
        $updated = TRUE;
      }
      if (empty($node->{$body_field}->getValue()) && !empty($body)) {
        $node->{$body_field}->setValue(['value' => $body, 'format' => $format]);
        $updated = TRUE;
      }
      if ($updated) {
        // Clear the body values.
        $node->body->setValue([]);
      }
    }
    // Force a node save even if there are no changes to force the pre_save
    // hook to be executed.
    $node->save();
    $sandbox['progress']++;
    $sandbox['current_pk'] = $record->nid;
  }

  $sandbox['#finished'] = empty($sandbox['max']) ? 1 : ($sandbox['progress'] / $sandbox['max']);
  return t('All content of the types: @bundles were updated with the new description and summary fields.', ['@bundles' => implode(', ', $bundles)]);
}
?>

Creating the Summary

That update would copy the existing body data to the new fields, but many of the new summary fields would be empty. As distinct fields, they won't automatically pick up content from the body field, and will just not display at all. The update needs something more to get the summary fields populated. What I wanted was to end up with something that would work similarly to the old body field. If the summary is empty I want to populate it with a value derived from the body field. But when doing that I also want to truncate it to a reasonable length for a summary, and in my case I also wanted to be sure that I ended up with plain text, not markup, in that field.

I created a helper function in a custom module that would take text, like that which might be in the body field, and alter it appropriately to create the summaries I want. I have a lot of nodes with html data tables, and I needed to remove those tables before truncating the content to create a summary. My body fields also have a number of filters that need to do their replacements before I try creating a summary. I ended up with the following processing, which I put in a custom.module file:

<?php

use Drupal\Component\Render\PlainTextOutput;

/**
 * Clean up and trim text or markup to create a plain text summary of $limit size.
 *
 * @param string $value
 *   The text to use to create the summary.
 * @param int $limit
 *   The maximum characters for the summary, zero means unlimited.
 * @param string $input_format
 *   The format to use on filtered text to restore filter values before creating a summary.
 * @param string $output_format
 *   The format to use for the resulting summary.
 * @param bool $add_ellipsis
 *   Whether or not to add an ellipsis to the summary.
 */
function custom_parse_summary($value, $limit = 150, $input_format = 'plain_text', $output_format = 'plain_text', $add_ellipsis = TRUE) {
  // Allow filters to replace values so we have all the original markup.
  $value = check_markup($value, $input_format);
  // Completely strip tables out of summaries, they won't truncate well.
  // Stripping markup, done next, would leave the table contents, which may
  // create odd results, so remove the tables entirely.
  $value = preg_replace('/<table(.*?)<\/table>/si', '', $value);
  // Strip out all markup.
  $value = PlainTextOutput::renderFromHtml(htmlspecialchars_decode($value));
  // Strip out carriage returns and extra spaces to pack as much info as
  // possible into the allotted space.
  $value = str_replace("\n", "", $value);
  $value = preg_replace('/\s+/', ' ', $value);
  $value = trim($value);
  // Trim the text to the $limit length.
  if (!empty($limit)) {
    $value = text_summary($value, $output_format, $limit);
  }
  // Add an ellipsis.
  if ($add_ellipsis && !empty($value)) {
    $value .= '...';
  }
  return $value;
}
?>

Adding a Presave Hook

I could have used this helper function in my update hook to populate my summary fields, but I realized that I actually want automatic population of the summaries to be the default behavior. I don't want to have to copy, paste, and truncate content from the body to populate the summary field every time I edit a node. I'd like to just leave the summary field blank when I want a truncated version of the body in that field, and have it updated automatically when I save the node.

To do that I used the pre_save hook. The pre_save hook will update the summary field whenever I save the node, and it will also update the summary field when the above update hook does $node->save(), making sure that my legacy summaries also get this treatment.

My pre_save hook, in the same custom.module file used above, ended up looking like the following:

<?php

use Drupal\Core\Entity\EntityInterface;

/**
 * Implements hook_entity_presave().
 *
 * Make sure summary and image are populated.
 */
function custom_entity_presave(EntityInterface $entity) {
  $entity_type = 'node';
  $bundles = ['article', 'news', 'book'];
  // The new field for the summary. Must already exist on these content types.
  $summary_field = 'field_summary';
  // The new field for the body. Must already exist on these content types.
  $body_field = 'field_description';
  // The maximum length of any summary, set to zero for no limit.
  $summary_length = 300;

  // Everything is an entity in Drupal 8, and this hook is executed on all of
  // them! Make sure this only operates on nodes of a particular type.
  if ($entity->getEntityTypeId() != $entity_type || !in_array($entity->bundle(), $bundles)) {
    return;
  }

  // If we have a summary, run it through custom_parse_summary() to clean it up.
  $format = $entity->get($summary_field)->format;
  $summary = $entity->get($summary_field)->value;
  if (!empty($summary)) {
    $summary = custom_parse_summary($summary, $summary_length, $format, 'plain_text');
    $entity->{$summary_field}->setValue(['value' => $summary, 'format' => 'plain_text']);
  }

  // The summary might be empty or could have been emptied by the cleanup in
  // the previous step. If so, we need to pull it from the description.
  $format = $entity->get($body_field)->format;
  $description = $entity->get($body_field)->value;
  if (empty($summary) && !empty($description)) {
    $summary = custom_parse_summary($description, $summary_length, $format, 'plain_text');
    $entity->{$summary_field}->setValue(['value' => $summary, 'format' => 'plain_text']);
  }
}
?>

With this final bit of code I’m ready to actually run my update. Now whenever a node is saved, including when I run the update to move all my legacy body data to the new fields, empty summary fields will automatically be populated with a plain text, trimmed, excerpt from the full text.

Going forward, when I edit a node, I can either type in a custom summary, or leave the summary field empty if I want to automatically extract its value from the body. The next time I edit the node the summary will already be populated from the previous save. I can leave that value, or alter it manually, and it won't be overridden by the pre_save process on the next save. Or I can wipe the field out if I want it populated automatically again when the node is re-saved.

JavaScript or Presave?

Instead of a pre_save hook I could have used JavaScript to automatically update the summary field in the node form as the node is being edited. I would only want that behavior if I'm not adding a custom summary, so the JavaScript would have to be smart enough to leave the summary field alone if I already have text in it or if I start typing in it, while still picking up every change I make in the description field if I don't. And it would be difficult to use JavaScript to do filter replacements on the description text or have it strip HTML as I'm updating the body. Thinking through all the implications of trying to make a JavaScript solution work, I preferred the idea of doing this in a pre_save hook.

If I were using JavaScript to update my summaries, the JavaScript changes wouldn't be triggered by my update hook, and the update hook code above would have to be altered to do the summary cleanup as well.


And that's it. I ran the update hook and then the final step was to remove my now-empty body field from the content types that I switched, which I did using the UI on the Content Types management page.

My site now has all its nodes updated to use my new fields, and summaries are getting updated automatically when I save nodes. And as a bonus this was a good exercise in seeing how to manipulate nodes and how to write update and pre_save hooks in Drupal 8.

Using the Template Method pattern in Drupal 8

Software design patterns are a very good way to standardize on known implementation strategies. By following design patterns you create expectations and get comfortable with best practices. Even if you read about a design pattern and realize you have been using it for a long time, learning its formal definition will help you avoid potential edge cases. Additionally, naming the pattern makes communication clearer and more effective. If you described a foldable computer that you can carry around, with an integrated trackpad and so on, you would be more efficient just calling it a laptop.

I have already talked about design patterns in general and the decorator pattern in particular, and today I will tell you about the Template Method pattern. These templates have nothing to do with Drupal’s templates in the theme system.

Imagine that we are implementing a social media platform, and we want to support posting messages to different networks. The algorithm has several common parts for posting, but the authentication and sending of actual data are specific to each social network. This is a very good candidate for the template pattern, so we decide to create an abstract base class, Network, and several specialized subclasses, Facebook, Twitter, …

In the Template Method pattern, the abstract class contains the logic for the algorithm. In this case we have several steps that are easily identifiable:

  1. Authentication. Before we can do any operation in the social network we need to identify the user making the post.
  2. Sending the data. After we have a successful authentication with the social network, we need to be able to send the array of values that the social network will turn into a post.
  3. Storing the proof of reception. When the social network responds to the publication request, we store the results in an entity.

The first two steps of the algorithm are very specific to each network. Facebook and Instagram may have different authentication schemes, and Twitter and Google+ will probably have different requirements when sending data. Luckily, storing the proof of reception is going to be generic to all networks. In summary, we will have two abstract methods that authenticate the request and send the data, plus a method that stores the result of the request in an entity. Most importantly, we will have the posting method that does all the orchestration and calls these other methods.

One possible implementation of this (simplified for the sake of the example) could be:

<?php

namespace Drupal\template;

use Drupal\Component\Serialization\Json;

/**
 * Class Network.
 *
 * @package Drupal\template
 */
abstract class Network implements NetworkInterface {

  /**
   * The entity type manager.
   *
   * @var \Drupal\Core\Entity\EntityTypeManagerInterface
   */
  protected $entityTypeManager;

  /**
   * Publish the data to whatever network.
   *
   * @param PostInterface $post
   *   A made up post object.
   *
   * @return bool
   *   TRUE if the post was posted correctly.
   */
  public function post(PostInterface $post) {
    // Authenticate before posting. Every network uses a different
    // authentication method.
    $this->authenticate();
    // Send the post data and keep the receipt.
    $receipt = $this->sendData($post->getData());
    // Save the receipt in the database.
    $saved = $this->storeReceipt($receipt);
    return $saved == SAVED_NEW || $saved == SAVED_UPDATED;
  }

  /**
   * Authenticates on the request before sending the post.
   *
   * @throws NetworkException
   *   If the request cannot be authenticated.
   */
  abstract protected function authenticate();

  /**
   * Send the data to the social network.
   *
   * @param array $values
   *   The values for the publication in the network.
   *
   * @return array
   *   A receipt indicating the status of the publication in the social network.
   */
  abstract protected function sendData(array $values);

  /**
   * Store the receipt data from the publication call.
   *
   * @return int
   *   Either SAVED_NEW or SAVED_UPDATED (core constants), depending on the
   *   operation performed.
   *
   * @throws NetworkException
   *   If the data was not accepted.
   */
  protected function storeReceipt($receipt) {
    if ($receipt['status'] > 399) {
      // There was an error sending the data.
      throw new NetworkException(sprintf(
        '%s could not process the data. Receipt: %s',
        get_called_class(),
        Json::encode($receipt)
      ));
    }
    return $this->entityTypeManager->getStorage('network_receipts')
      ->create($receipt)
      ->save();
  }

}

The public post method shows how you can structure your posting algorithm in a very readable way, while keeping the extensibility needed to accommodate the differences between networks. Each specialized class implements the steps (the abstract methods) that make it different.

<?php

namespace Drupal\template;

/**
 * Class Facebook.
 *
 * @package Drupal\template
 */
class Facebook extends Network {

  /**
   * {@inheritdoc}
   */
  protected function authenticate() {
    // Do the actual work to do the authentication.
  }

  /**
   * {@inheritdoc}
   */
  protected function sendData(array $values) {
    // Do the actual work to send the data.
  }

}

After implementing the abstract methods, you are done. You have successfully implemented the template method pattern! Now you are ready to start posting to all the social networks.

// Build the message.
$message = 'I like the new article about design patterns in the Lullabot blog!';
$post = new Post($message);

// Instantiate the network objects and publish.
$network = new \Drupal\template\Facebook();
$network->post($post);
$network = new \Drupal\template\Twitter();
$network->post($post);

As you can see, this is a behavioral pattern that is very useful for dealing with specialization in a subclass of a generic algorithm.

To summarize, this pattern involves a parent class, the abstract class, and a subclass, called the specialized class. The abstract class implements an algorithm by calling both abstract and non-abstract methods.

  • The non-abstract methods are implemented in the abstract class, while the abstract methods are the specialized steps that are subsequently handled by the subclasses. The main reason they are declared abstract in the parent class is that the subclass handles the specialization, and the generic parent class knows nothing about how. Another reason is that PHP won’t let you instantiate an abstract class (the parent) or a class with abstract methods (a specialized class that has not yet implemented them), thus forcing you to provide an implementation for the missing steps in the algorithm.
  • The design pattern doesn’t define the visibility of these methods; you can declare them public or protected. If you declare them public, you can also surface them in the interface that the abstract base class implements.

In one typical variation of the template pattern, one or more of the abstract methods are not declared abstract. Instead they are implemented in the base class to provide a sensible default. This is done when there is a shared implementation among several of the specialized classes. This is called a hook method (note that this has nothing to do with Drupal's hooks).

Coming back to our example, we know that most of the Networks use OAuth 2 as their authentication method. Therefore we can turn our abstract authenticate method into an OAuth 2 implementation. All of the classes that use OAuth 2 will not need to worry about authentication since that will be the default. The authenticate method will only be implemented in the specialized subclasses that differ from the common case. When we provide a default implementation for one of the (previously) abstract methods, we call that a hook method.
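This hook-method variation can be sketched in self-contained PHP, independent of the article's Drupal code (all class and method names below are made up for illustration):

```php
<?php
// Sketch of the "hook method" variation of the Template Method pattern:
// authenticate() gets a default OAuth 2 implementation in the base class,
// and only networks with a different scheme override it.
abstract class SocialNetwork {

  // The template method: the algorithm's skeleton never changes.
  public function post(string $message): string {
    $token = $this->authenticate();
    return $this->sendData($token, $message);
  }

  // Hook method: a sensible default shared by most networks.
  protected function authenticate(): string {
    return 'oauth2-token';
  }

  // Still abstract: every network sends data differently.
  abstract protected function sendData(string $token, string $message): string;

}

class Instagram extends SocialNetwork {

  // Inherits the default OAuth 2 authentication.
  protected function sendData(string $token, string $message): string {
    return "instagram|$token|$message";
  }

}

class LegacyNetwork extends SocialNetwork {

  // Overrides the hook method with a network-specific scheme.
  protected function authenticate(): string {
    return 'api-key';
  }

  protected function sendData(string $token, string $message): string {
    return "legacy|$token|$message";
  }

}

print (new Instagram())->post('hello') . "\n";     // instagram|oauth2-token|hello
print (new LegacyNetwork())->post('hello') . "\n"; // legacy|api-key|hello
```

Because authenticate() now has a default body, Instagram only implements sendData(), while LegacyNetwork overrides both.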

At this point you may be thinking that this is just OOP or basic subclassing. This is because the template pattern is very common. Quoting Wikipedia's words:

The Template Method pattern occurs frequently, at least in its simplest case, where a method calls only one abstract method, with object oriented languages. If a software writer uses a polymorphic method at all, this design pattern may be a rather natural consequence. This is because a method calling an abstract or polymorphic function is simply the reason for being of the abstract or polymorphic method.

You will find yourself in many situations when writing Drupal 8 applications and modules where the Template Method pattern will be useful. The classic example would be annotated plugins, where you have a base class, and every plugin contains the bit of logic that is specific for it.

I like the Template Method pattern because it forces you to structure your algorithm in a very clean way. At the same time it allows you to compare the subclasses very easily, since the common algorithm is contained in the parent (and abstract) class. All in all it's a good way to have variability and keep common features clean and organized.

Navigation and Deep Linking with React Native

Mobile deep links are an essential part of a mobile application’s business strategy. They allow us to link to specific screens in a mobile application by using custom URL schemas. To implement deep links on the Android platform, we create intent filters to allow Android to route intents that have matching URLs to the application runtime. For iOS, we define the custom URL schema and listen for incoming URLs in the app delegate. It’s possible to create a custom module for React Native which wraps this native functionality, but fortunately React Native now ships with the APIs we need for deep linking. This article provides an overview of how to get deep links working in a React Native application for both Android and iOS platforms. We will also cover the basics of routing URLs and handling screen transitions.

The accompanying example application code can be found here. Note that at the time of writing this article, the current stable version of React Native is 0.26.

Setting up the navigation

In this example, we will use React Native’s Navigator component to manage the routes and transitions in the app. For non-trivial apps, you may want to look into the newer NavigatorExperimental component instead, as it leverages Redux-style navigation logic.

First, make sure the component is imported:

import {
  Navigator, // Add Navigator here to import the Navigator component
  StyleSheet,
  Platform,
  Text,
  View,
  ToolbarAndroid,
  SegmentedControlIOS,
  Linking
} from 'react-native';

Next, we need to define the route objects. In this example, there are two routes, “home” and “account”. You can also add custom properties to each route so that they are accessible later when we want to render the screen associated with the route.

const App = React.createClass({
  ...
  getInitialState() {
    return {
      routes: {
        home: { title: 'Home', component: HomeView },
        account: { title: 'My Account', component: AccountView }
      }
    };
  },

The Navigator component is then returned in the render function:

render() {
  return (
    <Navigator
      ref={component => this._navigator = component}
      navigationBar={this.getNav()}
      initialRoute={this.state.routes.home}
      renderScene={(route, navigator) =>
        <route.component {...route.props} navigator={navigator} />
      }
    />
  );
},

To retain a reference to the navigator component, a function can be passed to the ref prop. The function receives the navigator object as a parameter allowing us to set a reference to it for later use. This is important because we need to use the navigator object for screen transitions outside the scope of the navigator component. 

A navigation bar that persists across all the screens can be provided using the navigationBar prop. A function is used here to return either a ToolbarAndroid or SegmentedControlIOS React component depending on which platform the code is running on. There are two screens we can transition to using the navigation bar here.

getNav() {
  if (Platform.OS === 'ios') {
    return (
      <SegmentedControlIOS
        values={[this.state.routes.home.title, this.state.routes.account.title]}
        onValueChange={value => {
          const route = value === 'Home'
            ? this.state.routes.home
            : this.state.routes.account;
          this._navigator.replace(route);
        }}
      />
    );
  } else {
    return (
      <ToolbarAndroid
        style={styles.toolbar}
        actions={[
          { title: this.state.routes.home.title, show: 'always' },
          { title: this.state.routes.account.title, show: 'always' }
        ]}
        onActionSelected={index => {
          const route = index === 0
            ? this.state.routes.home
            : this.state.routes.account;
          this._navigator.replace(route);
        }}
      />
    );
  }
}

The initialRoute prop lets the navigator component know which screen to start with when the component is rendered and the renderScene prop accepts a function for figuring out and rendering a scene for a given route.

With the Navigator component in place, we can transition forwards to a new screen by calling the navigator’s replace function and passing a route object as a parameter. Since we set a reference to the navigator object earlier, we can access it as follows:

this._navigator.replace(this.state.routes.account);

We can also use this._navigator.push here if we want to retain a stack of views for back button functionality. Since we are using a toolbar/segmented control and do not have back button functionality in this example, we can go ahead and use replace.

Notice that the parameters passed to onActionSelected and onValueChange are 0 and 'Home' respectively. This is due to the differences in the ToolbarAndroid and SegmentedControlIOS components. The argument passed to the method from the ToolbarAndroid component is an integer denoting the position of the button. A string with the button value is passed for the SegmentedControlIOS component.

Now that we have a simple application that can transition between two screens, we can dive into how we can link directly to each screen using deep links.

Android: Defining a custom URL

In order to allow deep linking to the content in Android, we need to add intent filters to respond to action requests from other applications. Intent filters are specified in your android manifest located in your React Native project at /android/app/src/main/java/com/[your app]/AndroidManifest.xml. Here is the modified manifest with the intent filter added to the main activity:

<activity
  android:name=".MainActivity"
  android:label="@string/app_name"
  android:configChanges="keyboard|keyboardHidden|orientation|screenSize">
  <intent-filter>
    <action android:name="android.intent.action.MAIN" />
    <category android:name="android.intent.category.LAUNCHER" />
  </intent-filter>
  <intent-filter android:label="filter_react_native">
    <action android:name="android.intent.action.VIEW" />
    <category android:name="android.intent.category.DEFAULT" />
    <category android:name="android.intent.category.BROWSABLE" />
    <data android:scheme="deeplink" android:host="home" />
  </intent-filter>
</activity>

The <data> tag specifies the URL scheme which resolves to the activity in which the intent filter has been added. In this example the intent filter accepts URIs that begin with deeplink://home. More than one URI scheme can be specified using additional <data> tags. That’s it for the native side of things; all that’s left to do is the implementation on the JavaScript side.
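As a hypothetical extension, if we also wanted deeplink://account to open the app (matching the “account” route defined earlier), a second <data> tag could be added to the same intent filter:

```xml
<intent-filter android:label="filter_react_native">
  <action android:name="android.intent.action.VIEW" />
  <category android:name="android.intent.category.DEFAULT" />
  <category android:name="android.intent.category.BROWSABLE" />
  <!-- Hypothetical: two hosts registered under the same deeplink:// scheme. -->
  <data android:scheme="deeplink" android:host="home" />
  <data android:scheme="deeplink" android:host="account" />
</intent-filter>
```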

Receiving the intent data in React

React’s Linking API allows us to access any incoming intents. In the componentDidMount() function, we retrieve the incoming intent data and figure out which screen to render. Building on the navigation example above, we can add the following componentDidMount() function:

componentDidMount() {
  Linking.getInitialURL().then(url => {
    if (url) {
      const route = url.replace(/.*?:\/\//g, "");
      this._navigator.replace(this.state.routes[route]);
    }
  });
}

The getInitialURL() function returns the URL that started the activity. Here we are using a string replace function to get the part of the URL string after deeplink://. this._navigator.replace is called to transition to the requested screen.

Using the Android Debug Bridge (ADB), we can test the deep links via the command line.

$ adb shell am start -W -a android.intent.action.VIEW -d deeplink://home com.deeplinkexample

Here deeplink://home is the URL and com.deeplinkexample is the app package name.

Alternatively, open a web browser on the device and type in the URL scheme. The application should launch and open the requested screen.

iOS: Defining a custom URL

In iOS, we register the URL scheme in the Info.plist file, which can be found in the React Native project at /ios/[project name]/Info.plist. The URL scheme defined in the example is deeplink://. This allows the application to be launched externally using the URL.

<key>CFBundleURLTypes</key>
<array>
  <dict>
    <key>CFBundleTypeRole</key>
    <string>Editor</string>
    <key>CFBundleURLSchemes</key>
    <array>
      <string>deeplink</string>
    </array>
  </dict>
</array>

We also need to add a few extra lines of code to AppDelegate.m to listen for incoming links to the application when it has already been launched and is running.

#import "RCTLinkingManager.h"

- (BOOL)application:(UIApplication *)application
            openURL:(NSURL *)url
  sourceApplication:(NSString *)sourceApplication
         annotation:(id)annotation
{
  return [RCTLinkingManager application:application
                                openURL:url
                      sourceApplication:sourceApplication
                             annotation:annotation];
}

If we try to compile the project in Xcode at this point we will get an RCTLinkingManager not found error. In order to use the LinkingIOS library, we need to manually link to the library in Xcode. There are detailed instructions in the official documentation for how to do this. The library we need to link is named RCTLinking, and the header search path we need to add is:

$(SRCROOT)/../node_modules/react-native/Libraries

Receiving the URL in React

React’s Linking API provides an event listener for incoming links. Note that these event listeners are only available for iOS. We want to add the event listener when the component mounts and ensure that it is removed when the component is unmounted to avoid any memory leaks. When an incoming link is received, the handleDeepLink function is executed, which calls this._navigator.replace to transition to the requested screen.

componentDidMount() {
  Linking.addEventListener('url', this.handleDeepLink);
},

componentWillUnmount() {
  Linking.removeEventListener('url', this.handleDeepLink);
},

handleDeepLink(e) {
  const route = e.url.replace(/.*?:\/\//g, "");
  this._navigator.replace(this.state.routes[route]);
}

Test the deep link by visiting the URL in a web browser on the device.


In this article we covered how to enable deep links and screen transitions in a React Native application. Although the deep link implementation involves a small amount of work in native Android/iOS code, the available React Native APIs allow us to easily bridge the gap between native and JavaScript code, and leverage the JavaScript side to write the logic that figures out where the deep links should go. It’s worth noting that this article only serves as an example. For more complex real-world applications, the NavigationExperimental component should be considered as a more robust solution. We hope you have as much fun trying out mobile deep linking as we did!

Build native iOS and Android apps with React Native

In this article I am going to provide a brief overview of React Native and explain how you can start using it to build your native mobile apps today.

What’s all the fuss?

Having to write the same logic in different languages sucks. It’s expensive to hire separate iOS and Android developers. Did you know you could also be using your web developers to build your mobile apps?

There have been many attempts to use web development technologies to produce an iOS or Android app, the most common being PhoneGap and Titanium. However, there have always been issues with their approaches, such as non-native UI, memory or performance limitations, and a lack of community support.

Enter React Native. For me, there are two key points as to why I think React Native is here to stay.

1. It brings React to mobile app development

At Lullabot, we think React is awesome. Developers *enjoy* building applications in React. Over recent years, there have been countless JavaScript frameworks and libraries, but React seems to have gotten the formula just right. For a more detailed comparison, see this article by our very own John Hannah.

The approach behind React is “Learn once, write anywhere,” and React Native lets developers use React to write both iOS and Android apps. That doesn’t mean you can reuse the same code on the web as well, but being able to use the same developers across platforms is a huge win for boosting productivity and dropping costs.

2. It’s native

With React Native, there isn’t really a compromise to building your app in JavaScript. You get to use the language you know and love. Then, as if by magic, you get a fully native app.

React Native comes with a bunch of core APIs and components that we can use to build our native apps. What’s more, there is a lot of effort put into platform parity through single components instead of having separate iOS and Android components for functionality that is very similar. For an example, see Alert.

A source of confusion for beginners is whether they should still build a separate app for iOS and Android. That ultimately comes down to how different you want the apps to be. When I built my first app on React Native, Kaiwa, I only had a single codebase and almost all of the code was the same for both platforms. For the times when you need to target a specific platform, you can easily implement Platform Specific Code.
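To illustrate the idea in plain JavaScript (this is a hedged sketch, not the actual React Native API — selectForPlatform is a hypothetical stand-in for the real Platform.select), platform-specific code ultimately boils down to picking a value keyed by the current platform:

```javascript
// Hypothetical stand-in for React Native's Platform.select():
// return the entry matching the given platform, falling back to 'default'.
function selectForPlatform(platform, options) {
  return platform in options ? options[platform] : options.default;
}

const title = selectForPlatform('ios', {
  ios: 'Settings',        // iOS naming convention
  android: 'Preferences', // Android naming convention
  default: 'Options',
});
console.log(title); // "Settings"
```

In a real app, React Native supplies the current platform for you; the point is that the shared codebase only branches at these small, explicit decision points.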


Prerequisites

There are a few things you should do before starting with React Native.

JavaScript & ES6

You’ll want to know JavaScript, that’s for sure. If you’re developing in JavaScript any time from 2015 onwards, you’ll definitely want to start using ES6 (a.k.a. ECMAScript 2015). ES6 does a great job at helping you write cleaner and more concise code. Here is the cheat sheet that I use.
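As a taste of why ES6 helps, here is a small self-contained snippet using a few features you will see constantly in React Native code — const, destructuring, arrow functions, and template literals:

```javascript
const user = { name: 'Ada', platform: 'ios' };

// Destructuring pulls fields out of an object in one line.
const { name, platform } = user;

// Arrow functions are concise, and template literals replace string concatenation.
const describe = (n, p) => `${n} is building for ${p}`;

console.log(describe(name, platform)); // "Ada is building for ios"
```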


React

You’ll need to know how React works. When I first started, I thought that React Native would be a totally different ball game to React. Because it’s for mobile, not web, right? Wrong. It’s the same.

The official documentation does a good job of getting you started. A page that I have at hand at all times when working with React is Component Specs and Lifecycle. It covers one of the topics that will definitely feel foreign when you first start with React. But before you know it, you won’t be able to live without it.
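The lifecycle concept can be illustrated without React itself: a component exposes hooks that are called at fixed points in its life. The toy class below only mimics that ordering (the method names mirror the React API of the time; this is not how React works internally):

```javascript
// Toy illustration of lifecycle hook ordering; not React itself.
class ToyComponent {
  constructor() { this.log = []; }
  componentDidMount() { this.log.push('mounted'); }       // e.g. add event listeners here
  componentWillUnmount() { this.log.push('unmounting'); } // ...and remove them here
}

const c = new ToyComponent();
c.componentDidMount();
c.componentWillUnmount();
console.log(c.log); // [ 'mounted', 'unmounting' ]
```

Pairing setup in componentDidMount with teardown in componentWillUnmount is the pattern behind things like the deep-link event listeners shown earlier in this document.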


Devices

If you’re developing for iOS and Android you’re ideally going to have both iOS and Android devices to test on. You can get away with just using simulators for the most part but there are definitely differences when using actual devices that you’re going to want to know about. For example, devices act differently depending on power settings and the iOS simulator doesn’t accept push notifications.

If you’re shipping an app, buy devices. They don’t have to be brand-spanking-new, any old used thing from eBay will probably be fine; however, you will need a device that can run at least iOS 7 or Android 4.1. It’s also worth noting that you cannot submit an app to the Apple App Store without an iOS device.


To build for iOS, you’re going to need Xcode which requires a Mac. Xcode has a simulator built in for iOS development. For Android, I use Genymotion as an alternative to the stock Android emulator but you can also connect an Android device via USB or Wi-Fi and debug directly on that.

Getting Started

This guide will help Mac users get started with React Native. If you’re on Linux or Windows, don’t worry! The official documentation has you covered.

Install Homebrew if you’ve been living under a rock. Then use this to install Node.

brew install node

Install the React Native CLI which allows you to play with your projects from the command line.

npm install -g react-native-cli

Optionally, install Watchman. It makes things faster so why not?

brew install watchman

For Android, you’ll need to install Android Studio which gives you the Android SDK and Emulator. For more, see these steps.

Set up your first project. I highly recommend that you do this as opposed to creating the project from scratch yourself. It creates all the files, folders and configurations you will need to compile a native app on either iOS or Android.

react-native init MyApp

From within the new directory for your app, run either of these commands to start testing your app on iOS or Android.

react-native run-ios
react-native run-android

For iOS, you should see the Xcode iPhone simulator launch automatically with your app. If you wish, you can open Xcode and launch the app using ⌘ + R instead. For Android, if you’ve got an emulator running or you’ve set up your device for USB debugging then that will automatically launch with your app too. Job done!


Debugging

Being able to debug your app is crucial to efficient development. First, you’re going to want to know how to access the developer menu whilst your app is running.

For iOS, shake the device or hit ⌘ + D in the simulator.

For Android, shake the device or press the hardware menu button.

Web Developers may be familiar with the concept of LiveReload. By enabling this from the developer menu, any changes to your JS will trigger an automatic reload on any active app. This feature really got my blood pumping the first time I started with React Native. In comparison to working natively, it definitely speeds up development time.

Want another slice of awesome for web developers? By selecting Debug JS Remotely from the developer menu, you can debug your JavaScript in Chrome DevTools.

Tips and tricks for building your first app

These are some of the things that I wish someone had told me before I started working with React Native. I hope they can help you!

Know where to find components

You’re building your app in React. That means you’re going to write React Components. React Native provides a lot of components and APIs out of the box (see the official docs for more). I was able to write the majority of my first app with only core components. However, if you can’t find a component for your specific use case, it’s quite possible someone else has already made it. Head to the Awesome React Native GitHub repo and your mind will be blown by how many contributions the community is making.


Learn Flexbox

Do you know what Flexbox is? If not, here is an article to start learning about it right now. You’re going to be using Flexbox extensively for laying out elements in your UI. Once you get past the learning curve, Flexbox is great and you’ll want to start using it on all your web projects too.

By default, React Native sets the deployment target device to iPhone within Xcode. You can change this to Universal so that your app will scale correctly on iPad too.


Test on real devices

Don’t assume the simulators are enough. Devices can act differently when it comes to things such as keyboards, web sockets, push notifications, performance, etc. Also, don’t forget landscape! Lots of people test their app only in portrait and get a shock the first time they rotate.

Don’t stress about the release cycle

At the time of writing this article, a new version of React Native is released every 2 weeks. That’s a really fast release cycle. It’s inspiring to see how much momentum is in the community right now. However, my advice is to not worry if you’re finding it hard to keep up to date. Unless there is a specific feature or bug fix you need, it can be a lot of work upgrading and there are often regressions. You can keep an eye on release notes here.


Navigation

Navigation and routing play a major role in any mobile app. There has been some confusion around the Navigator, NavigatorIOS and the recent NavigationExperimental components. Read this comparison for a clear overview. TL;DR: Go with NavigationExperimental.

Data flow

Consider some sort of Flux architecture. Although not exactly Flux, Redux is probably the most popular pattern for React apps at present. It’s a great way to have data flow through your app. I found it invaluable for implementing things such as user login. If you’re new to React, I recommend that you read the examples over at Thinking In React before approaching data flow techniques for your app.
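Redux's core idea fits in a few lines of plain JavaScript: a single state object, updated only by a pure reducer in response to dispatched actions. A minimal sketch of the pattern (the action names here are made up for illustration, and this omits the store machinery a real Redux app would use):

```javascript
// A pure reducer: (state, action) -> new state, never mutated in place.
function reducer(state = { loggedIn: false }, action) {
  switch (action.type) {
    case 'LOGIN':  return { ...state, loggedIn: true };
    case 'LOGOUT': return { ...state, loggedIn: false };
    default:       return state;
  }
}

let state = reducer(undefined, { type: 'INIT' }); // initial state
state = reducer(state, { type: 'LOGIN' });
console.log(state.loggedIn); // true
```

Because every state change flows through one function, features like user login become easy to reason about and test.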


React Native is riding a wave at the moment. The recent announcement that Microsoft and Samsung are also backing React Native will only increase its popularity. Although there may be cases where it’s not the right fit, I do highly recommend considering it for your next mobile project.

Want to learn more? I’ll be presenting on this topic at ForwardJS in San Francisco, July 2016.

Drupalize.Me Sale: Save $10, Forever!

Looking for Drupal training? Our sister company, Drupalize.Me is running a big promotion this week.

Sign up for a monthly Personal Membership, and enter the code SAVE10 at checkout. You’ll save $10 immediately, and you’ll also save $10 each time your membership renews! The monthly savings won’t expire until you cancel or modify your membership.

But hurry—this offer expires on Friday, June 17.

Drupalize.Me is the #1 source for Drupal training tutorials. In the last few months Drupalize.Me has released tons of new Drupal 8 content, including a big Drupal 8 Theming Guide and Drupal 8 Migration Guide, with more on the way.


Adventures with eDrive: Accelerated SSD Encryption on Windows

As we enter the age of ISO 27001, data security becomes an increasingly important topic. Most of the time, we don’t think of website development as something that needs tight security on our local machines. Drupal websites tend to be public, have a small number of authenticated users, and, in the case of a data disclosure, sensitive data (like API and site keys) can be easily changed. However, think about all of the things you might have on a development computer. Email? Saved passwords that are obscured but not encrypted? Passwordless SSH keys? Login cookies? There are a ton of ways that a lost computer or disk drive can be used to compromise you and your clients.

If you’re a Mac user, FileVault 2 is enabled by default, so you’re likely already running with an encrypted hard drive. It’s easy to check and enable in System Preferences. Linux users usually have an encrypted disk option during install, as shown in the Ubuntu installer. Like both of these operating systems, Windows supports software-driven encryption with BitLocker.

I recently had to purchase a new SSD for my desktop computer, and I ended up with the Samsung 850 EVO. Most new Samsung drives support a new encryption technology called "eDrive".

But wait - don’t most SSDs already have encryption?

The answer is… complicated.

SSDs consist of individual cells, and each cell has a limited number of program/erase (PE) cycles. As cells reach their maximum number of PE cycles, they are replaced by spare cells. In a naive scenario, write activity can be concentrated on a small set of sectors on disk, which could lead to the spare cells being used up prematurely. Once all of the spare blocks are used, the drive is effectively dead (though you might be able to read data off of it). Drives can last longer if they spread writes across the entire disk automatically. So you have data to save; it must be distributed across the disk effectively at random, and then read back together as needed. Another word for that? Encryption! As the poster on Stack Overflow says, it truly is a ridiculous and awesome hack to use encryption this way.

Most SSDs have an internal encryption key that secures the data but is in no way accessible to the end user. Some SSDs might let you access this through the ATA password, but there are concerns about that level of security. In general, if you have possession of the drive, you can read the data. The one feature you do get "for free" with this security model is secure erase. You don’t need to overwrite data on a drive anymore to erase it. Instead, simply tell the drive to regenerate its internal encryption key (via the ATA secure erase command), and BAM! the data is effectively gone.

All this means is that if you’re using any sort of software-driven encryption (like OS X’s FileVault, Windows BitLocker, or dm-crypt on Linux), you’re effectively encrypting data twice. It works, but it’s going to be slower than just using the AES chipset your drive is already using.

eDrive is a Microsoft standard based on TCG Opal and IEEE 1667 that gives operating systems access to manage the encryption key on an SSD. This gives you all of the speed benefits of disk-hosted encryption, with the security of software-driven encryption.

Using eDrive on a Windows desktop has a pretty strict set of requirements. Laptops are much more likely to support everything automatically. Unfortunately, this article isn’t going to end in success (which I’ll get to later), but it turns out that removing eDrive is much more complicated than you’d expect. Much of this is documented in parts on various forums, but I’m hoping to collect everything here into a single resource.

The Setup
  • An SSD supporting eDrive and "ready" for eDrive
  • Windows 10, or Windows 8 Professional
  • A UEFI 2.3.1 or higher motherboard, without any CSMs (Compatibility Support Modules) enabled, supporting EFI_STORAGE_SECURITY_COMMAND_PROTOCOL
  • A UEFI installation of Windows
  • (optionally) a TPM to store encryption keys
  • No additional disk drivers like Intel’s Rapid Storage Tools for software RAID support
  • An additional USB key to run secure erases, or an alternate boot disk
  • If you need to disable eDrive entirely, an alternate Windows boot disk or computer

I’m running Windows 10 Professional. While Windows 10 Home supports BitLocker, it forces encryption keys to be stored with your Microsoft account in the cloud. Honestly for most individuals I think that’s better than no encryption, but I’d rather have solid backup strategies than give others access to my encryption keys.

Determining motherboard compatibility can be very difficult. I have a Gigabyte GA-Z68A-D3-B3, which was upgraded to support UEFI with a firmware update. However, there was no way for me to determine what version of UEFI it used, or a way to determine if EFI_STORAGE_SECURITY_COMMAND_PROTOCOL was supported. The best I can suggest at this point is to try it with a bare Windows installation, and if BitLocker doesn’t detect eDrive support revert back to a standard configuration.

The Install

Samsung disk drives do not ship with eDrive enabled out of the box. That means you need to connect the drive and install Samsung’s Magician software to turn it on before you install Windows to the drive. You can do this from another Windows install, or install bare Windows on the drive knowing it will be erased. Install the Magician software, and set eDrive to "Ready to enable" under “Data Security”.

After eDrive is enabled, you must run a secure erase on the disk. Magician can create a USB or CD drive to boot with, or you can use any other computer. If you get warnings about the drive being "frozen", don’t ignore them! It’s OK to pull the power on the running drive. If you skip the secure erase step, eDrive will not be enabled properly.

Once the disk has been erased, remove the USB key and reboot with your Windows install disk. You must remove the secure erase USB key, or Windows’ boot loader will fail (#facepalm). Make sure that you boot with UEFI and not BIOS if your system supports both booting methods. Install Windows like normal. When you get to the drive step, it shouldn’t show any partitions. If it does, you know secure erase didn’t work.

After Windows is installed, install Magician again, and look at the security settings. It should show eDrive as "Enabled". If not, something went wrong and you should secure erase and reinstall again. However, it’s important to note that “Enabled” here does not mean secure. Anyone with physical access to your drive can still read data on it unless you turn on BitLocker in the next step.

Turning on BitLocker

Open up the BitLocker control panel. If you get an error about TPM not being available, you can enable encryption without a TPM by following this How-To Geek article. As an aside, I wonder if there are any motherboards without a TPM that have the proper UEFI support for hardware BitLocker. If not, the presence of a TPM (and SecureBoot) might be an easy way to check compatibility without multiple Windows installs.

Work your way through the BitLocker wizard. The make or break moment is after storing your recovery key. If you’re shown the following screen, you know that your computer isn’t able to support eDrive.

You can still go ahead with software encryption, but you will lose access to certain ATA features like secure erase unless you disable eDrive. If you don’t see this screen, go ahead and turn on BitLocker. It will be enabled instantly, since all it has to do is encrypt the eDrive key with your passphrase or USB key instead of rewriting all data on disk.

Turning off eDrive

Did you see that warning earlier about being unable to turn off eDrive? Samsung in particular hasn’t publicly released a tool to disable eDrive. To disable eDrive, you need physical access to the drive so you can use the PSID printed on the label. You are supposed to use a manufacturer-supplied tool and enter this number, and it will disable eDrive and erase any data. I can’t see any reason to limit access to these tools, given that you need physical access to the disk. There’s also a Free Software implementation of these standards, so it’s not like the API is hidden. The Samsung PSID Revert tool is out there thanks to a Lenovo customer support leak (hah!), but I can’t link to it here. Samsung won’t provide the tool directly, and requires drives to be RMA’ed instead.

For this, I’m going to use open-source Self Encrypting Drive tools. I had to manually download the 2010 and 2015 VC++ redistributables for it to work. You can actually run it from within a running system, which leads to hilarious Windows-crashing results.

C:\Users\andre\msed> msed --scan
C:\Users\andre\msed> msed --yesIreallywanttoERASEALLmydatausingthePSID <YOURPSID> \\.\PhysicalDrive?

At this stage, your drive is in the "Ready" state and still has eDrive enabled. If you install Windows now, eDrive will be re-enabled automatically. Instead, use another Windows installation with Magician to disable eDrive. You can now install Windows as if you’ve never used eDrive in the first place.

Quick Benchmarks

After all this, I decided to run with software encryption anyways, just like I do on my MacBook with FileVault. On an i5-2500K, 8GB of RAM, with the aforementioned Samsung 850 EVO:

[Benchmark screenshots: before turning on BitLocker; after BitLocker; after enabling RAPID in Magician]

RAPID is a Samsung provided disk filter that aggressively caches disk accesses to RAM, at the cost of increased risk of data loss during a crash or power failure.

As you can see, enabling RAPID (6+ GB a second!) more than makes up for the slight IO performance hit with BitLocker. There’s a possible CPU performance impact using BitLocker as well, but in practice with Intel’s AES crypto extensions I haven’t seen much of an impact on CPU use.

A common question about BitLocker performance is if there is any sort of impact on the TRIM command used to maintain SSD performance. Since BitLocker runs at the operating system level, as long as you are using NTFS TRIM commands are properly passed through to the drive.

In Closing

I think it’s fair to say that if you want robust and fast SSD encryption on Windows, it’s easiest to buy a system pre-built with support for it. In a build-your-own scenario, you still need at least two Windows installations to configure eDrive. Luckily Windows 10 installs are pretty quick (10-20 minutes on my machine), but it’s still more effort than it should be. It’s a shame MacBooks don’t have support for any of this yet. Linux support is functional for basic use, with a new release coming out as I write. Otherwise, falling back to software encryption like regular BitLocker or FileVault 2 is certainly the best solution today.

Header photo is a Ford Anglia Race Car, photographed by Kieran White

The Accidental Project Manager: Risk Management

Meet Jill. Jill is a web developer. She likes her job. However, Jill can see some additional tasks on her project that need doing. They're things like planning, prioritization, and communication with her boss and with her client about when the project will be done.

So Jill takes some initiative and handles those tasks. She learns a little about spreadsheets along the way, and her boss notices she's pretty good with clients.

This happens a few times more, and suddenly Jill is asked to manage the next project she's assigned to. A little time goes by and she's got a different job title and different expectations to go along with it.

This is a pretty common thing. As a designer, developer, and especially as a freelancer, you spot things that need doing. Before you know it, you're doing a different job, and you've never had an ounce of training. The best you can do is read a few blog posts about scrum and off you go!

Getting up to speed

As an accidental project manager (PM), you’ll have to exercise a different set of muscles to succeed in your new role. There are tricks and tactics that you’ll need that don’t always come naturally.

For example, most of my early PM failures came because I thought and acted like a developer who was out to solve problems for the client. That’s a great perspective, but has to be tempered with regard for the scope of the project. A PM needs to be able to step back, recognize an idea’s worth, and still say “We can’t do that”, for reasons of time or budget.

It can take time to build those new ways of thinking. Even worse, accidental project managers don't benefit much from the training that's available. I’ve found little value in the certifications and large volumes of information that exist for PMs—the Project Management Body of Knowledge (PMBOK) and the related Project Management Professional (PMP) certification are a prime example, but there are other certifications as well.

It’s really a question of scale: As an accidental project manager, a certification like the PMP is a terrible fit for where you're at, because you manage small-to-medium digital projects. Unless you’re building a skyscraper, you don't need 47 different processes and 71 distinct documents to manage things—you just need the right tools, applied in the right way.

In order to help fill that knowledge gap for accidental project managers, I’ve decided to start a series. In the future, I plan to post about scope control, measuring and reporting progress, and other topics that are important to PMs, but for now we’re going to cover just one: Risk Management.

Defining Risk

If you were looking at the PMBOK, you'd see risk defined like this:

an uncertain event or condition that, if it occurs, has a positive or negative effect on one or more project objectives such as scope, schedule, cost, or quality.

I fell asleep just pasting that into this post...I can hear you snoring as well, so we'll need a better definition. How about this:

Risk is a way of talking about uncertainty in your project—things that could affect your project that you're not sure about.

I like that better – it's more human, at the very least.

How to manage risks
  • Identify uncertainties, especially at kickoff or in transitions between project phases
  • Assess those risks for possible impact, likelihood, and relative priority
  • Plan responses to the risks that you feel are likely or high-priority
  • Monitor and address risks as the project progresses

Essentially, you should be brainstorming with the following questions:

  • What am I worried about? (risk identification)
  • Why am I worried about that? (possible impact)
  • How worried am I that this could happen? (likelihood/priority)
  • What can I control about this? (mitigation strategies)
  • What should I do if this thing actually happens? (response planning)
But that doesn't feel good...

You've nailed it. All those questions are great to ask – and better yet, talk about with your team and your client. The tricky part is that all this requires honesty. You have to be honest with yourself about what could happen, and this may lead you to realize some difficult things, possibly about yourself, the team you're working with, or the client you're working for. I'm going to go out on a limb and say that most project problems are not technical.

Brainstorming and being honest with yourself might lead you to truths like "I don't trust my client to deal with me honestly", or "My development team is unreliable.” That's hard stuff, and requires delicate handling.

In the end, the underlying reasons for risks are the reasons your project will fail, so you want to handle them as openly as possible. Letting that stuff go unattended is bad news. Get it out where you can do something about it.

Getting risk management into your routine

Risks can be identified throughout a project, but it’s often best to try and spot them right away during a kickoff. Any time you or your team is uncomfortable with something—a development task, a stakeholder, or whatever it may be—you should write it down. Even if you're not sure what to write down exactly, it's worthwhile to discuss with the team and see if you can spot what’s making you uncomfortable.

As the project progresses, you’ll want to communicate weekly with the client about risks you’re seeing, make plans to address them, and highlight items that are becoming more likely from week to week. That might take the form of a brief meeting, or an email summary calling out important points in a shared document that the client has access to.

However you communicate risks to your client, you'll want to track them someplace. You can do it in a simple list of items to discuss together or with your client, or you can use a more formal document called a risk log.

The Risk Log

You can track risks anywhere you want, but if you're feeling formal, it's common to keep them in a log that you can update and refer to throughout the project.

One thing to know is that risks are often stated in ‘condition > cause > consequence’ format. The format is loosely structured like this:

“There is a risk that …{something will happen}… caused by …{some condition}… resulting in …{some outcome}.”

For example: “There is a risk that the launch will be delayed, caused by late content delivery, resulting in a two-week schedule overrun.”

Then for each risk, the log lets you:

  • track scores for likelihood and probable impact (on a 1-5 scale),
  • identify warning signs or dates for when action is needed,
  • add action plans for each risk in the log, and
  • assign a responsible person
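One common convention for combining the two 1-5 scores (my assumption — the article doesn't prescribe a formula) is to multiply likelihood by impact, so a likely, high-impact risk floats to the top of the log:

```javascript
// Priority = likelihood x impact, each scored on a 1-5 scale.
function riskPriority(likelihood, impact) {
  return likelihood * impact;
}

const risks = [
  { name: 'Key decision maker unavailable', likelihood: 4, impact: 3 }, // 12
  { name: 'Unfamiliar technology',          likelihood: 2, impact: 5 }, // 10
];

// Sort the log so the highest-priority risks come first.
risks.sort((a, b) =>
  riskPriority(b.likelihood, b.impact) - riskPriority(a.likelihood, a.impact));
console.log(risks[0].name); // "Key decision maker unavailable"
```

Whether you score in a spreadsheet or a formal tool, the point is the same: a simple, consistent ranking tells you which risks deserve a response plan first.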

I've created a sample risk log for you to copy and modify as needed. If you don't think your project needs this kind of formality, it's possible to keep a simple running list of items to raise and discuss with the team. Nevertheless, seeing the slightly more formal version can help formulate what kind of tracking and tactics are available for the risks your project is facing.

Really comprehensive risk logs can also contain things like financial cost tracking, but that's rarely been a useful measure in my experience. Even big web projects rarely need that kind of tracking.

Common Risk Areas

Physicist Niels Bohr said: "An expert is a man who has made all the mistakes which can be made, in a narrow field." As an accidental project manager, you may lack the rich history of failures that might help you spot an impending risk. To help you figure out what risks you should be managing, here are a bunch of places to look:

When planning your project, watch out when:
  • Tasks exist that rely on the completion of other work before they can begin
  • Tasks exist that none of the project team has ever done before
  • You are required to use unfamiliar technologies
  • Tasks involve third parties or external vendors
  • Multiple systems have to interact, as in the case of API integration or data migration
  • Key decision makers are not available
  • Decisions involve more than one department/team
  • Resources/staff exist that are outside your direct control
  • You have to plan a task based on assumption rather than fact
When interacting with people, watch out for:
  • People who are worried about loss of their job
  • People who will require retraining
  • People that may be moved to a different department/team
  • People that are being forced to commit resources to the project unwillingly
  • People who fear loss of control over a function or resources
  • People forced to do their job in a different way than they're used to
  • People that are handling new or additional job functions (perhaps that's you?)
  • People who will have to use a new technology

Many of the people-oriented risks in the list above fall under the heading of ‘Change is hard’. This is especially true with folks on a client or external team where a new project introduces changes to the way they do business. If you're providing services to a client, you may bear some of the brunt of those change-related stressors.

Change-related risks aren’t limited to people outside your team; sometimes the developers, designers, or other folks working directly with you have the same concerns. Wherever you find it, risk management is often about taking good care of people in the middle of change.

Organizational risk

It’s probably also worth mentioning that risks can also arise from organizational change outside your immediate control. Corporate restructuring and changes in leadership can impact your project, and politics between different business units are frequently a source of challenges.

There may not be a ton you can do about these kinds of risks, but identifying them and understanding how they affect your project can make a big difference in how you communicate about things.

So, accidental project manager, go forth and manage your risk! Take some time and think through what you’re uncomfortable or worried about. Be circumspect, ask yourself a lot of ‘why’ questions, and then communicate about it. You’ll be glad you did.

Rebuilding POP in D8 - Development Environments

This is the second in a series of articles about building a website for a small non-profit using Drupal 8. These articles assume that the reader is already familiar with Drupal 7 development, and focus on what is new or different in putting together a Drupal 8 site.

In the last article, I talked about Drupal 8's new block layout tools and how they are going to help us build the POP website without relying on external modules like Context. Having done some basic architectural research, it is now time to dive into real development. The first part of that, of course, is setting up our environments. There are quite a few new considerations in getting even this simple a setup in place for Drupal 8, so let's start digging into them.

My Setup

I wanted a pretty basic setup: a local development environment set up on my laptop, with the code hosted on GitHub, and the ability to push updates to a dev server so that my partner Nicole could see them, make comments, and eventually begin entering new content into the site. This is going to be done using a pretty basic dev/stage/live setup, along with a QA tool we've built here at Lullabot called Tugboat. We'll be going into the details of workflow and deployment in the next article, but there is actually a bunch of new functionality in Drupal 8 surrounding environment and development settings. So what do we need to know to get this going? Let's find out!

Local Settings

In past versions of Drupal, devs would often modify settings.php to include a localized version to store environment-specific information like database settings or API keys. This file does not get put into version control, but is instead created by hand in each environment to ensure that settings from one do not transfer to another inadvertently. In Drupal 8 this functionality is baked into core.

At the bottom of your settings.php are three commented out lines:

# if (file_exists(__DIR__ . '/settings.local.php')) {
#   include __DIR__ . '/settings.local.php';
# }

If you uncomment these lines and place a file named settings.local.php into the same directory as your settings.php, Drupal will automatically see it and include it, along with whatever settings you put in. Drupal core even ships with an example.settings.local.php which you can copy and use as your own. This example file includes several settings pre-configured which can be helpful to know about.


There are several settings related to caching in example.settings.local.php which are useful to know about. $settings['cache']['bins']['render'] controls which cache backend is used for the render cache, and $settings['cache']['bins']['dynamic_page_cache'] controls which backend is used for the dynamic page cache. There are commented out lines for both of these which set the backend to cache.backend.null, a special cache backend that is equivalent to turning caching off for the specified bin.

The cache.backend.null cache backend is defined in the development.services.yml file, which is by default included in the example.settings.local.php with this line:

$settings['container_yamls'][] = DRUPAL_ROOT . '/sites/development.services.yml';

If you want to disable caching as described above, then you must leave this line uncommented. If you comment it out, you will get a big ugly error the next time you try to run a cache rebuild.

Drush error message when the null caching backend has not been enabled.

The development.services.yml file is actually itself a localized configuration file for a variety of other Drupal 8 settings. We'll circle back to this a bit later in the article.

Other Settings

example.settings.local.php also includes a variety of other settings that can help during development. One such setting is rebuild_access. Drupal 8 includes a file called rebuild.php, which you can access from a web browser in order to rebuild Drupal's caches in situations where the Drupal admin is otherwise inaccessible. Normally you need a special token to access rebuild.php, however by setting $settings['rebuild_access'] = TRUE, you can access rebuild.php without a token in specific environments (like your laptop).

Another thing you can do is turn on or off CSS and Javascript preprocessing, or show/hide testing modules and themes. It is worth taking the time to go through this file and see what all is available to you in addition to the usual things you would put in a local settings file like your database information.

Trusted Hosts

One setting you'll want to set that isn't pre-defined in example.settings.local.php is trusted_host_patterns. In earlier versions of Drupal, it was relatively easy for attackers to spoof your HTTP Host header in order to do things like rewrite the link in password reset emails, or poison the cache so that images and links pointed to a different domain. Drupal offers the trusted_host_patterns setting to allow you to specify exactly which hosts Drupal should respond to. For the site www.example.com, you would set this up as follows:

$settings['trusted_host_patterns'] = array(
  '^www\.example\.com$',
);

If you want your site to respond to all subdomains of example.com, you would add an entry like so:

$settings['trusted_host_patterns'] = array(
  '^www\.example\.com$',
  '^.+\.example\.com$',
);

Trusted hosts can be added to this array as needed. This is also something you'll want to set up on a per-environment basis in settings.local.php, since each environment will have its own trusted hosts.
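Each entry in the array is a regular expression that gets matched against the incoming HTTP Host header. The matching behavior can be sketched as follows (a conceptual illustration in JavaScript; Drupal's actual implementation is PHP, via Symfony's trusted-host handling):

```javascript
// Conceptual sketch: each trusted_host_patterns entry is a regular
// expression tested against the incoming Host header. A request is
// served only if at least one pattern matches.
function isTrustedHost(host, patterns) {
  return patterns.some(function (pattern) {
    return new RegExp(pattern, 'i').test(host);
  });
}

var patterns = ['^www\\.example\\.com$', '^.+\\.example\\.com$'];

isTrustedHost('www.example.com', patterns);  // true
isTrustedHost('dev.example.com', patterns);  // true (matches the subdomain pattern)
isTrustedHost('evil-example.com', patterns); // false: no pattern matches
```

Because the patterns are anchored with ^ and $, a lookalike domain such as evil-example.com is rejected rather than partially matched.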

Local Service Settings

When Drupal 8 started merging in components from Symfony, we introduced the concept of "services". A service is simply an object that performs a single piece of functionality which is global to your application. For instance, Symfony uses a Mailer service which is used globally to send email. Some other examples of services are Twig (for template management) and Session Handling.

Symfony uses a file called services.yml for managing configuration for services, and just like with our settings.local.php, we can use a file called development.services.yml to manage our localized service configuration. As we saw above, this file is automatically included when we use Drupal 8's default local settings file. If you add this file to your .gitignore, you can use it for environment-specific configuration just like you do with settings.local.php.

The full scale of configuration that can be managed through services.yml is well outside the scope of this article. The main item of interest from a development standpoint is Twig debugging. When you set debug: true in the twig.config portion of your services configuration file, your HTML output will have a great deal of debugging information added to it. You can see an example of this below:

Drupal page output including Twig debugging information.

Every template hook is outlined in the HTML output, so that you can easily determine where that portion of markup is coming from. This is extremely useful, especially for people who are new to Drupal theming. This does come with a cost in terms of performance, so it should not be turned on in production, but for development it is a vital tool.

Configuration Management

One of the major features of Drupal 8 is its new configuration management system. This allows configuration to be exported from one site and imported on another site with the ease of deploying any other code changes. Drupal provides all installations with a sync directory which is where configuration is exported to and imported from. By default this directory is located in Drupal's files directory, however this is not the best place for it considering the sensitive data that can be stored in your configuration. Ideally you will want to store it outside of your webroot. For my installation I have set up a directory structure like this:

Sample Drupal 8 directory structure.

The Drupal installation lives inside docroot, util contains build scripts and other tools that are useful for deployments (more on this in the next article) and config/sync is where my configuration files are going to live. To make this work, you must change settings.php as follows:

$config_directories = array(
  CONFIG_SYNC_DIRECTORY => '../config/sync',
);

Note that this will be the same for all sites, so you will want to set it in your main settings.php, not a settings.local.php.

Having done all this, we are now set up for work and ready to establish our development workflow for pushing changes upstream and reviewing changes as they are worked on. That will be the subject of our next article, so stay tuned!

Why We Built Tugboat.QA


Three years ago, Lullabot started a project to bring visibility to the web development process. Asking our clients to wait to see work in progress is not only an embarrassing ask, it’s an alienating one for stakeholders. Waiting two weeks until the end of a sprint cycle for a demo was not only nerve-wracking, it also slowed potential progress, and meant that feedback was often delayed until too late. The truth was that building these demo sites was a lot of work: work that was taking time away from creating the actual website. How would you feel if your general contractor said you could only see the house they were building for you twice a month? Unacceptable. Time and time again, through client interviews and market research we found this pain was widespread and in need of resolution. The lack of visibility and thus engagement was leading to out-of-sync stakeholders and expensive rework. We knew we could work smarter and with lower costs.

For stakeholders and non-technical team members, the barrier to collaboration in web development is nearly insurmountable. Stakeholders need to know how to set up a local working copy of the project, and they need to know how to continuously keep that local instance working—which is its own Sisyphean task.

With Tugboat, we’re able to eliminate the technical barriers and share work as it happens with everyone, anywhere. Tugboat does this by automatically building a complete working website whenever a new pull request is generated. The entire team can see every change, every feature, every bug fix fully integrated into the larger site the moment it’s ready for review. No more waiting to see the work. No more waiting for feedback.

Tugboat is hosting agnostic. You don’t host your website with us in order to use it. But we do take platform architecture seriously, so every account is its own silo of dedicated resources and each project begins with a custom setup to ensure you’re ready to set sail. Tugboat can even run inside your own hosting environment (on premise) if you need to keep your data behind your firewall. We can also run custom Docker containers if you’re already using Docker as part of your production workflow.

We hope you get a chance to check out Tugboat. Stop by the Lullabot booth at Drupalcon and say hello. Meanwhile, if you’re a developer we’ve built a sandbox version of the platform for you to explore. Fork the public repository of our website, create a pull request and voilà! You can watch Tugboat chug chug chug its way into action. Be sure to check out some of the cool features while you’re there: built-in command-line access, visual regression tests, and real-time logging. But don’t take our word for it. Head on over to the demo to take Tugboat for a cruise!

Dynamically Inlining Critical CSS with Server-side JavaScript

Data gathered by Akamai on e-commerce sites shows that 40% of users will abandon a site if it fails to load within 3 seconds. Google will even penalise your site in search rankings if it's slow to load. Couple that with large numbers of users accessing sites from mobile data connections, and the time it takes for your site to initially render should be a very important consideration.

There are a number of ways to analyse how your page is being rendered, from directly in the browser using Chrome Developer Tools' Timeline feature, to remotely with WebPagetest's visual comparison tool.

Optimising your page to load quickly is chiefly about understanding how the browser will download, parse and, interpret resources such as CSS and JavaScript. This is called the critical rendering path. A fantastic tool for analysing the front-end performance of your site is Google's PageSpeed Insights.

The very definition of irony

An issue this tool frequently highlights is failure to optimise CSS delivery. This warning will be triggered when CSS has been included externally in the head of the document.

<!doctype html>
<html>
<head>
  <link rel="stylesheet" href="styles.css">
</head>
<body>
  <h1>Hello, world!</h1>
</body>
</html>

"Before the browser can render content it must process all the style and layout information for the current page. As a result, the browser will block rendering until external stylesheets are downloaded and processed, which may require multiple roundtrips and delay the time to first render."
(Google PageSpeed Insights)

As well as the time required to download and parse the stylesheet, for browsers which don't yet support HTTP/2 there is an even further overhead from the round-trip time to open a new connection to the web server. To solve this problem, PageSpeed recommends adding inline CSS to the HTML document. This doesn't mean we want to embed the entire stylesheet into the document, as then we would still have to wait for the entire document to download and be interpreted by the browser. Instead, we only inline the CSS required to display the initial viewport when loading the page.
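In other words, the optimisation boils down to moving a small set of rules into a style tag in the head. A minimal sketch, assuming the critical rules have already been extracted by a separate tool:

```javascript
// Minimal sketch: inject already-extracted critical CSS into the head of
// an HTML document. The full stylesheet is then loaded asynchronously
// (e.g. with loadCSS) so it no longer blocks first render.
function inlineCriticalCss(html, criticalCss) {
  return html.replace('</head>', '<style>' + criticalCss + '</style></head>');
}

var page = '<!doctype html><html><head></head><body><h1>Hello, world!</h1></body></html>';
var result = inlineCriticalCss(page, 'h1{font-size:2em}');
// result now contains <style>h1{font-size:2em}</style> just before </head>
```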

A great tool for this is Critical. Take a look at the source of the before and after examples and what PageSpeed Insights has to say about them.

Using Critical requires analysing the page with PhantomJS, a headless browser, which will then extract the styles required to show the above-the-fold content. This can be awkward to implement into a site which has a large number of different entry points (e.g. the homepage, an article, a product page). It would require pre-analysing each page type, storing the generated styles and then delivering the correct one depending on where the user has come into the site. This becomes even harder when you consider that each page type could have a different set of styles required depending on what content has been added to it. Will all articles have full width images? Will all products have a gallery? This kind of page examination with Critical can't be done at run-time either as spinning up a PhantomJS instance for each page request would cause atrocious performance issues.
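If you do go the pre-analysis route, the usual pattern is to generate the styles once per page type and cache the result. The sketch below illustrates the idea; the generate function is a hypothetical stand-in for a call out to a tool like Critical:

```javascript
// Sketch: cache generated critical CSS per page type so the expensive
// PhantomJS-based extraction runs at most once per type, never per request.
// `generate` is a hypothetical stand-in for invoking a tool like Critical.
var criticalCssCache = {};

function getCriticalCss(pageType, generate) {
  if (!(pageType in criticalCssCache)) {
    criticalCssCache[pageType] = generate(pageType);
  }
  return criticalCssCache[pageType];
}

// The generator only runs on a cache miss:
var calls = 0;
function generate(type) { calls += 1; return '/* critical css for ' + type + ' */'; }
getCriticalCss('article', generate);
getCriticalCss('article', generate);
// calls is now 1
```

Even with caching, this still leaves the per-page-content variation problem the paragraph above describes, which is why a run-time fallback is attractive.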

With markup generated on the server-side, we're given an opportunity to analyse what's been produced and make some performance optimisations. I've created a simple example for Node.js and Express, which you can find on GitHub. We're currently using this technique in production, where we use React to render the initial markup for the page on the server. After the page is delivered and the client-side JavaScript is downloaded, React Router takes over, and navigation around the site is done with the History API and XHR calls. This negates the need for a full page load each time, but also doesn't require the site to keep generating critical CSS for each new navigation action.

Below is a very simple example of rendering a React component to a string that can be delivered from Node.js; see the React documentation on server-side rendering for more detailed examples.

var pageMarkup = ReactDOMServer.renderToString(<MyReactApp />);

The package we use to generate our critical CSS is PurifyCSS. It takes HTML and a stylesheet, then returns only the CSS which would be applied and used for that markup. Although the CSS generated won't be as optimised as using Critical, it gives us a dynamically generated fallback option for all types and permutations of pages on the site.
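A naive version of what PurifyCSS does can be sketched in a few lines: scan each rule's selector for a token that actually appears in the markup, and drop the rest. This is only an illustration of the idea, not PurifyCSS's actual algorithm (the real library handles far more cases, such as attribute selectors and classes added by JavaScript):

```javascript
// Naive illustration of the PurifyCSS idea: keep only rules whose class,
// id, or tag selector token appears somewhere in the markup. The real
// library is far more thorough; this is just the core concept.
function stripUnusedCss(html, css) {
  return css.replace(/([^{}]+)\{[^{}]*\}/g, function (rule, selector) {
    var token = selector.trim().replace(/^[.#]/, '');
    return html.indexOf(token) !== -1 ? rule : '';
  });
}

var html = '<h1 class="title">Hello, world!</h1>';
var css = '.title{color:red}.unused{color:blue}h1{margin:0}';
stripUnusedCss(html, css); // '.title{color:red}h1{margin:0}'
```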

Taking the rendered markup from the code above, the example below runs it through purifyCSS and then injects it into our page template, loading the remaining CSS with the loadCSS technique.

var template = fs.readFileSync(path.join(__dirname, 'index.html'), 'utf8');

var markup = {
  externalCss: 'style.css',
  criticalCss: ''
};

purify(pageMarkup, [markup.externalCss], {
  minify: true,
  output: false,
  info: false,
  rejected: false
}, function (purifiedOutput) {
  markup.criticalCss = purifiedOutput;
  var html = mustache.to_html(template, markup);
});

<!doctype html>
<html>
<head>
  <style>
    {{&criticalCss}}
  </style>
  <script>
    function loadCSS(e,n,o,t){"use strict";var d=window.document.createElement("link"),i=n||window.document.getElementsByTagName("script")[0],r=window.document.styleSheets;return d.rel="stylesheet",d.href=e,d.media="only x",t&&(d.onload=t),i.parentNode.insertBefore(d,i),d.onloadcssdefined=function(e){for(var n,o=0;o<r.length;o++)r[o].href&&r[o].href===d.href&&(n=!0);n?e():setTimeout(function(){d.onloadcssdefined(e)})},d.onloadcssdefined(function(){d.media="all"}),d}
    loadCSS('/{{&externalCss}}');
  </script>
  <noscript><link href="/{{&externalCss}}" rel="stylesheet"></noscript>
</head>
<body>
  <h1>Hello, world!</h1>
</body>
</html>

Make sure to keep an eye on how much CSS is being inlined, as running this over a framework such as Bootstrap could cause a lot to be produced. Finally, whilst we haven't seen any performance issues with this technique in production (we also typically use Varnish in front of Node.js), at this point it would be advisable to run some benchmarks for your pages to ensure it's not causing any meltdowns!

DrupalCon New Orleans Session Extravaganza!

Matt and Mike talk with a plethora of Lullabots about their sessions at DrupalCon New Orleans, what their favorite all-time DrupalCon experience was, and what sessions they’re looking forward to seeing this year.

Web Accessibility: The Inclusive Way to Boost Your Bottom Line

We have a lot of words that we use to describe people with disabilities. We have words like ‘differently abled’, or ‘blind’, or ‘motor impaired’, but there are some really important labels that we often forget, like ‘customers’, and ‘viewers’, and ‘students’. While members of the disability community might do some things a little differently, they are also, in the United States alone, a group of consumers twice as large as the entire population of Australia. When we’re talking about so many people, we’re discussing massive buying power. People with disabilities watch shows, buy products, subscribe to services, and take classes. They’re fans, foodies, and potential brand evangelists just waiting to happen! When we make our web presence accessible to people with disabilities, we’re not just doing the right thing; we’re unlocking a huge group of customers that we otherwise would have missed out on.

Web accessibility is about including people by making sure that people with disabilities can use the sites we build. Can someone using a screenreader navigate to your contact form? Can a purchase be made on your site without using a mouse? Can that enrollment application be filled out by someone who can’t see the screen? Concepts like these need to be taken into consideration to ensure that we’re including everybody. Yes, it is about recognizing humanity in all of its diversity and doing our very best to give everyone the experience they came for. However, making websites accessible isn’t a charitable act. It’s not a nod to an edge-case scenario to satisfy a requirement somewhere, nor is it about spending money to put in features that are only going to benefit a few people to give us the warm fuzzies inside. At the end of the day, it’s good business. After all, we’re talking about impacting 56.7 million potential users, and that’s if we’re only counting users in the United States (Source: US Census Bureau, 2012). If your website wouldn’t work for anyone in the state of New York, there is no chance that you would even consider launching it. So how is it that we disregard web accessibility as an ‘extra feature’ to be culled when there are nearly twice as many Americans with severe disabilities as there are New York residents?

Cyclical Non-inclusiveness

Usually we disregard accessibility because we don’t realize the large impact of that decision. As human beings, we collectively suffer from something called the “False-Consensus Effect”. Essentially, we naturally gravitate toward people who we have a lot in common with, validate our experiences amongst ourselves, and end up with an incomplete world view. For instance, many Deaf people associate mainly with other Deaf people within their own community. It might be more challenging for a hearing person to form a friendship with someone Deaf without knowing ASL, so a lot of hearing people don’t have many (or any) Deaf friends. However, the fact that your average hearing person doesn’t know many Deaf people doesn’t mean that there isn’t a huge thriving Deaf community. Non-inclusiveness is cyclical. Poor accessibility leads to people with disabilities not being able to fully take part in the mainstream community. From there, it’s out of sight, out of mind. Poor visibility and representation lead to mainstream society forgetting to make accommodations for them — or worse, actively deciding that it’s not worth the cost or trouble to do so based on an obstructed view of the impact of that decision. The carousel of non-inclusiveness spins around and around, but we have the power to break the cycle.

The thing about exclusion is that it rarely feels personal to the side doing the excluding. Businesses don’t skip web accessibility because they hate people with disabilities and want to keep them out of their websites. A lot of the time, accessibility gets skipped because stakeholders have never heard of it before and have no idea they should be doing it. Sometimes businesses skip accessibility on their websites because they don’t know what it would entail to implement it, or are worried about the budget or the timeline. Web accessibility doesn’t happen for lots of reasons, but for the business, it’s not personal.

Consumers, on the other hand, absolutely choose where to spend their money and which brands to follow based on personal reasons, and accessibility is very personal when you’re the one who needs accommodation. One of the strongest motivators for repeat business and brand evangelism is how your company makes its target market feel. If your company excludes a user from your online experience, that person now has a negative association with your brand, and that’s pretty hard to undo. After all, everyone likes to feel like they matter. If someone arrives at your site to find that no one considered their needs or made any accommodations for them, it sends a pretty strong message that you either forgot them or disregarded them. Either way, it doesn’t make a great impression. As my grandmother used to say, “You can’t take it back once you spit.”

Breaking the Cycle and Creating Loyal Customers

On the bright side, if you make a site with great accessibility, customers with disabilities will remember that, too. Delighted users love to return to deliver repeat business again and again. Additionally, the disability community is pretty tight-knit, and they often shout it out loud through their circles when a brand does things right for them so that other people know where to go to find a good experience. As a result, a small amount of accessibility work can buy you a huge network of loyal customers.

For instance, consider the case of Legal & General (L&G). After a full accessibility audit, they decided to take the plunge and make their site fully accessible. Within a day of launching their newly accessible site, they saw a 25% increase in traffic. Over time, that increase grew to a full 50%. Visitors who converted into leads receiving quotes doubled within three months, and L&G’s maintenance costs fell by 66%. Within 12 months, they saw a 100% ROI. Talk about a lot of bang for your buck!

CNET has some a11y bragging rights, too. After adding transcripts to their website they saw a 30% increase in their traffic from Google. How come? It’s because accessibility is awesome for SEO! Between the easy-to-crawl site outlines, the relevant keywords found on the alt-text for all of the images, and the adherence to best practices, accessible sites are prime picking for Google’s algorithms and tend to rank more highly in searches.

Of course, there are also cautionary tales; just ask Target about their pockets being $9.7 million lighter (before paying their own attorney’s fees) after a settlement with the NFB over their non-compliant online shopping experience. After a very expensive lesson learned, Target’s website is now beautifully accessible for everyone. 

The fantastic news is that accessibility usually isn’t especially time-consuming or challenging to do in the grand scheme of a normal web development project — especially if you plan for it from the beginning and implement it as you go along. By defining accessibility as a priority for your web presence right at the beginning and checking for it along the planning, design, and development stages, you’ll be on the right track to reach your entire market base at launch. Oh, and don’t forget your editors! Once you launch your awesome accessible site, make sure that you keep it that way by ensuring that your editorial team knows how to post new content that is access-friendly with alternative text for images and other accessibility basics in place. In the end, access-savvy sites are business-savvy sites, and like most business-savvy endeavors it requires some investment to give you a return. Including everybody? That’s priceless.

Whether you’re thinking about building accessibility into a new web presence, already have a site that needs a little bit of a11y love, or are keeping a wise eye toward the ADA and WCAG 2.0 / Section 508 compliance, Lullabot can help! If you’re more of the do-it-yourself type, there are some great resources to be found through the Web Accessibility Initiative, on the University of Washington’s Accessible Technology page, and from the American Foundation for the Blind. Want to get an idea of how your site is currently doing? Try running it through the free WAVE tool.

A Framework for Project Kickoffs

Project kickoffs can be the shortest individual component of a project, but they can also be the most important. Done poorly, a kickoff can feel like a reading of a contract by inhuman actors doing inhuman work. Done well, a kickoff can bring a team together and push them towards success. Kickoffs are one of the project skills we don’t get many opportunities to iterate and learn. Developers at Lullabot commonly end up attached to a client or project for many months (or years!) at a time, so it’s entirely possible to go that period of time without having a formal kickoff. Here are some thoughts I’ve had after doing several kickoffs this year.

About the Client

In a distributed team, a kickoff usually happens with a phone call. While pre-sales communication will have already happened, the kickoff call is usually the first time when everyone working on a team will be together at once. As a team member from the vendor, this is your chance to ask questions of the business stakeholders who might not be available day to day. I like to find out:

  • Why are we all here? Are the business, technology, or creative concerns the primary driver?
  • What is the business looking for their team to learn and accomplish?
  • What are the external constraints on the project? Are there timelines and due dates, or other projects dependent on our work? What are the upcoming decisions and turning points in the business that could have a big effect on the project?
About Me

We all have ideas about how we want to work and be utilized on a project. Making sure they align with the client is very important to work out during a kickoff. Sometimes, a client has specific priorities of work to get done. Other times, they might not have realized you have skills in a specific subject area that they really need. It’s really important to understand your role on a project, especially if you have multiple skill sets. Perhaps you’re a great Drupal site builder, but what the client really needs is to use your skills to organize and clean up their content model. Figuring all of that out is a great kickoff topic.

About Us

Once we understand each other, then we can start to figure out how we work together. It’s kind of like moving in with someone. You might know each other very well, but how are you going to handle talking with your landlord? How are each person’s work schedules going to integrate?

For a distributed team, communication tools are at the core of this discussion. We all have email, chat rooms, instant messaging, video, and more. What tools are best used when? Are there specific tools the client prefers, or tools that they can’t use because of their company’s network setup? Finding the middle ground between “all mediums, all the time” and “it’s all in person until you ask” is key.

Recurring meetings are another good topic to cover. Some companies will take new team members, add them to every recurring meeting, and use up a 10 hour-per-week consulting engagement with nothing but agile ceremony. Perhaps that’s what you’re needed for—or perhaps they’ve just operated out of habit. Finding a good balance will go a long way towards building a sustainable relationship.

Sharing each person’s timezones and availability also helps to keep expectations reasonable. Some companies have recurring meetings (like Lullabot’s Monday / Friday Team Calls) which will always be booked. Sometimes individuals have days their hours are different due to personal or family commitments. Identify the stakeholders who have the “worst” availability and give them extra flexibility in scheduling. Knowing all of this ahead of time will help prevent lots of back-and-forth over meeting times.

Finally, find out who you should go to if work is blocked. That might be a stakeholder or project manager on the client’s side, but it could also be one of your coworkers. Having someone identified to the team as the “unblocker of work” helps keep the project running smoothly and personal tensions low.

About Tech

For development projects, the first question I ask is “will we need any sort of VPN access?”. VPN access is almost always a pain to get set up—many companies aren’t able to smoothly set up contractors who are entirely remote. It’s not unheard of for VPN access to take days or weeks to set up. If critical resources are behind a VPN, it’s a good idea to start setting that up before an official kickoff.

Barring the VPN-monster, figuring out where code repositories are, where tickets are managed, and how development and QA servers work are all good kickoff topics. Get your accounts created and make sure they all work. If a client is missing anything (like a good QA environment or ticket system), this is when you can make some recommendations.

About Onsites

Some projects will have a kickoff colocated somewhere, either at a client’s office or at a location central to everyone. In distributed teams, an in-person meeting can be incredibly useful in understanding each person. The subtle, dry humour of your video expert becomes apparent in-person, but could have been misunderstood online. Most of the above can be handled in the first hour of an onsite visit, leaving much more time to fill given the travel time!

We like to focus onsites on the topics that are significant unknowns, require a significant number of people across many teams, and are likely to require whiteboards, diagrams, and group brainstorming. Project discoveries are a classic fit; it’s common to meet with many different people from different departments, and doing first meetings in person can be a significant time saver. The goal of an onsite shouldn’t be to “kick off” the project—it should be to build the shared understanding a team needs so they can be effective.

But what about sales engineering?

I’m sure some readers are now thinking “Wait a minute! Aren’t these all things you should know before a contract is signed?”. It’s true! Going into a kickoff without any of this information would be a serious risk.

It’s important to remember that the team on a kickoff isn’t going to be identical to the team who did the sales engineering work. Both the client and the vendor will have new people just getting started. As well, it’s useful to hear the project parameters one more time. Discrepancies in the discussions can alert the team to any misunderstandings, or more likely changes in the business environment running up to the signing of the contract. Especially on projects where a team is already working, hearing about progress or changes made in the week between signing an SOW and kickoff can be invaluable.

What did you learn the last time you helped to kick off a project? Let us know in the comments!

Does Working From Home Benefit the Environment?

I work from home. As I commute up the stairs to my home office, I confess to a certain smugness. Steaming coffee in hand, I think of those poor souls and their wasteful commutes, trudging out in the snow to scrape windshields, or those who commute from suburb to city. One of Lullabot’s clients, let’s call him John, commutes 2.5 hours one way by train and subway from Succasunna, NJ, to 30 Rockefeller Center. Every. Single. Work day. That’s close to 1,200 hours of commute time each year. To meet with John, as I do frequently, I merely walk up the stairs and spin up a Google Hangout.

But this smugness extends beyond the time savings. I’ve always assumed that not using a car to get to work—not even owning a second car—put me squarely atop the moral high ground when it came to the environment. I’ve long harbored the belief that the communication tools and techniques we’ve pioneered in distributed companies like Lullabot will eventually enable all knowledge workers to work from home. And, together, we’ll save winter. I live near Aspen, Colo., after all.

According to the most recent US Census, about 4% of the US workforce telecommutes and that number is rapidly growing. That keeps a lot of carbon out of the atmosphere. A 2007 IBM study showed that having a distributed workforce saved the company five million gallons of fuel, preventing more than 450,000 tons of CO2 from entering the atmosphere. Companies like Aetna, Dell, and Xerox use their telecommuting programs to market their environmental credentials.

But what does this all mean? Does telecommuting actually make an impact or does the need to heat and cool a house that would otherwise be empty during the day offset the savings? What about having to fly to meetings and company retreats? Does living in a lifestyle destination like the Roaring Fork Valley with our relatively low population density help or hurt?

To explore these questions, I called on one of Lullabot’s clients, Lisa Altieri, president of GoCO2Free, a Palo Alto-based company that is working with the cities of Menlo Park, Palo Alto, and Fremont to reduce their carbon footprint. She’s been working with Lullabot Juan Olalla to build a new carbon calculator that will help each household quantify their footprint. Altieri helped me do the math.

The Commute

According to the most recent US Census data, the average worker commutes for 50 minutes a day over an average of 32 miles. Burning a gallon of gasoline produces about 19.4 pounds of CO2. What?! Taking a very dense form of energy, a hydrocarbon like octane (C8H18), and burning it for energy adds a lot of weighty oxygen. (Remember, combustion is just really, really fast oxidation.) The result of burning one of these octane molecules—remembering the law of conservation of mass—is eight molecules of carbon dioxide (CO2) and nine molecules of water (H2O). Turns out this carbon dioxide is heavy, voluminous stuff. But 19.4 pounds isn’t the full extent of the CO2 emitted. We also have to factor in the energy used to extract the crude oil, transport it, refine it, and then transport it again. Taking this “embodied energy” into account, Altieri gave me the constant of 28.3 pounds of CO2 emitted per gallon of gasoline used.
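As a sanity check on that 19.4-pound figure, here is a rough stoichiometry sketch. It assumes gasoline is pure octane weighing about 6.3 pounds per gallon—both simplifications, since real gasoline is a blend of hydrocarbons:

```python
# Approximate molar masses in g/mol
MOLAR_MASS_C = 12.011
MOLAR_MASS_H = 1.008
MOLAR_MASS_O = 15.999

octane = 8 * MOLAR_MASS_C + 18 * MOLAR_MASS_H  # C8H18, ~114.2 g/mol
co2 = MOLAR_MASS_C + 2 * MOLAR_MASS_O          # CO2, ~44.0 g/mol

# Each octane molecule burns into 8 CO2 molecules,
# so the CO2-to-fuel mass ratio is roughly 3:1.
co2_per_unit_fuel = 8 * co2 / octane           # ~3.08

GASOLINE_LB_PER_GALLON = 6.3                   # assumed density
print(round(co2_per_unit_fuel * GASOLINE_LB_PER_GALLON, 1))  # ~19.4 lb/gal
```

So the oxygen pulled from the air roughly triples the weight of the carbon leaving the tailpipe, which is how 6-odd pounds of fuel become 19-odd pounds of CO2.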

So, back to our math. A typical American probably works 235 days per year. If we multiply that by our average commute of 32 miles we get 7,520 miles. My Chrysler Town & Country gets a pretty average 25 mpg, meaning I’m saving about 300.8 gallons of gasoline by not commuting, or (multiplying by our constant), I’m preventing about 8,513 pounds of CO2 from entering the atmosphere. To calculate your own carbon footprint exactly (taking into account your model of car, energy usage, etc.) try the following calculator.

Burning 128 gallons of gasoline produces enough CO2 to fill the Washington Monument, and I’m saving more than twice that! I’m feeling pretty smug about saving all that smog. But hold on…

Heating the House

To work from home, I have to stay warm in the winter by heating my 1,484-square-foot home with our venerable 1974 Slant/Fin stainless steel boiler. Despite what Harry and Lloyd say, Aspen is not warm. As you can imagine, this is less efficient than heating an office with many people occupying a small space. In fact, the average North American office employee occupies 150 square feet, so about 1/10th of the space.

Altieri instructed me to subtract the total “therms” from my July natural gas bill from the total on my January bill to get the extra therms I use just for heating. What the heck is a therm? Apparently it’s enough for English majors like me to just grab these handy numbers from our bill and then multiply by the constant 17.4 pounds of CO2 per therm. So I multiply the 90 therms I use to heat the house in winter, times five cold months, times 17.4 pounds of CO2 per therm. That yields 7,830 pounds of CO2. I can subtract the 1/10th I might have used if I was in an office to derive 7,047 pounds of CO2.

But I’d have to heat my house anyway in the winter, right? So let’s say I’m parsimonious and turn down the heat during the day. The average worker is away from home about 50 hours a week, taking into account commuting, lunch, and working hours. That’s about 30 percent of the time. So, claiming 30 percent of that 7,047 pounds as my carbon footprint, I emit an extra 2,114 pounds of CO2.

So I’m still feeling good about myself for telecommuting with a net savings of—8,513 (no commute) minus 2,114 (extra heat)—6,399 pounds of CO2.

Air Travel

“What about air miles?” Altieri asked me. Ruh roh! Turns out flying is the cardinal carbon sin of modern life. For every mile of air travel, figure about 0.58 pounds of CO2 emissions, says Altieri. I get together with other Lullabots at four retreats per year. Assuming the pattern of two retreats on the West Coast, one on the East Coast, and one in the Midwest, I’m probably traveling around 10,000 air miles per year that can be directly attributed to working for a distributed company. Plus, shorter flights, like my favorite Aspen-to-Denver trip, result in greater emissions per mile because a larger portion of the trip is spent in the energy-intensive takeoff and landing. Suddenly, I’ve got another 5,800 pounds of carbon footprint to worry about.

Living the Dream

Finally, there’s life in the Roaring Fork Valley, with its relatively low population density. Dense urban areas tend to have much lower per capita emissions than less dense ones. Turns out our zip code in Carbondale, 81623, comes in way above the already astronomical US average carbon footprint. Carbondale residents average an annual carbon footprint of 57.8 metric tons of CO2, about 20% higher than the American household average (itself astronomical by world standards) of 48 metric tons.

I used the carbon calculator on Terrapass to calculate my total carbon footprint in a given year and came up with 36,585 pounds of CO2. I have to assume this is about 20% higher than if I lived in an area with average population density where knowledge worker jobs are usually found, which means another 7,317 pounds of carbon.

In my example, living the “distributed” workforce lifestyle means a net increase of 6,718 pounds of CO2 being pumped into the atmosphere each year. That’s not a blanket condemnation of working from home, as each person’s situation will vary, but I’m guessing additional air travel will offset many of the benefits of not having a commute. Smug smile wiped off my face, I turn to Altieri for absolution.
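Pulling the whole ledger together, in pounds of CO2 per year:

```python
commute_saved = 8513         # no daily drive
extra_heating = 2114         # warming the house during work hours
air_travel = 10_000 * 0.58   # ~5,800 lb for retreat flights
density_penalty = 7317       # ~20% bump for low-density living

net_added = extra_heating + air_travel + density_penalty - commute_saved
print(round(net_added))  # ~6,718 lb added per year
```

Note how the commute savings, large as they are, get eaten first by the flights and then by the low-density lifestyle; someone living in a denser area who skips the retreats would come out well ahead.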

Sounds like it’s time to purchase some offsets, she says, referring me to Terrapass. I could either plant 426 urban trees, which sounds time consuming, or purchase offsets at $5.95 per 1,000 pounds of CO2 or about $42 worth. Terrapass uses the money to do things like buy anaerobic digesters for animal waste, to capture landfill gas, and to derive clean energy from wind power.

Altieri also suggested that I could cut my impact by looking up my local utility provider to see if they have a local renewable energy pool. While this may increase your power bill a bit, it’s likely the single simplest way to reduce your carbon footprint short of buying a plug-in electric car. Perhaps a Tesla Model 3 will absolve my sins?