Clayton Dewey on Drutopia

In this episode of Hacking Culture, Matthew Tift talks with Clayton Dewey about Drutopia, an initiative to revolutionize the way we build online tools.

DrupalCon Baltimore Edition

Matt and Mike sit down with several Lullabots who are presenting at DrupalCon Baltimore. We talk about our sessions, sessions that we're excited to see, and speaking tips for first-time presenters.

Cross-Pollination between Drupal and WordPress

WordPress controls a whopping 27% of the CMS market share on the web. Although it grew out of a blogging platform, it can now handle advanced functionality similar to Drupal and is a major (yet friendly) competitor to Drupal. Like Drupal, it’s open source and has an amazing community. Both communities learn from each other, but there is still much more to share between the two platforms.

Recently I had the opportunity to speak at WordCamp Miami on the topic of Drupal. WordCamp Miami is one of the larger WordCamps in the world, with a sold-out attendance of approximately 800 people.

What makes Drupal so great?

Drupal commands somewhere in the neighborhood of 2% of the CMS market share of the web. It makes complex data models easy, and much of this can be accomplished through the user interface. It has very robust APIs and enables modules to share one another’s APIs. Taken together, you can develop very complex functionality with little to no custom code.

So, what can WordPress take away from Drupal?

Developer Experience: More and better APIs included in WordPress Core

The WordPress plugin ecosystem could dramatically benefit from standardizing APIs in core.

  • Something analogous to Drupal’s Render API and Form API would make it possible for WordPress plugins to standardize and integrate their markup, which in turn would allow plugins to work together without stepping on each other’s toes.
  • WordPress could benefit from a way to create a custom post type in the core UI. Drupal has this functionality out of the box. WordPress has the functionality available, but only to the developer. This results in WordPress site builders searching for very specific plugins that create a specific post type, and hoping it does what they want.
  • WordPress already has plugins similar to Drupal’s Field API. Plugins such as Advanced Custom Fields and CMB2 go a long way toward allowing WordPress developers to easily create custom fields. Integrating something similar into WordPress core would allow plugin developers to count on a stable API and easily extend it.
  • An API for plugins to set dependencies on other plugins is something that Drupal has done since its beginning. It enables mini-ecosystems to develop that extend more complex modules. In Drupal, we see module ecosystems built around Views, Fields, Commerce, Organic Groups, and more. WordPress would benefit greatly from this.
  • A go-to solution for custom query/list building would be wonderful for WordPress. Drupal has Views, but WordPress does not, so site builders end up using plugins that create very specific queries with output according to a very specific need. When a user needs to make a list of “popular posts,” they end up looking through multiple plugins dedicated to this single task.

A potential issue with including new APIs in WordPress core is that it could break WordPress’ commitment to backwards compatibility, and it would also dramatically affect the plugin ecosystem (much of this functionality is for sale right now).

WordPress Security Improvements

WordPress has a much-maligned security reputation. Because it powers a significant portion of the web, it presents a large attack surface. WordPress sites are also frequently set up by non-technical users who don’t have the experience to keep the site (and all of its plugins) updated, or to lock the site down properly.

That being said, WordPress has some low-hanging fruit that would go a long way to help the platform’s reputation.

  • Brute force password protection (flood control) would prevent bots from repeatedly connecting to wp-login.php. How often do you see attempted connections to wp-login.php in your server logs?
  • Raise the minimum supported PHP version from 5.2 (which does not receive security updates). Various WordPress plugins are already doing this, and there’s also talk about changing the ‘recommended’ version of PHP to 7.0.
  • An official public mailing list for all WordPress core and plugin vulnerabilities would be an easy way to alert developers to potential security issues. Note that there are third-party vendors that offer mailing lists like this.

Why is WordPress’ market share so large?

Easy: It can be set up and operated by non-developers—and there are a lot more non-developers than developers! Installing both Drupal and WordPress is dead simple, but once you’re up and running, WordPress becomes much easier.

Case in Point: Changing Your Site's Appearance

Changing what your site looks like is often the first thing that a new owner will want to do. With WordPress, you go to Appearance > Themes > Add New, and can easily browse themes from within your admin UI. To enable the theme, click Install, then click Activate.


With Drupal, you go to Appearance, but you only see the core themes that are installed. If you look closely at the top, small text notes that "alternative themes are available." Below that there is a button to “Install a New Theme.”


Clicking the button takes you to a page where you can either paste in a URL to the tarball/zip or upload a downloaded tarball/zip. You still have to know how to download the zip or tarball, then browse back to Appearance and enable the theme.

So it goes with Drupal: the same process applies to modules and more. Drupal makes these tasks much more difficult.

So, what can Drupal learn from WordPress?

To continue to grow, Drupal needs to enable non-developers. New non-developers can eventually turn into developers, and will become “new blood” in the community. Here’s how Drupal can do it:

  • A built-in theme and module browser would do wonders for enabling users to discover new functionality and ways to change their site’s appearance. A working attempt at this is the Project Browser module (available only for Drupal 7). The catch-22 is that you have to download the module the old-fashioned way in order to use it.
  • The ability to download vetted install profiles during the Drupal installation process would be amazing. This would go a long way toward enabling the “casual explorers,” and show them the power of Drupal. This idea has already been discussed in the community.
  • Automatic security updates is a feature that would be used by many smaller sites. Projects have been steered toward WordPress specifically because smaller clients don’t have the budget to pay developers to keep up with updates. This feature has been conceptually signed off on by Drupal’s core committers, but significant work has yet to be done.

Mitigating Security Risks

The downside of this functionality is that Drupal would need a writable file system, which is inherently less secure. Whether that risk balances out against the benefit of automatic updates is debatable.

Automatic security updates and theme/module installation would not have to be enabled out of the box. The functionality could be provided in core modules that could be enabled only when needed.

What has Drupal already learned from WordPress?

Cross-pollination has already been happening for a while. Let’s take a look at what the Drupal community has already implemented, or is in the process of implementing:

  • Semantic versioning is one of the most important changes in Drupal 8. With semantic versioning, bug fixes and new features can be added at a regular cadence. Prior to this, Drupal developers had to wait a few years for the next major version. WordPress has been doing this for a long time.
  • A better authoring experience is something that Drupal has been working on for years (remember when there was no admin theme?). With Drupal 8, the default authoring experience is finally on par with WordPress and even surpasses it in many areas.
  • Media management is the ability to upload images and video, and easily reference them from multiple pieces of content. There’s currently a media initiative to finally put this functionality in core.
  • Easier major version upgrades is something that WordPress has been doing since its inception.

Drupal has traditionally required significant development work in between major versions. That, however, is changing. In a recent blog post, Dries Buytaert, the lead of the Drupal project, said:

Updating from Drupal 8's latest version to Drupal 9.0.0 should be as easy as updating between minor versions of Drupal 8.

This is a very big deal, as it drastically limits the technical debt of Drupal as new versions of Drupal appear.


Drupal and WordPress have extremely intelligent people contributing to their respective platforms. And, because of the GPL, both platforms have the opportunity to use vetted and proven approaches that are shortcuts to increased usability and usage. This, in turn, can lead to a more open (and usable) web.

Special thanks to Jonathan Daggerhart, John Tucker, Matthew Tift, and Juampy NR for reviewing and contributing to this article.


The Evolution of Design Tools

I often get asked why we use Sketch over Photoshop when it comes to designing for the web. The simple answer is that it’s the right tool for our team. However, the story behind it is much more complicated. The tools that designers use continue to change as the web evolves. The evolution of tools has accelerated, especially since the introduction of responsive design. As the web continues to change, so does the process for designers. I seem to stumble across a new tool or process idea every week! But, the tools themselves are only a small part of the design process. Below are a few ways in which the evolution of the web has impacted our design process and toolset.

Responsive Design

Before the first smartphone was released and Ethan Marcotte published his first article on responsive design, we designed on a fixed-width canvas. Our designs didn’t have to have a flexible width (though designers still argued about it) because most users browsed the web on their desktops or laptops. The computer screen was seen as a fixed box, with the majority of resolutions between 800 and 1024 pixels wide. Designers in those days urged one another to design wider fixed layouts.

The release of the smartphone and tablet changed that. Now the screen could be any of numerous widths, heights, and resolutions.


As designers and developers dove into creating optimal responsive experiences for the web, the visual design began to evolve and simplify. We quickly found out that loading an experience heavy on bevel and emboss, drop shadows, textures, and numerous raster graphics created a crappy experience for the users on cellular internet connections because of the additional load time. Designs began evolving from skeuomorphic to flat, stripping away unnecessary design to speed up load times.


As the web landscape was changing, designers found themselves in need of more efficient tools to help envision and test a scalable design. Tools like Sketch and Figma included responsive presets, allowed for multiple artboards, and provided ways to quickly scale components up or down. The tools themselves became simpler, ditching unnecessary features and enabling designers to create and export assets based on real CSS values. There are still major gaps in the new tools that have been released, but overall, they’ve helped improve the process.

User-centric Design

The growth of and interest in user experience design has also had a dramatic impact on the way web designers approach their work. Designers are looking more towards users for feedback to help shape design solutions for a site or product. A user-research based approach to design, along with the popularity of design sprints, has designers needing to work more iteratively in order to test and revise designs.

Designers often prototype in order to test ideas on actual users. For designers whose strength is not in front-end development, this can be a time-consuming process. Additionally, having a development team to prototype ideas with and for you isn’t always feasible due to time and budget constraints. Here’s where prototyping tools such as Adobe Experience Design, Invision, Flinto, and Principle step in to fill that gap. They allow designers to quickly prototype an idea to test it with real users and get feedback from clients and the project team. Very little coding is needed to complete these sorts of prototypes, and some also integrate nicely with Sketch and Adobe Photoshop to save time when importing and exporting.


Our design team often uses a combination of Sketch and Invision to prototype and test simple ideas. When we need to produce more complicated prototypes of things like animations, our team has experimented with other tools like Principle and Flinto to help visualize our ideas in action. Many of the most common interactions, transitions, animations, etc. can be prototyped relatively quickly for user testing and feedback without writing a single line of code.

It's worth noting that most design teams work differently, and while one may leverage tools like the ones previously mentioned for prototyping, others may prototype nearly everything "in the browser" using basic HTML, CSS and JavaScript or frameworks like Bootstrap and Foundation as starting points. Our team tries to always work in the most efficient way possible, especially when producing artifacts that are, in the end, disposable. There are things that we prototype with some basic HTML and CSS, but many things that we're able to test more quickly using tools that require no writing of code.

Collaboration

By asking people for their input early in the process, you help them feel invested in the outcome.

- Jake Knapp, Sprint: How to Solve Big Problems and Test New Ideas in Just 5 Days

Design and development teams have evolved to become more cross-functional and collaborative. Many designers and developers are also individually becoming more cross-functional, with designers learning the basics of HTML, CSS, and JavaScript to prototype ideas, and developers with an eye and heart for user experience beginning to participate in that process and adopt its methods.

Before responsive design and the popularity of agile, most teams tended to be separated in their own silo. Design worked in one space, development in another. Designs were often created and approved with little input from developers, and developers tended to develop the site with little input from designers. This approach often led to miscommunication and misunderstanding.

Luckily, as web technology and processes began to accelerate, teams began to see the benefits of working more closely together. Today, designers, developers, clients, and basically anyone involved in the project are often brainstorming and participating in the entire process from start to finish. Teams are working together much earlier in the process to help solve problems and test solutions. However, teams today can also be distributed across the world, making collaboration more difficult.

Design tools have evolved to help communicate ideas to the client and team. Programs like Invision, Marvel, UXPin and others not only help designers prototype ideas, but also allow the team and clients to leave feedback by way of comments or notes. They can review and comment on a prototype from anywhere, using any device.

For design teams that are collaborating on a single project, several design programs have introduced shared libraries where global assets in a project can be kept up to date. Designers just need to install the library for that project and voilà! They’ll have all of the updated project assets to use in their design. Updating and sharing a component is as easy as adding it to the shared library where others can access it. This added feature really helps our team ensure designs are consistent by using the most updated versions of colors, type styles, elements and components.

Several design programs have also added features to help bridge the gap between design and front-end development. Some now allow for the export of styles as CSS or even Sass. Several programs like Zeplin, Invision (via the Craft Plugin and Inspect feature) and Avocode convert designs to CSS, allowing developers to get the styling information they need without opening a native design file. Assessing colors, typography, spacing, and even exporting raster and vector assets can all be done using one of the above programs. They also play well with Adobe Photoshop and Sketch files by directly syncing with those programs and eliminating the need to export designs and then individually upload them.

Tools Are Only Part of the Process

Design tools have come a long way since the beginning of responsive design. They’ve evolved to help teams solve new problems and enable teams to communicate more efficiently. But, as designers, we need to know that tools will only get you so far. As John Maeda said, “Skill in the digital age is confused with mastery of digital tools, masking the importance of understanding materials and mastering the elements of form.” Know the problems that need to be solved, the goals needed to be accomplished, and find the best solution that speaks to the user. Don't get too focused on the tools.

Finally, every team is different and so is their toolset. Just keep in mind that you should pick the toolset that fits your team’s process the best. What may work for one, may not work for another.

Introduction to the Factory Pattern

I have written in the past about other design patterns and why knowing that they exist and using them is important. In this article, I will tell you about another popular one, the factory class pattern.

Some of the biggest benefits of Object Oriented Programming are code reuse and organization using polymorphism. This powerful tool allows you to create an abstract Animal class and then override some of that behavior in your Parrot class. In most languages, you can create an instance of an object using the new keyword. It is common to write your code anticipating that you'll require an Animal at run-time, without the benefit of foresight as to whether it will be a Dog or a Frog. For example, imagine that your application allows you to select your favorite animal via a configuration file, or that you gather that information by prompting the user of the application. In that scenario, when you write the code you cannot anticipate which animal class will be instantiated.  Here's where the factory pattern comes in handy.
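The setup described above can be sketched in a few lines of JavaScript. The class names mirror the article's illustrative ones; the `speak` method and its return values are invented for the sketch:

```javascript
// A base class whose behavior subclasses override (polymorphism).
class Animal {
  speak() {
    return '...';
  }
}

class Parrot extends Animal {
  speak() {
    return 'Polly wants a cracker';
  }
}

class Dog extends Animal {
  speak() {
    return 'Woof';
  }
}

// At write time we only know we'll have *some* Animal; which concrete
// class gets instantiated depends on run-time input such as a config
// file or a user prompt.
function describe(animal) {
  return animal.speak();
}
```

The `describe` function works with any Animal subclass; the open question the factory pattern answers is who decides which subclass to `new` up.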

The Animal Factory

In the factory pattern, there is a method that will take those external factors as input and return an instantiated class. That means that you can defer the creation of the object to some other piece of code and assume you'll get the correct object instance from the factory object or method. Essentially, the factory couples logic from external factors with the list of eligible classes to instantiate.

In the factory class pattern, it is common to end up with class names like UserInputAnimalFactory or ConfigAnimalFactory. However, in some situations you may need to decide which factory to use at runtime, so you need a factory of factories, like `ConfigAnimalFactoriesFactory`. That may sound confusing, but it is not uncommon.
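A "factory of factories" can be sketched like this. All class names below are hypothetical illustrations, not code from a real library:

```javascript
// Hypothetical: picks an animal class based on a configuration object.
class ConfigAnimalFactory {
  static create(config) {
    // ...would decide among animal classes based on config values...
  }
}

// Hypothetical: picks an animal class based on interactive user input.
class UserInputAnimalFactory {
  static create(input) {
    // ...would decide among animal classes based on what the user typed...
  }
}

// The factory of factories: decides, at run-time, which factory applies.
class ConfigAnimalFactoriesFactory {
  static getFactory(source) {
    return source === 'config' ? ConfigAnimalFactory : UserInputAnimalFactory;
  }
}
```

The caller asks the outer factory for the right inner factory, then asks that factory for the object it actually needs.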

I have created a nodejs package that will help to illustrate the factory pattern. It is called easy-factory, and you can download it from npm with npm install --save easy-factory. This package contains a factory base class that you can extend. In your extending class you only need to implement a method that contains the business logic necessary to decide which class should be used.

Let’s assume that your app offers the users a list of animal sounds. The user is also asked to select a size. Based on the sound name and the size, the app will print the name of the selected animal. The following code is what this simple app would look like.

// Import the animal factory. We'll get into it in the next code block.
const SoundAnimalFactory = require('./SoundAnimalFactory');
const soundNames = ['bark', 'meow', 'moo', 'oink', 'quack'];
// The following code assumes that the user chooses one of the above.
const selectedSoundName = scanUserAnimalPreference();
// Have the user select the animal size: 'XS', 'S', 'M', 'L', 'XL'.
const selectedSizeCode = scanUserSizePreference();
// Based on the sound name and the size get the animal object.
const factoryContext = {sound: selectedSoundName, size: selectedSizeCode};
const animal = SoundAnimalFactory.create(factoryContext, selectedSizeCode);
// Finally, print the name of the animal.
print(animal.getName());

There is no new keyword to create the animal object. We are deferring that job to SoundAnimalFactory.create(…). Also notice that we are passing two arguments to the factory: factoryContext and selectedSizeCode. The first is used to determine which class to use; the rest of the arguments are passed to that class’s constructor when calling it with the new keyword. Thus, with factoryContext the factory has enough information to create an instance of the needed object, and selectedSizeCode is passed to the constructor of the determined class. Ultimately you end up with something like new Cat('S').

The code for the factory will be aware of all the available animals and will inspect the `factoryContext` to decide which one to instantiate.

// SoundAnimalFactory.js
'use strict';

const FactoryBase = require('easy-factory');

class SoundAnimalFactory extends FactoryBase {

  /**
   * Decide which animal to instantiate based on the size and sound.
   *
   * @param {object} context
   *   Contains the keys: 'size' and 'sound'.
   *
   * @throws Error
   *   If no animal could be found.
   *
   * @return {function}
   *   The animal to instantiate.
   */
  static getClass(context) {
    if (typeof context.size === 'undefined' || typeof context.sound === 'undefined') {
      throw new Error('Unable to find animal.');
    }
    if (context.sound === 'bark') {
      return require('./dog');
    }
    else if (context.sound === 'meow') {
      if (context.size === 'L' || context.size === 'XL') {
        return require('./tiger');
      }
      return require('./cat');
    }
    // And so on.
  }

}

module.exports = SoundAnimalFactory;

Notice how our application code executes the method SoundAnimalFactory.create but our factory only implements SoundAnimalFactory.getClass. That is because, in this particular implementation of the factory pattern, the actual instantiation of the object takes place in the base class FactoryBase. What happens behind the scenes is that the create method calls getClass to determine which class to instantiate, and then calls new on it with any additional parameters. Note that your approach may differ if you don't use the easy-factory nodejs package.
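The division of labor between create and getClass can be sketched with a simplified base class. This is an illustration of the idea, not the actual easy-factory source, and the Cat and DemoFactory classes are invented for the example:

```javascript
// Simplified sketch of a factory base class: create() asks the subclass
// which class to use, then instantiates it with the remaining arguments.
class FactoryBase {
  static create(context, ...constructorArgs) {
    // When called as DemoFactory.create(...), `this` is DemoFactory,
    // so the subclass's getClass() is what runs here.
    const ClassToInstantiate = this.getClass(context);
    return new ClassToInstantiate(...constructorArgs);
  }

  static getClass(context) {
    throw new Error('Subclasses must implement getClass().');
  }
}

// A minimal subclass, mirroring the animal example.
class Cat {
  constructor(size) {
    this.size = size;
  }
}

class DemoFactory extends FactoryBase {
  static getClass(context) {
    if (context.sound === 'meow') {
      return Cat;
    }
    throw new Error('Unable to find animal.');
  }
}
```

Calling DemoFactory.create({sound: 'meow'}, 'S') then produces the equivalent of new Cat('S').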

Real-World Example

I use this pattern in almost all the projects I am involved in. For instance, in my current project we have different sources of content, since we are building a data pipeline. Each source contains information about different content entities in the form of JSON documents that are structured differently for each source. Our task is to convert incoming JSON documents into a common format. For that, we have denormalizer classes, which take the JSON document in a particular format and transform it to the common format. The problem is that for each combination of source and entity type, we need to use a different denormalizer object. We are using an implementation similar to the animal sound factory above. Instead of taking user input, in this case we inspect the incoming JSON document's structure to identify the source and entity type. Once our factory delivers the denormalizer object, we only need to call doStuffWithDenormalizer(denormalizer, inputDocument); and we are done.

// The input document comes from an HTTP request.
// inputDocument contains a 'type' key that can be: season, series, episode, …
// It also contains the 'source' parameter to know where the item originated.
const inputDocument = JSON.parse(awsSqs.fetchMessage());
// Build an object that contains all the info to decide which denormalizer
// should be used.
const context = {
  type: inputDocument.type,
  source: inputDocument.source
};
// Find what denormalizer needs to be used for the input document that we
// happened to get from the AWS queue. The DenormalizationFactory knows which
// class to instantiate based on the `context` variable.
const denormalizer = DenormalizationFactory.create(context);
// We are done! Now that we have our denormalizer, we can do stuff with it.
doStuffWithDenormalizer(denormalizer, inputDocument);

You can see how there are two clear steps. First, build an object, context, that contains all the information needed to decide which denormalizer to instantiate. Second, use the factory to create the appropriate instance based on that context variable.
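The selection logic inside such a factory might look like the sketch below. The denormalizer class names and source identifiers are hypothetical, not our actual project code:

```javascript
// Hypothetical denormalizer classes for specific (source, type) pairs.
class SeasonDenormalizerA {}
class EpisodeDenormalizerB {}

class DenormalizationFactory {
  // Maps a (source, type) pair from the context onto a denormalizer class.
  static getClass(context) {
    const key = `${context.source}:${context.type}`;
    const registry = {
      'sourceA:season': SeasonDenormalizerA,
      'sourceB:episode': EpisodeDenormalizerB,
    };
    if (!registry[key]) {
      throw new Error(`No denormalizer found for ${key}.`);
    }
    return registry[key];
  }
}
```

A lookup table keyed on the combined source and type keeps the branching flat even as new sources and entity types are added.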

As you can see, this pattern is a very simple and common way to get an instance of a class without knowing, at the time you write the code, which class needs to be instantiated. Simple as it is, it is important to identify this approach as a design pattern so you can communicate more efficiently and precisely with your team.

Lullabot at DrupalCon Baltimore

We’re joining forces this year with our friends at Pantheon to bring you a party of prehistoric proportions! It will be a night of friendly fun, good conversation, and dinosaurs at the Maryland Science Center. DrupalCon is always a great place to meet new people, and if you’re an old timer, it’s a great place to see old friends and make new ones.

The Maryland Science Center is only a short 10-minute walk from the Convention Center. Stop by to enjoy the harbor views, a tasty beverage, and dessert, or just to say “hello”. We promise the dinosaurs will behave themselves!

Lullabot’s 9ᵀᴴ Annual DrupalCon Party
Wednesday, April 26th at the Maryland Science Center
601 Light St.
Baltimore, MD 21230
8pm - 11pm

There will be nineteen of us Lullabots attending, five of whom will be presenting sessions you won’t want to miss. And, be sure to swing by booth 102 in the exhibit hall to pick up some fun Lullabot swag and of course, our famous floppy disk party invites. See you in Baltimore!

Speaker Sessions Wednesday, April 26

Mateu Aguiló Bosch
Advanced Web Services with JSON API
12pm - 1:00pm
308 - Pantheon

Matthew Tift
Drupal in the Public Sphere
2:15pm - 3:15pm
318 - New Target

Wes Ruvalcaba & David Burns
Virtual Reality on the Web - An Overview and "How to" Demo
3:45pm - 4:15pm
Community Stage - Exhibit Hall

Sally Young
Decoupled from the Inside Out
5pm - 6pm
318 - New Target
*Presenting with Preston So and Matthew Grill of Acquia

Thursday, April 27

Mateu Aguiló Bosch
API-First Initiative
10:45am - 11:45am
318 - New Target
*Presenting with Wim Leers of Acquia

What is WebVR?

The WebVR Landscape

WebVR is described as "an experimental JavaScript API that provides access to Virtual Reality devices, such as the Oculus Rift, HTC Vive, Samsung Gear VR, or Google Cardboard, in your browser."

WebVR is built using JavaScript that accesses the WebGL platform built into modern browsers through the HTML5 Canvas element. WebGL allows browsers low-level access to your machine’s GPU for more advanced graphics rendering.
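That low-level access starts with requesting a WebGL rendering context from a canvas element. A minimal sketch, assuming a browser environment (the helper function name is our own):

```javascript
// Sketch: request a WebGL context from a canvas element. Older browsers
// exposed the context under the 'experimental-webgl' name, so we fall
// back to that before giving up.
function getGLContext(canvas) {
  return canvas.getContext('webgl') ||
    canvas.getContext('experimental-webgl') ||
    null;
}
```

Libraries like three.js and A-Frame perform this kind of context setup for you, which is a large part of their appeal.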

WebVR Specification

Mozilla and Google have been instrumental in coming up with a W3C specification draft for WebVR which describes support for accessing virtual reality devices, including sensors and head-mounted displays on the Web. It’s not yet a standard, but progress is being made to get it there.

The W3C organized a Web & Virtual Reality Workshop in October of 2016, which was hosted by Samsung and sponsored by Google, Mozilla, and The Khronos Group. This workshop had around 14 different sessions covering Performance, Accessibility, Audio, 360º Video, Multi-user techniques, and next steps for specifying a slew of features.

GVRA - Global Virtual Reality Association

Acer Starbreeze, Google, HTC VIVE, Facebook’s Oculus, Samsung, and Sony Interactive Entertainment announced the creation of a non-profit organization of international headset manufacturers to promote the growth of the global virtual reality (VR) industry. The Global Virtual Reality Association (GVRA) will develop and share best practices for industry and foster dialogue between public and private stakeholders around the world.

GVRA’s mission is to promote responsible development and adoption of VR globally with best practices, dialogue across stakeholders, and research. GVRA will be a resource for industry, consumers, and policymakers interested in VR.

You can read more about the GVRA on its website.

While not focused specifically on WebVR, it’s encouraging that manufacturers are cooperating instead of just competing in order to come up with some standards that can help VR move forward in a collaborative fashion.

Browser Support

Browser support for the WebVR API currently exists in many cutting-edge browser releases, or is hidden behind a few flags that must be set in order to enable the experimental features. Each browser vendor publishes instructions for getting its builds working with your VR hardware.

Note that you can develop WebVR experiences without these, but in order to see how it looks and works on VR hardware (which is often different than you'd expect when developing on a 2D screen) you will want these.
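Because support is experimental, it's worth feature-detecting the API before using it. A small sketch (the helper names are our own; navigator.getVRDisplays is the draft API's entry point):

```javascript
// Sketch: detect whether the draft WebVR API is available. In a browser
// you would pass the global `navigator` object.
function hasWebVRSupport(nav) {
  return Boolean(nav && typeof nav.getVRDisplays === 'function');
}

// When the API is available, getVRDisplays() returns a Promise that
// resolves to an array of connected VRDisplay objects; otherwise we
// resolve to an empty array so callers can handle both cases uniformly.
function listDisplays(nav) {
  if (!hasWebVRSupport(nav)) {
    return Promise.resolve([]);
  }
  return nav.getVRDisplays();
}
```

This lets a page fall back to a regular 2D rendering path when no VR runtime is present.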

WebVR Tools

Here are some tools you can use to start building WebVR experiences now:

  • three.js - a JavaScript library making it easier to work with WebGL.
  • Primrose - one of the first WebVR frameworks. This JavaScript framework acts through listeners.
  • A-Frame - a JavaScript library with an entity-component system making it easier to work with Three.js.
  • A-Frame Inspector - a visual tool for inspecting and modifying your A-Frame layouts.
  • A-Frame React - a thin layer on top of A-Frame to bridge with ReactJS.
  • ReactVR - this is currently in a pre-release mode but aims to make use of ReactJS concepts to produce WebVR applications.

I leave you with a little bit of inspiration of what is possible in WebVR with A-Painter, a Tilt Brush clone built using A-Frame for the HTC Vive.


Fear Not: Emotional Fear in the Workplace

I love working in human resources and have spent nine years in this field (coming up on seven years with Lullabot). My most enjoyable workdays are the ones in which I get to help an actual person with a real-life problem. I like to be in the know, not because I’m a big control freak (I may be a little one), but because I like to be in a position to help others.

Besides working at Lullabot, I also go to college. This semester, I elected to take a business leadership course. It might be applicable to my job, I thought, as I clicked “enroll”. WHOA. “Might” could not have been more wrong. Every time I read another chapter in my textbook, I find myself nodding along, thinking about the way Lullabot operates. It’s a company that certainly fosters an environment of participative leadership.

I thank my lucky stars that I am taking this course at THIS point of my life, as an employee of Lullabot, and not as my 18-year-old-this-is-what-my-parents-said-to-do-after-high-school self. The words in the textbook have far more meaning than they otherwise would have. I am devouring this course like I’m starving and business leadership is the first food I can get my hands on.

One of my great honors at Lullabot is being part of the hiring team. I’m one team member who gets to correspond with potential candidates and am often the first voice they’ll hear on an initial screening phone call. It’s my pleasure to get to know people and ask a few simple questions. Sometimes, I’ll get asked a question or two, and more than once I’ve been asked, “What’s your favorite thing about working at Lullabot?” Undoubtedly, I’ll respond, “They treat us like adults.” Such a simple concept, really. Treat an adult…like an adult. Nevertheless, at previous companies where I’ve worked, stepping back and letting people do their work independently proves to be a hard thing for management to do.

Empowering people is the single most important element in managing our organization.  - Fred Smith, Chairman of FedEx

I stopped reading my textbook (The Art of Leadership, 5th Edition, by George Manning and Kent Curtis) when I got to Chapter 9: Empowerment in the Workplace and the Quality Imperative, because it spoke to me and I had to write out what I was feeling. In the section titled Communication Problems and Solutions, I came across a paragraph on fear:

Employees may find it hard to be truthful with leaders because of fear of punishment. If they have made a mistake, it may be difficult to communicate this information for fear of the leader’s reaction. Leaders can best circumvent this by showing appreciation for employees’ honesty and candor, even when they have made mistakes.

In 2010, Lullabot approached me with a project. I had helped them with a few events previously, and now they had a task that they could use some outside help on. It wasn’t an event this time, it was a mistake. Someone had made an eCommerce mistake, and they needed some manpower to fix it. This mistake was a little costly, and perhaps embarrassing, but the thing that stood out to me was—wait for it—no one was in trouble. No one was mad. No one was scared. No one lost their job. And it wasn’t a secret. Team members knew, and everyone was fine with it. This goes even further than “treat people like adults.” It’s treating people like humans. Everyone makes mistakes and Lullabot is OK with that. Our “Be Human” core value is my favorite. People are human for better or for worse.

This situation was one of the main reasons Lullabot earned my trust, thus gaining my loyalty. After helping fix the mistake, I was asked to stay on in an HR capacity with Lullabot. I often wonder if I owe my eventual hire to that mistake.

Safety, not courage, is (arguably) the opposite of fear. The opposing feeling to being scared is feeling secure. In my opinion, treating people like adults abolishes fear. But more than that, when we treat people like humans—with understanding, empathy, and respect—we instill a feeling of safety. So the next time someone asks me my favorite thing about working at Lullabot, I’ll amend my answer. My favorite thing about Lullabot is the feeling of safety. Safe to be who I am (a work in progress), responsible for my actions through good and bad, and trusted to make decisions every day.

The Infrastructure Team

Matt and Mike sit down with the infrastructure team and talk about the ins and outs of what it takes to keep Drupal.org’s web properties and services (such as Composer, Git, the testing suite, etc.) up and running.

A New Dimension for Designing the Web

Consider that designing for the traditional web of today is still a two-dimensional medium. The audience is looking at a flat screen. Users scroll up and down, sometimes left and right, on a two-dimensional plane. Your goal is to make those two dimensions convey meaning and order in a way that touches their hearts and minds. You want to give them an experience that is both memorable and meaningful but must do so within the confines of two dimensions.

Now consider how you would accomplish the same goal in a three-dimensional medium. There are new factors to consider such as the user’s head angle and their viewing radius. How do you make elements touchable in a manner that is intuitive without the benefit of haptic feedback? How do you deal with locomotion in 3D space when a user is sitting or standing still? Does your UI move with the user or disappear altogether? How do you use audio cues to get a user’s attention so that they turn their head to view what you want? We’re no longer designing in a flat medium, but within all the space encompassing a person.

It may seem like there are fewer constraints and that can be liberating, but it can also be a bit scary. The truth is that constraints make our work easier. Luckily, some guidelines are starting to emerge to govern designing for a three-dimensional medium.

There are also many design principles which stay the same. The basic way the human eye works hasn't changed. Principles such as contrast, color, and pattern recognition are still valid tools at our disposal. These concepts can still be useful as the building blocks in our new virtual medium. We may even be able to explore them like never before.

How VR will affect UX

VR comes with a new set of expectations from users and tools with which we can connect them to our medium.

Giving UX a Whole New Meaning with a Third Dimension

Interfaces will no longer be confined to two-dimensional screens sitting on a desk. With a display mounted to your head, the screen itself seems to go away, and as you move your head there is new canvas space available all around. The head-mounted display gives a sense of depth and scale, which we can utilize. Instead of a screen in front of us, we can have browsers and apps represented all around us.

Additionally, we want to think of how to organize these apps in the space around us. Spatial organization of our UI components can help a user to remember where they can find things. Or it could degrade into the nightmare desktop we’ve all seen with a massive clutter of icons.

Placing typical UI elements in such an environment can be tricky. It can be hard to figure out what is optimal when anything is possible. To get an idea of some factors involved in finding what is optimal, take a look at some research done by Mike Alger which is in turn based on some useful observations provided by Alex Chu of Samsung. Mike Alger’s research concludes that a measurable area looking something like this is optimal for user interface elements.


Mike Alger’s 18-minute video “VR Interface Design Pre-Visualisation Methods”

This conclusion is based on several factors, as explained by Mike Alger in his paper. The first is the field of view when a user is looking straight ahead. This varies per device, but to give you an idea, the Oculus Rift’s FOV is 94.2°. The second factor is the distance at which a user can comfortably see, around 0.5 to 1 meters from the user, due to how the eyes strain to focus beyond this range. Oculus recommends a distance of 0.75 meters to developers. The third factor is the horizontal head rotation of a user: the comfort zone is about 30° from center, with a maximum of 55° to the side. The fourth is head pitch, the angle up and down that is comfortable for a user to position their head: upwards, this is comfortable up to 20° with a maximum of 60°; downwards, it is comfortable up to 12° with a maximum of 40°.


Left: Seated angles of neck rotation.

Right: Combining rotation with FOV results in beginning zones for content.

Credit: Mike Alger - Visual Design Methods for Virtual Reality.
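Putting rough numbers on the horizontal case, the figures above can be combined in a back-of-the-envelope way: head rotation plus half the field of view gives the angular zone where content can sit. This is a sketch of the arithmetic the diagrams illustrate, not a substitute for Alger's full analysis:

```javascript
// Horizontal content zones derived from the numbers in the article
const halfFov = 94.2 / 2;        // half of the Oculus Rift's 94.2° FOV
const comfortableRotation = 30;  // comfortable horizontal head rotation
const maxRotation = 55;          // maximum horizontal head rotation

// Turning the head and looking toward the edge of the FOV extends the zone:
const comfortableZone = comfortableRotation + halfFov; // ≈ 77.1° from center
const maxZone = maxRotation + halfFov;                 // ≈ 102.1° from center

console.log(comfortableZone.toFixed(1), maxZone.toFixed(1));
```

The same addition works for the vertical angles, using the pitch limits instead of the rotation limits.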

Creating Memorable Experiences

An interesting side effect of experiencing virtual reality through a head-mounted display is that it ties in more directly with your memory, due to the feeling of being part of an environment rather than watching it on a screen. Our brains can retain a lot of data that we gather from on-screen reading and watching, but feeling as though you’re part of it all creates memories of the experience, not just retention of what we learned, as with reading.

Creating memories and engaging users on a level where they can almost forget that what they are experiencing is virtual is what we refer to as immersion. Immersion is at the heart of what VR applications are trying to accomplish. Since it is a core goal of a VR experience, we should try to understand the factors involved and the techniques that are quickly becoming best practices for creating an immersive experience.

Use Sound to Draw Attention

Positional audio is a way of seeing with your ears. We evolved having to deal with predators, so our brains know exactly where a sound comes from relative to our bodies. A recognizable sound behind you and to your left can quickly trigger not only an image of what made the sound, but an approximation of where it came from. It also triggers feelings within us. We might feel fear or comfort depending on what our brain relates the sound to and how close it is to us. As in real life, so it is in VR. We can use positional audio cues to gain a user’s attention or to give them information that is not presented visually. If you remember a time in VR when a sudden loud sound scared the pants off you, you can probably also recall everything else you were experiencing at that moment.

Scale can be a Powerful Tool

Another useful tool is scale and how it can affect the user. Creating a small model of something like a robot can make the user feel powerful, as though the robot were a cute toy. But scale the robot up to three or four times the size of the user and it’s now menacing. Depending on the memories you’re trying to create, scale can be a useful tool to shape how the user remembers the experience.

Create Beautiful Scenes to Remember

Visually stunning scenery can affect the immersion experience as well. A beautiful sunset or a starry night sky can give the user an environment to which they can relate. Invoking known places and scenes is an effective means of creating memorable experiences. It’s a means of triggering other similar memories and feelings and then building upon those with the direction you are trying to take the user. An immersed user may have been there before, but now there is a new experience to remember.

Make Learning Memorable

Teaching someone a new and wonderful thing can also be a useful memory trigger. Do you remember where you were when you first learned of HTML or Photoshop? The knowledge that sticks with us the most can trigger powerful images and the feelings we memorized in those moments. Heralded by some as an important catalyst in changing how we learn, VR has the potential to revolutionize education. We really are better able to create memories through the use of VR, and what better memories than those of learning new and interesting things? Or perhaps making the less interesting things much more interesting, and in doing so, learning them more fully.

More VR Resources You Need to Know

Cardboard Design Lab

A set of intrinsic rules you need to know in order to respect your users’ physiology and treat them carefully. Google has gathered some of these principles into an app so you can learn them through a great immersive experience.

WebVR Information

This is a convenient website with lots of links to information regarding the WebVR specification, how to try it out, and how to contribute to its formation. It’s a great quick stop for getting into the nitty-gritty of the idea behind WebVR and the reasoning behind it.

3D User Interfaces: Theory and Practice

This book addresses the critical area of 3D user interface design—a field that seeks to answer detailed questions that make the difference between a 3D system that is usable and efficient and one that causes user frustration, errors, and even physical discomfort.

Oculus Developer Center

Signing up for an account here can give you access to the developer forums which contain tons of information and discussion about VR, best practices, and shared experiences by other VR developers.

Learning Virtual Reality

Developing Immersive Experiences and Applications for Desktop, Web, and Mobile

The VR Book: Human-Centered Design for Virtual Reality

The VR Book: Human-Centered Design for Virtual Reality is not just for VR designers, it is for managers, programmers, artists, psychologists, engineers, students, educators, and user experience professionals. It is for the entire VR team, as everyone contributing should understand at least the basics of the many aspects of VR design.

Building an Enterprise React Application, Part 2


In the first part of this article, we talked a lot about architecture. This time around we’ll be looking at the build tools, processes and front-end code organization on the project. Many of the tools we used are popular and well-tested and I can recommend using any of them without hesitation.

I’d like to begin by briefly talking about adding dependencies (libraries, packages, etc.) to your JavaScript projects. If you’re coming from a Drupal background (as many of our readers are), then you may be quite cautious about adding dependencies to your projects. This is understandable since adding excessive modules to a Drupal site can have serious performance consequences.

With JavaScript projects, the situation is different. It’s very common to use micro-libraries, sometimes quite a few of them. The performance concerns in doing so in a JavaScript application are not as great. Rather than the number of dependencies, the focus should be on the size of the file(s) clients need to download to run your app.

The use of small libraries is a huge asset when building JavaScript applications because you can compose a tailored solution for the project at hand, rather than relying on a monolithic solution that may include a bunch of things you’ll never use. It’s really powerful to have that versatility.

Build Tools and Processes

Let’s look, one by one, at a few of the key tools that the front-end team used on the project.


npm

On this project we used npm to manage dependencies. Since then, however, Facebook has published yarn, which is probably the better choice if you’re starting a project today. Whether you use yarn or npm, the actual modules will still come from the npm registry.

Another thing I’d like to mention is our use of npm scripts instead of task runners like Grunt and Gulp. You can use npm scripts to do things like watch files for changes, run tests, transpile your code, add pre- and post-commit hooks, and much more. You may find task runners to be redundant in most cases.
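For example, a hypothetical scripts section in package.json might wire up the tasks mentioned above (the commands are illustrative, not taken from the project):

```json
{
  "scripts": {
    "build": "webpack",
    "watch": "webpack --watch",
    "lint": "eslint src/",
    "test": "mocha test/ --recursive"
  }
}
```

Running `npm run build` (or `npm test`) then invokes these directly, with no separate task runner involved.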


webpack

In even a small JavaScript application, you will likely have your code distributed in a number of different files—think ES6 modules, or React components. Webpack helps gather them all together, process them (ES6 transpiling, for example) and then output a single file—or multiple files, if that’s what you specify—to include in your application.

Browserify does much the same thing as webpack. Both tools process files. They aren’t task runners like Gulp or Grunt, which, as I mentioned, were unnecessary for us. Webpack has built-in hot swapping of modules, which is very helpful with development and is one reason why it’s widely preferred over Browserify for React projects.
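To make that concrete, here is a minimal, hypothetical webpack configuration; the entry path, output path, and babel-loader setup are assumptions for illustration, not the project's actual config:

```javascript
// webpack.config.js (sketch): bundle ES6 source into dist/bundle.js
const path = require('path');

module.exports = {
  entry: './src/app.js',
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js',
  },
  module: {
    rules: [
      // Run our .js files through Babel so ES6 works in older browsers
      { test: /\.js$/, exclude: /node_modules/, use: 'babel-loader' },
    ],
  },
};
```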


Babel

Babel is an indispensable part of a JavaScript build process. It lets you use ES6 (aka ES2015) today, even if your target browsers don’t yet support it. There are huge advantages to doing this—better variable scoping, destructuring, template literals, default parameters, and on and on.

The release of new versions of JavaScript now happens annually. In fact, ES7 was finalized in June of last year, and with Babel you can use most of those features now. Unless something significant changes, transpiling looks to be a permanent feature of JavaScript development.
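A few of those ES6 features in one tiny, runnable sketch (nothing here is project code):

```javascript
// Default parameter + template literal in an arrow function
const greet = (name = 'world') => `Hello, ${name}!`;

// Destructuring assignment pulls fields out of an object
const point = { x: 1, y: 2, z: 3 };
const { x, y } = point;

// Block scoping: `let` confines `i` to the loop, unlike `var`
let sum = 0;
for (let i = 1; i <= 3; i++) {
  sum += i;
}

console.log(greet(), x + y, sum); // Hello, world! 3 6
```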


PostCSS

Do you use Autoprefixer to add vendor prefixes to your CSS rules? If yes, then you’re already using PostCSS. PostCSS is simply a JavaScript tool used to transform CSS. It can replicate the things Sass and Less do, but it doesn’t make assumptions about how you want to process your CSS files. For example, you can use tomorrow’s CSS syntax today, write custom plugins, find and fix syntax errors, add linting to your stylesheets, and much more. If you’re already using a tool like Autoprefixer, it may make sense to go all in and simplify your build process.
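As an illustration, a hypothetical PostCSS setup that combines vendor prefixing with linting might look like this (the plugin choice is an assumption, not the project's actual configuration):

```javascript
// postcss.config.js (sketch)
module.exports = {
  plugins: [
    require('stylelint'),     // lint stylesheets as part of the build
    require('autoprefixer'),  // add vendor prefixes automatically
  ],
};
```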


ESLint

After first using ESLint, I wondered how I ever got by without it. ESLint helps enforce consistent JavaScript code style. It’s easy to use and if you’re in a big team, it’s essential. There’s also a plugin to add specific linting rules for React. Quick note: there is a new project called prettier that you might prefer for enforcing consistent code styling. I haven’t tried it out yet, but it looks promising.


stylelint

I alluded to stylelint above when I discussed PostCSS. It’s a linter for your CSS. As with ESLint, it provides great value in maintaining a consistent, readable codebase.

Testing tools

We used Mocha as our JavaScript testing framework and Chai as the assertion library, but there are other good ones. I also recommend using Enzyme to help with traversing your components during the testing process. We didn't use it on this project, but if we were starting again today we certainly would.

Build Tools and Processes Summary

You may have noticed a common thread between all the tools I listed above. They’re all JavaScript-centric. There are no Ruby gems or wrappers written in C or Java. Everything that is required is available through npm. If you’re a JavaScript developer (or you want to be) this is a big advantage. It greatly simplifies things. Plus, you can contribute to those projects to add functionality, or even fork them if need be.

Another thing to note: we’ve discarded tools that we could do without. PostCSS can do the job performed by Sass, so we went with PostCSS since we were already using it for vendor prefixes. npm scripts can do the job of Gulp and Grunt, so we used npm scripts since they are available by default. Front-end build processes are complex enough. When we can simplify, it’s a good idea to do so.

File Structure and Separation of Concerns

What is meant by the term, “separation of concerns”? From the perspective of front-end development, it’s commonly understood to mean that files that do similar things are separated out into their own directories. For example, there is commonly a folder for JavaScript files and another for CSS. Within those folders there are usually subfolders where we further categorize the files by what part of the application they apply to. It may look something like this:

JS/
  app.js
  Views/
    Header.js
CSS/
  global.css
  Header/
    Header.css

With the structure above, things are grouped by technology and then further categorized by the part of the application in question. But isn’t the actual focus of concern something else entirely? In our example above, what if we structured things more like this?

app/
  App.js
  Header/
    Header.js
    Header.css

Now we have both the CSS and JavaScript files in a single folder. Any other file that pertains solely to the header would also go in this folder, because the focus of concern is the header, not the various technologies used to create it.
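As a sketch of that idea (the file and component names are invented, and we assume a webpack setup with css-loader so CSS can be imported from JavaScript):

```javascript
// app/Header/Header.js — everything the header needs lives in this folder
import React from 'react';
import './Header.css'; // co-located styles, pulled in by the bundler

export default function Header(props) {
  return React.createElement('header', { className: 'site-header' }, props.title);
}
```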

When we first started using this file structure on the project, I was uncertain how well it would work. I was used to separating my files by technology and it seemed like the right way to do it. In short order, however, I saw how grouping by component created a more intuitive file structure that made it easier to track down what was going on.

CSS in JavaScript

Taking this line of thinking to its logical conclusion, why not just include all the code for a component in one file? I think this is a really interesting idea, but I haven’t had the opportunity to do it in a real project yet and we certainly didn’t do it on this project.

However, if you’re interested, there are some good solutions that I think are worth consideration, including styled components and Aphrodite. I realize that this suggestion is a bit controversial and many of you reading this will be put off by it, but after working on React projects for the last couple of years, it has started to make a lot of sense to me.

Wrapping Up…

I find it remarkable how much time is spent on the two topics we’ve addressed here. Front-end build processes can be complex. Some tools, such as webpack, can be difficult to get your head around at first and it can feel overwhelming when you’re getting started.

There are a few React starter projects I’d like to suggest that may help. The first one is from Facebook, called Create React App. It can help you get started very quickly, but there are some limits. Create React App doesn’t try to be all things to all people. It doesn’t include server rendering, for example, although you can add it yourself if needed.

Another very interesting project is Next.js. It’s a small React framework that does support server rendering, so it may be worth trying out. Server rendering of JavaScript applications is important for many projects, certainly for any public websites you want to build. It’s also a fairly complex problem. There isn’t (to my knowledge) a simple, drop-in solution available at this time. Next.js may be the simplest, best supported path to getting started on a universal React app, but it may not be right for your team or project.

We’ve created our own React boilerplate that supports server rendering, so feel free to look it over and take away what’s useful. Of course, there are many other React boilerplate projects out there—too many to mention—but I encourage you to look a few of them over if your team is planning a React project. Reviewing how other teams have approached the same problems will help you decide what’s going to be the best choice for your team.

HTTPS Everywhere: Deep Dive Into Making the Switch


In the previous articles, HTTPS Everywhere: Security is Not Just for Banks and HTTPS Everywhere: Quick Start With CloudFlare, I talked about why it’s important to serve even small websites using the secure HTTPS protocol, and provided a quick and easy how-to for sites where you don’t control the server. This article is going to provide a deep dive into SSL terminology and options. Even if you are offloading the work to a service like Cloudflare, it’s good to understand what’s going on behind the scenes. And if you have more control over the server you’ll need a basic understanding of what you need to accomplish and how to go about it.

At a high level, there are a few steps required to set up a website to be served securely over HTTPS:

  1. Decide what type of certificate to use.
  2. Install a signed certificate on the server.
  3. Configure the server to use SSL.
  4. Review your site for mixed content and other validation issues.
  5. Redirect all traffic to HTTPS.
  6. Monitor the certificate expiration date and renew it before it expires.

Your options are dependent on the type of certificate you want and your level of control over the website. If you self-host, you have unlimited choices, but you’ll have to do the work yourself. If you are using a shared host service, you’ll have to see what SSL options your host offers and how they recommend setting it up. Another option is to set up SSL on a proxy service like the Cloudflare CDN, which stands between your website and the rest of the web.

I’m going to go through these steps in detail.

Decide Which Certificate to Use

Every distinct domain needs a certificate, so if you are serving content at both example.com and www.example.com, both domains need to be certified. Certificates are provided by a Certificate Authority (CA). There are numerous CAs that will sell you a certificate, including DigiCert, VeriSign, GlobalSign, and Comodo. There are also CAs that provide free SSL certificates, like Let’s Encrypt.

Validation Levels

There are several certificate validation levels available.

Domain Validation (DV)

A DV certificate indicates that the applicant has control over the specified DNS domain. DV certificates do not assure that any particular legal entity is connected to the certificate, even if the domain name seems to imply one. The name of the organization will not appear next to the lock in the browser, since the controlling organization is not validated. DV certificates are relatively inexpensive, or even free. It’s a low level of authentication, but it provides assurance that the user is not on a spoofed copy of a legitimate site.

Organization Validation (OV)

OV certificates verify that the applicant is a legitimate business. Before issuing the SSL certificate, the CA performs a rigorous validation procedure, including checking the applicant's business credentials (such as the Articles of Incorporation) and verifying the accuracy of its physical and Web addresses.

Extended Validation (EV)

Extended Validation certificates are the newest type of certificate. They provide more validation than the OV level and adhere to industry-wide certification guidelines established by leading Web browser vendors and Certificate Authorities. To signal the higher degree of validation, the name of the verified legal entity is displayed in the browser, in green, next to the lock. EV certificates are more expensive than DV or OV certificates because of the extra work they require from the CA. EV certificates convey more trust than the other alternatives, so they are appropriate for financial and commerce sites, but they are useful on any site where trust is important.

Certificate Types

In addition to the validation levels, there are several types of certificates available.

Single Domain Certificate

An individual certificate is issued for a single domain. It can be either DV, OV, or EV.

Wildcard Certificate

A wildcard certificate will automatically secure any sub-domains that a business adds in the future. It also reduces the number of certificates that need to be tracked. A wildcard domain would be something like *.example.com, which would include www.example.com, mail.example.com, blog.example.com, etc. Wildcards work only with DV and OV certificates. EV certificates cannot be provided as wildcard certificates, since every domain must be specifically identified in an EV certificate.

Multi-Domain Subject Alternative Name (SAN)

A multi-domain SAN certificate secures multiple domain names on a single certificate. Unlike a wildcard certificate, the domain names can be totally unrelated. It can be used by services like Cloudflare that combine a number of domains into a single certificate. All domains are covered by the same certificate, so they have the same level of credentials. A SAN certificate is often used to provide multiple domains with DV-level certification, but EV SAN certificates are also available.

Install a Signed Certificate

The process of installing an SSL certificate is initiated on the server where the website is hosted by creating a 2048-bit RSA public/private key pair, then generating a Certificate Signing Request (CSR). The CSR is a block of encoded text containing information that will be included in the certificate, like the organization name and location, along with the server’s public key. The CA then uses the CSR and the public key to create a signed SSL certificate, or a certificate chain: multiple certificates in which each certificate vouches for the next. The signed certificate or certificate chain is then installed on the original server. The public key is used to encrypt messages, and they can only be decrypted with the corresponding private key, making it possible for the user and the website to communicate privately with each other.
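On a typical Linux server, that first step looks something like the following (example.com and the subject fields are placeholders; substitute your own domain and organization details):

```shell
# Generate a 2048-bit RSA private key
openssl genrsa -out example.com.key 2048

# Generate a Certificate Signing Request from that key
openssl req -new -key example.com.key -out example.com.csr \
  -subj "/C=US/ST=Florida/L=Sarasota/O=Example Inc/CN=example.com"
```

The resulting example.com.csr file is what you submit to the CA; the .key file stays on the server and must be kept private.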

Obviously, this process only works if you have shell access or a control panel UI on the server. If your site is hosted by a third party, it will be up to the host to determine how, if at all, they will allow their hosted sites to be served over HTTPS. Most major hosts offer HTTPS, but specific instructions and procedures vary from host to host.

As an alternative, there are services, like Cloudflare, that provide HTTPS for any site, no matter where it is hosted. I discussed this in more detail in my previous article, HTTPS Everywhere: Quick Start With CloudFlare.

Configure the Server to Use SSL

The next step is to make sure the website server is configured to use SSL. If a third party manages your servers, like a shared host or CDN, this is handled by the third party and you don’t need to do anything other than determine that it is being handled correctly. If you are managing your own server, you might find Mozilla's handy configuration generator and documentation about Server Side TLS useful.

One important consideration is that the server and its keys should be configured for PFS, an abbreviation for either Perfect Forward Security or Perfect Forward Secrecy. Prior to the implementation of PFS, an attacker could record encrypted traffic over time and store it. If they got access to the private key later, they could then decrypt all that historic data with the private key. Security around the private key might be relaxed once the certificate expires, so this is a genuine issue. PFS ensures that even if the private key gets disclosed later, it can’t be used to decrypt prior encrypted traffic. An example of why this is important is the Heartbleed bug, where PFS would have prevented some of the damage caused by Heartbleed. If you’re using a third-party service for SSL, be sure it uses PFS. Cloudflare does, for instance.
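In practice, forward secrecy comes from preferring ECDHE (or DHE) key exchange in the server's cipher configuration. A minimal nginx-style sketch follows; the cipher list is trimmed for illustration, so use a tool like Mozilla's configuration generator for a real deployment:

```nginx
ssl_protocols TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
```

Because ECDHE negotiates a fresh, ephemeral key for each session, a later compromise of the certificate's private key cannot decrypt recorded traffic.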

Normally SSL certificates have a one-to-one relationship to the IP address of their domains. Server Name Indication (SNI) is an extension of TLS that provides a way to manage multiple certificates on the same IP address. SNI-compatible browsers (most modern browsers are SNI-compatible) can communicate with the server to retrieve the correct certificate for the domain they are trying to reach, which allows multiple HTTPS sites to be served from a single IP address.

Test the server’s configuration with Qualys' handy SSL Server Test. You can use this test even on servers you don’t control! It will run a battery of tests and give the server a security score for any HTTPS domain.

Review Your Site for HTTPS Problems

Once a certificate has been installed, it’s time to scrutinize the site to be sure it is totally valid using HTTPS. This is one of the most important, and potentially time-consuming, steps in switching a site to HTTPS.

To review your site for HTTPS validation, visit it by switching the HTTP in the address to HTTPS and scan the page source. Do this after a certificate has been installed; otherwise, the validation error from the missing certificate may prevent other validation errors from even appearing.

A common problem that prevents validation is the problem of mixed content, or content that mixes HTTP and HTTPS resources on the page. A valid HTTPS page should not include any HTTP resources. For instance, all JavaScript files and images should be pulled from HTTPS sources. Watch canonical URLs and link meta tags, as they should use the same HTTPS protocol. This is something that can be fixed even before switching the site to HTTPS, since HTTP pages can use HTTPS resources without any problem, just not the reverse.

There used to be a recommendation to use protocol-relative links, such as //example.com/script.js instead of http://example.com/script.js, but now the recommendation is to just always use HTTPS when available, since an HTTPS resource works fine under either protocol.

Absolute internal links should not conflate HTTP and HTTPS references. Ideally, all internal links should be relative links anyway, so they will work correctly under either HTTP or HTTPS. There are lots of other benefits of relative links, and few reasons not to use them.

For the most part, stock Drupal websites already use relative links wherever possible. In Drupal, some common sources of mixed content problems include:

  • Hard-coded HTTP links in custom block content.
  • Hard-coded HTTP links added by content authors in body, text, and link fields.
  • Hard-coded HTTP links in custom menu links.
  • Hard-coded HTTP links in templates and template functions.
  • Contributed modules that hard-code HTTP links in templates or theme functions.
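A quick, hypothetical way to hunt for hard-coded HTTP links before flipping the switch (the directory and file here are stand-ins; point grep at your own theme and module directories):

```shell
# Create a demo file standing in for a theme template, then scan it
mkdir -p demo_theme
printf '<img src="http://example.com/logo.png">\n' > demo_theme/page.html.twig

# Recursively list files that still reference plain-HTTP resources
grep -rl 'http://' demo_theme/
```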

Most browsers will display HTTPS errors in the JavaScript console. That’s the first place to look if the page isn’t validating as HTTPS. Google has an example page with mixed content errors where you can see how this looks.

Redirect all Traffic to HTTPS

Once you’ve assured yourself that your website passes SSL validation, it’s time to make sure that all traffic goes over HTTPS instead of HTTP. You need 301 redirects from your HTTP pages to their HTTPS equivalents. If the website was already in production on HTTP, search engines have already indexed your pages, and the 301 redirect tells them that the new pages are a replacement for the old ones.

If you haven’t already, you need to decide whether you prefer the bare domain or the www version of your address. You should already be redirecting traffic from one to the other for good SEO. Once you include the HTTP and HTTPS protocols, you have at least four potential addresses to consider: the HTTP and HTTPS versions of both the bare and the www domains. One of those should survive as your preferred address, and you’ll need to set up redirects to reroute traffic from all the others to that preferred location.

Specific details about how to handle redirects on the website server will vary depending on the operating system and configuration on the server. Shared hosts like Acquia Cloud and Pantheon provide detailed HTTPS redirection instructions that work on their specific configurations. Those instructions could provide useful clues to someone configuring a self-hosted website server as well.
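As one concrete illustration, here is roughly what the redirect could look like on a self-hosted Apache server. This is a sketch, not a drop-in configuration: the domain names are placeholders, and the preferred address (www over HTTPS, in this example) is an arbitrary choice.

```apache
# Send all HTTP traffic, for both hostnames, to the canonical
# HTTPS address with a permanent (301) redirect.
<VirtualHost *:80>
  ServerName example.com
  ServerAlias www.example.com
  Redirect 301 / https://www.example.com/
</VirtualHost>
```

A matching redirect from `https://example.com/` to `https://www.example.com/` would live in the port 443 virtual host, so that all four address variants funnel into one.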

HTTP Strict Transport Security (HSTS)

The final level of assurance that all traffic uses HTTPS is to implement the HTTP Strict Transport Security (HSTS) header on the secured site. The HSTS header creates a browser policy to always use HTTPS for the specified domain. Redirects are good, but there is still the potential for a Man-in-the-Middle to intercept the HTTP communication before it gets redirected to HTTPS. With HSTS, after the first communication with a domain, the browser will always initiate communication over HTTPS. The HSTS header contains a max-age directive that determines when the policy expires, but the max-age is reset every time the user visits the domain. The policy will never expire if the user visits the site regularly; it only expires if they fail to visit within the max-age period.
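If you control the web server directly, the header is a one-line addition. The Apache sketch below is illustrative; the one-year max-age and `includeSubDomains` are common choices, but a cautious rollout often starts with a much shorter max-age, since a mistakenly deployed HSTS policy cannot be quickly undone for returning visitors.

```apache
# Emit the HSTS header on the secured virtual host only, once you are
# confident every page and subdomain works over HTTPS.
<VirtualHost *:443>
  ServerName www.example.com
  Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"
</VirtualHost>
```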

If you’re using Cloudflare’s SSL, as in my previous article, you can set the HSTS header in Cloudflare’s dashboard. It’s a configuration setting under the “Crypto” tab.

Local, Dev, and Stage Environments

A final consideration is whether to use HTTPS on all environments, including local, dev, and stage environments. That is truly HTTPS everywhere! If the live site uses HTTPS, it makes sense to use HTTPS in all environments for consistency.

HTTPS Is Important

Hopefully, this series of articles provides convincing evidence that it's important for sites of all sizes to start using the HTTPS protocol, and some ideas of how to make that happen. HTTPS Everywhere is a worthy initiative!

Drupal Serialization Step-by-Step

In my previous article about the serializer component, I touched on the basic concepts involved when serializing an object. To summarize, serialization is the combination of encoding and normalization. Normalizers simplify complex objects, like User or ComplexDataDefinition. Denormalizers perform the reverse operation. Using a structured array of data, they generate complex objects like the ones listed above.

In this article, I will focus on the Drupal integration of the Symfony serializer component. To do so, I will guide you step by step through an example module I created, which you can find on GitHub. I have created a separate commit for each step of the process, and this article links to the code for each step at the beginning of each section. You can also use the GitHub UI to browse the code at any point and see the diff.


When this module is finished, you will be able to transform any content entity into a Markdown representation of it. Rendering a content entity with Markdown might be useful if you wanted to send an email summary of a node, for instance, but the real motivation is to show how serialization can be important outside the context of web services.

Add a new normalizer service

These are the changes for this step. You can browse the state of the code at this step here.

Symfony’s serializer component begins with a list of normalizer classes. Whenever an object needs to be normalized or serialized, the serializer loops through the available normalizers to find one that declares support for the type of object at hand, in our case a content entity. To add a class to the list of eligible normalizers, you need to create a new tagged service.

A tagged service is just a regular service definition that comes with an entry in the module’s `*.services.yml` file so the service container can find it and instantiate it whenever appropriate. For a service to be a tagged service, you need to add a `tags` property with a name. You can also add a `priority` integer to convey precedence with respect to services tagged with the same name. For a normalizer to be recognized by the serialization module, you need to use the tag name `normalizer`.
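A service definition along these lines would do the job. This is a sketch: the module name `entity_markdown`, the service id, the class path, and the priority value are all assumptions for illustration.

```yaml
# entity_markdown.services.yml (module and class names are assumed).
services:
  entity_markdown.normalizer.content_entity:
    class: Drupal\entity_markdown\Normalizer\ContentEntityNormalizer
    tags:
      # The 'normalizer' tag name is what the serialization module's
      # compiler pass collects; priority breaks ties between normalizers.
      - { name: normalizer, priority: 8 }
```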

When Drupal core compiles the service container, our newly created tagged service is added to the serializer’s list in what is called a compiler pass; this is the place in Drupal core where that happens. The service container is then cached for performance reasons, which is why you need to clear caches after adding a new normalizer.

Our normalizer is an empty class at the moment; we will fix that shortly. First, we need to turn our attention to another collection of services that needs to be added to the serializer: the encoders.

Include the encoder for the Markdown format

These are the changes for this step. You can browse the state of the code at this step here.

Similarly to a normalizer, the encoder is added to the serialization system via a tagged service. It is crucial that this service implements `EncoderInterface`. Note that at this stage the encoder does not yet contain its most important method, `encode()`; it does, however, contain `supportsEncoding()`. When the serializer component needs to encode a structured array, it tests all the available encoders (those tagged services) by executing `supportsEncoding()` with the format specified by the user. In our case, if the user specifies the 'markdown' format, `supportsEncoding()` returns `TRUE` and our encoder is selected to transform the structured array into a string via its `encode()` method. We will write that method later. First, let me describe the normalization process.
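At this stage the encoder class could look something like the sketch below. The namespace, class name, and format constant are assumptions; only the `EncoderInterface` contract and the role of `supportsEncoding()` come from the text above.

```php
<?php

namespace Drupal\entity_markdown\Encoder;

use Symfony\Component\Serializer\Encoder\EncoderInterface;

/**
 * Illustrative sketch of a Markdown encoder (names are assumptions).
 */
class MarkdownEncoder implements EncoderInterface {

  const FORMAT = 'markdown';

  /**
   * The serializer calls this with the user-provided format; returning
   * TRUE for 'markdown' selects this encoder.
   */
  public function supportsEncoding($format) {
    return $format === static::FORMAT;
  }

  /**
   * Turns the normalized structured array into a string.
   */
  public function encode($data, $format, array $context = []) {
    // Intentionally empty for now; implemented in a later step.
  }

}
```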

Normalize content entities

The normalization process differs each time: it depends on the format you want to turn your objects into, and on the type of objects you want to transform. In our example, we want to turn a content entity into a Markdown document.

For that to happen, the serializer will need to be able to:

  1. Know when to use our normalizer class.
  2. Normalize the content entity.
  3. Normalize any field in the content entity.
  4. Normalize all the properties in every field.

Discover our custom normalizer

These are the changes for this step. You can browse the state of the code at this step here.

For a normalizer to be considered a good fit for a given object it needs to meet two conditions:

  • Implement the `NormalizerInterface`.
  • Return `TRUE` when calling `supportsNormalization()` with the object to normalize and the format to normalize to.

The process is nearly the same as the one used to choose an encoder. The main difference is that we also pass the object to be normalized to the supportsNormalization() method. That is a critical part, since it is very common to have multiple normalizers for the same format, depending on the type of object that needs to be normalized. A Node object will be turned into a structured array by different code than an HttpException. We take that into account in our example by checking whether the object being normalized is an instance of ContentEntityInterface.
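Taken together, the two conditions boil down to a very small amount of code. The class skeleton below is a sketch with assumed names; only the interface and the instanceof check come from the description above.

```php
<?php

namespace Drupal\entity_markdown\Normalizer;

use Drupal\Core\Entity\ContentEntityInterface;
use Symfony\Component\Serializer\Normalizer\NormalizerInterface;

/**
 * Sketch showing only the discovery logic (names are assumptions).
 */
class ContentEntityNormalizer implements NormalizerInterface {

  /**
   * Claim support only for content entities, and only when the caller
   * asked for the 'markdown' format.
   */
  public function supportsNormalization($data, $format = NULL) {
    return $format === 'markdown' && $data instanceof ContentEntityInterface;
  }

  public function normalize($object, $format = NULL, array $context = []) {
    // Covered in the next section.
  }

}
```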

Normalize the content entity

These are the changes for this step. You can browse the state of the code at this step here.

This step contains a first attempt to normalize the content entity that gets passed as an argument to the normalize() method of our normalizer.

Imagine that our requirements are that the resulting Markdown document needs to include an introductory section with the entity label, entity type, bundle, and language. After that, we need a list with all the field names and the values of their properties. For example, the body field of a node will result in the name field_body and the values for format, summary, and value. In addition, any field can be single- or multi-valued, so we will take that into consideration.

To fulfill these requirements, I've written a bunch of code that deals with the specific use case of normalizing a content entity into a structured array ready to be encoded into Markdown. I don't think the specific code is essential to explaining how normalization works, but I've added code comments to help you follow the logic.

You may have spotted the presence of a helper method called normalizeFieldItemValue() and a comment that says Now transform the field into a string version of it. Those two are big red flags suggesting that our normalizer is doing more than it should: it is implicitly normalizing objects that are not of type ContentEntityInterface but of type FieldItemListInterface and FieldItemInterface. In the next section, we will refactor ContentEntityNormalizer to defer that implicit normalization to the serializer.

Recursive normalization

These are the changes for this step. You can browse the state of the code at this step here.

When the serializer is initialized with the list of normalizers, it checks whether each of them implements SerializerAwareInterface. For those that do, the serializer adds a reference to itself, so that nested objects can be serialized or normalized during the normalization process. Our ContentEntityNormalizer extends SerializerAwareNormalizer, which implements that interface. The practical impact is that we can call $this->serializer->normalize() from within ContentEntityNormalizer. We will use that to normalize all the field item lists in the entity and the field items inside them.

First, turn your attention to the new version of ContentEntityNormalizer. The normalizer is now limited to the parts that are specific to the entity, like the label, the entity type, the bundle, and the language. The normalization of each field item list is done in a single line: $this->serializer->normalize($field_item_list, $format, $context);. We have reduced the lines of code by almost half, and the cyclomatic complexity of the class even further. This has a great impact on the maintainability of the code.
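The refactored normalize() method could take roughly this shape. This is a sketch, not the module's exact code: the entity accessors shown are standard Drupal content entity API calls, but the Markdown labels and array layout are assumptions.

```php
public function normalize($entity, $format = NULL, array $context = []) {
  // Entity-specific parts: label, entity type, bundle, and language.
  $normalized = [
    '# ' . $entity->label(),
    'Entity type: ' . $entity->getEntityTypeId(),
    'Bundle: ' . $entity->bundle(),
    'Language: ' . $entity->language()->getId(),
  ];
  // Each field item list is delegated back to the serializer, which will
  // pick whichever normalizer supports FieldItemListInterface.
  foreach ($entity->getFields() as $field_item_list) {
    $normalized[] = $this->serializer->normalize($field_item_list, $format, $context);
  }
  return $normalized;
}
```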

All this code has now been moved to two different normalizers:

  • FieldItemListNormalizer contains the code that deals with normalizing single and multivalue fields. It uses the serializer to normalize each individual field item.
  • FieldItemNormalizer contains the code that normalizes the individual field items values and their properties/columns.

You can see that for the serializer to recognize our new `FieldItemListNormalizer` and `FieldItemNormalizer` classes, we need to add them to the service container, just as we did for the ContentEntityInterface normalizer.

A very nice side effect of this refactor, besides the maintainability improvement, is that a third-party module can build upon our code more easily. Imagine that this module wants to make all field labels bold. Before the refactor, it would need to introduce its own normalizer for content entities, play with the service priority so it gets selected before ours, and copy a large blob of our code just to make the desired tweak. After the refactor, the third-party module only needs a normalizer for the field item list (which outputs the field label) with a higher priority than ours. That is a great win for extensibility.

Implement the encoder

As we said above, the most important part of the encoder is encapsulated in the `encode()` method. That is the method in charge of turning the structured array from the normalization process into a string. In our particular case, we treat each entry of the normalized array as a line in the output, and then append any prefix or suffix that may apply.
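A minimal version of that method could look like this. It is a sketch under the assumption that the normalized structure is a nested array of scalar strings; the prefix/suffix handling mentioned above is deliberately omitted.

```php
public function encode($data, $format, array $context = []) {
  // Flatten the (possibly nested) normalized array; each scalar entry
  // becomes one line of the final Markdown document.
  $lines = [];
  array_walk_recursive($data, function ($item) use (&$lines) {
    $lines[] = (string) $item;
  });
  return implode("\n", $lines);
}
```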

Further development

At this point, the Entity Markdown module is ready to take any content entity and turn it into a Markdown document. The only remaining question is how to execute the serializer. To run it programmatically, you only need to do:

\Drupal::service('serializer')->serialize(Node::load(1), 'markdown');

However, there are other options. You could declare a REST format, as the HAL module does, so that you can make an HTTP request and get a Markdown representation of the node in response (after configuring the corresponding REST settings).


The serialization system is a powerful tool that allows you to reshape an object to suit your needs. The key concepts you need to understand when creating a custom serialization are:

  • Tagged services for discovery
  • How a normalizer and an encoder get chosen for the task
  • How recursive serialization can improve maintainability and limit complexity

Once you are familiar with the serializer component, you will start noticing use cases for it. Instead of writing hard-coded solutions with poor extensibility and maintainability, start leveraging the serialization system.