Lullabot

Get a decorator for your Drupal home

Design patterns

Software design patterns are general, reusable solutions to recurring problems in software design. They were popularized in 1994 by the “Gang of Four” with their book Design Patterns: Elements of Reusable Object-Oriented Software.

These generic solutions are not snippets of code, ready to be dropped into your project, nor a library that can be imported and reused. Instead, they are templated solutions to common software challenges. Design patterns can also be seen as best practices for dealing with an identified problem.

With Drupal 8’s leap into modern PHP, design patterns are going to be more and more relevant to us. The change from (mostly) procedural code to (a vast majority of) object oriented code is going to take the Drupal community at large through a journey of adaptation to the new programming paradigm. Quoting the aforementioned Design Patterns: Elements of Reusable Object-Oriented Software:

[…] Yet experienced object-oriented designers do make good designs. Meanwhile new designers are overwhelmed by the options available and tend to fall back on non-object-oriented techniques they've used before. It takes a long time for novices to learn what good object-oriented design is all about. Experienced designers evidently know something inexperienced ones don't. What is it?

Even if you don’t know what they are called, you have probably been using design patterns already by appealing to common sense. When you learn their names, you’ll think “Oh! So that thing I have been doing for a while is called an adapter!”. Having a label and knowing the correct definition will help you communicate better.

The decorator

Although there are many design patterns you can learn, today I want to focus on one of my favorites: the decorator.

The decorator pattern allows you to do unobtrusive behavior inheritance. We can have many of the benefits of inheritance and polymorphism without creating a new branch in the class’ inheritance tree. That sounds like a mouthful, but this concept is especially interesting in the Drupal galaxy.

In Drupal, when you are writing your code, you cannot possibly know which other modules your code will run alongside. Imagine that you are developing a feature that enhances the entity.manager service, and you decide to swap core’s entity.manager service with your enhanced version. Now when Drupal uses the manager, it will execute your manager, which extends core’s manager and overrides some methods to add your improvements. The problem arises when another module does the same thing. In that situation, either you are replacing that module’s spiced-up entity manager or that module is replacing your improved alternative. There is no way to get both improvements at the same time.

This is the type of situation where the decorator pattern comes in handy. You cannot have this new module inheriting from your manager, because you don’t know if all site builders want both modules enabled at the same time. Besides, there could be an unknown number of modules, even ones that are not written yet, that may create the conflict again and again.

Using the decorator

In order to create our decorator we’ll make use of the interface of the object we want to decorate. In this case it is EntityManagerInterface. The other key component is the object that we are decorating, let’s call it the subject. In this case our subject will be the existing object in the entity.manager service.

Take, for instance, a decorator that does some logging every time the getStorage() method is invoked, for debugging purposes. To create a decorator, you need a pristine class that implements the interface and receives the subject.

  class DebuggableEntityManager implements EntityManagerInterface {
    protected $subject;

    public function __construct(EntityManagerInterface $subject) {
      $this->subject = $subject;
    }
  }

The key concept for a decorator is that we are going to delegate all method calls to the subject, except the ones we want to override.

  class DebuggableEntityManager implements EntityManagerInterface {
    protected $subject;

    // ...

    public function getViewModeOptions($entity_type_id) {
      return $this->subject->getViewModeOptions($entity_type_id);
    }

    // ...

    public function getStorage($entity_type) {
      // We want to add our custom code here and then call the "parent" method.
      $this->myDebugMethod($entity_type);
      // Now we use the subject to get the actual storage.
      return $this->subject->getStorage($entity_type);
    }
  }

As you have probably guessed, the subject can be the default entity manager, that other module’s spiced-up entity manager, etc. In general, you’ll take whatever the entity.manager service holds: any object that implements EntityManagerInterface will do.

Another nice feature is that you can decorate an already decorated object. That allows you to have multiple decorators adding different features without changing the inheritance. You can now have a decorator that adds logging to every method the entity manager executes, on top of a decorator that adds extra definitions when calling getFieldDefinitions(), on top of …
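
To make the stacking concrete, here is a minimal sketch. The LoggingEntityManager class is hypothetical, but it would follow the same shape as DebuggableEntityManager above:

  // Grab whatever the entity.manager service currently holds.
  $subject = \Drupal::service('entity.manager');
  // Wrap it in the debugging decorator, then wrap that in a (hypothetical)
  // logging decorator. Each layer receives the previous one as its subject.
  $manager = new LoggingEntityManager(new DebuggableEntityManager($subject));
  // The result still implements EntityManagerInterface, so any code that
  // type-hints the interface works without a single change.
  $storage = $manager->getStorage('node');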

I like the coffee-making example in the decorator pattern entry on Wikipedia, even if it’s written in Java instead of PHP. It’s a simple example of the use of decorators, and it reminds me of delicious coffee.

Benefits and downsides

One of the downsides of using the decorator pattern is that you can only act on public methods. Since you are –intentionally– not extending the class of the decorated object, you don’t have access to any private or protected properties and methods. There are a couple of situations like the following where the decorator pattern may not be the best match:

  • The business logic you want to override is contained in a protected method, and that method is reused in several places. In this case you would end up overriding all the public methods from which that protected one is called.
  • You are overriding a public method that is executed inside other public methods. In such a scenario you would not want to delegate the execution to the subject, because that delegation would bypass your overridden public method.

If you don’t find yourself in one of those situations, you’ll discover that the decorator has other additional benefits:

  • It favors the single responsibility principle.
  • It allows you to do the decoration at run time, whereas subclassing can only be done at compile time.
  • Since the decorator pattern is one of the commonly known design patterns, you will not have to thoroughly describe your implementation approach during the daily scrum. Instead, you can be more precise and just say “I’m going to solve the problem using the decorator pattern. Tada!”.

Write better designs

Design patterns are a great way to solve many complex technical problems. They are a heavily tested and discussed topic with lots of examples and documentation in many programming languages. That does not imply that they are your new golden hammer, but a very solid source of inspiration.

In particular, the decorator pattern allows you to add features to an object at run-time while maintaining the object’s interface, thus making it compatible with the rest of the code without a single change.

Tugboat: A Visual Website QA Tool for Enterprise Teams

Matt and Mike sit down with a number of Lullabots to talk about Tugboat, which allows you to "review every feature or bug fix with automatic builds." In developer-speak, it's an easy-to-use, multi-platform pull-request environment generation tool. Tugboat is one of the ingredients of Lullabot's secret sauce, and it's awesome!

The AMP Module for Drupal 8

The mobile web can be an experience that leaves a lot to be desired for the end user. In January 2016, Google and Lullabot started working together on a Drupal 8 module to support the Accelerated Mobile Pages (AMP) Project, an open-source initiative to help make the mobile web faster and more user-friendly. For publishers, AMP helps create content that is not only optimized for mobile, but also loads instantly, virtually anywhere.

One of the most touted features of Drupal is its flexibility, so making Drupal produce AMP HTML required a lot of careful consideration with regard to its design approach. In a recent article, Matthew Tift, a senior developer here at Lullabot, discusses why this open-source project is so important to the future of the web, and what to expect from the Drupal 8 AMP module we created.


The team who worked on the AMP Project for Drupal 8 (a Drupal 7 module is forthcoming) also submitted a proposal to speak about the project at DrupalCon 2016.

UX for Brains: Let’s Be Honest, People Suck

I really love psychology and learning about what makes people tick. I even debated psychology as a possible career, but settled for a minor in order to focus on my design obsession full time. After graduation, I thought I‘d left that psychologist’s hat behind me. But like many recent grads, I was wrong. Now, as a user experience designer, I bring these foundations into my work constantly. If you're a designer or work on the web, what I'm about to share are some basic ways to leverage psychology in your work.

Understand Psychology To Understand People

Using a psychological perspective to approach design problems isn’t meant to side-step UX practices, but to enhance them. I think of it as adding dimension to your practices and teams, by allowing you to see out of two eyes instead of just one.

As a designer in this ever-evolving technical world, it is essential to have a core understanding of the fundamentals of human perception, cognition, and behavior, especially as they apply to how humans use websites. Utilize psychology as part of your own design practice to better tap into your users. There are volumes out there—not in the design section—full of timeless explanations of what motivates people, what their needs are, how they complete tasks, and how they think...or, you know, don’t. And as Donald Norman—the cognitive psychologist who originally coined the term UX—puts it in his must-read design bible, The Design of Everyday Things:

The one thing I can predict with certainty is that the principles of human psychology remain the same, which means that the design principles here, based on psychology, on the nature of human cognition, emotion, action, and interaction with the world, will remain unchanged.
Donald Norman, The Design of Everyday Things: Revised and Expanded Edition

People Aren't Always The Users We Dream Of

Ok, I admit, I have an underlying reason for pushing you all to learn about people through psychology. There is one observation that I have made in particular that I would hope for more designers to discover for themselves. And no matter how I try to sugarcoat it, or back it up with study findings, everyone thinks I’m a bummer when I share it. Simply, there are some things about human bodies, minds, and habits that kind of suck. Look at some of my examples, then decide for yourself how much credit we should give our users.

Before I jump into the quantitative statistics, let me ease into the fun with a qualitative persona. This is a composite of users I’ve tested, all rolled up into my grandmother. I pray she never finds this article.

Think of this persona while you work, and consider how she might struggle with an interaction.

Elderly Eleanor is an older woman and mother of eight, who first experienced computers in the DOS era. Eleanor wrote technology off early after making some irreversible mistakes. And the poor dear had no Google, Siri, or user-centric customer support teams to help, so she gave up until her family forcibly gifted technology back into her life. Now, she has a PC and an iPad. She uses them to look at weather predictions (Maine gets quite a bit of snow) and to stay connected with her family through social media. But she doesn’t really understand how it works, and often points her camera way above her head during Skype calls. She marvels when a family member buys something online. But if they try to explain it, she will cut them off to say that it’s beyond her.

I know that Eleanor is not our standard user. But keep her in mind as you design your next experience, as the ultimate usability test. WWED?

The Technology Adoption Cycle shows us how rare our innovators and early adopters are.

While Eleanor’s aversion to technology is not the norm, it’s not too far off; in fact, an aversion to technology is mainstream, as we can see in a standard technology adoption lifecycle. This trend can be easy to overlook when we are the innovators building the technology, but the adventurous innovators and adopters are rare. The rest of humanity is not nearly so fearless, and will wait a long time before trying something new.

And They’re Needy

Not only do people interact with our experiences in less than ideal ways—more on that to follow—but they also have less than ideal expectations. People need a lot, and expect even more. If you need a refresher on the hierarchy of needs, check out Abraham Maslow’s pyramid: the gist is that people seek to meet their most basic needs first, like food and safety. As soon as these are met, instead of feeling satisfaction, now they need the next thing, and the next, in an endless and frustrating adaptation cycle known as the hedonic treadmill.

From Smashing Magazine, this pyramid suggests the hierarchy of design needs, with functionality as our primary focus.

I like this Smashing Magazine spin-off pyramid for a design hierarchy of needs. It follows the same bottom-up process, but with functionality at the bottom and creativity at the top. This means that people need a working product well before they need a beautiful one, so our priorities as a design team should follow this hierarchy. Another interesting observation here is the modest value provided by each of these levels. It’s frustrating, isn’t it? So much of our hard work to launch a working product is taken for granted. 

Take a deep breath before we look at the next chart.

The Kano Model is a theory of customer satisfaction developed in 1984 by Professor Noriaki Kano.

The Kano model was developed in the 80’s, but is still a classic. Note the trends of basic needs (must-haves), performance needs (wants), and latent needs (delighters). First of all, the must-haves are completely below the threshold of satisfaction, so it’s not enough to meet users’ basic needs. As far as they’re concerned, it’s the bare minimum that an app lets them sign up, keeps their information saved but private, doesn’t crash, and on and on and on. Wants can go either way, and some people will appreciate these extra thoughtful features and interactions. Users will delight at thoughtful experiences that meet their higher emotional and intellectual needs, but this can be a tall order, since these latent needs are the ones users struggle most to articulate. I quote and re-quote Henry Ford, the automobile pioneer, on this: “If I had asked people what they wanted, they would have said faster horses.”

Humans Miss A LOT Of Goings-On

One of the hardest steps when people interact with something you’ve made is the first one: getting users to discover your hard work. I really like information architect Steve Krug’s way of describing the situation:

We’re thinking “great literature”... the user’s reality is much closer to “billboard going by at 60 miles an hour”.
Steve Krug, Don't Make Me Think

So much for that thoughtfully-crafted experience. As you’ll see, humans have tons of mechanisms built in to protect them from actually caring about stuff. Let’s start with our eyes.

Our cone cells produce a small, precise foveal area, while our rod cells fill in the rest with our blurry peripheral vision.

In the center of our eyeballs, our cone receptor cells capture light with quality and focus. The rod cells around the rest of the back of the eye are lazier, and can’t distinguish things like color or shape all that well (but they shine when it comes to night vision). The cones have a lot of heavy lifting to do, but sadly, they only create an area of focused vision no bigger than your thumbnail, known as your foveal area. The rest of the world (and your website) is getting seen in our blurry peripheral vision, but we don’t notice because our eyes—and foveal areas—are constantly moving to fill in the gaps. It’s not too surprising that a lot of things get overlooked.

The next step requires keeping our attention, which is borderline goldfish status. As I’m sure you know, humans are overwhelmed with a firehose of information. We divide our attention between many distractions, and as a result, have a hard time focusing on anything. It doesn’t take long to lose focus: almost half of users give up if a page takes three or more seconds to load. Our attention spans are just minutes long—how many minutes exactly is debatable because it’s getting shorter all the time. These minutes are broken up by distractions too, so you can really only expect about eight focused seconds of unbroken attention to a task at a time. Eight!

And it’s way easier for a user to get distracted while using your product at home, online, alone (as compared to supervised or incentivized test subjects). Even though it’s fiction, I think this Portlandia bit does a nice job of telling the story. Point being, take that aspect of your usability tests with a grain of salt. For example—this one always cracks me up—think about the last time you went into a retail store, got a cart, filled it up with stuff, and then got bored and left that cart in an aisle and walked out. Hopefully you can’t, because who does that? And yet online cart abandonment is 67 percent!

With all the distractions we are already faced with, users are averse to anything that could be mistaken for a distraction. We have to seriously commit ourselves to getting from point A (our desk chairs) to point B (purchasing those picture frames from Target’s online store), so any pop-ups are probably going to get killed immediately. Even that helpful overlay with 15% off brushed silver frames gets the X, because we made the snap judgment to terminate it before reading and comprehending any value. While I get that we’re all busy, I would argue that this behavior goes deeper than that. Often designers come up with unique, novel interactions that require learning unique, branded terminology, and I usually end up hating it, because I don’t want to learn some crazy new mobile gesture if I came to your app to achieve a goal that wasn’t entertainment. Again, A to B.

UCLA psychologists asked 85 undergrads to draw the Apple logo from memory, and only one got it right. From Gizmodo.

Going back to the example: let’s say you’re giving your full focus to the picture frame hunt, because you are going to get that Christmas shopping done early this year, gosh darn it. Even when you think you’re giving something your full attention, your brain is doing you a solid and ignoring most of it. People learn as little as they can and instead rely on salient cues to get by. A recent study discussed on Gizmodo leveraged a popular brand to illustrate this concept: when participants were asked to draw the Apple logo from memory, most of them weren’t even close. At least they’re all apples?

This heatmap shows us how users skim past carousels, due to an evolved banner blindness. From ConversionXL.

Heatmaps provide an even more relevant example, showing us how humans skim down the left side of a webpage in an F-shaped pattern, without processing everything. Note how people are getting accustomed to ads and tuning them out. But you’ve probably heard of banner blindness; the exciting twist is that this has mutated so that people ignore anything that even looks like an ad. Like, say, carousels: turns out hero image carousels generate the same blindness, and consequently waste a lot of important space. So please don’t use them!

There is another wrinkle to this content discovery hurdle, which usability consultant Jakob Nielsen sums up succinctly in his Nielsen Norman Group article:

“How users read on the web: they don’t.”
Jakob Nielsen

Humans are wired for language, but specifically for speaking it, and not for reading it. Evolution hasn’t had nearly enough time to catch up to the Gutenberg press, and so we skim. Users at most have the time to read around 20% of the words on a webpage during an average visit. And even if we do read something, we will probably forget it, because, spoiler alert, our memories are garbage.

Even More Minefields

There are tons of pitfalls that users can struggle with over the different steps of human interactions: discovery, thinking, and acting/decision-making.

The section above outlined the pitfalls of discovery, which is just the first step in human interactions. There are tons of other problems that arise in the steps that follow: thinking, interacting, and decision-making. I will spare you the novel and give you the highlights: people come from different backgrounds, cultures, and places, and often misinterpret the message; they create and reinforce incorrect mental models of the world; they’re irrational, often making emotionally-biased or too-quick snap decisions instead of thoughtful judgments; they make all kinds of mistakes, from motor malfunctions and slips to memory biases and full-on gaps.

Now What?

To be clear, I’m not telling you all of this so you feel bad, and then try to correct your own behavior. I’m telling you so you can accept these limitations, expect them from your users, and adapt as a designer. There is hope. In my next article, I will outline suggestions for enhancing usability and reducing friction around some of these struggles. But I want to touch on the biggest challenge in the meantime, so you can start adapting your design practices: stop designing experiences for us, for the “interactive 1%”. Before, I asked you to think about how much credit you give users. I think often the truth is that we give our users far too much credit. Maybe we assume people will do the right thing, or that they’ll understand if they just take time to read through the page and figure it out. I don’t think this is doing our users any favors. We’ve seen that they can’t be expected to process something the same way that we do, and they won't keep trying for more than a few seconds.

It’s easy to misjudge intricate interaction patterns as common sense because we are extremely familiar with them. Even basic interactions can be difficult to understand for “pedestrians”—this was a term an old design professor used to describe any non-designer. The biggest mistake designers make, in my humble opinion, is leaning too heavily on design trends without checking for visual communication basics. Designers are taking the signifiers out of interactions more and more, in the name of aesthetic value, but at the cost of hurting usability and adding unnecessary confusion. I know flat design is sexy and everything, but make sure your buttons actually look like clickable buttons before—and after—you ship.

Do your due diligence and make sure you aren’t alienating anyone before jumping into design. Do some user research to understand your users’ needs as best you can, prioritize those needs, then start with the basics. Conduct usability tests to understand how users try to meet those needs, making sure your designs serve the user's purpose and fade into the background.

Finally, remember that it’s a mistake to assume that our users’ needs and expectations will be much like our own, and simply design for ourselves. Psychology offers us valuable insights towards what users actually need, and a little effort toward understanding our users can go a long way towards creating more successful designs.

Who's Who in VR

Some of us at Lullabot have been following the Virtual Reality and WebVR scene for a couple of years now. It’s a rich landscape with a lot of interest being thrown into the ring and attracting some high stakes players in the industry. We thought we’d share with you who some of the top hardware companies are in this space and what they’re building to advance the future of VR.

Facebook’s Oculus, and Samsung

Facebook bought Oculus for $2b in March of 2014. Oculus makes a tethered headset called the Rift, which uses a Samsung OLED screen and includes headphones, an Xbox controller, a remote, and positional tracking. They also have hand-tracked controllers, which will be sold separately.

The Rift has had more than a few versions so far. The Development Kit 1 and 2 (commonly called the DK1 and DK2) were limited in production and sold to developers for building experiences. There were also some prototypes shown off but not released, such as the HD prototype, Crystal Cove, and Crescent Bay. The consumer version is selling for $599 and will be shipping to consumers in the first and second quarters of this year. That doesn’t include the high-end computer you’ll also need to run this technical marvel.

Samsung partnered with Oculus to produce a mobile headset called the GearVR, which uses one of about five Samsung phone models as the screen and processor, with apps bought through the Oculus Store and paid for through Samsung Pay.

The GearVR is the most highly polished VR experience on the market. Samsung’s Oculus store is locked down the way Apple’s app store is, and you can only use a few different Samsung phones (Note 4, Note 5, S6, S6 Edge and S6 Edge Plus), but this helps ensure a superior VR experience from the moment you snap your phone into the lightweight headset. Headsets sell for $100 USD and the phones can run you from somewhere around $400 to $800. Still the best experience for under $1,000.

Valve's SteamVR, and HTC

Valve is the maker of the SteamVR software platform. HTC partnered with Valve to produce a tethered headset called the HTC Vive, which uses an HTC screen, hand controllers, positional tracking, and a front-facing camera.

The Vive’s big differentiator is what it calls “Room Scale” VR. Two laser position sensors track an area of up to 15x15 feet (about 4.5x4.5 meters) and capture the user, allowing you to move around within a virtual space. You can see what I mean by watching this video of Fantastic Contraption, which is being built for the Vive.

Again, being a tethered headset means that along with buying the headset you also need a high-end computer to run games and applications at that magical 90fps smoothness.

Google’s Cardboard, and Magic Leap

Google makes the software and a very inexpensive open source mobile headset called Cardboard to achieve the dual rendering on any Android or iOS phone. Just fold your cardboard like a puzzle, insert the lenses, download the software, and insert your phone.

While Google Cardboard is making VR as cheap as possible, and therefore accessible to the widest range of consumers, the experience (while amazing) is still subpar compared to many of the others. But then, if you haven’t experienced the other solutions, you’re not really going to know the difference anyway, so you really have nothing to lose in trying out Cardboard.

Magic Leap is an augmented reality company, backed by Google, that supposedly uses retinal projection (no one outside the company has seen the device yet) to overlay digital images on the real world.

They have released a few videos, and they’ve hired Neal Stephenson, the author of the famous Snow Crash, as their “Chief Futurist”. Google invested $542m USD in 2014. Total funding is somewhere around $1.4b USD, and the post-money valuation on Magic Leap is around $4.5b USD. Needless to say, the people who have seen this technology are being wowed, but the secrecy around it may also be partly hype. We just don’t know, but we’re very hopeful.

Google has also reportedly created a department for VR and there are several job listings now that mention VR in them.

Sony’s PlayStation VR

Sony is building a VR system for its PlayStation 4 platform called PlayStation VR, which uses a tethered headset, positional tracking, and hand controllers connected to the video game console.

Sony is said to have hundreds of game titles in the works for PlayStation VR, which is, of course, mostly focused on the gaming market. They had an earlier prototype you may have heard of, Project Morpheus. The consumer-grade hardware should launch in the first half of 2016, and with many people already owning a PS4, Sony has a great market and a leg up on those who still need to buy a high-end computer.

These are some of the major players, but others are worth noting as well.

Friendly reminder: this is by no means a comprehensive list, just some of the top players.

It’s also been speculated that Apple is jumping into the VR/AR space: it holds several patents related to VR, has hired a human-computer interaction researcher who specializes in 3D interfaces, and hired a former Microsoft employee who worked on the HoloLens. Read more here for other evidence.

AMPing up Drupal

Today we are proud to announce a new Drupal 8 module that provides support for the Accelerated Mobile Pages (AMP) Project. The AMP Project is a new open source initiative, led by Google, which drastically improves the performance of the mobile web. In January 2016, Lullabot and Google started working together to create the Drupal AMP module. A beta version of the AMP module for Drupal 8 is available immediately, and we are starting work on a Drupal 7 version that will be available in mid-March.

In many cases, the mobile web is a slow and frustrating experience. The AMP Project is an open source initiative that embodies the vision that publishers can create mobile optimized content once and have it load instantly everywhere. When AMP was first introduced last October, many commentators immediately compared it to Facebook's Instant Articles and Apple's News app. One of the biggest differentiators between AMP and other solutions is the fact that AMP is open source.

AMP HTML is, essentially, a subset of HTML. And it really makes the web fast. AMP HTML is designed to support smart caching, predictable performance, and modern, beautiful mobile content. Since AMP HTML is built on existing web technologies, publishers continue to host their own content, craft their own user experiences, and flexibly integrate their advertising and business models -- all using familiar tools, which will now include Drupal!

One of the most touted features of Drupal is its flexibility, so making Drupal produce AMP HTML has required a lot of careful consideration of the design approach. To make Drupal output AMP HTML, we have created an AMP module, AMP theme, and a PHP Library.

When the AMP module is installed, AMP can be enabled for any content type. At that point, a new AMP view mode is created for that content type, and AMP content becomes available on URLs such as node/1/amp or node/article-title/amp. We also created special AMP formatters for text, image, and video fields.

The AMP theme is designed to produce the very specific markup that the AMP HTML standard requires. The AMP theme is triggered for any node delivered on an /amp path. As with any Drupal theme, the AMP theme can be extended using a subtheme, allowing publishers as much flexibility as they need to customize how AMP pages are displayed. This also makes it possible to do things like place AMP ad blocks on the AMP page using Drupal's block system.

The PHP Library analyzes HTML entered by users into rich text fields and reports issues that might make the HTML non-compliant with the AMP standard. The library does its best to make corrections to the HTML, where possible, and automatically converts images and iframes into their AMP HTML equivalents. More automatic conversions will be available in the future. The PHP Library is CMS agnostic, designed so that it can be used by both the Drupal 8 and Drupal 7 versions of the Drupal module, as well as by non-Drupal PHP projects.

We have done our best to make this solution as turnkey as possible, but the module, in its current state, is not feature complete. At this point only node pages can be converted to AMP HTML. The initial module supports AMP HTML tags such as amp-ad, amp-pixel, amp-img, amp-video, amp-analytics, and amp-iframe, but we plan to add support for more of the extended components in the near future. For now the module supports Google Analytics, AdSense, and DoubleClick for Publisher ad networks, but additional network support is forthcoming.

While AMP HTML is already being served by some of the biggest publishers and platforms in the world — such as The New York Times, The Economist, The Guardian, BBC, Vox Media, LinkedIn, Pinterest, and many more! — you don't have to be a large publisher to take advantage of AMP. Today, any Drupal 8 site can output AMP HTML using the AMP module. We invite you to try it out and let us know what you think!

Finally, if you are interested in this topic and want to learn more about publishing AMP with Drupal, please leave a comment on our DrupalCon New Orleans proposal.

New and shiny vs Good old software

“It depends.” That may be the most-used sentence when evaluating the correct technology for a given challenge. Choosing the right technology is not a trivial problem. When the needs are poorly defined, or not defined at all, it may not even be possible to suggest a good solution. But even if the problem is broken down into simple technical challenges, there are multiple factors that influence the implementation of the solution. Budget constraints, internal knowledge, timelines, resourcing issues, corporate policies, integration with existing systems, technical feasibility, and many other factors will influence the final decision.

Leaving these other—extremely valid—considerations aside, let’s focus on the technical part. Oftentimes you have different frameworks, applications, libraries, and SaaS offerings that can provide a solution with different degrees of success. The sheer size of this list of options can complicate the decision. It is common practice to check what everyone else is doing. Doing market research is a good first step, but it should not determine the final decision.

Many times when a new technology arises, there is media hype. All of a sudden, many computer-science blogs and news sites are flooded with posts explaining the merits of this groundbreaking new solution. Often, these communications are very passionate and contagious, and that’s good! Their goal is to get other people to try the new tools. Information flows rapidly and allows people who are solving that particular problem to assess whether they want to try a new direction.

This situation can result in what I call the shiny effect. I admit that I have been blinded by it in the past. I have spent large amounts of time learning frameworks that turned out to be great for only a narrow set of use cases. That has made me cautious about how I approach new tools, even though I still get excited about them.

One widely used argument is based on authority. (If all these multi-million dollar organizations are using this solution for the exact same problem I have, then I should do the same. After all, they would never choose to implement it without careful analysis.) Resorting to the argument from authority will not always lead you to a reliable answer. The market trend will not always be accurate; there are multiple examples of that.

I remember how, in a not so distant past, NoSQL databases were postulated by some as the future; some went so far as to say that SQL was dead. There was a time when Apache was to be completely usurped by other alternatives. The LAMP stack was supposed to be replaced by the MEAN stack without leaving any trace. SOAP and RPC were to disappear because of REST, while REST in turn was to become irrelevant because of GraphQL. And, if those predictions had held, no one would be using Basic Auth at this point. It goes without saying that the opposite kind of prophecy was made too: emerging solutions that were declared doomed to disappear, and never did.

None of that happened. All the tools and technologies listed above have found a way to coexist and share the spectrum of solutions. Even when they seemed to overlap at the beginning, in most cases the emerging alternatives ended up being a better solution only for a subset of scenarios. That is a very big win for everyone: we now have two different tools that are very good at solving similar problems. Even if you can build a search backend in MySQL, you are probably going to have a better experience if you do it with Solr, for instance.

Disregarding well-proven technological solutions with prolific ecosystems is dangerous. Failing to learn about new solutions that may prove better at solving your problem is just as dangerous. We really need to take the time to understand the problem that the new solution is fixing before jumping into it with both feet. Carey Flichel attributes this, amongst other causes, to boredom and lack of understanding in Why Developers Keep Making Bad Technology Choices:

It’s no surprise then that we want to try something new, even if we’ve adequately solved a problem before. We are natural puzzle solvers, and sometimes you just want to try a new puzzle.

Choosing the right solution often involves checking what everyone else is doing, and then analyzing the problem for yourself while taking all the options into consideration. You must not trust new solutions just because they're new, any more than you trust old solutions just because they're old. Instead, zero in on the problem to be solved and find the right solution regardless of the buzz. Keep every technology solution on the table until you understand the nuances of the problem space, and let that be your guiding light. Being aware of all the technologies involved, and knowing which is the best choice, takes time and a lot of research. Many times this will require a software architect to guide you.

Javascript aggregation in Drupal 7


What is it? Why should we care?

Javascript aggregation in Drupal is just what it sounds like: it aggregates Javascript files that are added during a page request into a single file. Modules and themes add Javascript using Drupal’s API, and the Javascript aggregation system takes care of aggregating all of that Javascript into one or more files. Drupal does this in order to cut down on the number of HTTP requests needed to load a page. Fewer HTTP requests is generally better for front-end performance.

In this article we’ll take a look at the API for adding Javascript, paying specific attention to the options affecting aggregation in order to make the best use of the system. We’ll also take a look at some common pitfalls and how you can avoid them using the Advanced Aggregation (AdvAgg) module. This article mostly looks at Drupal 7; however, much of what we’ll cover applies equally to Drupal 8, and we’ll also look specifically at what’s changed in Drupal 8.

If fewer is better, why multiple aggregates?

Before we dig in: I said that fewer HTTP requests are generally better for front-end performance, yet you may have noticed that Drupal creates more than one Javascript aggregate for each page request. Why not a single aggregate? The reason is to take advantage of browser caching. If you had a single aggregate with all the Javascript on the page, any difference in the aggregate from page to page would cause the browser to download a different aggregate with a lot of the same code. This situation arises when Javascript is conditionally added to the page, as is the case with many modules and themes. Ideally, we would group Javascript that appears on every page apart from conditionally added Javascript. That way the browser would download the every-page Javascript aggregate once and subsequently load it from cache on other page requests on your website. Drupal’s Javascript aggregation attempts to give you the ability to do exactly that, striking a balance between making fewer HTTP requests and leveraging browser caching.

The Basics

Let’s take a step back and briefly go over how to use Javascript aggregation in Drupal 7. You can turn on Javascript aggregation by going to Site Configuration > Development > Performance in your Drupal admin. Check off "Aggregate Javascript files" and Save. That’s it. Drupal will start aggregating module and theme Javascript from there on out.

Using the API

Once you’ve enabled Javascript aggregation, great! You’re on your way to sweet, sweet performance gains. But you’ve got modules and themes to build, and Javascript to go along with them. How do you make use of Javascript aggregation provided by Drupal? What kind of control do you have over the process?

There are a couple of API entry points for adding Javascript to the page, and most have parameters that let you tell Drupal how to aggregate your Javascript. Let’s look at each in turn.

drupal_add_js

This is a loaded function that is used to add a Javascript file, externally hosted Javascript, a setting, or inline Javascript to the page. When it comes to Javascript aggregation, only Javascript files are aggregated by Drupal, so we’re going to focus on adding files. The other types (inline and external) do have important impacts on Javascript aggregation though, which we’ll get into later.
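
For example, a file addition with the aggregation-related options spelled out might look like this (the module name and file path are illustrative):

  // Add a Javascript file; the options array controls how it is aggregated.
  drupal_add_js(drupal_get_path('module', 'mymodule') . '/js/mymodule.js', array(
    'type' => 'file',
    'scope' => 'footer',
    'group' => JS_DEFAULT,
    'every_page' => TRUE,
    'weight' => 0,
  ));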

hook_library/drupal_add_library

Drupal also provides a mechanism to register Javascript/CSS "libraries" associated with a module. This is nice because you can specify the options to pass to drupal_add_js in one place. That gives you the flexibility to call drupal_add_library in multiple places, without having to repeat the same options. The other nice thing is that you can specify dependencies for your library and Drupal will take care of resolving those dependencies and putting them in the right order for you.

I like to register every script in my module as a library, and only add scripts using drupal_add_library (or the corresponding #attached method, see below). This way I’m clearly expressing any dependencies and in the case where my script is added in more than one place, the options I give the aggregation system are all in one place in case I need to make a change. It’s also a nice way to safely deprecate scripts. If all the dependencies are spelled out, you can be assured that removing a script won’t break things.

Instructions for using hook_library and drupal_add_library are well documented. However, one important thing to note is the third parameter to drupal_add_library, every_page, which is used to help optimize aggregation. At registration time in hook_library, you typically don’t know if a library will be used on every page request or not, especially if you’re registering libraries for other module authors to use. This is why drupal_add_library has an every_page parameter. If you’re adding a library unconditionally on every page, be sure to set the every_page parameter to TRUE so that Drupal will group your script into a stable aggregate with other scripts that are added to every page. We’ll discuss every_page in more detail below.
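
Here’s a minimal sketch of the pattern, with hypothetical module and script names; note the dependency declaration and the every_page flag in the drupal_add_library() call:

  /**
   * Implements hook_library().
   */
  function mymodule_library() {
    $path = drupal_get_path('module', 'mymodule');
    $libraries['mymodule.behaviors'] = array(
      'title' => 'MyModule behaviors',
      'version' => '1.0',
      'js' => array(
        $path . '/js/behaviors.js' => array('group' => JS_DEFAULT),
      ),
      // Drupal resolves dependencies and orders the scripts for you.
      'dependencies' => array(
        array('system', 'jquery.once'),
      ),
    );
    return $libraries;
  }

  // Elsewhere, when the library really is added on every page, say so:
  drupal_add_library('mymodule', 'mymodule.behaviors', TRUE);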

#attached

#attached is a render array property that lets you add Javascript, CSS, libraries, and arbitrary data to the page. #attached is the preferred approach for adding Javascript in Drupal. In fact, drupal_add_js and drupal_add_library are gone in Drupal 8 in favor of #attached. #attached is nice since it is part of a render array and can be easily altered by other modules and themes. It’s also essential for places where render arrays are cached. With caching in play, functions that build out a render array are bypassed in favor of a cached copy. In that scenario, you need to have your Javascript addition included in #attached on the render array, because a call to drupal_add_js/drupal_add_library in your builder function would otherwise be missed.

Drupal developers often fall back on drupal_add_js for lack of a render array in context. It’s common to add Javascript in places like hook_init, preprocess and process functions, where there is no render array available to add your Javascript via #attached. In these scenarios, consider whether your Javascript is more appropriately added via a hook_page_build, or hook_node_view, where there is a render array available to add your script via #attached. Not only will you begin to align yourself with the Drupal 8 way of doing things, but you open yourself up to being able to use the render_cache module, which can gain you some performance. See the documentation for drupal_process_attached for more information.
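
As a sketch of what that looks like in practice (module and file names are again hypothetical), Javascript can be attached wherever a render array is available:

  /**
   * Implements hook_page_build().
   */
  function mymodule_page_build(&$page) {
    $page['page_bottom']['mymodule'] = array(
      '#attached' => array(
        // Attach a file directly, with the usual aggregation options.
        'js' => array(
          drupal_get_path('module', 'mymodule') . '/js/tracking.js' => array(
            'scope' => 'footer',
            'every_page' => TRUE,
          ),
        ),
        // Or attach a library registered in hook_library().
        'library' => array(
          array('mymodule', 'mymodule.behaviors'),
        ),
      ),
    );
  }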

Controlling aggregation

Let’s take a close look at the parameters affecting Javascript aggregation. Drupal will create an aggregate for each unique combination of the scope, group, and every_page parameters, so it’s important to understand what each of these mean.

scope

The possible values for scope are ‘header’ and ‘footer’. A scope of ‘header’ will output the script in the <head> tag, while a scope of ‘footer’ will output the script just before the closing </body> tag.

group

The ‘group’ option takes any integer as a value, although you should stick to the constants JS_LIBRARY, JS_DEFAULT and JS_THEME to avoid excessive numbers of aggregates and follow convention.

every_page

The ‘every_page’ option expects a boolean. This is a way to tell the aggregation system that your script is guaranteed to be included on every page. This flag is commonly overlooked, but is very important. Any scripts that are added on every page should be included in a stable aggregate group, one that is the same from page to page, such that the browser can make good use of caching. The every_page parameter is used for exactly that purpose. Miss out on using every_page and your script is apt to be lumped into a volatile aggregate that changes from page to page, forcing the browser to download your code anew on each page request.
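
A quick contrast of the two cases (paths are illustrative):

  // Added unconditionally on every page: lands in the stable,
  // well-cached every-page aggregate.
  drupal_add_js($path . '/js/global.js', array('every_page' => TRUE));

  // Added only on some pages (say, a node form): leave every_page off
  // so it doesn't churn the every-page aggregate from request to request.
  drupal_add_js($path . '/js/node-form.js');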

What can go wrong

There are a few problems to watch out for when it comes to Javascript aggregation. Let’s look at some in turn.

Potential number of aggregates

The astute reader will have noticed that there is potential for a lot of aggregates to be created. When you combine the scope, group, and every_page options, the number of aggregates per page can get out of control if you’re not careful. With two values for scope, three for group, and two for every_page, there is potential to climb to twelve aggregates on a page. Twelve seems a little out of control. Sure, we want to leverage browser caching, but that many HTTP requests is concerning. If you’re seeing a high number of aggregates, it’s likely that the modules and themes adding Javascript are not using the API as well as they could.

Misuse of Groups

As was mentioned, JS_LIBRARY, JS_DEFAULT and JS_THEME are default constants that ship with Drupal to group Javascript. When adding Javascript, modules and themes should stick to these groups. As soon as you deviate, you introduce a new aggregate. Often when people deviate from the default groups, it’s to ensure ordering. Maybe you need your script to be the very first script on the page, so you set your group to JS_LIBRARY - 100. Instead, use the ‘weight’ option to manage ordering. Set your group to JS_LIBRARY and set the weight very low to ensure you’re first in the JS_LIBRARY aggregate. Another issue I see is themes adding scripts explicitly under the JS_THEME group. This is logical, after all, it is Javascript added by the theme! However, it’s better to make use of hook_library, declare your dependencies and let the group default to JS_DEFAULT. The order you need is preserved by declaring your dependencies, and you avoid creating a new aggregate unnecessarily.
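
In code, the difference looks like this ($file is a placeholder for your script’s path):

  // Avoid: a custom group value spawns a whole new aggregate.
  drupal_add_js($file, array('group' => JS_LIBRARY - 100));

  // Prefer: stay in the standard group and use weight for ordering.
  drupal_add_js($file, array('group' => JS_LIBRARY, 'weight' => -100));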

Inline and External Scripts

Extra care should be employed when adding external and inline scripts. drupal_add_js and friends preserve order really well: they track the order in which the function is called for all the scripts added to the page and respect it. That’s all well and good. However, if an inline or external script is added between file scripts, it can split the aggregate. For example, if you add 3 scripts, 2 file scripts and 1 external, all with the same scope, group, and every_page values, you might think that Drupal would aggregate the 2 file scripts and output 2 script tags total. However, if drupal_add_js gets called for the file, then the external, then the other file, you end up getting two aggregates generated with the external script in between, for a total of 3 script tags. This is probably not what you want. In this case it’s best to specify a high or low weight value for the external script so it sits at the top or bottom of the aggregate and doesn’t end up splitting it. The same goes for inline JS.
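
A sketch of the fix, with placeholder paths:

  // Called in this order, the external script would split the file
  // aggregate in two. A low weight pushes it to the top of the group
  // instead, letting the two file scripts share a single aggregate.
  drupal_add_js($path . '/js/first.js');
  drupal_add_js('https://example.com/external.js', array(
    'type' => 'external',
    'weight' => -50,
  ));
  drupal_add_js($path . '/js/second.js');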

Scripts added in a different order

I alluded to this above, but certain situations can arise where scripts get added to an aggregate in a different order from one request to the next. This can happen for any number of reasons, but because Drupal is tracking the call order to drupal_add_js, you end up with a different order for the same scripts in a group, and thus a different aggregate. Sadly, the same code will sit in two aggregates on the server with slightly different source order, when they could otherwise have been exactly equal, produced a single aggregate, and thus been cached by the browser from page to page. The solution in this case is to use weight values to ensure the same order within an aggregate from page to page. It’s not ideal, because you don’t want to have to set weights on every hook_library / drupal_add_js. I’d recommend handling it on a case-by-case basis.

Best Practices

Clearly, there is a lot that can go wrong, or at least end up taking you to a place that is sub-optimal. Considering all that we’ve covered, I’ve come up with a list of best practices to follow:

Always use the API, never ‘shoehorn’ scripts into the page

A common problem is running into modules and themes that sidestep the API to add their Javascript to the page. Common methods include using drupal_add_html_head to add a script to the header, using hook_page_alter and appending script markup to $variables[‘page_bottom’], or otherwise outputting a raw <script> tag in the markup. Scripts added this way bypass the aggregation system entirely, so always go through the API instead.

Use hook_library

hook_library is a great tool. Use it for any and all Javascript you output (inline, external, file). It centralizes all of the options for each script so you don’t have to repeat them, and best of all, you can declare dependencies for your Javascript.

Use every_page when appropriate

The every_page option signals to the aggregation system that your script will be added to all pages on the site. If your script qualifies, make sure you’re setting every_page = TRUE. This puts your script into a "stable" aggregate that is cached and reused often.

AdvAgg

Advanced CSS/JS Aggregation (AdvAgg) is a contributed module that replaces Drupal’s built-in aggregation system with its own, making many improvements and adding additional features along the way. The way that you add scripts is the same, but behind the scenes, how the groups are generated and aggregated into their respective aggregates is different. AdvAgg also attempts to overcome many of the scenarios where core aggregation can go wrong, as listed above. I won’t cover all of what AdvAgg has to offer; it has an impressive scope that would warrant its own article or more. Instead I’ll touch on how it can solve some of the problems we listed above, as well as some other neat features.

Out of the box, the core AdvAgg module supplies a handful of transparent backend improvements. One of those is stampede protection. After a code release with JS/CSS changes, there is a potential that multiple requests for the same page will all trigger the calculation and writing of the same new aggregates, duplicating work. On high traffic sites, this can lead to deadlocks which can be detrimental to performance. AdvAgg implements locking so that only the first process will perform the work of calculating and writing aggregates. AdvAgg also employs smarter caching strategies so the work of calculating and writing aggregates is done as infrequently as possible, and only when there is a change to the source files. These are nice improvements, but there is great power to behold in AdvAgg’s submodules.

AdvAgg Compress Javascript

AdvAgg comes with two submodules to enhance JS and CSS compression. They each provide a pluggable way to have a compressor library act on each aggregate before it’s saved to disk. There are a few options for JS compression; JSqueeze is a great option that will work out of the box. However, if you have the flexibility on your server to install the JSMIN C extension, it’s slightly more performant.

AdvAgg Modifier

AdvAgg Modifier is another submodule that ships with AdvAgg, and here is where we get into solving some of the problems we listed earlier. Let’s explore some of the more interesting options that are made available by AdvAgg Modifier:

Optimize Javascript Ordering

There are a couple of options under this fieldset that seek to solve some of the problems laid out above. First up, "Move Javascript added by drupal_add_html_head() into drupal_add_js()" does just what it says, correcting the errant use of the API by module/theme authors. Doing this allows the Javascript in question to participate in Javascript aggregation, and potentially removes an HTTP request if it was a file that was being added. Next, “Move all external scripts to the top of the execution order” and “Move all inline scripts to the bottom of the execution order” are both an attempt at resolving the issue of splitting the aggregate. This more or less assumes that external files are library-esque and that inline Javascript is meant to run after all other Javascript. That may or may not be the case depending on what you’re doing, and you should definitely use caution. That said, if it works for you, it could be a simple way to avoid splitting the aggregate without the manual work of adjusting weights.

Move JS to the footer

As we discussed, you typically add Javascript to one of two scopes: header or footer. A common performance optimization is to move all the Javascript to the footer. AdvAgg gives you a couple of choices in how you do that, ranging from some to all Javascript moved to the footer. This can easily break your site, however, and you definitely need to test and use caution here. The reason, of course, is that there could be other scripts that aren’t declaring their dependencies properly, and/or are ‘shoehorning’ their Javascript into the page, that depend on a dependency existing in the header. These will need to be fixed on a case-by-case basis (submit patches to contrib modules!). If you still have Javascript that really does need to be in the header, AdvAgg extends the options for drupal_add_js and friends to include a boolean scope_lock that, when set to TRUE, will prevent AdvAgg from moving the script to the footer. There always seems to be some stubborn ad or analytics provider that insists on presence in the header, backed by the business, that forces you to make at least one of these exceptions.

Another related option is "Put a wrapper around inline JS if it was added in the content section incorrectly". This is an attempt to resolve the problem of inline Javascript in the body depending on things being in the header. AdvAgg will wrap any inline Javascript in a timer that waits on jQuery and Drupal to be defined before running the inline code. It uses a regular expression to search for the scripts, which, while neat, feels a little brittle. There is an alternate option to search for the scripts using XPath instead of regex, which could be more reliable. If you try it, do so with care, and test first.
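
Based on the description above, using scope_lock might look something like this (the ad script path is a placeholder):

  // Keep this script in the header even when AdvAgg is configured to
  // move Javascript to the footer.
  drupal_add_js($path . '/js/ads.js', array(
    'scope' => 'header',
    'scope_lock' => TRUE,
  ));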

Drupal 8

Not a lot is different when it comes to Javascript aggregation in D8, but there have been a few notable changes. First, the aggregation system has been refactored under the hood to be pluggable. You can now cleanly swap out and/or add to the aggregation system, whereas in Drupal 7 it was a difficult and messy affair to do so. Modules like AdvAgg should have a relatively easier time implementing their added functionality. Other improvements to the aggregation system that didn’t quite make the D8 release will now have a clearer path to implementation for the same reason. In terms of its function and the way in which Javascript is grouped for aggregation, everything is as it was in Drupal 7.

The second change centers on adding Javascript to the page. drupal_add_js and drupal_add_library have been removed in Drupal 8 in favor of #attached. The primary motivation is to improve cacheability. With Javascript consistently added using #attached, it’s possible to do better render array caching. The classic example is that if you were using drupal_add_js in your form builder function, your Javascript would be missed if the cached render array was used and the builder function skipped. Caching aside, it’s nice that there is a single consistent way to add Javascript. Also related to this change, hook_library definitions have gone the way of the dodo. In their place, *.libraries.yml files are now used to define your Javascript and CSS assets in libraries, continuing the Drupal 8 theme of replacing info hooks with YML files.
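
As a rough sketch of the Drupal 8 equivalent of the earlier hook_library example (names are hypothetical), a mymodule.libraries.yml file might contain:

  mymodule.behaviors:
    version: 1.x
    js:
      js/behaviors.js: {}
    dependencies:
      - core/jquery
      - core/drupal

And a render array would attach it with:

  $build['#attached']['library'][] = 'mymodule/mymodule.behaviors';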

A third somewhat related change that I’ll point out is that Drupal core no longer automatically adds jQuery, Drupal.js and Drupal.settings (drupalSettings in Drupal 8). jQuery still ships with core, but if you need it, you have to declare it as a dependency in your *.libraries.yml file. There have also been steps taken to reduce Drupal core’s reliance on jQuery. This isn’t directly related to Javascript aggregation per se, but it’s a big change nonetheless that you’ll have to keep in mind if you’re used to writing Javascript in Drupal 7 and below. It’s a welcome improvement with potential for some nice front end performance gains. For more information and specific implementation details, check out the guide for adding Javascript and CSS from a theme and adding Javascript and CSS from a module.

HTTP 2.0

Any article about aggregating Javascript assets these days deserves a disclaimer regarding HTTP 2.0. For the uninitiated, HTTP 2.0 is the new version of the HTTP protocol that while maintaining all the semantics of HTTP 1.1, significantly changes the implementation details. The most noted change in a nutshell is that instead of opening a bunch of TCP connections to download all the resources you need from a single origin, HTTP 2.0 opens a single connection per origin and multiplexes resources on that connection to achieve parallelism. There is a lot of promise here because HTTP 2.0 enables you to minimize the number of TCP connections your browser has to open, while still downloading assets in parallel and individually caching assets. That means that you could be better off not aggregating your assets at all, allowing the browser to maximize caching efficiency without incurring the cost of multiple open TCP connections. In practice, the jury is still out on whether no aggregation at all is a good idea. It turns out that compression doesn’t do as well on a collection of individual small files as it does on aggregated large files. It’s likely that some level of aggregation will continue to be used in practice. However as hosts more widely support HTTP 2.0 and old browsers fall off, this will become a larger area of interest.

Conclusion

Whew, we've come to the end at last. My purpose was to go over all of the nuances of Javascript aggregation in Drupal. It's an important topic when it comes to front end performance, and one that deserves careful consideration when you're trying to eke every last bit of performance out of your site. With Drupal 8 and HTTP 2.0 now a reality, it'll be interesting to watch as things evolve for the better.

The Uncomplicated Firewall

Firewalls are a tool that most web developers only deal with when sites are down or something is broken. Firewalls aren’t fun, and it’s easy to ignore them entirely on smaller projects.

Part of why firewalls are complicated is that what we think of as a "firewall" on a typical Linux or BSD server is responsible for much more than just blocking access to services. Firewalls (like iptables, nftables, or pf) manage filtering inbound and outbound traffic, network address translation (NAT), Quality of Service (QoS), and more. Most firewalls have an understandably complex configuration to support all of this functionality. Since firewalls are dealing with network traffic, it’s relatively easy to lock yourself out of a server by blocking SSH by mistake.

In the desktop operating system world, there has been great success in the "application" firewall paradigm. When I load a multiplayer game, I don't care about the minutiae of ports and protocols; I just want to allow that game to host a server. Windows, OS X, and Ubuntu all support application firewalls where applications describe what ports and protocols they need open. The user can then block access to those applications if they want.

Uncomplicated Firewall (ufw) is shipped by default with Ubuntu, but like OS X (and unlike Windows) it is not turned on automatically. With a few simple commands we can get it running, allow access to services like Apache, and even add custom services like MariaDB that don’t ship with a ufw profile. UFW is also available for other Linux distributions, though they may have their own preferred firewall tool.

Before you start

Locking yourself out of a system is a pain to deal with, whether it's lugging a keyboard and monitor to your closet or opening a support ticket. Before testing out a firewall, make sure you have some way to get into the server should you lock yourself out. In my case, I'm using a LAMP vagrant box, so I can either attach the VirtualBox GUI with a console, or use vagrant destroy / vagrant up to start clean. With remote servers, console access is often available through a management web interface or a "recovery" SSH server like Linode's Lish.

It’s good to run a scan on a server before you set up a firewall, so you know what is initially being exposed. Many services will bind to ‘localhost’ by default, so even though they are listening on a network port they can’t be accessed from external systems. I like to use nmap (which is available in every package manager) to run port scans.

$ nmap 192.168.0.110

Starting Nmap 6.40 ( http://nmap.org ) at 2015-09-02 13:16 EDT
Nmap scan report for trusty-lamp.lan (192.168.0.110)
Host is up (0.0045s latency).
Not shown: 996 closed ports
PORT     STATE SERVICE
22/tcp   open  ssh
80/tcp   open  http
111/tcp  open  rpcbind
3306/tcp open  mysql

Nmap done: 1 IP address (1 host up) scanned in 0.23 seconds

Listening for SSH and HTTP connections makes sense, but we probably don't need rpcbind (used by NFS) or MySQL to be exposed.

Turning on the firewall

The first step is to tell UFW to allow SSH access:

$ sudo ufw app list
Available applications:
  Apache
  Apache Full
  Apache Secure
  OpenSSH
$ sudo ufw allow openssh
Rules updated
Rules updated (v6)
$ sudo ufw enable
Command may disrupt existing ssh connections. Proceed with operation (y|n)? y
Firewall is active and enabled on system startup
$ sudo ufw status
Status: active

To             Action      From
--             ------      ----
OpenSSH        ALLOW       Anywhere
OpenSSH (v6)   ALLOW       Anywhere (v6)

Test to make sure the SSH rule is working by opening a new terminal window and ssh’ing to your server. If it doesn’t work, run sudo ufw disable and see if you have some other firewall configuration that’s conflicting with UFW. Let’s scan our server again now that the firewall is up:

$ nmap 192.168.0.110

Starting Nmap 6.40 ( http://nmap.org ) at 2015-09-02 13:31 EDT
Note: Host seems down. If it is really up, but blocking our ping probes, try -Pn
Nmap done: 1 IP address (0 hosts up) scanned in 3.07 seconds

UFW is blocking pings by default. We need to run nmap with -Pn so it blindly checks ports.

$ nmap -Pn 192.168.0.110

Starting Nmap 6.40 ( http://nmap.org ) at 2015-09-02 13:32 EDT
Nmap scan report for trusty-lamp.lan (192.168.0.142)
Host is up (0.00070s latency).
Not shown: 999 filtered ports
PORT   STATE SERVICE
22/tcp open  ssh
Nmap done: 1 IP address (1 host up) scanned in 6.59 seconds

Excellent! We’ve blocked access to everything but SSH. Now, let’s open up Apache.

$ sudo ufw allow apache
Rule added
Rule added (v6)

You should now be able to access Apache on port 80. If you need SSL, allow "apache secure" as well, or just use the “apache full” profile. You’ll need quotes around the application name because of the space.
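For example, to open both HTTP and HTTPS in one step:

$ sudo ufw allow "apache full"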

To remove a rule, prefix the entire rule you created with "delete". To remove the Apache rule we just created, run sudo ufw delete allow apache.

Blocking services

UFW operates in a "default deny" mode, where incoming traffic is denied and outgoing traffic is allowed. To operate in a "default allow" mode instead, run sudo ufw default allow. Suppose that after running this, you decide you don't want Apache listening for outside requests after all, and only want to allow access from localhost. Using ufw, we can deny access to the service:

$ sudo ufw deny apache
Rule updated
Rule updated (v6)

You can also use "reject" rules, which tell a client that the service is blocked. A deny rule instead forces the connection to time out, without telling an attacker that a service exists. In general, you should prefer deny rules over reject rules, and default deny over default allow.

Address and interface rules

UFW lets you add conditions to the application profiles it ships with. For example, say you are running Apache for an intranet, and have OpenVPN setup for employees to securely connect to the office network. If your office network is connected on eth1, and the VPN on tun0, you can grant access to both of those interfaces while denying access to the general public connected on eth0:

$ sudo ufw allow in on eth1 to any app apache
$ sudo ufw allow in on tun0 to any app apache

Replace on <interface> with from <address> to use IP address ranges instead of interface names.
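For example, to allow Apache only from a private subnet (the address range here is just for illustration):

$ sudo ufw allow from 192.168.0.0/24 to any app apache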

Custom applications

While UFW lets you work directly with ports and protocols, this can be complicated to read over time. Is it Varnish, Apache, or Nginx that’s running on port 8443? With custom application profiles, you can easily specify ports and protocols for your own custom applications, or those that don’t ship with UFW profiles.

Remember up above when we saw MySQL (well, MariaDB in this case) listening on port 3306? Let’s open that up for remote access.

Pull up a terminal and browse to /etc/ufw/applications.d. This directory contains simple INI files. For example, openssh-server contains:

[OpenSSH]
title=Secure shell server, an rshd replacement
description=OpenSSH is a free implementation of the Secure Shell protocol.
ports=22/tcp

We can create a mariadb profile ourselves to work with the database port.

[MariaDB]
title=MariaDB database server
description=MariaDB is a MySQL-compatible database server.
ports=3306/tcp

$ sudo ufw app list
Available applications:
  Apache
  Apache Full
  Apache Secure
  MariaDB
  OpenSSH
$ sudo ufw allow from 192.168.0.0/24 to any app mariadb

You should now be able to access the database from any address on your local network.

Debugging and backup

Debugging firewall problems can be very difficult, but UFW has a simple logging framework that makes it easy to see why traffic is blocked. To turn on logging, start with sudo ufw logging medium. Logs will be written to /var/log/ufw.log. Here’s a UFW BLOCK line where Apache has not been allowed through the firewall:

Jan 5 18:14:50 trusty-lamp kernel: [ 3165.091697] [UFW BLOCK] IN=eth2 OUT= MAC=08:00:27:a1:a3:c5:00:1e:8c:e3:b6:38:08:00 SRC=192.168.0.54 DST=192.168.0.142 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=65499 DF PROTO=TCP SPT=41557 DPT=80 WINDOW=29200 RES=0x00 SYN URGP=0

From this, we can see all of the information about the source of the request as well as the destination. When you can't access a service, this logging makes it easy to see if it's the firewall or something else causing problems. High logging levels can use a large amount of disk space and IO, so when you're not debugging, it's recommended to set logging to low or off.

Once you have everything configured to your liking, you might discover that there isn’t anything in /etc with your rules configured. That’s because ufw actually stores its rules in /lib/ufw. If you look at /lib/ufw/user.rules, you’ll see iptables configurations for everything you’ve set. In fact, UFW supports custom iptables rules too if you have one or two rules that are just too complex for UFW.

For server backups, make sure to include the /lib/ufw directory. I like to create a symlink from /etc/ufw/user-rules to /lib/ufw. That way, it’s easy to remember where on disk the rules are stored.
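That symlink is a single command:

$ sudo ln -s /lib/ufw /etc/ufw/user-rules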

Next steps

Controlling inbound traffic is a great first step, but controlling outbound traffic is better. For example, if your server doesn't send email, you could prevent some hacks from being able to reach mail servers on port 25, as shown in the sketch below. If your server has many shell users, you can prevent them from running servers without being approved first. What other security tools are good for individual and small server deployments? Let me know in the comments!
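Here's that outbound sketch (assuming your server has no business sending mail), using ufw's out direction:

$ sudo ufw deny out 25/tcp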

Sharing Breakpoints Between Drupal 8 and Sass

Among the many great new features in Drupal 8 is the Breakpoint module. This module allows you to define sets of media queries that can be used in a Drupal site. The most notable place that Drupal core uses breakpoints is the Responsive Image module.

To add and configure breakpoints, you create a file named *theme-name*.breakpoints.yml in the root directory of your theme, replacing *theme-name* with the theme’s machine name. Modules can also have *module-name*.breakpoints.yml configuration files at their root. In this file you define each breakpoint by these properties:

  • label - the human readable name of the breakpoint
  • mediaQuery - a valid @media query
  • weight - where the breakpoint is ordered in relation to other breakpoints: the breakpoint targeting the smallest viewport size should have a smaller weight, and larger breakpoints should have larger weight values
  • multipliers - the ratio between the physical pixel size of the active device and the device-independent pixel size

Exposing all this information to Drupal from a single file is great. But having this file creates a problem. The breakpoints defined here are not exposed to the Sass code used to style our site.

A solution

One best practice in software development is avoiding repetition whenever possible, often called keeping things DRY (Don’t Repeat Yourself). To apply the DRY method to the breakpoints.yml file and Sass, I wrote drupal-sass-breakpoints. This is an Eyeglass module that allows importing breakpoints from our theme’s breakpoints.yml into our Sass code.

In this article we are going to set up drupal-sass-breakpoints in a minimal Drupal 8 theme. If you would like an introduction to creating a Drupal 8 theme, I recommend going through John Hannah's article on Drupal 8 Theming Fundamentals. For this article's example theme (neelix), we are going to start with the following files:

├── scss
│   └── style.scss
├── css
├── neelix.libraries.yml
├── neelix.info.yml
├── neelix.breakpoints.yml

This gives us a directory for our Sass, a directory for the compiled CSS, a file to declare a library for our CSS, an *.info.yml file, and our breakpoints.yml file. Inside the neelix.breakpoints.yml file add the following:

neelix.small:
  label: small
  mediaQuery: ''
  weight: 0
  multipliers:
    - 1x
neelix.medium:
  label: medium
  mediaQuery: 'screen and (min-width: 560px)'
  weight: 1
  multipliers:
    - 1x
neelix.wide:
  label: wide
  mediaQuery: 'screen and (min-width: 600px)'
  weight: 2
  multipliers:
    - 1x
neelix.xwide:
  label: xwide
  mediaQuery: 'screen and (min-width: 860px)'
  weight: 3
  multipliers:
    - 1x

Now that we have a starting point for the new theme and some breakpoints in place, we need to install drupal-sass-breakpoints. For the example here, we are going to use Grunt to compile Sass, but it is important to mention that drupal-sass-breakpoints can work with Gulp or any other task runner that supports Eyeglass modules. To get started with Grunt, we first need to set up dependencies. This can be done by defining them in a file named package.json in the root directory of the neelix theme:

{ "devDependencies": { "breakpoint-sass": "^2.6.1", "drupal-sass-breakpoints": "^1.0.1", "eyeglass": ^"0.7.1", "grunt": "^0.4.5", "grunt-contrib-watch": "^0.6.1", "grunt-sass": "^1.0.0" } }

Then run npm install from the command line while in the theme's root directory.

This installs all the necessary dependencies we need, which were defined in package.json. Next, we need to define Grunt tasks. Do so by adding the following in a Gruntfile.js file in the root directory of the theme:

'use strict';

var eyeglass = require("eyeglass");

module.exports = function(grunt) {
  grunt.initConfig({
    watch: {
      sass: {
        files: ['scss/*.{scss,sass}', 'Gruntfile.js', 'neelix.breakpoints.yml'],
        tasks: ['sass:dist']
      }
    },
    sass: {
      options: eyeglass.decorate({
        outputStyle: 'expanded'
      }),
      dist: {
        files: {
          'css/style.css': 'scss/style.scss'
        }
      }
    }
  });

  grunt.registerTask('default', ['sass:dist', 'watch']);
  grunt.loadNpmTasks('grunt-sass');
  grunt.loadNpmTasks('grunt-contrib-watch');
};

This creates all that we need to compile Sass with Eyeglass. Now we can set up a test case in our scss/style.scss file by adding the following:

@import "drupal-sass-breakpoints"; body { @include media('medium') { content: "Styles that apply to our 'medium' breakpoint"; } }

In the above example we are using the media() mixin, which is added by drupal-sass-breakpoints. The argument we pass to it is the label of the media query we are targeting (medium). Now run this from the command line in the root directory of our theme to compile Sass:

grunt

Now check out the compiled results inside css/style.css:

@media (min-width: 560px) {
  body {
    content: "Styles that apply to our 'medium' breakpoint";
  }
}

That is it! We now have Sass importing the media queries from our theme’s breakpoints.yml file.

What about srcset and sizes?

If you find you're overwhelmed by the number of breakpoints in your theme, remember that there are many instances where you only need to use srcset for responsive image styles. The srcset attribute is particularly useful when used in conjunction with the sizes attribute. I recommend this article on CSS-Tricks, which outlines the scenarios that srcset and sizes are best for. In these cases, it may not be necessary to define all of our breakpoints in the .breakpoints.yml file.
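As a quick sketch (the filenames and widths here are hypothetical), srcset and sizes can often handle responsive images without any breakpoint configuration at all:

<img src="pinball-800.jpg"
     srcset="pinball-400.jpg 400w, pinball-800.jpg 800w, pinball-1200.jpg 1200w"
     sizes="(min-width: 560px) 50vw, 100vw"
     alt="Kids playing pinball">

The browser then picks the best source for the current viewport and display density on its own.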

Feedback

Do you have any feature requests or suggestions for improving drupal-sass-breakpoints? Feel free to contribute on GitHub.

You can find a fully working example of this article here. Happy styling!

My Four Kickstarter Mistakes Can Improve Your Software Projects

So I wrote a children’s book. To get it printed and published, I launched a Kickstarter that was successful thanks to the many people who believed in the idea and committed with their wallets.

But the campaign succeeded in spite of myself. I made a lot of mistakes. There are many things I would do differently if I were to launch it again, and will do differently for the next campaign. As I reflected on the mistakes and lessons learned, however, there was commonality with general best practices for implementing a project, and in particular, software projects.

So here are my mistakes (at least, the ones I know about) with the resulting lessons learned.

1. Rushing the Launch

Many of the mistakes that follow sprouted from impatience. That’s why this is first. If you don’t make this mistake, you’ll save yourself a lot of headaches.

There is such a thing as analysis paralysis, or fear of launching because you are scared to fail. But that was not my problem. My problem was pure impatience. I had worked on the book for months, on both editing and requisitioning sample illustrations. I had spent almost another month crafting the Kickstarter page and putting together the video. By the time the video was done, I just wanted to get it out there.

However, I would have benefited from sitting down and doing more intentional research, and pulling in some advice from people I trust. I could have tested the video to see if it spurred people to action. I could have looked for the ideal time to launch a children’s book Kickstarter, and then actually waited. I could have spent some time and effort collecting a list of media contacts and prepared a message to send them on launch day. I could have set up a social media campaign to capture email addresses and test initial copy. I could have...done so many other things.

Instead of having a methodical plan to follow, I just started throwing things at the wall to see what stuck, desperate to try anything. It caused some unnecessary stress and worry, especially when the momentum started to slow down after the first few days of launch. I knew I hadn’t done everything I could to ensure a better chance at success, and at that point, there was less and less that I could do about it.

Lesson

Don’t be haphazard with your launch. If you feel sick of working on something and just want to get it out...stop. Think. Sleep on it. Your whims and emotional health are not the most important factors in a launch strategy. Take time to plan the launch and the weeks following the launch. Be intentional about establishing a calendar that makes sense, and do your best stick to it.

2. Underestimating Time and Cost Involved

The consequences of this mistake can be severe. If the funding of your Kickstarter doesn’t cover all of your costs, the remainder comes directly from your own pocket. If you can’t afford the overage, you risk breaking your promises to your backers. Or a potential lawsuit.

It is critical that you count the cost and attempt to measure possible risk. An unfunded, failed campaign is disappointing. Possibly heartbreaking. Worse, though, is a campaign that leaves you burned out, destitute, with the residue of broken promises polluting your wake.

Some things I underestimated or ignored:

  • The time required to finish illustration. I originally wanted to ship books to backers by April 2015, but ended up being over 3 months late. This was no fault of the illustrator; rather, I forgot about my own nitpicking tendencies and the vast amount of back-and-forth communication they would require.
  • The time required to manage the campaign, both during and after. Posting updates, reaching out to blogs, the packing and shipping of the products (so many trips to the post office. So, so many.) Since I wasn’t paying myself anything for labor, this didn’t have a direct impact on costs, but it did have a negative impact on timelines.
  • The price of international shipping. This is always more expensive than you think it is, and it varies heavily from carrier to carrier and country to country. Total package weight and size matter much more than you'd expect, and a slight increase can mean a big bump in cost. I literally lost money on every single international shipment, even the ones to Canada. If you think you are charging enough for international shipping, you probably aren't.
  • The cost of shipping materials. It turns out that a 9.5x9.5 book needs larger envelopes than a book that is just one inch narrower. Packaging a print so it doesn’t bend during shipment requires some extra materials. And on and on. The small things add up.
  • The cost of marketing. I found some niche newsletters to advertise in. Did I plan and budget for these ads? Nope.

I ended up spending a decent amount of my own money to cover the extra costs. Since it was a passion project for me, and the additional amount needed was not astronomical, it wasn't that big of a deal...but it could have been a disaster.

Lesson

Measure twice, cut once. Estimation is hard, and if you are at the mercy of other companies whose prices can change, be sure you under-promise so you set yourself up to over-deliver. Make sure you are accounting for as much as possible, and then add some more to your estimation to take into account possible risks and other uncertainties.

3. Rewards Didn’t Align With What People Actually Wanted

I thought I knew what potential backers would want. I myself was a connoisseur of children’s books, and so I had a lofty view of my own opinion. And while my desires intersected somewhat with what people wanted, I missed the mark badly on many rewards. This was either because the reward itself was undesirable, the price for the reward was too much, or a combination of the two.

For example, an original sketch from the illustrator was a reward that some people obviously wanted, but at only three takers, the price point probably kept many from claiming it. Instead of setting it at a round number that “felt” right, I should have taken the time to see if the numbers worked for a lower price point.

Likewise, I didn’t really highlight signed books as a potential benefit. This was me being oblivious and living inside my head too much. I personally don’t value signed books that much, even from my favorite authors, and so I didn’t expect others to care about my own sloppy signature. But that was stupid. Nearly every other book project on Kickstarter offers signed copies, and I should have taken the obvious hint.

While I did take time to see what rewards other successful campaigns had offered, I didn’t open myself up to learning everything I could. I had already chiseled some ideas in stone, as if they were footnotes to the Ten Commandments.

Lesson

Is your project actually offering what people want? Does your marketing message accurately reflect the benefits that your target users are looking for? Remember, most of the time, you are not your target audience. Do not assume you have the authoritative opinion on your product or service. Get some empathy, talk to people who aren’t you, and don’t ignore what other successful projects (in the same domain space) might have in common. Perhaps do some organized user research. Be prepared to change (or kill) your darlings.

4. Not Properly Determining a Minimum Viable Product

I aimed too high. If not for generous friends, family, and coworkers, I would have turned out like Icarus, with melted wings pushing me down toward Poseidon's ambivalent embrace. Or like something much less melodramatic, but equally disastrous.

If I was going to publish a book, I was going to do it right. The best materials, the best hardcover binding, book dimensions usually reserved for a ping pong table. Oh, and a dust jacket, of course. At one point, I’m sure I even entertained the idea of gold foil on the cover with blinking lights, or some such nonsense. Thankfully, the realities of book printing held me in check.

In my mind, this premium, sewn-bound, 9.5x9.5 hardcover book was the minimum product. I assumed that people would be more likely to back the campaign if they were getting something that could be showcased. That might have been true for some. For the most part, however, I projected my own excitement and gilded assumptions, and it hampered the campaign.

What should have been my minimum viable product? Something much cheaper to print, for starters. An 8x8 softcover book would have cut the print cost by at least 35%. This would have allowed me to set my campaign goal much lower, and have a much lower-priced reward tier where a backer would still get a physical copy.

I could have still offered the premium upgrades, but as stretch goals. If the campaign reached a certain amount, the softcover would be upgraded to a hardcover. Another could have added the dust jackets. I suspect I would have had a higher total, with more backers, and would have still been able to deliver a premium hardcover at the end.

But even if not, more people would have had my book in their hands, even if it was a more mundane softcover version. Instead, I outpriced my market a bit, and discouraged impulse backers.

Lesson

Make sure your Minimum Viable Product is actually your Minimum Viable Product. Are the features you think of as “minimal” actually required for people to extract value from your product or service? Be brutally honest over what are “nice-to-haves.” Otherwise, you might spend time and money chasing the wrong things, limiting both future growth and agility. Extra features and luxuries can be added later, once there is more demand.

Conclusion

Successful projects are hard to bring about. They don’t happen by accident, though you may get more luck than you deserve, like I did. Any one of the above mistakes has the potential to wreck a project, to leave the ruins scattered into the well-populated landfill of failed intentions. I hope some of my experiences can help you avoid some of these mistakes in the future.

Have you made a project management mistake that resulted in a valuable lesson? Let us know!

Rebuilding POP in D8

Outside my Drupal and Lullabot life, I help my girlfriend Nicole run a non-profit called Pinball Outreach Project, POP for short. POP brings pinball to kids by taking games to children’s hospitals, as well as working with organizations like Children’s Cancer Association, Girls Make Games, and Big Brothers Big Sisters to bring pinball to their events or host kids at our pinball location here in Portland.

The POP website was built in WordPress and has served us well for three years, but as we have become more active and our needs have expanded, it has started to show its age. There are several parts of the site that are difficult to update because they are hardcoded in templates, it relies on some paid components to keep running, and the theme hides a lot of basic functionality that it would be nice to have access to. On the plus side, it is a simple, clean design that is fully responsive, so navigating the site is pretty easy.

We wanted to build a new site that kept the design elements we liked, but expanded the functionality to make the site easier to update and more flexible to allow for future growth and needs. Given my expertise, Drupal seemed like the obvious choice, and given where we're at in the release cycle I thought, "Why not do it in Drupal 8?" It would give me some real hands-on experience, and hopefully we'll end up with a modern tool that can help us grow into the future.

Thus began our odyssey, a new website for a small non-profit in Drupal 8. In the coming weeks I will outline this process from the standpoint of a Drupal 7 expert experiencing a real site build with D8 for the first time. While it is true that I have more D8 experience than many due to my role as lead of the Configuration Management Initiative, that experience is a couple years old and almost entirely involved backend code and architecture. The site building and theming changes are completely new to me, and in many cases I don't know what I don't know. In this way, I feel I am like many of you reading this who are also about to embark on this journey. Let’s discover the new stuff in Drupal 8 together, and we can all learn something along the way.


About the new site

Before we get into the build, let’s look into what we're building. The POP site has several high priority functions that we need to address:

  • Provide information about the organization and its mission.
  • Publicize upcoming events, as well as wrap-up information and photos about events that have already happened.
  • Provide news about other POP happenings.
  • Provide information related to POP HQ, our location here in Portland (hours, league, party rental, etc.)
  • Allow users to get information about our current needs and donate.
  • Enable users to interact with us through social media and our newsletter.

For better or worse, we don't have a ton of design or UX resources at hand, so our goal is to shoot for a simple information architecture and wrap a super clean theme around it. We're going to start with some very basic wireframes, prototype the site, then head into the design/theming phase. Now, I know what you're thinking: if a client came to me with this plan, I would scoff and laugh too. Nevertheless, remember that most clients are also extremely particular about their design, UX, and branding. From our perspective, we are far more interested in getting the functionality we need in place and the information we have up front to our site users. Beyond that, a theme that is reasonably equivalent to what we have now is perfectly fine. We are going in knowing exactly what our limitations and expertise are, and we're willing to work within those constraints.

The one thing we want from a new design, that we don't have now, is to put our imagery much more front and center. Pinball is a hobby that has some fantastic visuals, and our events pair that up with kids having fun which makes it all the more appealing. We should find a way to put that in front of people as much as possible.

Beyond that, I picture a pretty straightforward site with about a half dozen of your usual content types (page, article, event, promo block, maybe images and galleries). The site will be simple enough that I was figuring we could build it with Context to manage block placement and an assortment of Views. Add on a few custom blocks and I figured we would be most of the way there. So let's see how that played out from day one.


Getting started

Getting Drupal 8 installed was not much different than it was in D7, so I won't spend a ton of time dwelling on that (although it was really nice that I could let the installer create my new database for me, thanks Angie!) If you'd like to read more about installation, I recommend Karen Stevenson's recent article on using Composer to install D8.

The first thing I wanted to check once I got Drupal installed was the state of a few contributed modules, and first on that list was Context, as I was planning on having it serve as the main organizational paradigm for the site. I was pretty surprised to see that a D8 port of Context had not even been started, much less in any usable state. I mentioned this on IRC, and someone said that perhaps the core block enhancements in D8 might meet my needs. Say what? Blocks are actually useful now? Hard to believe, but it is true! There are two main enhancements to blocks in D8 that make them super-powered over D7.

Instancing

Let’s say you have a block, and you want to place it in the sidebar in some contexts, and in the triptych in others. A huge problem in past versions of Drupal is that blocks can only be placed in one region per theme. If you want it in two different regions based on different visibility rules, you have to have two blocks. This limitation alone was a huge driver towards Context for many sites. In D8 blocks can be placed as many times as needed! This addition is a huge step up for blocks.

[Screenshot: Drupal blocks displayed in multiple places on a single page.]

Blocks are entities

Blocks are full-blown entities in D8. This means they have a ton of functionality they never had before.

Just like with content types, blocks also now have custom types. This means you can set up unique blocks with their own sets of fields, and use them for their own specific purposes. We can have the promo block type described above, but then also have one we use like a nodequeue, and then one that is just a big HTML block which we can use to embed something like a Mailchimp signup form or other non-fielded content. 

[Screenshot: Drupal 8 block type administration screen.]

Blocks also now have fields. On the surface this can sound like a case of over-engineering something that is supposed to be simple, but this actually has enormous utility. For instance, one of the things we often find ourselves doing on modern sites is creating a "Promo" content type which is not much more than a title, text, a photo and a link promoting another piece of content. 

We can now do that with core blocks by creating a Promo block type, and creating placement rules around where it appears. Since we also have instancing, we could have it in the sidebar on some content, in the content area on other content, and in the footer on the home page.

[Screenshot: Drupal blocks displayed in multiple places on a single page.]

Fields on blocks can also allow us to replicate some simple nodequeue-like behaviors. Create a block with a multi-instance entity reference field and a title. You can now choose a set of entities for the block to display, and place that block using the normal core block placement rules. This could be expanded on in a lot of different ways; the combination of block placement and fields provides a ton of flexibility that we never had before.

Finally, having fields means that you can have blocks with efficiently "chunked" data, rather than just a big text field you dump HTML into. This has enormous implications for accessibility and for building more robust, device-independent layouts.

Block Visibility Groups

While it's not a part of core, the Block Visibility Groups module can help expand and organize the core blocks functionality even further. Block Visibility Groups adds the concept of a group of blocks, which basically acts as if it were one block. Say you need three blocks in the sidebar, but all of them appear at the same path, and only to authenticated users. You can add them all to a visibility group, and then apply the rules to the group. Down the road, if those rules change, you can change them in one place. It also organizes the blocks page better, limiting the number of individual entries you have to deal with.

[Screenshot: Administration screen for the Block Visibility Groups module.]

Conclusion

Fields and instancing, when combined with the existing block visibility rules and contrib modules like Block Visibility Groups, make blocks in D8 a killer feature. Based on my pretty basic needs, I no longer need to worry about whether or not Context is ported, because I'm pretty sure I can do everything I want with core. On top of all that, all my config for these blocks is now deployable through the Configuration Management System, which will be the topic of our next article. 

Special thanks go out to larowlan, timplunkett, and eclipsegc for pushing through the major patches that made blocks become super useful in Drupal 8!

A Front-end JavaScript Framework in Drupal Core?

Matt & Mike talk with Acquia's Preston So, Chapter Three's Drew Bolles, and Lullabot's own Sally Young on the possibility of a front-end JavaScript framework in Drupal core, and what that would look like.

The Genesis Of Lullabot

On January 1st, 2006, Matt Westgate and I announced our new company by launching a website at Lullabot.com. That was 10 years ago.

Matt and I got to know each other very slowly over the course of 2005. I was building my first Drupal site and we met through Drupal.org. Actually, my very first post on Drupal.org was an interaction with Matt. On February 14th of that year, I'd found a bug in Matt's then-popular eCommerce module and I'd posted an issue trying to track down the error I was getting. Matt's response, our first interaction, posted one week later, was "This has been fixed. Thanks."

My first Drupal project used many of Matt's Drupal modules. It was an overwhelming and awful project. My wife (a designer) and I had vastly underestimated and underbid the complex website that would be my first experience with Drupal. At the time, the Drupal community had big dreams, but the project was still in its early years. I found that much of the functionality that I needed for my project wasn't really mature yet. Most of Drupal's documentation existed only as comments in the code itself. There were no books about Drupal. There were no podcasts. There were no conferences or workshops. There were no companies that I could go to for practical help or advice. I thought my first Drupal project would take me 2 or 3 months. It ended up taking almost a year.

I posted desperately in the Drupal IRC channels and on the Drupal.org forums. Many of my questions were very basic. These questions posted in the #drupal IRC were simply met with links to PHP.net or MySQL.org. I was being told to go "read the fucking manual." It was painful. I felt ashamed. I felt unqualified, lost, and hopeless.

I decided to reach out to the friendly guy whose modules were being used all throughout my project. He had committed a bunch of my code to his modules and he always seemed upbeat and grateful about it. Even in that first interaction, he was upbeat: "This has been fixed." as well as thankful: "Thanks." He'd also amicably answered a few of my forum questions and he obviously knew his stuff. I sent Matt an IRC message and offered to pay him to get on the phone with me and answer my Drupal questions. He was a bit taken aback. In 2005, few of Drupal's core developers were getting paid for their Drupal work. I think we settled on $40/hr and sometime early in the summer, I spoke to Matt on the phone for the first time. He lived in Ames, Iowa. I lived near Providence, RI. We became internet buddies, having long conversations over AOL Instant Messenger about Drupal and life. He was friendly and knowledgeable. He was also curious and inquisitive with a positive attitude. There weren't many things that seemed to daunt him. My state of hopelessness moved to one of hope and gratitude.

Prior to 2005, my career success was in the music industry. It was amazing to see so many talented developers working together and contributing such amazing software… for free! In the same way that I was inspired by other musicians' music, I was inspired by Drupal's developers' code. Although I was overwhelmed, I was also inspired. I couldn't understand why there weren't crowds of people cheering when they released a new module. I was so thankful to Matt for his help and generosity. I told him, "You've got an amazing talent. I'm going to make this up to you one day. We're going to do something amazing together. I promise you." I imagined Matt rolling his eyes, but I was intent to follow through on my promise.

In an effort to finish this difficult project, I'd completely immersed myself in Drupal and the Drupal community with a tremendous amount of gratitude and hope. I contributed many Drupal modules over the next few years. My overwhelming first Drupal project began to wind down around October and I started telling Matt that we should start a company together. Drupal was so powerful, but no one was out there acting as an advocate, providing practical information or consulting services to help companies harness its power.

Around the same time, friends at Adaptive Path in San Francisco offered me a project for an up and coming film production company called Participant Productions (now Participant Media). They were about to release a new film called "An Inconvenient Truth" and they wanted to create a community action website where they could inspire visitors to take action on various social causes. Adaptive Path thought Drupal might be a good match and they thought of me. I said I'd do it, but only if we could get Matt to help.

I first met Matt Westgate in-person at the San Francisco airport. It was awkward. We'd been talking over the web and on the phone for almost 6 months and it was really weird to be together, in person. Also, he had flown to San Francisco in November with no clothing heavier than a t-shirt. We stopped at Old Navy on Market Street and bought Matt a sweatshirt. We sat in the Adaptive Path offices and coded up most of the site in about a week. We had flow. Adaptive Path was one of the most prestigious web consulting firms at the time and several of their founders, upon seeing our accomplishment, said we really should start a company doing Drupal. That was all the encouragement we needed.

Matt finally decided to leave the security of his university job some time in December. We spent a lot of time going back and forth trying to name our new company. With Matt's background as a Zen meditator and my background as a musician, we liked the combination of the calm music of "lullaby" with "robot", because we were doing technical work and also, robots are just cool. It also combined the organic and inorganic in a nice way, reminding us that it's important to remember the squishy stuff even while we're working with ones and zeroes.

In our first post on Lullabot.com, I wrote: "It is often said that open source software is 'made by developers for developers'. We hope to break down some of those walls and provide an entry for less technical users. We hope to break through the cacophony, and provide clear and understandable information and guidance."

We created Lullabot because we didn't want others to have the frustrating first experience that I had with Drupal. We wanted to make Drupal more accessible to a wider group of web developers. Over our first year as a company, we started the first Drupal podcast, we taught the first Drupal workshops, and Matt co-wrote the first Drupal book, Pro Drupal Development, which became required reading for aspiring Drupal developers. We also built some of the first household-name Drupal sites and helped to move Drupal toward the tipping point of acceptance and adoption.

While we started Lullabot with a lot of energy, hope, and competitive spirit, it's pretty safe to say that the company has been more successful than either of us had hoped in our wildest dreams. We've been blessed to work on some truly amazing projects with some amazing clients. We've hired and worked with the best talent in the business. We've tried to infuse Lullabot with the sense of gratitude, hope, and excitement that we had back in 2006. We've built a stellar reputation for doing superlative work and we're constantly astounded by our opportunities and success.

In part 2, I'll talk about some of those successes and growing the company over the past 10 years.

Lullabot Celebrates 10 Years Today

On January 1st, 2006, Matt Westgate and I announced our new company by launching a website at Lullabot.com. That was 10 years ago today.

Over the past 10 years, we've worked with a long list of amazing clients on a long list of amazing projects. We've run workshops, and we've recorded podcasts and training videos. We've run conferences. We've written books. We've written and contributed a lot of Drupal code, modules, and themes. We printed up a ton of Lullabot t-shirts and handed out temporary tattoos (which people seem to promptly affix to their children). We've had legendary parties. We've met lots of great people. We've trained thousands of Drupal developers. We launched Drupalize.Me and grew it into its own company. We've delved deeply into decoupled Drupal sites and built expertise in React, Node.js, and mobile and connected-television development. We received the highest rating on Clutch's worldwide list of development companies. We created a video delivery platform called Videola (which kind of flopped… but we learned a lot!). We created a cool business-card app called Shoot. More recently, we've launched a new product for website testing called Tugboat.

Most importantly: we've built an amazing team of talented developers, designers, project managers, management, and admin staff. We've grown to over 60 people and we still feel like we've got much of the culture, participation, and collaborative spirit that marked our early years. Our team is dedicated, happy, hardworking, and they encourage each other to grow and become the best of themselves. I am so grateful to these people for helping us to build Lullabot. We feel a tremendous responsibility to our team and we are dedicated to being as good to them as they've been to us.

Over the next few weeks, we'll be posting a series of articles talking about Lullabot's conception and growth over the past 10 years, the technological changes that we've seen, changes in Drupal and the web/mobile, and what it's been like to work as a distributed company.

We'll keep this post short today though, because we know you're probably tired and/or hung-over from ringing in the new year. But check back on Monday as The Lullabot Anniversary Series begins.

Drupal 8 Theming--Twig and Responsive Images

Matt & Mike talk about new things a Drupal themer will find in Drupal 8, including Twig templates and responsive images. They're joined by Lullabot Front-end Developers Marc Drummond and Wes Ruvalcaba

The New Drupal 8 Configuration System

Matt & Mike talk to Alex Pott, Matthew Tift, and Greg Dunlap about all things Drupal Configuration Management and their experiences as owners of Drupal 8's Configuration Management Initiative known as CMI.
