Zoom Mocks: Bridging The Divide Between Styles And Page Design

At Lullabot, our design team is constantly looking for ways to improve our process for collaborating and communicating ideas. Lately, we’ve been experimenting with ways our team can explore styles more efficiently, using a leaner process that allows us to produce and iterate more quickly. When it comes to establishing visual style efficiently, two popular approaches are style tiles and element collages. These approaches provide a much-needed exploration phase prior to diving into full page mocks, but we’ve found that each has shortcomings, as well. They often result in beautiful artifacts that don't necessarily map nicely to the actual components and holistic layout for the page types we're designing. 

When we've created style tiles in the past, we’ve often spent a significant amount of time reworking and modifying the approved styles they helped produce in order to fit real components and page layouts. We initially addressed this by including an assortment of actual components from the design system in our style tiles (something more akin to element collages), but even this can lead to arrangements and layouts that won't exist within the final site, again leading to rework as we translate style tiles into final mocks.

Designing components in random assortments and layouts (as element collages essentially do) can also lead to design decisions that miss the finer points of visual hierarchy that emerge when components appear in their actual layouts. Visual weight, color, and other properties can take on new meaning depending on what sits right up against a given component.

As we evaluated these challenges within our process, our fundamental desire was to find a way to ensure that our design decisions were being made based on actual page elements and components in real-world context. The result of this process has been a new hybrid approach that we call “Zoom mocks”.

Zoom Mocks To The Rescue

Zoom mocks are styled elements from part of a wireframe. We take a portion of one of our wireframes, and we zoom in on it (e.g. the top 600px of an article). We then explore style for actual components that live in their actual contextual layouts. This approach helps us solve several of the problems mentioned above that our team was running into with traditional style tiles.

We’re approaching style in a much more efficient way that allows us to keep moving forward and work much more quickly while designing. Because we’re referencing an already approved wireframe, we spend less time modifying a template or creating a layout for a style tile or element collage. Components are styled in the context of the page, which helps bridge the gap between style exploration and designing page mocks. Zooming into a page and testing style on actual positioned components helps avoid rework when translating those styles to other page types and elements. Using wireframes as a starting point also helps us evaluate all the various components and identify their differences so we can try to design for them early in the process.

How Do We Create A Zoom Mock?

Before starting any of the style work, we make sure that we have all of our questions about the client’s brand answered, so we can accurately connect their vision and brand to the style of the site. Since we’re trying to make informed decisions based on actual components and their positioning on the page, we begin the style exploration process after wireframes have been approved by the client. We then evaluate the wireframes to determine which components we should focus on during the style exploration phase. Components are often chosen based on their complexity, priority, and how often that pattern will appear throughout the site. When exploring style, we keep the components within the context of the page and zoom in, purposely cropping elements off the page. During the process, this helps us make more concrete style decisions that take into account positioning and hierarchy. It also helps us ensure that the conversation with the client stays focused on style and not layout.

When reviewing the zoom mock with the client, we ask them if the style is going in the right direction. If they give us the thumbs up, we’ll continue to expand the canvas of the zoom mock, applying style to other elements on the page. This allows us to continuously refine the zoom mock as it expands to include other components that appear on the page. Gradually, the zoom mock turns into a fully designed page, which creates a smooth transition for us into the design phase. If the client gives us the thumbs down, we’ll refine the style of the zoom mock until the client feels comfortable with the direction that the style is moving.


The zoom mock approach to style tiles has been a real improvement to the style exploration process for our team. As we continue to refine our process, I’m excited to see how we can help our team work together more efficiently and to share our experiences with the design community.

Wi-Fi & Coffee, A Vacation? The DrupalCon Dublin Recap

Matt and Mike sit down with fellow Lullabots Joe Shindelar and Chris Albrecht to talk about DrupalCon Dublin. We talk about our favorite sessions, BoFs, and social events, and attempt to answer the question, "Is DrupalCon a vacation?"

Extending a Field Type in Drupal 8

With Drupal 8 comes the promise of OOP and more straightforward code reuse. This improvement shines most brightly with the new plugin system and, in particular, with Field plugins.

What if a field type does almost what you want? Say we want to reference entities, but also associate a quantity with what we reference. A real world example might be a deck builder for a trading card game like Magic: The Gathering or the DragonBall Z TCG. We want to reference a card from a deck entity and put in the quantity at the same time.


That seems like a better user experience than adding the same card three different times. 

There are several ways we could implement this. Something like Field Collection could provide this functionality, but this would create a whole new entity for our association. That seems like overkill.

We could also use an additional text field that paired up with our entity reference field. If we are referencing only one entity, this isn’t a bad solution. But what if we need to reference over 20 different entities using a multi-value field? This would be a pain to maintain and render. Accidentally drag one of your values out of order, and the integrity of your data is lost.

But what if we could just add an extra text input field to the core entity reference field? And without creating everything from scratch? Turns out, with Drupal 8, we can. In this article, we’ll walk through extending the core Entity Reference field type with a quantity textfield. Hopefully, this example will also open up other possibilities for you.

We’ll need to extend three different types of plugins:

  1. FieldType: defines properties and backend storage for the field.
  2. FieldWidget: what the admin sees when putting data into a field.
  3. FieldFormatter: how the field data is rendered on the front end for theming.
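In a Drupal 8 module, annotated plugin classes like these are discovered by namespace, so each one would typically live under the module’s src directory. Assuming a module named my_module (a placeholder used throughout the examples below), the files would sit at:

  • src/Plugin/Field/FieldType/EntityReferenceQuantity.php
  • src/Plugin/Field/FieldWidget/EntityReferenceAutocompleteQuantity.php
  • src/Plugin/Field/FieldFormatter/EntityReferenceQuantityFormatter.php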

First, we need to define a new FieldType plugin, but one that extends the core entity reference type. I’m going to go ahead and assume you have a custom module to put the following code in (the code below uses my_module as a placeholder for its machine name).

namespace Drupal\my_module\Plugin\Field\FieldType;

use Drupal\Core\Field\FieldStorageDefinitionInterface;
use Drupal\Core\Field\Plugin\Field\FieldType\EntityReferenceItem;
use Drupal\Core\StringTranslation\TranslatableMarkup;
use Drupal\Core\TypedData\DataDefinition;

/**
 * Defines the 'entity_reference_quantity' field type.
 *
 * @FieldType(
 *   id = "entity_reference_quantity",
 *   label = @Translation("Entity reference quantity"),
 *   description = @Translation("An entity field containing an entity reference with a quantity."),
 *   category = @Translation("Reference"),
 *   default_widget = "entity_reference_autocomplete",
 *   default_formatter = "entity_reference_label",
 *   list_class = "\Drupal\Core\Field\EntityReferenceFieldItemList",
 * )
 */
class EntityReferenceQuantity extends EntityReferenceItem {
}

Our plugin annotation is almost the same as the class we are extending, except for the id, label, and description properties. Now, if you clear your cache and try adding a field, you’ll see this new field type. Congratulations!

Of course, it doesn’t really do anything special yet. So let’s continue.

To add a quantity, we need to override two methods. The first is EntityReferenceItem::propertyDefinitions(). This describes the data that this field will contain. We return an array that has the ‘quantity’ key defined, its value being an instance of DataDefinition.

While typed data is out of the scope of this article, you can view the types defined by core and an overview of the API on Drupal.org.

public static function propertyDefinitions(FieldStorageDefinitionInterface $field_definition) {
  $properties = parent::propertyDefinitions($field_definition);

  $quantity_definition = DataDefinition::create('integer')
    ->setLabel(new TranslatableMarkup('Quantity'))
    ->setRequired(TRUE);

  $properties['quantity'] = $quantity_definition;

  return $properties;
}

The only constraint we will add to our quantity DataDefinition is to make it required, but we could add other constraints, like minimum or maximum values, using the addConstraint() method. An example of that would be something like ->addConstraint('Range', ['min' => 1]);
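Putting that together, a minimal sketch of the quantity definition from propertyDefinitions() with a Range constraint applied would look like this:

$quantity_definition = DataDefinition::create('integer')
  ->setLabel(new TranslatableMarkup('Quantity'))
  ->setRequired(TRUE)
  // Reject zero or negative quantities.
  ->addConstraint('Range', ['min' => 1]);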

Constraints are also outside the scope of this article, but you can read more about them and the Entity Validation API here.

The only other method we need to override is schema(), which tells Drupal how this new data will be stored, regardless of the entity storage type. The column name needs to match our property name.

public static function schema(FieldStorageDefinitionInterface $field_definition) {
  $schema = parent::schema($field_definition);

  $schema['columns']['quantity'] = array(
    'type' => 'int',
    'size' => 'tiny',
    'unsigned' => TRUE,
  );

  return $schema;
}

You might also want to look at overriding the EntityReferenceItem::generateSampleValue() method, but it is not required.
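If you do, a minimal sketch might look like the following (note that this method takes a FieldDefinitionInterface rather than a FieldStorageDefinitionInterface, and the random range here is an arbitrary choice):

public static function generateSampleValue(FieldDefinitionInterface $field_definition) {
  $values = parent::generateSampleValue($field_definition);
  // Pair the sample reference with a plausible random quantity.
  $values['quantity'] = mt_rand(1, 10);
  return $values;
}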

Now, we need to define a custom widget, which will be the form for this field. It needs to be aware of our quantity requirement. Otherwise, we’ll have some confused users getting yelled at for required data that had no corresponding form field.

This calls for another plugin, but like the Field Type, we can just extend the already existing EntityReferenceAutocompleteWidget.

namespace Drupal\my_module\Plugin\Field\FieldWidget;

use Drupal\Core\Field\FieldItemListInterface;
use Drupal\Core\Field\Plugin\Field\FieldWidget\EntityReferenceAutocompleteWidget;
use Drupal\Core\Form\FormStateInterface;

/**
 * @FieldWidget(
 *   id = "entity_reference_autocomplete_quantity",
 *   label = @Translation("Autocomplete w/Quantity"),
 *   description = @Translation("An autocomplete text field with an associated quantity."),
 *   field_types = {
 *     "entity_reference_quantity"
 *   }
 * )
 */
class EntityReferenceAutocompleteQuantity extends EntityReferenceAutocompleteWidget {

  public function formElement(FieldItemListInterface $items, $delta, array $element, array &$form, FormStateInterface $form_state) {
    $widget = parent::formElement($items, $delta, $element, $form, $form_state);

    $widget['quantity'] = array(
      '#title' => $this->t('Quantity'),
      '#type' => 'number',
      '#default_value' => isset($items[$delta]) ? $items[$delta]->quantity : 1,
      '#min' => 1,
      '#weight' => 10,
    );

    return $widget;
  }

}

Our annotation is almost completely different this time. Pay special attention to the field_types property, because that will allow this widget to be used on our new field type. But we only need to override one method.

So now, users can enter and save a quantity associated with an entity reference… but we still need some way to render this new information to the page. The default field formatters for entity references don’t take into account our wonderful new quantity data.

So let’s create our own formatter. Again, we’ll create a new plugin, and again, we just need to extend an already existing formatter. Here, we have a lot of options to base our own formatter on, but we’ll just use the EntityReferenceLabelFormatter in this article for simplicity. It would be a good idea to provide different formatters for a good site building experience.

The following code adds a suffix to the default implementation, and again we only have to override one method. An entity with a label of “My Cool Entity” and a quantity of “3” will be displayed as “My Cool Entity x3”.

namespace Drupal\my_module\Plugin\Field\FieldFormatter;

use Drupal\Core\Field\FieldItemListInterface;
use Drupal\Core\Field\Plugin\Field\FieldFormatter\EntityReferenceLabelFormatter;

/**
 * @FieldFormatter(
 *   id = "entity_reference_quantity_view",
 *   label = @Translation("Entity label and quantity"),
 *   description = @Translation("Display the referenced entities’ label with their quantities."),
 *   field_types = {
 *     "entity_reference_quantity"
 *   }
 * )
 */
class EntityReferenceQuantityFormatter extends EntityReferenceLabelFormatter {

  public function viewElements(FieldItemListInterface $items, $langcode) {
    $elements = parent::viewElements($items, $langcode);
    $values = $items->getValue();

    foreach ($elements as $delta => $entity) {
      // Append the quantity to each rendered label, e.g. "My Cool Entity x3".
      $elements[$delta]['#suffix'] = ' x' . $values[$delta]['quantity'];
    }

    return $elements;
  }

}

Again, you’ll want to pay close attention to the field_types property in the annotation.

After you clear your caches, you should see this new formatter as an option on all entity reference fields.

Where could we go from here? Lots of places, but I wanted to draw attention to one more area of our field type declaration, in case you need more fine-tuned customization. In the annotation for our FieldType, notice this line:

// list_class = "\Drupal\Core\Field\EntityReferenceFieldItemList",

That List class can be anything that implements the FieldItemListInterface. This is the class that will store the list of values for any given instance of your field type.

In the methods we overrode in our widget and formatter classes, you’ll see $items is passed in as a parameter. That will be an instance of whatever you put as your list_class. Customizing that class is another way to tailor the field type to your specific needs. You could add additional helper methods, like EntityReferenceFieldItemList does with EntityReferenceFieldItemList::referencedEntities(). Or you could add additional constraints that would apply to the whole list of values. For example, you might not want to allow more than 60 cards to be referenced, or you might limit certain cards to just 2 copies. A sketch of such a list class follows.
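As a minimal sketch (the class name and helper method are hypothetical), a custom list class might look like:

class CardQuantityFieldItemList extends EntityReferenceFieldItemList {

  /**
   * Calculates the total quantity across every item in the list.
   */
  public function totalQuantity() {
    $total = 0;
    foreach ($this->list as $item) {
      $total += (int) $item->quantity;
    }
    return $total;
  }

}

You would then point the list_class property of the @FieldType annotation at this class instead of the core default.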

Whatever you choose to do, however, you don’t need to reinvent the wheel each time. And that’s a beautiful thing.

Keeping a project journal

Have you ever felt insecure when starting a project? Usually, I feel excited, as it means a new challenge that I will face with other Lullabots. However, last year I had my first experience flying solo on a client project and, this time, I felt nervous. My main worries were:

  • Do I have the required communication skills to transform client requirements into tasks?
  • How can I avoid missing or forgetting things that the client says?
  • How do I know if I am meeting Lullabot's expectations in managing the client?
  • How do I know when to say No to the client?

Being diligent in my note-taking and email communication would ensure nothing was lost in translation or forgotten, so I could check off the first two concerns on my list. But the last two items on my list were more about me feeling that I needed external support; someone at Lullabot who could oversee my notes and tell me every once in a while "this is great Juampy, keep it up" or "Juampy, perhaps you could handle this in some other way."

Writing is something I love to do: I like to document my code, I like to describe my pull requests, and I like to take notes on kick-off meetings. Not only does this help the project, it helps me in the future because I have a bad memory. When I am in a cafe, I see waiters taking mental notes and, on their way to the bar, taking even more notes, nodding, and getting it right! I admire their memory. You can ask them "what are the orders?" and they tell you what every table ordered, with all the details. I can't do that so I take notes of everything that I do, or have to do, and then I turn that into tickets, calendar reminders, or TODOs.

In the following sections, I will share with you some of the benefits that I discovered by keeping a journal. Throughout, I will refer to some examples from the journal that I kept while I was part of the Module Acceleration Program.

Starting the day with a plan

When I start working, I spend the first minutes gathering notes from what happened while I was away by reading email notifications and chat logs. I also look at yesterday's notes to see if there was anything that I did not complete. With that input, I make a list of tasks like the following one:


Once I have a list like the above, I feel that I have a set of things to complete by the end of the day, which motivates me immensely. If I manage to complete them all (plus whatever else may arise), then it's time to celebrate. If I can't, then I will leave a comment underneath the remaining tasks that describes their status. I may also copy and paste these statuses into their respective tickets so the client knows my progress. The next day, I will continue working from there.

There are days when I don't get much done. With the journal, it is easy to go back and remember why—that something else happened such as "one of my teammates got stuck with a bug and needed help" or "we suddenly had to jump into a video conference that took too long." With these notes, I can see where the time is going and make future adjustments in how I manage my time.

Ticket writing

Countless times I have closed a tab by mistake while I was typing something and had to write it again. While some ticket systems and browser plugins can restore your draft, others can't. Besides, some systems like Jira feel less natural and break up your writing time when, for example, you need to paste a screenshot (no, you can't just hit Ctrl + v). Both Dropbox Paper and Google Docs are great for writing a journal because you can copy and paste screenshots, add links, create TODO lists, etc., in a more seamless way.

While working on tickets, I suddenly started to write my findings and paste screenshots in the journal (especially on the tricky ones). Then, if I completed the ticket, I would use this material for my pull request to ease peer review. If I had to work further, I would copy my findings from the journal and paste them in the ticket so the team could see my progress. Here is an example from my journal, with some annotations that describe how I will treat the notes:

Scrums / Stand-ups

With the journal, it is very easy for me to share my status, as it is just a matter of reading my notes out loud. I also take post-Scrum notes in the journal, which I use as follow-ups for my next tasks. Here is an example:


There have been times when I offered to lead a meeting because I already had some material in my journal that could serve as the agenda. In these cases, I shared my screen with everyone and took notes that could be seen and discussed in real time. This approach proved to be a way to structure a meeting and provide leadership for it.

Keeping your peers up to date

Betty Tran, Sally Young, and I did an on-site last year for a potential client. By the end of the on-site, we had two weeks to write a document that contained our project proposal. During those two weeks, the client sent many emails and shared documents with data that we had to classify and filter for use in the proposal. It was crucial to keep everything organized in an efficient way, so I asked Betty and Sally to subscribe to my journal, where I kept a log of each day's events, with each item linked to its respective document. Doing so allowed them to focus on writing the proposal and use the data without having to skim through email threads. Here is an example of what a day in that journal looked like:

Management likes it

My managers at Lullabot, Seth Brown and James Sansbury, realized that keeping a journal was a transparent and effective way to monitor projects. By subscribing to my journal, they get an email every day with my latest changes. Furthermore, I can mention them, and they will get an immediate email notification so they can help me by posting a comment. Therefore, they encouraged Lullabot Architects and others doing solo work on projects to start writing a journal.

For example, I subscribed to the journal of Dave Reid, who is currently working on a project with me. Every day, I get an email like this with his updates:


This gives me a chance to support him and keep up to date with what he is working on. If I want to add feedback, I can open his journal and write a comment.


Try it for a week using a writing tool that feels natural to you. The less that you need to think about the tool, the better. Leave it open at all times and take all your notes in this one location. Eventually, you will either experience some of the benefits that I mentioned above or realize that you are like one of those waiters with an elephant's memory that I admire so much.

Thanks to Seth Brown and James Sansbury for your feedback while writing this. Also thanks to Adam Balsam for letting me share the journal that I kept while working at the Module Acceleration Program.

Hero photo by Barry Silver.

Building Social With "Open Social"

Matt and Mike sit down with Taco Potze and Mieszko Czyzyk, as well as Lullabot Director of Technology, Karen Stevenson, to talk about the new Open Social Drupal distribution. We talk about the new features of Open Social, as well as the business model, developing in Drupal 8, and the pros and cons of distributions in general.

Modern decoupling is more performant

Two years ago, I became interested in API-first designs. I was asked during my annual review, “What kind of project would you like to be involved with this next year?” My response: I want to develop projects from an API-first perspective.

Little did I know that I was about to embark on a series of projects that would teach me not only about decoupled Drupal, but also the subtleties of designing proper APIs to maximize performance and minimize roundtrips to the server.

Embedding resources

I was lucky enough that the client that I was working with at the time—The Tonight Show with Jimmy Fallon—decided on a decoupled approach. I was involved in the Drupal HTTP API server implementation. The project went on to win an Emmy Award for Outstanding Interactive Program.

The idea of a content repository that could be accessed from anywhere via HTTP, leveraging all the cool technologies 2014 had to offer, was—if not revolutionary—forward-looking. I was amazed by the possibilities the approach opened. The ability to expose Drupal’s data to an external team that could work in parallel using the front-end technologies that they were proficient with meant work could begin immediately. Nevertheless, there were drawbacks to the approach. For instance, we observed a lot of round trips between the consumer of the data—the client—and the server.

As it turns out, The Tonight Show with Jimmy Fallon was only the first of several decoupled projects that I undertook in rapid succession. As a result, I authored version 2.x of the RESTful module to support the JSON API spec in Drupal 7. One of the strong points of this specification is resource embedding. Embedding resources—also called resource composition—is a technique where the response to a particular entity also contains the contents of the entities it is related to. Embedding resources for relationships is one of the most effective ways to reduce the number of round trips when dealing with REST servers. It’s based on the idea that the consumer requests the relationships that it wants embedded in the response. This same idea is used in many other specifications like GraphQL.
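To make the idea concrete, here is an abridged response in the shape the JSON API spec defines (the resource types, IDs, and field names are illustrative): the requested article declares a relationship to its author, and the top-level included array embeds the author record itself, so no second request is needed.

{
  "data": {
    "type": "articles",
    "id": "1",
    "attributes": { "title": "JSON API rocks" },
    "relationships": {
      "author": {
        "data": { "type": "people", "id": "9" }
      }
    }
  },
  "included": [
    {
      "type": "people",
      "id": "9",
      "attributes": { "name": "Jane Doe" }
    }
  ]
}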

In JSON API, the consumer can interrogate the data with a single query, tracing relationships between objects and returning the desired data in one trip. Imagine searching for a great grandparent with a genealogy system that would only let you find the name of a single family member at a time versus a system that could return every ancestor going back three generations with a single request. To do so, the client appends an include parameter to the URL, naming the relationships it wants embedded. Following the naming used in the list below, that parameter would look like:

?include=relationship1,relationship2,relationship2.nestedRelationship1

The response will include information about four entities:

  • The entity being requested (the one that contains the relationships).
  • The entity that relationship1 points to. This may be an entity reference field inside of the entity being requested.
  • The entity that relationship2 points to.
  • The entity that nestedRelationship1 points to. This may be an entity reference field inside of the entity that relationship2 is pointing to.

A single request from a consumer can return multiple entities. Note that for the same API different consumers may follow different embedding patterns, depending on the designs being implemented.

The landscape for JSON API and Drupal nowadays seems bright. Dries Buytaert, the product lead of the Drupal project, hopes to include the JSON API module in core. Moreover, there seem to be numerous articles about decoupling techniques for Drupal 8.

But does resource embedding offer a performance edge over multiple round-trip requests? Let’s quantitatively compare the two.

Performance comparison

This performance comparison uses a clean Drupal installation with some automatically generated content. Bear in mind, performance analysis is tightly coupled to the content model and merits case-by-case study. Nevertheless, let’s analyze the response times to test our hypothesis: that resource embedding provides a performance improvement over traditional REST approaches.

Our test case will involve the creation of an article detail page that comes with the Standard Drupal profile. I also included the profile image of a commenter to make things a bit more complex.


In Figure 1, I’ve visually indicated the “levels” of relationships between the article itself and each accompanying chunk of content necessary to compose the “page.” Using traditional REST, a particular consumer would need to make the following requests:

  • Request the given article (node/2410).
  • Once the article response comes back it will need to request, in parallel:
    • The author of the article.
      • The profile image of the author of the article.
    • The image for the article.
    • The first tag for the article.
    • The second tag for the article.
    • The first comment on the article.
      • The author of the first comment on the article.
        • The profile image of the author of the first comment of the article.
    • The second comment of the article.
      • The author of the second comment of the article.

In contrast, using the JSON API module (or any other module with resource composition) will only require a single request, with the include query parameter listing all of the relationships above.


When the server gets such a request, it will load all the requested entities and return them in a single response. Thus, the front-end framework for your decoupled app gets all of its data requirements in one JSON document from a single request instead of many.
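As a sketch, and assuming the Standard profile’s default field names (uid for the author, field_image, field_tags, and comment, with user_picture on the user entity), such a request might look something like this (the path and ID are illustrative, since the JSON API module actually addresses resources by UUID):

/jsonapi/node/article/2410?include=uid.user_picture,field_image,field_tags,comment.uid.user_picture

Per the JSON API spec, intermediate resources in a dotted include path (such as uid and comment) are embedded automatically along with the leaf resources.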

For simplicity, I will assume that the overall response time of the REST-based approach equals that of its longest chain of requests (four levels deep), since requests that happen in parallel will not have a big impact on the final response time. In a more realistic performance analysis, we would take into account that making four parallel calls degrades the overall performance. Even in this handicapped scenario, resource embedding should have a better response time.

Once the request reaches the server, if the response to it is ready in the different caching layers, it takes the same effort to retrieve a big JSON document for the JSON API request as to retrieve a small JSON document for one of the REST requests. That indicates that the big effort is in bootstrapping Drupal to a point where it can serve a cached response. That is true for both anonymous and authenticated traffic, via the Page Cache and Dynamic Page Cache core modules.


The graphic above shows the response time for each approach. Both approaches are cached in the page cache, so there is a constant response time to bootstrap Drupal and grab the cache. For this example, the response time for every request was ~7 ms.

It is obvious that the more complex the interconnections between your data are, the greater the advantage of using JSON API's resource embedding. Even though this example is extremely simple, we were able to cut the response time by 75%.

If we now introduce latency between the consumer and the server, you can observe that the JSON API response still takes 75% less time. However, the total response time is degraded significantly. In the following chart, I have assumed an optimistic, and constant, transport time of 75 ms.
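To make the arithmetic explicit with the numbers assumed above: the REST approach needs four sequential round trips, or roughly 4 × (75 ms + 7 ms) ≈ 328 ms, while the single JSON API request costs 75 ms + 7 ms = 82 ms. Since 82 ms is a quarter of 328 ms, that is where the 75% reduction comes from.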

Conclusion

This article describes some sophisticated techniques that can dramatically improve the performance of our apps. There are some other challenges in decoupled systems that I did not mention here. If you are interested in those—and more!—please attend my session at DrupalCon Dublin, Building Advanced Web Services With JSON API, or watch it later. I hope to prove that you can create more engaging products and services by using advanced web services. After all, when you are building digital experiences, one of your primary goals should be to make them enjoyable for the user. Shorter response times correlate with more successful user engagement. And successful engagements make for amazing digital experiences.

Lessons Learned in Scaling Agile

“Be like water, flow.” - Bruce Lee

Being agile (note agile with a lower-case a) is not about sticking to a prescribed set of principles or methodologies. It’s about minimizing the prescribed set so that you focus on adapting to change. One of the biggest lessons to impart here is that you need to be agile in the sense that you can adapt to how your client works, but not give up on the course you feel is more “right” when you sense something needs to change.

Throughout this piece, you’ll note that I advise against certain things, but I also encourage you to understand the “why” behind your client’s motivations. I tend to be judgmental of client processes when they’re not as efficient as the ones I’m used to, but I also understand that not everything can be improved overnight.

I hope to illustrate how even when you come up against adversity and difficult situations it helps first to take a step back and relate to the client, understand the difference in values, and then try to be the mediator between those opposing values.

The Gig

Not so long ago we were part of a project in which the company decided that they wanted their Scrum teams to adhere to the Scaled Agile Framework and began rolling that out across their organization. Everyone got a JIRA license, a Scrum Coach was added to the team, and they split up into cross-functional teams. The teams were composed of multiple Product Owners and their direct reports—almost all of whom worked on multiple projects. Lullabot was brought into the mix to help out on just one of these projects: the website.

Before continuing, I’d like to take a moment to reflect on the irony that presents itself when looking at the complexity illustrated in this graphic from the Scaled Agile Framework in the context of the word “agile.” I guarantee this is not what the founders of the Agile Manifesto had in mind.


Compare this to the Agile Manifesto:

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

Upon introduction to the client, we found that there was a fairly extensive Backlog for the website project already written up as User Stories. Many of these User Stories were incomplete, and there were no other written requirements or specifications, though the Product Owners possessed this knowledge and could answer questions we had about a given Story.

Soon after we started, our client’s upper management assigned the team a Scrum Coach from an outside consultancy. Part of the coach’s job was to sit in on our meetings and listen to how the projects were being run and to make suggestions on what should change about how the meetings were run and the projects planned out to adhere to the Scrum methodology.

The result was longer meetings—some six hours long—in which all members of the team were present to groom the Backlog and plan our Sprints. Sprint iterations grew from two-week intervals to three. Time was spent going through the priorities of the Product Owners, the subsequent User Stories that represented those priorities, sizing the User Stories with Story Points, and planning which Sprint they would go into. Discussions covered which Tasks were necessary to accomplish a given User Story, and estimates for those Tasks.

Eventually, as we started to write better User Stories and, in turn, gain a better understanding of the work, we were able to get ahead of the meetings. We planned out the tasks ahead of time for the priorities that were set and the meetings became shorter and shorter.

Through all of this, I gained some significant insight into our client’s company and how it works. I also gained a better understanding of how the Scrum methodology can fit into a larger corporation, and where it does not. I learned where it makes sense to diverge from the process and where it does not. Here are some of my key takeaways from this experience.

Multiple Product Owners on a Scrum Team is Hard

We’re used to a team working on just one product, but our client had a different structure where the team was cross-functional and worked on multiple products at once, as well as supporting them. Since there were multiple products, there were multiple Product Owners. Each product had a team, but each team member was responsible for multiple products, resulting in divided attention and sometimes diverging priorities. After working in this way, it’s become clear that it is more ideal, if not always practical, to have a team dedicated to one product. Having multiple Product Owners was the key factor in why our Sprint Planning meetings lasted so long. We weren’t planning out what we were going to work on for just one product, but for many.

Competing Priorities

With multiple Product Owners on a team, there is competition for resources and priorities. Which product comes first? Which task for which product does an individual work on first? A developer now has multiple Product Owners to report to, and that means direction from more than one person, which can be confusing and frustrating. More time is spent discussing these priorities and balancing them than if you just had one product and one Owner.

Balancing the Work

Even though teams can be pretty cross-functional, the cross-functionality is not always completely balanced. Some people have more knowledge about one system than another and so are more suited to that work. Or perhaps—as in Lullabot’s case—you’re an outside vendor working on only one project. If your overall team already has a target for the maximum number of story points it can take on within a sprint, and most of those points are allotted to a product that is unrelated to your work, you end up with a number of story points that may be less than what you can actually handle for a given sprint on your product. In short, it becomes difficult to make sure everyone has enough work to last them through the end of the sprint. The team spends more time trying to balance the work than doing the work.

In a truly agile scenario, you respond to the needs of the client and to the bandwidth of the developers. If a developer runs out of work, they can pull in new work and begin it—not sit around waiting for the end of the sprint—without repercussions.

Estimating the Work

With multiple areas of expertise and responsibilities, it takes more time to estimate the work because the goal is to have everyone understand the work being estimated. Estimation sessions should be limited to just the people with the appropriate knowledge, and as more people become familiar with the work they can lend their input, but time is wasted having people in a meeting who don’t understand the work. That person also tends to feel devalued because their time is being wasted.

For instance, in this case, we have multiple products being worked on. In a sprint planning meeting, we have members of each product. Lullabot is one of them, but we’re only part of one product. So while estimation is happening around issues that are relevant to Lullabot’s work, the people in this meeting who have no work at all on the same product are having their time wasted and feeling devalued. By having sprint planning meetings around just one product at a time, you can have only the people who are relevant to the discussions in the meeting and avoid wasting anyone’s time.

People over Process

The first value in the Agile Manifesto is “Individuals and interactions over processes and tools.” This resonates with Lullabot’s values—especially Be Human. If a process isn’t working, we change it. We are always on the lookout for opportunities to make more valuable use of our time—Lullabot’s brand of continuous process improvement. Our affinity for efficiency extends to everything we do—whether that’s designing our processes, coding a migration to run as fast as possible, or optimizing the page load of a website. How does that relate to being human? It shows how much we care about our people. We value their time. 

Since our first reaction to tedious processes is to cut them up and change them around, it was difficult to keep from trying to change this client’s processes. Considering the client was just starting to understand how Scrum works and to get the various Ceremonies into place, we felt it was more prudent to resist our typical reaction of trying to optimize the processes early on. They were already changing their existing process and had a guide in the form of a Scrum Coach to help them understand and implement this new way of working. Any changes to this process would have been more of a hindrance at this point. We’ve been through this before, and we could foresee where things would need to change to make them more efficient, but without the various team members first having a solid understanding of the Scrum methodology, our words would have fallen on deaf ears already focused on the changes at hand.

“Progress is impossible without change, and those who cannot change their minds cannot change anything.” - George Bernard Shaw

Focusing on helping people understand the changes to their process was where we decided it would be best to spend our time. We let them know that certain things may take a bit longer this way, but we were dedicated to the team learning the new process and adhering to how they wanted to work. In this we established a base of trust and knowledge sharing, helping to position ourselves as experts. We kept the process optimizations in our back pocket for when the team was more solid and knowledgeable and could then make more educated decisions about which processes should change, and which ones should adhere to the typical Scrum methodology being implemented. With an understanding of what work may suffer because of this restructuring, we were now free to focus on the relationship and to meet the expectations of our Product Owner.

In this way, we didn’t take our typical approach of changing the process to help our people, but instead focused on helping people understand the process. People understanding the process was the way Lullabot could put the people first and optimize the process later.

Create more Value than you Receive

One of the challenges in working with an inefficient process that cannot be changed is that you’re directly violating that value of efficiency that we hold so dear. When a value is violated, even for a short time, feelings of guilt and remorse set in. If this continues unchallenged and unchanged, apathy can result. You begin not to care about your work because you know you cannot make it as valuable as you know it could be. We want to avoid apathy at all costs. So how do we avoid it when there seems to be nothing we can do, or we’re too tired to continue fighting for our values?

A poignant reminder came from one of our Lullabots facing a similar situation. 

“…this is a necessary first step in a process, [and we] need to work toward activities that [the client] can look out the window and appreciate…”

This spoke to the need of going through these hoops while the company reorganized the way it works, while continuing to strive to find those areas in which Lullabot has become known for improving any project. By sticking to our values and sharing them with others through our work and our daily interactions, we can slowly but surely improve not only the projects we work on, but the relationships and by extension the entire team.

Putting such a value on hold even for a short time is difficult, so we’ve tried to find ways to balance it out. Participating in other more lenient projects, passion projects, and taking ‘mental health’ days are just some of the coping mechanisms we employed. But building trust and establishing a solid foundation for your relationship at the beginning is important. Don’t give up on trying to make positive changes to the project, but be patient instead of pushy. Establishing that base will lead to the conversations that need to happen to make your situation better.

Process for Process Sake

Another value of ours as developers is simplicity. We strive not to create complexity for its own sake. And when a process is complex, it’s a double whammy. To cope with a tedious process, it helps to understand its origins. You might still resent that it’s a complex process, but at least you’ll understand the thinking behind it, if not the need for it. If you still don’t understand, then perhaps there is room for change. But you have to understand it first to change it.

We advise against doing any process just for the sake of the process in any situation. Strive to understand the why behind the processes being put into place, especially if you think they don’t make sense. In our case, there was always a factor that we weren’t seeing. Typically that factor had something to do with a side of the client’s business that we simply did not fully understand at the time. After it was explained, it became apparent why a process was so complex and why the complexity was deemed necessary. That doesn’t mean we don’t still strive to reduce the complexity. In fact, we look for every opportunity to reduce complexity wherever we see it. 

Be Truly Agile

Agility can come in many forms. Agile with a capital “A” has become a “thing” in the business world and in the software world, with connotations that are starting to change the meaning of the word—for some in a negative manner, due to the number of meetings and the process overhead that can result. Being truly agile—with a lowercase “a”—is a thing of rarity and beauty. Be “agile” in the sense that you are nimble, quick, and alert. Be agile in the sense that you can respond quickly to change, but also agile enough to recognize when it isn’t yet time to change a process. Seek to understand first, recommend changes second. Learn about the prescribed Agile methodologies that are out there, and be agile enough to adapt them as necessary to fit the needs of your project.

Syntax is Syntax? Lullabot's Non-Drupal Development Work

Did you know that Lullabot does a significant amount of non-Drupal work? Matt and Mike sit down with several Lullabots who are working on non-Drupal projects including Node, Swift, and React. We talk pros and cons of working in various languages, how they compare to our PHP world, and lots more.

Who Sponsors Drupal Development?

(This article, co-authored with Dries Buytaert, the founder and project lead of Drupal, was cross-posted on,, and

There exist millions of Open Source projects today, but many of them aren't sustainable. Scaling Open Source projects in a sustainable manner is difficult. A prime example is OpenSSL, which plays a critical role in securing the internet. Despite its importance, the entire OpenSSL development team is relatively small, consisting of 11 people, 10 of whom are volunteers. In 2014, security researchers discovered an important security bug that exposed millions of websites. Like OpenSSL, most Open Source projects fail to scale their resources. Notable exceptions are the Linux kernel, Debian, Apache, Drupal, and WordPress, which have foundations, multiple corporate sponsors and many contributors that help these projects scale.

We (Dries Buytaert is the founder and project lead of Drupal and co-founder and Chief Technology Officer of Acquia and Matthew Tift is a Senior Developer at Lullabot and Drupal 8 configuration system co-maintainer) believe that the Drupal community has a shared responsibility to build Drupal and that those who get more from Drupal should consider giving more. We examined commit data to help understand who develops Drupal, how much of that work is sponsored, and where that sponsorship comes from. We will illustrate that the Drupal community is far ahead in understanding how to sustain and scale the project. We will show that the Drupal project is a healthy project with a diverse community of contributors. Nevertheless, in Drupal's spirit of always striving to do better, we will also highlight areas where our community can and should do better.

Who is working on Drupal?

In the spring of 2015, after proposing ideas about giving credit and discussing various approaches at length, Drupal.org added the ability for people to attribute their work to an organization or customer in the issue queues. Maintainers of Drupal themes and modules can award issue credits to people who help resolve issues with code, comments, design, and more.

Drupal.org's credit system captures all the issue activity on Drupal.org. This is primarily code contributions, but also includes some (but not all) of the work on design, translations, documentation, etc. It is important to note that contributing in the issues on Drupal.org is not the only way to contribute. There are other activities — for instance, sponsoring events, promoting Drupal, providing help and mentoring — important to the long-term health of the Drupal project. These activities are not currently captured by the credit system. Additionally, we acknowledge that parts of Drupal are developed on GitHub and that credits might get lost when those contributions are moved to Drupal.org. For the purposes of this post, however, we looked only at the issue contributions captured by the credit system on Drupal.org.

What we learned is that in the 12-month period from July 1, 2015 to June 30, 2016 there were 32,711 issue credits — both to Drupal core as well as all the contributed themes and modules — attributed to 5,196 different individual contributors and 659 different organizations.

Despite the large number of individual contributors, a relatively small number do the majority of the work. Approximately 51% of the contributors involved got just one credit. The top 30 contributors (or top 0.5% contributors) account for over 21% of the total credits, indicating that these individuals put an incredible amount of time and effort in developing Drupal and its contributed modules:

Rank | Username | Issues
1 | dawehner | 560
2 | DamienMcKenna | 448
3 | alexpott | 409
4 | Berdir | 383
5 | Wim Leers | 382
6 | jhodgdon | 381
7 | joelpittet | 294
8 | heykarthikwithu | 293
9 | mglaman | 292
10 | drunken monkey | 248
11 | Sam152 | 237
12 | borisson_ | 207
13 | benjy | 206
14 | edurenye | 184
15 | catch | 180
16 | slashrsm | 179
17 | phenaproxima | 177
18 | mbovan | 174
19 | tim.plunkett | 168
20 | rakesh.gectcr | 163
21 | martin107 | 163
22 | dsnopek | 152
23 | mikeryan | 150
24 | jhedstrom | 149
25 | xjm | 147
26 | hussainweb | 147
27 | stefan.r | 146
28 | bojanz | 145
29 | penyaskito | 141
30 | larowlan | 135

How much of the work is sponsored?

As mentioned above, from July 1, 2015 to June 30, 2016, 659 organizations contributed code to Drupal.org. Drupal is used by more than one million websites. The vast majority of the organizations behind these Drupal websites never participate in the development of Drupal; they use the software as it is and do not feel the need to help drive its development.

Technically, Drupal started out as a 100% volunteer-driven project. But nowadays, the data suggests that the majority of the code on is sponsored by organizations in Drupal's ecosystem. For example, of the 32,711 commit credits we studied, 69% of the credited work is “sponsored.”

We then looked at the distribution of how many of the credits are given to volunteers versus given to individuals doing "sponsored work" (i.e. contributing as part of their paid job):


Looking at the top 100 contributors, for example, 23% of their credits are the result of contributing as volunteers and 56% of their credits are attributed to a corporate sponsor. The remainder, roughly 21% of the credits, are not attributed. Attribution is optional so this means it could either be volunteer-driven, sponsored, or both.

As can be seen on the graph, the ratio of volunteer versus sponsored credits doesn't meaningfully change as we look beyond the top 100 — the only thing that changes is that more credits are not attributed. This might be explained by the fact that occasional contributors might not be aware of or understand the credit system, or could not be bothered with setting up organizational profiles for their employer or customers.

As shown in jamadar's screenshot above, a credit can be marked as volunteer and sponsored at the same time. This could be the case when someone does the minimum required work to satisfy the customer's need, but uses his or her spare time to add extra functionality. We can also look at the number of credits that are exclusively volunteer credits. Of the 7,874 credits that were marked volunteer, 43% of them (3,376 credits) only had the volunteer box checked, and 57% of them (4,498) were also partially sponsored. These 3,376 credits are one of our best metrics to measure volunteer-only contributions. This suggests that only 10% of the 32,711 commit credits we examined were contributed exclusively by volunteers. This number is a stark contrast to the 12,888 credits that were “purely sponsored,” which account for 39% of the total credits. In other words, there were roughly four times as many “purely sponsored” credits as there were “purely volunteer” credits.

When we looked at the 5,196 users, rather than credits, we found somewhat different results. A similar percentage of all users had exclusively volunteer credits: 14% (741 users). But the percentage of users with exclusively sponsored credits is only 50% higher: 21% (1,077 users). Thus, when we look at the data this way, we find that users who only do sponsored work tend to contribute quite a bit more than users who only do volunteer work.

None of these methodologies are perfect, but they all point to a conclusion that most of the work on Drupal is sponsored. At the same time, the data shows that volunteer contribution remains very important to Drupal. We believe there is a healthy ratio between sponsored and volunteer contributions.

Who is sponsoring the work?

Because we established that most of the work on Drupal is sponsored, we know it is important to track and study what organizations contribute to Drupal. Despite 659 different organizations contributing to Drupal, approximately 50% of them got 4 credits or less. The top 30 organizations (roughly top 5%) account for about 29% of the total credits, which suggests that the top 30 companies play a crucial role in the health of the Drupal project. The graph below shows the top 30 organizations and the number of credits they received between July 1, 2015 and June 30, 2016:


While not immediately obvious from the graph above, different types of companies are active in Drupal's ecosystem, and we propose the following categorization to discuss them.

  • Traditional Drupal businesses: Small-to-medium-sized professional services companies that make money primarily using Drupal. They typically employ fewer than 100 employees, and because they specialize in Drupal, many of these professional services companies contribute frequently and are a huge part of our community. Examples are Lullabot (shown on graph) or Chapter Three (shown on graph).
  • Digital marketing agencies: Larger full-service agencies that have marketing-led practices using a variety of tools, typically including Drupal, Adobe Experience Manager, Sitecore, WordPress, etc. They are typically larger, with the largest agencies employing thousands of people. Examples are Sapient (shown on graph) or AKQA.
  • System integrators: Larger companies that specialize in bringing together different technologies into one solution. Example system integrators are Accenture, TATA Consultancy Services, Capgemini or CI&T.
  • Technology and infrastructure companies: Examples are Acquia (shown on graph), Lingotek (shown on graph), BlackMesh, Rackspace, Pantheon or Platform.sh.
  • End-users: Examples are Pfizer (shown on graph), Examiner.com (shown on graph) or NBC Universal.

Most of the top 30 sponsors are traditional Drupal companies. Sapient (120 credits) is the only digital marketing agency showing up in the top 30. No system integrator shows up in the top 30. The first system integrator is CI&T, which ranked 31st with 102 credits. As far as system integrators are concerned, CI&T is a smaller player, with between 1,000 and 5,000 employees. Other system integrators with credits are Capgemini (43 credits), Globant (26 credits), and TATA Consultancy Services (7 credits). We didn't see any code contributions from Accenture, Wipro or IBM Global Services. We expect these will come as most of them are building out Drupal practices. For example, we know that IBM Global Services already has over 100 people doing Drupal work.


When we look beyond the top 30 sponsors, we see that roughly 82% of the code contribution on Drupal.org comes from the traditional Drupal businesses. About 13% of the contributions come from infrastructure and software companies, though that category is mostly dominated by one company, Acquia. This means that the technology and infrastructure companies, digital marketing agencies, system integrators and end-users are not meaningfully contributing code to Drupal.org today. In an ideal world, the pie chart above would be sliced in equal sized parts.

How can we explain that imbalance? We believe the two biggest reasons are: (1) Drupal's strategic importance and (2) the level of maturity with Drupal and Open Source. Many of the traditional Drupal agencies have been involved with Drupal for 10 years and almost entirely depend on Drupal. Given both their expertise and dependence on Drupal, they are most likely to look after Drupal's development and well-being. These organizations are typically recognized as Drupal experts and sought out by organizations that want to build a Drupal website. Contrast this with most of the digital marketing agencies and system integrators, who have the size to work with a diversified portfolio of content management platforms and are just getting started with Drupal and Open Source. They deliver digital marketing solutions and aren't necessarily sought out for their Drupal expertise. As their Drupal practices grow in size and importance, this could change, and when it does, we expect them to contribute more. Right now many of the digital marketing agencies and system integrators have little or no experience with Open Source, so it is important that we motivate them to contribute and then teach them how to contribute.

There are two main business reasons for organizations to contribute: (1) it improves their ability to sell and win deals and (2) it improves their ability to hire. Companies that contribute to Drupal tend to promote their contributions in RFPs and sales pitches to win more deals. Contributing to Drupal also results in being recognized as a great place to work for Drupal experts.

We also should note that many organizations in the Drupal community contribute for reasons that would not seem to be explicitly economically motivated. More than 100 credits were sponsored by colleges or universities, such as the University of Waterloo (45 credits). More than 50 credits came from community groups, such as the Drupal Bangalore Community and the Drupal Ukraine Community. Other nonprofits and government organizations that appeared in our data include the Drupal Association (166 credits), the National Virtual Library of India (25 credits), the Center for Research Libraries (20 credits), and the Welsh Government (9 credits).

Infrastructure and software companies

Infrastructure and software companies play a different role in our community. These companies are less reliant on professional services (building Drupal websites) and primarily make money from selling subscription-based products.

Acquia, Pantheon and Platform.sh are venture-backed Platform-as-a-Service companies born out of the Drupal community. Rackspace and AWS are public companies, each hosting thousands of Drupal sites. Lingotek offers cloud-based translation management software for Drupal.


The graph above suggests that Pantheon and Platform.sh have barely contributed code on Drupal.org during the past year. (Platform.sh only became an independent company 6 months ago, after it split off from Commerce Guys.) The chart also does not reflect sponsored code contributions on GitHub (such as drush), Drupal event sponsorship, or the wide variety of other value that these companies add to Drupal and other Open Source communities.

Consequently, these data show that the Drupal community needs to do a better job of enticing infrastructure and software companies to contribute code to Drupal.org. The Drupal community has a long tradition of encouraging organizations to share code on Drupal.org rather than keep it behind firewalls. While the spirit of the Drupal project cannot be reduced to any single ideology — not every organization can or will share their code — we would like to see organizations continue to prioritize collaboration over individual ownership. Our aim is not to criticize those who do not contribute, but rather to help foster an environment worthy of contribution.

End users

We saw two end-users among the top 30 corporate sponsors: Pfizer (158 credits) and Examiner.com (132 credits). Other notable end-users that are actively giving back are Workday (52 credits), NBC Universal (40 credits), the University of Waterloo (45 credits) and CARD.com (33 credits). The end-users that contribute tend to use Drupal for a key part of their business and often have an internal team of Drupal developers.

Given that there are hundreds of thousands of Drupal end-users, we would like to see more end-users among the top 30 sponsors. We recognize that a lot of digital agencies don't want to, or are not legally allowed to, attribute their customers. We hope that will change as Open Source adoption continues to grow.

Given the vast number of Drupal users, we believe encouraging end-users to contribute could be a big opportunity. Being credited on Drupal.org gives them visibility in the Drupal community and recognizes them as a great place for Open Source developers to work.

The uneasy alliance with corporate contributions

As mentioned above, when community-driven Open Source projects grow, the need for organizations to help drive their development grows with them. This almost always creates an uneasy alliance between volunteers and corporations.

This theory played out in the Linux community well before it played out in the Drupal community. The Linux project is 25 years old now and has seen a steady increase in the number of corporate contributors for roughly 20 years. While Linux companies like Red Hat and SUSE rank highly on the contribution list, so do non-Linux-centric companies such as Samsung, Intel, Oracle and Google. The common theme is that all of these corporate contributors use Linux as an integral part of their business.

The 659 organizations that contribute to Drupal (a figure that includes corporations) is roughly three times the number of organizations that sponsor development of the Linux kernel, “one of the largest cooperative software projects ever attempted.” In fairness, Linux has a different ecosystem than Drupal. The Linux business ecosystem includes several very large organizations (Red Hat, Google, Intel, IBM and SUSE) for whom Linux is strategically important. As a result, many of them employ dozens of full-time Linux contributors and invest millions of dollars in Linux each year.

In the Drupal community, Acquia has had people dedicated full-time to Drupal for nine years, starting when it hired Gábor Hojtsy to contribute to Drupal core full-time. Today, Acquia has about 10 developers contributing to Drupal full-time. They work on core, contributed modules, security, user experience, performance, best practices, and more. Their work has benefited untold numbers of people around the world, most of whom are not Acquia customers.

In response to Acquia’s high level of participation in the Drupal project, as well as the number of Acquia employees who hold leadership positions, some members of the Drupal community have suggested that Acquia wields its influence and power to control the future of Drupal for its own commercial benefit. But neither of us believes that Acquia should contribute less. Instead, we would like to see more companies provide leadership to Drupal and meaningfully contribute on Drupal.org.

Who is sponsoring the top 30 contributors?

The table below lists each contributor's issue credits, the share of those credits that were volunteered, sponsored, or not specified (the percentages can overlap, because a single credit can be marked as both volunteered and sponsored), and their sponsoring organizations.

| Rank | Username | Issues | Volunteer | Sponsored | Not specified | Sponsors |
|---|---|---|---|---|---|---|
| 1 | dawehner | 560 | 84.1% | 77.7% | 9.5% | Drupal Association (182), Chapter Three (179), Tag1 Consulting (160), Cando (6), Acquia (4), Comm-press (1) |
| 2 | DamienMcKenna | 448 | 6.9% | 76.3% | 19.4% | Mediacurrent (342) |
| 3 | alexpott | 409 | 0.2% | 97.8% | 2.2% | Chapter Three (400) |
| 4 | Berdir | 383 | 0.0% | 95.3% | 4.7% | MD Systems (365), Acquia (9) |
| 5 | Wim Leers | 382 | 31.7% | 98.2% | 1.8% | Acquia (375) |
| 6 | jhodgdon | 381 | 5.2% | 3.4% | 91.3% | Drupal Association (13), Poplar ProductivityWare (13) |
| 7 | joelpittet | 294 | 23.8% | 1.4% | 76.2% | Drupal Association (4) |
| 8 | heykarthikwithu | 293 | 99.3% | 100.0% | 0.0% | Valuebound (293), Drupal Bangalore Community (3) |
| 9 | mglaman | 292 | 9.6% | 96.9% | 0.7% | Commerce Guys (257), Bluehorn Digital (14), Gaggle.net, Inc. (12), LivePerson, Inc (11), Bluespark (5), DPCI (3), Thinkbean, LLC (3), Digital Bridge Solutions (2), Matsmart (1) |
| 10 | drunken monkey | 248 | 75.4% | 55.6% | 2.0% | Acquia (72), StudentFirst (44), epiqo (12), Vizala (9), Sunlime IT Services GmbH (1) |
| 11 | Sam152 | 237 | 75.9% | 89.5% | 10.1% | PreviousNext (210), Code Drop (2) |
| 12 | borisson_ | 207 | 62.8% | 36.2% | 15.9% | Acquia (67), Intracto digital agency (8) |
| 13 | benjy | 206 | 0.0% | 98.1% | 1.9% | PreviousNext (168), Code Drop (34) |
| 14 | edurenye | 184 | 0.0% | 100.0% | 0.0% | MD Systems (184) |
| 15 | catch | 180 | 3.3% | 44.4% | 54.4% | Third and Grove (44), Tag1 Consulting (36), Drupal Association (4) |
| 16 | slashrsm | 179 | 12.8% | 96.6% | 2.8% | Examiner.com (89), MD Systems (84), Acquia (18), Studio Matris (1) |
| 17 | phenaproxima | 177 | 0.0% | 94.4% | 5.6% | Acquia (167) |
| 18 | mbovan | 174 | 7.5% | 100.0% | 0.0% | MD Systems (118), ACTO Team (43), Google Summer of Code (13) |
| 19 | tim.plunkett | 168 | 14.3% | 89.9% | 10.1% | Acquia (151) |
| 20 | rakesh.gectcr | 163 | 100.0% | 100.0% | 0.0% | Valuebound (138), National Virtual Library of India (NVLI) (25) |
| 21 | martin107 | 163 | 4.9% | 0.0% | 95.1% | |
| 22 | dsnopek | 152 | 0.7% | 0.0% | 99.3% | |
| 23 | mikeryan | 150 | 0.0% | 89.3% | 10.7% | Acquia (112), Virtuoso Performance (22), Drupalize.Me (4), North Studio (4) |
| 24 | jhedstrom | 149 | 0.0% | 83.2% | 16.8% | Phase2 (124), Workday, Inc. (36), Memorial Sloan Kettering Cancer Center (4) |
| 25 | xjm | 147 | 0.0% | 81.0% | 19.0% | Acquia (119) |
| 26 | hussainweb | 147 | 2.0% | 98.6% | 1.4% | Axelerant (145) |
| 27 | stefan.r | 146 | 0.7% | 0.7% | 98.6% | Drupal Association (1) |
| 28 | bojanz | 145 | 2.1% | 83.4% | 15.2% | Commerce Guys (121), Bluespark (2) |
| 29 | penyaskito | 141 | 6.4% | 95.0% | 3.5% | Lingotek (129), Cocomore AG (5) |
| 30 | larowlan | 135 | 34.1% | 63.0% | 16.3% | PreviousNext (85), Department of Justice & Regulation, Victoria (14), amaysim Australia Ltd. (1), University of Adelaide (1) |

We observe that the top 30 contributors are sponsored by 45 organizations. This kind of diversity is aligned with our desire not to see Drupal controlled by a single organization. The top 30 contributors and the 45 organizations come from many different parts of the world and work with customers large and small. We could still benefit from more diversity, though. The top 30 lacks digital marketing agencies, large system integrators and end-users — all of whom could contribute meaningfully to making Drupal better for themselves and others.

Evolving the credit system

The credit system gives us quantifiable data about where our community's contributions come from, but that data is not perfect. Here are a few suggested improvements:

  1. We need to find ways to recognize non-code contributions, as well as code contributions that happen outside of Drupal.org (e.g., on GitHub). Lots of people and organizations spend hundreds of hours putting together local events, writing documentation, translating Drupal, mentoring new contributors, and more — and none of that gets captured by the credit system.
  2. We'd benefit from finding a way to account for the complexity and quality of contributions; one person might have worked several weeks for just one credit, while another person might have gotten a credit for 30 minutes of work. We could, for example, consider the issue credit data in conjunction with Git commit data regarding insertions, deletions, and files changed (see the toy sketch after this list).
  3. We could try to leverage the credit system to encourage more companies, especially those that do not contribute today, to participate in large-scale initiatives. Dries presented some ideas two years ago in his DrupalCon Amsterdam keynote and Matthew has suggested other ideas, but we are open to more suggestions on how we might bring more contributors into the fold using the credit system.
  4. We could segment out organization profiles between end users and different kinds of service providers. Doing so would make it easier to see who the top contributors are in each segment and perhaps foster more healthy competition among peers. In turn, the community could learn about the peculiar motivations within each segment.
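
To make suggestion 2 concrete, here is a toy sketch of what complexity weighting might look like. The sample data and the logarithmic weighting formula are purely illustrative assumptions on our part, not a proposal for how Drupal.org should actually score credits.

var sampleCredits = [
  // Hypothetical per-credit Git statistics.
  { username: 'alice', insertions: 1200, deletions: 300, filesChanged: 18 },
  { username: 'bob', insertions: 4, deletions: 1, filesChanged: 1 }
];

sampleCredits.forEach(function (credit) {
  // Log-scale the churn so very large patches don't dominate linearly.
  var churn = credit.insertions + credit.deletions + credit.filesChanged;
  var weight = Math.log2(1 + churn);
  console.log(credit.username + ': weighted credit ' + weight.toFixed(1));
});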

Like Drupal the software, the credit system on Drupal.org is a tool that can evolve, but it will ultimately only be useful when the community uses it, understands its shortcomings, and suggests constructive improvements. In highlighting the organizations that sponsor work on Drupal.org, we hope to provoke responses that help evolve the credit system into something that incentivizes businesses to sponsor more work and that allows more people the opportunity to participate in our community, learn from others, teach newcomers, and make positive contributions. We view Drupal as a productive force for change and we wish to use the credit system to highlight (at least some of) the work of our diverse community of volunteers, companies, nonprofits, governments, schools, universities, individuals, and other groups.


Our data shows that Drupal is a vibrant and diverse community, with thousands of contributors, that is constantly evolving and improving the software. While we have examined issue credits here mostly through the lens of sponsorship, in future analyses we plan to consider the same issue credits in conjunction with other publicly disclosed Drupal user data, such as gender identification, geography, seasonal participation, mentorship, and event attendance.

Our analysis of the credit data concludes that most of the contributions to Drupal are sponsored. At the same time, the data shows that volunteer contribution remains very important to Drupal.

As a community, we need to understand that a healthy Open Source ecosystem is a diverse ecosystem that includes more than traditional Drupal agencies. The traditional Drupal agencies and Acquia contribute the most, but we don't see a lot of contribution from the larger digital marketing agencies, system integrators, technology companies, or end-users of Drupal — we believe that might come as these organizations build out their Drupal practices and Drupal becomes more strategic for them.

To grow and sustain Drupal, we should support those that contribute to Drupal, and find ways to get those that are not contributing, involved in our community. We invite you to help us figure out how we can continue to strengthen our ecosystem.

We hope to repeat this work in 1 or 2 years' time so we can track our evolution. Special thanks to Tim Lehnen (Drupal Association) for providing us the credit system data and supporting us during our research.

Announcing the Yonder Podcast

As you may already know, Lullabot is a completely distributed company. We have no central office, and our employees are spread out across six different countries. For the past three years, Lullabot has organized a conference for leaders of companies like ours. The event is called Yonder, and over the years people from companies such as Automattic, GitHub, Upworthy, The World Adult Kickball Association, and many smaller digital agencies have gotten together to share ideas and address common problems.

Last month, we revamped the Yonder website and launched the Yonder Podcast to bring these conversations to a larger audience. Every two weeks, I’ll interview another interesting person who has been thinking about the world of remote work.

While remote/distributed work is often held up as “the future of work,” I’ve talked to many leaders of brick-and-mortar businesses who are fearful about the idea. They are worried that you cannot build a vibrant company culture or that productivity will suffer. They worry that employees will take advantage or that their clients/customers will not take them seriously. Most simply worry that it will be lonely, and they will feel disconnected from their coworkers.

The Yonder Podcast doesn’t attempt to debunk these ideas so much as share the successes that distributed companies have had in these areas and others. We’ll talk about the advantages and also the difficulties of remote work and distributed teams. It’s quite something to hear from a variety of businesses working this way, and see the patterns that emerge. To me, it feels like we’re on the verge of an evolutionary step in the way that people work.

If you’re a remote worker, a business owner or manager, or you’re just curious what it might look like to work from home, I invite you to subscribe to the Yonder Podcast. You can find us on iTunes, Google Play, Stitcher, and most other podcast platforms. If you subscribe to the Yonder mailing list you’ll receive updates whenever new podcasts or articles get posted to the site.

Thanks for listening!

Monitoring Web Accessibility Compliance With Pa11y Dashboard

Auditing websites for accessibility is imperative. Ignore accessibility and you unintentionally exclude 56.7 million people (source: US Census Bureau, 2012) from having access to your website. But before you can fix accessibility problems, you need to audit your site and find out which problems exist. Manually auditing every single page on a medium-to-large site can be time-consuming, if not impossible. When a large digital education service provider asked us to audit a sampling of 800 landing pages and 30 micro sites (a total of about 4,000 pages) for accessibility in one month, we needed to find a better way. Pa11y Dashboard to the rescue. Pa11y Dashboard provides an open source interface for keeping track of automated accessibility tests over time. It served as a handy tool that we could reference in our extensive written report, and it allowed us to quickly see common patterns across various sites and provide custom solutions.

The Dashboard

Out of the box, the front page of Pa11y Dashboard lists a filterable overview of all tasks. By default, automated audits are performed daily at 12:30 a.m., but the timing can be reconfigured via the cron setting. Through the dashboard, you can also create, edit, and delete tasks, and manually trigger an audit. Check out the Pa11y Dashboard demo to see it in action.
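
For reference, the audit schedule lives in the dashboard's environment config (e.g., config/production.json). The snippet below is a minimal sketch based on the sample config shipped with Pa11y Dashboard; key names and defaults may vary between versions, so verify them against the project's README. The cron value uses six fields (seconds first), so "0 30 0 * * *" means 12:30:00 a.m. daily.

{
  "port": 4000,
  "noindex": true,
  "readonly": false,
  "webservice": {
    "database": "mongodb://localhost/pa11y-webservice",
    "host": "localhost",
    "port": 3000,
    "cron": "0 30 0 * * *"
  }
}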

The Task Pages

Individual task pages detail the errors, notices, and warnings, giving a short description of each along with its location in the HTML; you can export these reports in CSV or JSON format as needed. A graph illustrates the change in errors, warnings, and notices over time. Each task holds a human-readable name, a URL to track, the accessibility standard to test against, timeout settings, HTTP Basic Authentication credentials, and an optional list of rules to ignore.

Errors, warnings, and notices

Pa11y Dashboard outputs three types of messages. The system reports clear-cut guideline violations as "errors." It issues "warnings" for items it has identified but that require manual, human verification to confirm a violation. Finally, "notices" flag items that the system cannot detect on its own but that still require a keen human eye (or ear) to ensure the web page meets a11y requirements. Warnings and notices are just as important as errors to verify and resolve in order to achieve standards compliance.
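
For illustration, an individual result message from Pa11y looks roughly like the JavaScript object below. The field values are made up, and the exact shape can vary by version:

// A hypothetical "error" result, as returned by Pa11y.
var exampleResult = {
  type: 'error', // 'error', 'warning', or 'notice'
  code: 'WCAG2AA.Principle1.Guideline1_1.1_1_1.H37',
  message: 'Img element missing an alt attribute.',
  context: '<img src="logo.png">',
  selector: 'html > body > img'
};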

What's under the hood?
  • Pa11y Dashboard: A web UI to view and manage scheduled audit tasks, built with Node.js, Express for routing, Handlebars for templating, and LESS for CSS preprocessing.
  • Pa11y Webservice: The Pa11y JSON web service that runs tests against multiple scheduled URLs with PhantomJS, and lets you store and query results in a MongoDB database.
  • Pa11y: A Node.js module that provides a command-line and JavaScript interface for HTML_CodeSniffer tests (a short usage sketch follows this list). The module relies on PhantomJS. Pa11y was originally created by a group of front-end developers at Nature Publishing Group (later Springer Nature), and continues today as its own organization on GitHub.
  • HTML_CodeSniffer: A client-side JavaScript application that performs accessibility tests against the DOM. It contains rules and tests for Section 508 and all three levels of the WCAG 2.0 web accessibility standard. It also includes a modal UI for running these tests in the browser via a bookmarklet.
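
To see how these pieces fit together, here is a minimal sketch of driving Pa11y directly from Node.js. It assumes the PhantomJS-era callback API that was current when we ran this audit (pa11y 4.x); newer releases have moved to a promise-based API, so check the Pa11y documentation for your version.

// Run a one-off accessibility test against a single URL.
var pa11y = require('pa11y');

// Configure the test runner with the standard to test against.
var test = pa11y({
  standard: 'WCAG2AA'
});

test.run('http://example.com/', function (error, results) {
  if (error) {
    return console.error(error.message);
  }
  // Tally the results by message type: error, warning, or notice.
  var counts = { error: 0, warning: 0, notice: 0 };
  results.forEach(function (result) {
    counts[result.type] += 1;
  });
  console.log(counts);
});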

Because Pa11y Dashboard is fully free and open source, we could easily customize it to serve our client's specific needs.

We created a page listing all the errors, warnings, and notices found across the audit. For every issue, it indicates how often it occurs and on how many pages, and it lists all the pages where the issue was found. This gives our client a bird’s-eye view of all the audited sites and quick insight into the most common issues. We added permalinks so we could easily reference individual issues from our written audit.


Out of the box, Pa11y Dashboard does not accommodate crawling entire sites. We wrote a custom script to collect and import all URLs from the micro sites, and adjusted the UI to visually bundle each micro site's URLs together.
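
Our script was specific to the client's sites, but the general shape is straightforward. The sketch below gathers page URLs from a hypothetical micro site's sitemap.xml; the URL and the naive <loc> regex are illustrative assumptions, and a production script should use a real XML parser.

var https = require('https');

// Fetch a sitemap and extract every <loc> entry.
function collectUrls(sitemapUrl, done) {
  https.get(sitemapUrl, function (response) {
    var xml = '';
    response.on('data', function (chunk) {
      xml += chunk;
    });
    response.on('end', function () {
      var urls = [];
      var pattern = /<loc>(.*?)<\/loc>/g;
      var match;
      while ((match = pattern.exec(xml)) !== null) {
        urls.push(match[1]);
      }
      done(null, urls);
    });
  }).on('error', done);
}

collectUrls('https://example.com/sitemap.xml', function (error, urls) {
  if (error) {
    return console.error(error);
  }
  // These URLs can then be fed to the task import script described below.
  console.log(urls.join('\n'));
});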

Importing tasks in batches

As previously mentioned, adding new tasks can be done through the UI. If you need to add a lot of tasks, you can leverage the Pa11y Webservice Client library that is included in the dashboard. This way, you can import your tasks in batches with a simple script.

1. Make sure the dashboard is up and running. Detailed instructions on how to set it up can be found on the Pa11y Dashboard GitHub page.

2. Next, create a CSV file and place it in the /data folder. We'll name it pa11y-tasks.csv. It lists all the values we need to create each task. Optionally, more values can be added per task: ignore, timeout, wait, username, and password (for HTTP Basic Authentication).

"name","url","standard" "Hillary for America","","WCAG2AA" "Donald J Trump for President","","WCAG2AA" "Mike Herchel for President","","WCAG2AAA"

3. Install the CSV parser module.

npm install csv-parser --save

4. Add the following JavaScript file. We named it pa11y-import.js. It reads the CSV file and imports the tasks, leveraging the dashboard's config file to build the web service endpoint where our data is stored.

var fs = require('fs');
var csv = require('csv-parser');
var createClient = require('pa11y-webservice-client-node');
var config = require('./config');

// Build a client for the Pa11y web service endpoint defined in the dashboard config.
var client = createClient('http://' + config.webservice.host + ':' + config.webservice.port + '/');

// Read the CSV file and create a task for each row.
fs.createReadStream('./data/pa11y-tasks.csv')
  .pipe(csv())
  .on('data', function (data) {
    client.tasks.create({
      name: data.name,
      url: data.url,
      standard: data.standard
    }, function (error, task) {
      // Error and success handling.
      if (error) {
        console.error('Error:', error);
      }
      if (task) {
        console.log('Imported:', task.name);
      }
    });
  })
  .on('end', function () {
    console.log('All tasks imported!');
  });

5. Finally, run the import script once. Optionally replace the Node environment value if you are using development or test.

NODE_ENV=production node pa11y-import.js

That's it! All your tasks should now be imported.
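
To double-check the import, you can list the stored tasks with the same client library. This sketch assumes the tasks.get method documented in the pa11y-webservice-client-node README:

var createClient = require('pa11y-webservice-client-node');
var config = require('./config');

var client = createClient('http://' + config.webservice.host + ':' + config.webservice.port + '/');

// Fetch all tasks known to the web service and print them.
client.tasks.get({}, function (error, tasks) {
  if (error) {
    return console.error('Error:', error);
  }
  tasks.forEach(function (task) {
    console.log(task.name, '->', task.url);
  });
});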

The future of Pa11y Dashboard

While Pa11y Dashboard in its current form is a very helpful tool, plans are being made for a successor project named Sidekick. Proposed features include full-site audits, an increased focus on Continuous Integration and deployment, and a "diff" view that compares the current results to the previous run. If you are interested in learning more about where the project is headed, or in getting involved, check out the Sidekick proposal page.

The importance of manual audits

While we were able to do some really powerful and interesting things with Pa11y Dashboard, keep in mind that optimizing your site for accessibility is about more than clearing the errors and warnings reported by audit tools. Tools like this one give you a good overview of some of the most important issues and are certainly a big help, but no tool can completely replace manually assessing your pages. Manual review techniques, such as navigating your site by keyboard only or with a screen reader, are crucial for determining whether you have an accessible website. Make sure every part of your site is navigable and all content can be accessed by every user. Even better, collect feedback from users with disabilities; they’ll likely catch things even the best pros won’t.

As part of this audit, we empowered our client to perform manual audits and find issues that can’t be found using automated tests. Helena’s on-site visit was critical in this process to instill enthusiasm and share practical knowledge. Ultimately, what really matters is the user experience.