
Keeping a project journal

Have you ever felt insecure when starting a project? Usually, I feel excited, as it means a new challenge that I will face with other Lullabots. However, last year I had my first experience flying solo on a client project and, this time, I felt nervous. My main worries were:

  • Do I have the required communication skills to transform client requirements into tasks?
  • How can I avoid missing or forgetting things that the client says?
  • How do I know if I am meeting Lullabot's expectations in managing the client?
  • How do I know when to say No to the client?

Being diligent in my note-taking and email communication would ensure nothing was lost in translation or forgotten, so I could check off the first two concerns on my list. But the last two items were more about feeling that I needed external support: someone at Lullabot who could oversee my notes and tell me every once in a while, "this is great Juampy, keep it up" or "Juampy, perhaps you could handle this in some other way."

Writing is something I love to do: I like to document my code, I like to describe my pull requests, and I like to take notes on kick-off meetings. Not only does this help the project, it helps me in the future because I have a bad memory. When I am in a cafe, I see waiters taking mental notes and, on their way to the bar, taking even more notes, nodding, and getting it right! I admire their memory. You can ask them "what are the orders?" and they tell you what every table ordered, with all the details. I can't do that so I take notes of everything that I do, or have to do, and then I turn that into tickets, calendar reminders, or TODOs.

In the following sections, I will share with you some of the benefits that I discovered by keeping a journal. Throughout, I will refer to some examples from the journal that I kept while I was part of the Module Acceleration Program.

Starting the day with a plan

When I start working, I spend the first minutes gathering notes from what happened while I was away by reading email notifications and chat logs. I also look at yesterday's notes to see if there was anything that I did not complete. With that input, I make a list of tasks like the following one:

[Screenshot: a daily task list from the journal]

Once I have a list like the above, I feel that I have a set of things to complete by the end of the day, which motivates me immensely. If I manage to complete them all (plus whatever else may arise), then it's time to celebrate. If I can't, then I will leave a comment underneath the remaining tasks that describes their status. I may also copy and paste these statuses into their respective tickets so the client knows my progress. The next day, I will continue working from there.

There are days when I don't get much done. With the journal, it is easy to go back and remember why—that something else happened such as "one of my teammates got stuck with a bug and needed help" or "we suddenly had to jump into a video conference that took too long." With these notes, I can see where the time is going and make future adjustments in how I manage my time.

Ticket writing

Countless times I have closed a tab by mistake while I was typing something and had to write it again. While some ticket systems and browser plugins can restore your draft, others can't. Besides, some systems like Jira feel less natural and break up your writing time when, for example, you need to paste a screenshot (no, you can't just hit Ctrl + v). Both Dropbox Paper and Google Docs are great for writing a journal because you can copy and paste screenshots, add links, create TODO lists, etc., in a more seamless way.

While working on tickets, I began writing my findings and pasting screenshots into the journal (especially for the tricky ones). Then, if I completed the ticket, I would use this material in my pull request to ease peer review. If further work was needed, I would copy my findings from the journal and paste them into the ticket so the team could see my progress. Here is an example from my journal, with some annotations that describe how I will treat the notes:

[Screenshot: annotated ticket notes from the journal]

SCRUMs / Stand-ups

With the journal, it is very easy for me to share my status, as it is just a matter of reading my notes out loud. I also take post-SCRUM notes in the journal, which I use as follow-ups for my next tasks. Here is an example:

[Screenshot: post-SCRUM follow-up notes from the journal]

There have been times when I offered to lead a meeting because I already had some material in my journal that could serve as the agenda. In these cases, I shared my screen with everyone and took notes that could be seen and discussed in real time. This approach proved to be an effective way to structure a meeting and provide leadership.

Keeping your peers up to date

Betty Tran, Sally Young, and I did an on-site last year for a potential client. By the end of the on-site, we had two weeks to write a document containing our project proposal. During those two weeks, the client sent many emails and shared documents with data that we had to classify and filter for use in the proposal. It was crucial to keep everything organized in an efficient way, so I asked Betty and Sally to subscribe to my journal, where I kept a daily log of events, each item linked to its respective document. Doing so allowed them to focus on writing the proposal and use the data without having to skim through email threads. Here is an example of what a day in that journal looked like:

[Screenshot: a day's log from the shared on-site journal]

Management likes it

My managers at Lullabot, Seth Brown and James Sansbury, realized that keeping a journal was a transparent and effective way to monitor projects. By subscribing to my journal, they get an email every day with my latest changes. Furthermore, I can mention them, and they will get an immediate email notification so they can help me by posting a comment. Therefore, they encouraged Lullabot Architects and others doing solo work on projects to start writing a journal.

For example, I subscribed to the journal of Dave Reid, who is currently working on a project with me. Every day, I get an email like this with his updates:

[Screenshot: a daily email notification with Dave Reid's journal updates]

This gives me a chance to support him and keep up to date with what he is working on. If I want to add feedback, I can open his journal and write a comment.


Conclusion

Try it for a week using a writing tool that feels natural to you. The less that you need to think about the tool, the better. Leave it open at all times and take all your notes in this one location. Eventually, you will either experience some of the benefits that I mentioned above or realize that you are like one of those waiters with an elephant's memory that I admire so much.

Thanks to Seth Brown and James Sansbury for their feedback while I was writing this. Also thanks to Adam Balsam for letting me share the journal that I kept while working on the Module Acceleration Program.

Hero photo by Barry Silver.

Building Social With "Open Social"

Matt and Mike sit down with Taco Potze and Mieszko Czyzyk, as well as Lullabot Director of Technology, Karen Stevenson, to talk about the new Open Social Drupal distribution. We talk about the new features of Open Social, as well as the business model, developing in Drupal 8, and the pros and cons of distributions in general.

Modern decoupling is more performant

Two years ago, I started to be interested in API-first designs. I was asked during my annual review, “What kind of project would you like to be involved with this next year?” My response: I want to develop projects from an API-first perspective.

Little did I know that I was about to embark on a series of projects that would teach me not only about decoupled Drupal, but also the subtleties of designing proper APIs to maximize performance and minimize roundtrips to the server.

Embedding resources

I was lucky enough that the client that I was working with at the time—The Tonight Show with Jimmy Fallon—decided on a decoupled approach. I was involved in the Drupal HTTP API server implementation. The project went on to win an Emmy Award for Outstanding Interactive Program.

The idea of a content repository that could be accessed from anywhere via HTTP, and leverage all the cool technologies 2014 had to offer, was—if not revolutionary—forward-looking. I was amazed by the possibilities the approach opened. The ability to expose Drupal’s data to an external team that could work in parallel using the front-end technologies that they were proficient with meant work could begin immediately. Nevertheless, there were drawbacks to the approach. For instance, we observed a lot of round trips between the consumer of the data—the client—and the server.

As it turns out, The Tonight Show with Jimmy Fallon was only the first of several decoupled projects that I undertook in rapid succession. As a result, I authored version 2.x of the RESTful module to support the JSON API spec in Drupal 7. One of the strong points of this specification is resource embedding. Embedding resources—also called resource composition—is a technique where the response to a particular entity also contains the contents of the entities it is related to. Embedding resources for relationships is one of the most effective ways to reduce the number of round trips when dealing with REST servers. It’s based on the idea that the consumer requests the relationships that it wants embedded in the response. This same idea is used in many other specifications like GraphQL.

In JSON API, the consumer can interrogate the data with a single query, tracing relationships between objects and returning the desired data in one trip. Imagine searching for a great grandparent with a genealogy system that would only let you find the name of a single family member at a time versus a system that could return every ancestor going back three generations with a single request. To do so, the client appends an include parameter in the URL. For example: 

?include=relationship1,relationship2.nestedRelationship1

The response will include information about four entities:

  • The entity being requested (the one that contains the relationships).
  • The entity that relationship1 points to. This may be an entity reference field inside of the entity being requested.
  • The entity that relationship2 points to.
  • The entity that nestedRelationship1 points to. This may be an entity reference field inside of the entity that relationship2 is pointing to.

A single request from a consumer can return multiple entities. Note that for the same API different consumers may follow different embedding patterns, depending on the designs being implemented.
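To make that concrete, here is a minimal sketch of the compound document such a request might return. The resource types, IDs, and attribute values are invented for illustration; the overall shape, a data member for the requested entity plus an included array for the related ones, is what the JSON API specification prescribes.

    {
      "data": {
        "type": "articles",
        "id": "abc-123",
        "attributes": { "title": "An example article" },
        "relationships": {
          "relationship1": { "data": { "type": "people", "id": "p-1" } },
          "relationship2": { "data": { "type": "sections", "id": "s-1" } }
        }
      },
      "included": [
        { "type": "people", "id": "p-1", "attributes": { "name": "An author" } },
        {
          "type": "sections",
          "id": "s-1",
          "attributes": { "name": "A section" },
          "relationships": {
            "nestedRelationship1": { "data": { "type": "images", "id": "i-1" } }
          }
        },
        { "type": "images", "id": "i-1", "attributes": { "uri": "/files/header.jpg" } }
      ]
    }

All four entities arrive in one response; the consumer resolves the relationship linkage locally instead of issuing three more requests.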

The landscape for JSON API and Drupal nowadays seems bright. Dries Buytaert, the product lead of the Drupal project, hopes to include the JSON API module in core. Moreover, there seem to be numerous articles about decoupling techniques for Drupal 8.

But does resource embedding offer a performance edge over multiple round-trip requests? Let’s quantitatively compare the two.

Performance comparison

This performance comparison uses a clean Drupal installation with some automatically generated content. Bear in mind, performance analysis is tightly coupled to the content model and merits case-by-case study. Nevertheless, let’s analyze the response times to test our hypothesis: that resource embedding provides a performance improvement over traditional REST approaches.

Our test case will involve the creation of an article detail page that comes with the Standard Drupal profile. I also included the profile image of a commenter to make things a bit more complex.

[Figure 1: the article page, with the levels of relationships needed to compose it]

In Figure 1, I’ve visually indicated the “levels” of relationships between the article itself and each accompanying chunk of content necessary to compose the “page.” Using traditional REST, a particular consumer would need to make the following requests, sketched in code after the list:

  • Request the given article (node/2410).
  • Once the article response comes back it will need to request, in parallel:
    • The author of the article.
      • The profile image of the author of the article.
    • The image for the article.
    • The first tag for the article.
    • The second tag for the article.
    • The first comment on the article.
      • The author of the first comment on the article.
        • The profile image of the author of the first comment of the article.
    • The second comment of the article.
      • The author of the second comment of the article.
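Here is that cascade as a TypeScript sketch. The URL shapes and the href properties on the responses are assumptions made for illustration, but the sequential levels are the point: each one must wait for its parent response before it can begin.

    // Naive REST consumption: every level of relationships costs
    // another round trip that cannot begin until its parent arrives.
    async function get(url: string): Promise<any> {
      const response = await fetch(url, { headers: { Accept: 'application/json' } });
      return response.json();
    }

    async function fetchArticleNaively(baseUrl: string): Promise<void> {
      // Level 1: the article itself.
      const article = await get(`${baseUrl}/node/2410`);

      // Level 2: everything the article references, in parallel.
      const [author, image, tag1, tag2, comment1, comment2] = await Promise.all([
        get(article.uid.href),
        get(article.field_image.href),
        get(article.field_tags[0].href),
        get(article.field_tags[1].href),
        get(article.comments[0].href),
        get(article.comments[1].href),
      ]);

      // Level 3: the article author's profile image and the comment authors.
      const [, commentAuthor1] = await Promise.all([
        get(author.field_image.href),
        get(comment1.uid.href),
        get(comment2.uid.href),
      ]);

      // Level 4: the profile image of the first comment's author.
      await get(commentAuthor1.field_image.href);
    }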

In contrast, using the JSON API module (or any other module with resource composition) will only require a single request with the include query parameter set to

?include=uid,uid.field_image,field_tags,comments,comments.uid,comments.uid.field_image

When the server gets such a request, it will load all the requested entities and return them in one fell swoop. Thus, the front-end framework for your decoupled app gets all of its data requirements in a JSON document in a single request instead of many.
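By contrast, the embedded version is a single call. In the following sketch, the /jsonapi/node/article/{uuid} path follows the JSON API module's default routing as I understand it; verify the URL and the field paths against your own installation.

    // One round trip: the article arrives under "data" and the author,
    // images, tags, and comments arrive under "included".
    async function fetchArticleEmbedded(baseUrl: string, uuid: string): Promise<any> {
      const include = [
        'uid',
        'uid.field_image',
        'field_tags',
        'comments',
        'comments.uid',
        'comments.uid.field_image',
      ].join(',');
      const response = await fetch(
        `${baseUrl}/jsonapi/node/article/${uuid}?include=${include}`,
        // JSON API responses use their own media type.
        { headers: { Accept: 'application/vnd.api+json' } },
      );
      if (!response.ok) {
        throw new Error(`JSON API request failed with status ${response.status}`);
      }
      return response.json();
    }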

For simplicity, I will assume that the overall response time of the REST-based approach equals that of its longest path (four levels deep), and that the parallel requests within each level add nothing to the final response time. A more realistic performance analysis would take into account that four parallel calls do degrade overall performance. Even in this scenario, which handicaps the comparison in favor of REST, resource embedding should have a better response time.

Once the request reaches the server, if the response is ready in the various caching layers, it takes the same effort to retrieve a big JSON document for the JSON API request as to retrieve a small JSON document for one of the REST requests. That indicates that the bulk of the effort lies in bootstrapping Drupal to a point where it can serve a cached response. This is true for both anonymous and authenticated traffic, via the Page Cache and Dynamic Page Cache core modules.

[Chart: response times for the REST and JSON API approaches, without network latency]

The graphic above shows the response time for each approach. Both approaches are cached in the page cache, so there is a constant response time to bootstrap Drupal and grab the cache; in this example, the response time for every request was ~7 ms.

It is obvious that the more complex the interconnections between your data, the greater the advantage of using JSON API's resource embedding. Even though this example is extremely simple, we were able to cut response time by 75%.

If we now introduce latency between the consumer and the server, you can observe that the JSON API response still takes 75% less time. However, the total response time is degraded significantly. In the following chart, I have assumed an optimistic, and constant, transport time of 75 ms.
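Concretely, the arithmetic behind these charts works out as follows, as a back-of-the-envelope sketch that counts the 75 ms transport once per request and keeps the ~7 ms server time from above:

    REST, four sequential levels: 4 × (75 ms + 7 ms) = 328 ms
    JSON API, one request:        1 × (75 ms + 7 ms) = 82 ms

82 ms is a quarter of 328 ms, which is where the 75% reduction comes from; without network latency, the same ratio holds (4 × 7 ms = 28 ms versus 7 ms).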

[Chart: response times for both approaches with 75 ms of transport latency added]

Conclusion

This article describes some sophisticated techniques that can dramatically improve the performance of our apps. There are some other challenges in decoupled systems that I did not mention here. If you are interested in those—and more!—please attend my session at DrupalCon Dublin, Building Advanced Web Services With JSON API, or watch it later. I hope to prove that you can create more engaging products and services by using advanced web services. After all, when you are building digital experiences, one of your primary goals should be to make them enjoyable for the user. Shorter response times correlate with more successful user engagement. And successful engagements make for amazing digital experiences.

Lessons Learned in Scaling Agile

“Be like water, flow.” - Bruce Lee

Being agile (note agile with a lower-case a) is not about sticking to a prescribed set of principles or methodologies. It’s about minimizing the prescribed set so that you focus on adapting to change. One of the biggest lessons to impart here is that you need to be agile in the sense that you can adapt to how your client works, but not give up on the course you feel is more “right” when you sense something needs to change.

Throughout this piece, you’ll note that I advise against certain things, but I also encourage you to understand the “why” behind your client’s motivations. I tend to be judgemental of client processes when they’re not as efficient as the ones I’m used to, but I also understand that not everything can be improved overnight.

I hope to illustrate how even when you come up against adversity and difficult situations it helps first to take a step back and relate to the client, understand the difference in values, and then try to be the mediator between those opposing values.

The Gig

Not so long ago we were part of a project in which the company decided that they wanted their Scrum teams to adhere to the Scaled Agile Framework and began rolling that out across their organization. Everyone got a JIRA license, a Scrum Coach was added to the team, and they split up into cross-functional teams. The teams comprised multiple Product Owners and their direct reports—almost all of whom worked on multiple projects. Lullabot was brought into the mix to help out on just one of these projects: the website.

Before continuing, I’d like to take a moment to reflect on the irony that presents itself when looking at the complexity illustrated in this graphic from the Scaled Agile Framework in the context of the word “agile.” I guarantee this is not what the founders of the Agile Manifesto had in mind.

[Figure: the Scaled Agile Framework overview diagram]

Compare this to the Agile Manifesto:

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

Upon introduction to the client, we found that there was a fairly extensive Backlog for the website project already written up as User Stories. Many of these User Stories were incomplete, and there were no other written requirements or specifications, though the Product Owners possessed this knowledge and could answer questions we had about a given Story.

Soon after we started, our client’s upper management assigned the team a Scrum Coach from an outside consultancy. Part of the coach’s job was to sit in on our meetings, listen to how the projects were being run, and suggest changes to how the meetings were run and the projects planned so that they adhered to the Scrum methodology.

Longer meetings resulted—some six hours long—in which all members of the team were present to groom the Backlog and plan our Sprints. Sprint iterations grew from two-week intervals to three. Time was spent going through the priorities of the Product Owners, the subsequent User Stories that represented those priorities, sizing the User Stories with Story Points, and planning which Sprint they would go into. Discussions consisted of what Tasks were necessary to accomplish a given User Story, and then estimating the Tasks.

Eventually, as we started to write better User Stories and, in turn, gain a better understanding of the work, we were able to get ahead of the meetings. We planned out the tasks ahead of time for the priorities that were set and the meetings became shorter and shorter.

Through all of this, I gained some significant insight into our client’s company and how it works. I also gained a better understanding of how the Scrum methodology can fit into a larger corporation, and where it does not. I learned where it makes sense to diverge from the process and where it does not. Here are some of my key takeaways from this experience.

Multiple Product Owners on a Scrum Team is Hard

We’re used to just one product being worked on by a team, but our client had a different structure where the team was cross-functional and worked on multiple products at once as well as supported them. Since there were multiple products, there were multiple Product Owners. Each product had a team, but each team member was responsible for multiple products, resulting in divided attention and sometimes diverging priorities. After working in this way, it’s become clear that it is ideal, if not always practical, to have a team dedicated to one product. Having multiple Product Owners was the key factor in why our Sprint Planning meetings lasted so long. We weren’t planning out what we were going to work on for just one product, but for many.

Competing Priorities

With multiple Product Owners on a team, there is competition for resources and priorities. Which product comes first? Which task for which product does an individual work on first? A developer now has multiple Product Owners to report to, and that means direction from more than one person, which can be confusing and frustrating. More time is spent discussing these priorities and balancing them than if you just had one product and one Owner.

Balancing the Work

Even though teams can be pretty cross functional, the cross functionality is not always completely balanced. Some people have more knowledge about one system than another and so are more suited to that work. Or perhaps—as in Lullabot’s case—you’re an outside vendor working on only one project. If your overall team already has a target for the maximum number of story points it can take on within a sprint, and most of those points are allotted to a product that is unrelated to your work, you end up with a number of story points that may be less than what you can actually handle for a given sprint on your product. In short, it becomes difficult to make sure everyone has enough work to last them through the end of the sprint. The team spends more time trying to balance the work than doing the work.

In a truly agile scenario, you respond to the needs of the client and to the bandwidth of the developers. If a developer runs out of work, they can pull in more work and begin it—not sit around waiting for the end of the sprint—without repercussions.

Estimating the Work

With multiple areas of expertise and responsibilities, it takes more time to estimate the work because the goal is to have everyone understand the work being estimated. Estimation sessions should be limited to just the people with the appropriate knowledge; as more people become familiar with the work, they can lend their input, but time is wasted having people in a meeting who don’t understand the work. Those people also tend to feel devalued because their time is being wasted.

For instance, in this case, we have multiple products being worked on. In a sprint planning meeting, we have members of each product team. Lullabot is one of them, but we’re only part of one product. So while estimation is happening around issues that are relevant to Lullabot’s work, the people in this meeting who have no work at all on that product are having their time wasted and feeling devalued. By holding sprint planning meetings around just one product at a time, you can have only the people who are relevant to the discussions in the meeting and avoid wasting anyone’s time.

People over Process

The first value in the Agile Manifesto is “Individuals and interactions over processes and tools.” This resonates with Lullabot’s values—especially Be Human. If a process isn’t working, we change it. We are always on the lookout for opportunities to make more valuable use of our time—Lullabot’s brand of continuous process improvement. Our affinity for efficiency extends to everything we do—whether that’s designing our processes, coding a migration to run as fast as possible, or optimizing the page load of a website. How does that relate to being human? It shows how much we care about our people. We value their time. 

Since our first reaction to tedious processes is to cut them up and change them around, it was difficult to keep from trying to change this client’s processes. Considering the client was just starting to understand how Scrum works and to get the various Ceremonies into place, we felt it was more prudent to resist our typical reaction of trying to optimize the processes early on. They were already changing their existing process and had a guide in the form of a Scrum Coach to help them understand and implement this new way of working. Any changes to this process would have been more of a hindrance at this point. We’ve been through this before and could see where things would need to change to become more efficient, but without a solid understanding of the Scrum methodology among the various team members, our words would have fallen on deaf ears, as the team was already focused on the changes at hand.

“Progress is impossible without change, and those who cannot change their minds cannot change anything.” - George Bernard Shaw

Focusing on helping people understand the changes to their process was where we decided it would be best to spend our time. We let them know that certain things might take a bit longer this way, but that we were dedicated to the team learning the new process and adhering to how they wanted to work. In this way, we established a base of trust and knowledge sharing, helping to position ourselves as experts. We kept the process optimizations in our back pocket for when the team was more solid and knowledgeable and could make more educated decisions about which processes should change and which should adhere to the typical Scrum methodology being implemented. With an understanding of what work might suffer because of this restructuring, we were now free to focus on the relationship and to meet the expectations of our Product Owner.

In this way, we didn’t take our typical approach of changing the process to help our people, but instead focused on helping people understand the process. People understanding the process was the way Lullabot could put the people first and optimize the process later.

Create more Value than you Receive

One of the challenges in working with an inefficient process that cannot be changed is that you’re directly violating that value of efficiency that we hold so dear. When a value is violated, even for a short time, feelings of guilt and remorse set in. If this continues unchallenged and unchanged, apathy can result. You begin not to care about your work because you know you cannot make it as valuable as you know it could be. We want to avoid apathy at all costs. So how do we avoid it when there seems to be nothing we can do, or we’re too tired to continue fighting for our values?

A poignant reminder came from one of our Lullabots facing a similar situation. 

“…this is a necessary first step in a process, [and we] need to work toward activities that [the client] can look out the window and appreciate…”

This spoke to the need to jump through these hoops while the company reorganized the way it works, while continuing to strive to find those areas in which Lullabot has become known for improving any project. By sticking to our values and sharing them with others through our work and our daily interactions, we can slowly but surely improve not only the projects we work on, but the relationships and, by extension, the entire team.

Putting such a value on hold even for a short time is difficult, so we’ve tried to find ways to balance it out. Participating in other more lenient projects, passion projects, and taking ‘mental health’ days are just some of the coping mechanisms we employed. But building trust and establishing a solid foundation for your relationship at the beginning is important. Don’t give up on trying to make positive changes to the project, but be patient instead of pushy. Establishing that base will lead to the conversations that need to happen to make your situation better.

Process for Process' Sake

Another value of ours as developers is simplicity. We strive not to create complexity for its own sake. And when a process is complex, it’s a double whammy. To cope with a tedious process, it helps to understand its origins. You might still resent that it’s a complex process, but at least you’ll understand the thinking behind it, if not the need for it. If you still don’t understand, then perhaps there is room for change. But you have to understand it first to change it.

We advise against doing any process just for the sake of the process in any situation. Strive to understand the why behind the processes being put into place, especially if you think they don’t make sense. In our case, there was always a factor that we weren’t seeing. Typically that factor had something to do with a side of the client’s business that we simply did not fully understand at the time. After it was explained, it became apparent why a process was so complex and why the complexity was deemed necessary. That doesn’t mean we don’t still strive to reduce the complexity. In fact, we look for every opportunity to reduce complexity wherever we see it. 

Be Truly Agile

Agility can come in many forms. Agile with a capital “A” has become a “thing” in the business world and in the software world, with connotations that are starting to change the meaning of the word—for some in a negative manner, due to the number of meetings and process overhead that can result. Being truly agile—with a lowercase “a”—is a thing of rarity and beauty. Be “agile” in the sense that you are nimble, quick, and alert. Be agile in the sense that you can respond quickly to change, but also agile enough to recognize when it isn’t yet time to change a process. Seek to understand first, recommend changes second. Learn about the prescribed Agile methodologies that are out there, and be agile enough to adapt them as necessary to fit the needs of your project.

Syntax is Syntax? Lullabot's Non-Drupal Development Work

Did you know that Lullabot does a significant amount of non-Drupal work? Matt and Mike sit down with several Lullabots who are working on non-Drupal projects including Node, Swift, and React. We talk pros and cons of working in various languages, how they compare to our PHP world, and lots more.

Who Sponsors Drupal Development?

(This article, co-authored with Dries Buytaert, the founder and project lead of Drupal, was cross-posted on drupal.org, matthewtift.com, and buytaert.net.)

There exist millions of Open Source projects today, but many of them aren't sustainable. Scaling Open Source projects in a sustainable manner is difficult. A prime example is OpenSSL, which plays a critical role in securing the internet. Despite its importance, the entire OpenSSL development team is relatively small, consisting of 11 people, 10 of whom are volunteers. In 2014, security researchers discovered an important security bug that exposed millions of websites. Like OpenSSL, most Open Source projects fail to scale their resources. Notable exceptions are the Linux kernel, Debian, Apache, Drupal, and WordPress, which have foundations, multiple corporate sponsors and many contributors that help these projects scale.

We (Dries Buytaert is the founder and project lead of Drupal and co-founder and Chief Technology Officer of Acquia and Matthew Tift is a Senior Developer at Lullabot and Drupal 8 configuration system co-maintainer) believe that the Drupal community has a shared responsibility to build Drupal and that those who get more from Drupal should consider giving more. We examined commit data to help understand who develops Drupal, how much of that work is sponsored, and where that sponsorship comes from. We will illustrate that the Drupal community is far ahead in understanding how to sustain and scale the project. We will show that the Drupal project is a healthy project with a diverse community of contributors. Nevertheless, in Drupal's spirit of always striving to do better, we will also highlight areas where our community can and should do better.

Who is working on Drupal?

In the spring of 2015, after proposing ideas about giving credit and discussing various approaches at length, Drupal.org added the ability for people to attribute their work to an organization or customer in the Drupal.org issue queues. Maintainers of Drupal themes and modules can award issues credits to people who help resolve issues with code, comments, design, and more.

[Screenshot: jamadar attributing an issue credit to volunteer and sponsored work on Drupal.org]

Drupal.org's credit system captures all the issue activity on Drupal.org. This is primarily code contributions, but also includes some (but not all) of the work on design, translations, documentation, etc. It is important to note that contributing in the issues on Drupal.org is not the only way to contribute. There are other activities — for instance, sponsoring events, promoting Drupal, providing help and mentoring — important to the long-term health of the Drupal project. These activities are not currently captured by the credit system. Additionally, we acknowledge that parts of Drupal are developed on GitHub and that credits might get lost when those contributions are moved to Drupal.org. For the purposes of this post, however, we looked only at the issue contributions captured by the credit system on Drupal.org.

What we learned is that in the 12-month period from July 1, 2015 to June 30, 2016 there were 32,711 issue credits — both to Drupal core as well as all the contributed themes and modules — attributed to 5,196 different individual contributors and 659 different organizations.

Despite the large number of individual contributors, a relatively small number do the majority of the work. Approximately 51% of the contributors involved got just one credit. The top 30 contributors (the top 0.5%) account for over 21% of the total credits, indicating that these individuals put an incredible amount of time and effort into developing Drupal and its contributed modules:

1. dawehner (560)
2. DamienMcKenna (448)
3. alexpott (409)
4. Berdir (383)
5. Wim Leers (382)
6. jhodgdon (381)
7. joelpittet (294)
8. heykarthikwithu (293)
9. mglaman (292)
10. drunken monkey (248)
11. Sam152 (237)
12. borisson_ (207)
13. benjy (206)
14. edurenye (184)
15. catch (180)
16. slashrsm (179)
17. phenaproxima (177)
18. mbovan (174)
19. tim.plunkett (168)
20. rakesh.gectcr (163)
21. martin107 (163)
22. dsnopek (152)
23. mikeryan (150)
24. jhedstrom (149)
25. xjm (147)
26. hussainweb (147)
27. stefan.r (146)
28. bojanz (145)
29. penyaskito (141)
30. larowlan (135)

How much of the work is sponsored?

As mentioned above, from July 1, 2015 to June 30, 2016, 659 organizations contributed code to Drupal.org. Drupal is used by more than one million websites. The vast majority of the organizations behind these Drupal websites never participate in the development of Drupal; they use the software as it is and do not feel the need to help drive its development.

Technically, Drupal started out as a 100% volunteer-driven project. But nowadays, the data suggests that the majority of the code on Drupal.org is sponsored by organizations in Drupal's ecosystem. For example, of the 32,711 commit credits we studied, 69% of the credited work is “sponsored.”

We then looked at the distribution of how many of the credits are given to volunteers versus given to individuals doing "sponsored work" (i.e. contributing as part of their paid job):

[Chart: distribution of volunteer vs. sponsored credits across contributors]

Looking at the top 100 contributors, for example, 23% of their credits are the result of contributing as volunteers and 56% of their credits are attributed to a corporate sponsor. The remainder, roughly 21% of the credits, are not attributed. Attribution is optional so this means it could either be volunteer-driven, sponsored, or both.

As can be seen in the graph, the ratio of volunteer versus sponsored credits doesn't meaningfully change as we look beyond the top 100 — the only thing that changes is that more credits are not attributed. This might be explained by the fact that occasional contributors might not be aware of or understand the credit system, or could not be bothered with setting up organizational profiles for their employer or customers.

As shown in jamadar's screenshot above, a credit can be marked as volunteer and sponsored at the same time. This could be the case when someone does the minimum required work to satisfy the customer's need, but uses his or her spare time to add extra functionality. We can also look at the number of credits that are exclusively volunteer credits. Of the 7,874 credits that were marked as volunteer, 43% (3,376 credits) had only the volunteer box checked, and 57% (4,498) were also partially sponsored. These 3,376 credits are one of our best metrics for measuring volunteer-only contributions. This suggests that only 10% of the 32,711 commit credits we examined were contributed exclusively by volunteers. That number stands in stark contrast to the 12,888 credits that were “purely sponsored,” which account for 39% of the total credits. In other words, there were roughly four times as many “purely sponsored” credits as there were “purely volunteer” credits.

When we looked at the 5,196 users, rather than credits, we found somewhat different results. A similar percentage of all users had exclusively volunteer credits: 14% (741 users). But the percentage of users with exclusively sponsored credits is only 50% higher: 21% (1077 users). Thus, when we look at the data this way, we find that users who only do sponsored work tend to contribute quite a bit more than users who only do volunteer work.

None of these methodologies are perfect, but they all point to a conclusion that most of the work on Drupal is sponsored. At the same time, the data shows that volunteer contribution remains very important to Drupal. We believe there is a healthy ratio between sponsored and volunteer contributions.

Who is sponsoring the work?

Having established that most of the work on Drupal is sponsored, we know it is important to track and study which organizations contribute to Drupal. Despite 659 different organizations contributing to Drupal, approximately 50% of them got 4 credits or fewer. The top 30 organizations (roughly the top 5%) account for about 29% of the total credits, which suggests that the top 30 companies play a crucial role in the health of the Drupal project. The graph below shows the top 30 organizations and the number of credits they received between July 1, 2015 and June 30, 2016:

[Chart: the top 30 organizations by number of credits, July 1, 2015 to June 30, 2016]

While not immediately obvious from the graph above, different types of companies are active in Drupal's ecosystem, and we propose the following categorization to discuss it.

Traditional Drupal businesses: Small-to-medium-sized professional services companies that make money primarily using Drupal. They typically employ fewer than 100 people, and because they specialize in Drupal, many of these professional services companies contribute frequently and are a huge part of our community. Examples are Lullabot (shown on graph) and Chapter Three (shown on graph).

Digital marketing agencies: Larger full-service agencies that have marketing-led practices using a variety of tools, typically including Drupal, Adobe Experience Manager, Sitecore, WordPress, etc. They are typically larger, with the largest agencies employing thousands of people. Examples are Sapient (shown on graph) and AKQA.

System integrators: Larger companies that specialize in bringing together different technologies into one solution. Examples are Accenture, TATA Consultancy Services, Capgemini, and CI&T.

Technology and infrastructure companies: Examples are Acquia (shown on graph), Lingotek (shown on graph), BlackMesh, RackSpace, Pantheon, and Platform.sh.

End-users: Examples are Pfizer (shown on graph), Examiner.com (shown on graph), and NBC Universal.

Most of the top 30 sponsors are traditional Drupal companies. Sapient (120 credits) is the only digital marketing agency in the top 30. No system integrator shows up in the top 30; the first is CI&T, which ranked 31st with 102 credits. As far as system integrators are concerned, CI&T is a smaller player, with between 1,000 and 5,000 employees. Other system integrators with credits are Capgemini (43 credits), Globant (26 credits), and TATA Consultancy Services (7 credits). We didn't see any code contributions from Accenture, Wipro, or IBM Global Services. We expect these will come, as most of them are building out Drupal practices. For example, we know that IBM Global Services already has over 100 people doing Drupal work.

[Pie chart: share of code contributions by company category]

When we look beyond the top 30 sponsors, we see that roughly 82% of the code contributions on Drupal.org come from the traditional Drupal businesses. About 13% of the contributions come from infrastructure and software companies, though that category is mostly dominated by one company, Acquia. This means that the technology and infrastructure companies, digital marketing agencies, system integrators, and end-users are not meaningfully contributing code to Drupal.org today. In an ideal world, the pie chart above would be sliced into equal-sized parts.

How can we explain that imbalance? We believe the two biggest reasons are: (1) Drupal's strategic importance and (2) the level of maturity with Drupal and Open Source. Several of the traditional Drupal agencies have been involved with Drupal for 10 years and almost entirely depend on Drupal. Given both their expertise and their dependence on Drupal, they are the most likely to look after Drupal's development and well-being. These organizations are typically recognized as Drupal experts and sought out by organizations that want to build a Drupal website. Contrast this with most of the digital marketing agencies and system integrators, which have the size to work with a diversified portfolio of content management platforms and are just getting started with Drupal and Open Source. They deliver digital marketing solutions and aren't necessarily sought out for their Drupal expertise. As their Drupal practices grow in size and importance, this could change, and when it does, we expect them to contribute more. Right now many of the digital marketing agencies and system integrators have little or no experience with Open Source, so it is important that we motivate them to contribute and then teach them how to contribute.

There are two main business reasons for organizations to contribute: (1) it improves their ability to sell and win deals and (2) it improves their ability to hire. Companies that contribute to Drupal tend to promote their contributions in RFPs and sales pitches to win more deals. Contributing to Drupal also results in being recognized as a great place to work for Drupal experts.

We also should note that many organizations in the Drupal community contribute for reasons that would not seem to be explicitly economically motivated. More than 100 credits were sponsored by colleges or universities, such as the University of Waterloo (45 credits). More than 50 credits came from community groups, such as the Drupal Bangalore Community and the Drupal Ukraine Community. Other nonprofits and government organizations that appeared in our data include the Drupal Association (166 credits), the National Virtual Library of India (25 credits), the Center for Research Libraries (20 credits), and the Welsh Government (9 credits).

Infrastructure and software companies

Infrastructure and software companies play a different role in our community. These companies are less reliant on professional services (building Drupal websites) and primarily make money from selling subscription based products.

Acquia, Pantheon and Platform.sh are venture-backed Platform-as-a-Service companies born out of the Drupal community. Rackspace and AWS are public companies hosting thousands of Drupal sites each. Lingotek offers cloud-based translation management software for Drupal.

[Chart: issue credits from technology and infrastructure companies]

The graph above suggests that Pantheon and Platform.sh have barely contributed code on Drupal.org during the past year. (Platform.sh only became an independent company 6 months ago after they split off from CommerceGuys.) The chart also does not reflect sponsored code contributions on GitHub (such as drush), Drupal event sponsorship, and the wide variety of value that these companies add to Drupal and other Open Source communities.

Consequently, these data show that the Drupal community needs to do a better job of enticing infrastructure and software companies to contribute code to Drupal.org. The Drupal community has a long tradition of encouraging organizations to share code on Drupal.org rather than keep it behind firewalls. While the spirit of the Drupal project cannot be reduced to any single ideology — not every organization can or will share their code — we would like to see organizations continue to prioritize collaboration over individual ownership. Our aim is not to criticize those who do not contribute, but rather to help foster an environment worthy of contribution.

End users

We saw two end-users in the top 30 corporate sponsors: Pfizer (158 credits) and Examiner.com (132 credits). Other notable end-users that are actively giving back are Workday (52 credits), NBC Universal (40 credits), the University of Waterloo (45 credits) and CARD.com (33 credits). The end users that tend to contribute to Drupal use Drupal for a key part of their business and often have an internal team of Drupal developers.

Given that there are hundreds of thousands of Drupal end-users, we would like to see more end-users in the top 30 sponsors. We recognize that a lot of digital agencies don't want, or are not legally allowed, to attribute their customers. We hope that will change as Open Source adoption continues to grow.

Given the vast number of Drupal users, we believe encouraging end-users to contribute could be a big opportunity. Being credited on Drupal.org gives them visibility in the Drupal community and recognizes them as a great place for Open Source developers to work.

The uneasy alliance with corporate contributions

As mentioned above, when community-driven Open Source projects grow, the need for organizations to help drive their development grows with them. This almost always creates an uneasy alliance between volunteers and corporations.

This theory played out in the Linux community well before it played out in the Drupal community. The Linux project is 25 years old and has seen a steady increase in the number of corporate contributors for roughly 20 years. While Linux companies like Red Hat and SUSE rank highly on the contribution list, so do non-Linux-centric companies such as Samsung, Intel, Oracle, and Google. The major theme in this story is that all of these corporate contributors were using Linux as an integral part of their business.

The 659 organizations that contribute to Drupal (which include corporations) number roughly three times the organizations that sponsor development of the Linux kernel, “one of the largest cooperative software projects ever attempted.” In fairness, Linux has a different ecosystem than Drupal. The Linux business ecosystem has various large organizations (Red Hat, Google, Intel, IBM, and SUSE) for whom Linux is very strategic. As a result, many of them employ dozens of full-time Linux contributors and invest millions of dollars in Linux each year.

In the Drupal community, Acquia has had people dedicated full-time to Drupal since nine years ago, when it hired Gábor Hojtsy to contribute to Drupal core. Today, Acquia has about 10 developers contributing to Drupal full-time. They work on core, contributed modules, security, user experience, performance, best practices, and more. Their work has benefited untold numbers of people around the world, most of whom are not Acquia customers.

In response to Acquia’s high level of participation in the Drupal project, as well as to the number of Acquia employees that hold leadership positions, some members of the Drupal community have suggested that Acquia wields its influence and power to control the future of Drupal for its own commercial benefit. But neither of us believe that Acquia should contribute less. Instead, we would like to see more companies provide more leadership to Drupal and meaningfully contribute on Drupal.org.

Who is sponsoring the top 30 contributors?

For each contributor below: rank, username, total issue credits, the share of credits marked as volunteer, sponsored, and not specified, and the sponsoring organizations (credits). Because a credit can be marked as both volunteer and sponsored, the percentages can add up to more than 100%.

1. dawehner: 560 credits; volunteer 84.1%, sponsored 77.7%, not specified 9.5%. Sponsors: Drupal Association (182), Chapter Three (179), Tag1 Consulting (160), Cando (6), Acquia (4), Comm-press (1)
2. DamienMcKenna: 448 credits; volunteer 6.9%, sponsored 76.3%, not specified 19.4%. Sponsors: Mediacurrent (342)
3. alexpott: 409 credits; volunteer 0.2%, sponsored 97.8%, not specified 2.2%. Sponsors: Chapter Three (400)
4. Berdir: 383 credits; volunteer 0.0%, sponsored 95.3%, not specified 4.7%. Sponsors: MD Systems (365), Acquia (9)
5. Wim Leers: 382 credits; volunteer 31.7%, sponsored 98.2%, not specified 1.8%. Sponsors: Acquia (375)
6. jhodgdon: 381 credits; volunteer 5.2%, sponsored 3.4%, not specified 91.3%. Sponsors: Drupal Association (13), Poplar ProductivityWare (13)
7. joelpittet: 294 credits; volunteer 23.8%, sponsored 1.4%, not specified 76.2%. Sponsors: Drupal Association (4)
8. heykarthikwithu: 293 credits; volunteer 99.3%, sponsored 100.0%, not specified 0.0%. Sponsors: Valuebound (293), Drupal Bangalore Community (3)
9. mglaman: 292 credits; volunteer 9.6%, sponsored 96.9%, not specified 0.7%. Sponsors: Commerce Guys (257), Bluehorn Digital (14), Gaggle.net, Inc. (12), LivePerson, Inc (11), Bluespark (5), DPCI (3), Thinkbean, LLC (3), Digital Bridge Solutions (2), Matsmart (1)
10. drunken monkey: 248 credits; volunteer 75.4%, sponsored 55.6%, not specified 2.0%. Sponsors: Acquia (72), StudentFirst (44), epiqo (12), Vizala (9), Sunlime IT Services GmbH (1)
11. Sam152: 237 credits; volunteer 75.9%, sponsored 89.5%, not specified 10.1%. Sponsors: PreviousNext (210), Code Drop (2)
12. borisson_: 207 credits; volunteer 62.8%, sponsored 36.2%, not specified 15.9%. Sponsors: Acquia (67), Intracto digital agency (8)
13. benjy: 206 credits; volunteer 0.0%, sponsored 98.1%, not specified 1.9%. Sponsors: PreviousNext (168), Code Drop (34)
14. edurenye: 184 credits; volunteer 0.0%, sponsored 100.0%, not specified 0.0%. Sponsors: MD Systems (184)
15. catch: 180 credits; volunteer 3.3%, sponsored 44.4%, not specified 54.4%. Sponsors: Third and Grove (44), Tag1 Consulting (36), Drupal Association (4)
16. slashrsm: 179 credits; volunteer 12.8%, sponsored 96.6%, not specified 2.8%. Sponsors: Examiner.com (89), MD Systems (84), Acquia (18), Studio Matris (1)
17. phenaproxima: 177 credits; volunteer 0.0%, sponsored 94.4%, not specified 5.6%. Sponsors: Acquia (167)
18. mbovan: 174 credits; volunteer 7.5%, sponsored 100.0%, not specified 0.0%. Sponsors: MD Systems (118), ACTO Team (43), Google Summer of Code (13)
19. tim.plunkett: 168 credits; volunteer 14.3%, sponsored 89.9%, not specified 10.1%. Sponsors: Acquia (151)
20. rakesh.gectcr: 163 credits; volunteer 100.0%, sponsored 100.0%, not specified 0.0%. Sponsors: Valuebound (138), National Virtual Library of India (NVLI) (25)
21. martin107: 163 credits; volunteer 4.9%, sponsored 0.0%, not specified 95.1%. Sponsors: none listed
22. dsnopek: 152 credits; volunteer 0.7%, sponsored 0.0%, not specified 99.3%. Sponsors: none listed
23. mikeryan: 150 credits; volunteer 0.0%, sponsored 89.3%, not specified 10.7%. Sponsors: Acquia (112), Virtuoso Performance (22), Drupalize.Me (4), North Studio (4)
24. jhedstrom: 149 credits; volunteer 0.0%, sponsored 83.2%, not specified 16.8%. Sponsors: Phase2 (124), Workday, Inc. (36), Memorial Sloan Kettering Cancer Center (4)
25. xjm: 147 credits; volunteer 0.0%, sponsored 81.0%, not specified 19.0%. Sponsors: Acquia (119)
26. hussainweb: 147 credits; volunteer 2.0%, sponsored 98.6%, not specified 1.4%. Sponsors: Axelerant (145)
27. stefan.r: 146 credits; volunteer 0.7%, sponsored 0.7%, not specified 98.6%. Sponsors: Drupal Association (1)
28. bojanz: 145 credits; volunteer 2.1%, sponsored 83.4%, not specified 15.2%. Sponsors: Commerce Guys (121), Bluespark (2)
29. penyaskito: 141 credits; volunteer 6.4%, sponsored 95.0%, not specified 3.5%. Sponsors: Lingotek (129), Cocomore AG (5)
30. larowlan: 135 credits; volunteer 34.1%, sponsored 63.0%, not specified 16.3%. Sponsors: PreviousNext (85), Department of Justice & Regulation, Victoria (14), amaysim Australia Ltd. (1), University of Adelaide (1)

We observe that the top 30 contributors are sponsored by 45 organizations. This kind of diversity is aligned with our desire not to see Drupal controlled by a single organization. The top 30 contributors and the 45 organizations are from many different parts of the world and work with customers large and small. We could still benefit from more diversity, though. The top 30 lacks digital marketing agencies, large system integrators, and end-users — all of whom could contribute meaningfully to making Drupal better for themselves and others.

Evolving the credit system

The credit system gives us quantifiable data about where our community's contributions come from, but that data is not perfect. Here are a few suggested improvements:

  1. We need to find ways to recognize non-code contributions as well as code contributions outside of Drupal.org (i.e. on GitHub). Lots of people and organizations spend hundreds of hours putting together local events, writing documentation, translating Drupal, mentoring new contributors, and more — and none of that gets captured by the credit system.
  2. We'd benefit by finding a way to account for the complexity and quality of contributions; one person might have worked several weeks for just one credit, while another person might have gotten a credit for 30 minutes of work. We could, for example, consider the issue credit data in conjunction with Git commit data regarding insertions, deletions, and files changed.
  3. We could try to leverage the credit system to encourage more companies, especially those that do not contribute today, to participate in large-scale initiatives. Dries presented some ideas two years ago in his DrupalCon Amsterdam keynote and Matthew has suggested other ideas, but we are open to more suggestions on how we might bring more contributors into the fold using the credit system.
  4. We could segment out organization profiles between end users and different kinds of service providers. Doing so would make it easier to see who the top contributors are in each segment and perhaps foster more healthy competition among peers. In turn, the community could learn about the peculiar motivations within each segment.

Like Drupal the software, the credit system on Drupal.org is a tool that can evolve, but that ultimately will only be useful when the community uses it, understands its shortcomings, and suggests constructive improvements. In highlighting the organizations that sponsor work on Drupal.org, we hope to provoke responses that help evolve the credit system into something that incentivizes businesses to sponsor more work and that allows more people the opportunity to participate in our community, learn from others, teach newcomers, and make positive contributions. We view Drupal as a productive force for change and we wish to use the credit system to highlight (at least some of) the work of our diverse community of volunteers, companies, nonprofits, governments, schools, universities, individuals, and other groups.

Conclusion

Our data shows that Drupal is a vibrant and diverse community, with thousands of contributors, that is constantly evolving and improving the software. While here we have examined issue credits mostly through the lens of sponsorship, in future analyses we plan to consider the same issue credits in conjunction with other publicly-disclosed Drupal user data, such as gender identification, geography, seasonal participation, mentorship, and event attendance.

Our analysis of the Drupal.org credit data concludes that most of the contributions to Drupal are sponsored. At the same time, the data shows that volunteer contribution remains very important to Drupal.

As a community, we need to understand that a healthy Open Source ecosystem is a diverse ecosystem that includes more than traditional Drupal agencies. The traditional Drupal agencies and Acquia contribute the most but we don't see a lot of contribution from the larger digital marketing agencies, system integrators, technology companies, or end-users of Drupal — we believe that might come as these organizations build out their Drupal practices and Drupal becomes more strategic for them.

To grow and sustain Drupal, we should support those that contribute to Drupal and find ways to get those that are not contributing involved in our community. We invite you to help us figure out how we can continue to strengthen our ecosystem.

We hope to repeat this work in 1 or 2 years' time so we can track our evolution. Special thanks to Tim Lehnen (Drupal Association) for providing us the credit system data and supporting us during our research.

Announcing the Yonder Podcast

As you may already know, Lullabot is a completely distributed company. We have no central office, and our employees are spread out across six different countries. For the past three years, Lullabot has organized a conference for leaders of companies like ours. The event is called Yonder and over the years, people from companies such as Automattic, GitHub, Upworthy, The World Adult Kickball Association, and many smaller digital agencies have gotten together to share ideas and address common problems.

Last month, we revamped the yonder.io website and we launched the Yonder Podcast to bring these conversations to a larger audience. Every two weeks, I’ll interview another interesting person who has been thinking about the world of remote work.

While remote/distributed work is often held up as “the future of work,” I’ve talked to many leaders of brick-and-mortar businesses who are fearful about the idea. They are worried that you cannot build a vibrant company culture or that productivity will suffer. They worry that employees will take advantage or that their clients/customers will not take them seriously. Most simply worry that it will be lonely, and they will feel disconnected from their coworkers.

The Yonder Podcast doesn’t attempt to debunk these ideas so much as share the successes that distributed companies have had in these areas and others. We’ll talk about the advantages and also the difficulties of remote work and distributed teams. It’s quite something to hear from a variety of businesses working this way, and see the patterns that emerge. To me, it feels like we’re on the verge of an evolutionary step in the way that people work.

If you’re a remote worker, a business owner or manager, or you’re just curious what it might look like to work from home, I invite you to subscribe to the Yonder Podcast. You can find us on iTunes, Google Play, Stitcher, and most other podcast platforms. If you subscribe to the Yonder mailing list you’ll receive updates whenever new podcasts or articles get posted to the site.

Thanks for listening!

Monitoring Web Accessibility Compliance With Pa11y Dashboard

Auditing websites for accessibility is imperative. Ignore accessibility and you unintentionally exclude 56.7 million people (source: US Census Bureau, 2012) from having access to your website. But before you can fix accessibility problems, you need to audit your site and find out what problems exist. Manually auditing every single page on a medium-to-large site can be time-consuming, if not impossible. When a large digital education service provider asked us to audit a sampling of 800 landing pages and 30 micro sites for accessibility (a total of about 4,000 pages)...in one month...we needed to find a better way. Pa11y Dashboard to the rescue. Pa11y Dashboard provides an open source interface for keeping track of automated accessibility tests over time. It served as a handy tool that we could reference in our extensive written report. It also allowed us to quickly see common patterns across various sites and provide custom solutions.

The Dashboard

Out of the box, the front page of Pa11y Dashboard lists a filterable overview of all tasks. By default, automated audits are performed daily at 12:30 a.m., but the timing can be reconfigured in the Cron settings. Through the dashboard, you can also create, edit, and delete your tasks, and manually trigger an audit. Check out the Pa11y Dashboard demo to see it in action.

The Task Pages

Individual task pages detail the errors, notices, and warnings, giving a short description of each along with their location in the HTML. Export these reports in CSV or JSON format, as you need. Also, a graph illustrates the delta in errors, warnings, and notices over time. A task holds a user readable name, a URL to track, the accessibility standard to perform tests against, timeout settings, HTTP Basic Authentication credentials, and the optional ability to ignore certain rules.

Errors, warnings, and notices

Pa11y Dashboard outputs three types of messages. The system reports clear-cut guideline violations as "errors." It issues "warnings" for items it identifies that require manual, human verification to confirm a guideline violation. And "notices" are items the system cannot detect on its own, but that still require a keen human eye (or ear) to ensure the web page meets a11y requirements. Warnings and notices are just as important as errors to verify and resolve in order to achieve standards compliance.

What's under the hood?
  • Pa11y Dashboard: A web UI to view and manage scheduled audit tasks, built with Node.js, Express for routing, Handlebars for templating, and LESS for CSS preprocessing.
  • Pa11y Webservice: The Pa11y JSON web service that runs tests against multiple scheduled URLs with PhantomJS, and lets you store and query results from a MongoDB database.
  • Pa11y: A Node.js module that provides a command-line and JavaScript interface for HTML CodeSniffer tests. The module relies on PhantomJS. Pa11y was originally created by a group of front-end developers at Nature Publishing Group (later Springer Nature), and continues today as its own organization on GitHub.
  • HTML CodeSniffer: A client-side JavaScript application that performs accessibility tests against the DOM. It contains rules and tests for Section 508 and all three levels of the WCAG 2.0 web accessibility standards. It also includes a modal UI to run these tests in the browser via a bookmarklet.
Customizations

Because Pa11y Dashboard is fully free and open source, we could easily customize it to serve our client's specific needs.

We created a page listing all the errors, warnings, and notices found across the audit. For every issue, the page shows how often it occurs and on how many pages, and it lists all the pages where the issue appears. This gives our client a bird’s-eye view of all the audited sites and quick insight into the most common issues. We added permalinks so we could reference individual issues easily from our written audit.


Out of the box, Pa11y Dashboard does not accommodate crawling entire sites. We wrote a custom script to collect and import all URLs from the micro sites, and adjusted the UI to visually bundle micro site URLs together.

Importing tasks in batches

As previously mentioned, adding new tasks can be done through the UI. If you need to add a lot of tasks, you can leverage the Pa11y Webservice Client library that is included in the dashboard. This way, you can import your tasks in batches with a simple script.

1. Make sure the dashboard is up and running. Detailed instructions on how to set it up can be found on the Pa11y Dashboard GitHub page.

2. Next, create a CSV file and place it in the /data folder. We'll name it pa11y-tasks.csv. It lists all the values we need to create a new task. Optionally, there are more values that can be added to tasks: ignore, timeout, wait, username, and password (for HTTP Basic Authentication).

"name","url","standard" "Hillary for America","https://www.hillaryclinton.com/","WCAG2AA" "Donald J Trump for President","https://www.donaldjtrump.com/","WCAG2AA" "Mike Herchel for President","http://imwithherchel.com/","WCAG2AAA"

3. Install the CSV parser module.

npm install csv-parser --save

4. Add the following JavaScript file. We named it pa11y-import.js. In it we read the CSV file and import the tasks. It leverages the config file from the dashboard to create a client endpoint to store our data.

var fs = require('fs');
var csv = require('csv-parser');
var createClient = require('pa11y-webservice-client-node');
var config = require('./config');

var client = createClient('http://' + config.webservice.host + ':' + config.webservice.port + '/');

// Read the CSV file.
fs.createReadStream('./data/pa11y-tasks.csv')
  .pipe(csv())
  .on('data', function (data) {
    // Create a task for each row.
    client.tasks.create({
      name: data.name,
      url: data.url,
      standard: data.standard
    }, function (error, task) {
      // Error and success handling.
      if (error) {
        console.error('Error:', error);
      }
      if (task) {
        console.log('Imported:', task.name);
      }
    });
  })
  .on('end', function () {
    console.log('All tasks imported!');
  });

5. Finally, run the import script once. Optionally, replace the Node environment value if you are using development or test.

NODE_ENV=production node pa11y-import.js

That's it! All your tasks should now be imported.

The future of Pa11y Dashboard

While Pa11y Dashboard in its current form is a very helpful tool, plans are being made for a successor project, named Sidekick. Among proposed solutions are full site audits, increased focus on Continuous Integration and deployment, and showing a “diff” that compares the current results to the last run. If you are interested in learning more about where the project is headed or getting involved, I suggest checking out the Sidekick proposal page.

Internal API Design for Distributed Teams

My first real project with multiple distributed teams and a timeline measured in weeks (not months!) was the launch of The Tonight Show with Jimmy Fallon. With six weeks to go from nothing to a CMS, a website, and multiple apps, we were forced to figure out the best way to keep all the teams moving at all times. I wrote then about Legally Binding your Web APIs, but I think it’s time to clarify and update some of its points for maximising your team’s productivity.

Introduce the teams

Before features are defined, APIs are designed, and work is started, it’s really important to have some sort of team introduction. This doesn’t need to be a long meeting. What’s important is to get at least one technical person who will need to create or use APIs from each distinct team introduced. At a high level, identify the API providers and consumers from a business perspective. Determine who owns the APIs and their specifications. Make sure each technical person understands the goals of the project. Then, unless you are one of those technical people, get out of the way! Leave the meeting, drop the call, and let those people get to work.

Lay the API Foundation

Every API makes assumptions - about the basic structure of data, about authentication, and about maintenance and updates. Before determining your core objects, figure these things out. Discuss if you’re implementing an RPC API or a REST API. Decide if you’re supporting JSON, XML, or both. For REST APIs, find a suitable specification that supports the Hypermedia Factors you need, like JSON API or HAL, and use it to structure your data and API to simplify client and server implementations. Decide on authentication protocols, like OAuth or HTTP basic. Decide on a versioning strategy such as versioned URLs or custom MIME types. Steal from existing internal implementations where possible. Finally, start writing code.

Not so fast, though: the problem with beginning implementations right away is that there is no source of truth for the API. It’s easy for teams to start making bad assumptions, leading to rework and putting your launch at risk. Instead, use a tool built for managing API documentation like Apiary’s Blueprint or Swagger. Ideally, the tools will let you easily generate API stubs too. Doing so will unblock client implementations, as they won’t be dependent on the production API to start their work. Best yet, when there is disagreement between implementations, the docs become the arbiter of what is right. Sure, documentation can be wrong or miss key details, but it’s much easier to understand a spec than someone else’s code. And since the first API consumer will be an involved participant in writing the docs, your API will have docs ready to go when that second consumer comes along.
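To make that concrete, here is a rough sketch of what a resource definition might look like in API Blueprint. The Episodes resource and its fields are invented for illustration; they are not from the Fallon project:

FORMAT: 1A

# Episodes API

## Episode [/episodes/{id}]

+ Parameters
    + id: 42 (number, required) - Unique episode ID

### Retrieve an Episode [GET]

+ Response 200 (application/json)

        {
            "id": 42,
            "title": "Episode 42"
        }

A stub generated from a definition like this lets consumer teams code against realistic responses long before the production API exists.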

Now, your teams can start writing code, pulling in the core libraries for authentication and data structures.

Design API Objects

Now that you have the basic API details figured out, you can start to design objects (or in REST parlance ‘resources’). At this stage, you are modelling data, and not actions. Assume every object has a GET call, even if it’s not immediately clear that it’s needed. Letting your clients inspect the state of objects will help them to debug your API and their code without relying on expensive calls and meetings that suck up time from getting things done. Design the object schema to be as “obvious” as possible—don’t use ‘image’ in one object attribute and ‘picture’ in another, unless they mean different things. Use REST best practices such as using URLs as canonical identifiers of objects. A critical step many teams miss is determining who manages unique IDs. With the inherent concurrency and assumed unreliability of HTTP requests, identifying which systems manage which IDs is something not to forget.
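As a sketch of these guidelines, an object’s representation might look like the following (the field names and URLs are hypothetical):

{
  "url": "https://api.example.com/shows/1/episodes/42",
  "id": 42,
  "title": "Episode 42",
  "image": "https://cdn.example.com/episodes/42/poster.jpg"
}

Here the URL doubles as the canonical identifier, "image" is named consistently across objects, and the spec should state which system mints "id".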

Again, at this point you should have new documentation and stubs, but no code. Let your documentation continue to be the source of truth.

Design API Operations

Using your work to this point, API operations should start to have intuitive implementations, especially if you are using REST best practices. Map your various CRUD operations to GET / POST / PATCH / PUT / DELETE as required. Find good HTTP status codes to communicate the results of REST calls (my favourite new discovery is 409 Conflict).
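As a hypothetical exchange, a client that tries to update a resource from a stale copy might see something like:

PATCH /episodes/42 HTTP/1.1
Host: api.example.com
Content-Type: application/json

{"title": "Monologue Highlights"}

HTTP/1.1 409 Conflict
Content-Type: application/json

{"error": "Episode 42 was changed by another client. Re-fetch it and retry."}

A response like this tells the consumer exactly how to recover, without a call or a meeting.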

One common mistake services teams make at this point is to create a “wide-open POST endpoint” that accepts arbitrary data, and then to reverse-engineer the data into an implementation. Remember, our goal is to keep any sort of dependencies between the teams to a minimum. This sort of development methodology maximizes the dependencies between the teams, and requires implementations that might be totally wrong. It makes what should be an explicit API contract implicit. When the API calls break, there’s no documentation to fall back on, forcing calls and meetings that block all the teams.

Implement All The Things!

All the work so far has been to get to this point of productivity. By now, there should be no hard dependencies between the teams. Stubs can be replaced by implementations as development continues. New APIs can be added quickly by following the existing patterns. API consumers can even make assumptions about how an API would likely be designed before they contact the API team. Questions and iterations can be focused on tickets and documentation comments, helping to preserve the feasibility of the launch date. Stakeholders, project managers, developers, all win.

Header image from The simplest foundation, a padstone

Making Web Accessibility Great Again: Auditing the US Presidential Candidates Websites for Accessibility

Imagine that you arrive at a website, but you cannot see the screen. How do you know what’s there? How do you navigate? This is normal for many people, and the accessibility of your site will make or break their experience. Accessibility is about including everyone. People with physical and cognitive disabilities have specific challenges online—and making your site accessible removes those barriers and opens the door to more users.

Severely disabled Americans constitute a population of 38.3 million people, and make up a huge swath of voters (see the #CripTheVote movement on Twitter). Some notable U.S. presidential elections have been decided by much less, and because of this, we’re auditing the US presidential election candidates’ websites.

During this audit, we’ll see what the candidates’ websites are doing right and wrong, and where the low-hanging fruit lies. This article won’t be a full top-to-bottom audit, but we will show some of the important things to look for and explain why they’re important.

Our Methods

Automated Testing

The first step in our a11y audit is a quick automated check. If you’re new to accessibility, the WAVE tool by WebAIM is a great place to start. It checks for standard accessibility features and errors in alt attributes, contrast, document outline, and form labels. For each feature or error it finds, it provides an info icon you can click to learn what the issue is, why it’s important, and how to do it right. WAVE is free, and it highlights both the negative (errors, alerts, contrast errors) and the positive (features, structural elements, ARIA attributes).

Keyboard Testing

As great as WAVE is, an automated tool is never as good as a live person. This is because some accessibility requirements need human logic to apply them properly. For the next part, we’re going to navigate around each website using only the keyboard.

This is done by using the Tab key to move to the next element, and Shift+Tab to move backwards. The spacebar (or Return) is used to click or submit a form element. If everything is done right, a person will be able to navigate through your website without falling into tab rabbit-holes (tabbit-holes?). We should be able to tab through the whole page in a logical order, without getting stuck or finding things we can’t access or interact with.

Beyond that, we need to be able to see where the focus lies as we tab across the page. Just as interactive elements give a visual cue on hover, we should get an indication when we land on an interactive element while tabbing, too. That state is referred to as ‘having focus’. You can extend your hover state to focus, or you can make a whole new interaction for focus. It’s up to you!

.link--cta-button:hover,
.link--cta-button:focus {
  /* The :focus pseudo-class for a11y */
  background: #2284c0;
}

Screen Reader Audit

Screen readers are used by visually impaired and blind people to navigate websites. For this purpose we’ll use VoiceOver, which is the default pre-installed screen reader for OS X. We’re looking for things that read oddly (like an acronym), things that don’t get read that should get read (and vice-versa), and making sure all of the information is available.

Let’s start with Donald Trump’s website

The first thing that we did while looking at Donald Trump’s website was audit it using the WAVE tool. Here’s what we found:

  • 8 Errors
  • 14 Alerts
  • 15 Features
  • 35 Structural Elements
  • 5 HTML5 and ARIA
  • 5 Contrast Errors (2 were false flags)
The Good

Bold Colors for a Bold Candidate

The color scheme is very accessible. On the front page, there were only two places where the contrast wasn’t sufficient. Outside of that, his color scheme provides text that stands out well from the background and is easy to read for low-vision users.

The Bad

Lacking Focus

Remember how we talked about focus states? This site has almost none. Tabbing through the page is confusing because there are no visual indications of where you are.

This is especially egregious because the browser automatically applies focus states to focusable elements for you. For there to be no focus states at all, the developer has to actively go out of their way to break them by applying outline: 0; to the element’s focus state. That’s okay if you’re replacing the focus state with something more decorative, but taking it off and not replacing it is a big accessibility no-no.
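If you do replace the outline, substitute something at least as visible. A minimal sketch, with a hypothetical class name:

.nav__link:focus {
  /* Remove the default outline only because we replace it below. */
  outline: 0;
  box-shadow: 0 0 0 3px #ffd700;
}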

Skipped Skip Link

When tabbing through The Donald’s website, the first thing we notice is the absence of a skip link. Without a skip link, a keyboard user is forced to tab through each link in the navigation when arriving at each page before they can access the rest of the content. This repetitive task can become aggravating quickly, especially on sites with a lot of navigation links.
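A skip link can be as simple as an anchor placed first in the markup, such as <a class="skip-link" href="#main-content">Skip to main content</a> (the names here are hypothetical), kept visually hidden until a keyboard user tabs to it:

.skip-link {
  position: absolute;
  left: -9999px;
}

.skip-link:focus {
  /* Reveal the link when it receives keyboard focus. */
  left: 0;
}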

Unclear Link Text

Links should always have text that clearly explains where the user is going. The link’s text is what a person using a screen reader will hear, so text like ‘Read More’ or text and icons that require visual context to understand their destination aren’t ideal.

In this area, the link text that goes out to the linked Twitter posts read ‘12h’ and ‘13h’. Without the visual context of a Twitter icon (that’s a background image, so there’s no alternative text to provide that), the user probably has no idea what ‘12h’ is referring to or where that link will lead.

The Ugly

Navigation Nightmares

The most important part of any website, in terms of access, is the navigation. An inaccessible navigation menu can block access to large portions of the website. Unfortunately, the navigation system of Trump’s website does just that, and prevents users with disabilities from directly accessing the sub-navigation items under the issues and media sections.

A more accessible way to do this is to use a click dropdown instead of a :hover. If that doesn’t work for the design, make sure that the :hover state of the menu applies to :focus as well, so that the menu will open the nested links when the parent menu item is tabbed to.
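Here is a rough sketch of the second option, with hypothetical class names. Note that :focus-within only works in newer browsers, so older ones need a JavaScript fallback:

/* Open the submenu while the parent link has keyboard focus... */
.menu__link:focus + .menu__submenu,
/* ...and keep it open while focus is anywhere inside the item. */
.menu__item:focus-within > .menu__submenu {
  display: block;
}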

Disorganized Structure

Structural elements (h1, h2, h3, etc.) are very helpful when used properly. In this case, they’re definitely not: heading levels aren’t sequential, and nested information isn’t always relevant to its parent.

Audit of Hillary Clinton’s website

The Good

Overall, Clinton’s website is better than most when it comes to accessibility. It’s clear that her development team made it a purposeful consideration during the development process. While it’s not perfect, there was a lot of good done here. Let’s explore some examples of things done right.

Keyboard Accessibility

The keyboard accessibility on this site is very good. We found that we could access the elements and navigate to other pages easily without a mouse. It was easy to open and shut the drop-down ‘More’ area in the navigation, and access its nested links, which is a good example of how to implement what we were talking about when we covered the shortfalls of the navigation system on Trump’s website.

Skip Link

Hillary Clinton’s website includes a proper skip link, which allows users to skip the navigation and go directly to the content.

Great Focus States

The other thing we found when checking the keyboard accessibility was that everything has a focus state that makes it visually obvious where you are on the page. The light dotted border focus state is a bit subtle for low-vision users, but the fact that the focus state of the elements was styled independently from the hover state shows that the developer was aware of the need for focus indicators and made a conscious effort to implement them.

Translation

We usually think of accessibility in terms of people with disabilities because they often benefit from it the most, but accessibility is really just about including as many people as possible. A nice touch we found on Clinton’s site was a button at the top to translate the site into Spanish. With 41 million native Spanish speakers in the US, providing the option to experience the content in the user’s first language is a great accessibility move.

Video Captioning

Deaf people rely on captions to get the dialogue from videos, since it’s very difficult to lip-read in film. The videos on Hillary’s site are furnished with open captions, which means that they’re always on. Open captions are great for people with disabilities, but they’re also a smart move to capture your non-disabled audience as well. Often autoplay videos won’t play any sound unless they’re interacted with, but providing open captions on the video gives you another chance to capture the audience’s interest by displaying the words on the screen.

The Bad

No Transcripts for Video

While it was great that the videos were captioned, we couldn’t find a transcript provided. Many people erroneously believe that you only need one or the other, but captions and transcripts actually serve different purposes, so it’s ideal to provide both. Captions are great for the Deaf, who want to read the words in real-time with the video. Transcripts are more useful for the blind and the Deaf-blind, who benefit from a written summary of what’s visually happening onscreen in each scene before the written dialogue begins. A braille terminal, used by the Deaf-blind, can’t convert open captions inlaid into the video’s frames into braille for its users, so these users won’t benefit from that.

Low Contrast

Contrast is important for low-vision users. We know that subtlety is all the rage, but design choices like putting blue text on a blue background make it really difficult for some people to read. There are some great free tools, like Das Plankton, that will let you see if your contrast level is high enough to meet accessibility standards.

Schemes like this fail contrast testing on every level, so it’s best to avoid them. A better choice probably would have been white over a slightly darker blue.

The Ugly

The Horrible Modal

Despite the obvious hard work that went into making Hillary’s website accessible, much of the effort is lost due to a modal that appears when visiting the website for the first time (or in incognito mode). The problem is that the modal doesn’t receive focus when it pops up, and its close button has no focus indicator. While it technically can be closed via keyboard by navigating backwards (or tabbing through every single link on the page) once it pops up, it’s not obvious visually when that close button has focus, and navigating backwards isn’t exactly intuitive.

Conclusion

With one glaring exception, it’s obvious that a lot of thought and work has been put into making Hillary Clinton’s website accessible to voters with disabilities. There is definitely room for improvement with small things like somewhat irrelevant alternative attributes on photos, but on the whole, the site is better on accessibility than the vast majority of the sites we see.

Unfortunately, it is also obvious that accessibility is deeply neglected within Donald Trump’s website, which leaves a large swath of potential voters unable to browse to his stance on issues and other content. Hopefully, this will be attended to shortly.

Hopefully these auditing case studies lead you to think about your own website from the point of view of a person with a disability. There are plenty of challenges online for the disability community, but lots of those can be fixed with a few easy tweaks like the ones we covered here. We hope you'll use what you've learned to make your website accessible, too.

The Dark Art of Web Design

Matt & Mike sit down with Lullabot's entire design team and talk the ins, outs, processes and tools behind sites such as GRAMMY.com, MSNBC, This Old House, and more!

UX For Brains: Eight Ways Psychology Can Improve Your Designs

In my last article, I described the value of psychology for web and UX designers and provided examples of some things it can reveal about human bodies and behaviors. The title is a decent summary of the findings: “Let’s be honest, people suck.” In this article I'd like to present some practical ways to take that understanding of psychology and use it in your work.

How I Learned To Worry More And Love Limitations

In part one I described (maybe even ranted about) how people have unfairly high expectations, even though their own performance is limited: in their cognitive abilities, attention spans, vision, and reading, to name a few. But not all of these limitations are that problematic; they serve a purpose. Life is really hard, and it can be overwhelming to get through the day. Imagine being endlessly caught within analysis paralysis, or following an obsessive-compulsive loop. We are lucky to have shortcuts to keep us moving (even if those shortcuts are imperfect).

UX designers can make things easier if we work within these limitations. Designers have a responsibility to consider human complexities and build things that can work across the greater gamut. Allow me to repeat my plea from part one:

Stop designing experiences for us, for the “interactive 1%”.

I try to always remember that I’m coming from an industry that is much more familiar with web patterns than my typical user. The director of my college’s Graphic Design program would label anyone outside of a design domain a “pedestrian.” At the time, I thought it was pejorative, but now I see his point: imagine bringing a random person right off the street into your design process: they have no design background or project context. How successfully could they interact with your work?

By thinking about these less informed users, we can enhance usability for everyone. Take sidewalk curb cuts as an example: this real world solution for assisting the disabled also benefits a much larger group. I appreciate them on those days I’ve tricked myself into wearing heels. Ask yourself if your experiences are accessible enough to help all types of pedestrians get around (see what I did there?).

How To Help Humans Succeed

What follows are some guidelines to keep in mind to make your design work easy to interact with.

1. Simplify everything. Every. Thing.

People don't want to buy a quarter-inch drill. They want a quarter-inch hole! - Theodore Levitt

Don't lose sight of the purpose of a design. Define the goals it serves, and break them down to their most basic form (a good way to do this is to ask “why?”, five times). Keep this handy, maybe on a post-it, to reference as you build your flows and pages.

Once you get into the nitty gritty, ask yourself how to reduce cognitive load. Don't use ten words when five will do. Replace the cheeky marketing jargon with clear labels and calls to action.

Be ruthless in removing design elements that are only decorative and not informative. Present a limited number of choices (this one can seem counter-intuitive, because users might request more options, but deep down, they don't want them and the analysis paralysis that too many choices lead to; check out the jam study, if you don’t believe me). One method Jason Fried describes that can be helpful as you try to simplify is to begin grouping things (tasks, needs, features, etc.) into what needs to be obvious, easy and possible.

2. Make a great first impression

The saying “never judge a book by its cover” exists for a reason: people are naturally judgmental, and they form their initial impressions of a website in under a second. But designers can use these snap judgments to their advantage. The most cited factor in evaluating whether a website is credible is the appeal of its visual design! The right layout, typography, and color can make a huge impact, and make it immediately.

3. Get users where they’re going

Users need to know where they are, and where they can get to. Create navigation that helps users peruse and get a sense of the kinds of content they could access, without forcing them to click to and view every page. And show them where they are. Breadcrumbs are a great example of how to help users understand their current context and how the site is structured around it.

This sounds obvious, but sometimes the easiest way to help a user find something is by putting it on the page. It requires a lot more mental effort for a user to read a link in the navigation, decide it’s relevant to their current task, bring the mouse over to it, click it, and wait for the page to load, compared to say, flicking their finger on the mouse, wherever it may be, to scroll down the page. Your clients will often focus their concerns on keeping the right things "above the fold," but be sure to encourage them to not be overly concerned with that. Remember, scrolling is as cheap as it gets.

Sometimes you need a user to find something very specific, for example, an error message within a form. In these cases, keep this feedback in close proximity to their most recent action, such as locking it up with the Submit button they just clicked. If you do have to place important information elsewhere, make sure it’s going to get noticed in their peripheral view by creating additional cues like animation or—dare I say it—sound.

4. Provide opportunities for focus

People have a hard enough time paying attention, so make things easy to find during quick, half-hearted skimming. Break things up into clear sections, with whitespace in between. Use labels, lists, and graphics to make hunting even easier. Test that your hierarchy is clear by seeing what stands out first, second, and third, making sure it aligns with the user's priorities for that page. Hopefully, you’ve already put the most important thing right at the top.

And take out any distracting cruft to reduce cognitive load. This might be in the form of too much content, or design elements that add clutter, or a decorative typeface that is hard to read. This goes back to keeping it simple, but again we should ask ourselves if our experience is getting in the way of the A-to-B journey most users are actually on.

5. Leverage existing patterns

It can feel icky as a designer to take the road most traveled, but we have to be very careful and intentional when creating new whiz-bang. Users are coming to your site with preconceived expectations (like where they will find the shopping cart), so you'd better have a really good reason for reinventing the wheel. Otherwise, put content and navigation where users expect to see it. Make links look like links, and buttons look like buttons. Don’t make static titles blue and suggest interaction.

If you do need to teach something new, you can still leverage existing patterns to relate to something a user is already comfortable with. My favorite example of this is the simplicity of dragging files into your computer’s trash can. And of course, once you have your patterns set, maintain consistency across the entire experience, as users will be jumping from across multiple pages and different devices.

6. Plan for mistakes

First of all, of course, you want to try to reduce mistakes in the first place. Make sure all your copy is super duper clear, like a button with a “Process Order” label, or text explaining when a person’s card is going to be charged. Consider what slips could occur; are your buttons large enough and far apart enough to prevent accidental taps on mobile? If someone came back to this tab in three weeks, would they have any idea what they were doing? Are you confirming when something is actually going right, with progress bars and confirmation messages, or are you leaving users to press an unresponsive button three and four times?

Hopefully, you’ve done all of that, but sadly, mistakes will still happen. Part two is building backup plans and working through those error flows to get users back on track. Try to clearly explain what went wrong, so they have better luck with their next attempt. For really complicated systems, build in undos instead of making users start over. And for really important, terrifying systems (like piloting aircraft), create layers and layers of design redundancy.

James Reason knew what was up when he created the Swiss Cheese Model to illustrate the importance of design redundancy.

7. Put users in a good mood

Amazingly, creating an experience users enjoy can help with reliability and usability. Users perceive enjoyable experiences as easy to use and efficient, because the state of flow they create promotes flexible thinking and problem solving. And even if a user can’t find a workaround in that cool state of mind, they are more forgiving and more likely to excuse any hiccups they might encounter. There are lots of ways to create fun, and I don’t want to make it too formulaic, but consider starting points like inspiring designs and photography, humor, novelty, and creating flow.

8. Know your audience

These steps go a long way toward usability, but you have to validate your specific work with real users within your target audience. Whether it's two hours, two days or two weeks, spend some time talking to real people that will use what you’re making: first to understand your personas and the types of problems they experience, laying the groundwork for your design process; then touch base with them as designs progress, to see how they use things and complete (or struggle with) important tasks. The insights you'll gain are extremely valuable and the time spent doesn't have to be great.

Accept Human Behavior As It Is

So there you have it, a few gadgets for your toolbelt. I hope you find them useful and wish you luck with tackling these usability challenges. Had enough metaphors? No? You can equip yourself even further by combining these fundamentals with specific persona knowledge to understand your users’ problems more holistically before you get building. Good for one more? To truly solve users’ problems, we need to make ourselves familiar with the realities of this rocky usability terrain and adapt to it.

A Drupal Developer Crashes Laracon 2016

I attended Laracon 2016 recently, and I was glad I went. Why would I, a Drupal Developer, go to a conference about Laravel, a seemingly unrelated PHP framework? A few reasons:

  1. It was in Louisville, KY. That’s where I live. We don't get many open-source tech conferences, which is sad because Louisville is a great host for conferences. Good venues, Midwest prices, Southern charm, and local restaurants to make the mouth water.
  2. Laravel is based on many Symfony components, just like Drupal 8, so there is some overlap.
  3. I wanted to support another open-source community and just see how they did things.
  4. High-quality names and speakers: Zeev Suraski of PHP, Fabien Potencier of Symfony, Evan You of Vue.js, and Ryan Singer of Basecamp were just a few. With a conference of just 500 people, the opportunity to bump elbows was greater.

It drew in people from all over the world. I made it a point to meet at least one new person a day during the afternoon mingling hours and ended up meeting people from Egypt, Canada, Nigeria, and Australia.

It was a tech-heavy conference, with lots of live coding on the stage, which is the public-speaking equivalent of skydiving without a backup parachute. Gutsy.

The conference offered only one track, and while that obviously limits your choices, if you aren’t sure exactly what you want to get out of a conference, you want your choices limited. Otherwise, you are overthinking and overplanning the night before, and it's hard to focus on a single session because you have serious anxiety about what you might be missing out on (the cool kids call it FOMO). It's nice to be able just to show up and take it all in.

All of the sessions were beneficial in some way, but for me, these were the highlights of the conference:

Adam Wathan - Test Driven Laravel

Adam started developing an app with test-driven development, writing the tests first, then writing code that fixed the failures one at a time, step-by-step. Adam only allowed himself to write code when the code addressed a particular test failure. Quoting another author, he called it “programming by wishful thinking.” Another interesting idea he demonstrated was starting off with a broad functional test which acted sort of like an “outer loop” that could then drill down into more specific unit tests.

He also helped answer the question “Where do I start?” You want something that is simple, but cuts through your whole app and returns some results. Something that touches many parts but is not too complex. He used “Viewing a user’s tweets” as an example.

(Watch the full Test Driven Laravel presentation)

Jack McDade - Wizards, Lawnmowers, and Hovercrafts

Jack gave a fun talk and clearly loved being on the stage. Developers are creative creatures but how do we keep the creative juices flowing? Jack quoted one of my favorite authors, Brandon Sanderson, who, when asked his opinion on the hardest day job for an aspiring writer, answered: being a developer, because it draws on the same creative reserves. It’s hard to write after a day of coding. This is part of why writing documentation is such a struggle.

His advice—change perspectives (even if it means moving your desk a little bit) and ensure downtime—was not revolutionary, but was a good reminder presented in a skilled way. 

(Watch the full Wizards, Lawnmowers, and Hovercrafts presentation)

Sandi Metz - Get a Whiff of This

Overall, the conference felt very unpretentious. It's a community and ecosystem that caters first and foremost to the developer experience, offering lots of hand-holding for beginners, coupled with a desire to prevent developers from getting bogged down in a swamp of configuration. To achieve this, they are more than willing to pull ideas from other areas (like .NET and Rails), and that certainly came through. Sandi Metz typified this perfectly.

First, she doesn't even run around in the PHP world, let alone Laravel—and yet she was invited to speak. The community clearly loves Laravel, but they don't suffer from an excessive pride in their toolset.

Second, her talk on code smells was one that could have quickly gotten lost in some advanced weeds but remained bright, funny, and succinct while delivering a ton of value. Every developer, beginner to experienced, could have taken something valuable from it. I know I did.

(Watch the full Get a Whiff of This presentation)

Ryan Singer - Design: Case Study

Ryan’s talk had echoes of "The Design of Everyday Things" and probably a ton of other material, but he has internalized and thought through so many problems and solutions that his voice was one of personal experience.

He reiterated that design is not the skin, but how something works. He walked through a short case study in discovering affordances and user flows, which doubled as a great advertisement for the iPad Pro, and then summarized his "tower of interface design" comprised of:

  • Domain experience
  • Situations
  • Flows 
  • Affordances
  • 2D Layout

Based on the number of questions at the end, and the crowd surrounding him after the conference, this talk resonated with a lot of people. His presentation camped at the intersection of development and design, which is a problem space our team at Lullabot thinks about a lot.

(Watch the full Design: Case Study presentation)

What Would I Change?

Laracon might benefit from adding sprints or hackathons. After-parties are fine, but for an introvert like me, that's not how I get to know people. I get to know people by working with them, struggling through shared problems, and then relaxing at a dinner table over a shared meal.

Having a sprint of some kind before the conference (and maybe one after) would have made a good experience even better. First, it would have allowed people to kick off the conference by beginning to form meaningful relationships or renew old friendships, and this would make the rest of the conference more enjoyable. Second, sprints would give outsiders and newbies a good idea of the type of problems the community is grappling with.

Overall, I think I would go back if the opportunity comes up again. Especially if it happens to be in Louisville.

Around the Drupalverse with Enzo García

Matt & Mike talk with Eduardo "Enzo" García about his 18-country tour around the world! We talk about various far-flung Drupal communities, motivations, challenges, and more.

Scaling CSS Components with BEM, REMs, & EMs

Using relative units for measurements has a lot of advantages, but can cause a lot of heartache if inheritance goes wrong. But as I’ll demonstrate, BEM methodology and some clever standardization can reduce or eliminate the downsides.

A little vocab before we get too far into this:

em units

CSS length unit based on the font size of the current element. So if we set:

.widget { font-size: 20px; }

For any style on that .widget element that uses em units, 1em = 20px
 

rem units

Similar to em units, except it’s based on the “root’s em”, more specifically the <html> tag’s font-size. So if we set:

html { font-size: 14px; }

Then anywhere in the page, 1rem = 14px, even if we use rem inside of our .widget element’s styles. Although rem can be changed, I recommend leaving it alone. The majority of users will have a default font-size of 16px, so 1rem will equal 16px.

Relative units

Any length that’s context-based. This includes em, rem, vw, vh, and more. % isn’t technically a unit, but for the purposes of this article, you can include it in the mix.
 

em inheritance

This is em’s blessing and curse. Font size inherits by default, so any child of .widget will also have a 20px font size unless we provide a style saying otherwise, which means the size of an em also inherits to the child.

If we set font-size in em units, it can't inherit from itself, so for the font-size style, an em is based on the parent’s calculated font-size. Any other style’s em measurement will be based on the calculated font-size of the element itself.

When the page renders, all em units are calculated into px values. Browser inspector tools have a "Computed" tab for CSS that shows the calculated px value for any em-based style.

Here’s an example that’s good for demonstration, but would create a ball of yarn if it were in your code.

If we have these three elements, each nested in the one before:

.widget {
  font-size: 20px;
  margin-bottom: 1.5em; // 1.5em = 30px
}

.widget-child {
  font-size: 0.5em; // 0.5em = 10px
  margin-bottom: 1.5em; // 1.5em = 15px
}

.widget-grandchild {
  font-size: 1.4em; // 1.4em = 14px
  margin-bottom: 1.5em; // 1.5em = 21px
}

line 3

em will be calculated from the font-size of this element, .widget, which is 20px
1.5em = 20px * 1.5 = 30px 

line 7

Because this is a font-size style, it calculates em from the parent’s font-size
.widget  has a calculated font-size of 20px,  so 1em = 20px in this context.
0.5em = 20px * 0.5 = 10px 

line 8

em will be calculated from the font-size of this element, .widget-child, which is 10px
1.5em = 10px * 1.5 = 15px

line 12

Because this is a font-size style, it calculates em from the parent’s font-size. 
.widget-child  has a calculated font-size of 10px,  so 1em = 10px in this context.
1.4em = 10px * 1.4 = 14px

line 13

em will be calculated from the font-size of this element, .widget-grandchild, which is 14px. 
1.5em = 14px * 1.5 = 21px

This is why most developers stay away from ems: done poorly, it gets really hard to tell how 1em will be calculated, especially because elements tend to have multiple selectors applying styles to them.
 

em contamination

Unintended em inheritance that makes you angry.


All that said, there are two reasons I love rem and em units.
 

1 ♡ Scalable Components

We’ve been creating fluid and responsive web sites for years now, but there are other ways our front-end code needs to be flexible.

Having implemented typography systems on large sites, I've found that specific ‘real world’ implementations provoke a lot of exceptions to the style guide rules.

While this is frustrating from a code standardization standpoint, design is about the relationship of elements, and it’s unlikely that every combination of components on every breakpoint will end up working as well as the designer wants. The designer might want to bump up the font size on an element in this template because its neighbor is getting too much visual dominance, a component may need to be scaled down on smaller screens for appropriate line lengths, or, on a special landing page, the designer may want to try something off script.

I’ve found this kind of issue to be much less painful when components have a base scale set on the component wrapper in rem units, and all other sizes in the component are set in em units. If a component, or part of a component, needs to be bumped up in a certain context, you can simply increase the font size of the component and everything inside will scale with it!

To show this in action, here are some article teasers to demonstrate em component scaling. Click on the CODEPEN image below and try modifying lines 1-15:

Note: For the rest of my examples I’m using a class naming convention based on BEM and Atomic Design, also from my previous article.

To prevent unwanted em contamination, we're going to set a rem font-size on every component. Doing so locks the scope of our em inheritance, and BEM gives us an obvious place to do it.

For example, using the teaser-list component from my previous article we want featured ‘teaser lists’ to be more dominant than a regular ‘teaser list’:

.o-teaser-list {
  font-size: 1rem;
}

.o-teaser-list--featured {
  font-size: 1.25rem;
}

.o-teaser-list__title {
  font-size: 1.25em;
  // In a normal context this will be 20px
  // In a featured context this will be 25px
}

Meanwhile, in our teaser component:

.m-teaser {
  font-size: 1rem;
}

.m-teaser__title {
  font-size: 1.125em;
  // This will be 18px whether it's in an o-teaser-list or an o-teaser-list--featured,
  // because the component wrapper is using REM
}

Element scaling and adjustment becomes much easier, exceptions become less of a headache, and we accomplish these things without the risk of unintended em inheritance.

As a somewhat absurd example to demonstrate how powerful this can be, I’ve created a pixel perfect Lullabot logo out of a few <div> tags and CSS. A simple change to the font-size on the wrapper will scale the entire graphic (see lines 1–18  in the CSS pane):

One potential drawback to this approach: it takes a little more brain-power to write your component CSS. Creating scaling components means your calculator app will be open a lot, and you’ll be doing a lot of this kind of computation:

$element-em = $intended-size-px / $parent-element-calculated-px

Sass (or similar preprocessor) functions can help you with that, and there are some tools that can auto-convert all px values to rem in your CSS for you since rem can be treated as a constant.
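For example, a small Sass helper (hypothetical, not from any particular project) can do that division for you:

// Convert a target px size to em, given the parent's calculated px size.
// Plain `/` division works in older Sass; newer Sass prefers math.div().
@function em($target-px, $context-px: 16px) {
  @return ($target-px / $context-px) * 1em;
}

.m-teaser__title {
  font-size: em(18px); // 1.125em against a 16px context
}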

2 ♡ Works with Accessibility Features

The most common accessibility issue is low vision, by a wide margin. For quite a while, browser zoom has worked really well with px units. I hung my hat on that fact and moved on, using px everywhere in my CSS and assuming that low-vision users were taken care of by an awesome zoom implementation.

Unfortunately that’s not the case.

I attended an accessibility training by Derek Featherstone, who has done a lot of QA with users who depend on accessibility features. In his experience, low-vision users often have applications, extensions, or settings that increase the default font size instead of using the browser’s zoom.

A big reason for this might be that zoom is site specific, while font size is a global setting.

Here’s how a low-vision user’s experience will break down, depending on how font sizes and breakpoints are defined:

Scenario 1:

font-size: px
@media (px)

The text size will be unaffected. The low-vision user will be forced to try another accessibility feature so they can read the text. No good.

Scenario 2:

font-size: rem or em
@media (px)

The page can easily appear 'broken', since changes in font size do not affect the page layout.
For example, a sidebar could have six words per line with default browser settings:

But for a user with a 200% font size, the breakpoints will not change, so the sidebar will have three words per line. That's not how we wanted it to look, and it often produces broken-looking pages.

Scenario 3:

font-size: rem or em
@media (em)

Users with an increased default font size will get an (almost*) identical experience to users who use zoom. An increase in the default font size will proportionately affect the breakpoints, so in my opinion, the EMs still have it.

For example, if we have breakpoints in em a user with a viewport width of 1000px that has their font-size at 200% will get the same breakpoint styles as a user with a 500px viewport and the default font-size.
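As a sketch, assuming the common 16px default and a hypothetical .sidebar component:

/* 48em = 768px at the 16px default font-size.
   At a 200% (32px) default, the query flips at a 1536px viewport instead. */
@media (min-width: 48em) {
  .sidebar {
    float: right;
    width: 33%;
  }
}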

* The exception being media elements will definitely be resized with zoom, but may not change in size with an increased default font-size, unless their width is set with relative units.

A quick note on em units in media queries:
Since media queries cannot be nested in a selector and don't have a 'parent element', em units will be calculated using the document's font-size.

Why not use rem, you ask? It's buggy in some browsers some of the time, and even if it weren't, em is one less character to type.

Conclusion

For sites that don’t have dedicated front-end people, or don’t have complex components, this might be more work than it’s worth. At a minimum, all of our font-size styles should be in rem units. If that sounds like a chore, PostCSS has a plugin that will convert all px units to rem (and it can be used downstream of Sass/Less/Stylus). That will cover the flexibility and sturdiness needed for accessibility tools.

For large projects with ornate components, specific responsive scaling needs, and where one bit gone wrong can make something go from beautiful to broken, this can save your bacon. It’ll take some extra work, but keeping the em scope locked at the component level will keep extra work at a minimum with some pretty slick benefits.

Prioritizing with Dual-Axis Card Sorts

At a recent client design workshop, the Lullabot team ran into a classic prioritization challenge: tension between the needs of a business and the desires of its customers. Our client was a well-known brand with strong print, broadcast, and digital presences—and passionate fans who'd been following them for decades. Our redesign efforts were focused on clearing away the accumulated cruft of legacy features and out-of-control promotional content, but some elements (even unpopular ones) were critical for the company's bottom line. Asking stakeholders to prioritize each site feature gave deeply inconsistent results. Visitors loved feature X, but feature Y generated more revenue. Which was more important?

Multi-Axis Delphi Sort to the Rescue, Yo

Inspired by Alberta Soranzo and Dave Cooksey's recent presentation on advanced taxonomy techniques, we tried something new. We set up a dual-axis card sort that captured value to the business and value to the user in a single exercise. Every feature, content type, and primary site section received a card and was placed on a whiteboard. The vertical position represented business value, the horizontal represented value to site visitors, and participants placed each card at the intersection.

In addition, we used the "Delphi Card Sorting" technique described in the same presentation. Instead of giving each participant a blank slate and a pile of cards, we started them out with the results of the previous participant's card sort. Each person was encouraged to make (and explain) any changes they felt were necessary, and we recorded the differences after each 15-minute session.

With axes balancing user interest and business value, the upper-right corner becomes an easy way to spot high-priority features.

The results were dramatic. The hands-on, spatial aspect of card sorting made it fast and easy for participants to pick up the basics, and mapping the two kinds of "value" to different axes made each stakeholder's perspective much clearer. Using the Delphi sorting method, we quickly spotted which features everyone agreed on, and which required additional investigation. Within an hour, we'd gathered enough information to make some initial decisions.

The Takeaway

Both of the tools we used—Delphi sorting and multi-axis card sorting—are quick, easy, and surprisingly versatile additions to design workshops and brainstorming sessions. Multi-axis sorts can be used whenever two values are related but not in direct conflict; the time to produce a piece of content versus the traffic it generates is another great example. Delphi sorting, too, can be used to streamline the research process whenever a group is being asked to help categorize or prioritize a collection of items.

Based on the results we've seen in our past several client engagements, we'll definitely be using these techniques in the future.

BEM & Atomic Design: A CSS Architecture Worth Loving

Atomic Design is all the rage; I’ve recently had the pleasure of using BEM (or CEM in Drupal 8 parlance) and Pattern Lab on Carnegie Mellon’s HCII’s site. Working through that site I landed on a front-end architecture I’m very happy with, so much so, I thought it’d be worth sharing.

CSS Architecture Guidelines

Generally I’m looking for:

  • Low specificity: If specificity battles start between selectors, the code quality starts to nosedive. Having a low specificity will help maintain the integrity of a large project’s CSS for a lot longer.
  • Easy-to-understand naming: Ideally it won’t take mental effort to understand what a class does, or what it should apply to. I also want it to be easy to learn for people who are new to the code base.
  • Easy to pick up: New developers need to know how and where to write features, and how to work in the project. A good developer experience on a project can help prevent a mangled code base and decrease the stress level.
  • Flexible & sturdy: There’s a lot of things we don’t control when we build a site. Our code needs to work with different viewports, user settings, when too much content gets added in a space, and plenty of other non-ideal circumstances.

As part of the project, I got to understand the team that maintains the site and what they were used to doing. They had a Sass tool chain with some custom libraries, so I knew it was safe to build something custom that relied on front-end developers to maintain it.

Other teams and other projects can warrant very different approaches from this one.

Low specificity is a by-product of BEM, but the other problems are a little trickier to solve. Thankfully the tools I went for when I started the project helped pave the way for a great way to work.

Class Naming and File Organization

At Lullabot we’ve been using BEM naming on any project we build the theme for. On most of our projects, our class names for a menu might break down like this:

  • menu - the name of the component
  • menu__link, menu__title, menu__list - Examples of elements inside of the menu
  • menu--secondary, menu__title--featured, menu__link--active - Examples of variations. A variation can be made on the component, or an element.
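Seen in markup, those names fit together something like this (the menu__item element is hypothetical):

<ul class="menu menu--secondary">
  <li class="menu__item">
    <a class="menu__link menu__link--active" href="/blog">Blog</a>
  </li>
</ul>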

We try to make our file organization similarly easy to follow. We generally use Sass and create a ‘partial’ per component, with a SMACSS-ish folder naming convention. Our Sass folder structure might look something like this:

base/
├ _root.scss
├ _normalize.scss
└ _typography.scss
component/
├ _buttons.scss
├ _teaser.scss
├ _teaser-list.scss
└ _menu.scss
layout/
├ _blog.scss
├ _article.scss
└ _home.scss
style.scss

On this project, I went for something a little different.

As I wrapped my head around Pattern Lab’s default component organization, I really started to fall in love with it! Components are split up into three groups: atom, molecule, and organism. The only things that define the three levels are ‘size’ and whether the component is a parent to other components. The ‘size’ comes down to how much HTML a component is made up of; for instance:

  • one to a few tags is likely to be an atom
  • three to ten lines of HTML is likely a molecule
  • any more is likely an organism

In the project I tried to make sure that child components were at least a level lower in particle size than their parent. Organisms can be parents of molecules and atoms, molecules can be parents of atoms, but atoms can't be parents of any component. Usually things worked out that way, but in a few instances I bumped a component up a particle size.

Pattern Lab also divides layouts into pages and templates. Templates are re-used on many web pages; pages are a specific implementation of a template. For instance, the section-overview template is the basic layout for all section pages, and we could make an about page, which builds on the section-overview styles.
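
As a rough sketch of that relationship (the class names and values here are invented for illustration), the page’s styles simply layer on top of the template’s:

/* template: layout shared by every section page */
.section-overview {
  max-width: 60rem;
  margin: 0 auto;
}

/* page: a specific implementation that builds on the template */
.page-about {
  background: #f5f5f5;
}

The about page’s markup would then carry both classes, something like <main class="section-overview page-about">.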

Getting deeper into Pattern Lab, I loved how that structure informed the folder organization. To keep things consistent between the Pattern Lab template folders and my Sass folders, I decided to use the same naming convention for both.

Basically, this split my components folder into three folders (atom, molecule, and organism), and my layouts folder into two (template and page). While I liked the consistency, I started having a hard time figuring out which folder I had put some components in, and I’m the guy who wrote them!

Fortunately this was early in the project, so I quickly reworked the class names, prefixing each with the first letter of its “Pattern Lab type”.

The component classes now look like this:

  • a-button a-button--primary
  • m-teaser
    • m-teaser__title
  • m-menu
    • m-menu__link
  • o-teaser-list
    • o-teaser-list__title
    • o-teaser-list__teaser

The SCSS folders then followed a similar pattern:

00_base/
├ _root.scss
├ _normalize.scss
└ _typography.scss
01_atom/
└ _a-buttons.scss
02_molecule/
├ _m-teaser.scss
├ _m-menu.scss
└ _m-card.scss
03_organism/
├ _o-teaser-list.scss
└ _o-card-list.scss
04_template/
├ _t-section.scss
└ _t-article.scss
05_page/
├ _p-blog.scss
└ _p-home.scss
style.scss

With those changes, a class name tells me exactly where to find the Sass partial, the Pattern Lab template, and the component in the Pattern Lab library. Win!
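
For example, the menu molecule’s partial now maps directly from its class names (the contents here are invented for illustration):

// 02_molecule/_m-menu.scss
.m-menu {
  display: flex;
}

.m-menu__link {
  padding: 0.5rem 1rem;
}

Seeing m-menu__link in the markup points straight to 02_molecule/_m-menu.scss, and to the molecules section of the Pattern Lab library.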

Other class naming prefixes

Drupal 8 wisely adopted a convention from SMACSS: all classes added/touched/used by JavaScript behaviors should be prefixed with js-. In Drupal 7 I would often remove classes from templates and accidentally break behavior, but no more!

A few examples of JS classes:

js-dropdown__toggle js-navigation-primary js-is-expanded

Another convention we have used in our projects at Lullabot is to add a u- prefix for utility classes.

A few examples of potential utility classes:

u-clearfix u-element-invisible u-hide-text

This clearly communicates that the class has a specific purpose and should not be extended. No one should write a selector like .t-sidebar .u-clearfix, unless they want angry co-workers.
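
For illustration, a utility partial typically holds small, single-purpose rules along these lines (a sketch, not this project’s actual code):

.u-clearfix::after {
  content: '';
  display: table;
  clear: both;
}

.u-hide-text {
  overflow: hidden;
  text-indent: 100%;
  white-space: nowrap;
}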

These little prefixes make utility and JavaScript classes easy to spot, in case they aren’t immediately obvious.

Intersection of Particles

In past BEM projects I’ve felt uneasy about the parent/child component relationship. There are points where a parent/child relationship between two components feels right, but that isn’t something BEM is good at expressing.

When I do create a parent/child relationship, one component’s styles will often need to be higher in the cascade than the other’s, which means more code documentation to make sure that ‘gotcha’ is understood, and it introduces a little fragility.

This architecture handles that issue very well.

Let’s take a look at a ‘teaser list’: it contains a title for the listing and then a list of ‘teaser’ components.

In this example the wrapper of the ‘teaser’ component has two jobs:

  • define how the ‘teaser’ is laid out in the ‘teaser list’ component
  • define any backgrounds, borders, padding, or other interior effects that the teaser has

With the naming convention we’ve gone with, that wrapper has two classes that cover each function:

  • o-teaser-list__teaser makes sure the component flows correctly inside of its parent
  • m-teaser is responsible for the aesthetics and how the wrapper affects its contents

Using two classes makes sure the styles needed for the teaser’s layout in its parent won’t affect teasers that are in different contexts. The single-letter prefix makes it even easier to tell which class comes from the parent, and which defines the child component.

The partials are loaded in ‘particle size’ order, smallest to largest, so if there’s any overlap in the styles for those two classes, I know the parent component’s class will win. No cascade guesswork, and no fragility introduced by the parent/child relationship.
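
Here’s a sketch of how that division of labor looks in practice (values invented for illustration; the wrapper markup would be something like <div class="o-teaser-list__teaser m-teaser">):

// 02_molecule/_m-teaser.scss: aesthetics only
.m-teaser {
  padding: 1rem;
  border: 1px solid #ddd;
}

// 03_organism/_o-teaser-list.scss: layout within the parent
.o-teaser-list__teaser {
  width: 33.333%;
  // Same specificity as .m-teaser, but loaded later, so this wins any overlap.
  padding: 0.5rem;
}

Both selectors are single classes, so source order alone settles any conflict between them.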

As I mentioned before, I tried to make sure that parent components were always a larger ‘particle size’ than their children. There are a few instances where organisms were parents to other organisms, and, looking back, I wish I’d separated those organisms by adding a new particle size to the mix. Pattern Lab would have allowed me to add onto its naming conventions, or even completely replace them.

Conclusion

I’m not selling the silver bullet to solve CSS architecture, but I am really happy with how this turned out. This setup certainly isn’t for every project or person.

I tend to use a few preprocessors (mainly Node Sass and PostCSS) and find them helpful, but there’s no need for any fancy preprocessors or ‘buzz word tools’; the core of this approach can still be written in vanilla CSS.

Leave a comment; I’d love to hear about others’ experiences trying to create these kinds of systems: what worked, what didn’t, or specific feedback on anything I brought up here!
