
Beyond Discovery: Designing a More Valuable Phase One

If you’ve ever hired an agency to redesign, re-platform, or otherwise do a bunch of work on your organization’s website, you’ve likely had to consider whether to pay for an initial phase of work some agencies refer to as “Discovery.” What follows are some reasons why you were right to be nervous about investing in something called discovery and, more importantly, what you should look for in a phase one with any agency to ensure that you lay the foundations for success and get a great return on your investment.

What is "Discovery" and where did it come from?

As far as I can tell, the use of the word “discovery” within professional services finds its origin in the practice of law. In legal practice,

[discovery] is the formal process of exchanging information between the parties about the witnesses and evidence they’ll present at trial. —the American Bar Association

I know what you’re thinking. Web designers and developers probably began using this term in the hopes of charging rates like attorneys. While there are clear parallels between legal discovery and the kinds of things that need to happen in any large digital project (e.g. the exchange of information to get everyone on the same page), it’s worth noting that our field is borrowing a term that describes a process for creating a fair fight. That isn’t exactly the way I like to think about my client projects.

Also, the word discovery, based on its origins, suggests a mere exchange of information rather than the production of anything. It’s about finding things out so that you can then begin charting a course. It’s about input, not output.

Don’t get me wrong, discovery activities in and of themselves are valuable and necessary for most projects. The larger the undertaking, the more valuable they become. Discovery activities include things like:

  • The sharing and review of existing guiding documentation (e.g. style guides, personas, wikis, analytics)
  • Requirements gathering
  • Technical infrastructure review and analysis
  • The review of roles and responsibilities within a stakeholder group
  • The review and analysis of project repositories and assets
  • Inventories and audits of existing content, assets, resources, design patterns, etc.

These are all important things that you should want the vendor you’re working with to facilitate, but I’m guessing your team and your project sponsors may feel nervous, even disappointed if this is all they’re being promised in phase one.

Why agencies propose "discovery" phases and what's missing

If you’ve gotten a proposal that defines a discovery phase, in all likelihood it provided a clear price for this initial phase but a much less clear one for the rest of your project. I know this because I’ve helped write many of these proposals. Often one of the “deliverables” of a discovery phase will be some sort of project plan that outlines a more narrowed cost estimate for the entire project. Prior to the discovery phase, the proposal may provide what’s referred to in the industry as a SWAG: a ballpark range for the entire project to help you assess whether the vendor is at least within the ballpark of the budget you’re working with.

This is very typical for larger scale digital projects. If you’ve ever received a proposal like this, you may have been disappointed but not surprised to see that the proposal did not include an exact, fixed price for the entire project. It’s more important to feel confident that the vendor you’re choosing can be trusted to deliver something that’s successful within your budget. Ironically, at least at Lullabot, what we’re also looking for as we create project proposals is to feel confident that we can deliver success within your budget. However, if you’ve never worked with us and we’ve never worked with you, a mutual leap of faith is required.

Many agencies try to solve for the discomfort caused by discovery phase proposals by making the discovery phase as small and cheap as possible—the bare minimum needed to do just enough to feel somewhat confident in an estimate. Some even offer the discovery phase for free if the project itself is large enough to warrant the risk. I’m guessing if you’ve considered a proposal like this that you may have appreciated the cost consideration, but if the discovery phase did not appear to create anything particularly valuable for you and your team, you were probably still reluctant to invest the time in it. I’ve found that most of the clients we work with are just as busy as we are. If you’re hiring an agency right now to help you redesign or re-platform a large site, you likely don’t want your team to invest a bunch of time and energy in a discovery process, no matter how cheap, with a vendor you may not wind up working with, only to find yourself back reviewing proposals like this again. From day one, you want to find a partner you can trust to see you through to a successful launch.

So what’s the solution? If the lack of an exact estimate isn’t the biggest problem to solve, and making discovery phases smaller and cheaper also misses the needs and concerns prospective clients have, then what should we propose in these scenarios?

Moving beyond discovery and into design

A couple of years ago I was working with a prospective client to iron out an initial phase one engagement. They had a corporate intranet they were embarrassed by. Their organization was growing fast and needed to continue attracting top talent. Their existing intranet was terrible for onboarding new employees. It also looked like a historical artifact rather than the intranet of one of the world’s most innovative tech companies. After about three or four rounds of proposal revisions in which they kept saying this was not quite what they were looking for, I had the brilliant idea to just ask them: “What are the things you need to have to get your team and the project sponsors excited and confident in the project by the end of phase one?”

What I discovered was that, as an organization, they understood that they needed a better understanding of the problems to be solved, and they understood that would require some research work, but our phase one proposal was essentially just research work. What was really valuable for their team was not just understanding all the problems more clearly, but rather seeing a vision for how to solve those problems. We did not need to simply understand things; we needed to make things based on that understanding. That would get their team excited and give them visual things to show their CEO and project sponsors who ultimately needed to get behind the investment in the project. That would even help them assess the value of the project and hence their budget. We were not certain exactly what we’d be designing and building, and therefore we did not feel confident providing a fixed estimate for the entire project. They did not know exactly what they needed, and therefore had no way to assess their budget based on value.

This is one of those things that seems so obvious in hindsight. As designers, it’s easy to focus on the amazing insights and game-changing ideas that can come out of initial research and forget that, while that’s all great and true, it’s the actual designs that solve the problems (even the problems we just feel but don’t clearly understand) that get people excited. As designers, we get excited about these insights because they’re the eureka moments that lead to the designs our clients rave about. Our clients just want the designs they’ll rave about.

The good news in the situation I just described was that the client was not saying they needed us to do the whole project as phase one. Nor were they saying they saw no value in the research and discovery-oriented activities we needed to do at the outset. They realized that the project was large enough that we couldn’t really describe and estimate it accurately without starting somewhere.

What we did was simply end phase one at the point where we had produced what they were all eager to see: the initial designs for what their intranet should be like. We went from proposing a phase one that was just user research and discovery activities to a phase one that also included workshopping that produced wireframes and initial visual design ideas for a few key page types. Our early research and workshopping helped establish which page types were actually most important to success (e.g. the employee dashboard at login and a couple of other highly used content types and tools). Phase one got bigger by one or two weeks but produced things that were really valuable for our client.

Since that project, we’ve taken this approach to designing engagements with every new client. When a project warrants a phase one, our proposal process tries to establish the earliest things that are hugely valuable to the client team (usually via collaboration with that team). It’s not always the same things, but most phase one engagements seem to include the following kinds of things now:

  • Discovery activities (see above)
  • Lightweight research (user research, both internal and external; market research; etc.)
  • Design workshopping to produce a plan in the form of models, diagrams, wireframes, and Zoom Mocks (see below)
Design workshopping: producing value quickly

To achieve great things, two things are needed: a plan, and not quite enough time. —Leonard Bernstein

We try to keep phase one as brief as possible, but ensure that by the end we've produced the initial designs for what we’re planning to create together. One of the huge benefits of including some actual design work in an intentionally-brief phase one is that the time constraint can provide amazing focus and clarity. It will force your team to quickly zero in on the things that are most important to your site’s users and your business.

While our initial research and discovery activities provide hugely valuable insights, they also typically raise questions. Decisions (or the lack of them) are what most often slow down design projects. There are teams of people involved, each with their own areas of focus and concerns. While user research and testing can certainly help guide decisions, even with these tools there are often several directions a given product or interface can go in. If you hire a professional design team, you may be thinking they will simply answer all these questions for you, but that’s not typically how it works. We’ve found that one of the keys to a successful project is the ability to arrive quickly at the early, foundational decisions together. We achieve this through design workshopping.

The concept of design workshopping is not something we invented. Our team has been doing this for years, and we’ve refined our approach along the way by learning from others. Kevin Hoffman has shared a great deal on the subject, and the book Gamestorming has also been helpful. More recently, Google Ventures’ book Sprint articulates a slightly more formal, startup-focused version of the kinds of things we’ve been doing as well.

At its heart, design workshopping is about getting all the various roles and perspectives together in one place for collaborative exercises aimed at making initial design decisions together. Though we’re a fully distributed team at Lullabot, we almost always do these design workshops in-person. We fly a few of our people to wherever our client is and spend two to three days on-site together workshopping. We design the workshopping agenda together with our clients in the weeks leading up so that it fits with the various schedules of the client team we’re working with. Each day is divided up into workshopping sessions. The sessions are pretty brief (thirty minutes to an hour each), and no one stakeholder or team member needs to be in all of them (which would be impossible for most of our client teams). While no two projects are identical in their needs, some of the common things we focus on in our workshops include:

  • The existing pain points and problems to be solved as your team and users see them
  • Reframing those problems into prioritized goals and how to measure them
  • Your project and product’s constraints: political, technical, staff, resources and assets, brand, etc.
  • The competitive landscape for your product or brand
  • Your audience and its various segments, scenarios, needs, and priorities
  • Your content and its model, priorities, creation/editorial processes and governance
  • Your site and its IA, interface, patterns, page types, layouts and their respective priorities
  • Your brand and its characteristics, visual style, values and personality
  • The project process and your team's needs and values regarding phases, artifacts, deliverables, cadence, communication, etc.
  • Possible layouts and interface approaches to solve the problems and achieve the goals

Our workshops aren’t just conversations about these topics. We use a combination of methods and exercises to help elicit insights and drive decision making. Some of the more common methods we employ include:

  • Card sorts (often multi-axis)
  • Sticker voting (often in combination with a card sort)
  • Metaphor games
  • Gut checks
  • Design Studio sketching exercises

In one of Tom Wujec’s TED Talks, he shares stories from an exercise he’s done with lots of business people in which he has them draw how to make toast. At about five minutes into the talk, he shares what he found when he began having groups do this exercise together, and he mentions a fascinating thing. When he had groups do the exercise together in complete silence, they did it “much better and much more quickly!”

It never ceases to amaze me how you can get 5-10 people from an organization in a room and spend hours in conversation about challenges and goals for a project and still not arrive at a clear decision. However, when you apply the constraints of a “game,” give people set time limits, and let them work both individually and collaboratively, moving things physically in space to express things like importance, potential impact, and level of effort required, you can get that entire group to arrive at decisions together in less than an hour. Turning things from purely verbal, mental work into something physical and visual is hugely powerful!

Likewise, it can be incredibly difficult to articulate in words exactly what you don’t like about your existing site, or how you want your new site to feel. Instead, walking teams through metaphors with visuals and letting them describe why they think their sign-up process should feel like checking in at a W hotel rather than a Marriott, or why their non-profit organization’s brand is actually more like a Volvo than a Mercedes, can unlock so many more useful descriptive words and ideas to define what we need to create together.

While the decision making that happens in these workshops is often what blows away our clients initially, it’s the visual ideas that come out of our sketching exercises that put it all together. By the end of our workshops we have sketched wireframes for the key page or component types of a given site, and often our clients have helped produce these sketches. We’ve found that most project teams at the organizations we work with have at least some ideas for how to solve the problems on their site, and sketching is a great way to engage those ideas together.


Typically, within a week after our workshopping, we’ve assembled more refined versions of the things that document and visualize all the decisions we made together: things like goals and metrics, site maps and phase diagrams, IA and content models, wireframes, and even Zoom Mocks that increase the visual fidelity, showing style ideas that reflect the brand feel we’ve described together with metaphors. We have things we’re ready to test with real users and then iterate on to create a successful end product. What’s more, we’ve continued to find these briefs can radically ease the onboarding of new team members later in a project, or even to a product team after launch.

What have you found?

We’re always learning and refining, looking for better ways to design engagements so that they provide the greatest value to our clients and lead to successful outcomes. We’ve found that turning phase one from a mere exchange-of-information discovery process into a process that produces ready-to-test designs has been a big win for our clients and for us. We’d love to hear from you. If you’ve hired a design and development team to work on your project, we’d love to hear about what worked well and what you’d like to change on your next engagement. Drop us a line and share your thoughts. We’d greatly appreciate it!

Merging Entities During a Migration to Drupal 8

Migrations provide an excellent opportunity to take stock of your current content model. You’re already neck deep in the underlying structures when planning for data migrations, so while you’re in there, you might as well ensure the new destination content types will serve you going forward and not present the same problems. Smooth the edges. Fill in some gaps. Get as much benefit out of the migration as you can, because you don’t want to find yourself doing another one a year from now.

This article will walk through an example of migrating part of a Drupal 7 site to Drupal 8, with an eye toward cleaning up the content model a bit. You will learn:

  • To write a custom migrate source plugin for Drupal 8 that inherits from another source plugin.
  • To take advantage of OO inheritance to pull field values from other entities with minimal code.
  • To use the Drupal 8 migrate Row object to make more values available in your migration yaml configuration.

Scenario: A music site moving from Drupal 7 to Drupal 8

Let’s say we have a large music-oriented website. It grew organically in fits and starts, so the data model resembles a haphazard field full of weeds instead of a well-trimmed garden. We want to move this Drupal 7 site to Drupal 8, and clean things up in the process, focusing first on how we store artist information.

Currently, artist information is spread out:

  • Artist taxonomy term. Contains the name of the artist and some other relevant data, like references to albums that make up their discography. It started as a taxonomy term because editors wanted to tag artists they mentioned in an article. Relevant fields:

    • field_discography: references an album content type.

  • Artist bio node. More detailed information about the artist, with an attached photo gallery. This content type was implemented as the site grew, so there was something more tangible for visitors to see when they clicked on an artist name. Relevant fields:

    • field_artist: term reference that references a single artist taxonomy term.
    • field_artist_bio_body: a formatted text field.
    • field_artist_bio_photos: a multi-value file field that references image files.
    • field_is_deceased: a boolean field to mark whether the artist is deceased or not.

Choosing the Migration’s Primary Source

With the new D8 site, we want to merge these two into a single node type. Since we are moving from one version of Drupal to another, we get to draw on some great work already completed.

First, we need to decide which entity type will be our primary source. After some analysis, we determine that we can’t use the artist_bio node because not every Artist taxonomy term is referenced by an artist_bio node. A migration based on the artist_bio node type would leave out many artists, and we can’t live with those gaps.

So the taxonomy term becomes our primary source. We won’t have an individual migration at all for the artist_bio nodes, as that data will be merged in as part of the taxonomy migration.

In addition to the migration modules included in core (migrate and migrate_drupal), we’ll also be using the contributed migrate_plus and migrate_tools modules.
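If you are starting from scratch, fetching and enabling these modules might look like the following (a sketch assuming a Composer-based Drupal 8 project; exact commands depend on your setup):

composer require drupal/migrate_plus drupal/migrate_tools
drush en -y migrate migrate_drupal migrate_plus migrate_tools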

Let’s create our initial migration configuration in a custom module, config/install/migrate_plus.migration.artists.yml.

id: artists
label: Artists
source:
  plugin: d7_taxonomy_term
  bundle: artist
destination:
  plugin: entity:node
  bundle: artist
process:
  title: name
  type:
    plugin: default_value
    default_value: artist
  field_discography:
    plugin: iterator
    source: field_discography
    process:
      target_id:
        plugin: migration
        migration: albums
        source: nid

This takes care of the initial taxonomy migration. As a source, we are using the default d7_taxonomy_term plugin that comes with Drupal. Likewise, for the destination, we are using the default fieldable entity plugin.

The fields we have under “process” are the fields found on the Artist term, though we are just going to hard-code the node type. The field_discography mapping assumes we have another migration, albums, that migrates the Album content type.

This will pull in all Artist taxonomy terms and create a node for each one. Nifty. But our needs are a bit more complicated than that. We also need to look up all the artist_bio nodes that reference Artist terms and get that data. That means we need to write our own Source plugin.

Extending the Default Taxonomy Source Plugin

Let’s create a custom source plugin that extends the d7_taxonomy_term plugin.

<?php

// Lives in your custom module, e.g. src/Plugin/migrate/source/Artist.php.
// (The namespace below assumes a module named "mymodule"; adjust to yours.)
namespace Drupal\mymodule\Plugin\migrate\source;

use Drupal\taxonomy\Plugin\migrate\source\d7\Term;
use Drupal\migrate\Row;

/**
 * Merges artist_bio node fields into the artist taxonomy term source.
 *
 * @MigrateSource(
 *   id = "artist"
 * )
 */
class Artist extends Term {

  /**
   * {@inheritdoc}
   */
  public function prepareRow(Row $row) {
    if (parent::prepareRow($row)) {
      $term_id = $row->getSourceProperty('tid');
      // Find a published artist_bio node that references this term.
      $query = $this->select('field_data_field_artist', 'fa');
      $query->join('node', 'n', 'n.nid = fa.entity_id');
      $query->condition('n.type', 'artist_bio')
        ->condition('n.status', 1)
        ->condition('fa.field_artist_tid', $term_id);
      $artist_bio = $query->fields('n', ['nid'])
        ->execute()
        ->fetchAll();
      if (isset($artist_bio[0])) {
        // Copy every field value from the artist_bio node onto this row.
        foreach (array_keys($this->getFields('node', 'artist_bio')) as $field) {
          $row->setSourceProperty($field, $this->getFieldValues('node', $field, $artist_bio[0]['nid']));
        }
      }
    }
  }

}

Let’s break it down. First, we see if there is an artist_bio that references the artist term we are currently migrating.

$query = $this->select('field_data_field_artist', 'fa');
$query->join('node', 'n', 'n.nid = fa.entity_id');
$query->condition('n.type', 'artist_bio')
  ->condition('n.status', 1)
  ->condition('fa.field_artist_tid', $term_id);

All major D7 entity sources extend the FieldableEntity class, which gives us access to some great helper functions so we don’t have to write our own queries. We utilize them here to pull the extra data for each row.

if (isset($artist_bio[0])) {
  foreach (array_keys($this->getFields('node', 'artist_bio')) as $field) {
    $row->setSourceProperty($field, $this->getFieldValues('node', $field, $artist_bio[0]['nid']));
  }
}

Next, if we found an artist_bio that needs to be merged, we loop over all the field names of that artist_bio. We can get a list of all its fields with the FieldableEntity::getFields method.

We then use the FieldableEntity::getFieldValues method to grab the values of a particular field from the artist_bio.

These field names and values are passed into the row object we are given. To do this, we use Row::setSourceProperty. We can use this method to add any arbitrary value (or set of values) to the row that we want. This has many potential uses, but for our purposes, the artist_bio field values are all we need.
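For example, the same mechanism could expose a computed flag to the process pipeline (illustrative only; has_bio is not a field on either entity):

// Any computed value set here becomes available as a source property
// in the migration YAML, e.g. under the name "has_bio".
$row->setSourceProperty('has_bio', !empty($artist_bio));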

Using the New Field Values in the Configuration File

We can now use the field names from the artist_bio node to finish up our migration configuration file. We add the following to our config/install/migrate_plus.migration.artists.yml:

field_photos:
  plugin: iterator
  source: field_artist_bio_photos
  process:
    target_id:
      plugin: migration
      migration: files
      source: fid
'body/value': 'field_artist_bio_body/value'
'body/format':
  plugin: default_value
  default_value: plain_text
field_is_deceased: field_is_deceased

The full config file, with the source now pointing at our custom artist plugin:

id: artists
label: Artists
source:
  plugin: artist
  bundle: artist
destination:
  plugin: entity:node
  bundle: artist
process:
  title: name
  type:
    plugin: default_value
    default_value: artist
  field_discography:
    plugin: iterator
    source: field_discography
    process:
      target_id:
        plugin: migration
        migration: albums
        source: nid
  field_photos:
    plugin: iterator
    source: field_artist_bio_photos
    process:
      target_id:
        plugin: migration
        migration: files
        source: fid
  'body/value': 'field_artist_bio_body/value'
  'body/format':
    plugin: default_value
    default_value: plain_text
  field_is_deceased: field_is_deceased

Final Tip

When developing custom migrations with the Migrate Plus module, configuration is stored in the config/install of a module. This means it will only get reloaded if the module is uninstalled and then installed again. The config_devel module can help with this. It gives you a drush command to reload a module’s install configuration.
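Putting the two tips together, a typical development loop might look like this (command names as provided by the config_devel and migrate_tools modules at the time of writing; "mymodule" is a placeholder for your module, and flags and aliases vary by version):

drush config-devel-import mymodule
drush migrate-import artists --update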

11 Tools for VR Developers

As the VR market continues to push forward, there is an increasing number of tools available to VR developers. While this is not an exhaustive list, I’ve compiled the most commonly used VR tools and platforms today in hopes of making it easier for you to find what works best for your needs.

Desktop Tools

This is a collection of tools VR developers are using to create native application experiences, typically for Windows machines.

Unity 3D

Unity is by far one of the most ubiquitous tools being used today for VR development. At its heart, it’s a game engine. It has a direct VR mode that lets you preview your work in an HMD (Head Mounted Display), which can really boost productivity by letting you design for VR from within a virtual environment. Unity is quickly becoming the default tool for VR development due to its ease of use and the speed with which you can prototype VR applications.

There’s a huge community around this tool and so there are plenty of resources and documentation to learn from. A market of 3D assets can get you up and running in a short amount of time. If you’re familiar with C# or JavaScript, you can get into the scripting pretty easily as well. All major HMDs are supported and you can export your work to almost any platform imaginable, even WebGL.

Learn more about Unity.

Unreal Engine (UE4)

One of the main competitors of Unity 3D, Unreal Engine is also a game engine with VR integrations, an asset store, and great documentation. Its graphics are arguably more advanced and realistic, and the learning curve is similar to Unity’s. Many of the VR demos built with UE4 are much more lifelike and smoother to navigate. It provides great performance with the conveniences of a modern editing environment. UE4 also exports to most platforms, though slightly fewer than Unity.

Learn more about Unreal Engine.

3DS Max & Maya 

These are Autodesk products for modeling, animation, lighting, and VFX. They don’t have VR support by default; it comes through pricey plugins instead. AutoCAD and 3DS Max are long-time standards in the architectural design industry and have some of the most precise tools in their UIs. Like almost all GUIs for building 3D environments and drawings, these tend to be quite massive interfaces, with a lot of tools hidden behind menus, sub-menus, and toolbars.

Learn more about 3DS Max, Maya, and other Autodesk products.

Blender 

Blender is quickly becoming a favorite modeler for many VR developers. It’s free and open-source software, scriptable in Python, and available for Windows, Mac, and Linux. There’s a huge community of people devoted to this software and its use. Many websites provide tutorial videos, forums, and documentation.

The software’s official documentation is also quite comprehensive. Aimed mainly at modeling, UV mapping, lighting, rigging, and animation, Blender can export your models to a multitude of formats that can then be used with many other tools. There’s even a free, open-source plugin called FireVR for exporting your creations into JanusVR.

Learn more about Blender.

SketchUp 

SketchUp (formerly Google SketchUp, now owned by Trimble) is a basic modeling application with a very low learning curve that can get anyone up and running in a short amount of time. The tutorials on the website are excellent, not only teaching the basics of the software but also serving as introductory lessons in basic 3D modeling concepts. You can quickly learn the basics of modeling with SketchUp and then move on to more advanced tools like Blender if you desire. It’s really great for modeling, quickly learning the lingo, and then moving on to bigger and better things. There’s a free trial version of this software.

Learn more about SketchUp.

WebVR Tools 

These are tools used for developing WebVR in the various browsers. Most browsers are still working out headset device support, but it’s getting much closer to being included in the main builds of modern browsers like Chrome and Firefox. In the meantime, most phones can be detected with the WebVR polyfill and, when turned sideways, switched to a dual-display mode that works with Google Cardboard, Samsung Gear VR, or other headsets built to be used in conjunction with a smartphone.

Three.js 

This is a JavaScript library that works as a layer on top of WebGL. It has many helpers and abstractions that make working with WebGL much easier than the WebGL API alone. WebGL is an implementation of OpenGL for modern browsers such as Chrome, Firefox, and Safari. There are some excellent applications being developed with Three.js, ranging from fun demos to multiplayer worlds and games.

Most WebVR implementations are built using Three.js, due both to its ease of use and to how popular JavaScript has become. Very little 3D graphics work in the browser is done without it.
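To give a sense of the library, here is a minimal, generic Three.js scene (not tied to any particular project) that renders a spinning cube:

// Create a scene, a camera, and a WebGL renderer attached to the page.
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
var renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// A green cube built from a box geometry and a basic material.
var cube = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshBasicMaterial({ color: 0x00ff00 })
);
scene.add(cube);
camera.position.z = 5;

// Render loop: rotate the cube a little on every frame.
function animate() {
  requestAnimationFrame(animate);
  cube.rotation.y += 0.01;
  renderer.render(scene, camera);
}
animate();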

Learn more about Three.js.

A-Frame 

If you’d like to skip learning Three.js and WebGL directly, check out the open-source project A-Frame by Mozilla. This is a web framework built on top of Three.js and WebGL for building virtual reality experiences with HTML, using an entity-component ecosystem. It works on Vive, Rift, desktop, and mobile platforms.

If you’re coming from the web development world, this is a great place to start as it has an HTML-like syntax and you can use all of your typical web development tools. It’s hands-down the easiest to pick up and so far the framework of choice at Lullabot.
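To show what that HTML-like syntax looks like, here is a minimal scene along the lines of A-Frame’s own hello-world examples (the version in the script URL is illustrative):

<html>
  <head>
    <!-- A-Frame registers custom elements like <a-scene> and <a-box>. -->
    <script src="https://aframe.io/releases/0.5.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
      <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>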

Learn more about A-Frame.

React VR 

Poised to be the next big thing in WebVR, React VR promises rapid iteration and a syntax similar to A-Frame’s, but it hinges on the benefits that React brings. You can read more about how to get started in their documentation.

Learn more about React VR.

Vizor.io 

Vizor is an interesting take on a WebVR editor in your browser built with NodeJS and Three.js. It’s a visual programming environment for WebGL, WebVR and other HTML5 APIs. It features live preview, data flow visualization, network communication, publishing, unlimited undo, and a catalog of presets that can be used as modular building blocks. Complex logic can be nested in subgraphs and they can be rendered directly to a specific render target, or simply used as a texture. Loops are modeled as nested graphs that are evaluated once per loop iteration. All within your browser.

Learn more about Vizor.io.

JanusVR

Janus is more akin to a web browser for VR than a development tool. It’s a platform: while the client is closed source and built with Qt 5, the server-side component is open source and written in NodeJS. Janus has full Oculus Rift support with avatars and some hand control via the Leap Motion controller. HTC Vive is also supported, and Razer OSVR support is on the roadmap.

One thing that separates Janus from these other tools is that it’s really more of a web browser. Virtual environments are written much the same way a traditional website is created. An HTML-like syntax is used to set up what are referred to as “rooms,” and there is basic JavaScript support as well. Since virtual environments are just websites in Janus, you can serve them up just like a traditional website, using whatever web server you like and hosting them wherever you please. It’s totally distributed, just like the web of today.

Within Janus you can hit ‘tab’ to create a portal to a new website, much the way you enter an address into your traditional web browser. Just click the portal and walk through to a new website, or “room.” You can even place the Janus markup inside a traditional website within comment tags: a typical browser parses the HTML and shows your regular website, while Janus sees the 3D version of the site.
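As a rough sketch of that idea (the tag names follow Janus’s FireBoxRoom markup, but treat the details as illustrative rather than a complete, working room):

<html>
  <body>
    <!--
    <FireBoxRoom>
      <Assets>
        <AssetObject id="chair" src="chair.obj" />
      </Assets>
      <Room>
        <Object id="chair" pos="0 0 -5" />
      </Room>
    </FireBoxRoom>
    -->
    Regular page content for traditional browsers goes here.
  </body>
</html>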

Janus has a means of editing the code of a room directly within Janus and saving it. This feature is making it a popular option for web developers starting the transition to building virtual environments. Janus even has built-in multi-user support, so you can walk around the internet with your friends and talk to them via the built-in voice chat. It’s a lot of fun and has a very low learning curve, especially if you’re already familiar with web technologies. You can learn more about Janus and its many uses or refer to the ever-helpful subreddit.

Learn more about JanusVR.

JanusWeb 

Janus also has a WebVR counterpart, which aims to support within WebVR everything that the native browser does. Check out the open source code.

Learn more about JanusWeb.

Do you use any of the tools or platforms mentioned? I’d be curious to hear your thoughts on them and any others not on this list.

A successful one-click deployment in Drupal 8

When I started building the DrupalCamp Spain 2017 website, I was very excited to see how far I could get with Configuration Management. A lot of effort went into Drupal 8 to make the management of configuration seamless, without the dependency on contributed modules like Features. Long story short: Configuration Management works wonderfully if you introduce it into your development workflow. In this article we will walk through how to implement, test, and deploy a release of DrupalCamp Spain’s website, including some issues we ran into and how we resolved them.

The goal: just push the button

Our goal with deployments at DrupalCamp Spain was that there should be nothing left to do on the website after the Jenkins job that performs the deployment has completed. It worked in the following way:

Someone would open the Jenkins production deployment job, select a git tag, and click build.


The above job would, via an Ansible task:

  1. Deploy the source code.
  2. Run composer install in a new directory.
  3. Run database updates: drush update-db -y.
  4. Import configuration: drush config-import -y.
  5. Purge Varnish caches: varnish clean domain all.
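As a rough sketch, those post-deploy steps could be expressed as Ansible tasks like the following (the command and args keywords are standard Ansible, but the task names, paths, and variables here are illustrative, not the project’s actual playbook):

- name: Install PHP dependencies
  command: composer install --no-dev
  args:
    chdir: "{{ release_dir }}"

- name: Run database updates
  command: drush update-db -y
  args:
    chdir: "{{ docroot }}"

- name: Import configuration
  command: drush config-import -y
  args:
    chdir: "{{ docroot }}"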

Here is the log of a successful deployment. In the following sections we will see how to set up a project in order to achieve one-click deployments.

The starting point

The DrupalCamp Spain website has a public repository that you can fork and explore; its README describes the tools the project uses.

Now, let’s look at one of the past deployments and then dissect it together.

A sample deployment

Even though the DrupalCamp Spain website was a fairly small project, we did many deployments to the production environment. As an example, I have chosen the 2.2.2 release - List session proposals because:

  • It includes a database update.
  • It contains configuration changes.
  • It involves code changes.
  • It adds new dependencies.

The full list of changes is available on the release page.


If you are wondering how all of the above could be deployed successfully, the answer is twofold. First, we had a Jenkins job in charge of deploying the code. Second, we had a development process which tested deployments that we will look at in the next section.

Working on an issue

Release 2.2.2 was the result of the Call4Papers milestone. This milestone consisted of four issues which were completed via pull requests. We followed this development process:

  1. Install the site locally following the repository’s README.
  2. Pick an open issue like List session proposals.
  3. Create a new branch from master with the issue number such as 113-list-proposals.
  4. Implement the requirements and create a pull request when the branch is ready for review. If there are changes in configuration, export them via drush config-export and include them in the pull request (see the sketch after this list).
  5. Once the pull request is merged, check Jenkins to verify that the automatic deployment to the development environment worked.
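In practice, step 4’s configuration export boils down to a couple of commands (the config directory path depends on your settings; the commit message is just an example):

drush config-export -y
git add config/
git commit -m "Export configuration for issue 113"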

Polishing the deployment

In this case, I merged the pull request, and that triggered a Jenkins job that deployed the changes in the master branch to the development environment. It failed, as I made a change in the pull request just before it was merged that introduced a bug. Once I fixed that in a second pull request, I introduced a different bug (slap me with a trout, please) which needed a third pull request. Once it was merged, Jenkins could run the deployment task to the development environment successfully and there was much rejoicing.

In order to verify the deployment, I opened the development environment in the web browser. I was puzzled to find that even though the database update that installed the new module completed successfully, the module was not installed. After some debugging, I discovered that I had forgotten to export my local environment’s configuration into the config directory; once I did so with drush config-export, the exported configuration included the setting that keeps the new module installed. Since Jenkins runs drush config-import after drush update-db, the import had been uninstalling the module because the module was not yet present in the exported site configuration. Therefore, I created a fourth pull request that included this fix. When Jenkins deployed it to the development environment, this time the module was successfully installed. Phew!
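The key detail is that drush config-export writes the list of installed modules to core.extension.yml in the config directory, so a later drush config-import keeps them installed. The relevant excerpt looks roughly like this (path and module name are illustrative):

# config/core.extension.yml (excerpt)
module:
  my_new_module: 0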

It took us four deployments to get the release working in the development environment. For production, we want to accomplish it in a single deployment. In the next section we will see how to test that.

We use a tool called Tugboat in our clients’ projects that helps us spot and fix implementation and deployment bugs in the early stages of development. You can find out more at https://tugboat.qa.

Testing the deployment to production

Once we completed all the milestone’s issues for listing sessions, we needed to make sure they could be deployed as a whole without causing errors.

Simulating a production deployment

Here is an example of how we tested production deployments:

First, we triggered the Jenkins job that copies the production environment’s database and files into the development environment.


Here is the log. Once we had fresh data in the development environment, we triggered the job that kicks in when someone makes changes to the master branch (which ran the Ansible task to update the database).


The job completed successfully (Yay!). I also opened the development environment’s website and verified that the new module was installed and that all the new features we worked on in the milestone behaved as expected. We were ready to publish this to production!

Hitting the button

The release was complete and we had tested its deployment. Therefore, we were ready to publish it. We used the Jenkins job that takes a git tag as a parameter, deploys the code to the production environment, and finally runs the post-deploy task to update the database. It worked!

When you test deployments beforehand, you get a deep understanding of what happens during the process. It gets especially interesting when something goes wrong, as it gives you a chance to debug the deployment process locally, fix it, and test it again in the development environment until it works flawlessly.

What happens when a deployment goes wrong in production? Setting up a rollback process is something that deserves its own article. In the meantime, what’s your development and deployment process like? Do you have any suggestions? I look forward to hearing your insights below.

Acknowledgements

This article was possible thanks to the following folks:

Footer image by włodi from London, UK (The Big Red Button, uploaded by Yarl) [CC BY-SA 2.0], via Wikimedia Commons.

Why Choose React?

I recently wrote an article about how to learn React. One thing I didn’t touch on was why you would choose to use React for a project. When you’re building something for yourself, it doesn’t matter much what software you choose. So long as you like it and it gets the job done, that’s good enough.

When you’re building a product or working with clients, the choice of software becomes much more important. So why choose React?

It’s Proven at Scale

React plays a key role in the largest social media platform in the world—Facebook. It’s proven at massive scale and has a dedicated team of developers at Facebook with hundreds of other contributors outside the company.

Why should this matter? Let’s imagine for a moment that you’re a technical manager. Maybe you’re the Director of Development or the CTO. Ultimately, you’re in charge of selecting software your company is going to use to build stuff. You ask your developers what they think, but ultimately you have to make the call. If it blows up, it’s on you, not the developers.

One of the first questions to ask is, “Who else is using this to build big, important stuff?” In the case of React, there is first and foremost the previously mentioned Facebook. React was originally developed for use on their ads dashboard—the place where they generate their revenue. They also use it on parts of the main Facebook site. If you have React DevTools installed, you can open it up on facebook.com to check out the parts that are built using React, but the point here is that Facebook uses React extensively on critical parts of their products.

Instagram, another of the largest sites in the world, is also built with React—both the website and mobile app. Netflix, Twitter, Uber, BBC, Airbnb, Dropbox, Lyft, and NBC.com (which the team here at Lullabot helped build) are a few other examples of large-scale applications that are built using React.

If you’re our hypothetical technical manager and you’re making a decision about what technology you’re going to use on a project with a budget that may run into the millions, it’s reassuring to know that your choice is battle tested. It’s also a good indicator of a healthy ecosystem around the software—good tools, supporting libraries, and training. This is certainly true of React—the ecosystem surrounding it is large and mature. You’ll also be better off when you need to add or replace team members because the pool of developers will inevitably be larger.

No other JavaScript framework plays such a key role in the infrastructure of its sponsoring organization. Google doesn’t use either Angular framework for any of its major products (both are Google-backed projects). Other frameworks often have little to no backing from a large organization and depend on a single developer for the bulk of the code.

It’s Influential and Innovative

There are a number of software patterns and techniques that React has helped popularize, among them the use of a virtual DOM, component-based architecture, a declarative programming style, HTML-in-JS, CSS-in-JS, and functional programming.

All of these things within React have been influential in the broader JavaScript community. When React first came out, the use of JSX—which is sort of an HTML-in-JS approach—raised a lot of eyebrows. Over time, JSX has mostly been embraced and has encouraged developers to reconsider previously held ideas about separation of concerns. One result has been the creation of CSS-in-JS libraries like Aphrodite and styled-components. Together, these have gone a long way toward realizing the potential of component-based architectures.
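To make that concrete, here is a minimal, generic JSX component (not from any particular codebase) showing markup living alongside logic:

// JSX lets a component return its markup directly from JavaScript.
function Greeting(props) {
  return <h1>Hello, {props.name}!</h1>;
}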

You may have also noticed a lot of talk about functional programming in JavaScript circles. It’s a programming approach that JavaScript is well suited to and React has helped fuel interest. An important library within the React ecosystem, Redux, has accelerated that interest to the point that React development now leans toward functional programming approaches.

It’s Just JavaScript

Much of the code written in a React application is plain JavaScript. Contrast this with a library or framework that has a large API and you’re looking at potential time savings in getting your team up to speed. Another way to think of this is that if you have developers that already know JavaScript, their existing knowledge will be quickly put to use instead of spending time learning how to use the new software.

I’ve seen this dynamic firsthand. A developer may not have a lot of React-specific knowledge, but they can usually pick up tickets because there are tasks that only require general JavaScript knowledge. They can learn the React stuff as they go and become productive relatively quickly.

A separate but related point concerns languages that compile to JavaScript rather than being plain JavaScript. The most popular example of this is TypeScript. It’s an interesting language and offers some useful features, but it can add to the learning curve.

For many teams, the prospect of learning a new language and a new framework at the same time can be daunting. Because these languages are used by a subset of JavaScript developers, they can also limit the pool of developers available to quickly step in when there is turnover on your team.

It’s Versatile

One of the most exciting things about React is that you can use it to build apps across multiple platforms—web, mobile, desktop, and other devices. This has been referred to as a “learn once, write anywhere” approach to building software. This isn’t a hypothetical benefit, either. Here at Lullabot we’ve seen clients use React to build apps across various platforms and move developers from one team to the next with minimal disruption. This is huge.

You can also reuse React Native code to publish your app as both an iOS and Android app. Remember our hypothetical tech manager? How attractive is it to have a team of developers with the flexibility to move from building a web app to a mobile app? As a developer, it’s also exciting to be able to move into a new area of development without starting from scratch.

In the past, it was possible to write “hybrid” mobile apps using web technologies and various helper frameworks, but they often had limited functionality and poor performance. React Native—the React framework for mobile app development—is different. It allows you to build apps that are indistinguishable from those built in Objective-C, Swift or Java.

And Finally…

We’ve discussed why React is a good choice. Nevertheless, React is just a tool. Just as you wouldn’t use a hammer to cut down a tree, React isn’t right for every project or every team.

For example, if you need very high performance, then choosing a lighter-weight framework like Preact may make more sense, as the team at Uber discovered. One nice feature of Preact is that it shares the same API as React, so it’s easy for a React team to pick up. The point, however, is to go with the software that is the right tool for the problem at hand.

React obviously has a lot to offer. Here at Lullabot we like it because it helps us solve problems for our clients better than the alternatives.

Eight Reasons Why Security Matters for Distributed Agencies

As I was doing my deep dive into an IoT camera, a question came up: why does it matter? Sure, any given device might not be secure, but how does that affect employees or our business?

I’m glad you asked!

1. Consumer Routers Are Mostly Garbage

Every home internet connection needs a router and some sort of WiFi network. Otherwise, you’re stuck connecting a single device directly to a cable or DSL modem. Unfortunately, most routers have poor security. The CIA has used home router exploits for at least the past 10 years, and odds are good that non-state actors have been doing so too.

  1. In general, router security is not a selling point, and the lowest-cost routers are the bulk of the market.
  2. In order to reduce costs, routers usually use older hardware and WiFi chipsets, and they ship with Linux. Since the WiFi drivers are often proprietary and out of the kernel tree, even new devices often ship with an ancient version of Linux. That means that your shiny new router (like the recently released Netgear Nighthawk X10) might ship with a kernel from half a decade ago (according to their GPL code release), missing the security improvements made since then [1].
  3. Very few routers offer automatic updates, so even if manufacturers provided comprehensive security updates they would be ignored by the majority of users.

Sometimes ISPs give home users routers, or require them to use ISP-provided ones, but those have a poor security track record too. In one instance, a router’s DNS settings could be changed, which would let an attacker redirect traffic to servers of their choice.

Why does this matter? In the end, every single bit of internet traffic goes through your router. If an attacker wants to sniff unencrypted traffic, or attempt to downgrade an encrypted connection, this is the place to do it. Your router should be the most secure device on your network, and instead it’s likely the least secure.

Our security team reminds our employees that their overall security starts with their router.

Try to find devices that offer some sort of automatic update, and vendors with a good security record. Consider running an open-source router distribution like pfSense, OPNsense, or OpenWrt that makes it easier to stay up to date. Don’t trust your ISP’s equipment unless they’ve shown they are security conscious.

2. Home Networks Have Untrusted Devices

If you have a family at home, odds are you’ve given out your WiFi password. After all, you want kids or guests to be able to access WiFi when they need it. But, have you checked those devices to make sure they’re secure? What are the odds that the laptop your kid’s friend brought over to do homework on has some sort of virus on it? Or, that your babysitter’s old unpatched Galaxy phone is infected with a rootkit? You wouldn’t want these devices plugged in at work, and they shouldn’t be on the same network as your work devices either.

The easiest way to handle untrusted devices is to use the “guest network” functionality in your WiFi access point.

Usually, these networks limit traffic between devices, and only allow them to communicate out to the internet. Many access points allow multiple guest networks, so you could separate “mostly trusted” devices from “patient zero infection vector” devices [2].

3. Security Includes Privacy Too

Imagine that after reading the previous point, you go out and set up a perfectly secure and segmented network. Then, a grandparent gives your kids internet-connected teddy bears. Great! You put them on the kids’ WiFi network and rest easy knowing that your work data is secure.

Until you realize that they left the toy in your office, and you had conference calls with enterprise clients talking about unannounced products, and that the teddy bear was uploading all recorded audio to an unprotected database.

One of the best parts of working from home is being able to create your own space, or multiple spaces, to work in. But in sharing that home, you open yourself up to potential leaks and vulnerabilities. Of course, in the above hypothetical, the odds of an attacker combing through those voice recordings and finding something useful are small. Then again, what if your contracts require client notification in the case of a suspected breach? Even if the real risk is small, the impact on your reputation could be huge.

Treat your client data like your personal photo collection, your home budget, or your medical records.

Think not just about ways you can be directly hacked, but about ways data can be intercepted, and how you can limit those vulnerabilities.

4. IoT Devices Punch Holes By Design

What does every IoT device market as its most important feature? Usually, it’s some combination of “cloud,” “app,” and “integration.” If it’s a security camera, the marketing will almost always show a person out travelling, viewing their kids at home. Door locks alert you when they are unlocked. A thermostat detects you driving home and starts to warm up the house.

In other words, these devices need to have a two-way connection to the Internet—they need to send statuses out to the cloud, and receive commands from your phone or the cloud. That means they’ve opened a hole through your router.

It might be a surprise, but while your home router is probably the most important security device on your network, routers all include methods for devices and applications to open up your network to the Internet—without any sort of authorization or controls. UPnP and NAT-PMP are the most common protocols for this. STUN is also used, as it works even if UPnP and NAT-PMP are disabled on the router.

No matter how they do it, IoT devices for the home place accessibility over security almost universally. That is a fundamental conflict with many agency (and customer!) priorities, making every single IoT device employees own a potential threat to your operations.

Prefer “smart” devices that work without an internet connection, or use a separate network entirely such as Zigbee.

As well, disable UPnP and NAT-PMP on your routers, and use a stateful firewall instead of relying on NAT to protect your home network.

5. Hacked Devices Put Private Networks At Risk

I’m sure many are thinking, “it’s OK, we require the use of a VPN for all of our work.” That’s fine, and certainly a good practice. It stops direct attacks on your private services from the broader Internet, and ensures employees’ connections can’t be sniffed by malicious devices at home.

Or… does it?

Ask yourself: how many VPNs do you have for client work that use self-signed SSL certificates? How many intranet sites require you to click through and ignore HTTPS warnings in your browser? How many of your critical domains use DNSSEC? How many client devices are validating DNSSEC signatures?

What prevents a hacked “smart” electrical plug from hacking a router in turn, and then redirecting traffic from there? How likely are you to notice that the self-signed VPN certificate has changed?

VPNs are great, but they’re only a start. The connection process is still vulnerable to attack by other devices on the network. Ignore best practices in but one layer of the system, and the whole thing becomes vulnerable. All because that WiFi thermostat was on sale for $29.99.

Don’t rely on VPNs as the sole method to protect your company.

Make sure all employees are aware of the risks that come with using work resources and VPNs at home, and that they understand the trust that comes with VPN access.

6. Agencies Are Great Targets

How many different clients do you work with today? How many have you worked with in the last year? How many access credentials do you have “on ice,” that are active, but not in daily use?

Imagine a hacker is trying to gain access to an enterprise’s network or data. What’s easier: hacking their well monitored and well-staffed corporate networks, or hacking a remote employee or agency protected by a mere consumer-grade router? And, if the target is not a specific company, but simply a company in a given vertical, agencies are perfect victims. At least, if the agency doesn’t consider security in a holistic and comprehensive manner.

Don’t fall into the “we’re too small to hack” trap.

Just as smart devices might be used as a vector to hack your laptop, your small agency might be used as a vector to hack a client.

7. Enterprises are Great Targets, Too

Ok, so agencies are great targets for hacks, and we should all just give up.

Well, enterprises don’t always have great security either. I’ve worked with companies with hundreds of thousands of employees that don’t have SSL on a single intranet site. I’ve also seen companies with APIs that have zero authentication, allowing unauthenticated POST requests to modify business-critical data. Or AWS root keys left in cleartext on company wikis or in source code.

As agencies, we’re often hired to set the standard for our clients’ teams. That means that when we see an SSL certificate fail, we click cancel and call support instead of forcing it through. We use best practices for APIs, like request signing instead of plaintext passwords. We change passwords we see posted in Slack, and remind the team to use something like LastPass or GnuPG instead.

But, to do this effectively, we need to have our own security house in order. We need to not just communicate the best practices, but live them ourselves, so we can know we aren’t leading clients towards an unusable and burdensome set of restrictions.

Bake good security practices into how you work with clients.

Follow the same security practices with your own teams, so when you make suggestions to clients you come from a place of experience.

8. The Internet Is A Community

In the Drupal world, we’re always telling our clients how being a part of the community is the best way to build sites efficiently. A hacked web server doesn’t just affect our clients and their users—it affects other, innocent users online. A server taking part in a DDoS might not be noticeable at all to the server admins—but the other end of the attack is having a very bad day.

For digital agencies, our livelihoods depend on a functional and reliable internet. If we ignore security in the name of hitting our next deadline, we hurt the commons we all need to thrive.

Think about the downstream effects of a security breach.

Remember that the bulk of hacks these days aren’t about data exfiltration, but computing resources for DDoS attacks or spam. Be aware of the common resources your company has (hosting, email, domains, websites) that may be valuable to attackers in their own right simply because they can be used in other attacks.

Technical Notes

[1] I compared their source to the upstream LTS 3.10.105 release, which showed that CVE-2016-3070 was patched in August in commit af110cc4b24250faafd4f3b9879cf51e350d7799. It doesn’t appear that fix is shipped with the Netgear router. It’s possible the fix isn’t required for this hardware, but do we really trust that they’ve done their due diligence for every single patch? It’s a much better practice to apply all security patches, instead of selectively deploying them. Even if they’ve backported security patches, the Linux kernel itself has added significant security features since then, such as Live patching, write-only protection to data, and merges from the grsecurity project.

[2] Another solution is to implement multiple “virtual networks” or VLANs with firewall rules to control traffic. Combined with a managed switch and appropriate access points, you can “tag” traffic to different networks. For example, let’s say you have a Chromecast you want to be able to use from both your work laptop and from phones guests have. A VLAN would let you create three networks (devices, work, and guest), and add rules that allow traffic from “work” and “guest” to send traffic to “devices”, but not the other way around. Likewise, “work” could open a connection to a “guest” device, but “guests” couldn’t initiate a connection to a “work” device. Obviously this requires some learning to set up, but is great for flexibility if you have more than just the simple “guest” scenario.

Header image is Broken Rusty Lock: Security (grunge) by Nick Carter.

4 Common Surprises Experienced During Design Discoveries

It’s time to get started on that next big CMS redesign project and you’ve been tasked with finding a vendor. Proposals begin flowing in that provide estimated quotes, but one vendor tells you that an accurate estimate can’t be determined without first doing a “discovery.” You might feel a little annoyed by this because all you want is an estimate of how much the vendor will charge, allowing you to weed out the ones that don’t fit within your budget. Discoveries are often met with hesitance, but companies are frequently surprised by what they learn, and they come to realize that the findings that surface throughout the process are not only necessary but make the project as a whole that much more successful.

What is a discovery really?

First of all, a discovery is not about just gathering project requirements that your team has already identified. It’s so much more than that. Discoveries entail interviewing all stakeholders and conducting workshops to get a third-party perspective on your team’s workflows, processes, pain points, and how both internal and external users engage with the site. This research and workshopping phase is also used to understand your goals and, oftentimes, to refocus them to more effectively align with what you and your stakeholders ultimately want to accomplish.


All of these elements provide the critical foundation for determining what the best possible design solution is for your organization, including an accurate project cost estimate. After all, it’s never fun to be in the middle of a project only to find out that it will cost you far more than anticipated, not to mention the delayed completion date that goes along with it. The good news is that this is all totally avoidable.

Discoveries uncover the unexpected

Having completed many discoveries, we’ve witnessed many different scenarios and situations. Nevertheless, there are some common surprises we’ve seen repeatedly during the discovery phase over the years that justify and confirm their importance. Had these not been addressed prior to the project’s beginning, the process would have taken longer, cost more than planned, and overall, been a massive headache for all parties involved.

1. Not everyone is always on the same page.

The most important outcome of a design discovery is to ensure all stakeholders come away with a shared understanding of what problems need to be solved and the strategy required to solve them. Many companies looking to design an experience have goals that target their site’s external users or customers but often forget about the internal users who work with the site every day. There are always several internal stakeholders involved in the decision-making, maintenance, and editorial functions that come with operating a CMS.

As part of our design discovery process, ALL stakeholders are not only interviewed separately but are also brought together to go through a series of exercises that create a level playing field to ensure a shared understanding. As at most companies, decisions are made at the executive or management level, but those actually doing the work don’t always have a voice. Through interviews and group exercises, these disconnects—such as communication breakdowns, workflow issues, or any number of other problems—become apparent and are often eye-opening. They surface issues that had not been considered previously, and oftentimes lead to changing strategy, developing a much better workflow, and improving relationships among all involved.

“I just want to note that our two strategic initiatives as an organization got zero votes for importance by anyone. Apparently they’re not what our team thinks are important to the actual success of the site.” —Project Sponsor during a recent on-site workshop

2. Pre-determined project plans don’t always paint the full picture

A company’s decision-makers have often established what they want and how they want to do it before hiring a vendor. They want the vendor to come in and implement their plans and strategies. In our experience, successful vendors will still always conduct the research and workshopping necessary to fully understand not only the plans shared with them but, more importantly, to drill down further to ensure everything—people, workflow, processes, goals, and strategy—is aligned.

During the discovery phase, user interviews and testing often help reveal misconceptions about the project direction. Companies launch these projects aiming to reach goals, set forth by the business strategy, with their CMS. To achieve these goals, plans are often devised from the inside out, without taking a deeper look at what the audience wants. We’ve witnessed companies wanting to design from a demographic perspective versus user modes, which are the common ways users engage with a site depending on their intent.

For example, while working on a project for a client that provides an online directory of local services, we found that determining the needs and values a user would have based on their mode (e.g. a user in emergency mode looking for a gas leak repair service versus a user in exploration mode researching ideas for a kitchen remodel) allowed us to find the constants regardless of a user's age, gender, etc. Taking this approach also enabled us to create an end-product that consistently provides a great experience. 

3. Valuable content created over the years can get lost in the archives

With the amount of content any given company produces over many years, it’s sometimes impossible to keep track of it all. Furthermore, turnover within a team adds to the chaos of knowing what exists out there. Most companies don’t just have one website; they also have a few or many microsites.

Discoveries often resurface valuable content that’s still relevant and can be used in their content marketing strategies. In many cases, we have found that organizations may be sitting on a gold mine of content that needs to be re-surfaced, instead of investing in creating new content unnecessarily.

4. Discoveries are far more productive than expected

Because there’s a misconception around what discoveries are and what they aim to accomplish, many clients have been amazed by how in-depth, yet efficient, they actually are. One critical part of this is to conduct discoveries in person as opposed to remotely. Being onsite gives us quick access to the internal stakeholders, and the face time this allows is an invaluable way of gaining a stronger understanding of their dynamics, processes, problems, needs, and the overall big picture.

Once the discovery is complete, all necessary information has been collected and, usually, wireframes have been started from the collaborative workshopping exercises. Completing this critical work efficiently and effectively via an onsite visit sets the project up for success on so many levels and speeds up the process overall since due diligence has been done.

Conclusion

Although many clients are initially hesitant about participating in discoveries, we have found that they always appreciate them afterward and come to understand the value they deliver. Many times, they are surprised to see how much better their projects can be than originally expected. In some cases, clients have even adopted the best practices learned during the discovery process. Most importantly, discoveries set your company up for success and help you avoid unwelcome cost and timeline surprises along the way.

Special thanks to Jared Ponchot, Jeff Eaton, Jen Witkowski, Maggie Griner, and Marissa Epstein for their help with writing this article.

How to Learn React: A Five-Step Plan

I was going through my email recently when I came across an article that caught my attention. It was about a team of .NET developers that learned Node.js and React while building a product. The following reflection stood out to me:

The philosophy of React is totally different to anything we have worked on so far. We were in a constant fight with it. One of the problems was finding the correct source of truth. There are so many different articles and tutorials online, some of which are no longer relevant. React documentation is OK-ish, but we didn’t want to invest too much time going over it and opted for a quick start instead.

For the past two+ years I’ve worked exclusively on React projects and I’ve had my own up-and-down learning experience with it. Over that time, I’ve developed some advice for how to learn React—the resources, the sequence and the important takeaways.

What follows is a five-step plan for learning React. All of the steps point you to free resources, although there are fallbacks listed on some of the steps that are paid courses or tutorials. The free stuff will get you there, however.

Finally, we finish with an “other things to consider” section. You’ll often hear people say that React is just a view library. While that’s true to some extent, it’s also a large and vibrant ecosystem. This last section will touch on important things to know or consider that aren’t covered in the five main steps. These things aren’t essential to building something real and significant, but you should look into them as you continue learning how to build software with React.

Step One - React Documentation + Code Sandbox

Yes, you should start by reading the React documentation. It’s well written and you’ll understand the essential terminology and concepts by the time you’re finished. The link I shared above points to the first section of the documentation on installation. There is a link there to a CodePen to help you get started.

An alternative that I prefer is Code Sandbox. I think it gives a better feel for a basic React application. You can use it to try things out as you work through the docs.

There is another option on that page under the “Create a New App” tab, which is to use Create React App to build a development environment on your local machine.  Create React App is a great tool and perhaps using it right away will help you. Everyone has a learning approach that works best for them.

Personally, I feel it adds mental load right as you’re getting started. My advice is to stick with Code Sandbox or CodePen when you’re beginning and focus on fundamental concepts instead of the development environment. You can always circle back later.

Key Takeaways
My recommendation is to read the Quick Start and Advanced Guides sections. I know not everyone likes reading documentation, particularly those who are visual or auditory learners. If you’re struggling to get through those sections, then the key areas to focus on are in the Quick Start:

  • Introducing JSX
  • Rendering Elements
  • Components and Props
  • State and Lifecycle (super important!)
  • Handling Events
  • Composition vs Inheritance
  • Thinking In React

Again, my advice is to read the full two sections, but at a bare minimum, power through the areas in the list above. They include basic concepts that you will need.
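If you want something concrete to paste into Code Sandbox as you work through those sections, here is a minimal sketch that touches JSX, props, state, lifecycle, and event handling. The component and its labels are hypothetical, invented purely for illustration.

import React from 'react';
import ReactDOM from 'react-dom';

// A hypothetical component exercising props, state, lifecycle, and events.
class Counter extends React.Component {
  constructor(props) {
    super(props);
    this.state = { count: 0 }; // component-local state
    this.increment = this.increment.bind(this);
  }

  componentDidMount() {
    // Lifecycle hook: runs once the component is in the DOM.
    console.log(this.props.label + ' mounted');
  }

  increment() {
    // Never mutate state directly; always go through setState().
    this.setState((prevState) => ({ count: prevState.count + 1 }));
  }

  render() {
    // JSX looks like HTML but compiles down to React.createElement() calls.
    return (
      <button onClick={this.increment}>
        {this.props.label}: {this.state.count}
      </button>
    );
  }
}

ReactDOM.render(<Counter label="Clicks" />, document.getElementById('root'));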

Step Two - React Fundamentals Course

After Step One, you should have the gist of what React is all about. Maybe not everything is clear, but you’re starting to see some of the concepts take shape in your mind. The next step is the React Fundamentals course from React Training. It’s free and it’s good. The instructor is Tyler McGinnis and he’s both knowledgeable and easy to follow in his instruction.

Why this course? If you’re more of a visual or auditory learner, this will help you. It covers the basics as well as introduces key things that you’ll need to build something real—like webpack and fetching remote data.

By the end of the course you will have what you need to build a basic React application. Depending on what your goals are in learning React, you may even have all the information you need upon completing this course. Most of you—those learning React to build client projects—will need to keep going.

Important point: Do the exercises in the course. Following along will benefit you much more than just watching him go through them.

Key Takeaways
There’s a lot of good stuff in this course, but you’ll want to come away with a handle on the following:

  • Reinforcement of principles from React docs
  • Intro to the build tools for React projects, particularly webpack
  • Understanding the this keyword
  • Stateless functional components (sketched in code after this list)
  • Routing (how you navigate from one “page” to the next)
  • Fetching async data
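To give that one item some shape: a stateless functional component is just a function that accepts props and returns JSX, with no class and no state. A minimal, hypothetical example (the component name and props are my own):

import React from 'react';

// A stateless functional component: props in, JSX out. Nothing else.
function UserBadge({ name, avatarUrl }) {
  return (
    <div className="user-badge">
      <img src={avatarUrl} alt={name} />
      <span>{name}</span>
    </div>
  );
}

// Used like any other component:
// <UserBadge name="Ada" avatarUrl="/images/ada.png" />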

Another Option
I’ve heard nothing but good things about the courses Wes Bos creates. If you have the budget, his React for Beginners may be something to consider. It’s not cheap, though. Depending on the option, it’s $89-127. If your employer is paying, it may be helpful. To get a feel for how he teaches, check out his free course, JavaScript 30.

Step Three - Read ReactBits

The next step is ReactBits, a wonderful resource from Vasa at WalmartLabs. This isn’t a book, really. It’s more a series of tips, in a very useful outline format, that can help fill in the gaps left by other tutorials.

Key Takeaways
There is gold here. And it’s a resource that gets regularly updated so you can return to it as React continues to evolve. Again, I encourage you to read all the tips, but if you struggle with it, here’s what to focus on:

  • Design Patterns and Techniques (most important tip is probably this)
  • Anti-Patterns (if nothing else, read this section)
  • Perf Tips

Another great thing about ReactBits is that each tip includes references so you can do more research on the topic yourself, or at least understand why the author thinks it’s a best practice.

Architecture Interlude

Thus far we have identified resources that will teach you how to create simple applications. Before we continue, however, we need to pause to consider what happens when a React application becomes more complex. As a React application grows, common problems often arise. For example, how does one share state across multiple components? How can one clean up API calls that have been scattered throughout the app as functionality has been added?

Facebook came up with an answer to these questions—the Flux architecture. Some time later, Dan Abramov created an implementation of Flux he called Redux. It has been hugely influential in the React community and beyond.  Facebook liked Dan’s work and subsequently hired him as part of the React team.

I recommend you learn Redux, but you should know there are other options, most notably, MobX. I’m not going to do a Redux vs. MobX run down in this post. I will only note that the general consensus is that MobX is typically easier to learn, but Redux is better suited to larger projects.

One of the reasons Redux is viewed as especially suitable for large projects is that, much like React itself, Redux is more than the simple library it is often billed as. It is also an architectural pattern that can bring a lot of predictability to your app. On a big project with lots of moving parts (and developers), this is a tremendous asset.

One final thing to note about Redux is that it also has a very robust ecosystem around it. Redux supports middleware, and there are a large number of libraries that can add debugging (with time travel), data handling, user flows, authentication, routing and more.

I encourage you to learn Redux. If you take a look at it and decide it’s not right for you, then MobX may be something that will work better. These are just tools. Use what helps you.

I’m including a talk below by Preethi Kasireddy from React Conf 2017 on Redux vs. MobX so you can get a feel for the pros and cons of each.


One last thing on architecture…
If it feels like Redux or MobX are too heavy for your application, consider the container component pattern. It can tidy things up by separating logic from presentation. It can help you see at a glance where API calls and other logic resides and may be all that you need to improve the organization of your app.
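As a rough sketch of what that separation looks like in practice: the container owns the state and the API call, while the presentational component only renders what it is given. The endpoint and component names here are hypothetical.

import React from 'react';

// Presentational component: no logic, just markup.
function ArticleList({ articles }) {
  return (
    <ul>
      {articles.map((article) => (
        <li key={article.id}>{article.title}</li>
      ))}
    </ul>
  );
}

// Container component: owns state and side effects (the API call).
class ArticleListContainer extends React.Component {
  constructor(props) {
    super(props);
    this.state = { articles: [] };
  }

  componentDidMount() {
    fetch('/api/articles') // hypothetical endpoint
      .then((response) => response.json())
      .then((articles) => this.setState({ articles }));
  }

  render() {
    return <ArticleList articles={this.state.articles} />;
  }
}

One nice side effect of this split: the presentational component is trivial to reuse and to test, because it has no idea where its data comes from.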

Step Four - Redux Documentation + Redux Video Series

The documentation for Redux is good and you should start there. One thing to note is that Redux uses a functional programming style and if you’re coming from a Java or C# background, there may be some unfamiliar syntax. Don’t worry about it. If you see something weird, set it aside. After you’re through with this step and the next, you’ll have a handle on it.

There is a series of videos called Getting Started with Redux by Dan Abramov. They’re available for free on Egghead.io and they’re a good resource. I have a colleague who thought the videos made the documentation easier to understand. If you learn better from video tutorials, then start with them, but be sure you go back to the docs. The documentation includes information that will help you but is omitted from the videos.

Key Takeaways
The thing you want to have at this point is not mastery, but a basic handle on Redux terminology and concepts:

  • Three principles of Redux
  • Actions
  • Reducers
  • Data flow
  • Usage with React
  • Async actions
  • Async Flow
  • Middleware

Go through these resources and if, when you’re finished, you feel like you somewhat get it, then you’re on track. Redux has an easy/hard thing to it at first. The individual pieces are mostly easy to understand. Putting them all together in your head often takes a bit more time. The resources in the next step will help with this.
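As a quick taste of that terminology in code, here is a minimal, self-contained sketch of an action, a reducer, and a store, using the todo example the Redux docs themselves lean on:

import { createStore } from 'redux';

// Action creator: returns a plain object describing what happened.
const addTodo = (text) => ({ type: 'ADD_TODO', text });

// Reducer: a pure function that computes the next state from the
// previous state and an action. It never mutates its arguments.
function todos(state = [], action) {
  switch (action.type) {
    case 'ADD_TODO':
      return [...state, { text: action.text, completed: false }];
    default:
      return state;
  }
}

// Store: holds the state tree; dispatching actions is the only way to change it.
const store = createStore(todos);
store.subscribe(() => console.log(store.getState()));
store.dispatch(addTodo('Read the Redux docs'));
// Logs: [ { text: 'Read the Redux docs', completed: false } ]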

Step Five - The Complete Redux Book + Redux Video Series Part 2

There is a great book on Redux that can be had for free, The Complete Redux Book. This is a book written by developers who are building serious React applications. It will help you learn how to architect your application and go deep into the concepts introduced in the previous step. It will also help you understand the basics of functional programming and make working with Redux easier.

Note that this book is on LeanPub and the suggested price is $32, but you can get it for free. If you have the money, consider paying. The authors have done a very good job and it’s worth the money.

The next resource is a second video tutorial series by Dan Abramov—Building React Applications with Idiomatic Redux. There is overlap between these videos and the book. What you choose to do here will depend on how much time you have and what learning style best suits you. If you can, do both.

Another Option
There is a book that runs $39 called FullStackReact. It’s $79 if you want all the code samples and a three-hour screencast. I haven’t read the book, but one of the authors is Tyler McGinnis. I recommended his work in Step Two.

This might be worth a look if you have the funds. One thing I’m cautious about is the emphasis on GraphQL and Relay. Those two technologies—particularly GraphQL—are interesting. They are worth learning. However, if you are going to be building an app that uses a REST API, then maybe postpone the purchase.

Key Takeaways
Congratulations - this is the final step! At this point you should have:

  • Reinforcement of principles from Redux docs and/or first video series
  • Understanding of basic functional programming principles
  • Understanding of how to create and use Redux middleware
  • Understanding of how to architect a React + Redux application

Of course, when it comes to software, our learning is never complete…

Other Things

Learning how to build JavaScript applications can be difficult, particularly if you need to build enterprise software. It doesn’t matter if it’s React, Angular or some other framework or library. However, if you’ve gone through these five steps, which should take a dedicated student about a week, then you have the basic tools you need to get started.

There are a few other things about the React ecosystem I’d like to share that can be the subject of future learning. I won’t go into too much detail, but be aware that you may run into these sooner or later, depending on your projects.

Webpack
Webpack is the primary bundler tool for React applications. It’s discussed in the React Fundamentals course, but you’ll probably have to go deeper at some point. You can use another tool, but finding examples if you get stuck may be difficult. A good, free introduction is this presentation by Emil Oberg and it includes a link to the code he writes in the video. 
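For orientation, a typical React project’s webpack configuration is smaller than you might fear. Here is a minimal, hypothetical webpack 2 config that compiles JSX through Babel and bundles everything into a single file; it assumes a .babelrc with the appropriate React/ES2015 presets.

const path = require('path');

module.exports = {
  entry: './src/index.js', // where the dependency graph starts
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js', // everything ends up in this one file
  },
  module: {
    rules: [
      {
        test: /\.js$/, // run every .js file...
        exclude: /node_modules/, // ...except dependencies...
        use: 'babel-loader', // ...through Babel (for JSX and ES2015+)
      },
    ],
  },
};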

Another good resource—not free—is Webpack 2 the Complete Developer’s Guide by Stephen Grider. This is a good course and is available on Udemy for $10-75. Udemy frequently offers discounts on courses, so you should be able to get a good deal on it.

Server Rendering
Most JavaScript frameworks do not support server rendering without additional work. Server rendered apps are also referred to as isomorphic or universal applications.  At Lullabot, we do a lot of websites with React and that makes server rendering essential. If you’re not familiar with the issue, server rendering builds the initial page on the server before sending it to the browser. This is important for two reasons.

First, search engines don’t render JavaScript. If you don’t have server rendering set up then it’s typically a big hit for your SEO. Meta tags and body content will likely be missing. Not good if you want your app to show up in search results.

Second, it helps with the performance of the app. If you’ve already done an initial render on the server, your page will load up with the initial content while the rest of the app loads in the background. Without server rendering, all of your JavaScript has to first load into the browser and then render. It’s slower.
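To make the idea concrete, here is a bare-bones sketch of server rendering with Express and ReactDOMServer. The App component and bundle path are hypothetical, and a real setup also needs a Babel build step so the server can parse JSX.

import express from 'express';
import React from 'react';
import { renderToString } from 'react-dom/server';
import App from './App'; // hypothetical root component

const server = express();

server.get('*', (req, res) => {
  // Render the app to plain HTML on the server...
  const markup = renderToString(<App />);

  // ...and ship a complete page. The client-side bundle loads afterward
  // and takes over the already-rendered markup.
  res.send(`<!doctype html>
<html>
  <head><title>My App</title></head>
  <body>
    <div id="root">${markup}</div>
    <script src="/bundle.js"></script>
  </body>
</html>`);
});

server.listen(3000);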

With regard to server rendering with Redux, the docs are a good place to start. Another free resource you may find helpful is this post from Emil Ong on Hacker Noon. When I first learned about server rendering, I pieced it together from many different sources. One book that might help is Isomorphic Application Development with JavaScript by Konstantin Tarkus. It costs about $32 for the Kindle version.

If you need server rendering and your head feels full from too much learning, you might consider Next.js, which I will discuss shortly.

Redux Saga
Redux Saga is middleware for Redux. It acts as a single place for side effects in your application. Side effects are often asynchronous things like data fetching, and keeping them contained is an important concept in functional programming—something that is big in the React community.

Using middleware like Redux Saga can help with the architecture of your application. It will certainly make writing tests easier (see Jest and Enzyme to learn more about testing React apps). The downside to Redux Saga is that it adds more mental load, particularly if you aren’t yet familiar with ES6 generators. In the long term, however, it’s a good investment. Consider learning it and other Redux middleware once you’ve got a firm handle on the content in the five steps.
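For a feel of what “contained side effects” looks like, here is a small, hypothetical saga: it watches for FETCH_USER actions and performs the API call inside an ES6 generator, so the effect lives in one predictable place. The api helper and action types are invented for the example.

import { call, put, takeEvery } from 'redux-saga/effects';

// Hypothetical API helper.
const api = {
  fetchUser: (userId) => fetch('/users/' + userId).then((res) => res.json()),
};

// Worker saga: performs the side effect for one FETCH_USER action.
function* fetchUser(action) {
  try {
    const user = yield call(api.fetchUser, action.userId);
    yield put({ type: 'FETCH_USER_SUCCEEDED', user });
  } catch (error) {
    yield put({ type: 'FETCH_USER_FAILED', message: error.message });
  }
}

// Watcher saga: spawns fetchUser on every FETCH_USER dispatched.
export default function* rootSaga() {
  yield takeEvery('FETCH_USER', fetchUser);
}

Because effects like call and put only describe what should happen, tests can step through the generator and assert on those descriptions without ever hitting the network.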

Reselect
Reselect is a selector library for Redux. It can help improve performance by memoizing derived data. For example, if you have a value in your Redux store that needs to be calculated, a Reselect selector won’t recompute it unless one of its inputs changes, preventing unnecessary re-rendering. This can be useful for shopping carts, “likes”, scoring, etc.
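In code, that memoization looks like this. A minimal sketch, assuming a hypothetical shopping-cart slice of state:

import { createSelector } from 'reselect';

// Input selector: a cheap read from the store.
const getCartItems = (state) => state.cart.items;

// Memoized selector: the total is recomputed only when `items` changes,
// so unrelated store updates don't trigger the calculation (or re-renders).
export const getCartTotal = createSelector(
  [getCartItems],
  (items) => items.reduce((sum, item) => sum + item.price * item.quantity, 0)
);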

App Scaffolders
At the beginning of this post, I mentioned Create React App. It’s an app scaffolder. It can help you get started building an app very quickly. I leave it to you to read up on it, but one potential downside is that it doesn’t have server rendering. If you want that, you’ll have to add it on your own.

Another option is Next.js from Zeit. I haven’t used it, but it looks interesting. Maybe it could help you get started. It’s sort of a framework within a framework (React). It does a lot for you and as a result, is opinionated, but there are good docs for it. My concern would be the “black box” nature of it. I would need to understand the internals well before I felt confident using it on a client project. I’d be interested to hear from anyone that has experience with it.

And Finally…

Thanks for hanging with me. This was a long post. I wish I could have written a post called, “How to Quickly Learn React in Five Easy Steps,” but that’s not how it goes. There is a lot to learn and the learning never stops. It’s something I like about software development, but there are times it can be stressful.

If you’ve found yourself on a React project and you feel like you’ve been thrown in the deep end, hang in there. The React community is large, active and helpful. More than any other JavaScript framework, React will help move your career forward. Stick with it and have fun.

How Lullabot Fosters Human Connection for New Hires in a Distributed Company

Coming into a new company, and a new distributed company, in particular, can be overwhelming. When you start a job in a physical building, often someone in the office takes you under their wing (you hope) or is asked to give you a tour. Making connections with people while trying to master new tools and processes is hard. Having one person who is not just answering questions, but perhaps takes you out to lunch or sits down with you over coffee, offers a safe place to begin exploring the new world around you. This person can become your go-to as you acclimate yourself to the general workflow in the office. 

In that same spirit, Lullabot has found a way to embrace this practice in a distributed fashion. We assign each new hire a “Lullabuddy.” Their job is to create a safe environment for a new employee to ask any question about their new job with confidence.

We introduce new hires to their Lullabuddy in their “welcome email,” and the Lullabuddy takes it from there. The Lullabuddy either answers the new hire's questions or points them to the person who can. We often compare starting a new job at Lullabot to being thrown into the deep end of a pool, as there’s no gentle wading into a distributed company. We never want a new hire to feel lost or alone, like they’re drowning in logins and novel communication methods. By giving each new hire a Lullabuddy, Lullabot also hopes to preclude the sense of isolation that sometimes occurs when a person starts a new job from home. It's one of the most effective things Lullabot has found for fostering human connection across time zones and distance.

Over the past few years, we’ve put together a few guidelines that help ensure a good fit for a Lullabuddy.

A Lullabuddy:

  • Remembers what it’s like to be a new hire but has enough experience to guide.
  • Is not your boss and not already a friend, but is your first friend at Lullabot.
  • Takes initiative to check in regularly.
  • When possible, is working on the same initial project.
  • Is not on vacation in the new hire’s first two weeks (heh, we only had to learn that once.)
  • Is available; has time to convey empathy and warmth.
  • Is a good listener; can also read between the lines.
  • Takes notice of the new hire’s Yammer/Slack participation and makes comments.
  • Explains memes and inside jokes.
  • Sets a good example in their communication.
  • Is like a good driving instructor (tells you what to focus on and what to tune out.)
  • Lets management know if something is amiss.
  • Understands and transmits Lullabot’s core values.

Any Lullabot can request to be a Lullabuddy; we just ask that they have the bandwidth in their schedule to commit to the responsibilities. It’s a bonus when the Lullabuddy and the new employee are close geographically. An in-person sync-up for dinner or coffee (on Lullabot) is a wonderful way to start a new job. (Yes, we admit, there's no substitute for face-to-face contact.)

We have evolved the Lullabuddy role so that it follows a checklist in BambooHR with a few tasks. Here's what it looks like:


We like to go over the expectations before a team member commits to the role, so they are clear on what they're signing up to do. The primary goal is to support our new hire, so the role may look a little different depending on an individual employee’s needs. (In addition, we expect human resources to be everyone’s Lullabuddy, from when they first start until they leave Lullabot.)

In an increasingly digital world, fostering the human connection has become extremely important to Lullabot. Our “Be Human” core value emphasizes our belief that it’s these human connections that “bring our work to life.”  Could we start working at Lullabot without Lullabuddies? Yes. Would we feel as supported and integrated into the team? Probably not. Having a Lullabuddy is important not just for new hires, but being a Lullabuddy is important for seasoned team members as well. The Lullabuddy practice is one of the ways Lullabot ensures human connections every day.

A Lullabuddy is for life! That’s not an official stance, but it warms my HR heart to hear people call out to each other, “Hi Lullabuddy!” years after they started at Lullabot. 

Common Pitfalls When Configuring Entity Browser in Drupal 8

One of Drupal’s biggest strengths is its flexibility and extensibility, allowing developers to create, for example, custom-tailored editorial workflows. In this article, we will examine Entity Browser, a powerful component of Drupal’s contributed ecosystem of modules for managing digital media assets.


To make it easier to follow along with the article, let’s consider two fictional characters: Bob, who works as a copy editor for a big news website, and Sylvia, the Drupal developer in charge of the website Bob uses to do his job.


Bob is not very happy with the current solutions he has when editing an article, because:

  • When uploading an image, he is unable to re-use an image uploaded in another article.
  • When uploading an image to a gallery field, he has to upload them individually, and he wishes he could bulk-upload several images at once.
  • When Bob needs to relate articles, he uses an auto-complete field which only accepts the title, forcing him to open a new tab to search for articles by author, category or date, because he doesn't remember all past articles' titles by heart.
  • Sometimes Bob wants to relate an article to a “quote” content type that doesn't exist yet but that he is going to create. Now he needs to stop his work, open a new tab, create the quote content, come back to the article, and then reference the quote. It bothers him a little bit having to switch so often between tabs and contexts.
  • Other things (Bob always wants more).

Sylvia knows how to solve all those needs, but she is not satisfied because the solutions she previously used in Drupal 7 were not standardized, sometimes “overkill," and often difficult to maintain and extend. However, she heard that the Entity Browser module solves all these needs (and more) in a generic and powerful way.


Sylvia already discovered that this module was deliberately created with a very abstract, flexible and “pluggable” architecture. While this allows for powerful tools and workflows to be implemented, the price is often an increased learning curve about how to properly configure it for the different user needs.

This article is intended to help Sylvia (and you) discover Entity Browser. Here you will find:

  • A brief description of the architecture of the module and the central concepts with which you need to be familiar.
  • A list of “browser” examples that can help you understand what is possible.
  • A step-by-step tutorial for configuring a browser from scratch, identifying common pitfalls that often occur in that process.

The basics


If this is your first contact with the Entity Browser module or if you are still wrapping your head around its configuration, there are some key concepts that you should start familiarizing yourself with, indicated below.

Note: The following description uses concepts and terminology that are not easy to grasp at first sight. It’s OK if not everything is 100% clear after the first read. The configuration steps below will certainly help you confront this challenge. In any case, I would highly recommend you take the time to go over the documentation indicated in each of the links below and check the examples there. It's always easier to grasp the concepts with some images and real examples.

The “browser” is a configuration entity that can be created through the UI by the site-builder, or imported from a configuration YAML file. This config entity will base its behavior on the settings defined by the following four plugins:

  • The Display plugin, which is responsible for presenting the entity browser to the end user (editor) in different contexts. Examples of this are “iFrame” or “Modal.”
  • The Widget plugin, which is the tool the editor will use to effectively browse/search/select or create the piece of content they are looking for. Examples of this plugin that ship with the module are “File Upload” or “Views.”
  • The Widget Selector plugin, which, as the name suggests, is responsible for providing a mechanism for the end user to select a specific widget among a set of available ones. Selector examples could be “Tabs” or “Dropdown.”
  • The Selection Display plugin, which provides a “temporary storage area” where the editor will have visual feedback about the entities they've selected.


Since all of these are plugins, other contributed modules (and your custom code, too!) can easily provide new ones. So don’t be surprised if at some point you encounter more options than the ones listed above.

Check some existing examples…

Many modules and distributions showcase what is possible to build using Entity Browser. The following video is based on the File Entity Browser module, which provides a pre-configured browser, along with some nice front-end adjustments.



This is an excellent way to become familiar with the module and the different use cases. These are some of the existing examples you can explore:

Modules that provide pre-configured browsers:

Full-featured Drupal distributions that showcase entity browsers:

…or create your own brand new Entity Browser!

Even if you are using one of the pre-packaged solutions mentioned above, it’s always good to understand how things work behind the scenes. For that, there’s no substitute for creating an entity browser from scratch and seeing for yourself all of the details involved. Doing this will help you discern whether a packaged solution or a custom browser will best address the particular use case involved in your project.

To create a browser from scratch, the first thing to do, after enabling the module, is to navigate to:

Configuration -> Content authoring -> Entity browsers

or go directly to the URL /admin/config/content/entity_browser and click “Add Entity browser.”


Common pitfall 

The module does not depend on the Ctools module to work. However, the multi-step wizard that helps you create and edit entity browsers using the UI does. This means that in order to create a browser as shown here, you need to have Ctools enabled on your site. It’s not necessary to leave it enabled afterward, though. Once your browser is finished and no modifications are foreseen, you can safely disable Ctools in your production environment if you don’t need it.

The wizard will walk you through five steps, each of them intended to define the configuration options for the different plugins we saw before.

Step 1 - General browser configuration

In this step, you define the basic configuration of your browser, and depending on what you choose here, some of the subsequent steps may not need any configuration or will ask for different information.

If you are in doubt about what to choose here, the suggested default values may be just fine for many use cases. In our case, to make the most of the example, we will select:

  • Modal display
  • Tabs selector
  • Multi-step selection display

Common pitfall 

If you plan to use this browser as part of an embedding workflow using the Entity Embed module, you must choose the iFrame display plugin.

Step 2 - Display configuration

This step will ask you for some information specific to the display type you selected in the first step.

In our case, we are asked to indicate the width and height of the modal window, the label of the link/button to launch the browser, and an option to automatically open the browser, when possible (not always recommended).

For our example, we will leave the defaults.

Step 3 - Widget selector configuration

Because we are using the “Tabs” widget selector type, there's no additional configuration needed.

Common pitfall 

Note that when using the multi-step wizard, the config values won’t actually be saved until you go to the last step and click “Finish.” Even if some steps have no configuration, or you just want to edit the configuration of a single step, for now you always need to go to the last one and click “Finish.” You can refer to this issue if you want to help improve this.

Step 4 - Selection display configuration

Because we selected “Multi-step” in a previous step, we have some configuration to do.

In our case, we will define that we'll be working with “File” entities (images), and this “work-in-progress zone” for the entities selected (the Selection display) should show the images as Thumbnails, using the “Thumbnail (100 x 100)” image style.

Step 5 - Widgets configuration


This is the final and most important step to configure our browser. Here we will add as many widgets as we want to expose to the user when they are using the browser.

In our example above, we have added two widgets:

  • Upload: which will allow the user to upload a new image to the site
  • View: allowing the editor to select a pre-existing image from a gallery (which is a view you can customize as well)

Common pitfall 

When using a view as a widget, not all views can be selected here. You must have created beforehand a view with the following characteristics:

- It shows entities of the type being selected by this browser

- It has a display of type “entity_browser”

- It has at least one field of type “Entity browser bulk select form.” This is what will make the checkboxes appear and allow the entities to be selectable.

As long as these traits are present, the view can do any additional filtering and show any information you want to make the selection process easier.

Common pitfall 

View widgets have an additional setting “Automatically submit selection” intended to save the user one click when sending the entities to the “work-in-progress selection area.” Note though that this option should only be used when the Selection Display is set to “Multi-Step.” Failing to do so may produce an error on your site such as:

Drupal\Core\Config\ConfigException: Used entity browser selection display cannot work in combination with settings defined for used selection widget. in Drupal\entity_browser\Form\EntityBrowserForm->isFunctionalForm()

You can refer to this issue if you want more information or are interested in helping improve the site-builder experience here.

Browser created. Now what?

Now you can use it! As mentioned before, there is a lot of abstraction involved so that the browser can be used in many different contexts. Maybe one of the most common scenarios is to use the browser as a field widget to populate entity field values, but the same browser could also be used to support embedding workflows, in a custom form that you build yourself, or in other scenarios you can think of.

Moving on with our example, let’s consider we have a node content type (let’s say the “Article” content type) with an image field where we want to stop using the default core upload widget and start using our brand-new browser instead. All we need to do is head over to:

Structure -> Content types -> Article -> Manage form display

or go directly to /admin/structure/types/manage/article/form-display to change the field widget settings.


Click on the dropdown, select “Entity browser” and then click on the cog on the right to open the browser widget options.


Different browsers may have different options here. In any case, the most important operation you want to perform here is to select from the dropdown list the entity browser you want to use.

Once done, your node creation form should have been updated to use the new field widget!


When you click on “Select entities”, your browser will open:


Note that, as expected, we have two widgets (labeled here “upload” and “view”), displayed as Tabs, presented to the user in a Modal window.

Common pitfall

As seen above, the browser is a “piece” you configure and then plug into other parts of your site. Because it is so abstract, you can sometimes plug it into a place where it won’t work, because the configuration you created for the browser is incompatible with the context you want to use it in. The following are two examples of this:

Misconfiguration scenario 1

Widgets don’t enforce entity types based on the context they are used in. This means that you could potentially use a widget to select an entity of a type incompatible with the field where that widget is being used, resulting in an error such as “The entity must be of type <type>”. You can refer to this issue for more information on this situation.

Misconfiguration scenario 2

Field widgets have a setting that allows you to define how the selection should be handled during the update process. This is called “Selection mode” and you can choose between the options “Append to selection,” “Prepend selection,” and “Edit selection”. As somewhat expected, if you choose an option like “Edit selection,” your browser must have been configured to have a “selection” available, which currently is only possible with the “Multi-step Selection Display” plugin. For example, if you choose the “No selection” plugin in the browser configuration while leaving this “Edit selection” setting on the field widget, you will end up with an error such as “Used entity browser selection display does not support pre-selection.” You can refer to this issue for more information on this.

As you can imagine by now, there are many opportunities to add variations to the process and to the final result. Your view can have other filters, show the elements in a grid instead of a table, etc. By far the best way to understand all the capabilities of this tool is to play around with it yourself and explore some examples provided by others.

There are however two ingredients you can add to the mix that are worth mentioning here: the integration with DropzoneJS and Inline Entity Form.

Enhance your upload widgets with DropzoneJS

DropzoneJS is a very nice open-source library that improves the user experience for file uploading. You can replace the standard Drupal upload widget…


... with a more user-friendly drop zone area:


At the time of this writing, the DropzoneJS module only provides the widget shown above for use inside entity browsers or other module integrations. After this issue is solved, you’ll be able to use it stand-alone in normal Drupal fields too, if you don’t want to have an entity browser around.

Did you know that…

In Drupal 8 the standard file upload widget can handle multiple files and drag-and-drop out of the box?

Create entities “on the fly” with the Inline Entity Form integration

One very powerful integration of the Entity Browser module is with the Inline Entity Form contributed solution. This will allow you to use the very same entity creation form as a widget in your browser.

Please refer to the documentation for more details on how to configure the two ways in which this integration can be implemented on your site. The important thing to keep in mind here is that this can open the door for creating related content without having to leave the main context!

Conclusion

I hope this article helped you (and Sylvia!) in identifying the possibilities behind Entity Browser, and some of the most common issues when configuring it.

Do you have anything else you wish you knew when you first tried it out? Join the conversation! Please leave a comment below and let’s reduce the learning curve together.

Acknowledgements

Thanks to Seth Brown, David Burns and Juampy NR for their help with this article.

Other articles in this series

Hero Photo by Barn Images

Web Accessibility in 2017

Matt and Mike talk with several Lullabot accessibility experts as well as Drupal 8 accessibility co-maintainer Mike Gifford about accessibility topics, tools, and more.

Contenta Makes Your Content Happy

(This article, co-authored with members of the Contenta community, was cross-posted from Medium. Contenta is a community project that some Lullabots are actively contributing to)

Contenta is a Drupal distribution that gives you modern API capabilities out-of-the-box with JSON API, which amongst other features allows you to fetch nested resources in one request, and plans to include a similar GraphQL service soon. It’s ready to feed content to your JavaScript-powered website, phone app, TV, or even the mythical fridge application.
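As a hypothetical illustration of what that buys a consumer (the exact endpoint and field names depend on how your content is modeled), a single request can pull back content plus its related resources via the include parameter:

// One round trip: recipes plus their related image entities.
fetch('/api/recipes?include=image')
  .then((response) => response.json())
  .then(({ data, included }) => {
    // `data` holds the recipes; `included` holds the related resources.
    console.log(data.length + ' recipes, ' + included.length + ' related resources');
  });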

Drupal is a leading content management system when it comes to building smart editorial experiences and modeling complex content, but the front-end needs of consumers have evolved rapidly away from traditional server-side rendered sites. From powering a single application, to multi-channel publishing, Contenta provides all the tools and configuration you need to get started with your Create Once, Publish Everywhere CMS.

With Contenta, you can either start with a minimal blank slate, or try out our demo install which shows you how to solve some of the different problems encountered when developing and building a decoupled application. The demo install also provides many reference implementations of consumers in popular frameworks such as React, Angular, Elm, Ionic, and even an Alexa skill!

(Thanks Sally Young for this summary)

The longer story

It’s 2013 and I’m sitting in a big auditorium watching Lin Clark and Klaus Purer talk about the ongoing effort to expose Drupal 8’s content as a REST server. I am so inspired by their talk that I decide to help out on a sprint. I sit at Klaus’ table but never ask him how I can help. I don’t want to bother him while he is working. He doesn't even know I want to collaborate.

This idea of exposing Drupal’s content and allowing 3rd parties to access the data continues to grow in my head.

Fast forward to the future.

Contenta CMS

Decoupled Drupal allows people outside of the Drupal community to build applications leveraging the best CMS in existence. In theory, you don’t need to be a Drupal developer or understand much of Drupal to build an Ember application powered by Drupal on the back-end.

Still, it is difficult to start from scratch and put together a Drupal site that can be used as a decoupled back-end. You need to understand many drupalisms. You need to know what modules are out there. You need to know which ones are stable and usable. You need to know enough about decoupled back-ends to set up a system that can perform the tasks your project needs. In summary, either you already use Drupal or you are not likely to use decoupled Drupal soon.

Those are the problems that Contenta wants to solve.


We know how complex it can be to set up decoupled Drupal. In fact Contenta is built by the people that built JSON API, Simple OAuth, Subrequests, Docson, Drupal core, etc. That is why we made sure that the first thing we did was provide a quick installer that gives you:

  • The collection of modules you will likely need, instead of you having to research amongst the billions of Drupal modules out there.
  • Demo content to start testing right away that can be reverted with one click.

Here is the first quick installer we released:

The demo applications

The back-end is exciting, but the most exciting part is the number of demo applications that we are working on.

We decided that we wanted to be able to showcase decoupled Drupal very easily to that undecided CTO, or to that stakeholder who heard that Drupal is not the modern way. For that we needed something more complex than a Todo app. That’s why the Contenta team coordinated with the Out of the Box Experience Initiative to build the same product they wanted to build. They also want to show the world what Drupal can do in a real-life project.

Today Contenta facilitates an API with high-quality structured content for a recipe magazine and exposes that data to more than six different projects. Each project builds the same recipe magazine application based on the same wireframes that the Out of the Box Experience Initiative used.

undefined

Imagine Mercè, an Angular developer today. She knows nothing about Drupal but she needs a backend for her app. She can just install Contenta (with one command) and check the (amazing) work Matt Davis and Joao Garin are doing. She can get a BIG jumpstart.

Imagine Masato, who likes working with React. Basing his work on the state-of-the-art example that Sally Young has been perfecting, he can be more productive than ever.

It’s all about the people we are helping. Come and thank the people helping them.

Come and help

We have an open collaboration model. Everything is open to everyone for discussion. Even the weekly API-First meetings—the open forum where we meet—are recorded so other people can be part of the conversation.

We still need help gathering documentation resources about decoupled Drupal for the Knowledge Hub. Do you know any? Suggest a tutorial!


If you want to jump right in, grab an issue in GitHub.

We hang out in the #contenta channel in the Drupal Slack. Come and say hi. Someone will greet you and ask you if you want to collaborate or just be a part of the conversation. Don’t be me in Prague. We want to help you to help the community.

How to Embed Just About Anything in Drupal 8 WYSIWYG with Entity Embed and URL Embed


Embedding media assets (images, videos, related content, etc.) in a WYSIWYG text area is a long-standing need that has been challenging in Drupal sites for a while. In Drupal 7, numerous solutions addressed the problem with different (and sometimes competing) approaches.


Drupal 8 core comes with basic support for embedding images out-of-the-box, but we all know that “ambitious digital experiences” nowadays need to go far beyond that and provide editors with a powerful tool capable of handling any type of media asset, including remote ones such as a relevant Tweet or Facebook post.
 

This is where the powerful combination of Entity Embed + URL Embed comes into play. Building on and extending the foundation established by Drupal core, these two modules were designed to be a standardized solution for embedding any entity or remote URL content in a WYSIWYG editor in Drupal 8.

In this article, you will find:
  • What Drupal 8 core offers out-of-the-box for embedding images inside rich text
  • How to create a WYSIWYG embed button with the Entity Embed module (with some common pitfalls you may encounter along the way)
  • How to embed remote media assets with the URL Embed module (again, with some common pitfalls!)
  • Additional resources on how to extend and integrate your embedding solutions with other tools from the Media ecosystem.

Drupal 8 core embedding

Drupal 8 has made a big step from Drupal 7 and included the CKEditor WYSIWYG library in core. It also comes with an easy-to-use tool to embed local images in your text.


But what if the content you want to embed is not an image?

I am sure you have come across something like this before:


or this:


These are all examples of non-image media assets that needed to be embedded as well. So how can we extend Drupal 8 core’s ability to embed these pieces of content in a rich-text area? 

Enter the Entity Embed module. The Entity Embed module allows any Drupal entity to be embedded within a text area using a WYSIWYG editor.

 The module doesn't care what you embed, as long as that piece of content is stored in Drupal as a standard entity. In other words, you can embed nodes, taxonomy terms, comments, users, profiles, forms, media entities, etc. all in the same way and using a standard workflow.

In order to limit the length of this article, we will be working only with entities provided by Drupal core, but you should definitely try the Media Entity module, which can take your embedding capabilities to a whole new level (more on this at the end of this article).

Basic installation and configuration

Create an embed button

After downloading and installing the Entity Embed module and its dependencies, navigate to 

Configuration -> Content Authoring -> Text editor embed buttons

or go directly to /admin/config/content/embed and click “Add embed button”


You will see that there is already a button on this page called “Node,” which was automatically created upon installation. For the purposes of the example, we will create another button to embed nodes, but you can obviously use and modify the provided one on your site if you wish.

As an example, we’ll create a button to embed a node into another node, simulating a “Related article” scenario in a news article.


The config options you will find on this page are:

  • Label: Give your button a human-readable name and adjust its machine name if needed
  • Embed type: If you only have the Entity Embed module enabled, “Entity” is your only option here. When you install other modules that also extend the Embed framework, other options will be available.
  • Entity type: Choose the type of entity you will be embedding with this button. In our case, we choose “Content” in order to embed nodes.
  • Content type: We can optionally filter the entity bundles (aka “Content types” for nodes) that the user will be allowed to choose from. In our case we only want “Articles”.
  • Allowed Entity Embed Display plugins: Choose the display plugins that will be available when embedding the entity. When an editor chooses an entity to embed, they will be asked how it should be displayed, and the options they see will be restricted to the selections you make here. In our case, we only want the editor to choose between a simple “Label” pointing to the content or a “Teaser” visualization of the node (which uses the teaser view mode). More on this later.
  • Entity browser: In our example, we won’t be using any, but you should definitely try the integration of Entity Embed with Entity Browser, making the selection of the embedded entities much easier!
  • Button icon: You can optionally upload a custom icon to be shown on the button. If left empty, the letter “E” will be used.
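
Embed buttons are simple configuration entities, so once saved, your button can be exported and deployed like any other piece of config. As a rough sketch (the machine name is our assumption, and the exact keys may vary between module versions), the export for our example button could look something like:

id: related_article
label: 'Related article'
type_id: entity
type_settings:
  entity_type: node
  bundles:
    - article
  display_plugins:
    - 'entity_reference:entity_reference_label'
    - 'view_mode:node.teaser'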

Once we’ve created our button, it’s time to add it to the WYSIWYG toolbar.

Entity Embed: WYSIWYG configuration

Navigate to:

Configuration -> Content authoring -> Text formats and editors

or go directly to /admin/config/content/formats and click “configure” on the format you want to add the button to. In our example, we are going to add it to the “Basic HTML” text format.

Step 1: Place the button in the active toolbar


Step 2: Mark the checkbox “Display embedded entities”


Step 3: Make sure the special tags are whitelisted

If you are also using the filter “Limit allowed HTML tags and correct faulty HTML” (which you should), it’s important to make sure that the tags used by this module are allowed by this filter. Scroll down a little bit and verify that the tags:

<drupal-entity data-entity-type data-entity-uuid data-entity-embed-display data-entity-embed-display-settings data-align data-caption data-embed-button>

are listed in the “Allowed HTML tags” text area.


If you are using a recent version of the module, this should be populated automatically as soon as you check “Display embedded entities”, but it doesn’t hurt to verify that it was done correctly.

Common pitfall

If you are embedding images and you want to use “Alt” and “Title” attributes, you probably want to fine-tune the allowed tags to something like the following, instead of the default value provided:

<drupal-entity data-entity-type data-entity-uuid data-entity-embed-display data-entity-embed-display-settings data-align data-caption data-embed-button alt title>


Common pitfall

If you have the filter “Restrict images to this site” active (which is enabled by default in Drupal core), you will probably want to reorder the filters so that “Display embedded entities” comes after “Restrict images to this site”. If you don’t, some of your embedded image entities may come out rendered as broken images.

All set, ready to embed some entities!

Your editors can now navigate to any content form that uses the “Basic HTML” text format and start using the button!
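
For the curious: the button does not copy the embedded node into the text. It stores a placeholder tag, built from the attributes we whitelisted above, which the “Display embedded entities” filter renders at view time. The stored markup looks roughly like this (the UUID here is invented for illustration):

<drupal-entity data-entity-type="node" data-entity-uuid="f1e2d3c4-0000-4000-8000-000000000000" data-entity-embed-display="view_mode:node.teaser" data-embed-button="related_article"></drupal-entity>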


Common pitfall

After this issue got in, the module slightly changed the way it manages the display plugins used to view the embedded entity. As a result, the entity_reference formatter (“Rendered entity”) that many people are used to from other contexts is not available right away; instead, all non-default view modes of the entity type are shown directly as display options.


However, for an entity without custom view modes, you won’t have the “Full” (or “default”) view mode available anymore. There is still some discussion about how to ensure the best user experience around this. If you want to know more, or jump in and help, you can find more information here, here, and here.

Quick and easy solution for remote embedding with URL Embed


A sister module of Entity Embed, the URL Embed module allows content editors to quickly embed any piece of remote content that implements the oEmbed protocol.

Nice tech terms, but what does that mean in practice? It means that content from any of the sites below (among many others) can be embedded:

  • Deviantart
  • Facebook
  • Flickr
  • Hulu
  • IFTTT
  • Instagram
  • National Film Board of Canada
  • Noembed
  • Podbean
  • Revision3
  • Scribd
  • SlideShare
  • SmugMug
  • SoundCloud
  • Spotify
  • TED
  • Twitter
  • Ustream
  • Viddler
  • Vimeo
  • YouTube

Under the hood, the module leverages the awesome open-source Embed library to fetch content from remote sites, and uses the same base framework as Entity Embed to create embed buttons for this type of content. As a result, the site builder gets a very similar workflow when configuring the WYSIWYG tools, which content editors can then use in a very intuitive way. Let’s see how that works.

Install the module and create a URL Embed button

As usual, the first step is to enable the module itself, along with all its dependencies.

Common pitfall

This module depends on the external Embed library. If you are using Composer to download the Drupal module (which you should), Composer will do all the necessary work to fetch the dependencies for you. You can also install just the library with Composer by running composer require embed/embed. Failing to correctly install the library will prevent the module from being enabled, producing an error message instead.


Once the module is successfully enabled, the “Embed button” concept is exactly the same as in the previous example, so all we need to do is go back to /admin/config/content/embed.


The module already creates a new button for us, intended for embedding remote content. URL buttons have no specific configuration except for name, type and icon, so we’ll skip the button configuration part.

URL Embed: WYSIWYG Configuration

We need to follow the same steps we used for the Entity button to enable the URL Embed button on a given text format:

  • Step 1: Place the button in the active toolbar
  • Step 2: Mark the checkbox “Display embedded URLs”
  • Step 3: Make sure the special tags are whitelisted. (Note that here the tags won’t be automatically populated; you need to manually add <drupal-url data-*> to the “Allowed HTML tags” text area.)
  • Step 4: (Optional) If you want, you can also mark the checkbox “Convert URLs to URL embeds”, which will automatically convert URLs in the text into embedded objects. If you do this, make sure you reorder the filters so that “Display embedded URLs” is placed after “Convert URLs to URL embeds”.
All set, go embed some remote content!
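
As with Entity Embed, the text field only stores a placeholder tag, which the “Display embedded URLs” filter expands at render time. It looks roughly like this (example URL invented):

<drupal-url data-embed-url="https://www.youtube.com/watch?v=EXAMPLE"></drupal-url>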

Common pitfall

Please bear in mind that while the Embed library can deal with many remote websites, it won’t do magic with providers that don’t implement the oEmbed protocol in a consistent way! Currently, the module will only render content from providers that return something inside the code property. If some remote content is not appearing correctly on your site, you can troubleshoot by going to https://oscarotero.com/embed3/demo and trying the same URL there. If there is no content inside the code property, that URL unfortunately can’t be embedded in Drupal using URL Embed. Once this issue lands, this will be less of a problem, because validation will prevent the user from entering an unsupported URL in the first place.
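
If you prefer checking from code rather than the demo page, here is a minimal PHP sketch using the same Embed library the module relies on (this assumes the library is installed via Composer, and the URL is just an example):

<?php

require 'vendor/autoload.php';

use Embed\Embed;

// Ask the library to resolve the URL's oEmbed (or fallback) data.
$info = Embed::create('https://vimeo.com/76979871');

if (!empty($info->code)) {
  // URL Embed will be able to render this provider.
  echo $info->code;
}
else {
  // No embed code returned: URL Embed can't render this URL.
  echo "No embeddable code found for this URL.\n";
}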

Get even more by integrating with other Media solutions

There are at least two important ways you can extend the solutions described here to achieve an even more powerful and easy-to-use editorial experience. Both approaches will be discussed in detail in upcoming articles, but here is a brief overview in case you want to start learning about them right away.

Allow content to be selected through Entity Browsers


In the previous example, the content being embedded (in this case a referenced node) was selected by the editor using an autocomplete field. This is a very basic solution, available out-of-the-box for any referenced entity in Drupal core, but it does not provide the polished user experience we would expect from a modern CMS. The good news: it’s an easy fix. Plug any Entity Browser you have on your site into any embed button you have created.

Going over the configuration of Entity Browsers is beyond the scope of this article, but you can read more in the official documentation, or simply give it a try using one of the pre-packaged modules like File Entity Browser, Content Browser, or Media Entity Browser.

Common pitfall

If you are using an Entity Browser to select the content to be embedded, make sure your browser is configured to have a display plugin of type “iFrame.” The Embed options already appear in a modal window, so selecting an Entity Browser that is configured to be shown in a modal window won’t work.

Use the Media Entity module to deal (also) with remote media assets

The URL Embed module is a handy solution to allow your site editors to embed remote media assets in a quick-and-easy way, but what if:

  • you want to standardize your workflow for managing local and remote media assets?
  • you want to have a way to re-use content that was already embedded in other pieces of content?
  • etc.

A possible alternative that would address all of those needs is to standardize how Drupal sees your media assets. The Media Entity approach means that you “wrap” both local and remote media assets in a special “media” entity type. As a result, Entity Embed can be used with any type of asset, since they are all Drupal entities regardless of where they are actually stored.

In Conclusion

Hopefully you found this tutorial useful. I can strongly recommend the “Media Entity” approach, because it is partly being moved into Drupal core, which will result in an even stronger foundation for how Drupal sites handle media assets in the near future.

Acknowledgements

Thanks to Seth Brown, David Burns and Dave Reid for their help with this article.

Image credit: Daian Gan

Clutch Lists Lullabot Among Top Agencies and Developers in Boston

Although our work extends far beyond Boston’s borders, we’re happy to announce that we’ve made Clutch’s list of the top agencies and developers in Beantown! Today, they published this press release, in which we’re recognized as a leader in three matrices: web design, web development, and custom software development.

Using proprietary research to determine the top companies in six “Leaders Matrices,” Clutch reviews vendors in the technology, marketing, and digital industries to give buyers the insights they need to make informed decisions. Their “reviews” are based on service offerings, client reviews, and past experience - in other words, a company’s ability to deliver the very best work and the proof to back it up.

“This honor is attributed to the great work that our talented team continuously produces, and, of course, our collaboration with the clients we’re so fortunate to work with today and those with whom we’ve worked in the past. As with any accolades or awards, we never take these things for granted and will continue to work hard to be the best agency for our clients and our Lullabot team.” - Matt Westgate, our CEO & Co-Founder

We Lullabots love what we do and are always proud of our work, so it’s very exciting to be recognized by an independent research firm like Clutch!

Image Credit: Zoltan Kovacs

Ways to Help the D8 Media Initiative

If you have built a media-rich website in Drupal 7, you already know that Drupal’s support for handling media assets was poor in the past. Common user needs such as “multiple file upload,” “re-usability of assets,” “global and per-user media library,” “integration of remote assets,” “media embedding in WYSIWYG,” etc. were recurrent challenges with no standardized solution.

The Drupal 7 version of Media tried to meet most of these needs with a single, monolithic module. While it did a great job, many site builders and developers perceived it as overkill—too complex for sites that only needed some of its functionality.

With this critique in mind, the Media module team decided to radically change the architecture and split all functionality into smaller, simpler components in Drupal 8. The result is a rich ecosystem of solutions that promotes re-usability, extensibility, and collaboration over competition.

Want to make Drupal the best media-handling solution available in any web framework? Now is the perfect time to help, and we always welcome new contributors. This post outlines some of the main ways you can get involved and make an impact on the Drupal 8 Media initiative right away. Not interested in helping? The following article still offers a nice introduction to the suite of modules that comprise the Media project in D8 and what’s in store for the future.

First of all, what is the “D8 Media initiative”?

This term encompasses all of the work being done to improve Drupal 8’s media-handling functionality.

At DrupalCon New Orleans, based on a survey of content creators, Dries proposed a media initiative to move parts of the contributed ecosystem into Drupal core. To drill into the particulars, visit this issue and read more about the first part of the plan here. You can learn more about the reasons behind this effort here or here, but suffice it to say Drupal needs to be friendly to media and publishers out of the box, rather than requiring the complex ecosystem of contributed modules that exists today.

As you can see, it’s a very broad initiative involving a lot of functionality (and a lot of code!). There is opportunity to help in virtually any area of expertise, and the team most directly involved with the initiative is currently short on resources, so now is a great moment to get involved.

How can I help?

There are several ways, depending on your particular interests, your availability, and your comfort level with coding and contributing to Drupal.

Helping out with documentation

The media initiative has an official handbook intended to provide guidance for people getting familiar with all of the available contrib modules. That documentation is not yet complete, and some existing sections need to be updated to reflect changes that have happened over time.

The process of improving this guide is open to anyone with a GitHub account: simply submit pull requests to the repository, using GitHub’s UI or GitBook if you’re unfamiliar with the command line. If you get stuck on this process and want to help, please feel free to ping @marcoscano in the #drupal-media IRC channel (on freenode) and I’ll be glad to help you get started.

Helping in the issue queues

As mentioned before, the change from a monolithic approach in Drupal 7 to a set of focused modules in Drupal 8 offers many benefits. But it also comes with additional maintenance and support overhead, especially in the issue queues.

If you are already familiar with the issue queues in Drupal, all you need to know is that we try to label all issues that affect the initiative with the “D8Media” tag. You can then use the unified kanban board to navigate through issues from all projects, or you can go to each individual project and start looking for issues to jump on.

If you are just beginning with contribution and want to wade in slowly, take a look at the “Active” issues, try to reproduce them, and verify that the issue summary is complete. Does the issue describe clear steps to reproduce the problem? Issues in the “Needs review” status normally have a patch that would benefit from manual testing as well. You can verify that the patch applies to the latest -dev version, confirm whether it solves the issue, and, if you feel like it, review the code to see if it makes sense. Keep these tips in mind while doing your review.
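
If you have never tried a patch manually before, the basic loop looks something like this (the project name and patch file below are placeholders):

git clone --branch 8.x-1.x https://git.drupal.org/project/some_project.git
cd some_project
wget https://www.drupal.org/files/issues/example-from-the-issue.patch
git apply -v example-from-the-issue.patch
# ...then test the behavior and report your findings back on the issue.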

Finally, when you are comfortable enough, feel free to take any issue in “Active” or “Needs work” status and give it a try yourself! Some useful tips for getting familiar with this process can be found here, here, and here.

Helping with the “Media-in-core” initiative

If you are already comfortable contributing code and want to dive deep into the most active effort of the initiative, you can find the current issues being worked on in the first step of the Media in core plan. This issue lists a series of “must-have,” “should-have,” and “could-have” sub-issues that need to be completed for the desired media functionality to be considered stable in Drupal core. The current expectation is to have the base API for dealing with media entities ready for 8.4.x and the basic user-facing niceties ready by 8.5.x, but these targets are flexible and will depend strongly on the contributions that happen in the coming months.

No matter the path you choose, don’t feel alone!

The team involved in the initiative is diverse. Some people are more directly involved and spend quite a lot of time contributing, while others do what they can occasionally. Wherever you fit, any contribution is welcome. We try to help each other and keep in touch as much as possible. If you need help getting started on something, or want a second opinion on how to approach an issue, feel free to join the #drupal-media IRC channel and ask (be patient though; the answer does not always come right away).

The team also has an official sync-up meeting every Wednesday at 2 p.m. UTC in the #drupal-media IRC channel. Feel free to join and say hello! Also, keep an eye on nearby Drupal events, in case there is a Media-focused sprint happening in parallel to a camp or con.

Happy contributing!

The Ten Commandments of a New Drupal 8 Site for Enterprise Developers

Over the past two years, I’ve had the opportunity to work with many different clients on their Drupal 8 site builds. Each of these clients had a large development team with significant amounts of custom code. After a recent launch, I went back and pulled together the common recommendations we made. Here they are!

1. Try to use fewer repositories and projects

With the advent of Composer for Drupal site building, it feels natural to create many small, individual repositories for each custom module and theme. This has the advantage of feeling familiar, since it mirrors the contrib workflow for Drupal modules, but there are significant costs to this model that only become obvious as code complexity grows.

The first cost is that, at best, every bit of work requires two pull requests: one in the custom module repository, and a second updating composer.lock in the site repository. It’s easy to forget about that second pull request, and in our case, it led to constant questioning by the QA team about whether a given ticket was ready to test.

A second cost is dealing with cross-repository dependencies. For example, in site implementations, it’s really common to do some work in a custom module and then to theme that work in a custom theme. Even if there’s only a master branch, there would still be three pull requests for this work—and they all have to be merged in the right order. With a single repository, you have a choice. A single pull request can be reviewed and merged, or multiple can be filed.

A third, and truly insidious, cost appears when separate repositories become co-dependent and no one knows it. This can happen when modules are only tested in the context of a single site and database, and not as site-independent, reusable modules. Is your QA team testing each project against a stock Drupal install as well as within your site? Are they mixing and matching different tags from each repository when testing? If not, it’s better to just have a single site repository.

2. Start with fewer branches, and add more as needed

Sometimes, it feels good to start a new project by creating all of the environment-specific branches you know you’ll need: develop, qa, staging, master, and so on. But it’s important to ask yourself: is each branch being used? Do we have environments for all of these branches? If not, it’s totally OK to start with a single master branch. If you have multiple git repositories, ask this question for each repository independently. Perhaps your site repository has several branches, while the new SSO module you’re building for multiple sites sticks with just a master branch. Branches should have meaning. If they don’t, they just confuse developers, QA, and stakeholders, leading to deployment mistakes. Delete them.

3. Avoid parallel projects

Once you do have multiple branches, it’s really important to ensure that branches are eventually merged “upstream.” With Composer, it’s possible to have different composer.json files in each branch, such as qa pointing to the develop branch of each custom module while staging points to master. This causes all sorts of confusion, because it effectively means you have two different software products: what QA and developers use, and what site users see. It also means that changes to the project scaffolding have to be made once in each branch, and if you forget, it’s nothing but pain trying to figure out why! Instead, use environment branches to represent the state of another branch at a given time, and then tag those branches for production releases. That way, you know that tag 1.3.2 is identical to some build on your develop branch (even if the hash isn’t identical due to merge commits).
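
To make the anti-pattern concrete, it is the situation where the same dependency is pinned differently per branch (the package name is invented for illustration):

composer.json on the qa branch:      "mycompany/sso_module": "dev-develop"
composer.json on the staging branch: "mycompany/sso_module": "dev-master"

If you see something like this in your site repository, consolidate: one composer.json lineage that flows from develop upward, with tags cut for releases.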

4. Treat merge conflicts as an opportunity

I’ve heard from multiple developers that the real reason for individual repositories for custom modules is to “reduce merge conflicts.” Let’s think about the effect multiple repositories have on a typical Drupal site.

I like to think of merge conflicts as coming in three types. First, there’s the traditional merge conflict, where git refuses to merge a branch automatically: two lines of code have been changed independently, and a developer needs to resolve them. Second, there are logical merge conflicts. These don’t cause a conflict that version control can detect, but they do represent a conflict in the code. For example, two developers might add the same method name to a class, but in different text locations in the class. Git will happily merge these together, but the result is invalid PHP code. Finally, there are functional merge conflicts, where the PHP code is valid but there is a regression or unexpected behavior in related code.

Split repositories don’t have much of an effect on traditional merge conflicts. I’ve found that split repositories make logical conflicts a little harder to manage. Typically, this happens when a base class or array is modified and the developer misses all of the places to update code. However, split repositories make functional conflicts drastically more difficult to handle. Since developers are working in individual repositories, they may not always realize that they are working at cross-purposes. And, when there are dependencies between projects, it requires careful merging to make sure everything is merged in the right order.

If developers working in the same repository discover a merge conflict, it’s not a blocker. It’s a chance to make a friend! Discussing the conflict gives developers the chance to make sure they are solving the right problem, the right way. If a conflict is really complex, it’s an opportunity to refactor the code or to raise the issue with the rest of the team. There’s nothing more exciting than realizing that a merge conflict revealed conflicting requirements.

5. Set up config management early

I’ve seen several Drupal 8 teams delay setting up a deployment workflow that integrates with Drupal 8’s configuration management. Instead, deployments involve pushing code and then manually clicking configuration changes together in the UI. Afterwards, developers pull down the production database to keep up to date.

Unfortunately, manual configuration is prone to error. All it takes is one mistake, and valuable QA time is wasted. It also sidesteps code review of configuration, which is actually possible (and enjoyable) with Drupal 8’s YAML configuration exports.

The nice thing about configuration management tooling is that it typically doesn’t depend on your actual site requirements. Early setup work includes:

  • Making sure each environment pulls in updated configs on deployment
  • Aborting deployments and rolling back if config imports fail
  • Getting the development team comfortable with config basics
  • Setting up the secure use of API keys through environment variables and settings.php.

Doing these things early will pay off tenfold during development.
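
On that last point about API keys, the pattern is simple; the setting and variable names below are invented for illustration:

<?php

// In settings.php: read secrets from the environment at runtime,
// so nothing sensitive is ever committed to version control.
$settings['example_service_api_key'] = getenv('EXAMPLE_SERVICE_API_KEY') ?: '';

Each environment (local, QA, production) then injects its own value for EXAMPLE_SERVICE_API_KEY.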

6. Secure sites early

I recently worked on a site that was only a few weeks away from its production launch. The work was far enough along that the site was available outside of the corporate VPN under a “beta” subdomain. Much to my surprise, the site wasn’t served over HTTPS at all. Worse, the Drupal admin password was the name of the site!

These weren’t things the team had forgotten about, but in the rush of the last few sprints, it was clear the two issues weren’t going to be fixed until a few days before launch. HTTPS setup, in particular, is a great example of an early setup task. Even if you aren’t on your production infrastructure yet, set up SSL certificates anyway, and treat any new environment without SSL as a launch blocker. Consider using Let’s Encrypt if getting proper certificates is a slow process.

This phase is also a good chance to make sure admin and editorial accounts are secure. We recommend that the admin account password is set to a long random string—and then, don’t save or record the password. This eliminates password sharing and encourages editors to use their own separate accounts. Site admins and ops can instead use ssh and drush user-login to generate one-time login links as needed.

7. Make downsyncs normal

Copying databases and file systems between environments can be a real pain, especially if your organization uses a custom Docker-based infrastructure. rsync doesn’t work well (because most Docker containers don’t run ssh), and there may be additional networking restrictions that block the usual sql-sync commands.

This leads many dev teams to hold off on pulling down content to lower environments, because it’s such a pain to do. That workflow throws QA and developers for a loop, because they aren’t testing and working against what production actually is. Even if it has to be entirely custom, it’s worth automating these steps for your environments. Ideally, copying the database and files from one environment to a lower environment should be a one-button click. Doing this early will improve your sprint velocity and give your team the confidence they need in the final weeks before launch.
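
Where environments can reach each other over ssh and Drush site aliases are configured, the classic baseline is just two commands (the alias names are invented); a Docker-based setup may need to replicate the same steps with its own tooling:

drush sql-sync @prod @stage -y
drush rsync @prod:%files @stage:%files -y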

8. Validate deployments

When deploying new code to an environment, it’s important to fail builds if something goes wrong. In a typical Drupal site, you could have errors during:

  • composer install
  • drush updatedb
  • drush config-import
  • The deployment could work, but the site could be broken and returning HTTP 500 error codes

Each deployment should capture the deployment logs and store them. If any step fails, subsequent steps should be aborted, and the site rolled back to its previous state.
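
As a minimal sketch, a Drush-based deployment step could look like the following (your CI system, flags, and rollback mechanics will differ):

#!/bin/sh
set -e  # abort immediately if any step fails

composer install --no-dev --no-interaction
drush updatedb -y        # run pending update hooks
drush config-import -y   # import the exported configuration
drush cache-rebuild

# Smoke test: fail the build if the site is not returning a 2xx response.
curl --fail --silent --output /dev/null https://example.com/

Speaking of…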

9. Automate backups and reverts

When a deployment fails, reverting the site to its pre-deployment state should be nearly automatic. Since Drupal updates touch both the database and the file system, both should be reverted. Database restores tend to be fairly straightforward, though filesystem restores can be more complex if files are stored on S3 or a similar service. If you’re hosted on AWS or a comparable platform, use their APIs and utilities to manage backups and restores where possible; they have internal access to their systems, making backups much more efficient. As a side benefit, this makes downsyncs more robust, as they can be treated as a restore of a production backup instead of a direct copy.

10. Remember #cache

OK, I suppose I mean “remember caching everywhere,” though in D8 it seems render cache dependencies are what’s most commonly forgotten. It’s so easy to fall into Drupal 7 patterns and just create render arrays as we always have. After all, on locals, everything works fine! But forgetting to declare cacheable dependencies on render arrays (via addCacheableDependency) leads to confusing bugs down the line.
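
As a refresher, the pattern looks roughly like this (a simplified sketch): whenever a render array’s output depends on an entity, declare that dependency so the render cache is invalidated when the entity changes.

// Inside a controller, a block's build(), or similar render-producing code:
$build = [
  '#theme' => 'item_list',
  '#items' => [$node->label()],
];

// Without this line, the cached output would survive edits to the node.
\Drupal::service('renderer')->addCacheableDependency($build, $node);

return $build;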

Along the same lines, it’s important to set up invalidation caching early in the infrastructure process. Otherwise, odds are you’ll get to the production launch and be forced to rely on TTL caches simply because the site wasn’t built or tested for invalidation caching. It’s a good practice when setting up a reverse proxy to let Drupal maintain the caching rules, instead of creating them in the proxy itself. In other words, respect Cache-Control and friends from upstream systems, and only override them in very specific cases.

Finally, be sure to test on locals with caches enabled. Sure, disable them while writing code, but afterwards turn them back on and check again. I find incognito or private browsing windows invaluable here, as they let you test as an anonymous user while staying logged in elsewhere. For example, did you just add a config form that changes how the page is displayed? Flip a setting, reload the page as an anonymous user, and make sure the update is instant. If you have to run a drush cache-rebuild for it to work, you know you’ve forgotten #cache somewhere.

What commandments did I miss in this list? Post below and let me know!

Header image from Control room of a power plant.