Lullabot

Beware These Nine Design Elements Your Front-End Developers Hate

Even though Lullabot’s front-end and design teams already work together well, we wanted to improve our relationship as one of many goals for 2016. Applying our UX research techniques inward, the design team used interviews and surveys to learn more about the front-end development team. What followed was thoughtful discussion, appreciation for the other team, some very exciting process ideas, and a laundry list of pet peeves.

One of the more straightforward discoveries was learning what design elements cause the most frustration during implementation, and how difficult each developer perceived them to be (from 1, a quick change, up to 10, the biggest pain in the booty).


Our front-end team was awesome about explaining their perspective on this. My biggest take-away was that none of these are impossible—they can be done, but some come at a cost. In a time-constrained project, we need to consider these implications and prioritize carefully. To help clients and designers make trade-offs, Lullabot often employs what we call a ‘whiz-bang budget’: we offer the client a set amount of points to spend on enhancement features as they choose, based on their importance and effect on performance.

However you approach it, keep these pain points in mind when creating your next design system.

1. Custom Select Elements: 7.8

Styling custom form elements, particularly a select’s drop-down list of options, can be especially thorny. All browsers have their own default styling for form elements, and each browser often needs separate customizations. Some form styles, like a select element’s option list, are essentially impossible to override.

Often, we compromise and style only the default, closed state of a select element. When a user clicks to reveal the list of options, they’ll see the browser’s default menu styles again.

2. Styling Third Party Social Components: 7.1

Social providers, like AddThis, offer sharing and following functionality, but have hardcoded styles that are quite difficult to override. Be warned, these third-party social tools can also bring a slew of performance problems, as many of these tools load extensive JavaScript in order to track shares and collect data for analytics. What might look like a few innocent icons on the screen can seriously slow down your site.

If you do need to provide sharing components, find one that has an appearance you can live with as-is or one that requires minimal changes. If possible, find one that leaves tracking data to a supplemental tool like Google Analytics, which the site probably already uses.

3. Responsive Carousels: 6.6

Carousels are a pet peeve for developers and designers alike. They hide important content behind interaction, and few users ever engage with them. Worse still for front-end developers is converting a group of images that appear separately on larger screens into an interactive carousel for smaller screens.

When time is an issue, work together to find a module or existing code you can start from, like Slick Carousel, then tweak the styling. If you need something custom, make sure the designers have completely worked out the details across various screen sizes, to save refactoring later on.

4. Changing Source Order: 6.3

Source order is especially important as we build responsive designs that shift and stack. We wireframe for mobile and desktop simultaneously; this allows front-end developers to plan early on how to structure markup so that sidebars and the like don’t end up out of order on smaller screens.

If you can’t maintain a single order from screen to screen, talk to the developer about using the Flexbox order property. This lets you change the order in which elements appear from one screen size to another. That said, even when there’s a solution at hand like Flexbox, make sure there’s a meaningful reason to change the order around, as these sorts of switches still take time.

5. Inconsistent Grids, Breaking Them With Full Width Elements: 6.1

Grid systems are regularly a point of misunderstanding, so discuss your intentions early on. For most implementations, breaking out of the columns of the grid or overriding margins can require some funky, funky math.

Choose a grid system together, with a developer, so they can walk you through the limitations before you design something difficult to realize. If a design element needs to extend beyond the grid, breaking out a full width element is much simpler than breaking out only one side or column.

6. Fixed And Overlaid UI: 6

This includes several issues our developers brought up as problematic: modals, overlays, sticky elements, anything appearing over other elements on the page through manipulations of z-index. These can all create awkward scrolling issues, and are especially buggy on mobile. Part of the complication comes from the mobile browsers’ header and footer, which have their own behaviors that can interfere with the designed experience. This effect is even worse when we have fixed elements at the top or bottom of the screen, as you know if you've ever tapped a mobile site footer icon only to have the browser toolbar pop up instead. None of these issues are impossible to solve on their own—they tend to get more complicated when there are multiple elements which need to be sticky on scroll or when multiple elements need to stack on top of each other.

7. Equal Height Grid Elements With Varied Text Lengths: 5.7

If design mocks aren’t created with a variety of real content, you will likely find a surprise when the implementation meets the site’s actual content.

Pull samples of the longest and shortest copy to plan for variation and also provide an example of text wrapping. This helps to ensure that line heights are set correctly in your design program of choice, which is easy to overlook if you only set one row of text.

Remember, using truncation to deal with variable-length content in a fixed space can cause real trouble. You never know where a word may get cut off, turning the well-crafted words in your content into something you never intended.

8. Components That Have Exceptions/Differences: 5.2

This is a pet peeve of mine, as well. That is, not following a style guide to a T. Messy files cause confusion, as do variations that are not distinct enough to be clearly intentional. I hope tools like InVision’s Inspect and Sketch’s Craft plugin will make this frustration obsolete.

Developers, when you see an element within a mock that looks like it should match an existing style, please ask the designer responsible if the variation is intended. Hopefully, that designer’s also created a global style guide for you to reference in these sorts of situations. If not, this might be yet another conversation that needs to happen.

9. Different Mobile Navigation: 4.4

Often, the layout and interaction changes between desktop and mobile navigation are so severe that front-end developers need to create two completely different menus. And we can run into overlaying issues here again.

Your responsive designs are likely possible, but like with many of the other items mentioned, each breakpoint and change adds time.

Good Communication Eases the Pain

Besides learning these workarounds, I’ve felt that iterating on our general process and hand-off helps the team just as much. We don’t just throw things over the wall—everyone cares about the end product, and it shows. The big theme is communication: Talk with a developer early in the design process to discuss ideas and brainstorm suggestions that can prevent trouble down the road. Keep a designer on during the development process to answer questions, review implemented pages, and create additional mocks for any additional needs you uncover. Use wireframes, style guides, and documentation to create shared understanding. Re-connect when a design is approved to walk through details or catch missing UI. Discuss which features we can set aside as future enhancements, or if a set of features need to be completed together in order to work. Almost every time we collaborate, I learn something from our friendly, happy-to-share developers, and they have said they’re learning from us designers as well. Point being, these conversations always, always help.

Lub you, FEDs. Sorry again for all those custom select elements.

HTTPS Everywhere: Quick Start With Cloudflare

This article continues our series about HTTPS, picking up from HTTPS Everywhere: Security is Not Just for Banks. If you own a website and understand the importance of serving sites over HTTPS, the next task is to figure out how to migrate an HTTP website to HTTPS. In this article, I’ll walk through an easy and inexpensive option for migrating your site to HTTPS, especially if you have little or no control over your website server or don't know much about managing HTTPS.

A GitHub Pages Site

I started with the simplest possible example. I have a website hosted on GitHub Pages, a free, shared hosting service that doesn’t directly provide SSL for custom domains. I have no shell access to the server, and I just wanted to get my site switched to HTTPS as easily and inexpensively as possible. I used an example from the Cloudflare blog about how to use Cloudflare SSL for a GitHub Pages site.

Services like Cloudflare can provide HTTPS for any site, no matter where it is hosted. Cloudflare is a Content Delivery Network (CDN) that stands in front of your web site to catch traffic before it gets to your origin website server. A CDN provides caching and efficient delivery of resources, but Cloudflare also provides SSL certificates, and they have a free account option to add any domain to an existing SSL certificate for no charge. With this alternative there is no need to purchase an individual certificate, nor figure out how to get it uploaded and signed. Everything is managed by Cloudflare. The downside of this option is that the certificate will be shared with numerous other unrelated domains. Cloudflare has higher tier accounts that have more options for the SSL certificates, if that’s important. But the free option is an easy and inexpensive way to get basic HTTPS on any site.

It’s important to note that adding another server to your architecture means that content makes another hop between servers. Now, instead of content going directly from your origin website server to the user, it goes from the origin website server to Cloudflare to the user. The default Cloudflare SSL configuration will encrypt traffic between end users and the Cloudflare server (front-end traffic), but not between Cloudflare and your origin website server (back-end traffic). They point out in their documentation that back-end traffic is much harder to intercept, so that might be an acceptable risk for some sites. But for true security you want back-end traffic encrypted as well. If your origin website server has any kind of SSL certificate on it, even a self-signed certificate, and is configured to manage HTTPS traffic, Cloudflare can encrypt the back-end traffic as well with a “Full SSL” option. If the web server has an SSL certificate that is valid for your specific domain, Cloudflare can provide even better security with the “Full SSL (strict)” option. Cloudflare also can provide you with an SSL certificate that you can manually add to your origin server to support Full SSL, if you need that.

The following screenshot illustrates the Cloudflare security options.

Step 1. Add a new site to Cloudflare

I went to Cloudflare, clicked the button to add a site, typed in the domain name, and waited for Cloudflare to scan for the DNS information (that took a few minutes). Eventually a green button appeared that said ‘Continue Setup’.

Step 2. Review DNS records

Next, Cloudflare displayed all the existing DNS records for my domain.

Network Solutions is my registrar (the place where I bought and manage my domain). Network Solutions was also my DNS provider (nameserver) where I set up the DNS records that indicate which IP addresses and aliases to use for my domain. Network Solutions will continue to be my registrar, but this switch will make Cloudflare my DNS provider, and I’ll manage my DNS records on Cloudflare after this change.

I opened up the domain management screen on Network Solutions and confirmed that the DNS information Cloudflare had discovered was a match for the information in my original DNS management screen. I will be able to add and delete DNS records in Cloudflare from this point forward, but for purposes of making the switch to Cloudflare I initially left everything alone.

Step 3. Move the DNS to Cloudflare

Next, Cloudflare prompted me to choose a plan for this site. I chose the free plan option. I can change that later if I need to. Then I got a screen telling me to switch nameservers in my original DNS provider.


On my registrar, Network Solutions, I had to go through a couple of screens, opting to “Change where domain points,” then “Domain Name Server,” and then “point domain to another hosting provider.” That finally got me to a screen where I could input the new nameservers for my domain name.


Back on Cloudflare, I saw a screen like the following, telling me that the change was in progress. There was nothing to do for a while, I just needed to allow the change to propagate across the internet. The Cloudflare documentation assured me that the change should be seamless to end users, and that seemed logical since nothing had really changed so far except the switch in nameservers.


Several hours later, once the status changed from Pending to Active, I was able to continue the setup. I was ready to configure the SSL security level. There were three possible levels. The Flexible level was the default. That encrypts traffic between my users and Cloudflare, but not between Cloudflare and my site’s server. Further security is only possible if there is an SSL certificate on the origin web site server as well as on Cloudflare. GitHub Pages has an SSL certificate on the server, since they provide HTTPS for non-custom domains. I selected the Crypto tab in Cloudflare to choose the SSL security level I wanted and changed the security level to Full.

Step 4. Confirm that HTTPS is Working Correctly

What I had accomplished at this point was to make it possible to access my site using HTTPS with the original HTTP addresses still working as before.

Next, it was time to check that HTTPS was working correctly. I visited the production site, and manually changed the address in my browser from HTTP://www.example.com to HTTPS://www.example.com. I checked the following things:

  • I confirmed there was a green lock displayed by the browser.
  • I clicked the green lock to view the security certificate details (see my previous article for a screenshot of what the certificate looks like), and confirmed it was displaying a security certificate from Cloudflare, and that it included my site’s name in its list of domains.
  • I checked the JavaScript console to be sure no mixed content errors were showing up. Mixed content occurs when you are still linking to HTTP resources on an HTTPS page, since that invalidates your security. I’ll discuss in more detail how to review a site for mixed content and other validation errors in the next article in this series.
Step 5. Set up Automatic Redirection to HTTPS

Once I was sure the HTTPS version of my site was working correctly, I could set up Cloudflare to handle automatic redirection to HTTPS, so my end users would automatically go to HTTPS instead of HTTP.

Cloudflare controls this with something it calls “Page Rules,” which are basically the kinds of logic you might ordinarily add to an .htaccess file. I selected the “Page Rules” tab and created a page rule that any HTTP address for this domain should always be switched to HTTPS.


Since I also want to standardize on www.example.com instead of example.com, I added another page rule to redirect traffic from HTTPS://example.com to HTTPS://www.example.com using a 301 redirect.


Finally, I tested the site again to be sure that any attempt to access HTTP redirected to HTTPS, and that attempts to access the bare domain redirected to the www sub-domain.

A Drupal Site Hosted on Pantheon

I also have several Drupal sites that are hosted on Pantheon and wanted to switch them to HTTPS, as well. Pantheon has instructions for installing individual SSL certificates for Professional accounts and above, but they also suggest an option of using the free Cloudflare account for any Pantheon account, including Personal accounts. Since most of my Pantheon accounts are small Personal accounts, I decided to set them up on Cloudflare as well.

The setup on Cloudflare for my Pantheon sites was basically the same as the setup for my Github Pages site. The only real difference was that the Pantheon documentation noted that I could make changes to settings.php that would do the same things that were addressed by Cloudflare’s page rules. Changes made in the Drupal settings.php file would work not just for traffic that hits Cloudflare, but also for traffic that happens to hit the origin server directly. Pantheon’s documentation notes that you don’t need to provide both Cloudflare page rules and Drupal settings.php configuration for redirects. You probably want to settle on one or the other to reduce future confusion. However, either, or both, will work.

These settings.php changes might also be adapted for Drupal sites not hosted on Pantheon, so I am copying them below.

// From https://pantheon.io/docs/guides/cloudflare-enable-https/#drupal
// Set the $base_url parameter to HTTPS:
if (defined('PANTHEON_ENVIRONMENT')) {
  if (PANTHEON_ENVIRONMENT == 'live') {
    $domain = 'www.example.com';
  }
  else {
    // Fallback value for development environments.
    $domain = $_SERVER['HTTP_HOST'];
  }
  # This global variable determines the base for all URLs in Drupal.
  $base_url = 'https://' . $domain;
}

// From https://pantheon.io/docs/redirects/#require-https-and-standardize-domain
// Redirect all traffic to HTTPS and WWW on live site:
if (isset($_SERVER['PANTHEON_ENVIRONMENT']) &&
  ($_SERVER['PANTHEON_ENVIRONMENT'] === 'live') &&
  (php_sapi_name() != "cli")) {
  if ($_SERVER['HTTP_HOST'] != 'www.example.com' ||
    !isset($_SERVER['HTTP_X_SSL']) ||
    $_SERVER['HTTP_X_SSL'] != 'ON') {
    header('HTTP/1.0 301 Moved Permanently');
    header('Location: https://www.example.com' . $_SERVER['REQUEST_URI']);
    exit();
  }
}
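
If your Drupal site is not hosted on Pantheon, the same idea can be adapted to other environments. The following is only a minimal sketch of what that adaptation might look like, assuming the site sits behind Cloudflare (which sends an X-Forwarded-Proto header describing the original request) and that www.example.com stands in for your real domain:

// A rough, hypothetical adaptation for non-Pantheon hosting; adjust the
// environment checks to match your own setup before relying on it.
if (php_sapi_name() != 'cli') {
  // Treat the request as HTTPS if the web server says so, or if Cloudflare
  // reports that the original request used HTTPS.
  $is_https = (!empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] != 'off')
    || (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https');
  if ($_SERVER['HTTP_HOST'] != 'www.example.com' || !$is_https) {
    header('HTTP/1.0 301 Moved Permanently');
    header('Location: https://www.example.com' . $_SERVER['REQUEST_URI']);
    exit();
  }
  $base_url = 'https://www.example.com';
}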

There was one final change I needed to make to my Pantheon sites that may or may not be necessary for other situations. My existing sites were configured with A records for the bare domain. That configuration uses Pantheon’s internal system for redirecting traffic from the bare domain to the www domain. But that redirection won’t work under SSL. Ordinarily you can’t use a CNAME record for the bare domain, but Cloudflare uses CNAME flattening to support a CNAME record for the bare domain. So once I switched DNS management to Cloudflare’s DNS service, I went to the DNS tab, deleted the original A record for the bare domain and replaced it with a CNAME record, then confirmed that the HTTPS bare domain properly redirected to the HTTPS www sub-domain.

Next, A Deep Dive

Now that I have basic SSL working on a few sites, it’s time to dig in and try to get a better understanding about HTTPS/SSL terminology and options and see what else I can do to secure my web sites. I’ll address that in my next article, HTTPS Everywhere: Deep Dive Into Making the Switch.

Building Views Query Plugins for Drupal 8, Part 2

Welcome to the second installment of our three-part series on writing Views query plugins. In part one, we talked about the kind of thought and design work that must take place before coding begins. In part two, we’ll start coding our plugin and end up with a basic functioning example.

We’ve talked explicitly about needing to build a Views query plugin to accomplish our goal of having a customized Fitbit leaderboard, but we’ll also need field plugins to expose that data to Views, filter plugins to limit results sets, and, potentially, relationship plugins to span multiple API endpoints. There’s a lot to do, so let's dive in.
 

Getting started

In Drupal 8, plugins are the standard replacement for info hooks. If you haven’t yet had cause to learn about the plugin system in Drupal 8, I suggest the Drupalize.Me Drupal 8 Module Development Guide, which includes an excellent primer on Drupal 8 plugins.

Step 1: Create a views.inc file

Although most Views hooks required for Views plugins have gone the way of the dodo, there is still one that survives in Drupal 8: hook_views_data. The Views module looks for that hook in a file named [module].views.inc, which lives in your module's root directory. hook_views_data and hook_views_data_alter are the main things you’ll find here, but since Views is loading this file automatically for you, take advantage and put any Views-related procedural code you may need in this file.

Step 2: Implement hook_views_data()

Usually hook_views_data is used to describe the SQL tables that a module is making available to Views. However, in the case of a query plugin it is used to describe the data provided by the external service.

/**
 * Implements hook_views_data().
 */
function fitbit_views_example_views_data() {
  $data = [];

  // Base data.
  $data['fitbit_profile']['table']['group'] = t('Fitbit profile');
  $data['fitbit_profile']['table']['base'] = [
    'title' => t('Fitbit profile'),
    'help' => t('Fitbit profile data provided by the Fitbit API\'s User Profile endpoint.'),
    'query_id' => 'fitbit',
  ];

  return $data;
}

The format of the array is usually $data[table_name]['table'], but since there is no table I’ve used a short name for the Fitbit API endpoint, prefixed by the module name instead. So far, I’ve found that exposing each remote endpoint as a Views “table”—one-to-one—works well. It may be different for your implementation. This array needs to declare two keys: ‘group’ and ‘base’. When the Views UI refers to your data, it uses the ‘group’ value as a prefix. The ‘base’ key, meanwhile, alerts Views that this table is a base table—a core piece of data available to construct views from (just like nodes, users, and the like). The value of the ‘base’ key is an associative array with a few required keys. The ‘title’ and ‘help’ keys are self-explanatory and are also used in the Views UI. When you create a new view, ‘title’ is what shows up in the “Show” drop-down under “View Settings”:


The ‘query_id’ key is the most important. The value is the name of our query plugin. More on that later.

Step 3: Expose fields

The data you get out of a remote API isn’t going to be much use to people unless they have fields they can display. These fields are also exposed by hook_views_data.

// Fields.
$data['fitbit_profile']['display_name'] = [
  'title' => t('Display name'),
  'help' => t('Fitbit users\' display name.'),
  'field' => [
    'id' => 'standard',
  ],
];
$data['fitbit_profile']['average_daily_steps'] = [
  'title' => t('Average daily steps'),
  'help' => t('The average daily steps over all the user\'s logged Fitbit data.'),
  'field' => [
    'id' => 'numeric',
  ],
];
$data['fitbit_profile']['avatar'] = [
  'title' => t('Avatar'),
  'help' => t('Fitbit users\' account picture.'),
  'field' => [
    'id' => 'fitbit_avatar',
  ],
];
$data['fitbit_profile']['height'] = [
  'title' => t('Height'),
  'help' => t('Fitbit user\'s height.'),
  'field' => [
    'id' => 'numeric',
    'float' => TRUE,
  ],
];

The keys that make up a single field definition include ‘title’ and ‘help’—again self-explanatory—used in the Views UI. The ‘field’ key is used to tell Views how to handle this field. There is only one required sub-key, ‘id’, and it’s the name of a Views field plugin.

The Views module includes a handful of field plugins, and if your data fits one of them, you can use it without implementing your own. Here we use standard, which works for any plain text data, and numeric, which works for, well, numeric data. There are a handful of others. Take a look inside /core/modules/views/src/Plugin/views/field to see all of the field plugins Views provides out-of-the-box. Find the value for ‘id’ in each field plugin's annotation. As an aside, Views eats its own dog food and implements a lot of its core functionality as Views plugins, providing examples for when you're implementing your Views plugins. A word of caution, many core Views plugins assume they are operating with an SQL-based query back-end. As such you’ll want to be careful mixing core Views plugins in with your custom query plugin implementation. We’ll mitigate some of this when we implement our query plugin shortly.

Step 4: Field plugins

Sometimes data from your external resource doesn’t line up with a field plugin that ships with Views core. In these cases, you need to implement a field plugin. For our use case, avatar is such a field. The API returns a URI for the avatar image. We’ll want Views to render that as an <img> tag, but Views core doesn’t offer a field plugin like that. You may have noticed that we set a field ‘id’ of ‘fitbit_avatar’ in hook_views_data above. That’s the name of our custom Views field plugin, which looks like this:

<?php

namespace Drupal\fitbit_views_example\Plugin\views\field;

use Drupal\views\Plugin\views\field\FieldPluginBase;
use Drupal\views\ResultRow;

/**
 * Class Avatar
 *
 * @ViewsField("fitbit_avatar")
 */
class Avatar extends FieldPluginBase {

  /**
   * {@inheritdoc}
   */
  public function render(ResultRow $values) {
    $avatar = $this->getValue($values);
    if ($avatar) {
      return [
        '#theme' => 'image',
        '#uri' => $avatar,
        '#alt' => $this->t('Avatar'),
      ];
    }
  }

}

Naming and file placement are important, as with any Drupal 8 plugin. Save the file at: fitbit_views_example/src/Plugin/views/field/Avatar.php. Notice the namespace follows the file path, and also notice the annotation: @ViewsField("fitbit_avatar"). The annotation declares this class as a Views field plugin with the ‘id’ ‘fitbit_avatar’, hence the use of that name back in our hook_views_data function. Also important, we're extending FieldPluginBase, which gives us a lot of base functionality for free. Yay OO! As you can see, the render method gets the value of the field from the row and returns a render array so that it appears as an <img> tag.

Step 5: Create a class that extends QueryPluginBase

After all that setup, we’re almost ready to interact with a remote API. We have one more task: to create the class for our query plugin. Again, we’re creating a Drupal 8 plugin, and naming is important so the system knows that our plugin exists. We’ll create a file named: 

fitbit_views_example/src/Plugin/views/query/Fitbit.php 

...that looks like this:

<?php

namespace Drupal\fitbit_views_example\Plugin\views\query;

use Drupal\views\Plugin\views\query\QueryPluginBase;

/**
 * Fitbit views query plugin which wraps calls to the Fitbit API in order to
 * expose the results to views.
 *
 * @ViewsQuery(
 *   id = "fitbit",
 *   title = @Translation("Fitbit"),
 *   help = @Translation("Query against the Fitbit API.")
 * )
 */
class Fitbit extends QueryPluginBase {
}

Here we use the @ViewsQuery annotation to identify our class as a Views query plugin, declaring our ‘id’ and providing some helpful meta information. We extend QueryPluginBase to inherit a lot of free functionality. Inheritance is a recurring theme with Views plugins. I’ve yet to come across a Views plugin type that doesn’t ship with a base class to extend. At this point, we’ve got enough code implemented to see some results in the UI. We can create a new view of type Fitbit profile and add the fields we’ve defined and we’ll get this:


Not terribly exciting; we still haven’t queried the remote API, so it doesn’t actually do anything. But it’s good to stop here to make sure we haven’t made any syntax errors and that Drupal can find and use the plugins we’ve defined.

As I mentioned, parts of Views core assume an SQL-query backend. To mitigate that, we need to implement two methods which will, in a sense, ignore core Views as a way to work around this limitation.  Let’s get those out of the way:

public function ensureTable($table, $relationship = NULL) {
  return '';
}

public function addField($table, $field, $alias = '', $params = array()) {
  return $field;
}

ensureTable is used by Views core to make sure that the generated SQL query contains the appropriate JOINs to ensure that a given table is included in the results. In our case, we don’t have any concept of table joins, so we return an empty string, which satisfies plugins that may call this method. addField is used by Views core to limit the fields that are part of the result set. In our case, the Fitbit API has no way to limit the fields that come back in an API response, so we don’t need this. We’ll always provide values from the result set, which we defined in hook_views_data. Views takes care to only show the fields that are selected in the Views UI. To keep Views happy, we return $field, which is simply the name of the field.

Before we come to the heart of our plugin query, the execute method, we’re going to need a couple of remote services to make this work. The base Fitbit module handles authenticating users, storing their access tokens, and providing a client to query the API. In order to work our magic then, we’ll need the fitbit.client and fitbit.access_token_manager services provided by the base module. To get them, follow a familiar Drupal 8 pattern:

/**
 * Fitbit constructor.
 *
 * @param array $configuration
 * @param string $plugin_id
 * @param mixed $plugin_definition
 * @param FitbitClient $fitbit_client
 * @param FitbitAccessTokenManager $fitbit_access_token_manager
 */
public function __construct(array $configuration, $plugin_id, $plugin_definition, FitbitClient $fitbit_client, FitbitAccessTokenManager $fitbit_access_token_manager) {
  parent::__construct($configuration, $plugin_id, $plugin_definition);
  $this->fitbitClient = $fitbit_client;
  $this->fitbitAccessTokenManager = $fitbit_access_token_manager;
}

/**
 * {@inheritdoc}
 */
public static function create(ContainerInterface $container, array $configuration, $plugin_id, $plugin_definition) {
  return new static(
    $configuration,
    $plugin_id,
    $plugin_definition,
    $container->get('fitbit.client'),
    $container->get('fitbit.access_token_manager')
  );
}

This is a common way of doing dependency injection in Drupal 8. We’re grabbing the services we need from the service container in the create method, and storing them on our query plugin instance in the constructor. 
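
For the constructor, the create method, and the execute method we’re about to add to work, the class also needs the corresponding use statements at the top of the file. Here is a minimal sketch of those imports; the first two assume the Fitbit base module defines its client and access token manager classes in the Drupal\fitbit namespace:

use Drupal\fitbit\FitbitAccessTokenManager;
use Drupal\fitbit\FitbitClient;
use Drupal\views\Plugin\views\query\QueryPluginBase;
use Drupal\views\ResultRow;
use Drupal\views\ViewExecutable;
use Symfony\Component\DependencyInjection\ContainerInterface;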

Now we’re finally ready for the heart of it, the execute method:

/**
 * {@inheritdoc}
 */
public function execute(ViewExecutable $view) {
  if ($access_tokens = $this->fitbitAccessTokenManager->loadMultipleAccessToken()) {
    $index = 0;
    foreach ($access_tokens as $uid => $access_token) {
      if ($data = $this->fitbitClient->getResourceOwner($access_token)) {
        $data = $data->toArray();

        $row['display_name'] = $data['displayName'];
        $row['average_daily_steps'] = $data['averageDailySteps'];
        $row['avatar'] = $data['avatar'];
        $row['height'] = $data['height'];

        // 'index' key is required.
        $row['index'] = $index++;

        $view->result[] = new ResultRow($row);
      }
    }
  }
}

The execute method is open ended. At a minimum, you’ll want to assign ResultRow objects to the $view->result[] member variable. As was mentioned in the first part of the series, the Fitbit API is atypical because we’re hitting the API once per row. For each successful request we build up an associative array, $row, where the keys are the field names we defined in hook_views_data and the values are made up of data from the API response. Here we are using the Fitbit client provided by the Fitbit base module to make a request to the User profile endpoint. This endpoint contains the data we want for a first iteration of our leaderboard, namely: display name, avatar, and average daily steps. Note that it’s important to track an index for each row. Views requires it, and without it, you’ll be scratching your head as to why Views isn’t showing your data. Finally, we create a new ResultRow object with the $row variable we built up and add it to $view->result. There are other things that are important to do in execute like paging, filtering and sorting. For now, this is enough to get us off the ground.
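
As a teaser for where execute could go next, here is a rough sketch, not part of the example module, of how simple paging might be layered on top: slice the rows we built up according to the view’s pager settings. It assumes the pager has been initialized by Views, and a real implementation would also need to report the total row count back to the pager so page links render correctly.

// Hypothetical paging sketch: trim the aggregated rows to the current page.
$limit = $view->pager->getItemsPerPage();
$offset = $view->pager->getCurrentPage() * $limit + $view->pager->getOffset();
$view->result = array_slice($view->result, $offset, $limit ?: NULL);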

That’s it! We should now have a simple but functioning query plugin that can interact with the Fitbit API. After following the installation instructions for the Fitbit base module, connecting one or more Fitbit accounts and enabling the fitbit_views_example sub-module, you should be able to create a new View of type Fitbit profile, add Display name, Avatar, and Average Daily Steps fields and get a rudimentary leaderboard:

Debugging problems

If the message ‘broken or missing handler’ appears when attempting to add a field or other type of handler, it usually points to a class naming problem somewhere. Go through your keys and class definitions and make sure that you’ve got everything spelled correctly. Another common issue is Drupal throwing errors because it can’t find your plugins. As with any plugin in Drupal 8, make sure your files are named correctly, put in the right folder, with the right namespace, and with the correct annotation.

Summary

Most of the work here has nothing to do with interacting with remote services at all—it is all about declaring where your data lives and what it's called. Once we get past the numerous steps that are necessary for defining any Views plugins, the meat of creating a new query plugin is pretty simple.

  1. Create a class that extends QueryPluginBase
  2. Implement some empty methods to mitigate assumptions about a SQL query backend
  3. Inject any needed services
  4. Override the execute method to retrieve your data into a ResultRow object with properties named for your fields, and store that object on the results array of the Views object.

In reality, most of your work will be spent investigating the API you are interacting with and figuring out how to model the data to fit into the array of fields that Views expects.

Next steps

In the third part of this article, we’ll look at the following topics:

  1. Exposing configuration options for your query object
  2. Adding options to field plugins
  3. Creating filter plugins

Until next time!

Making Drupal 8 API First with RESTful Web Services

Matt and Mike sit down with the developers who are leading the REST initiatives in Drupal 8, and discuss the current landscape, and what's on the horizon. We discuss JSON API, GraphQL, changes to Drupal core, and more.

Building Views Query Plugins for Drupal 8, Part 1

Three years ago Greg Dunlap wrote a series of articles about building Views query plugins in Drupal 7. A lot has changed since then. Drupal 8 has been out for a year now and Views is in core. I recently had an opportunity to write a Views query plugin for a Drupal 8 project, and it worked out surprisingly well. So, how has the process of implementing a Views query plugin changed? As in the original series, we’ll take a look at how you build one from scratch.

The original article series used Flickr as an example of a remote service you could expose to Views via a query plugin. The Flickr API module that was used does not have a Drupal 8 port, so I needed to port something for the series. We’re big Fitbit users at Lullabot, myself included, so I thought it might be fun to use the Fitbit API as an example of what can be done with Views query plugins. We’ll use the Fitbit API, exposed via a Views query plugin, to build our own custom leader board for our Drupal site.

What’s a Views query plugin?

Since Views 3 in Drupal 7, you have been able to write your own plugin to replace Views’ built-in SQL-query engine. This means that you can make Views query against any kind of data source, using the same Views UI your site administrators are accustomed to. The most common use case is to create views that query a remote web service.

The big picture of how you write a Views query plugin hasn’t changed much. You’ll still go through the same steps as in Drupal 7, so we’ll follow the original article series and divide it into three parts:

  1. Planning and modeling your data
  2. Creating a basic query plugin
  3. Exposing configuration options and handling arguments and filters

Probably the biggest change in writing Views query plugins in Drupal 8 is the use of Drupal 8’s plugin system. It’s helpful to have a general understanding of Drupal 8 plugins as we’ll skip over some of those details. The Drupalize.me Drupal 8 Module Development Guide is an excellent source for general information about Drupal 8 plugins.

Let’s do it!

Modeling your plugin data

One of the first things you need to do before coding your plugin is sit down and think about how the data being returned from your API maps to the data that Views expects. There are a lot of moving parts when writing a Views query plugin, so it’s good to resist the temptation to dive right into code. Let's first figure out the nature of the remote API and what we need to do to transform the data into a Views-friendly format.

Views is designed to represent tabular data, the basis of which is a row of fields. Many API endpoints do not follow this model. For example, a single request may contain a lot of nested data. The query plugin I wrote on a recent project wrapped a Search endpoint that returned faculty member search results. The results contained arrays of research interests and degrees. It was easy enough to flatten these arrays into comma separated lists and present to Views a single field, but it’s not difficult to imagine more complex data. Perhaps the results should include details about the Department the faculty member belongs to. Complex nested data may necessitate a Views relationship plugin.

The Fitbit API exposes a number of endpoints. We’ll first hone in on the User endpoint. It’s pretty straightforward in that it returns nearly tabular data. Issue a request and you get back a list of key-value pairs. It’s also got just enough data in the response to put together a simple leader board based on users’ average daily steps. Here is an example of what it returns (clipped for brevity):

{ "user": { "age": 32, "avatar": "https://d6y8zfzc2qfsl.cloudfront.net/B8395E1F-346C-1E31-E74C-0AE2512A38BD_profile_100_square.jpg", "avatar150": "https://d6y8zfzc2qfsl.cloudfront.net/B8395E1F-346C-1E31-E74C-0AE2512A38BD_profile_150_square.jpg", "averageDailySteps": 7334, "displayName": "Matthew O.", "topBadges": [ { "image100px": "https://static0.fitbit.com/images/badges_new/100px/badge_daily_steps30k.png", "name": "Trail Shoe (30,000 steps in a day)", }, { "image100px": "https://static0.fitbit.com/images/badges_new/100px/badge_lifetime_miles1997.png", "name": "India (3,213 lifetime kilometers)", } ], } }

There are a few values with nested data, in particular topBadges, but we’ll see that it’s not that difficult to manage these. From a Views standpoint, we want to expose each of these pieces of data as a field.
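
To make that concrete, here is a minimal sketch, not from the original module, of how a nested value like topBadges could be flattened into a single comma-separated field inside the query plugin. It assumes a hypothetical top_badges field defined in hook_views_data and the decoded response shown above in $data:

// Flatten the nested topBadges array into one comma-separated string so it
// can be exposed to Views as a plain text field ('top_badges' is hypothetical).
$badge_names = array_map(function (array $badge) {
  return $badge['name'];
}, $data['user']['topBadges']);
$row['top_badges'] = implode(', ', $badge_names);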

There is a wrinkle when it comes to the Fitbit API and our ability to translate it for Views; perhaps you’ve already noticed it. Each response contains only a single user. The API does not allow for multiple users’ data to be returned by a single request. This is because the data can be very personal; values like gender, height, and weight are included in some responses, but only with the individual user's explicit permission. Fitbit’s API follows the OAuth 2.0 Authorization Code Grant. Under that scheme, your application, in this case our Drupal module, has to be registered with Fitbit. Then, in order to query the API on behalf of users, each user must grant access to the application. When a user grants access, they have the option of selectively granting scope; for example, they can choose to share activity data, such as steps, distance, calories burned, and active minutes, but choose to omit profile data, which includes values like height and weight. Authentication is outside the scope of this article, but take a look at the Fitbit base module if you’re interested.

What does this mean for our Views implementation? Well, unlike an SQL backend, which can return many rows from a single query, we’ll need to write our Views query plugin such that we loop over all of our authenticated users and make a single API request per user to aggregate the “table” we give back to Views. This kind of complexity is the trickiest part of writing a Views query plugin. Remember, Views deals in tabular data, so you have to analyze your remote data source and, anywhere it doesn’t fit the bill, you need to translate it for Views.

Data spread across more than one API request represents another common hang-up. Here, too, you have to aggregate the data and then translate it into a tabular format. There are a couple of ways you can do this: you can either hit each endpoint in turn and present the aggregate fields to Views as a single “table,” or you can implement a Views relationship plugin. By implementing a Views relationship plugin you can defer the decision of which API endpoints are hit to the site administrator.

Take our Fitbit module for example. The Fitbit API has an endpoint to retrieve user profile data, and a separate endpoint to retrieve daily activity summary data. To surface data from both endpoints to Views, we could just hit both endpoints all the time and give the aggregate result back to Views, however, that could be wasteful. The site administrator may have no interest in daily activity summary data. Instead, we can use Views relationships to give the site administrator the option to opt in to daily activity summary data. When the relationship is present, our Views query plugin will know to hit both endpoints.

Other considerations

If the number of authenticated Fitbit users on our site grew, we’d reach a point where performance and rate limiting would become a concern. We will probably want to investigate a caching strategy to reduce the number of round trips. Another concern I had at the beginning of the project was that I preferred not to interact directly with the API, but to instead use a module or library that abstracted a lot of that work away for me. In particular, I thought it would be nice to not have to write the Fitbit authentication code myself.

There wasn’t any module available for Drupal 8, but I did find a Composer library that I could use to do a lot of that work for me. The OAuth 2.0 Client Library together with the Fitbit provider provided a great stepping stone. So in true Drupal 8 fashion, I stepped off the island and harnessed the power of these external dependencies to take care of the heavy lifting around authentication. The author of the Fitbit provider was even open to pull requests, making the experience much more fun.

Conclusion

When writing any piece of sufficiently complex code, taking the time to think about your problem and the best way to solve it will pay dividends down the road. Writing a set of Views plugins is no exception. You need to think about the data you have, the data Views expects, and how to deal with the complications that arise when the two don’t fit together perfectly. If you’re designing your own system from scratch, you have the luxury of building the APIs 'just so' to fit your desired use case. Sadly, life is rarely so neat. Resist the temptation to dive straight into code, and first figure out what it is you need to build.

In part 2 of this series, we’ll go through the steps of building our plugin, ending up with the simple use case of having a Fitbit leaderboard. Until next time!

Fewer meetings and more writing

"Meetings Are Toxic" is one of my favorite articles. So true. I refer a lot of people to it when I want to reduce the number of meetings that we have. Don't get me wrong, it's good to meet every once in a while, but only when it's every-once-in-a-while.

Over the years, I have met many teams that tend to discuss everything in meetings. Rarely do these meetings have an agenda and, rarer still, does everyone come with enough insight to get straight to the point. In web development, topics like "Discuss the current deployment process" or "Discuss testing strategies" are black holes that will swallow your team's precious time unless there is asynchronous research and collaboration beforehand.

Don't meet. Write and collaborate.

When I need to discuss something, and I want to avoid a meeting, I write my thoughts in a document, share it with the team, and discuss it via suggestions, edits, and comments.

Here's an example: I wanted to propose changing the way that some code was submitting video files to a transcoding platform but, before creating a ticket, I wanted to discuss the implementation with my colleagues Andrew and Dave. Instead of scheduling a meeting or pinging them via Slack, I wrote the following document in Dropbox Paper:


Next, I invited Andrew and Dave to review it. They made changes to it and left some comments with questions, which led to a discussion:


Once there were no further edits to do and we resolved all comment threads, the meeting was done. But wait, there was no meeting! That is the great thing about this process: not only did the three of us agree on how to implement that change in the code, but we also took our time to think, research, and discuss. Plus now we have written proof that we can reuse for the next step, which in this case meant creating a ticket to work on the solution that we agreed upon.

If we look at the document's change history, we can see a chronology that ends with me creating a ticket at the top:

Silently introducing this process within your team

The best thing about this process is that you don't need to force it. Instead, you can just do it the next time that you want to discuss something. Write down what you would like to discuss with the rest of the team and invite them to collaborate. If you persist with this process for a few weeks, you will end up attending fewer meetings and getting much more work done. If the team does not come to an agreement while collaborating in a document, then a meeting may help to wrap up what is left, but make it a short meeting, and only after you've gone through this process!

If the team likes the process (this is a cultural change, some will, others won’t), then you could use the following template to describe it:

  1. Someone from the team writes the goal, the scenario, and then either proposes a solution or brainstorms ideas. I use Dropbox Paper or Google Docs.
  2. This person then invites the rest of the team to collaborate.
  3. The team reviews the document, which evolves with edits here and there, reverts, comments, further research, new ideas, etc.
  4. Once all comment threads have been resolved and there are no further edits, the document is complete. The next step would depend on the original goal.

Give it a go next time and let me know how it goes!

Hero photo by Startup Stock Photos.

HTTPS Everywhere: Security is Not Just for Banks

Why Does HTTPS Matter?

HTTPS has been around for a while, but it’s generally not well-understood. Many people know that sites using HTTPS instead of HTTP will display a lock next to the URL to tell users that the site is safe to use. Everyone knows that big ecommerce websites have to use HTTPS. Many people are aware that HTTPS is a good idea for login pages and other form pages on any site. But does it matter for everyday web pages and sites? Increasingly, the answer is YES, even for small sites and non-form pages.

HTTPS protects end users from eavesdroppers and other threats. Because of all the security ramifications of plain HTTP, Google is putting its considerable weight behind efforts to encourage websites to become more secure with an “HTTPS Everywhere” initiative.

HTTPS is also a requirement for some new interactive functionality, like taking pictures, recording audio, enabling offline app experiences, or geolocation, all of which require explicit user permissions. So, there are many reasons for website owners and users to pay attention to it.

What Does Insecurity Look Like?

As an experiment, to see exactly what level of security HTTPS gives the user, I visited two sites, one HTTP, and one HTTPS. Our Senior Systems Administrator, Ben Chavet, acted like an eavesdropper. He wasn’t even sitting next to me. He was 800 miles away watching my traffic over the VPN I was using. It took him just a few minutes to pick up what I was doing. What he did could have been done by someone in a coffee shop on a shared network, or by a “Man-in-the-Middle” somewhere between me and the sites I was accessing.

When I logged into the plain HTTP site, my “eavesdropper” could see everything I did, in plain text, including the full path I was visiting, along with my login name and password. He could even get my session cookie, which would allow him to impersonate me. Here’s a screen shot of some of the information he was able to view.


But when I logged into a site protected by HTTPS, the only thing that was legible to my “eavesdropper” was the domain name of the site, and a couple of other bits of information from the security certificate as it was being processed. Everything else was encrypted. 


There are other problems with plain HTTP. An eavesdropper could steal session cookies to emulate a legitimate user and gain access to information they shouldn’t be able to see. If an attacker has access to a plain HTTP page, they could change links on the page, perhaps to redirect a user to another site. And if a site encrypts form submissions but not the page containing the form, an attacker can modify the form to post to a different URL. A valid HTTPS page is not vulnerable to these kinds of changes.

Clearly, HTTPS offers a huge security benefit!

What Does HTTPS Provide?

Let’s back up a bit. What exactly does HTTPS give us? It’s two things, really. First, it’s a way to ensure data integrity and make sure that traffic sent over the internet is encrypted. Secondly, it’s a system that provides authentication, meaning an assurance that the site a user is looking at is the site they think they are looking at.

In addition to obfuscating the user’s activity and data, HTTPS means the identity of the site is authenticated using a certificate which has been verified by a trusted third party.

If you get to a site using HTTPS instead of HTTP, you are accessing a site that purports to be secure. On an HTTPS connection, the browser you use (i.e. Internet Explorer, Safari, Chrome, or Firefox) and the site’s server will communicate with each other. The browser expects the server to provide a certificate of authenticity and a key the browser can use to encode and decode messages between the browser and the server. If the browser gets the information it requires from a secure site, it will display a safety lock in the address bar. If anything seems amiss, the browser will warn the user. Problems on an HTTPS page could be a missing, invalid, or expired certificate or key, or “mixed content” (HTTP content or resources that should never be included on an HTTPS page).

Identity, data integrity, and encryption are all important. A bogus site could still be encrypting its traffic, and a site that is totally legitimate might not be encrypting its traffic. A really secure site will both encrypt its traffic and also provide evidence that it is the site it purports to be.

How Do Users Know a Site is Secure?

Browsers provide messages for insecure sites. The specific messages vary from browser to browser, and depend on the situation, but might include text like “This page may not be secure.” or “The certificate is not trusted because it is self signed.” Most browsers display some color-coding that is expected to help convey the security status.  

If a site is rendered only over HTTP, browsers usually don’t indicate anything at all about the security of the site, they just provide a plain URL without a lock. This provides no information, but also no assurance of any kind. And as noted above, unencrypted internet traffic over HTTP is still a potential security risk.

The following chart illustrates a range of possibilities for browser security status indicators (note that EV is a special type of HTTPS certificate that provides extra assurance, like for bank and financial sites, more about that later):


For more information about the HTTPS security, users can click on the lock icon. The specific details they see will vary from browser to browser, but generally, there is a link with text like “More details” or “View certificate” that will allow the user to see who owns the certificate and other details about it.


Research about how well end users understand HTTPS security status and messages found that most users don’t understand and ultimately ignore security warnings. Users often miss the lock, or lack of a lock, and find the highly technical browser messages to be confusing. The focus on color to indicate security status is a problem for those that are color blind. Also, so many sites still use HTTP or are otherwise potentially insecure that it is easy for users to discount the risk and proceed regardless. The conclusion of all this research is that better systems need to be put in place to make it clear to users which sites are secure and which aren’t, and to encourage more sites to adhere to recommended security best practices.

A while ago, Chrome started making it easier for users to understand how secure a site is. These examples use a combination of color and shape to convey what’s secure and what isn’t. Currently, the plain HTTP site is more noticeably marked as a potential security threat.


Starting in January of 2017, they plan to add text saying ‘Secure’ or ‘Not secure’ for even more emphasis:


Other browsers may follow suit to make plain HTTP look more noticeably insecure. Between the user safety, the SEO hit, and the security warnings that may scare people away from sites using plain HTTP, no legitimate site can really afford to ignore the implications of not serving content over HTTPS.

What Do All the Terms Mean? 

HTTPS terminology is confusing. There is a lot of jargon and countless acronyms. If you read anything about HTTPS, you can quickly get lost in a sea of unfamiliar terminology. Here is a list of definitions to help make things more clear.

Secure Socket Layer (SSL)

SSL is the original standard used for encrypted traffic sent over HTTP. It has actually been superseded by TLS, but the term is still used in a generic way to refer to either SSL or TLS.

Transport Layer Security (TLS)

TLS is the new variation of SSL, but it’s a newer, more stringent, protocol. TLS is not just for web browsers and HTTP, it can also be used with non-HTTP applications. For instance, it can be used to provide secure email delivery. TLS is the layer where encryption takes place.

HTTPS

HTTPS is just a protocol that indicates that HTTP includes the extra layer of security provided by TLS.

Certificate Authority (CA)

A CA is an organization that provides and verifies HTTPS certificates. “Self-signed” certificates don’t have any indication about who they belong to. Certificates should be signed by a known third party.

Certificate Chain of Trust

There can be one or more intermediate certificates, creating a chain. This chain should take you from the current certificate all the way back to a trusted CA.

Domain Validation (DV)

A DV certificate indicates that the applicant has control over the specified DNS domain. DV certificates do not assure that any particular legal entity is connected to the certificate, even if the domain name may imply that. The name of the organization will not appear next to the lock since the controlling organization is not validated. DV certificates are relatively inexpensive, or even free. It’s a low level of authentication, but provides assurance that the user is not on a spoofed copy of a legitimate site.

Extended Validation (EV)

Extended Validation certificates validate the legal entity that controls the domain as well as the fact that they have actual control over the domain. The name of the verified legal identity is displayed in the browser, in green, next to the lock. EV certificates are more expensive than DV certificates because of the extra work they require from the CA. EV certificates convey more trust, so are appropriate for financial and commerce sites.

Next Steps

It seems pretty clear that HTTPS is important. In my next article, HTTPS Everywhere: Making the Switch, I’ll talk about what it takes to migrate a site from HTTP to HTTPS.

More Reading

  • How HTTPS works
  • How HTTPS affects SEO ranking
  • Browser clues about website security
  • How a password can be stolen over an insecure connection
  • Types of Certificates

Talking Laravel with Matt Stauffer

Matt & Mike talk with "Laravel Up and Running" author Matt Stauffer about the Laravel PHP framework and how it differs from Drupal and from PHP as a whole. They are joined by Lullabot developers Andrew Berry and Matt Robison.

Drupal as a Political Act

Like so many of you, both here in the United States and elsewhere, I am deeply troubled by the incidents of hateful harassment and the threats to democracy that have spiked since November 8. My thoughts have been routinely consumed with the task of analyzing my work and motivations, trying to detect any positive impact that my contributions have on the world. Because of my involvement in the Drupal community, my thoughts are as much about Drupal as they are about me. I am a proud member of our community and I cannot help but reflect on how the organization of our community brings people together, discourages hate, and promotes democracy.

What it would mean to "make the world better" is up for debate. We cannot be experts in all subjects, and a group of Drupal developers might not fully understand, for example, the policies that allow tax havens, the economic implications of a $15 minimum wage, how to combat predatory lending, or the solutions to climate change. Perhaps we have strong opinions on these topics, but many of us would begrudgingly admit that we know more about dependency injection, re-rolling patches, or even the hook system. That is, we know how to build Drupal websites. More importantly, to succeed in the Drupal community we are required to be considerate, respectful, and collaborative. We, as a community, vigorously reject bigotry, racism, sexism, homophobia, and xenophobia. This, in my view, makes the world better.

What is more, I would argue that Drupal blurs traditional boundaries. While certainly there are market forces that determine how Drupal is constructed, powerful legal and cultural nonmarket forces push back. Some Drupal agencies exist to turn a profit, but do so working primarily with public sector or non-profit organizations. Drupal agencies can be seen as capitalist in the sense that they accumulate surplus value by "exploiting the working class," but socialist in the sense that they produce goods that are owned collectively. Some have stated goals to invest value back into the community and others are "benefit corporations," required to make the world a better place. While I am tempted to place new labels on the Drupal community, such as "post-capitalist," I find such terms to be of limited use, and I am far more interested in finding common ground that unites our community.

Drupal code has only limited value without the community, and our community stands for values that transcend our code. I participate in the Drupal community because I believe it represents ideals that are consistent with my own. One of the beliefs that we hold in high regard is "doing good." It would be difficult to convince me that people, such as George DeMet and Tiffany Farriss, Todd Ross Nienkerk, or Lev Tsypin, have anything but the best intentions in the way they run their businesses. More importantly, these individuals, like so many others in our community, actually do make the world a better place through their work, compassion, and advocacy.

In some respects, the well-intentioned subset of our community exemplifies what Luc Boltanski and Eve Chiapello describe as "the new spirit of capitalism." In their study of management textbooks, they find this "new spirit" is characterized by, among other things, a "high moral tone" (58), a belief that workers should "control themselves" (80), structures where managers are essentially replaced with customers (82), and where bureaucracy represents a kind of "totalitarianism and arbitrariness" that should be avoided (85). While Boltanski and Chiapello find many faults with this "new spirit," generally, I would suggest that it has become more important than ever to acknowledge the many benefits that the people and organizations in our community have for the world. While critique and criticism will surely be needed, we should also continue to celebrate the impact that our software and colleagues play in efforts towards ending poverty, empowering independent journalists, defending the free and open Internet, and educating people. Even though Drupal has been used for nefarious purposes, and there are many reasons to critique the Drupal community, I feel emboldened knowing that when people came together to build websites for DeanSpace, the United Nations, Amnesty International, Greenpeace, Oxfam, the ACLU, the Electronic Frontier Foundation, National Public Radio, Free Press, and the White House, they chose Drupal.

More than just software, part of the reason we "stay for the community" is because we place such a high premium on human interaction. Drupal contributors create public goods (free software) that can be used by one person without reducing the availability to others. If the public relations departments of mega-corporations extol the value of business and markets, while criticizing government and fair labor, the Drupal community takes an alternative approach that values solidarity. In this sense, our democratic practices threaten unjust power. Throughout history people in power have pushed back against the democratizing effects of solidarity to defend their positions of power. In his 1776 magnum opus on political economy, Wealth of Nations, Adam Smith famously observed, "All for ourselves and nothing for other people, seems, in every age of the world, to have been the vile maxim of the masters of mankind." With every Drupal Camp, DrupalCon, code sprint, community summit, and user group meeting we gather together in solidarity. Let us not forget all we do to encourage hope and camaraderie.

If you are discouraged by a world that turns workers against each other and treats citizens as consumers, pushing them to the malls rather than the public library, remember that we as a Drupal community are pushing back against the "masters of mankind." In the 1970s, Buckley v. Valeo may have determined that money is a form of speech, but because we work together, Drupal becomes another kind of speech. Most of us (the working class) must sell our labor in return for a wage or salary. So what I am arguing is not for our community to become noncommercial or anti-commercial, but instead that we consider expanding our horizon of expectations to allow for a conception of Drupal as a political act. I want us to celebrate our community and stand up against hate, inequality, corruption, and depoliticization. If that idea makes you uncomfortable, then perhaps consider the words of the historian Howard Zinn and his suggestion that what matters are "the countless deeds of unknown people who lay the basis for the events of human history." I hope that we can find common ground, build on what we have accomplished, and organize against the forces that seek to divide us against ourselves.

Pull Content From a Remote Drupal 8 Site Using Migrate and JSON API

I wanted to find a way to pull data from one Drupal 8 site to another, using JSON API to expose data on one site, and Drupal’s Migrate with a JSON source on another site to consume it. Much of what I wanted to do was undocumented and confusing, but it worked well, once I figured it out. Nevertheless, it took me several days to get everything working, so I thought I’d write up an article to explain how I solved the problem. Hopefully, this will save someone a lot of time in the future.

I ended up using the JSON API module, along with the REST modules in Drupal Core on the source site. On the target site, I used Migrate from  Drupal Core 8.2.3 along with Migrate Plus and Migrate Tools.

Why JSON API?

Drupal 8 Core ships with two ways to export JSON data. You can access data from any entity by appending ?_format=json to its path, but that means you have to know the path ahead of time, and you’d be pulling in one entity at a time, which is not efficient.

You could also use Views to create a JSON endpoint, but it might be difficult to configure it to include all the required data, especially all the data from related content, like images, authors, and related nodes. And you’d have to create a View for every possible collection of data that you want to make available. To further complicate things, there's an outstanding bug using GET with Views REST endpoints.

JSON API provides another solution. It puts the power in the hands of the data consumer. You don’t need to know the path of every individual entity, just the general path for an entity type and bundle. For example: /api/node/article. From that one path, the consumer can select exactly what they want to retrieve just by altering the URL. For example, you can sort and filter the articles, limit the fields that are returned to a subset, and bring along any or all related entities in the same query. Because of all that flexibility, that is the solution I decided to use for my example. (The Drupal community plans to add JSON API to Core in the future.)
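
For instance, a single hypothetical request (illustrative only; the parameter names follow the JSON API specification as implemented by the module at the time) might combine several of these options:

http(s)://sourcesite.com/api/node/article?_format=api_json&sort=-created&page[limit]=5&include=field_image

That would return the five most recent articles, newest first, with their related image files included in the same response.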

There’s a series of short videos on YouTube that demonstrate many of the configuration options and parameters that are available in Drupal’s JSON API.

Prepare the Source Site

There is not much preparation needed for the source because of JSON API’s flexibility. My example is a simple Drupal 8 site with an article content type that has a body and field_image image field, the kind of thing core provides out of the box.

First, download and install the JSON API module. Then, create YAML configuration to “turn on” the JSON API. This could be done by creating a simple module that has YAML file(s) in /MODULE/config/optional. For instance, if you created a module called custom_jsonapi, a file that would expose node data might look like:

filename: /MODULE/config/optional/rest.resource.entity.node.yml

id: entity.node
plugin_id: 'entity:node'
granularity: method
configuration:
  GET:
    supported_formats:
      - json
    supported_auth:
      - basic_auth
      - cookie
dependencies:
  enforced:
    module:
      - custom_jsonapi

To expose users or taxonomy terms or comments, copy the above file, and change the name and id as necessary, like this:

filename: /MODULE/config/optional/rest.resource.entity.taxonomy_term.yml

id: entity.taxonomy_term
plugin_id: 'entity:taxonomy_term'
granularity: method
configuration:
  GET:
    supported_formats:
      - json
    supported_auth:
      - basic_auth
      - cookie
dependencies:
  enforced:
    module:
      - custom_jsonapi

That will support GET, or read-only access. If you wanted to update or post content you’d add POST or PATCH information. You could also switch out the authentication to something like OAuth, but for this article we’ll stick with the built-in basic and cookie authentication methods. If using basic authentication and the Basic Auth module isn’t already enabled, enable it.
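
If you manage modules from the command line, enabling everything needed on the source site might look like this with Drush (assuming the machine names jsonapi and basic_auth, plus the hypothetical custom_jsonapi module described above):

drush en jsonapi basic_auth custom_jsonapi -y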

Navigate to a URL like http://sourcesite.com/api/node/article?_format=api_json and confirm that JSON is being output at that URL.

That's it for the source.

Prepare the Target Site

The target site should be running Drupal 8.2.3 or higher. The file import process changed in that release, and the approach described below won't work in earlier versions. The site should already have a matching article content type and field_image field ready to accept the articles from the other site.

Enable the core Migrate module. Download and enable the Migrate Plus and Migrate Tools modules. Make sure to get the versions that are appropriate for the current version of core. Migrate Plus had 8.0 and 8.1 branches that only work with outdated versions of core, so currently you need version 8.2 of Migrate Plus.
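
From the command line, that might look something like this with Drush (machine names migrate, migrate_plus, and migrate_tools):

drush en migrate migrate_plus migrate_tools -y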

To make it easier, and so I don’t forget how I got this working, I created a migration example as the Import Drupal module on Github. Download this module into your module repository. Edit the YAML files in the /config/optional  directory of that module to alter the JSON source URL so it points to the domain for the source site created in the earlier step.

It is important to note that if you alter the YAML files after you first install the module, you'll have to uninstall and then reinstall the module to get Migrate to see the YAML changes.
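
One way to force that refresh from the command line (a sketch, assuming the example module's machine name is import_drupal):

drush pm-uninstall import_drupal -y
drush en import_drupal -y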

Tweaking the Feed Using JSON API

The primary path used for our migration is (where sourcesite.com is a valid site):

http(s)://sourcesite.com/api/node/article?_format=api_json

This will display a JSON feed of all articles. The articles have related entities. The field_image field points to related images, and the uid/author field points to related users. To view the related images, we can alter the path as follows:

http(s)://sourcesite.com/api/node/article?_format=api_json&include=field_image

That will add an included array to the feed that contains all the details about each of the related images. This way we won’t have to query again to get that information; it will all be available in the original feed. I created a gist with an example of what the JSON API output at this path would look like.

To include authors as well, the path would look like the following. In JSON API you can follow the related information down through as many levels as necessary:

http(s)://sourcesite.com/api/node/article?_format=api_json&include=field_image,uid/author

Swapping out the domain in the example module may be the only change needed to the example module, and it's a good place to start. Read the JSON API module documentation to explore other changes you might want to make to that configuration to limit the fields that are returned, or sort or filter the list.
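
As an illustrative example (not part of the example module's shipped configuration, and using the query syntax the JSON API module supported at the time), a feed restricted to published articles, newest first, might use a path like:

http(s)://sourcesite.com/api/node/article?_format=api_json&include=field_image&filter[status][value]=1&sort=-created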

Manually test the path you end up with in your browser or with a tool like Postman to make sure you get valid JSON at that path.
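
If you prefer the command line to Postman, a quick check with curl (username and password here are placeholders for the basic auth credentials configured on the source site) might look like:

curl -u username:password "http://sourcesite.com/api/node/article?_format=api_json&include=field_image"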

Migrating From JSON

I had a lot of trouble finding any documentation about how to migrate into Drupal 8 from a JSON source. I finally found some in the Migrate Plus module. The rest I figured out from my earlier work on the original JSON Source module (now deprecated) and by trial and error. Here’s the source section of the YAML I ended up with, when migrating from another Drupal 8 site that was using JSON API.

source:
  plugin: url
  data_fetcher_plugin: http
  data_parser_plugin: json
  urls: http://sourcesite.com/api/node/article?_format=api_json
  ids:
    nid:
      type: integer
  item_selector: data/
  fields:
    - name: nid
      label: 'Nid'
      selector: /attributes/nid
    - name: vid
      label: 'Vid'
      selector: /attributes/vid
    - name: uuid
      label: 'Uuid'
      selector: /attributes/uuid
    - name: title
      label: 'Title'
      selector: /attributes/title
    - name: created
      label: 'Created'
      selector: /attributes/created
    - name: changed
      label: 'Changed'
      selector: /attributes/changed
    - name: status
      label: 'Status'
      selector: /attributes/status
    - name: sticky
      label: 'Sticky'
      selector: /attributes/sticky
    - name: promote
      label: 'Promote'
      selector: /attributes/promote
    - name: default_langcode
      label: 'Default Langcode'
      selector: /attributes/default_langcode
    - name: path
      label: 'Path'
      selector: /attributes/path
    - name: body
      label: 'Body'
      selector: /attributes/body
    - name: uid
      label: 'Uid'
      selector: /relationships/uid
    - name: field_image
      label: 'Field image'
      selector: /relationships/field_image


One by one, I’ll clarify some of the critical elements in the source configuration.

File-based imports, like JSON and XML, use the same pattern now. The main variation is the parser; for JSON and XML, the parser is in the Migrate Plus module:

source:
  plugin: url
  data_fetcher_plugin: http
  data_parser_plugin: json

The url is the place where the JSON is being served. There could be more than one URL, but in this case there is only one. Reading through multiple URLs is still pretty much untested, but I didn’t need that:

urls: http://sourcesite.com/api/node/article?_format=api_json

We need to identify the unique id in the feed. When pulling nodes from Drupal, it’s the nid:

ids:
  nid:
    type: integer

We have to tell Migrate where in the feed to look to find the data we want to read. A tool like Postman (mentioned above) helps figure out how the data is configured. When the source is using JSON API, it’s an array with a key of data:

item_selector: data/

We also need to tell Migrate what the fields are. In the JSON API, they are nested below the main item selector, so they are prefixed using an xpath pattern to find them. The following configuration lets us refer to them later by a simple name instead of the full path to the field. I think the label would only come into play if you were using a UI:

fields:
  - name: nid
    label: 'Nid'
    selector: /attributes/nid

Setting up the Image Migration Process

For the simple example in the Github module we’ll just try to import nodes with their images. We’ll set the author to an existing author and ignore taxonomy. We’ll do this by creating two migrations against the JSON API endpoint, first one to pick up the related images, and then a second one to pick up the nodes.

Most fields in the image migration just need the same values they’re pulling in from the remote file, since they already have valid Drupal 8 values, but the uri value has a local URL that needs to be adjusted to point to the full path to the file source so the file can be downloaded or copied into the new Drupal site.

Recommendations for how best to migrate images have changed over time as Drupal 8 has matured. As of Drupal 8.2.3 there are two basic ways to process images: one for local images and a different one for remote images. The process steps are different from those in earlier examples I found, and there is not a lot of documentation about this. I finally found a Drupal.org thread where the file import changes were added to Drupal core and did some trial and error on my migration to get it working.

For remote images:

source:
  ...
  constants:
    source_base_path: 'http://sourcesite.com/'
process:
  filename: filename
  filemime: filemime
  status: status
  created: timestamp
  changed: timestamp
  uid: uid
  uuid: id
  source_full_path:
    plugin: concat
    delimiter: /
    source:
      - 'constants/source_base_path'
      - url
  uri:
    plugin: download
    source:
      - '@source_full_path'
      - uri
    guzzle_options:
      base_uri: 'constants/source_base_path'

For local images change it slightly:

source:
  ...
  constants:
    source_base_path: 'http://sourcesite.com/'
process:
  filename: filename
  filemime: filemime
  status: status
  created: timestamp
  changed: timestamp
  uid: uid
  uuid: id
  source_full_path:
    plugin: concat
    delimiter: /
    source:
      - 'constants/source_base_path'
      - url
  uri:
    plugin: file_copy
    source:
      - '@source_full_path'
      - uri

The above configuration works because the Drupal 8 source uri value is already in the Drupal 8 stream wrapper format, public://image.jpg. If migrating from a pre-Drupal 7 or non-Drupal source, that uri won’t exist in the source. In that case you would need to adjust the process for the uri value to something more like this:

source:
  constants:
    is_public: true
  ...
process:
  ...
  source_full_path:
    - plugin: concat
      delimiter: /
      source:
        - 'constants/source_base_path'
        - url
    - plugin: urlencode
  destination_full_path:
    plugin: file_uri
    source:
      - url
      - file_directory_path
      - temp_directory_path
      - 'constants/is_public'
  uri:
    plugin: file_copy
    source:
      - '@source_full_path'
      - '@destination_full_path'

Run the Migration

Once you have the right information in the YAML files, enable the module. On the command line, type this:

drush migrate-status

You should see two migrations available to run.  The YAML files include migration dependencies and that will force them to run in the right order. To run them, type:

drush mi --all

The first migration is import_drupal_images. This has to be run before import_drupal_articles, because field_image on each article is a reference to an image file. This image migration uses the path that includes the related image details, and just ignores the primary feed information.
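
That ordering is declared in the article migration's YAML rather than enforced by hand; a minimal sketch of the relevant section (assuming the migration IDs used in this example) looks like:

migration_dependencies:
  required:
    - import_drupal_images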

The second migration is import_drupal_articles. This pulls in the article information using the same url, this time without the included images. When each article is pulled in, it is matched to the image that was pulled in previously.
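
The matching happens in the article migration's process section. A rough sketch of what that mapping might look like (the process plugin was named migration in core 8.2, and the exact source value depends on how the relationship data is extracted from the feed):

process:
  field_image:
    plugin: migration
    migration: import_drupal_images
    source: field_image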

You can run one migration at a time, or even just one item at a time, while testing this out:

drush migrate-import import_drupal_images --limit=1

You can roll back and try again.

drush migrate-rollback import_drupal_images

If all goes as it should, you should be able to navigate to the content list on your new site and see the content that Migrate pulled in, complete with image fields. There is more information about the Migrate API on Drupal.org.

What Next?

There are lots of other things you could do to build on this. A Drupal 8 to Drupal 8 migration is easier than many other things, since the source data is generally already in the right format for the target. If you want to migrate in users or taxonomy terms along with the nodes, you would create separate migrations for each of them that would run before the node migration. In each of them, you’d adjust the include value in the JSON API path to pull the relevant information into the feed, then update the YAML file with the necessary steps to process the related entities.

You could also try pulling content from older versions of Drupal into a Drupal 8 site. If you want to pull everything from one Drupal 6 site into a new Drupal 8 site, you would just use the built-in Drupal-to-Drupal migration capabilities, but if you want to selectively pull some items from an earlier version of Drupal into a new Drupal 8 site, this technique might be useful. The JSON API module won’t work on older Drupal versions, so the source data would have to be processed differently, depending on what you use to set up the older site to serve JSON. You might need to dig into the migration code built into Drupal core for Drupal-to-Drupal migrations to see how Drupal 6 or Drupal 7 data had to be massaged to get it into the right format for Drupal 8.

Finally, you can adapt the above techniques to pull any kind of non-Drupal JSON data into a Drupal 8 site. You’ll just have to adjust the selectors to match the format of the data source, and do more work in the process steps to massage the values into the format that Drupal 8 expects.

The Drupal 8 Migrate module and its contributed helpers are getting more and more polished, and figuring out how to pull in content from JSON sources could be a huge benefit for many sites. If you want to help move the Migrate effort forward, you can dig into the Migrate in core initiative and issues on Drupal.org.

Rebuilding POP in D8: Configuration Management

In my last article, I talked about new options for setting up a development environment in Drupal 8. Having done that, I need to set up a workflow for development as well as a process for deploying changes. Additionally, for this project, I have a partner that I need to share changes with before they go live. It would be really nice for her to be able to preview these changes and push them live after review.

As mentioned in my last article, I have a pretty basic setup for this project. I do the work on my local laptop, keep my code in a private repository on GitHub, and have two virtual hosts at my hosting provider. One is a preview site that is password-protected so that we can review changes before they go live, and one is the production website.


There is no custom code at all in the Pinball Outreach Project website outside the theme (and precious little there), but there is a lot of custom configuration that needs to be pushed around. Additionally, some of that configuration is tough to test outside of a site that is publicly accessible. Thankfully, Drupal 8 brings the gift of CMI.

The simplest workflow

I realize I may be biased, given my role as the former CMI initiative lead, but I have to say, the D8 configuration management system is an absolute joy. It allows a variety of workflows and performs precisely as advertised. I created a pretty simple workflow for this project, but one of the beauties of the system is that you can create a workflow as simple or complex as needed.

Using the built-in Configuration Synchronization tool that ships with Drupal 8 provides the simplest possible workflow. Go to:

admin/config/development/configuration/full/export

...and download your site's entire configuration as a tarball. You can then go to:

admin/config/development/configuration/full/import

...on the destination site and upload that tarball into the new system. Once you have uploaded your configuration, you are presented with a list of the items that have changed and have the opportunity to review each one.


In this example, I have changed the view called 'Press' to sort descending instead of ascending. If you are happy with these changes, then you can click Import All to integrate them into your site.

While this doesn't give you all the functionality a more technical user might require, it works and would be sufficient for a simpler site.

My workflow

I wanted to have a little more control over my configuration and maintain the ability to experiment and roll those changes back if I decided I didn't like them. My process, which is still basic as deployment workflows go, works like this (a consolidated command sketch follows the list):

  • I make some changes through the admin UI on my local development environment.
  • I open up a terminal to this environment and run drush config-export -y from the project's root directory. This exports all configuration to your config export directory (see the last article in this series for details about how that is set up.)
  • Again, in terminal, I run the git status command to check that my configuration changes are what I expect them to be. If all has gone well, I should see that some configuration files have been modified in my config export directory, and this configuration should only be related to the changes I just made.
  • I add the changes to git (git add), commit (git commit -m) and push them (git push).
  • Next, I SSH to the destination environment and pull the changes down (git pull).
  • Finally, I run drush config-import -y from the project's root on the remote server. This command imports the configuration changes I pulled down from git.
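
Put together, the whole round trip is just a handful of commands (the config directory path and commit message below are placeholders):

# On my local development environment:
drush config-export -y
git status
git add config/
git commit -m "Describe the configuration change"
git push

# On the destination environment, after connecting via SSH:
git pull
drush config-import -y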

While this might seem like a pretty simple workflow, it is incredibly powerful, especially if you are used to the Features workflows prevalent in Drupal 7. First off, it allows you to be as granular as you want with your commits. I can commit a change as simple as adjusting a View's sort order or as large as an entire collection of new content types. With my changes in version control, I can roll them back as easily as I can import them (mostly, some content type/field changes cannot be rolled back.) Keeping changes as simple as possible can also help minimize merge conflicts in projects with multiple developers.


This process will work between any source environment and any destination environment, assuming the two environments are instances of the same site. Clearly, it is both ideal and easy to make changes locally and push them live, but that is not always the best option. For instance, while setting up Metatag module, I realized it would be much easier to test from a publicly accessible environment. So I went to the live site and tweaked my settings to where I wanted them, then exported and committed these changes and brought the changes back down to my local. In another case, I was on the phone with my partner who was viewing some changes on the preview site. As she was making comments and asking for changes, I made them until we were both happy with them. Then, I could merge those changes down to my local before pushing them live.

In many cases, this setup will not be workable. It works for me because, as the sole dev and site admin, I know exactly what is going on all the time and can make sure that the changes I'm making won't overwrite someone else's work. Nevertheless, there's no reason this model couldn't be expanded to a multi-developer environment. For instance, you could lock down the live site so that its admin is read-only (simple with a module like Config Read Only) and then come up with an automated or user-instantiated process to make those changes live.

More advanced options

Here at Lullabot we have already started talking through some different options for our more advanced projects. In one case, we ended up going back to Features. The client already had a Features-based workflow for their D7 site and was reluctant to change to a totally new system. Additionally, they maintained a network of sites based on a common core repository or “distribution.” We needed to get off the ground quickly with something we already understood rather than figuring out a config workflow that would work. Features offered us that.

Alex Pott has proposed another workflow that involves shipping your config with an install profile. In this scenario, your configuration lives in your install profile, and you essentially rebuild your entire site from scratch when you push changes. This works especially well with advanced composer-based workflows in which your assets are scattered and imported as part of your build process. Additionally, this setup can get around some technical problems involving the way Drupal 8 assigns UUIDs to configuration. For more information on this, see the presentation he did with fellow Lullabot Matthew Tift at DrupalCon New Orleans.

As we get farther and farther into the Drupal 8 cycle, we will see a ton more workflows and processes around configuration management and deployment. Some will be use-case specific, some will end up becoming best practices, and some will probably just be weird. I wouldn't have it any other way because it means the configuration management system works for site owners and not the other way around.

Next in the series, I will talk about some interesting things D8 has enabled around theming and templating.

Header photo by Karl Lind Films