Lullabot

Building an Enterprise React Application, Part 1

Last year, Lullabot was asked to help build a large-scale React application for a major U.S. media company and, lucky for me, I was on the team. An enterprise React application is often part of a complex system of applications, and that was certainly the case for this project. What follows is part one of our discussion. It includes a high-level view of the overall application architecture as well as a look at the specific architecture used for the React part of the project that was the focus of my work.

Web Application Architecture

Take a quick look at the diagram below. It describes the high-level architecture of the collection of applications that comprise the site. It begins with the content management system, or CMS. In this project, the CMS was Drupal, but this could just as well have been WordPress or any number of alternatives—basically, software for editorial use that allows non-technical content creators to add pages, define content relationships, and perform other common editorial tasks. No pages or views are served to users directly from the CMS.


It’s not uncommon to have additional data sources besides the CMS feeding the API, as was the case on this project. In that sense our diagram oversimplifies things, but it does give a good sense of the data flows.

The API

The API is software that provides a consistent view into the data of the CMS (as well as other data sources, when present). The database in a CMS like Drupal is normalized. One important task of the API is to de-normalize this data, which allows clients—web browsers, mobile apps, smart TVs, etc.—to make fewer round trips.
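As a toy illustration (the data shapes here are hypothetical, not the project’s actual API), de-normalization means embedding related records so the client gets everything it needs in a single response:

```javascript
// Hypothetical normalized CMS data: entities stored separately and
// linked by id, as in a relational database.
const authors = { 7: { id: 7, name: 'Ada' } };
const articles = { 1: { id: 1, title: 'Hello', authorId: 7 } };

// The API de-normalizes on the way out: the related author record is
// embedded, so the client needs no follow-up request.
function denormalizeArticle(id) {
  const article = articles[id];
  return { ...article, author: authors[article.authorId] };
}

console.log(denormalizeArticle(1));
```

Without this step, a client would have to fetch the article, read its authorId, and then make a second round trip for the author.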

The API is a critical part of the application. The real business case for an architecture like this is to have a single data source serve content across a range of platforms in a consistent, efficient way. Client devices make an HTTP request to the API and receive a response with the requested data, usually in JSON format.

Caching Layers

Having a caching layer in front of your API and Node.js servers helps reduce load on the API server and decreases response time. Our client uses Akamai as a CDN and caching solution, but there are many other options.

Node.js Server

We’ve finally gotten to where the code for the React web app lives. On this project we used Express, a server-side JavaScript application framework, to create an HTTP server that responds to requests from web clients. It’s also where we did the server-side rendering of the React application.

Clients

In the diagram, I’ve added icons to represent mobile apps and web browsers, respectively, but any number of devices may consume the API’s data. For example, our client serves not only web browsers and a mobile app from the API, but also Roku boxes and Samsung TVs.

What’s happening with the web client is pretty interesting. The first request by a browser goes to the Node.js server, which will return the pre-rendered first page. This is server-side React and it’s helping provide a faster load time.

Without rendering first on the server, the client would have to retrieve the page and then begin rendering, creating a lag on first load as well as potentially having an adverse impact on SEO. Subsequent pages will be routed on the client using a library like React Router. The app will then make requests directly to the API for the data it needs for a specific “page” or route.

React Architecture

When thinking of JavaScript application architecture, the MVC pattern immediately comes to mind for many people. React isn’t an MVC framework like Angular or Ember, but is instead a library that handles views. If that’s the case, then what architecture does a typical large-scale React application use?

On this project, we used Redux. Redux is both a library and an architecture/pattern that is influenced by the Flux architecture from Facebook.

Two distinguishing characteristics of Redux are strict unidirectional data flow (shared with Flux) and storing all application state as a single object. The data lifecycle in a Redux app has four steps:

  1. Dispatch an action.
  2. Call to a reducer function.
  3. A root reducer combines the output of the various reducer functions into a single state tree.
  4. The store saves the state returned by the root reducer, triggering view updates.

The diagram below illustrates this data flow:


Let’s walk through this step by step.

1. Dispatch an action

Let’s say you have a list of items displayed in a component (represented by the blue box in the diagram) and you’d like the user to be able to delete an item when clicking a button next to the item. When a button is clicked, there will be a call to the Redux dispatch function. It’s a common pattern to also pass a call to an action creator into this function. It may look something like this:

dispatch(deleteItem(itemID));

In the example above, the function deleteItem is an action creator (yellow box). Action creators return plain JavaScript objects (orange box). In our case the return object looks like this:

{ type: 'DELETE_ITEM', itemId: itemId }

Note that actions must always have a type property. In this simple example, we could have just passed the plain object to the dispatch function and it would have worked just fine. However, an advantage of using action creators is that they can transform the data before it’s passed to the reducer. For example, action creators are a great place for API calls, adding timestamps, or anything else that may cause side effects.
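For instance, here is a sketch of what our deleteItem action creator might look like if we also wanted to stamp the action with a deletion time (the deletedAt field is a hypothetical addition, not from the project code):

```javascript
// Hypothetical action creator: returns a plain action object and
// attaches a timestamp before the action reaches the reducer.
function deleteItem(itemId) {
  return {
    type: 'DELETE_ITEM',
    itemId: itemId,
    deletedAt: Date.now(), // side-effecting data kept out of the reducer
  };
}
```

Calling Date.now() here, rather than inside the reducer, is what keeps the reducer pure.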

2. Call to a reducer function

Once the action creator returns the action to the dispatch function, the store then calls the reducer function (green box). The store passes two things to the reducer: the current state and the action. An important thing to note is that Redux reducers must be pure functions. Passing the same input to a reducer function should result in exactly the same output every time.

The reducer then computes the next state. It does this using a switch statement that checks for the action type, which in our example case is “DELETE_ITEM”, and then returns the new state. An important point here is that state is immutable in Redux. The state object is never changed directly, but rather a new state object is created and returned based on the specified action passed to the reducer.

In our example this might look like this:

// The code below is part of our itemsReducer. Default state is defined
// in the reducer function definition.
switch (action.type) {
  case DELETE_ITEM:
    return {
      ...state,
      lastItemDeleted: action.itemId
    };
  default:
    return state;
}

3. Root reducer combines reducer output into a single state tree

A very common pattern is to split your reducer functions into separate files based on what part of the app they address. For example, we might have a file in our hypothetical app called itemsReducer and perhaps another one called usersReducer.

The output of these two reducers will then be combined into a single state object when you call the Redux combineReducers function.
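Conceptually, combineReducers produces a root reducer that delegates each top-level key of the state tree to its slice reducer. A simplified sketch of the idea (not Redux’s actual implementation):

```javascript
// Simplified sketch of what Redux's combineReducers does: build a root
// reducer that hands each slice of state to its own reducer.
function combineReducers(reducers) {
  return function rootReducer(state = {}, action) {
    const nextState = {};
    for (const key of Object.keys(reducers)) {
      // Each slice reducer only ever sees its own part of the tree.
      nextState[key] = reducers[key](state[key], action);
    }
    return nextState;
  };
}

// Usage in our hypothetical app:
// const rootReducer = combineReducers({ items: itemsReducer, users: usersReducer });
```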

4. The store saves the state returned by the root reducer

Finally, the store saves the state and all the subscribers (your components) will receive the updated state and the view will be updated as needed.
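Putting the four steps together, the whole lifecycle can be sketched with a tiny hand-rolled store. This is illustrative only; the real Redux store adds much more (unsubscribe, middleware, error handling):

```javascript
// A slice reducer for our hypothetical items list (step 2).
function itemsReducer(state = { lastItemDeleted: null }, action) {
  switch (action.type) {
    case 'DELETE_ITEM':
      // Return a *new* state object; never mutate the old one.
      return { ...state, lastItemDeleted: action.itemId };
    default:
      return state;
  }
}

// A root reducer combining our single slice reducer (step 3).
function rootReducer(state = {}, action) {
  return { items: itemsReducer(state.items, action) };
}

// A toy store, just to show the shape of the data flow.
function createStore(reducer) {
  let state = reducer(undefined, { type: '@@INIT' });
  const listeners = [];
  return {
    getState: () => state,
    subscribe: (listener) => listeners.push(listener),
    // Step 1: an action is dispatched...
    dispatch: (action) => {
      // Steps 2-3: the reducer computes the next state tree...
      state = reducer(state, action);
      // Step 4: the store saves it and notifies subscribers (views).
      listeners.forEach((listener) => listener(state));
    },
  };
}

const store = createStore(rootReducer);
store.subscribe((state) => console.log('view update:', state));
store.dispatch({ type: 'DELETE_ITEM', itemId: 42 });
```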

That’s a general overview of what a Redux architecture looks like. It’s obviously missing many implementation details. To learn more about how Redux works, I highly recommend reading the documentation in its entirety. The Redux docs are very well written and comprehensive. If you prefer video tutorials, Dan Abramov, the creator of Redux, has a couple of great courses here and here.

If you’re interested in getting a quick start playing around with React and Redux, I’ve put together a boilerplate project that’s a good starting point for building a medium-to-large application. It’s informed by many of the lessons learned working on this project.

Until Next Time…

Enterprise applications are often enormously complicated and take the efforts of multiple, dedicated teams each specializing in one area of the application. This is particularly true if the technology stack is diverse. If you’re new to a big project like I’ve described here, try to get a good handle on the architecture, the workflows, the roles of other teams, as well as the tools you are using to build the application. This big picture view will help you produce your best work.

Next week we’ll publish part two, where we’ll go over the build tools and processes we used on the project as well as how we organized the code.

The Lullabot Approach to Sales

Mike and Matt sit down (literally!) with Lullabot's sales team. Learn how Lullabot does sales, and what it takes to sell large projects.

Lullabot's 2017 Annual Retreat

Many companies have corporate retreats where the whole team gets together to celebrate their success and spend time thinking about how to improve their work. We’re no different. Almost every year since 2006 we’ve brought our geo-distributed team together to spend a week “working on how we work” while bonding with our peers. In 2017, 52 employees from Lullabot and Lullabot Education flew to Palm Springs, CA for a week of rest, relaxation and vision work at our beloved Smoke Tree Ranch. We’ve been at Smoke Tree before.


If you don’t see your co-workers every day, a company retreat is more akin to a family reunion. You’re not sick of Bob who would otherwise bring tuna salad sandwiches for lunch or Mary who never refills the coffee pot when it’s empty or Phil who plays his polka music too loud. Our team genuinely wants to get to know each other and form bonds outside of work. It lifts our spirits and allows us to cultivate gratitude and celebrate success in person.

“As a new employee, this has been my first retreat. I have liked the openness about the state of the company, the free time activities, the beautiful place (of course!), but, without a doubt, what I will keep for me as the best take away from the retreat is the people—this is a great company, because it's made of great people.” —Ezequiel Vázquez

We start planning the company retreat three months in advance. At first, the planning group consisted of our event coordinator Haley Scarpino, our human resources team, the admin team and the directors. On our first call we reviewed our notes from the previous retreat to recap what worked and didn’t work, then we each talked about what we wanted to get out of the upcoming retreat. In preparation for the kick-off call, I had already brainstormed new ideas and intentions I was eager to share. My vision for the retreat was to reduce the sit-and-listen presentations and replace that time with collaborative workshopping as a company. The other big areas of consensus from that kickoff meeting were:

  • Protect the unstructured “fun and relaxation” time. Free time is critical for the team to recharge during the retreat.
  • Cultivate gratitude.
  • Feel connected.

By the time we finished the planning for the retreat, we had almost 20 of 54 Lullabots owning an activity or overseeing some aspect of the retreat. I'm so grateful for how the team stepped up this year to pull off a successful event. I should also mention we go back to the same place each year which eases the stress on our event coordinator and our team. Using the same venue means our team knows what to pack, what to expect, and what fun activities they can do next year that they didn’t get time to do this year. Using the same venue allows us to focus on event curation rather than logistics planning and exception handling.


At the end of our weekly planning sessions our daily schedule looked like this:

  • 9-10 a.m.: Announcements and Presentations.
  • 10 a.m.-12 p.m.: Team Workshops. This year we advanced our open books philosophy, having the team build revenue forecasts for the year in small groups and estimate the percentage of each expense category using rolls of pennies. We prioritized our company values and wrote headlines for the company we want to be five years from now. We also threw in a couple of strategy workshops we do with our clients so the team can experience those as a client.
  • 12-1 p.m.: Lunch.
  • 1-3 p.m.: Free time. Self-organizing volleyball, golf, horseshoes, horseback riding, soccer, and hanging out by the pool or simply taking a nap. We used Slack to organize free time, and it worked quite well.
  • 3-5 p.m.: Self-organizing groups. Using Trello, a team member would list a topic they wanted to talk about for an hour. Anything goes. We had conversations on home improvement, personal core values, career advancement, knitting, our website, work-life balance, and so on. The sessions with the highest votes were curated and added to the agenda.
  • 5-6 p.m.: Circles. We break into small groups and share our feelings and experiences from the day. It’s a judgment-free way to process the day while connecting to a small group of peers that remains the same throughout the retreat.
  • 6:30-7:30 p.m.: Dinner.
  • 7:30-10 p.m.: Evening activities. Each night has an event planned, from the ever-popular lightning talks and keynote karaoke to a talent show, storytelling, and our very relaxed outdoor dinner and bonfire on the last night. We also had an awards show for the best posts on our private social company network, Yammer. Yes, there were trophies.
“I loved the focus on finances and how the business works, and how to keep it not just sustainable but also growing in a way that continues to underline our core values. I came away with something I didn't think possible—an even greater respect for our leadership, and a renewed confidence that I'm in the right place.” —Greg Dunlap

There’s one more event not listed here, which was new this year: community service time. We took an afternoon to deviate from the schedule to build bicycles for ten kids at the Boys & Girls Club Palm Springs. The bikes came from Palm Springs Cyclery with a considerable discount. It was a fun way to share the afternoon and do something together we usually don’t get to do. Speaking for myself, it was a personal highlight of the retreat.


To be candid, the whole experience is surreal—like a week-long dream. The crisp desert air in the morning, the cactus flowers in full bloom and 70-90 degree Fahrenheit weather in the afternoon against the backdrop of snow-covered mountains. Did I mention being surrounded by people you want to be around? This team. This incredibly talented, smart, hard-working, and passionate team is the real reason an event like this becomes meaningful.

“I found it inspiring that Lullabot is planning five, and even ten, years down the road. Our continued product diversification provides opportunities for growth and learning. Lullabot has been a great "job" for years, but now I'm starting to see it as a career instead.” —Nathan Haug

Many of us will no doubt see each other before the next company retreat, whether it’s at a client onsite, a departmental retreat, or a conference. And when we do, those same feelings of seeing a friend you haven’t hung out with in a while will most likely be there.

And perhaps you’ll be there as well? We seldom hire, but we’re always looking for smart, talented people to join our team. Our hiring process is slow, but start the conversation with us now if you’d like to work at Lullabot. We take care of our employees and work every day to earn their trust. We ask that in return, you share your passion, creativity and initiative with us.

Photos by Greg Dunlap

Using the serialization system in Drupal

As part of the API-first initiative I have been working a lot with the serialization module. This module is a key member of the web-service-oriented modules present in both core and contrib.

The main focus of the serialization module is to encapsulate Symfony's serialization component. Note that there is no separate deserialization component. This single component is in charge of serializing and deserializing incoming data.

When I started working with this component the first question that I had was "What does serialize mean? And how is it different from deserializing?". In this article I will try to address this question and give a brief introduction on how to use it in Drupal 8.

Serializers, encoders and normalizers

Serialization is the process of normalizing and then encoding an input object. Similarly, we refer to deserialization as the process of decoding and then denormalizing an input string. Encoding and decoding are the reverse processes of one another, just like normalizing and denormalizing are.

In simple terms, we want to be able to turn an object of class MyClass into a particular string representation, and then be able to turn that string back into the original object.

An encoder is in charge of converting simple data—a set of scalars, arrays and stdClass objects—into a string. The resulting string is a convenient way to store or transport the original object. A decoder performs the opposite function; it will take that encoded string and transform it into an array that’s ready to use. PHP’s json_encode and json_decode are a good example of a commonly used encoder and decoder pair. XML is another example of a format to encode to. Note that for an object to be correctly encoded it needs to be normalized first. Consider the following example where we encode and decode an object without any normalization or denormalization.

class MyClass {}
$obj = new MyClass();

var_dump($obj);
// Outputs: object(MyClass) (0) {}

var_dump(json_decode(json_encode($obj)));
// Outputs: object(stdClass) (0) {}

You can see in the code above that the composition of the two inverse operations is not the same original object of type MyClass. This is because the encoding operation loses information if the input data is not a simple set of scalars, arrays, and stdClass objects. Once that information is lost, the decoder cannot get it back.
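The same loss is easy to reproduce in JavaScript: a JSON round trip keeps the data but forgets the class.

```javascript
class MyClass {}

const obj = new MyClass();
// Encode and then decode without any (de)normalization step.
const roundTripped = JSON.parse(JSON.stringify(obj));

console.log(obj instanceof MyClass);          // true
console.log(roundTripped instanceof MyClass); // false: the class is lost
```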


One of the reasons why we need normalizers and denormalizers is to make sure that data is correctly simplified before being turned into a string, and correctly upcast to a typed object after being parsed from a string. Another reason is that different (de)normalizers allow us to work with different formats of the data. In the REST subsystem we have different normalizers to transform a Node object into the JSON, HAL or JSON API formats. Those are JSON objects with different shapes, but they contain the same information. We also have different denormalizers that will take a simplified JSON, HAL or JSON API payload and turn it into a Node object.
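As a language-agnostic sketch (class and function names here are hypothetical, not Drupal or Symfony APIs), a (de)normalizer pair is what lets the type survive the round trip that encoding alone would lose:

```javascript
// A typed object standing in for a content entity.
class NodeEntity {
  constructor(title) {
    this.title = title;
  }
}

// Hypothetical (de)normalizer pair: simplify the typed object to plain
// data before encoding, and upcast it back after decoding.
const normalize = (node) => ({ _type: 'node', title: node.title });
const denormalize = (data) => new NodeEntity(data.title);

// serialize = normalize + encode; deserialize = decode + denormalize.
const serialized = JSON.stringify(normalize(new NodeEntity('Hello')));
const restored = denormalize(JSON.parse(serialized));

console.log(restored instanceof NodeEntity); // true: the type survives
```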

(De)Normalization in Drupal

The normalization of content entities is a very convenient way to express the content in a particular format and shape. So formatted, the data can be exported to other systems, stored as a text-based document, or served via an HTTP request. The denormalization of content entities is a great way to import content into your Drupal site. Normalization and denormalization can also be combined to transform a document from one format to another. Imagine that we want to transform a HAL document into a JSON API document. To do so, you need to denormalize the HAL input into a Node object, and then normalize it into the desired JSON API document.

A good example of the normalization process is the Data Model module. In this case instead of normalizing content entities such as nodes, the module normalizes the Typed Data definitions. The typed data definitions are the internal Drupal objects that define the schemas of the data for things like fields and properties. An integer field will contain a property (the value property) of type IntegerData. The Data Model module will take object definitions and simplify (normalize) them. Then they can be converted to a string following the JSON Schema format to be used in external tools such as beautiful documentation generators. Note how a different serialization could turn this typed data into a Markdown document instead of a JSON Schema string.

Adding a new (de)normalizer to the system

In order to add a new normalizer to the system you need to create a new tagged service in custom_module.services.yml.

serializer.custom_module.my_class_normalizer:
  class: Drupal\custom_module\Normalizer\MyClassNormalizer
  tags:
    - { name: normalizer, priority: 25 }

The class for this service should implement the normalization interface in the Symfony component Symfony\Component\Serializer\Normalizer\NormalizerInterface. This normalizer service will be in charge of declaring which types of objects it knows how to normalize and denormalize—that would be MyClass in our previous example. This way the serialization module uses it when an object of type MyClass needs to be (de)normalized. Since multiple modules may provide a service that supports normalizing MyClass objects, the serialization module will use the priority key in the service definition to resolve the normalizer to be used.

As you would expect, in Drupal you can alter and replace existing normalizers and denormalizers so they provide the output you need. This is very useful when you are trying to alter the output of the JSON API, JSON or HAL web services.

In a later article I will delve deeper into how to create a normalizer and a denormalizer from scratch, by creating an example module that (de)normalizes nodes.

Conclusion

The serialization component in Symfony allows you to deal with the shape of the data. It is of the utmost importance when you have to use Drupal data in an external system that requires the data to be expressed in a certain way. With this component, you can also perform the reverse process and create objects in Drupal that come from a text representation.

In a following article I will show you an introduction on how to actually work with (de)normalizers in Drupal.

Matt Westgate and Seth Brown on Doing Good with Drupal

In this episode of Hacking Culture, Matthew Tift talks with Matt Westgate and Seth Brown about Lullabot, the Drupal community, and how people who build free software improve the world. This episode is released under the Creative Commons attribution share alike 4.0 International license. The theme music used in this episode comes from the Open Goldberg Variations. Learn more at hackingculture.org.

Drupal SEO with Ben Finklea

Mike and Matt sit down with Ben Finklea, and talk all things SEO and Drupal, including his new book "Drupal SEO."

Building Views Query Plugins for Drupal 8, Part 3

Welcome to the third part of our series on writing Views query plugins. In part 1, we talked about the planning work that should precede coding. In part 2, we covered the basics of actually writing a query plugin. In this final chapter, we will investigate some enhancements to make your plugin more polished and flexible.

Exposing configuration options

Allow Site Admins to set their preferred units: metric or imperial.

Most Fitbit endpoints accept an option to set the units the response is returned in. If you are Canadian like me, you know that metric is preferable, but it’s also in our nature to be nice, so we should expose a configuration option to allow our American friends to show data in their anachronistic imperial units. (I jest, love you guys!)

Exposing configuration options for a query plugin is done in two parts: first, build the UI, and second, make use of the stored configuration. In our query plugin class, we’ll implement two methods to help us create the UI, defineOptions and buildOptionsForm:

/**
 * {@inheritdoc}
 */
protected function defineOptions() {
  $options = parent::defineOptions();
  $options['accept_lang'] = array(
    'default' => NULL,
  );
  return $options;
}

/**
 * {@inheritdoc}
 */
public function buildOptionsForm(&$form, FormStateInterface $form_state) {
  parent::buildOptionsForm($form, $form_state);
  $form['accept_lang'] = [
    '#type' => 'select',
    '#options' => [
      '' => $this->t('Metric'),
      'en_US' => $this->t('US'),
      'en_GB' => $this->t('UK'),
    ],
    '#title' => $this->t('Unit system'),
    '#default_value' => $this->options['accept_lang'],
    '#description' => $this->t('Set the unit system to use for Fitbit API requests.'),
  ];
}

With this done, we should see our configuration options in the Views UI under Advanced > Query settings.


However, it won’t work because we’re not actually using the stored configuration yet. To do that, we’ll add to our execute method in our query plugin:

/**
 * {@inheritdoc}
 */
public function execute(ViewExecutable $view) {
  // Set the units according to the setting on the view.
  if (!empty($this->options['accept_lang'])) {
    $this->fitbitClient->setAcceptLang($this->options['accept_lang']);
  }
  // Clip...
}

Query plugin options are available via $this->options, which Drupal provides as part of the QueryPluginBase class that our Views plugin is extending. We use the stored value, together with a method on the Fitbit client service, to set the preferred units for all subsequent API requests: $this->fitbitClient->setAcceptLang($this->options['accept_lang']);. With that, site administrators can set their preferred units, and the result set will reflect that choice. Since this is Views and we’ve exposed height as a numeric field, Views core gives us a nice way to format the data and suffix it with units, so we end up with a polished result. Just edit the field options.

Field plugin options

Adding options to customize the appearance of the avatar field.

Views also allows us to have custom options for our field plugins. In the last article, we set up a field plugin for avatar which uses the avatar URI from the API response and renders it as an <img> tag. Fitbit’s API actually provides two avatar size options and it would be great to leave it to the site administrator to decide which size to render. We’ll use field plugin options to do that.

As with query plugins, exposing configuration options for a field plugin follows the same two parts, with one small addition. In our field plugin class, we’ll implement two methods, defineOptions and buildOptionsForm, to build the UI:

/**
 * {@inheritdoc}
 */
protected function defineOptions() {
  $options = parent::defineOptions();
  $options['avatar_size'] = ['default' => 'avatar'];
  return $options;
}

/**
 * {@inheritdoc}
 */
public function buildOptionsForm(&$form, FormStateInterface $form_state) {
  $form['avatar_size'] = [
    '#type' => 'select',
    '#title' => $this->t('Image size'),
    '#options' => [
      'avatar' => $this->t('Default (100px)'),
      'avatar150' => $this->t('Medium (150px)'),
    ],
    '#default_value' => $this->options['avatar_size'],
    '#description' => $this->t('Choose the size avatar you would like to use.'),
  ];
  parent::buildOptionsForm($form, $form_state);
}

This should be fairly self-explanatory; we’re defining a form element for the UI and, once saved, the configuration option will be stored in $this->options['avatar_size']. The small addition I referred to earlier lies within the query plugin. Before, we were only passing along the single value for avatar. Now that the site administrator has the option, we’ll want to make sure both values for avatar are passed along in the Views result. We do that in the query plugin’s execute method like so:

$row['avatar'] = [
  'avatar' => $data['avatar'],
  'avatar150' => $data['avatar150'],
];

Instead of a flat value, we’re setting ‘avatar’ to an array with both values for avatar from the API response. Then, back in the field plugin, in the render method, we take care to use the appropriate size avatar according to the option selected:

/**
 * {@inheritdoc}
 */
public function render(ResultRow $values) {
  $avatar = $this->getValue($values);
  if ($avatar) {
    return [
      '#theme' => 'image',
      '#uri' => $avatar[$this->options['avatar_size']],
      '#alt' => $this->t('Avatar'),
    ];
  }
}

The render method receives a ResultRow object that has all of the data for the row. Since we’re extending FieldPluginBase, we can simply call its getValue method, $this->getValue($values), to pull the value we want out of the ResultRow object. With that done, we can now click on the avatar field in the Views UI and set the desired image size:

Filter plugins

Filtering the leaderboard by user id.

What if we wanted to limit the result to only a particular user? Say we wanted to show a user's Fitbit details on their Drupal user profile page. For that, we’d need to filter the result set by a user id. To make that happen, we need a Views filter plugin. The first step is to define the field to filter on in hook_views_data():

/**
 * Implements hook_views_data().
 */
function fitbit_views_example_views_data() {
  // Base data and other field definitions...
  $data['fitbit_profile']['uid'] = [
    'title' => t('User id'),
    'help' => t('Drupal user id, not to be confused with Fitbit profile id.'),
    'field' => [
      'id' => 'standard',
    ],
    'filter' => [
      'id' => 'fitbit_uid',
    ],
  ];
  return $data;
}

The part we are most concerned with here is the ‘filter’ key. Its value is an associative array with one key ‘id’, which we set to the name of the filter plugin we’re going to create. Also, note the ‘field’ key, which makes the Drupal user id available as a field in the Views UI. It doesn’t hurt to add it, and it also illustrates how plugins related to a certain field (e.g. field, filter, and others like relationship and argument) are all defined in the same array in hook_views_data(). So, for the next step, we’ll create this file: fitbit_views_example/src/Plugin/views/filter/Uid.php 

<?php

namespace Drupal\fitbit_views_example\Plugin\views\filter;

use Drupal\views\Plugin\views\filter\FilterPluginBase;

/**
 * Simple filter to handle filtering Fitbit results by uid.
 *
 * @ViewsFilter("fitbit_uid")
 */
class Uid extends FilterPluginBase {
}

So far, this is typical Drupal 8 plugin scaffolding code. The file is placed in the right folder for the plugin type. The namespace follows PSR-4 naming. The annotation for the plugin type assigns an id to our plugin. Finally, we extend the base class provided by Views for the plugin type. Now let’s look at the specifics required for our filter plugin implementation:

class Uid extends FilterPluginBase {

  public $no_operator = TRUE;

  /**
   * {@inheritdoc}
   */
  protected function valueForm(&$form, FormStateInterface $form_state) {
    $form['value'] = [
      '#type' => 'textfield',
      '#title' => $this->t('Value'),
      '#size' => 30,
      '#default_value' => $this->value,
    ];
  }

}

$no_operator = TRUE tells Views that we are not interested in giving site administrators an option to select an operator. In our case, we’ll keep things simple and always assume '='. You could, of course, allow a choice of operators if your remote service supports it. The key component here is the valueForm method. In it, we need to set an appropriate Form API element for the ‘value’ key of the $form array passed as the first argument. The name ‘value’ is important, as the base class expects this key in order to work. The form element that you return is used in a couple of places: in the Views UI when the site administrator is setting up a filter, and, if the filter is exposed, in the exposed filters form rendered with the view itself. That’s it for the plugin implementation. At this point we can add the filter in the Views UI:


The last step adjusts our query plugin to be able to handle and make use of the filter. The first thing we’ll need to do is implement an addWhere method on the query plugin class:

public function addWhere($group, $field, $value = NULL, $operator = NULL) {
  // Ensure all variants of 0 are actually 0. Thus '', 0 and NULL are all
  // the default group.
  if (empty($group)) {
    $group = 0;
  }
  // Check for a group.
  if (!isset($this->where[$group])) {
    $this->setWhereGroup('AND', $group);
  }
  $this->where[$group]['conditions'][] = [
    'field' => $field,
    'value' => $value,
    'operator' => $operator,
  ];
}

Here, especially, we can see Views’ bias toward SQL rear its head. The method name, addWhere, is odd from our perspective of querying a remote service; there is no notion of a WHERE clause in the Fitbit API. Further, Views supports grouping filters, and logical operators within each group. Again, the remote service we are using has no notion of this. It’s possible the remote service you’re implementing does, in which case the flexibility Views affords is amazing. In our case it’s overkill, but I’ve copied the core Views implementation for the SQL query plugin, so we’ll be able to handle everything that the Views UI allows for setting up filters. The final step is adjusting the execute method on our query plugin to incorporate the filter into the call to the Fitbit API:

/**
 * {@inheritdoc}
 */
public function execute(ViewExecutable $view) {
  // Clip ...
  $filters = [];
  if (isset($this->where)) {
    foreach ($this->where as $where_group => $where) {
      foreach ($where['conditions'] as $condition) {
        // Remove dot from beginning of the string.
        $field_name = ltrim($condition['field'], '.');
        $filters[$field_name] = $condition['value'];
      }
    }
  }
  // We currently only support uid; ignore any other filters that may be
  // configured.
  $uid = isset($filters['uid']) ? $filters['uid'] : NULL;
  if ($access_tokens = $this->fitbitAccessTokenManager->loadMultipleAccessToken([$uid])) {
    // Query remote API and return results ...
  }
}

Here, we’re looping through any filters that have been configured on the view and grabbing their values. Since we only support uid for now, we ignore any other filters that may have been configured and pass the uid along to $this->fitbitAccessTokenManager->loadMultipleAccessToken([$uid]). That limits the access tokens we get back to the given uid, so the view only shows results for the corresponding user.

Often, as was the case on a recent client project, the filters that you set up will actually get passed along in the remote API request. The Fitbit API is a bit odd in this regard: most endpoints only return data for a single user anyway, so there is no request-level filtering that makes sense.
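For a remote service that does accept filter parameters, the values collected in execute could be forwarded as query parameters on the request. A minimal sketch, assuming a Guzzle-style HTTP client in $this->client and a hypothetical https://api.example.com/items endpoint; neither of these is part of the Fitbit module:

```php
// Hypothetical: forward Views filter values as remote query parameters.
// Assumes $filters was collected from $this->where as shown above and
// that $this->client is a Guzzle-style HTTP client injected into the
// query plugin.
$response = $this->client->request('GET', 'https://api.example.com/items', [
  'query' => $filters,
]);
$items = json_decode((string) $response->getBody(), TRUE);
// Each $items entry would then be mapped onto a Views result row.
```

This keeps the Views filter UI as the single place where administrators narrow the result set, while the heavy lifting happens on the remote service.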

That’s it! After all that work, we can set up a filter by uid to limit the results to a single user.

Wrap up

We did it! At long last, we’ve produced a custom Fitbit leaderboard, which might look something like this:

undefined

Of course this is just stock Drupal 8 with the Fitbit module installed and configured, but it’s Views and we all know how to customize the look and feel of Views, so make it pretty to your heart's content.

While we've looked at a lot of code, I don't think that any of it has been horribly complicated. It's mostly a matter of knowing what to put where, with a healthy dose of planning to make sure our data fits into the Views paradigm properly. In summary, the steps are:

  1. Make a plan of attack, taking into account the data you're retrieving and the way Views expects to use it.

  2. Create field handlers for your data as necessary.

  3. Write remote queries to retrieve your data and store it in rows in the view object.

  4. Write filter plugins as necessary to narrow the result set.

There's a lot of work in those steps, but after running through it a couple times the architecture makes a lot of sense.

Get the code!

The code from this article can be found in the Fitbit module on drupal.org. It consists of a base module to handle application setup, authentication, and access token storage, plus two sub-modules for Views integration. The first is fitbit_views_example, which I created specifically for this article series. You’ll find all the code we went through in there. The other, fitbit_views, is a more fully featured and slightly more complex version, including spanning multiple API endpoints with relationship plugins. You should use fitbit_views if you’re intending to use this functionality on your Drupal site. Feel free to file issues and patches!

Phew, that was a lot. Thanks for sticking with me through it all. Special thanks to Greg Dunlap for trusting me with the reboot of his original series, which has guided me through my own Views query plugin implementations. Thanks also to the Fitbit module maintainer, Matt Klein, who was kind enough to grant me co-maintainer rights on the project.

Beware These Nine Design Elements Your Front-End Developers Hate

Even though Lullabot’s front-end and design teams already work together well, we wanted to improve our relationship as one of many goals for 2016. Applying our UX research techniques inward, the design team used interviews and surveys to learn more about the front-end development team. What followed was thoughtful discussion, appreciation for the other team, some very exciting process ideas, and a laundry list of pet peeves.

One of the more straightforward discoveries was learning what design elements cause the most frustration during implementation, and how difficult each developer perceived them to be (from 1, a quick change, up to 10, biggest pain in the booty).

undefined

Our front-end team was awesome about explaining their perspective on this. My biggest take-away was that none of these are impossible—they can be done, but some come at a cost. In a time-constrained project, we need to consider these implications and prioritize carefully. To help clients and designers make trade-offs, Lullabot often employs what we call a ‘whiz-bang budget’: we offer the client a set amount of points to spend on enhancement features as they choose, based on their importance and effect on performance.

However you approach it, keep these pain points in mind when creating your next design system.

1. Custom Select Elements: 7.8

Styling custom elements for a form, particularly the drop-down list of options, can be particularly thorny. All browsers have their own default styling for form elements, and each browser often needs separate customizations. Some form styles, like a select element’s option list, are essentially impossible to override.

Often, we compromise and style only the default, closed state of a select element. When a user clicks to reveal the list of options, they’ll see the browser’s default menu styles again.

2. Styling Third Party Social Components: 7.1

Social providers, like AddThis, offer sharing and following functionality, but have hardcoded styles that are quite difficult to override. Be warned, these third-party social tools can also bring a slew of performance problems, as many of these tools load extensive JavaScript in order to track shares and collect data for analytics. What might look like a few innocent icons on the screen can seriously slow down your site.

If you do need to provide sharing components, find one that has an appearance you can live with as-is or one that requires minimal changes. If possible, find one that leaves tracking data to a supplemental tool like Google Analytics, which the site probably already uses.

3. Responsive Carousels: 6.6

Carousels are a pet peeve for developers and designers alike. They hide important content, as not many users engage with them. Worse still for front-end developers is converting a group of images that appear separately on larger screens into an interactive carousel for smaller screens.

When time is an issue, work together to find a module or existing code you can start from, like Slick Carousel, then tweak the styling. If you need something custom, make sure the designers have completely worked out the details across various screen sizes, to save refactoring later on.

4. Changing Source Order: 6.3

Source order is especially important as we build responsive designs that shift and stack. We wireframe for mobile and desktop simultaneously; this allows front-end developers to plan early on how to structure markup so that sidebars and the like don’t end up out of order on smaller screens.

If you can’t maintain a single order from screen to screen, talk to the developer about using the Flexbox order property. This lets you change the order in which elements appear from one screen size to another. That said, even when there’s a solution at hand like Flexbox, make sure there’s a meaningful reason to change the order around, as these sorts of switches still take time.

5. Inconsistent Grids, Breaking Them With Full Width Elements: 6.1

Grid systems are regularly a point of misunderstanding, so discuss your intentions early on. For most implementations, breaking out of the columns of the grid or overriding margins can require some funky, funky math.

Choose a grid system together, with a developer, so they can walk you through the limitations before you design something difficult to realize. If a design element needs to extend beyond the grid, breaking out a full width element is much simpler than breaking out only one side or column.

6. Fixed And Overlaid UI: 6

This includes several issues our developers brought up as problematic: modals, overlays, sticky elements, anything appearing over other elements on the page through manipulations of z-index. These can all create awkward scrolling issues, and are especially buggy on mobile. Part of the complication comes from the mobile browsers’ header and footer, which have their own behaviors that can interfere with the designed experience. This effect is even worse when we have fixed elements at the top or bottom of the screen, as you know if you've ever tapped a mobile site footer icon only to have the browser toolbar pop up instead. None of these issues are impossible to solve on their own—they tend to get more complicated when there are multiple elements which need to be sticky on scroll or when multiple elements need to stack on top of each other.

7. Equal Height Grid Elements With Varied Text Lengths: 5.7

If design mocks aren’t created with a variety of real content, you will likely find a surprise when the implementation meets the site’s actual content.

Pull samples of the longest and shortest copy to plan for variation and also provide an example of text wrapping. This helps to ensure that line heights are set correctly in your design program of choice, which is easy to overlook if you only set one row of text.

Remember, using truncation to deal with variable-length content in a fixed space can cause real trouble. You never know where a word may get cut off, turning the well-crafted words in your con...

8. Components That Have Exceptions/Differences: 5.2

This is a pet peeve of mine, as well. That is, not following a style guide to a T. Messy files cause confusion, as do variations that are not distinct enough to be clearly intentional. I hope tools like InVision’s Inspect and Sketch’s Craft plugin will make this frustration obsolete.

Developers, when you see an element within a mock that looks like it should match an existing style, please ask the designer responsible if the variation is intended. Hopefully, that designer’s also created a global style guide for you to reference in these sorts of situations. If not, this might be yet another conversation that needs to happen.

9. Different Mobile Navigation: 4.4

Often, the layout and interaction changes between desktop and mobile navigation are so severe that front-end developers need to create two completely different menus. And we can run into overlaying issues here again.

Your responsive designs are likely possible, but like with many of the other items mentioned, each breakpoint and change adds time.

Good Communication Eases the Pain

Besides learning these workarounds, I’ve felt that iterating on our general process and hand-off helps the team just as much. We don’t just throw things over the wall—everyone cares about the end product, and it shows. The big theme is communication: Talk with a developer early in the design process to discuss ideas and brainstorm suggestions that can prevent trouble down the road. Keep a designer on during the development process to answer questions, review implemented pages, and create additional mocks for any additional needs you uncover. Use wireframes, style guides, and documentation to create shared understanding. Re-connect when a design is approved to walk through details or catch missing UI. Discuss which features we can set aside as future enhancements, or if a set of features need to be completed together in order to work. Almost every time we collaborate, I learn something from our friendly, happy-to-share developers, and they have said they’re learning from us designers as well. Point being, these conversations always, always help.

Lub you, FEDs. Sorry again for all those custom select elements.

HTTPS Everywhere: Quick Start With Cloudflare

This article continues our series on HTTPS, picking up from HTTPS Everywhere: Security is Not Just for Banks. If you own a website and understand the importance of serving sites over HTTPS, the next task is to figure out how to migrate an HTTP website to HTTPS. In this article, I’ll walk through an easy and inexpensive option for making the switch, especially useful if you have little or no control over your website server or don’t know much about managing HTTPS.

A Github Pages Site

I started with the simplest possible example: a website hosted on a free, shared hosting service, Github Pages, which doesn’t directly provide SSL for custom domains. I have no shell access to the server, and I just wanted to get my site switched to HTTPS as easily and inexpensively as possible. I followed an example from the Cloudflare blog about how to use Cloudflare SSL for a Github Pages site.

Services like Cloudflare can provide HTTPS for any site, no matter where it is hosted. Cloudflare is a Content Delivery Network (CDN) that stands in front of your web site to catch traffic before it gets to your origin website server. A CDN provides caching and efficient delivery of resources, but Cloudflare also provides SSL certificates, and they have a free account option to add any domain to an existing SSL certificate at no charge. With this alternative there is no need to purchase an individual certificate, nor to figure out how to get it uploaded and signed. Everything is managed by Cloudflare. The downside of this option is that the certificate will be shared with numerous other, unrelated domains. Cloudflare has higher-tier accounts with more options for the SSL certificates, if that’s important. But the free option is an easy and inexpensive way to get basic HTTPS on any site.

It’s important to note that adding another server to your architecture means that content makes another hop between servers. Now, instead of content going directly from your origin website server to the user, it goes from the origin website server to Cloudflare to the user. The default Cloudflare SSL configuration will encrypt traffic between end users and the Cloudflare server (front-end traffic), but not between Cloudflare and your origin website server (back-end traffic). They point out in their documentation that back-end traffic is much harder to intercept, so that might be an acceptable risk for some sites. But for true security you want back-end traffic encrypted as well. If your origin website server has any kind of SSL certificate on it, even a self-signed certificate, and is configured to manage HTTPS traffic, Cloudflare can encrypt the back-end traffic as well with a “Full SSL” option. If the web server has an SSL certificate that is valid for your specific domain, Cloudflare can provide even better security with the “Full SSL (strict)” option. Cloudflare can also provide you with an SSL certificate that you can manually add to your origin server to support Full SSL, if you need that.

The following screenshot illustrates the Cloudflare security options.

undefined Step 1. Add a new site to Cloudflare

I went to Cloudflare, clicked the button to add a site, typed in the domain name, and waited for Cloudflare to scan for the DNS information (that took a few minutes). Eventually a green button appeared that said ‘Continue Setup’.

undefined Step 2. Review DNS records

Next, Cloudflare displayed all the existing DNS records for my domain.

Network Solutions is my registrar (the place where I bought and manage my domain). Network Solutions was also my DNS provider (nameserver) where I set up the DNS records that indicate which IP addresses and aliases to use for my domain. Network Solutions will continue to be my registrar, but this switch will make Cloudflare my DNS provider, and I’ll manage my DNS records on Cloudflare after this change.

I opened up the domain management screen on Network Solutions and confirmed that the DNS information Cloudflare had discovered was a match for the information in my original DNS management screen. I will be able to add and delete DNS records in Cloudflare from this point forward, but for purposes of making the switch to Cloudflare I initially left everything alone.

undefined Step 3. Move the DNS to Cloudflare

Next, Cloudflare prompted me to choose a plan for this site. I chose the free plan option. I can change that later if I need to. Then I got a screen telling me to switch nameservers in my original DNS provider.

undefined

On my registrar, Network Solutions, I had to go through a couple of screens, opting to Change where domain points, then Domain Name Server, point domain to another hosting provider. That finally got me to a screen where I could input the new nameservers for my domain name.

undefined

Back on Cloudflare, I saw a screen like the following, telling me that the change was in progress. There was nothing to do for a while, I just needed to allow the change to propagate across the internet. The Cloudflare documentation assured me that the change should be seamless to end users, and that seemed logical since nothing had really changed so far except the switch in nameservers.

undefined

Several hours later, once the status changed from Pending to Active, I was able to continue the setup and configure the SSL security level. There were three possible levels. The Flexible level was the default: it encrypts traffic between my users and Cloudflare, but not between Cloudflare and my site’s server. Further security is only possible if there is an SSL certificate on the origin website server as well as on Cloudflare. Github Pages has an SSL certificate on the server, since they provide HTTPS for non-custom domains. I selected the Crypto tab in Cloudflare to choose the SSL security level I wanted and changed it to Full.

undefined Step 4. Confirm that HTTPS is Working Correctly

What I had accomplished at this point was to make it possible to access my site using HTTPS with the original HTTP addresses still working as before.

Next, it was time to check that HTTPS was working correctly. I visited the production site, and manually changed the address in my browser from HTTP://www.example.com to HTTPS://www.example.com. I checked the following things:

  • I confirmed there was a green lock displayed by the browser.
  • I clicked the green lock to view the security certificate details (see my previous article for a screenshot of what the certificate looks like), and confirmed it was displaying a security certificate from Cloudflare, and that it included my site’s name in its list of domains.
  • I checked the JavaScript console to be sure no mixed content errors were showing up. Mixed content occurs when you are still linking to HTTP resources on an HTTPS page, since that invalidates your security. I’ll discuss in more detail how to review a site for mixed content and other validation errors in the next article in this series.
Step 5. Set up Automatic Redirection to HTTPS

Once I was sure the HTTPS version of my site was working correctly, I could set up Cloudflare to handle automatic redirection to HTTPS, so my end users would automatically go to HTTPS instead of HTTP.

Cloudflare controls this with something it calls “Page Rules,” which are basically the kinds of logic you might ordinarily add to an .htaccess file. I selected the “Page Rules” tab and created a page rule that any HTTP address for this domain should always be switched to HTTPS.

undefined

Since I also want to standardize on www.example.com instead of example.com, I added another page rule to redirect traffic from HTTPS://example.com to HTTPS://www.example.com using a 301 redirect.

undefined

Finally, I tested the site again to be sure that any attempt to access HTTP redirected to HTTPS, and that attempts to access the bare domain redirected to the www sub-domain.

A Drupal Site Hosted on Pantheon

I also have several Drupal sites that are hosted on Pantheon and wanted to switch them to HTTPS, as well. Pantheon has instructions for installing individual SSL certificates for Professional accounts and above, but they also suggest an option of using the free Cloudflare account for any Pantheon account, including Personal accounts. Since most of my Pantheon accounts are small Personal accounts, I decided to set them up on Cloudflare as well.

The setup on Cloudflare for my Pantheon sites was basically the same as the setup for my Github Pages site. The only real difference was that the Pantheon documentation noted that I could make changes to settings.php that would do the same things that were addressed by Cloudflare’s page rules. Changes made in the Drupal settings.php file would work not just for traffic that hits Cloudflare, but also for traffic that happens to hit the origin server directly. Pantheon’s documentation notes that you don’t need to provide both Cloudflare page rules and Drupal settings.php configuration for redirects. You probably want to settle on one or the other to reduce future confusion. However, either, or both, will work.

These settings.php changes might also be adapted for Drupal sites not hosted on Pantheon, so I am copying them below.

// From https://pantheon.io/docs/guides/cloudflare-enable-https/#drupal
// Set the $base_url parameter to HTTPS:
if (defined('PANTHEON_ENVIRONMENT')) {
  if (PANTHEON_ENVIRONMENT == 'live') {
    $domain = 'www.example.com';
  }
  else {
    // Fallback value for development environments.
    $domain = $_SERVER['HTTP_HOST'];
  }
  # This global variable determines the base for all URLs in Drupal.
  $base_url = 'https://' . $domain;
}

// From https://pantheon.io/docs/redirects/#require-https-and-standardize-domain
// Redirect all traffic to HTTPS and WWW on live site:
if (isset($_SERVER['PANTHEON_ENVIRONMENT']) &&
  ($_SERVER['PANTHEON_ENVIRONMENT'] === 'live') &&
  (php_sapi_name() != "cli")) {
  if ($_SERVER['HTTP_HOST'] != 'www.example.com' ||
    !isset($_SERVER['HTTP_X_SSL']) ||
    $_SERVER['HTTP_X_SSL'] != 'ON') {
    header('HTTP/1.0 301 Moved Permanently');
    header('Location: https://www.example.com' . $_SERVER['REQUEST_URI']);
    exit();
  }
}

There was one final change I needed to make to my Pantheon sites that may or may not be necessary for other situations. My existing sites were configured with A records for the bare domain. That configuration uses Pantheon’s internal system for redirecting traffic from the bare domain to the www domain. But that redirection won’t work under SSL. Ordinarily you can’t use a CNAME record for the bare domain, but Cloudflare uses CNAME flattening to support a CNAME record for the bare domain. So once I switched DNS management to Cloudflare’s DNS service, I went to the DNS tab, deleted the original A record for the bare domain and replaced it with a CNAME record, then confirmed that the HTTPS bare domain properly redirected to the HTTPS www sub-domain.

undefined Next, A Deep Dive

Now that I have basic SSL working on a few sites, it’s time to dig in and try to get a better understanding about HTTPS/SSL terminology and options and see what else I can do to secure my web sites. I’ll address that in my next article, HTTPS Everywhere: Deep Dive Into Making the Switch.

Building Views Query Plugins for Drupal 8, Part 2

Welcome to the second installment of our three-part series on writing Views query plugins. In part one, we talked about the kind of thought and design work that must take place before coding begins. In part two, we’ll start coding our plugin and end up with a basic functioning example.

We’ve talked explicitly about needing to build a Views query plugin to accomplish our goal of having a customized Fitbit leaderboard, but we’ll also need field plugins to expose that data to Views, filter plugins to limit results sets, and, potentially, relationship plugins to span multiple API endpoints. There’s a lot to do, so let's dive in.
 

Getting started

In Drupal 8, plugins are the standard replacement for info hooks. If you haven’t yet had cause to learn about the plugin system in Drupal 8, I suggest the Drupalize.Me Drupal 8 Module Development Guide, which includes an excellent primer on Drupal 8 plugins.

Step 1: Create a views.inc file

Although most Views hooks required for Views plugins have gone the way of the dodo, there is still one that survives in Drupal 8: hook_views_data. The Views module looks for that hook in a file named [module].views.inc, which lives in your module's root directory. hook_views_data and hook_views_data_alter are the main things you’ll find here, but since Views is loading this file automatically for you, take advantage and put any Views-related procedural code you may need in this file.

Step 2: Implement hook_views_data()

Usually hook_views_data is used to describe the SQL tables that a module is making available to Views. However, in the case of a query plugin it is used to describe the data provided by the external service.

/**
 * Implements hook_views_data().
 */
function fitbit_views_example_views_data() {
  $data = [];
  // Base data.
  $data['fitbit_profile']['table']['group'] = t('Fitbit profile');
  $data['fitbit_profile']['table']['base'] = [
    'title' => t('Fitbit profile'),
    'help' => t('Fitbit profile data provided by the Fitbit API\'s User Profile endpoint.'),
    'query_id' => 'fitbit',
  ];
  return $data;
}

The format of the array is usually $data[table_name]['table'], but since there is no table I’ve used a short name for the Fitbit API endpoint, prefixed by the module name, instead. So far, I’ve found that exposing each remote endpoint as its own Views “table”—one-to-one—works well. It may be different for your implementation. This array needs to declare two keys: ‘group’ and ‘base’. When the Views UI refers to your data, it uses the ‘group’ value as a prefix. The ‘base’ key alerts Views that this table is a base table—a core piece of data available to construct views from (just like nodes, users, and the like). The value of the ‘base’ key is an associative array with a few required keys. The ‘title’ and ‘help’ keys are self-explanatory and are also used in the Views UI. When you create a new view, ‘title’ is what shows up in the “Show” drop-down under “View Settings”:

undefined

The ‘query_id’ key is the most important. The value is the name of our query plugin. More on that later.

Step 3: Expose fields

The data you get out of a remote API isn’t going to be much use to people unless they have fields they can display. These fields are also exposed by hook_views_data.

// Fields.
$data['fitbit_profile']['display_name'] = [
  'title' => t('Display name'),
  'help' => t('Fitbit users\' display name.'),
  'field' => [
    'id' => 'standard',
  ],
];
$data['fitbit_profile']['average_daily_steps'] = [
  'title' => t('Average daily steps'),
  'help' => t('The average daily steps over all the user\'s logged Fitbit data.'),
  'field' => [
    'id' => 'numeric',
  ],
];
$data['fitbit_profile']['avatar'] = [
  'title' => t('Avatar'),
  'help' => t('Fitbit users\' account picture.'),
  'field' => [
    'id' => 'fitbit_avatar',
  ],
];
$data['fitbit_profile']['height'] = [
  'title' => t('Height'),
  'help' => t('Fitbit user\'s height.'),
  'field' => [
    'id' => 'numeric',
    'float' => TRUE,
  ],
];

The keys that make up a single field definition include ‘title’ and ‘help’— again self-explanatory—used in the Views UI. The ‘field’ key is used to tell Views how to handle this field. There is only one required sub-key, ‘id,' and it’s the name of a Views field plugin. 

The Views module includes a handful of field plugins, and if your data fits one of them, you can use it without implementing your own. Here we use standard, which works for any plain text data, and numeric, which works for, well, numeric data. There are a handful of others. Take a look inside /core/modules/views/src/Plugin/views/field to see all of the field plugins Views provides out-of-the-box. Find the value for ‘id’ in each field plugin's annotation. As an aside, Views eats its own dog food and implements a lot of its core functionality as Views plugins, providing examples for when you're implementing your Views plugins. A word of caution, many core Views plugins assume they are operating with an SQL-based query back-end. As such you’ll want to be careful mixing core Views plugins in with your custom query plugin implementation. We’ll mitigate some of this when we implement our query plugin shortly.
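As an illustration of where those ‘id’ values come from, core’s plain-text field plugin carries its id in a @ViewsField annotation. The following is abbreviated from Drupal core’s Drupal\views\Plugin\views\field\Standard class; check your core version for the exact file contents:

```php
// Abbreviated from core's Standard field plugin
// (/core/modules/views/src/Plugin/views/field/Standard.php).
namespace Drupal\views\Plugin\views\field;

/**
 * Default implementation of the base field plugin.
 *
 * @ViewsField("standard")
 */
class Standard extends FieldPluginBase {
  // No overrides needed: plain-text values render as-is.
}
```

The string inside @ViewsField is exactly what you put in the ‘id’ sub-key of your field definition in hook_views_data.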

Step 4: Field plugins

Sometimes data from your external resource doesn’t line up with a field plugin that ships with Views core. In these cases, you need to implement a field plugin. For our use case, avatar is such a field. The API returns a URI for the avatar image. We’ll want Views to render that as an <img> tag, but Views core doesn’t offer a field plugin like that. You may have noticed that we set a field ‘id’ of ‘fitbit_avatar’ in hook_views_data above. That’s the name of our custom Views field plugin, which looks like this:

<?php

namespace Drupal\fitbit_views_example\Plugin\views\field;

use Drupal\views\Plugin\views\field\FieldPluginBase;
use Drupal\views\ResultRow;

/**
 * Class Avatar
 *
 * @ViewsField("fitbit_avatar")
 */
class Avatar extends FieldPluginBase {

  /**
   * {@inheritdoc}
   */
  public function render(ResultRow $values) {
    $avatar = $this->getValue($values);
    if ($avatar) {
      return [
        '#theme' => 'image',
        '#uri' => $avatar,
        '#alt' => $this->t('Avatar'),
      ];
    }
  }

}

Naming and file placement are important, as with any Drupal 8 plugin. Save the file at: fitbit_views_example/src/Plugin/views/field/Avatar.php. Notice that the namespace follows the file path, and also notice the annotation: @ViewsField("fitbit_avatar"). The annotation declares this class as a Views field plugin with the id ‘fitbit_avatar’, hence the use of that name back in our hook_views_data function. Also important, we’re extending FieldPluginBase, which gives us a lot of base functionality for free. Yay OOP! As you can see, the render method gets the value of the field from the row and returns a render array so that it appears as an <img> tag.

Step 5: Create a class that extends QueryPluginBase

After all that setup, we’re almost ready to interact with a remote API. We have one more task: to create the class for our query plugin. Again, we’re creating a Drupal 8 plugin, and naming is important so the system knows that our plugin exists. We’ll create a file named: 

fitbit_views_example/src/Plugin/views/query/Fitbit.php 

...that looks like this:

<?php

namespace Drupal\fitbit_views_example\Plugin\views\query;

use Drupal\views\Plugin\views\query\QueryPluginBase;

/**
 * Fitbit views query plugin which wraps calls to the Fitbit API in order to
 * expose the results to views.
 *
 * @ViewsQuery(
 *   id = "fitbit",
 *   title = @Translation("Fitbit"),
 *   help = @Translation("Query against the Fitbit API.")
 * )
 */
class Fitbit extends QueryPluginBase {
}

Here we use the @ViewsQuery annotation to identify our class as a Views query plugin, declaring our ‘id’ and providing some helpful meta information. We extend QueryPluginBase to inherit a lot of free functionality. Inheritance is a recurring theme with Views plugins. I’ve yet to come across a Views plugin type that doesn’t ship with a base class to extend. At this point, we’ve got enough code implemented to see some results in the UI. We can create a new view of type Fitbit profile and add the fields we’ve defined and we’ll get this:

[Screenshot: a new view of type Fitbit profile in the Views UI, with the fields we've defined added]

Not terribly exciting: we still haven't queried the remote API, so the view doesn't actually do anything yet. Still, it's good to stop here to make sure we haven't made any syntax errors and that Drupal can find and use the plugins we've defined.

As I mentioned, parts of Views core assume an SQL query backend. To work around that limitation, we need to implement two methods as stubs that satisfy any callers without doing any real work. Let's get those out of the way:

public function ensureTable($table, $relationship = NULL) {
  return '';
}

public function addField($table, $field, $alias = '', $params = array()) {
  return $field;
}

ensureTable is used by Views core to make sure that the generated SQL query contains the appropriate JOINs to ensure that a given table is included in the results. In our case, we don’t have any concept of table joins, so we return an empty string, which satisfies plugins that may call this method. addField is used by Views core to limit the fields that are part of the result set. In our case, the Fitbit API has no way to limit the fields that come back in an API response, so we don’t need this. We’ll always provide values from the result set, which we defined in hook_views_data. Views takes care to only show the fields that are selected in the Views UI. To keep Views happy, we return $field, which is simply the name of the field.

Before we come to the heart of our plugin query, the execute method, we’re going to need a couple of remote services to make this work. The base Fitbit module handles authenticating users, storing their access tokens, and providing a client to query the API. In order to work our magic then, we’ll need the fitbit.client and fitbit.access_token_manager services provided by the base module. To get them, follow a familiar Drupal 8 pattern:

/**
 * Fitbit constructor.
 *
 * @param array $configuration
 * @param string $plugin_id
 * @param mixed $plugin_definition
 * @param FitbitClient $fitbit_client
 * @param FitbitAccessTokenManager $fitbit_access_token_manager
 */
public function __construct(array $configuration, $plugin_id, $plugin_definition, FitbitClient $fitbit_client, FitbitAccessTokenManager $fitbit_access_token_manager) {
  parent::__construct($configuration, $plugin_id, $plugin_definition);
  $this->fitbitClient = $fitbit_client;
  $this->fitbitAccessTokenManager = $fitbit_access_token_manager;
}

/**
 * {@inheritdoc}
 */
public static function create(ContainerInterface $container, array $configuration, $plugin_id, $plugin_definition) {
  return new static(
    $configuration,
    $plugin_id,
    $plugin_definition,
    $container->get('fitbit.client'),
    $container->get('fitbit.access_token_manager')
  );
}

This is a common way of doing dependency injection in Drupal 8. We’re grabbing the services we need from the service container in the create method, and storing them on our query plugin instance in the constructor. 
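For completeness, the class also needs the corresponding use statements and properties to hold the injected services. A sketch, assuming the service classes live in the base module's Drupal\fitbit namespace:

```php
use Drupal\fitbit\FitbitAccessTokenManager;
use Drupal\fitbit\FitbitClient;
use Drupal\views\Plugin\views\query\QueryPluginBase;
use Drupal\views\ResultRow;
use Drupal\views\ViewExecutable;
use Symfony\Component\DependencyInjection\ContainerInterface;
```

```php
/**
 * Fitbit client.
 *
 * @var \Drupal\fitbit\FitbitClient
 */
protected $fitbitClient;

/**
 * Fitbit access token manager.
 *
 * @var \Drupal\fitbit\FitbitAccessTokenManager
 */
protected $fitbitAccessTokenManager;
```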

Now we’re finally ready for the heart of it, the execute method:

/**
 * {@inheritdoc}
 */
public function execute(ViewExecutable $view) {
  if ($access_tokens = $this->fitbitAccessTokenManager->loadMultipleAccessToken()) {
    $index = 0;
    foreach ($access_tokens as $uid => $access_token) {
      if ($data = $this->fitbitClient->getResourceOwner($access_token)) {
        $data = $data->toArray();

        $row['display_name'] = $data['displayName'];
        $row['average_daily_steps'] = $data['averageDailySteps'];
        $row['avatar'] = $data['avatar'];
        $row['height'] = $data['height'];

        // 'index' key is required.
        $row['index'] = $index++;

        $view->result[] = new ResultRow($row);
      }
    }
  }
}

The execute method is open-ended. At a minimum, you'll want to assign ResultRow objects to the $view->result[] member variable. As was mentioned in the first part of the series, the Fitbit API is atypical in that we hit the API once per row. For each successful request we build up an associative array, $row, where the keys are the field names we defined in hook_views_data and the values come from the API response. Here we use the client provided by the Fitbit base module to make a request to the User profile endpoint, which contains the data we want for a first iteration of our leaderboard: display name, avatar, and average daily steps. Note that it's important to track an index for each row. Views requires it, and without it, you'll be scratching your head as to why Views isn't showing your data. Finally, we create a new ResultRow object from the $row variable we built up and add it to $view->result. There are other things that are important to do in execute, like paging, filtering, and sorting. For now, this is enough to get us off the ground.

That’s it! We should now have a simple but functioning query plugin that can interact with the Fitbit API. After following the installation instructions for the Fitbit base module, connecting one or more Fitbit accounts and enabling the fitbit_views_example sub-module, you should be able to create a new View of type Fitbit profile, add Display name, Avatar, and Average Daily Steps fields and get a rudimentary leaderboard:

[Screenshot: a rudimentary leaderboard showing display name, avatar, and average daily steps]

Debugging problems

If the message ‘broken or missing handler’ appears when attempting to add a field or other type of handler, it usually points to a class naming problem somewhere. Go through your keys and class definitions and make sure that you’ve got everything spelled correctly. Another common issue is Drupal throwing errors because it can’t find your plugins. As with any plugin in Drupal 8, make sure your files are named correctly, put in the right folder, with the right namespace, and with the correct annotation.
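Plugin definitions are also cached, so a stale cache can make Drupal miss a plugin class you've just added or renamed. Assuming you have Drush available, rebuilding caches is a quick first debugging step:

```shell
# Rebuild Drupal's caches so plugin discovery picks up new or renamed classes.
drush cache-rebuild
```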

Summary

Most of the work here has nothing to do with interacting with remote services at all: it's all about declaring where your data lives and what it's called. Once we get past the numerous steps that are necessary for defining any Views plugin, the meat of creating a new query plugin is pretty simple.

  1. Create a class that extends QueryPluginBase
  2. Implement some empty methods to mitigate assumptions about a SQL query backend
  3. Inject any needed services
  4. Override the execute method to map your data into ResultRow objects, with properties named for your fields, and append them to the result array of the Views object.

In reality, most of your work will be spent investigating the API you are interacting with and figuring out how to model the data to fit into the array of fields that Views expects.

Next steps

In the third part of this article, we’ll look at the following topics:

  1. Exposing configuration options for your query object
  2. Adding options to field plugins
  3. Creating filter plugins

Until next time!