Lullabot

Fear Not: Emotional Fear in the Workplace

I love working in human resources and have spent nine years in this field (coming up on seven years with Lullabot). My most enjoyable workdays are the ones in which I get to help an actual person with a real-life problem. I like to be in the know, not because I’m a big control freak (I may be a little one), but because I like to be in a position to help others.

Besides working at Lullabot, I also go to college. This semester, I elected to take a business leadership course. It might be applicable to my job, I thought, as I clicked “enroll”. WHOA. “Might” could not have been more wrong. Every time I read another chapter in my textbook, I find myself nodding along thinking about the way Lullabot operates. It’s a company that certainly harbors an environment for participative leadership.

 I thank my lucky stars that I am taking this course at THIS point of my life, as an employee of Lullabot, and not as the 18-year-old-this-is-what-my-parents-said-to-do-after-high-school self. The words in the textbook have far more meaning than they otherwise would have. I am devouring this course like I’m starving and business leadership is the first food I can get my hands on.

One of my great honors at Lullabot is being part of the hiring team. I’m one team member who gets to correspond with potential candidates and am often the first voice they’ll hear on an initial screening phone call. It’s my pleasure to get to know people and ask a few simple questions. Sometimes, I’ll get asked a question or two, and more than once I’ve been asked, “What’s your favorite thing about working at Lullabot?” Undoubtedly, I’ll respond, “They treat us like adults.” Such a simple concept, really. Treat an adult…like an adult. Nevertheless, at previous companies where I’ve worked, stepping back and letting people do their work independently proved to be a hard thing for management to do.

Empowering people is the single most important element in managing our organization.  - Fred Smith, Chairman of FedEx


I stopped reading my textbook (The Art of Leadership, 5th Edition, by George Manning and Kent Curtis) when I got to Chapter 9: Empowerment in the Workplace and the Quality Imperative, because it spoke to me and I had to write out what I was feeling. In the section titled Communication Problems and Solutions, I came across a paragraph on fear:

Employees may find it hard to be truthful with leaders because of fear of punishment. If they have made a mistake, it may be difficult to communicate this information for fear of the leader’s reaction. Leaders can best circumvent this by showing appreciation for employees’ honesty and candor, even when they have made mistakes.


In 2010, Lullabot approached me with a project. I had helped them with a few events previously, and now they had a task that they could use some outside help on. It wasn’t an event this time, it was a mistake. Someone had made an eCommerce mistake, and they needed some manpower to fix it. This mistake was a little costly, and perhaps embarrassing, but the thing that stood out to me was, wait for it, no one was in trouble. No one was mad. No one was scared. No one lost their job. And it wasn’t a secret. Team members knew, and everyone was fine with it. This goes even further than treating people like adults. It’s treating people like humans. Everyone makes mistakes and Lullabot is ok with that. Our “Be Human” core value is my favorite. People are human for better or for worse.

This situation was one of the main reasons Lullabot earned my trust, thus gaining my loyalty. After helping fix the mistake, I was asked to stay on in an HR capacity with Lullabot. I often wonder if I owe my eventual hire to that mistake.

Safety, not courage, is (arguably) the opposite of fear. The opposing feeling to being scared is feeling secure. In my opinion, treating people like adults abolishes fear. But more than that, when we treat people like humans—with understanding, empathy and respect—we instill a feeling of safety. So the next time someone asks me my favorite thing about working at Lullabot, I’ll amend my answer. My favorite thing about Lullabot is the feeling of safety. Safe to be who I am (a work in progress), responsible for my actions through good and bad, and trusted to make decisions everyday.

The Drupal.org Infrastructure Team

Matt and Mike sit down with the Drupal.org infrastructure team and talk about the ins and outs of what it takes to keep Drupal.org's web properties and services (such as Composer, Git, the testing suite, etc.) up and running.

A New Dimension for Designing the Web

Consider that the traditional web of today is still a two-dimensional medium. The audience is looking at a flat screen. Users scroll up and down, sometimes left and right, on a two-dimensional plane. Your goal is to make those two dimensions convey meaning and order in a way that touches their hearts and minds. You want to give them an experience that is both memorable and meaningful but must do so within the confines of two dimensions.

Now consider how you would accomplish the same goal in a three-dimensional medium. There are new factors to consider such as the user’s head angle and their viewing radius. How do you make elements touchable in a manner that is intuitive without the benefit of haptic feedback? How do you deal with locomotion in 3D space when a user is sitting or standing still? Does your UI move with the user or disappear altogether? How do you use audio cues to get a user’s attention so that they turn their head to view what you want? We’re no longer designing in a flat medium, but within all the space encompassing a person.

It may seem like there are fewer constraints and that can be liberating, but it can also be a bit scary. The truth is that constraints make our work easier. Luckily, some guidelines are starting to emerge to govern designing for a three-dimensional medium.

There are also many design principles which stay the same. The basic way the human eye works hasn't changed. Principles such as contrast, color, and pattern recognition are still valid tools at our disposal. These concepts can still be useful as the building blocks in our new virtual medium. We may even be able to explore them like never before.

How VR will affect UX

VR comes with a new set of expectations from users and tools with which we can connect them to our medium.

Giving UX a Whole New Meaning with a Third Dimension

Interfaces will no longer constrain themselves to two-dimensional screens sitting on a desk. With a display mounted to your head, the screen itself seems to go away. As you move your head, there is new canvas space available all around. The head-mounted display gives a sense of depth and scale, which we can utilize. Instead of a screen in front of us, we can have browsers and apps represented all around us.

Additionally, we want to think of how to organize these apps in the space around us. Spatial organization of our UI components can help a user to remember where they can find things. Or it could degrade into the nightmare desktop we’ve all seen with a massive clutter of icons.

Placing typical UI elements in such an environment can be tricky. It can be hard to figure out what is optimal when anything is possible. To get an idea of some factors involved in finding what is optimal, take a look at some research done by Mike Alger, which is in turn based on some useful observations provided by Alex Chu of Samsung. Mike Alger’s research concludes that a measurable area, illustrated in the video linked below, is optimal for user interface elements.

Mike Alger’s 18-minute video “VR Interface Design Pre-Visualisation Methods” https://youtu.be/id86HeV-Vb8

This conclusion is based on several factors, as explained by Mike Alger in his paper. The first is the field of view when a user is looking straight ahead. This varies per device, but to give you an idea, the Oculus Rift’s FOV is 94.2°. The second factor is the distance at which a user can comfortably see. This is around 0.5 to 1 meters from the user, because the eyes strain to focus outside this range; Oculus recommends a distance of 0.75 meters to developers. The third factor is the head rotation of a user. Horizontally, the comfort zone is about 30° from center, with a maximum of 55° to the side. The fourth is head pitch, or the angle up and down that is comfortable for a user to position their head: up to 20° upwards with a maximum of 60°, and 12° downwards with a maximum of 40°.

Left: Seated angles of neck rotation.

Right: Combining rotation with FOV results in beginning zones for content.

Credit: Mike Alger - Visual Design Methods for Virtual Reality.
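
To make those numbers concrete, here is a rough JavaScript sketch that checks whether a proposed UI placement falls within the comfortable zone. It is only an illustration built from the figures quoted above, not code from Alger’s paper, and the function name and parameters are made up for this example.

// Rough comfort-zone check using the angles and distances quoted above.
// yawDeg: horizontal angle from straight ahead, in degrees.
// pitchDeg: vertical angle in degrees (positive = up, negative = down).
// distanceM: distance from the viewer, in meters.
function isComfortablePlacement(yawDeg, pitchDeg, distanceM) {
  const comfortableYaw = Math.abs(yawDeg) <= 30;               // ~30° of easy head rotation
  const comfortablePitch = pitchDeg <= 20 && pitchDeg >= -12;  // 20° up, 12° down
  const comfortableDepth = distanceM >= 0.5 && distanceM <= 1; // ~0.75m is the sweet spot
  return comfortableYaw && comfortablePitch && comfortableDepth;
}

// Example: a panel 25° to the right, 5° below eye level, 0.75m away.
console.log(isComfortablePlacement(25, -5, 0.75)); // true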

Creating Memorable Experiences

An interesting side effect of experiencing virtual reality through a head-mounted display is that it ties in more directly with your memory, thanks to the feeling of being part of an environment rather than watching it on a screen. Our brains can retain a lot of data that we gather from on-screen reading and watching, but feeling as though you’re a part of it all creates memories of the experience, not just retention of what we read or watched.

Creating memories and engaging a user on a level where they can almost forget that what they are experiencing is virtual is what we refer to as immersion. Immersion is at the heart of what VR applications are trying to accomplish. Being a core goal of a VR experience, we should try to understand the factors involved and the techniques that are quickly becoming best practices when trying to achieve the creation of an immersive experience.

Use Sound to Draw Attention

Positional audio is a way of seeing with your ears. We evolved having to deal with predators, so our brains know exactly where a sound comes from relative to our bodies. A recognizable sound behind you and to your left can quickly trigger not only an image of what made the sound, but also an approximation of where it came from. It also triggers feelings within us. We might feel fear or comfort depending on what our brain relates the sound to and how close it is to us. As in real life, so it is in VR. We can use positional audio cues to gain a user’s attention or to give them information that is not presented visually. Recall a time in VR when a sudden loud sound scared the pants off of you, and you can immediately remember everything else you were experiencing at the time.

Scale can be a Powerful Tool

Another useful tool is scale and how it affects the user. Creating a small model of something like a robot can make the user feel powerful, as though the robot were a cute toy. But scale the robot up to three or four times the size of the user and it’s now menacing. Depending on the memories you’re trying to create, scale can be a useful tool to shape how the user remembers the experience.

Create Beautiful Scenes to Remember

Visually stunning scenery can affect the immersion experience as well. A beautiful sunset or a starry night sky can give the user an environment to which they can relate. Invoking known places and scenes is an effective means of creating memorable experiences. It’s a means of triggering other similar memories and feelings and then building upon those with the direction you are trying to take the user. An immersed user may have been there before, but now there is a new experience to remember.

Make Learning Memorable

Teaching someone a new and wonderful thing can also be a useful memory trigger. Do you remember where you were when you first learned of HTML, or Photoshop? The knowledge that sticks out the most to us can trigger powerful images and the feelings we memorized in those moments. Heralded by some as an important catalyst in changing how we learn, VR has the potential to revolutionize education. We really are better able to create memories through VR, and what better memories than those of learning new and interesting things? Or perhaps of making less interesting things more interesting, and in doing so learning them more fully.

More VR Resources You Need to Know

Cardboard Design Lab

A set of intrinsic rules you need to know in order to respect your users’ physiology and treat them carefully. Google has gathered some of these principles into an app so you can learn them through a great immersive experience.

WebVR Information

This is a convenient website with lots of links to information about the WebVR specification, how to try it out, and how to contribute to its formation. It’s a great quick stop for getting into the nitty-gritty of what WebVR is and why it exists.

3D User Interfaces: Theory and Practice

This book addresses the critical area of 3D user interface design—a field that seeks to answer detailed questions that make the difference between a 3D system that is usable and efficient and one that causes user frustration, errors, and even physical discomfort.

Oculus Developer Center

Signing up for an account here can give you access to the developer forums which contain tons of information and discussion about VR, best practices, and shared experiences by other VR developers.

Learning Virtual Reality

Developing Immersive Experiences and Applications for Desktop, Web, and Mobile

The VR Book: Human-Centered Design for Virtual Reality

The VR Book: Human-Centered Design for Virtual Reality is not just for VR designers, it is for managers, programmers, artists, psychologists, engineers, students, educators, and user experience professionals. It is for the entire VR team, as everyone contributing should understand at least the basics of the many aspects of VR design.

Building an Enterprise React Application, Part 2

In the first part of this article, we talked a lot about architecture. This time around we’ll be looking at the build tools, processes and front-end code organization on the project. Many of the tools we used are popular and well-tested and I can recommend using any of them without hesitation.

I’d like to begin by briefly talking about adding dependencies (libraries, packages, etc.) to your JavaScript projects. If you’re coming from a Drupal background (as many of our readers are), then you may be quite cautious about adding dependencies to your projects. This is understandable since adding excessive modules to a Drupal site can have serious performance consequences.

With JavaScript projects, the situation is different. It’s very common to use micro-libraries, sometimes quite a few of them. The performance concerns in doing so in a JavaScript application are not as great. Rather than the number of dependencies, the focus should be on the size of the file(s) clients need to download to run your app.

The use of small libraries is a huge asset when building JavaScript applications because you can compose a tailored solution for the project at hand, rather than relying on a monolithic solution that may include a bunch of things you’ll never use. It’s really powerful to have that versatility.

Build Tools and Processes

Let’s look at a few of the key tools that the front-end team used on the project:

  • npm
  • webpack
  • Babel
  • PostCSS
  • ESLint
  • stylelint
  • Mocha and Chai (testing)

Let’s talk about these one by one…

Npm

On this project we used npm to manage dependencies. Since then, however, Facebook has published yarn, which is probably the better choice if you’re starting a project today. Whether you use yarn or npm, the actual modules will still come from the npm registry.

Another thing I’d like to mention is our use of npm scripts instead of task runners like Grunt and Gulp. You can use npm scripts to do things like watch files for changes, run tests, transpile your code, add pre and post-commit hooks, and much more. You may find task runners to be redundant in most cases.
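
For instance, a project’s package.json might define scripts along these lines (the commands here are hypothetical, not the ones used on this project):

{
  "scripts": {
    "build": "webpack --config webpack.config.js",
    "watch": "webpack --watch",
    "lint": "eslint src",
    "test": "mocha --require babel-register \"src/**/*.test.js\""
  }
}

You run them with npm run build, npm run lint, and so on; npm test is a built-in shortcut.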

Webpack

In even a small JavaScript application, you will likely have your code distributed in a number of different files—think ES6 modules, or React components. Webpack helps gather them all together, process them (ES6 transpiling, for example) and then output a single file—or multiple files, if that’s what you specify—to include in your application.

Browserify does much the same thing as webpack. Both tools process files. They aren’t task runners like Gulp or Grunt, which, as I mentioned, were unnecessary for us. Webpack has built-in hot swapping of modules, which is very helpful with development and is one reason why it’s widely preferred over Browserify for React projects.
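
A minimal webpack configuration for this kind of setup might look roughly like the sketch below. It assumes a single entry point at src/index.js and Babel transpiling via babel-loader; it is illustrative, not the project’s actual config.

// webpack.config.js (illustrative sketch)
const path = require('path');

module.exports = {
  entry: './src/index.js',          // the application's entry module
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js'           // the single bundled file to include on the page
  },
  module: {
    rules: [
      {
        test: /\.js$/,              // run ES6/JSX files through Babel
        exclude: /node_modules/,
        use: 'babel-loader'
      }
    ]
  }
};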

Babel

Babel is an indispensable part of a JavaScript build process. It allows developers to use ES6 (aka ES2015) today, even if your target browsers don’t yet support it. There are huge advantages to doing this—better variable scoping, destructuring, template literals, default parameters and on and on.

The release of new versions of JavaScript now happens annually. In fact, ES7 was finalized in June of last year, and with Babel you can use most of those features now. Unless something significant changes, transpiling looks to be a permanent feature of JavaScript development.
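
As a small illustration, this is the kind of ES6 code Babel will transpile so it runs in browsers that don’t support it natively:

// ES6: arrow function, default parameter, template literal, destructuring.
const greet = (name = 'world') => `Hello, ${name}!`;
const { length } = greet('Lullabot'); // pull the length property off the returned string
console.log(greet(), length);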

PostCSS

Do you use Autoprefixer to add vendor prefixes to your CSS rules? If yes, then you’re already using PostCSS. PostCSS is simply a JavaScript tool used to transform CSS. It can replicate the things Sass and Less do, but it doesn’t make assumptions about how you want to process your CSS files. For example, you can use tomorrow’s CSS syntax today, write custom plugins, find and fix syntax errors, add linting to your stylesheets, and much more. If you’re already using a tool like Autoprefixer, it may make sense to go all in and simplify your build process.
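
If your build runs PostCSS through a tool that reads postcss.config.js, the configuration can be as small as a list of plugins. A minimal sketch, assuming only Autoprefixer is in play:

// postcss.config.js (illustrative)
module.exports = {
  plugins: [
    require('autoprefixer') // vendor prefixes; other PostCSS plugins slot in the same way
  ]
};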

ESLint

After first using ESLint, I wondered how I ever got by without it. ESLint helps enforce consistent JavaScript code style. It’s easy to use and if you’re in a big team, it’s essential. There’s also a plugin to add specific linting rules for React. Quick note, there is a new project called prettier that you might prefer for enforcing consistent code styling. I haven’t tried it out yet, but it looks promising.
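
An ESLint configuration for a React project might start out roughly like the sketch below; the exact rules and plugins your team chooses will vary.

// .eslintrc.js (illustrative)
module.exports = {
  extends: ['eslint:recommended', 'plugin:react/recommended'],
  plugins: ['react'],
  parserOptions: {
    ecmaVersion: 2017,
    sourceType: 'module',
    ecmaFeatures: { jsx: true }
  },
  env: {
    browser: true,
    node: true
  }
};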

Stylelint

I alluded to stylelint above when I discussed PostCSS. It’s a linter for your CSS. As with ESLint, it provides great value in maintaining a consistent, readable codebase.

Testing tools

We used Mocha for a JavaScript testing framework and Chai for the assertion library, but there are other good ones. I also recommend using Enzyme to help with traversing your components during the testing process. We didn't use it on this project, but if we were starting again today we certainly would.
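
Just to show the shape of a test, here is a trivial Mocha spec using Chai’s expect style (an illustration, not code from the project):

// example.test.js (illustrative)
const { expect } = require('chai');

describe('an array', () => {
  it('reports its length', () => {
    expect([1, 2, 3]).to.have.lengthOf(3);
  });
});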

Build Tools and Processes Summary

You may have noticed a common thread between all the tools I listed above. They’re all JavaScript-centric. There are no Ruby gems or wrappers written in C or Java. Everything that is required is available through npm. If you’re a JavaScript developer (or you want to be) this is a big advantage. It greatly simplifies things. Plus, you can contribute to those projects to add functionality, or even fork them if need be.

Another thing to note, we’ve discarded tools that we could do without. PostCSS can do the job performed by Sass, so we went with PostCSS since we’re already using it for vendor prefixes. Npm scripts can do the job of Gulp and Grunt, so we used npm scripts since they are available by default. Front-end build processes are complex enough. When we can simplify, it’s a good idea to do so.

File Structure and Separation of Concerns

What is meant by the term, “separation of concerns”? From the perspective of front-end development, it’s commonly understood to mean that files that do similar things are separated out into their own directories. For example, there is commonly a folder for JavaScript files and another for CSS. Within those folders there are usually subfolders where we further categorize the files by what part of the application they apply to. It may look something like this:

JS/
  app.js
  Views/
    Header.js
CSS/
  global.css
  Header/
    Header.css

With the structure above things are being grouped by technology and then are further categorized by the part of the application in question. But isn’t the actual focus of concern something else entirely? In our example above, what if we structured things more like this?

app/
  App.js
  Header/
    Header.js
    Header.css

Now we have both the CSS and JavaScript files in a single folder. Any other file that pertains solely to the header would also go in this folder, because the focus of concern is the header, not the various technologies used to create it.

When we first started using this file structure on the project, I was uncertain how well it would work. I was used to separating my files by technology and it seemed like the right way to do it. In short order, however, I saw how grouping by component created a more intuitive file structure that made it easier to track down what was going on.

CSS in JavaScript

Taking this line of thinking to its logical conclusion, why not just include all the code for a component in one file? I think this is a really interesting idea, but I haven’t had the opportunity to do it in a real project yet and we certainly didn’t do it on this project.

However, if you’re interested, there are some good solutions that I think are worth consideration, including styled-components and Aphrodite. I realize that this suggestion is a bit controversial and many of you reading this will be put off by it, but after working on React projects for the last couple of years, it has started to make a lot of sense to me.
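
To give a sense of the approach, the canonical styled-components pattern keeps a component’s styles in the same file, scoped to that component. The snippet below is an illustrative example of the library’s basic usage, not code from this project:

// Button.js (illustrative styled-components usage)
import React from 'react';
import styled from 'styled-components';

// Styles live in a tagged template literal and apply only to this component.
const StyledButton = styled.button`
  padding: 0.5em 1em;
  border-radius: 3px;
  background: palevioletred;
  color: white;
`;

const Button = ({ children, onClick }) => (
  <StyledButton onClick={onClick}>{children}</StyledButton>
);

export default Button;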

Wrapping Up…

I find it remarkable how much time is spent on the two topics we’ve addressed here. Front-end build processes can be complex. Some tools, such as webpack, can be difficult to get your head around at first and it can feel overwhelming when you’re getting started.

There are a few React starter projects I’d like to suggest that may help. The first one is from Facebook, called Create React App. It can help you get started very quickly, but there are some limits. Create React App doesn’t try to be all things to all people. It doesn’t include server rendering, for example, although you can add it yourself if needed.

Another very interesting project is Next.js. It’s a small React framework that does support server rendering, so it may be worth trying out. Server rendering of JavaScript applications is important for many projects, certainly for any public websites you want to build. It’s also a fairly complex problem. There isn’t (to my knowledge) a simple, drop-in solution available at this time. Next.js may be the simplest, best-supported path to getting started on a universal React app, but it may not be right for your team or project.

We’ve created our own React boilerplate that supports server rendering, so feel free to look it over and take away what’s useful. Of course, there are many other React boilerplate projects out there—too many to mention—but I encourage you to look a few of them over if your team is planning a React project. Reviewing how other teams have approached the same problems will help you decide what’s going to be the best choice for your team.

HTTPS Everywhere: Deep Dive Into Making the Switch

In the previous articles, HTTPS Everywhere: Security is Not Just for Banks and HTTPS Everywhere: Quick Start With CloudFlare, I talked about why it’s important to serve even small websites using the secure HTTPS protocol, and provided a quick and easy how-to for sites where you don’t control the server. This article is going to provide a deep dive into SSL terminology and options. Even if you are offloading the work to a service like Cloudflare, it’s good to understand what’s going on behind the scenes. And if you have more control over the server you’ll need a basic understanding of what you need to accomplish and how to go about it.

At a high level, there are a few steps required to set up a website to be served securely over HTTPS:

  1. Decide what type of certificate to use.
  2. Install a signed certificate on the server.
  3. Configure the server to use SSL.
  4. Review your site for mixed content and other validation issues.
  5. Redirect all traffic to HTTPS.
  6. Monitor the certificate expiration date and renew it before it expires.

Your options are dependent on the type of certificate you want and your level of control over the website. If you self-host, you have unlimited choices, but you’ll have to do the work yourself. If you are using a shared host service, you’ll have to see what SSL options your host offers and how they recommend setting it up. Another option is to set up SSL on a proxy service like the Cloudflare CDN, which stands between your website and the rest of the web.

I’m going to go through these steps in detail.

Decide Which Certificate to Use

Every distinct domain needs its own certificate, so if you are serving content at www.example.com and blog.example.com, both domains need to be certified. Certificates are provided by a Certificate Authority (CA). There are numerous CAs that will sell you a certificate, including DigiCert, VeriSign, GlobalSign, and Comodo. There are also CAs that provide free SSL certificates, like Let's Encrypt.

Validation Levels

There are several certificate validation levels available.

Domain Validation (DV)

A Domain Validation (DV) certificate indicates that the applicant has control over the specified DNS domain. DV certificates do not assure that any particular legal entity is connected to the certificate, even if the domain name may imply that. The name of the organization will not appear next to the lock in the browser since the controlling organization is not validated. DV certificates are relatively inexpensive, or even free. It’s a low level of authentication, but it provides assurance that the user is not on a spoofed copy of a legitimate site.

Organization Validation (OV)

OV certificates verify that the applicant is a legitimate business. Before issuing the SSL certificate, the CA performs a rigorous validation procedure, including checking the applicant's business credentials (such as the Articles of Incorporation) and verifying the accuracy of its physical and Web addresses.

Extended Validation (EV)

Extended Validation certificates are the newest type of certificate. They provide more validation than the OV level and adhere to industry-wide certification guidelines established by leading Web browser vendors and Certificate Authorities. To clarify the degree of validation, the name of the verified legal entity is displayed in the browser, in green, next to the lock. EV certificates are more expensive than DV or OV certificates because of the extra work they require from the CA. EV certificates convey more trust than the other alternatives, so they are appropriate for financial and commerce sites, but they are useful on any site where trust is important.

Certificate Types

In addition to the validation levels, there are several types of certificates available.

Single Domain Certificate

An individual certificate is issued for a single domain. It can be either DV, OV, or EV.

Wildcard Certificate

A wildcard certificate will automatically secure any sub-domains that a business adds in the future. They also reduce the number of certificates that need to be tracked. A wildcard domain would be something like *.example.com, which would include www.example.com, blog.example.com, help.example.com, etc. Wildcards work only with DV and OV certificates. EV certificates cannot be provided as wildcard certificates, since every domain must be specifically identified in an EV certificate.

Multi-Domain Subject Alternative Name (SAN)

A multi-domain SAN certificate secures multiple domain names on a single certificate. Unlike a wildcard certificate, the domain names can be totally unrelated. It can be used by services like Cloudflare that combine a number of domains into a single certificate. All domains are covered by the same certificate, so they have the same level of credentials. A SAN certificate is often used to provide multiple domains with DV level certification, but EV SAN certificates are also available.

Install a Signed Certificate

The process of installing an SSL certificate is initiated on the server where the website is hosted by creating a 2048-bit RSA public/private key pair, then generating a Certificate Signing Request (CSR). The CSR is a block of encoded text that contains information that will be included in the certificate, like the organization name and location, along with the server’s public key. The CA then uses the CSR and the public key to create a signed SSL certificate, or a Certificate Chain. A certificate chain consists of multiple certificates where each certificate vouches for the next. This signed certificate or certificate chain is then installed on the original server. The public key is used to encrypt messages, and they can only be decrypted with the corresponding private key, making it possible for the user and the website to communicate privately with each other.

Obviously, this process only works if you have shell access or a control panel UI on the server. If your site is hosted by a third party, it will be up to the host to determine how, if at all, they will allow their hosted sites to be served over HTTPS. Most major hosts offer HTTPS, but specific instructions and procedures vary from host to host.

As an alternative, there are services, like Cloudflare, that provide HTTPS for any site, no matter where it is hosted. I discussed this in more detail in my previous article, HTTPS Everywhere: Quick Start With CloudFlare.

Configure the Server to Use SSL

The next step is to make sure the website server is configured to use SSL. If a third party manages your servers, like a shared host or CDN, this is handled by the third party and you don’t need to do anything other than determine that it is being handled correctly. If you are managing your own server, you might find Mozilla's handy configuration generator and documentation about Server Side TLS useful.

One important consideration is that the server and its keys should be configured for Perfect Forward Secrecy (PFS). Prior to PFS, an attacker could record encrypted traffic over time and store it. If they got access to the private key later, they could then decrypt all of that historic data. Security around the private key might be relaxed once the certificate expires, so this is a genuine issue. PFS ensures that even if the private key is disclosed later, it can’t be used to decrypt prior encrypted traffic. The Heartbleed bug is an example of why this matters: PFS would have prevented some of the damage it caused. If you’re using a third-party service for SSL, be sure it uses PFS. Cloudflare does, for instance.

Normally SSL certificates have a one-to-one relationship to the IP address of their domains. Server Name Indication (SNI) is an extension of TLS that provides a way to manage multiple certificates on the same IP address. SNI-compatible browsers (most modern browsers are SNI-compatible) can communicate with the server to retrieve the correct certificate for the domain they are trying to reach, which allows multiple HTTPS sites to be served from a single IP address.

Test the server’s configuration with Qualys' handy SSL Server Test. You can use this test even on servers you don’t control! It will run a battery of tests and give the server a security score for any HTTPS domain.

Review Your Site for HTTPS Problems

Once a certificate has been installed, it’s time to scrutinize the site to be sure it is totally valid using HTTPS. This is one of the most important, and potentially time-consuming, steps in switching a site to HTTPS.

To review your site for HTTPS validation, visit it by switching the HTTP in the address to HTTPS and scan the page source. Do this after a certificate has been installed; otherwise, the validation error from the lack of a certificate may prevent other validation errors from even appearing.

A common problem that prevents validation is the problem of mixed content, or content that mixes HTTP and HTTPS resources on the page. A valid HTTPS page should not include any HTTP resources. For instance, all JavaScript files and images should be pulled from HTTPS sources. Watch canonical URLs and link meta tags, as they should use the same HTTPS protocol. This is something that can be fixed even before switching the site to HTTPS, since HTTP pages can use HTTPS resources without any problem, just not the reverse.

There used to be a recommendation to use protocol-relative links, such as //example.com instead of http://example.com, but now the recommendation is to just always use HTTPS if it’s available, since an HTTPS resource works fine under either protocol.

Absolute internal links should not conflate HTTP and HTTPS references. Ideally, all internal links should be relative links anyway, so they will work correctly under either HTTP or HTTPS. There are lots of other benefits of relative links, and few reasons not to use them.

For the most part, stock Drupal websites already use relative links wherever possible. In Drupal, some common sources of mixed content problems include:

  • Hard-coded HTTP links in custom block content.
  • Hard-coded HTTP links added by content authors in body, text, and link fields.
  • Hard-coded HTTP links in custom menu links.
  • Hard-coded HTTP links in templates and template functions.
  • Contributed modules that hard-code HTTP links in templates or theme functions.

Most browsers will display HTTPS errors in the JavaScript console. That’s the first place to look if the page isn’t validating as HTTPS. Google has an example page with mixed content errors where you can see how this looks.

Redirect all Traffic to HTTPS

Once you’ve assured yourself that your website passes SSL validation, it’s time to be sure that all traffic goes over HTTPS instead of HTTP. You need 301 redirects from your HTTP pages to their HTTPS equivalents. If a website was already in production on HTTP, search engines have already indexed your pages, and the 301 redirect ensures that search engines understand the new pages are a replacement for the old pages.

If you haven’t already, you need to determine whether you prefer the bare domain or the www version, example.com vs www.example.com. You should already be redirecting traffic away from one to the other for good SEO. When you include the HTTP and HTTPS protocols, at a minimum you will have four potential addresses to consider: http://example.com, http://www.example.com, https://example.com, and https://www.example.com. One of those should survive as your preferred address. You’ll need to set up redirects to reroute traffic away from all the others to that preferred location.

Specific details about how to handle redirects on the website server will vary depending on the operating system and configuration on the server. Shared hosts like Acquia Cloud and Pantheon provide detailed HTTPS redirection instructions that work on their specific configurations. Those instructions could provide useful clues to someone configuring a self-hosted website server as well.

HTTP Strict Transport Security (HSTS)

The final level of assurance that all traffic uses HTTPS is to implement the HTTP Strict Transport Security (HSTS) header on the secured site. The HSTS header creates a browser policy to always use HTTPS for the specified domain. Redirects are good, but there is still the potential for a Man-in-the-Middle to intercept the HTTP communication before it gets redirected to HTTPS. With HSTS, after the first communication with a domain, that browser will always initiate communication with HTTPS. The HSTS header contains a max-age when the policy expires, but the max-age is reset every time the user visits the domain. The policy will never expire if the user visits the site regularly, only if they fail to visit within the max-age period.
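
The header itself is a single line. A common starting point looks like the following; the max-age value (roughly six months, in seconds) is just an example, and the right policy depends on your site:

Strict-Transport-Security: max-age=15768000; includeSubDomains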

If you’re using Cloudflare’s SSL, as in my previous article, you can set the HSTS header in Cloudflare’s dashboard. It’s a configuration setting under the “Crypto” tab.

Local, Dev, and Stage Environments

A final consideration is whether or not to use HTTPS on all environments, including local, dev, and stage environments. That is truly HTTPS everywhere! If the live site uses HTTPS, it makes sense to use HTTPS in all environments for consistency.

HTTPS Is Important

Hopefully, this series of articles provides convincing evidence that it's important for sites of all sizes to start using the HTTPS protocol, and some ideas of how to make that happen. HTTPS Everywhere is a worthy initiative!

Drupal Serialization Step-by-Step

In my previous article about the serializer component, I touched on the basic concepts involved when serializing an object. To summarize, serialization is the combination of encoding and normalization. Normalizers simplify complex objects, like User or ComplexDataDefinition. Denormalizers perform the reverse operation. Using a structured array of data, they generate complex objects like the ones listed above.

In this article, I will focus on the Drupal integration of the Symfony serializer component. For this, I will guide you step-by-step through a module I created as an example. You can find the module at https://github.com/e0ipso/entity_markdown. I have created a different commit for each step of the process, and this article includes a link to the code in each step at the beginning of each section. However, you can use GitHub UI to browse the code at any time and see the diff.

When this module is finished, you will be able to transform any content entity into a Markdown representation of it. Rendering a content entity with Markdown might be useful if you wanted to send an email summary of a node, for instance, but the real motivation is to show how serialization can be important outside the context of web services.

Add a new normalizer service

These are the changes for this step. You can browse the state of the code at this step here.

Symfony’s serializer component begins with a list of normalizer classes. Whenever an object needs to be normalized or serialized, the serializer will loop through the available normalizers to find one that declares support for the type of object at hand (in our case, a content entity). If you want to add a class to the list of eligible normalizers, you need to create a new tagged service.

A tagged service is just a regular class definition that comes with an entry in mymodule.services.yml so the service container can find it and instantiate it whenever appropriate. For a service to be a tagged service, you need to add a tags property with a name. You can also add a priority integer to convey precedence with respect to services tagged with the same name. For a normalizer to be recognized by the serialization module, you need to add the tag name normalizer.
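
In mymodule.services.yml, a tagged normalizer service looks roughly like this. The service and class names below are illustrative, not necessarily the ones used in the Entity Markdown module:

services:
  mymodule.normalizer.content_entity:
    class: Drupal\mymodule\Normalizer\ContentEntityNormalizer
    tags:
      - { name: normalizer, priority: 10 }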

When Drupal core compiles the service container, our newly created tagged service will be added to the serializer’s list in what is called a compiler pass. This is the place in Drupal core where that happens. The service container is then cached for performance reasons. That is why you need to clear your caches when you add a new normalizer.

Our normalizer is an empty class at the moment. We will fix that in a moment. First, we need to turn our attention to another collection of services that need to be added to the serializer, the encoders.

Include the encoder for the Markdown format

These are the changes for this step. You can browse the state of the code at this step here.

Similarly to a normalizer, the encoder is also added to the serialization system via a tagged service. It is crucial that this service implements `EncoderInterface`. Note that at this stage, the encoder does not contain its most important method, encode(). However, you can see that it contains supportsEncoding(). When the serializer component needs to encode a structured array, it will test all the available encoders (those tagged services) by executing supportsEncoding() and passing in the format specified by the user. In our case, if the user specifies the 'markdown' format, our encoder will be used to transform the structured array into a string, because supportsEncoding() will return TRUE. To do the actual encoding it will use the encode() method, which we will write later.
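
Stripped to its essentials, the encoder class looks something like the sketch below. It is simplified (the real module’s code differs), and encode() only gets a real body in a later step:

<?php

namespace Drupal\entity_markdown\Encoder;

use Symfony\Component\Serializer\Encoder\EncoderInterface;

/**
 * Simplified sketch of a Markdown encoder.
 */
class MarkdownEncoder implements EncoderInterface {

  const FORMAT = 'markdown';

  /**
   * The serializer calls this to ask whether we handle the requested format.
   */
  public function supportsEncoding($format) {
    return $format === static::FORMAT;
  }

  /**
   * Required by the interface; implemented in a later step.
   */
  public function encode($data, $format, array $context = []) {
    // To be written: turn the normalized array into a Markdown string.
    return '';
  }

}

Next, let’s describe the normalization process.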

Normalize content entities

The normalization will differ each time. It depends on the format you want to turn your objects into, and it depends on the type of objects you want to transform. In our example, we want to turn a content entity into a Markdown document.

For that to happen, the serializer will need to be able to:

  1. Know when to use our normalizer class.
  2. Normalize the content entity.
  3. Normalize any field in the content entity.
  4. Normalize all the properties in every field.
Discover our custom normalizer

These are the changes for this step. You can browse the state of the code at this step here.

For a normalizer to be considered a good fit for a given object it needs to meet two conditions:

  • Implement the `NormalizerInterface`.
  • Return `TRUE` when calling `supportsNormalization()` with the object to normalize and the format to normalize to.

The process is nearly the same as the one we used to determine which encoder to use. The main difference is that we also pass the object to normalize to the supportsNormalization() method. That is a critical part, since it is very common to have multiple normalizers for the same format, depending on the type of object that needs to be normalized. A Node object will have different code turning it into a structured array than an HttpException will. We take that into account in our example by checking if the object being normalized is an instance of ContentEntityInterface.
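
In code, that check can be as small as the following method (a simplified sketch of what a supportsNormalization() implementation on the content entity normalizer might look like):

// Inside ContentEntityNormalizer (simplified sketch of one method).
public function supportsNormalization($data, $format = NULL) {
  // Only handle content entities being normalized to the 'markdown' format.
  return $format === 'markdown' && $data instanceof \Drupal\Core\Entity\ContentEntityInterface;
}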

Normalize the content entity

These are the changes for this step. You can browse the state of the code at this step here.

This step contains a first attempt to normalize the content entity that gets passed as an argument to the normalize() method of our normalizer.

Imagine that our requirements are that the resulting markdown document needs to include an introductory section with the entity label, entity type, bundle and language. After that, we need a list with all the field names and the values of their properties. For example, the body field of a node will result in the name field_body and the values for format, summary and value. In addition to that any field can be single or multivalue, so we will take that into consideration.

To fulfill these requirements, I've written a bunch of code that deals with the specific use case of normalizing a content entity into a structured array ready to be encoded into Markdown. I don’t think the specific code is relevant to explaining how normalization works, but I've added code comments to help you follow my logic.

You may have spotted the presence of a helper method called normalizeFieldItemValue() and a comment that says Now transform the field into a string version of it. Those two are big red flags suggesting that our normalizer is doing more than it should, and that it’s implicitly normalizing objects that are not of type ContentEntityInterface but of type FieldItemListInterface and FieldItemInterface. In the next section we will refactor the code in ContentEntityNormalizer to defer that implicit normalization to the serializer.

Recursive normalization

These are the changes for this step. You can browse the state of the code at this step here.

When the serializer is initialized with the list of normalizers, it checks whether each one implements SerializerAwareInterface. For the ones that do, the serializer adds a reference to itself. That way you can serialize/normalize nested objects during the normalization process. You can see how our ContentEntityNormalizer extends from SerializerAwareNormalizer, which implements the aforementioned interface. The practical impact is that we can use $this->serializer->normalize() from within our ContentEntityNormalizer. We will use that to normalize all the field lists in the entity and the field items inside of those.

First, turn your focus to the new version of the ContentEntityNormalizer. You can see how the normalizer is divided into parts that are specific to the entity, like the label, the entity type, the bundle, and the language. The normalization for each field item list is now done in a single call to $this->serializer->normalize($field_item_list, $format, $context). We have reduced the lines of code by almost half, and the cyclomatic complexity of the class even further. This has a great impact on the maintainability of the code.

All this code has now been moved to two different normalizers:

  • FieldItemListNormalizer contains the code that deals with normalizing single and multivalue fields. It uses the serializer to normalize each individual field item.
  • FieldItemNormalizer contains the code that normalizes the individual field items values and their properties/columns.

You can see that for the serializer to be able to recognize our new `FieldItemListNormalizer` and `FieldItemNormalizer` objects, we need to add them to the service container, just like we did for the ContentEntityInterface normalizer.

A very nice side effect of this refactor, in addition to the maintainability improvement, is that a third party module can build upon our code more easily. Imagine that this third party module wants to make all field labels bold. Before the refactor they would need to introduce a normalizer for content entities—and play with the service priority so it gets selected before ours. That normalizer would contain a big copy and paste of a big blob of code in order to be able to make the desired tweaks. After the refactor, our third party would only need to have a normalizer for the field item list (which outputs the field label) with more priority than ours. That is a great win for extensibility.

Implement the encoder

As we said above, the most important part of the encoder is encapsulated in the `encode()` method. That is the method in charge of turning the structured array from the normalization process into a string. In our particular case, we treat each entry of the normalized array as a line in the output, then we append any suffix or prefix that may apply.

Further development

At this point the Entity Markdown module is ready to take any entity and turn it into a Markdown document. The only question is how to execute the serializer. If you want to execute it programmatically, you only need to do:

\Drupal::service('serializer')->serialize(\Drupal\node\Entity\Node::load(1), 'markdown');

However there are other options. You could declare a REST format like the HAL module so you can make an HTTP request to http://example.org/node/1?_format=markdown and get a Markdown representation of the node in response (after configuring the corresponding REST settings).

Conclusion

The serialization system is a powerful tool that allows you to reshape an object to suit your needs. The key concepts that you need to understand when creating a custom serialization are:

  • Tagged services for discovery
  • How a normalizer and an encoder get chosen for the task
  • How recursive serialization can improve maintainability and limit complexity.

Once you are familiar with the serializer component, you will start noticing use cases for it. Instead of using hard-coded solutions with poor extensibility and maintainability, start leveraging the serialization system.

The Unexpected Power of Viewport Units in CSS

The days of fixed-width designs and needing to only test against a handful of viewport sizes are gone. We live in a fluid-width world with a myriad of device sizes and aspect ratios. Percentage-based units allow us to accommodate the variety of possible ways our content will be viewed, but they don’t work in every scenario. Viewport percentage units (https://developer.mozilla.org/en-US/docs/Web/CSS/length#Viewport-percentage_lengths), or “viewport units” for short, offer an alternative “fluid” value to use when percentage-based units prove inadequate. For example, viewport units can be useful when trying to create an equal height/width stack of elements.

Viewport units control attributes for elements on the page based on the size of the screen whereas percentages inherit their size from the parent element. For example, height: 100%; applied to an element is relative to the size of its parent. In contrast, height: 100vh will be 100% of the viewport height regardless of where the element resides in the DOM.

Let’s take a look at the syntax:

  • vw: 1% of the viewport’s width.
  • vh: 1% of the viewport’s height.
  • vmin: 1% of the viewport’s smaller dimension (width or height, whichever is smaller).
  • vmax: 1% of the viewport’s larger dimension (width or height, whichever is larger).

Now that we have the basic terminology laid out let’s look at an example.

Fixed-ratio cards

In this example, we will also leverage Flexbox (https://developer.mozilla.org/en-US/docs/Learn/CSS/CSS_layout/Flexbox) to lay out the width of our elements. Percentages fail us here, as it’s not possible to calculate the height of the items dynamically when the number of items in the list affects the height of their container. It is worth noting a similar result can also be achieved with intrinsic ratios. But by leveraging viewport units, we get the same effect with less code and without having to use absolute positioning.

.stack {
  display: flex;
  flex-wrap: wrap;
}

.stack__element {
  flex: 50vw;
  height: 50vw;
}

See on codepen. Try changing the height from 50vw to 50%, and observe the results.

Full-height elements

Viewport units shine when an image must scale to the height of the user’s screen. Without viewport units, we would need all the container elements of .image to have height: 100%; on them. Get the same result with less code:

.image {
  height: 100vh;
  width: auto;
}

See on codepen. Again, replace 100vh with 100% and see what happens.

Keeping an element shorter than the screen

In a similar vein, what if we want to force an element's height to be shorter than the viewport? This technique can be useful if you want to explicitly control the height of an element relative to the viewport size so that it will always remain in view.

.shorten-me {
  max-height: 90vh;
}

See on codepen. To play with this, try reducing the width of the screen in codepen so that you have an extremely narrow viewport. Note the behavior of the text box.

Scaling text

Rem and em give developers great flexibility when adjusting font sizes, but they do not scale dynamically with the viewport size. However, we can leverage the way rem inherits its base font size from the root element: set the root element's font size using a viewport unit, and everything sized in rem will scale with it.

Use this method carefully. Text could potentially become illegible as it scales with the viewport. In practice, media queries combine nicely with vh units to ensure readability across screen sizes. If you are going to implement something like this in your design, I highly recommend Zell Liew’s in-depth write up on viewport-based typography.

html {
  font-size: 16px;
}

h1 {
  font-size: calc(100% + 5vw);
}

See on codepen.

Breaking out of the container

Viewport units make it possible to break outside of a containing element or elements. In scenarios where the CMS makes it difficult or impossible to alter our markup in an HTML template, using viewport units can achieve the desired result regardless of the markup. This technique won't work in every scenario, but it’s a nice trick in some instances.

.container {
  max-width: 1024px;
  margin: 0 auto;
}

.breakout {
  position: relative;
  left: 50%;
  transform: translate(-50%, 0);
  width: 100vw;
}

See on codepen.

Browser support and gotchas

Though there is solid support for viewport units across all major web browsers, you can still come across bugs including:

  • Safari. One of the major problems you need to look out for is lack of support for viewport units inside calc() in Safari 8 and below.
  • Some versions of Internet Explorer and Edge do not have complete support; for example, vmax is not supported.
  • Platforms, especially Windows, are inconsistent about how they count the width of the scrollbar across browsers.
  • Chrome at this time does not evaluate viewport units when printing.

Because there are still a few bugs, it is always a good practice to check caniuse before implementing a newer technique in your design.

Conclusion

Viewport units are one of my favorite additions to browsers in recent years. Nevertheless, if there are places in your design where you do not want an element to change relative to the screen size, then viewport units are not what you need. But if you want great flexibility in tailoring a site's elements to a wide array of devices, viewport units are the perfect fit. I hope that using them will serve you well.

Building an Enterprise React Application, Part 1

Last year, Lullabot was asked to help build a large-scale React application for a major U.S. media company and, lucky for me, I was on the team. An enterprise React application is often part of a complex system of applications, and that was certainly the case for this project. What follows is part one of our discussion. It includes a high-level view of the overall application architecture as well as a look at the specific architecture used for the React part of the project that was the focus of my work.

Web Application Architecture

Take a quick look at the diagram below. It describes the high-level architecture of the collection of applications that comprise the site. It begins with the content management system, or CMS. In this project, the CMS was Drupal, but this could just as well have been Wordpress or any number of alternatives—basically, software for editorial use that allows non-technical content creators to add pages, define content relationships, and perform other common editorial tasks. No pages or views are served to users directly from the CMS.

[Diagram: high-level web application architecture, with the CMS feeding an API behind caching layers, a Node.js server, and client devices]

It’s not uncommon to have additional data sources besides the CMS feeding the API, as was the case on this project. In that sense our diagram oversimplifies things, but it does give a good sense of the data flows.

The API

The API is software that provides a consistent view into the data of the CMS (as well as other data sources, when present). The database in a CMS like Drupal is normalized. One important task of the API is to de-normalize this data, which allows clients—web browsers, mobile apps, smart TVs, etc.—to make fewer round trips.

The API is a critical part of the application. The real business case for an architecture like this is to have a single data source serve content across a range of platforms in a consistent, efficient way. Client devices make an HTTP request to the API and receive a response with the requested data, usually in JSON format.

Caching Layers

Having a caching layer in front of your API and Node.js servers helps reduce load on the API server and decreases response time. Our client uses Akamai as a CDN and caching solution, but there are many other options.

Node.js Server

We’ve finally gotten to where the code for the React web app lives. We used the server-side JavaScript application framework, Express, on the project. Express was used to create an HTTP server that responds to requests from web clients. It’s also where we did the server-side rendering of the React application.

Clients

In the diagram, I’ve added icons to represent mobile apps and web browsers, respectively, but any number of devices may consume the API’s data. For example, our client serves not only web browsers and a mobile app from the API, but also Roku boxes and Samsung TVs.

What’s happening with the web client is pretty interesting. The first request by a browser goes to the Node.js server, which will return the pre-rendered first page. This is server-side React and it’s helping provide a faster load time.

Without rendering first on the server, the client would have to retrieve the page and then begin rendering, creating a lag on first load as well as potentially having an adverse impact on SEO. Subsequent pages will be routed on the client using a library like React Router. The app will then make requests directly to the API for the data it needs for a specific “page” or route.

React Architecture

When thinking of JavaScript application architecture, the MVC pattern immediately comes to mind for many people. React isn’t an MVC framework like Angular or Ember; it’s a library that handles views. If that’s the case, then what architecture does a typical large-scale React application use?

On this project, we used Redux. Redux is both a library and an architecture/pattern influenced by the Flux architecture from Facebook.

Two distinguishing characteristics of Redux are strict unidirectional data flow (shared with Flux) and storing all application state as a single object. The data lifecycle in a Redux app has four steps:

  1. Dispatch an action.
  2. Call to a reducer function.
  3. A root reducer combines the output of the various reducer functions into a single state tree.
  4. The store saves the state returned by the root reducer, triggering view updates.

The diagram below illustrates this data flow:

[Diagram: Redux data flow from a view component through an action creator and action to the reducer functions and the store]

Let’s walk through this step by step.

1. Dispatch an action

Let’s say you have a list of items displayed in a component (represented by the blue box in the diagram) and you’d like the user to be able to delete an item by clicking a button next to it. When a button is clicked, there will be a call to the Redux dispatch function. It’s a common pattern to also pass a call to an action creator into this function. It may look something like this:

dispatch(deleteItem(itemId));

In the example above, the function deleteItem is an action creator (yellow box). Action creators return plain JavaScript objects (orange box). In our case the return object looks like this:

{ type: 'DELETE_ITEM', itemId: itemId }

Note that actions must always have a type property. In this simple example, we could have passed the plain object straight to the dispatch function and it would have worked just fine. However, an advantage of using action creators is that they can transform the data before it’s passed to the reducer. For example, action creators are a great place for API calls (typically with the help of middleware), adding timestamps, or anything else that may cause side effects.
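To make the example concrete, here is a minimal sketch of what the deleteItem action creator might look like. The names (DELETE_ITEM, deleteItem, itemId) are illustrative, not taken from the project’s actual code:

// Hypothetical action type constant and action creator for the example above.
const DELETE_ITEM = 'DELETE_ITEM';

function deleteItem(itemId) {
  // Return a plain JavaScript object describing what happened.
  return {
    type: DELETE_ITEM,
    itemId,
  };
}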

2. Call to a reducer function

Once the action creator returns the action to the dispatch function, the store then calls the reducer function (green box). The store passes two things to the reducer: the current state and the action. An important thing to note is that Redux reducers must be pure functions. Passing the same input to a reducer function should result in exactly the same output every time.

The reducer then computes the next state. It does this using a switch statement that checks for the action type, which in our example case is “DELETE_ITEM”, and then returns the new state. An important point here is that state is immutable in Redux. The state object is never changed directly, but rather a new state object is created and returned based on the specified action passed to the reducer.

In our example, that might look like this:

// The code below is part of our itemsReducer. Default state is defined in the
// reducer function definition.
switch (action.type) {
  case DELETE_ITEM:
    return { ...state, lastItemDeleted: action.itemId };
  default:
    return state;
}
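For reference, here is a sketch of how the full itemsReducer might be written, with the default state supplied as a default parameter. The state shape shown here is hypothetical:

// Hypothetical initial state; the real application's state tree was larger.
const initialState = { items: [], lastItemDeleted: null };

function itemsReducer(state = initialState, action) {
  switch (action.type) {
    case DELETE_ITEM:
      // Return a new object rather than mutating the existing state.
      return { ...state, lastItemDeleted: action.itemId };
    default:
      return state;
  }
}

export default itemsReducer;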

3. Root reducer combines reducer output into a single state tree

A very common pattern is to split your reducer functions into separate files based on what part of the app they address. For example, we might have a file in our hypothetical app called itemsReducer and perhaps another one called usersReducer.

The output of these two reducers will then be combined into a single state object when you call the Redux combineReducers function.
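A minimal sketch of wiring this up, assuming the two reducer files described above each export their reducer as a default export (the file names are hypothetical):

// store.js: combine the reducers and create the store.
import { combineReducers, createStore } from 'redux';
import itemsReducer from './itemsReducer';
import usersReducer from './usersReducer';

const rootReducer = combineReducers({
  items: itemsReducer,
  users: usersReducer,
});

export const store = createStore(rootReducer);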

4. The store saves the state returned by the root reducer

Finally, the store saves the state, all the subscribers (your components) receive the updated state, and the view is updated as needed.
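To tie the steps together, here is a hypothetical end-to-end run using plain Redux, assuming the store and the deleteItem action creator from the sketches above. In a React app, components would typically subscribe via react-redux rather than by hand:

import { store } from './store';        // hypothetical module from the sketch above
import { deleteItem } from './actions'; // hypothetical module exporting deleteItem

// Subscribers are notified whenever the root reducer returns a new state tree.
const unsubscribe = store.subscribe(() => {
  console.log('Updated state:', store.getState());
});

// Dispatching the action runs the full cycle: action creator, reducers, store, subscribers.
store.dispatch(deleteItem('42'));

unsubscribe();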

That’s a general overview of what a Redux architecture looks like. It’s obviously missing many implementation details. To learn more about how Redux works, I highly recommend reading the documentation in its entirety. The Redux docs are very well written and comprehensive. If you prefer video tutorials, Dan Abramov, the creator of Redux, has a couple of great courses here and here.

If you’re interested in getting a quick start playing around with React and Redux, I’ve put together a boilerplate project that’s a good starting point for building a medium-to-large application. It’s informed by many of the lessons learned working on this project.

Until Next Time…

Enterprise applications are often enormously complicated and take the efforts of multiple dedicated teams, each specializing in one area of the application. This is particularly true if the technology stack is diverse. If you’re new to a big project like the one I’ve described here, try to get a good handle on the architecture, the workflows, the roles of other teams, and the tools you are using to build the application. This big-picture view will help you produce your best work.

Next week we’ll publish part two, where we’ll go over the build tools and processes we used on the project as well as how we organized the code.

The Lullabot Approach to Sales

Mike and Matt sit down (literally!) with Lullabot's sales team. Learn how Lullabot does sales, and what it takes to sell large projects.

Lullabot's 2017 Annual Retreat

Many companies have corporate retreats where the whole team gets together to celebrate their success and spend time thinking about how to improve their work. We’re no different. Almost every year since 2006 we’ve brought our geo-distributed team together to spend a week “working on how we work” while bonding with our peers. In 2017, 52 employees from Lullabot and Lullabot Education flew to Palm Springs, CA for a week of rest, relaxation and vision work at our beloved Smoke Tree Ranch. We’ve been at Smoke Tree before.


If you don’t see your co-workers every day, a company retreat is more akin to a family reunion. You’re not sick of Bob who would otherwise bring tuna salad sandwiches for lunch or Mary who never refills the coffee pot when it’s empty or Phil who plays his polka music too loud. Our team genuinely wants to get to know each other and form bonds outside of work. It lifts our spirits and allows us to cultivate gratitude and celebrate success in person.

“As a new employee, this has been my first retreat. I have liked the openness about the state of the company, the free time activities, the beautiful place (of course!), but, without a doubt, what I will keep for me as the best take away from the retreat is the people—this is a great company, because it's made of great people.” —Ezequiel Vázquez

We start planning the company retreat three months in advance. At first, the planning group consisted of our event coordinator Haley Scarpino, our human resources team, the admin team and the directors. On our first call we reviewed our notes from the previous retreat to recap what worked and didn’t work, then we each talked about what we wanted to get out of the upcoming retreat. In preparation for the kick-off call, I had already brainstormed new ideas and intentions I was eager to share. My vision for the retreat was to reduce the sit-and-listen presentations and replace that time with collaborative workshopping as a company. The other big areas of consensus from that kickoff meeting were:

  • Protect the unstructured “fun and relaxation” time. Free time is critical for the team to recharge during the retreat.
  • Cultivate gratitude.
  • Feel connected.

By the time we finished planning the retreat, we had almost 20 of 54 Lullabots owning an activity or overseeing some aspect of the retreat. I'm so grateful for how the team stepped up this year to pull off a successful event. I should also mention that we go back to the same place each year, which eases the stress on our event coordinator and our team. Using the same venue means our team knows what to pack, what to expect, and what fun activities they can do next year that they didn’t get time to do this year. It also allows us to focus on event curation rather than logistics planning and exception handling.


At the end of our weekly planning sessions our daily schedule looked like this:

  • 9-10 a.m.: Announcements and Presentations.
  • 10 a.m.-12 p.m.: Team Workshops. This year we advanced our open books philosophy, having the team build revenue forecasts for the year in small groups and estimate the percentage of each expense category using rolls of pennies. We prioritized our company values and wrote headlines for the company we want to be five years from now. We also threw in a couple of the strategy workshops we do with our clients so the team could experience them as a client would.
  • 12-1 p.m.: Lunch.
  • 1-3 p.m.: Free time. Self-organizing volleyball, golf, horseshoes, horseback riding, soccer, and hanging out by the pool or simply taking a nap. We used Slack to organize free time, and it worked quite well.
  • 3-5 p.m.: Self-organizing groups. Using Trello, a team member would list a topic they wanted to talk about for an hour. Anything goes. We had conversations on home improvement, personal core values, career advancement, knitting, our website, work-life balance, and so on. The sessions with the highest votes were curated and added to the agenda.
  • 5-6 p.m.: Circles. We break into small groups and share our feelings and experiences from the day. It’s a judgment-free way to process the day while connecting to a small group of peers that remains the same throughout the retreat.
  • 6:30-7:30 p.m.: Dinner.
  • 7:30-10 p.m.: Evening activities. Each night had an event planned, from the ever-popular lightning talks and keynote karaoke to a talent show, storytelling, and a very relaxed outdoor dinner and bonfire on the last night. We also had an awards show for the best posts on our private social company network, Yammer. Yes, there were trophies.

“I loved the focus on finances and how the business works, and how to keep it not just sustainable but also growing in a way that continues to underline our core values. I came away with something I didn't think possible—an even greater respect for our leadership, and a renewed confidence that I'm in the right place.” —Greg Dunlap

There’s one more event not listed here, which was new this year: community service time. We took an afternoon to deviate from the schedule and build bicycles for ten kids at the Boys & Girls Club Palm Springs. The bikes came from Palm Springs Cyclery at a considerable discount. It was a fun way to share the afternoon and do something together we usually don’t get to do. Speaking for myself, it was a personal highlight of the retreat.


To be candid, the whole experience is surreal—like a week-long dream. The crisp desert air in the morning, the cactus flowers in full bloom and 70-90 degree Fahrenheit weather in the afternoon against the backdrop of snow-covered mountains. Did I mention being surrounded by people you want to be around? This team. This incredibly talented, smart, hard-working, and passionate team is the real reason an event like this becomes meaningful.

“I found it inspiring that Lullabot is planning five, and even ten, years down the road. Our continued product diversification provides opportunities for growth and learning. Lullabot has been a great "job" for years, but now I'm starting to see it as a career instead.” —Nathan Haug

Many of us will no doubt see each other before the next company retreat, whether it’s at a client onsite, a departmental retreat, or a conference. And when we do, those same feelings of seeing a friend you haven’t hung out with in a while will most likely be there.

And perhaps you’ll be there as well? We seldom hire, but we’re always looking for smart, talented people to join our team. Our hiring process is slow, but start the conversation with us now if you’d like to work at Lullabot. We take care of our employees and work every day to earn their trust. In return, we ask that you share your passion, creativity, and initiative with us.

Photos by Greg Dunlap

Using the serialization system in Drupal

As part of the API-first initiative, I have been working a lot with the serialization module. This module is a key member of the web-service-oriented family of modules present in both core and contrib.

The main focus of the serialization module is to encapsulate Symfony's serialization component. Note that there is no separate deserialization component; this single component is in charge of both serializing and deserializing data.

When I started working with this component, the first question I had was "What does serialize mean? And how is it different from deserializing?" In this article I will try to address these questions and give a brief introduction to using it in Drupal 8.

Serializers, encoders, and normalizers

Serialization is the process of normalizing and then encoding an input object. Similarly, we refer to deserialization as the process of decoding and then denormalizing an input string. Encoding and decoding are the reverse processes of one another, just like normalizing and denormalizing are.

In simple terms, we want to be able to turn an object of class MyClass into a particular string representation, and then be able to turn that string back into the original object.

An encoder is in charge of converting simple data—a set of scalars, arrays, and stdClass objects—into a string. The resulting string is a convenient way to store or transport the original object. A decoder performs the opposite function; it will take that encoded string and transform it into an array that’s ready to use. PHP's json_encode and json_decode are a good example of a commonly used encoder and decoder pair. XML is another example of a format to encode to. Note that for an object to be correctly encoded it needs to be normalized first. Consider the following example, where we encode and decode an object without any normalization or denormalization.

class MyClass {}

$obj = new MyClass();
var_dump($obj);
// Outputs: object(MyClass) (0) {}

var_dump(json_decode(json_encode($obj)));
// Outputs: object(stdClass) (0) {}

You can see in the code above that composing the two inverse operations does not give back the original object of type MyClass. This is because the encoding operation loses information if the input data is not a simple set of scalars, arrays, and stdClass objects. Once that information is lost, the decoder cannot get it back.


One of the reasons why we need normalizers and denormalizers is to make sure that data is correctly simplified before being turned into a string. It also needs to be upcast to a typed object after being parsed from a string. Another reason is that different (de)normalizers allow us to work with different formats of the data. In the REST subsystem we have different normalizers to transform a Node object into the JSON, HAL or JSON API formats. Those are JSON objects with different shapes, but they contain the same information. We also have different denormalizers that will take a simplified JSON, HAL or JSON API payload and turn it into a Node object.

(De)Normalization in Drupal

The normalization of content entities is a very convenient way to express the content in a particular format and shape. So formatted, the data can be exported to other systems, stored as a text-based document, or served via an HTTP request. The denormalization of content entities is a great way to import content into your Drupal site. Normalization and denormalization can also be combined to transform a document from one format to another. Imagine that we want to transform a HAL document into a JSON API document. To do so, you need to denormalize the HAL input into a Node object, and then normalize it into the desired JSON API document.

A good example of the normalization process is the Data Model module. In this case, instead of normalizing content entities such as nodes, the module normalizes the Typed Data definitions. Typed data definitions are the internal Drupal objects that define the schemas of the data for things like fields and properties. An integer field, for instance, will contain a property (the value property) of type IntegerData. The Data Model module will take those object definitions and simplify (normalize) them. Then they can be converted to a string following the JSON Schema format, to be used in external tools such as beautiful documentation generators. Note how a different serializer could turn this typed data into a Markdown document instead of a JSON Schema string.

Adding a new (de)normalizer to the system

In order to add a new normalizer to the system, you need to create a new tagged service in custom_module.services.yml.

# custom_module.services.yml
services:
  serializer.custom_module.my_class_normalizer:
    class: Drupal\custom_module\Normalizer\MyClassNormalizer
    tags:
      - { name: normalizer, priority: 25 }

The class for this service should implement the normalization interface in the Symfony component Symfony\Component\Serializer\Normalizer\NormalizerInterface. This normalizer service will be in charge of declaring which types of objects it knows how to normalize and denormalize—that would be MyClass in our previous example. This way the serialization module uses it when an object of type MyClass needs to be (de)normalized. Since multiple modules may provide a service that supports normalizing MyClass objects, the serialization module will use the priority key in the service definition to resolve the normalizer to be used.

As you would expect, in Drupal you can alter and replace existing normalizers and denormalizers so they provide the output you need. This is very useful when you are trying to alter the output of the JSON API, JSON or HAL web services.

In a future article, I will delve deeper into how to create a normalizer and a denormalizer from scratch by building an example module that (de)normalizes nodes.

Conclusion

The serialization component in Symfony allows you to deal with the shape of the data. It is of the utmost importance when you have to use Drupal data in an external system that requires the data to be expressed in a certain way. With this component, you can also perform the reverse process and create objects in Drupal that come from a text representation.

In a follow-up article, I will give an introduction to actually working with (de)normalizers in Drupal.