
Invaders! Securing "Smart" Devices on a Home Network

As a part of Lullabot’s security team, we’ve been keeping track of how the Internet of Things plays a role in our company security. Since we’re fully distributed, each employee works day-to-day over their home internet connection. Entire online communities exist to remind us that most “smart” devices are actually quite dumb as far as security goes. With malware like Mirai actively targeting home IoT devices, including cameras, we know that anything we plug in will be under constant assault. However, there can be significant utility in connecting physical devices to your local network. So, my question: is it possible to connect an “IoT” device to my home network securely, even when it has known security issues?

An opportunity presented itself when we needed to buy a new baby monitor that supported multiple cameras. The Motorola MBP853CONNECT was on sale, and included both WiFi and a “regular” proprietary viewer. Let’s see how far we can get.

The Research

Before starting, I wanted to know if anyone else had done any testing with this model of camera. After searching for “motorola hubble security” (Hubble is the name of the mobile app), I came across Push To Hack: Reverse engineering an IP camera. This article goes into great detail about the many flaws the authors found in a different Motorola camera aimed at outdoor use. Given that both cameras are made by Binatone, and connect to the same remote services, it seemed likely that the MBP853 was subject to similar vulnerabilities. The real question was whether Motorola had updated all of their cameras to fix the reported bugs, or just a single line of cameras.

Articles like that one were also great resources for figuring out what the cameras were capable of, and I wouldn’t have gotten as far in the time I had without them.

Goals

I wanted to answer these three questions about the cameras:

  1. Can the cameras be used in a purely “local” mode, without any cloud or internet connectivity at all?
  2. If not, can I allow just enough internet access to the camera so it allows local access, but blocks access to the cloud services?
  3. If I do need to use the Hubble app and cloud service, is it trustworthy enough to be sending images and sounds from my child’s bedroom?

The Infrastructure

I recently redid my home network, upgrading to an APU2 running OPNSense for routing, combined with a Unifi UAP-AC-PRO for wireless access. Both software stacks support VLANs—a way to segregate and control traffic between devices on the same ‘physical’ network. For WiFi, this means creating a separate SSID for the cameras, and assigning it a VLAN ID in the UniFi controller. Then, in OPNSense, I created a new interface with the same VLAN ID. On that interface, I enabled DHCP, and then set up basic firewall rules to block all traffic. That way, I could try setting up the camera while using Wireshark on my laptop to sniff the traffic, without worrying that I was exposing my real network to anything nefarious.

Packet Sniffing

One of the benefits of running a “real” operating system on your router is that all of our favorite network debugging tools are available, including tcpdump. Since Wireshark will be running on our local workstation, and not our router, we need to capture the network traffic to a separate file. Once I knew the network interface name using ifconfig, I then used SSH along with -w - to reroute the packet dump to my workstation. If you have enough disk space on the router, you could also dump locally and then transfer the file after.

$ ssh <user>@<router address> tcpdump -w - -i igb0_vlan3000 > packet-dump.pcap

After setting this up, I realized that this wouldn't show traffic of the initial setup. That’s because, in setup mode, the WiFi camera broadcasts an open WiFi network. You then have to use the Android or iOS mobile app to configure the camera so it has the credentials to your real network. So, for the first packet dump, I joined my laptop to the setup network along with my phone. Since the network was completely open, I could see all traffic on the network, including the API calls made by the mobile app to the camera.

Verifying the setup vulnerability

Let’s make sure this smart camera is using HTTPS and keeps my WiFi password secure.

I wanted to see if the same setup vulnerability documented by Context, in which the setup process discloses your WiFi password, applied to this camera model. While I doubt anyone in my residential area is capturing traffic, this is a significant concern in high-density locations like apartment buildings. Also, since the cameras use the 2.4GHz band and not the 5GHz band, their signal can reach pretty far, especially if all you’re trying to do is read traffic and not have a successful two-way communication. In the OPNSense firewall, I blocked all traffic on the “camera” VLAN. Then, I made sure I had a unique, but temporary, password on the WiFi network. That way, if the password was broadcast, at least I wasn’t broadcasting the password for a real network and forcing myself to reset it.

Once I started dumping traffic, I ran through the setup wizard with my phone. The wizard failed as it tests internet connectivity, but I could at least capture the initial setup traffic.

In Wireshark, I filtered to https traffic:

[Screenshot: Wireshark capture filtered to HTTPS traffic]

Oh dear. The only traffic captured is from my phone trying to reach 66.111.4.148. According to dig -x 66.111.4.148, that IP resolves to www.fastmail.com - in other words, my email app checking for messages. I was expecting to see HTTPS traffic to the camera, given that the WiFi network was completely open. Let’s look for raw HTTP traffic.

[Screenshot: Wireshark capture filtered to HTTP traffic]

This looks promising. I can see the HTTP commands sent to the camera fetching its version and other information. Wireshark’s “Follow HTTP stream” feature is very useful here, helping to reconstruct conversations that are spread over multiple packets and request/response pairs. For example, if I follow the “get version” conversation at packet number 3399:

GET /?action=command&command=get_version HTTP/1.1
User-Agent: Dalvik/2.1.0 (Linux; U; Android 7.1.1; Nexus 6P Build/N4F26O)
Host: 192.168.193.1
Connection: Keep-Alive
Accept-Encoding: gzip

HTTP/1.1 200 OK
Proxy-Connection: Keep-Alive
Connection: Close
Server: nuvoton
Cache-Control: no-store, no-cache, must-revalidate, pre-check=0, post-check=0, max-age=0
Pragma: no-cache
Expires: 0
Content-type: text/plain

get_version: 01.19.30

Let’s follow the setup_wireless command:

GET /?action=command&command=setup_wireless_save&setup=1002000071600000000606blueboxthisismypasswordcamera000000 HTTP/1.1
User-Agent: Dalvik/2.1.0 (Linux; U; Android 7.1.1; Nexus 6P Build/N4F26O)
Host: 192.168.193.1
Connection: Keep-Alive
Accept-Encoding: gzip

HTTP/1.1 200 OK
Proxy-Connection: Keep-Alive
Connection: Close
Server: nuvoton
Cache-Control: no-store, no-cache, must-revalidate, pre-check=0, post-check=0, max-age=0
Pragma: no-cache
Expires: 0
Content-type: text/plain

setup_wireless_save: 0

That doesn't look good. We can see in the GET:

  1. The SSID of the previous WiFi network my phone was connected to (“bluebox”).
  2. The password for the “camera” network (thisismypassword).
  3. The SSID of that network.

Presumably, this is patched in the latest firmware update. Of course, there’s no way to get the firmware without first configuring the camera. So, I opened up the Camera VLAN to the internet (but not the rest of my local network), and updated.

That process revealed another poor design decision in the Hubble app. When checking for firmware updates, the app fetches the current version number from the camera. Then, it compares that to a version fetched from ota.hubble.in… over plain HTTP.

[Screenshot: the firmware version check requested over plain HTTP]

In other words, the firmware update itself is subject to a basic MITM attack, where an attacker could block further updates from being applied. At the least, this process should be over HTTPS, ideally with certificate pinning as well. Amusingly, the OTA server is configured for HTTPS, but the certificate expired the day I was writing this section.

[Screenshot: the expired HTTPS certificate on ota.hubble.in]

After the update had finished, I reset the camera to factory defaults and checked again. This time, the setup_wireless_save GET was at the least not in cleartext. However, I don’t have any trust that it’s not easily decryptable, so I’m not posting it here.

Evaluating Day-to-Day Security

Assuming that the WiFi password was at least secure from casual attackers, I proceeded to add firewall rules to allow traffic from the camera to the internet, so I could complete the setup process. This was a tedious process. tcpdump along with the OPNSense list of “blocked traffic” was very helpful here. In the end, I had to allow:

  • DNS
  • NTP for time sync
  • HTTPS
  • HTTP
  • UDP traffic

I watched the IPs and hostnames used by the camera, which were all EC2 hosted servers. The “aliases” feature in OPNSense allowed me to configure the rules by hostname, instead of dealing with constantly changing IPs. Of course, given the above security issues, I wonder how secure their DNS registrations are.

Needing to allow HTTP was a red flag to me. So, after the setup finished, I disabled all rules except DNS and NTP. Then, I added a rule to let my normal home LAN access the CAMERA VLAN. I could then access the camera with an RTSP viewer at the URL:

rtsp://user:pass@<camera IP>:6667/blinkhd/

Yes, the credentials actually are user and pass.

And tada! It looked like I had a camera I could use with my phone or laptop, or better yet at the same time as my wife. Neat stuff!

It All Falls Apart

After a fresh boot, everything seemed fine with the video streams. However, over a day or two, the streams would become more and more delayed, or would drop, and, eventually, I’d need to restart the camera. Wondering if this had something to do with my firewall rules, I re-enabled the HTTP, HTTPS, and UDP rules, and started watching the traffic.

Then, my phone started to get notification spammed.

At this point, I’d been using the cameras for about two weeks. As soon as I re-enabled access to Hubble, my phone got notifications about movement detected by the camera. I opened the first one… and there was a picture of my daughter, up in her room, in her jammies.

It was in the middle of the day, and she wasn’t home.

What I discovered is that the camera will save a still every time it detects movement, and buffer them locally until they can be sent. And, looking in Wireshark, I saw that the snapshots were being uploaded with an HTTP POST to snap.json without any encryption at all. Extracting the conversation, and then decoding the POST data (which was form data, not JSON!), I ended up with a picture.

I now had proof the camera was sending video data over the public internet without any security whatsoever. I blocked all internet access, including DNS, hoping that would still let local access work. It did!

Then, my wife and I started hearing random beeps in the middle of the night. Eventually, I tracked it to the cameras. They would beep every 15 minutes or so, as long as they didn’t have a working internet connection. This killed the cameras for home use, as they’d wake the whole family. Worse yet, even if we decided to allow internet access, if it was down in the middle of the night (our cable provider usually does maintenance at 3AM), odds are high we’d all be woken up. I emailed Motorola support, and they said there was no way to disable the beeping, other than to completely reset the cameras and not use the WiFi feature at all.

We’re now happily using the cameras as “dumb” devices.

Security Recommendations and Next Steps

Here are some ideas I had about how Motorola could secure future cameras:

  1. The initial setup problem could have been solved by using WPA2 on the camera. I’ve seen routers from ISPs work this way; the default credentials are unique per device, and printed on the bottom of the device. That would significantly mitigate the risk of a completely open setup process. Other devices include a Bluetooth radio for this purpose.
  2. Use encryption and authentication for all APIs. Of course, there are difficulties from this such as certificate management, hostname validation, and so on. However, this might be a good case where the app could validate based on a set of hardcoded properties, or accept all certificates signed by a custom CA root.
  3. Mobile apps should validate the authenticity of the camera to prevent MITM attacks. This is a solved problem that Binatone simply hasn’t implemented.
  4. Follow HTTP specifications! All “write” commands for the camera API use HTTP GETs instead of POSTs. That means that proxies or other systems may inadvertently log sensitive data. And, since there’s no authentication, it opens up the API to CSRF vulnerabilities.

In terms of recommendations to the Lullabot team, we currently recommend that any “IoT” devices be kept on completely separate networks from devices used for work. That’s usually as simple as creating a “guest” WiFi network. After this exercise, I think we’ll also recommend to treat any such devices as hostile, unless they have been proven otherwise. Remember, the “S” in “IoT” stands for “secure”.

Personally, I want to investigate hacking the camera firmware to remove the beeps entirely. I was able to capture the firmware from my phone (the app stores them in Android’s main storage), and since there’s no authentication, I’m guessing I could replace the beeps with silence, assuming they are WAV or MP3 files.

In the future, I’m hoping to find an IoT vendor with a security record that matches Apple’s, who is clearly the leader in mobile security. Until then, I’ll be sticking with dumb devices in my home.

Modernizing JavaScript in Drupal 8

Mike and Matt host two of Drupal's JavaScript maintainers, Théodore Biadala and Matthew Grill, as well as Lullabot's resident JavaScript expert Sally Young, and talk about the history of JavaScript in Drupal, and attempts to modernize it.

Why I Chose to Use Flow for Static Type Checking My JavaScript

I recently gave a talk internally here at Lullabot about my experiences using Flow, a static type checker for JavaScript built by Facebook. The talk seemed to go pretty well and so I thought I’d share that experience here.

To begin, let’s talk a bit about the problems that can arise with how JavaScript handles types.

Static vs Dynamic Typing

In simplest terms, a statically-typed language has types that are known at compile time (before the program runs) and dynamically-typed languages have types that aren’t known until runtime.

So, in a statically-typed language, there are errors that will be identified very early—you will most likely see them displayed in your editor or when you try to build your program. They are hard to miss. But things are different in a dynamically-typed language like JavaScript. Consider the following code:
 

var myObject = {
  prop1: 'Some text here'
}

console.log(myObject.prop1()); // TypeError: myObject.prop1 is not a function


What this code does is declare an object and assign it a property with the value "Some text here", which is a string. Then we try to call that property as if it’s a function, producing the following error:
 

TypeError: myObject.prop1 is not a function


This sort of error can be very easy to miss in JavaScript. There is no compile error to warn the developer and if that bit of code is only fired after some sequence of events—say after a few different button clicks—then it could very well sneak into the code base.

The benefit of having a static type checker then, is that some types of bugs are caught early, prior to the code being pushed to production.

A Separate, but Related Problem

Another source of errors comes from implicit type coercion. That’s a mouthful, but it’s a straightforward idea. Take a look at the example below:

// implicit type coercion in JavaScript
var myNumber = 20;
console.log(typeof myNumber); // number

myNumber = true;
console.log(typeof myNumber); // boolean


In the code above we declared a variable called myNumber and assigned it a value—the number 20. Next, we change the value to true and in the process, change its type as well (it’s now a boolean). 

Notice that we didn’t explicitly change the type—JavaScript just did that on its own. In this simple example, it’s easy for us to see what’s happening. In a large code base with multiple developers, however, this implicit type coercion can easily create bugs.
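To see how this kind of silent type change turns into a real bug, here is a small, contrived sketch (my own example, not from the talk):

// Hypothetical example: a value silently changes type and later logic breaks.
var quantity = 20;
console.log(quantity + 2); // 22

// Elsewhere in a large code base, someone assigns a value read from a form field.
// Form values are strings, so quantity silently becomes a string.
quantity = '3';
console.log(quantity + 2);   // "32" (string concatenation, not addition)
console.log(quantity === 3); // false (strict equality no longer matches)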

How Flow Solves These Problems

In the case of our first error, having Flow check the file—even without type annotations—will catch the error. We simply install Flow and add the pragma to the top of the file like so:

/* @flow */
var myObject = {
  prop1: 'Some text here'
}

console.log(myObject.prop1());


Now Flow will catch that error and display it either in our editor or when we run Flow from the command line:

// Here's the output from Flow
6: console.log(myObject.prop1());
                        ^ call of method `prop1`. Function cannot be called on
4:   prop1: 'Some text here'
            ^ string


Although we don’t have to provide type annotations in many cases in order for Flow to catch errors, adding them helps Flow work better and find more potential problems. The annotations are also a useful way to document code. Here’s what it would look like if we added the type annotations to our object:

/* @flow */
var myObject: {prop1: string} = {
  prop1: 'Some text here'
}


We signal a type annotation by adding a colon after the variable name. In this example we could have chosen not to add a type for the property and simply written:

/* @flow */
var myObject: {} = {
  prop1: 'Some text here'
}


But as mentioned, adding the annotations can provide big benefits in some instances. The point is, they are optional and you can decide what works best in a particular situation.

To fix our example with the implicit type coercion, we can update our variable declaration like so:

/* @flow */
var myNumber: number = 20;

 
Now we’d get an error if we tried to change the value to something with a different type. Very simple, right? With these annotations Flow allows us to get rid of two common types of errors in JavaScript.
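Flow’s annotations are not limited to variables, either. Although the examples above stop at objects and numbers, you can also annotate function parameters and return values, which lets Flow catch calls made with the wrong argument types. A minimal sketch (my own example, with made-up names, not from the talk):

/* @flow */
// Hypothetical helper with annotated parameters and an annotated return type.
function totalPrice(price: number, taxRate: number): number {
  return price * (1 + taxRate);
}

totalPrice(19.99, 0.05);   // fine
totalPrice('19.99', 0.05); // Flow error: string is incompatible with number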

Why I Chose Flow

Are you sold on the value of using a static type checker yet? I know I had to really think about it for a while. For most of JavaScript’s existence, static type checking hasn’t been an option. And with so much other tooling to deal with, I wondered if it was worth it.

Here’s a quote from the Flow team that I found compelling:

…underlying the design of Flow is the assumption that most JavaScript code is implicitly statically typed; even though types may not appear anywhere in the code, they are in the developer’s mind as a way to reason about the correctness of the code.


It makes a lot of sense to have a tool help you with this, doesn’t it? Particularly when it isn’t much effort to get started. It’s especially straightforward if you’re already using a tool like Babel to transpile your code.

Here’s another great quote from Facebook:

Facebook loves JavaScript; it’s fast, it’s expressive, and it runs everywhere, which makes it a great language for building products. At the same time, the lack of static typing often slows developers down. Bugs are hard to find (e.g., crashes are often far away from the root cause), and code maintenance is a nightmare (e.g., refactoring is risky without complete knowledge of dependencies). Flow improves speed and efficiency so developers can be more productive while using JavaScript.


If you’re familiar with Flow, then you may have also heard of TypeScript, which is a language created by Microsoft that compiles to JavaScript and also features type annotations. There are other similar compile-to-JS languages as well—Elm and Reason come to mind—and they offer similar advantages. So why did I choose Flow instead?

The first reason is one that is implied from the examples above—it is incredibly easy to get value from Flow due to its outstanding type inference. Just add /* @flow */ to the top of a file and you’ve already helped yourself out.

A second reason is that at Lullabot, most of our JavaScript projects involve React. Flow and React are both Facebook products and used heavily within that company. Knowing that the tools you use are going to work well with each other now and in the future is reassuring.

Another consideration is that, as a client services company, we find Flow an easier sell on the projects we work on. There is a big difference in the mind of a stakeholder between:

Hey we’d like to use this library to improve code quality on your project


vs

We’d like to write the code for your project in a language your internal team doesn’t know.


I realize that TypeScript is just a superset of JavaScript, but perceptions like this often matter to a client. They may also matter to fellow developers. Being able to say, “Flow is from Facebook” helps increase confidence in the tool and makes it more likely that static type checking gets used on the project.

So my decision was ultimately about easy wins, compatibility with existing technology and client sentiment.

Below is an excerpt from one of the Flow team members on a discussion thread. The topic is Flow vs TypeScript:

We built Flow because we didn't see TypeScript going in the right direction for the community. For example, it purposefully does not attempt to be a sound type system… …documenting the API using types is useful (in fact using Flow types you can generate API documentation using documentation.js). But the point is that you don't need type annotations everywhere in Flow to get something out of it. The places where you do add type annotations Flow ends up doing a lot more work for you because it's able to infer so much. If you aren't telling TypeScript what to do all the time, it won't do anything for you. With TypeScript 2.0, it is starting to do more of this, but it has a long way to go to match how much Flow is able to infer.


I include the quote above not to bash TypeScript, but because it’s an obvious question and useful to hear what Facebook was thinking when they developed Flow.

Personally, I’m sold on the value of static type checking and that includes TypeScript. It offers a way for developers to eliminate an entire category of bugs, and in the case of Flow, with very little extra effort. That’s a big win in my book.

If you’re interested in getting started with Flow, I recommend integrating it with your editor. You’ll really get the most out of it that way. Begin here and then refer to the guides on setting Flow up with a number of different editors including Atom, VS Code and Sublime.

Bonus reading:
JavaScript’s type system by Axel Rauschmayer
What to know before debating type systems by Chris Smith
Coercion from You Don't Know JS: Types & Grammar by Kyle Simpson

Battling Distractions by Gaining Focus

Recently, I went through a stage where I started to get mentally tired earlier than usual. What is worse: by the time I started feeling tired, I hadn't finished half of what I wanted for that day. The first few days I thought that I just needed more rest and to make time for things I enjoy outside of work. So I did: I made sure that I was sleeping well, eating well, and doing my hobbies. That had a positive effect, but still I was feeling that my productivity wasn’t as sharp. While thinking about what to do about this problem, I remembered something from the book, The Inner Game of Tennis: In order to correct something in your game, you first need to feel and observe yourself. So I did.

Observing Myself

I get the most out of my workday by going to a co-working space. There is something about that place: it’s cozy, I have friends, the lighting is good, and mobile connectivity is bad (which is actually cool because I only go out to pick up important calls). The day that I decided to observe my day of work, I did the following:

  1. Woke up around 8:30AM.
  2. Had a strong breakfast while watching the news.
  3. Took a shower and got dressed.
  4. Walked to the coworking space, arriving by 10:30AM.
  5. Checked personal email and social networks until 11AM.
  6. Looked at Lullabot’s Yammer and internal email for an hour.
  7. Worked until 2:30PM, stopped thirty minutes for lunch.
  8. Worked until 7PM, then headed to play a game of squash.

Overall, apart from waking up a little late and taking my time to sit down at my desk and actually start working, it did not look bad. However, there were two things that were making me feel worse each day:

  1. I couldn’t find time to study nor to make progress on the DrupalCamp Spain website.
  2. I felt mentally exhausted at 7PM. In fact, by lunchtime, I was already feeling tired.

Identifying the Problem

After observing my behavior throughout the day, and what I actually got done (not much, to be honest), I felt happier because I figured out a couple of things:

If you don’t have time, then wake up earlier

I had to be more efficient in the mornings—which is when my brain is at full speed—to get most of my work done. If I wanted to do other things like studying or working on the DrupalCamp Spain website, I should wake up earlier than usual and start as soon as possible.

Focus just on what you are doing

I discovered why I wasn’t getting much done and why my brain was getting so tired. One thing was the storm of notifications that I was getting from Franz (a messaging aggregator):

[Screenshot: a flood of Franz notifications]

And the other was my mobile phone, so close to my hands:

[Photo: my phone, always within arm’s reach]

By observing myself, I discovered that every time that a page load or a command execution took more than a second to run, I would either use a keyboard shortcut to check what was going on in Franz or check my phone notifications. This was killing my concentration, and it was the main cause of my lack of productivity.

I felt so happy realizing what the problem was. Thanks to my dad—who used to get upset when I distracted him while playing golf—I discovered that achieving a deep state of concentration takes time and, if you lose it, it takes even more time to recover it.

Once, when I was a kid, I was following my dad playing golf and, because I was getting bored, I got distracted a couple times in the course, made noise, and got him distracted. He then told me, "Juampy, it’s very hard for me to get back into the game if I lose my concentration, so either you stay silent or you walk to the clubhouse."

He knew it! Concentration is like starting the engine of an airplane: it takes a lot of energy to do so. Imagine then what happens if you start and stop that engine dozens of times a day.

In my particular case, I was forcing my brain to concentrate several times throughout the day because every time that I would start messaging someone or reading tweets, I was losing my focus. It was time to see how I could change this behavior.

Making Adjustments

Based on the above findings, I decided that it was time to try the following for a few weeks and then re-evaluate how it worked:

  1. Wake up early. I have read and heard this several times from friends and books: beat the daylight, start early, and you will then have a long day ahead with plenty of time to do everything that you want to. I started waking up between 6AM and 7AM depending on the day and having breakfast while either studying or working on the DrupalCamp Spain website until 8:30AM, when I would take a shower and go to the coworking space to do client work.
  2. Keep the mind focused on one thing. I forced myself to focus on one thing and avoid distractions until I was done with it. If, for example, one morning I had a lot of email to read, I would tell myself: “I am going to go through all this, which I think may take one or two hours. Then I will chill out for five minutes and move on to the next task”. So I did. I did not switch off my phone, nor did I disable Franz; I just didn't pay attention to them. I set my phone not to ring or vibrate, but only to light up when there was a call, so I could see who was calling and decide whether to pick it up or not. As for Franz, I disabled notifications, so all I could see was this:
[Screenshot: Franz with notifications disabled]

Even though it was hard at the beginning, I felt that I started gaining a higher level of concentration, which made me able to see deeper into the problems I was trying to solve, make connections, and come up with better solutions, faster! I felt great, excited, and happy. However, I realized that, since the brain is another muscle, I could train it to get even better at its task.

Meditating to Start the Day

Finally, I had an excuse for starting to experiment with meditation. I'd read about its benefits for life and sports from a few friends and books. Therefore, I thought that I could apply this to my work and my life in general.

I added meditation to my new daily routine. I have a standing mat that I use when working standing up, so the next morning at 6AM, as soon as I woke up, I went there, sat with my legs crossed, set a timer for six minutes, and focused on breathing in and out. I also had a notepad next to me, so if there was a thought I couldn’t let go of, I would write it down there and continue focusing on my breathing. Did you know that Phil Jackson, who won 11 NBA titles coaching the Chicago Bulls and Los Angeles Lakers, had his teams follow a similar meditation routine?

While meditating, I wouldn’t be strict at keeping thoughts away. If one popped up, I would observe it with a positive attitude and then go back to breathing. Here are a few thought examples and what I did:

  1. If work related—a complicated ticket with a close deadline—I would empower myself and say: “Relax, there is time, you can do it, you can complete it in time, trust in yourself.” Sometimes I can even hear my voice saying it.
  2. If sports related—a game against a huge guy who hits the ball at 100 miles an hour—I would tell myself: you can beat him, you can absorb his pace and use it against him. I would also picture myself playing, controlling the game, and keeping my mind calm under the pressure.
  3. If it were something that I had to do for that day: I would write it down in the notepad, and then go back to breathing.

By the time the timer rang, I felt an energizing peace. The breathing had activated my body, leaving it ready to go, while the meditation had given me calm and made me feel positive and happy. I was ready to rock & roll!

Looking Forward

It’s been two months since I started this new routine, and now I tell all my friends and relatives about it because it has made me a happier and more productive person. Not only can I do my work better, but I also have more time to do other things, and I feel that I have better self-esteem and confidence in myself and in what I do. This is just my experience with meditation; find yours, and make your own kind of meditation depending on what you need. I have just started with this, so I am unable to suggest books on meditation, but here are the books that led me through this journey:

  1. The Inner Game of Tennis
  2. The Monk Who Sold His Ferrari
  3. Sacred Hoops
  4. Golf is a Game of Confidence

Acknowledgements

These folks helped me in several ways to write this article:

  1. James Sansbury and my friend Xenia, who told me about meditation and got me curious about it.
  2. Matt Westgate and Mateu Aguiló Bosch, who inspired me to wake up early.
  3. Jerad Bitner and Marissa Epstein for their awesome reviews and tips.

Higher Order Functions in JavaScript

Higher-order functions can be intimidating at first, but they’re not that hard to learn. A higher-order function is simply a function that can either accept another function as a parameter or one that returns a function as a result. It’s one of the most useful patterns in JavaScript and has particular importance in functional programming.

We’ll begin with an example that you may be familiar with—JavaScript's handy Array.map() method. Let's say we have an array of objects that each have a name property like so:

const characters = [
  {name: 'Han Solo'},
  {name: 'Luke SkyWalker'},
  {name: 'Leia Organa'}
];

Now let’s log these names out to the console using the map method (live code here).

// ES6 syntax
const names = characters.map(character => character.name);
console.log(names); // "Han Solo" "Luke SkyWalker" "Leia Organa"

// ES5 syntax
var names2 = characters.map(function(character) {
  return character.name;
});
console.log(names2); // "Han Solo" "Luke SkyWalker" "Leia Organa"

Did you see how the map method (aka function) took another function as a parameter? That fits our definition of a higher-order function, doesn’t it? Higher-order functions aren’t that scary after all. In fact, you may have been using them all the time without even realizing it.
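The other half of the definition, a function that returns a function, is just as common. Here is a minimal sketch (my own example, separate from the character data above) of a small factory that builds and returns new functions:

// greet() is a higher-order function: it returns a brand new function each time.
function greet(greeting) {
  return function(name) {
    return greeting + ', ' + name + '!';
  };
}

var sayHello = greet('Hello');
console.log(sayHello('Han Solo'));    // "Hello, Han Solo!"
console.log(sayHello('Leia Organa')); // "Hello, Leia Organa!"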

Now, let’s explore another closely related concept in functional programming. "Function composition" takes the idea of higher-order functions and allows you to build on it in a very powerful way. We can take small, focused functions and combine them to build complex functionality. By keeping these “building block” functions small, they are easier to test, easier for us to get our heads around and highly reusable.

To see what this looks like, let’s take a look at another example. Don’t worry too much if the code is a bit confusing at first, we’ll step through it in a moment. Let’s begin with the ES6 version:

// ES6 version
const characters = [
  {name: 'Luke Skywalker', img: 'http://example.com/img/luke.jpg', species: 'human'},
  {name: 'Han Solo', img: 'http://example.com/img/han.jpg', species: 'human'},
  {name: 'Leia Organa', img: 'http://example.com/img/leia.jpg', species: 'human'},
  {name: 'Chewbacca', img: 'http://example.com/img/chewie.jpg', species: 'wookie'}
];

const humans = data => data.filter(character => character.species === 'human');

const images = data => data.map(character => character.img);

const compose = (func1, func2) => data => func2(func1(data));

const displayCharacterImages = compose(humans, images);

console.log(displayCharacterImages(characters));
/* Logs out the following array
[
  "http://example.com/img/luke.jpg",
  "http://example.com/img/han.jpg",
  "http://example.com/img/leia.jpg"
]
*/

And now the ES5 version of the same code…

// ES5 version
var characters = [
  {name: 'Luke Skywalker', img: 'http://example.com/img/luke.jpg', species: 'human'},
  {name: 'Han Solo', img: 'http://example.com/img/han.jpg', species: 'human'},
  {name: 'Leia Organa', img: 'http://example.com/img/leia.jpg', species: 'human'},
  {name: 'Chewbacca', img: 'http://example.com/img/chewie.jpg', species: 'wookie'}
];

var humans = function(data) {
  return data.filter(function(character) {
    return character.species === 'human';
  })
}

var images = function(data) {
  return data.map(function(character) {
    return character.img;
  })
}

function compose(func1, func2) {
  return function(data) {
    return func2(func1(data));
  };
}

var displayCharacterImages = compose(humans, images);

console.log(displayCharacterImages(characters));
/* Logs out the following array
[
  "http://example.com/img/luke.jpg",
  "http://example.com/img/han.jpg",
  "http://example.com/img/leia.jpg"
]
*/

OK, let’s step through this and see if it becomes clear how something like this might be useful. I’ll use the ES5 example during my explanation to keep things as familiar as possible.

1. On line 2 we’re declaring an array of objects. It’s very common to receive and work with data in this way—the results of an API call, for example.

2. On line 9 we declare the first function. An important thing to note about this function is that it’s a pure function. That means that given the same input, it will always return the same output. This is a key concept in functional programming. This makes the function easy to test, refactor and, in general, understand.

3. On line 15 we declare the second function, images. Take a look at the callback function passed into map (which I’ve pulled out for reference below):

function(character) {
  return character.img;
}

I’ve used “character” to make the code easier to follow, but if I instead used “item” or something like that, you can see how completely decoupled the function is from the rest of the code. It can be used for any array of objects that have an img property you want to extract. Yay code reuse!

4. On line 21 we have our higher-order function compose that takes two functions as parameters and returns another function. This function can also be reused to combine the results of any two functions called from left to right. Pass in the data and let your functions transform it in whatever way required.

5. On line 27 we call the compose function passing in both the humans and the images functions and assigning it to the displayCharacterImages variable. If we were to log the value of displayCharacterImages right now, we’d see the inner function that is returned by compose. 

function (data) {
  return func2(func1(data));
}

6. When we finally call it on line 29, we pass in the characters array, which is passed into func1, which in our case was humans. That function returns an array of objects that only contains characters that are humans (sorry, Chewie!) and immediately passes it to func2 which is our images function, which returns an array of images of the human characters in our data set. You can imagine that we might use that array to display the images of those characters on some part of a fan site we have created.
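A natural extension, not shown in the example above, is to generalize compose so it accepts any number of functions instead of exactly two. Here is a sketch using Array.reduce that applies the functions in the same left-to-right order:

// Hypothetical generalization: compose any number of functions, applied left to right.
function composeAll() {
  var funcs = Array.prototype.slice.call(arguments);
  return function(data) {
    return funcs.reduce(function(result, func) {
      return func(result);
    }, data);
  };
}

var displayImages = composeAll(humans, images);
console.log(displayImages(characters)); // the same array of three image URLs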

Hopefully, this was a gentle introduction to the topic of higher-order functions and how you might use them in combining small functions together to build complex behavior. If you don’t get it right away, don’t worry. It’s a part of functional programming that can take a bit of getting used to, but once you have a handle on it, can be very powerful and allow you to write flexible code with fewer errors.

Further Reading

If you found this interesting and would like to learn more about functional programming, I have a few good resources for you. The first is Learning Functional Programming with JavaScript, which is a conference talk by Anjana Vakil. There is also a good (and entertaining) video tutorial series from Mattias Petter Johansson (mpj). And finally, for those that prefer an article format, there is Functional Programming In JavaScript — With Practical Examples from Raja Rao and How to Use Map, Filter, & Reduce in JavaScript by Peleke Sengstacke.

An overview of testing in Drupal 8

I have been working with Turner for the past twelve months as part of the Draco team (Drupal Application Core), which maintains Drupal 8 modules used by brands such as NBA, PGA, and TBS. Once brands started to use our modules, it was mandatory for us to improve testing coverage and monitoring. By that time, we had unit tests, but there was still a long way to go to get to a point where we could feel safe merging a given pull request just by looking at the code and the test results in Jenkins. This article explains our journey, including both the frustrations and successes of implementing testing coverage in Drupal 8.

General structure of a test

A test tests something. Yes, it may sound trivial, but I want to use this triviality to explain why there are different types of tests in Drupal 8 and how they achieve this goal. Every test has a first step where you prepare the context and then a second step where you run assertions against that context. Here are some examples:

Context: I am an editor who has created a page node.
Tests: When I open the full display view, I should see the title, the body, and an image.

Context: I am visiting the homepage.
Tests: There should be a menu with links to navigate to the different sections of the site, plus a few blocks promoting the main sections.

Context: I am an authenticated user who has opened the contact form.
Tests: Some of the fields, such as the name and email, should be automatically set. Form submission should pass validation. A confirmation message is shown on success.

The assertions under Tests verify that the code you have written works as expected under a given Context. To me, the end goals of tests are:

  • They save me from having to test things manually when I am peer reviewing code.
  • They prove that a new feature or bug fix works as expected.
  • They check that the new code does not cause regressions in other areas.

Having said that, let’s dive into how we can accomplish the above in Drupal 8.

Types of tests available

Depending on what you want to test, there are a few options available in Drupal 8:

  • If you want to test class methods, then you can write Unit tests.
  • If you want to test module APIs, then your best option is to write Kernel tests.
  • If you want to test web interfaces (I call these UI tests), then you have a few options: Browser tests, JavaScript tests, or Behat tests.

In the following sections we will see how to write and run each of the above, plus a few things to take into account in order to avoid frustration.

Unit tests

Unit tests in Drupal 8 are awesome. They use the well known PHPUnit testing framework and they run blazingly fast. Here is a clip where we show one of our unit tests and then we run the module’s unit test suite:

[Video: demonstrating one of our unit tests and running the module’s unit test suite]

The downside of Unit tests is that if the class has many dependencies, then there is a lot that you need to mock in order to prepare the context for your test to work, which complicates understanding and maintaining the test logic. Here is what I do to overcome that: whenever I see that I am spending too much time writing the code to set up the context of a Unit test, I end up converting the test to a Kernel test, which we explore in the next section.

Kernel tests

Kernel tests are Unit tests on steroids. They are great when you want to test your module’s APIs. In a Kernel test, Drupal is installed with a limited set of features so you specify what you need in order to prepare the context of your test. These tests are written in PHPUnit, so you also have all the goodness of this framework such as mocking and data providers.

Here is a Kernel test in action. Notice that it is very similar to the clip above about Unit tests:

[Video: running a Kernel test]

Kernel tests fetch the database information from core/phpunit.xml where you need to set the database connection details. You may have noticed that these tests are slightly slower than Unit tests—they need to install Drupal—yet they are extremely powerful. There are many good examples of Kernel tests in Drupal core and contributed modules like Token.

UI tests

I am grouping together Browser tests, JavaScript tests, and Behat tests as UI tests because they all test the user interface through different methods. I want to save you the hassle of going through each of them by suggesting that if you need to test the user interface, then install and use Drupal Behat Extension. Here is why:

Browser tests don’t support JavaScript. Unless you are testing an administration form, chances are that you will need JavaScript to run on a page. They are solid, but I don’t see a reason to write a Browser test when I could write a Behat one.

JavaScript tests only support PhantomJS as the browser. PhantomJS is headless, meaning that there is no visual interface. It is tedious and at times frustrating to write tests using PhantomJS because you need to type and execute commands in a debugger in order to figure out what the test “sees” at a given point, while with Behat you can use full-featured Chrome or Firefox (I laugh at myself every time I call this a "bodymore" browser), so debugging becomes way more fun. Moreover, these kinds of tests get randomly stuck both in my local development environment and in Jenkins.

Discovering Behat tests

Andrew Berry and I spent a lot of time trying to get JavaScript tests working locally and in Jenkins without luck, which is why we decided to give Behat tests a go. It felt like salvation because:

  • The setup process of the Drupal Behat Extension module is straightforward.
  • We found no bugs in the module while writing tests.
  • We can use Chrome or Firefox locally, while Jenkins uses PhantomJS and does not get stuck like it did with JavaScript tests.

Here is a sample test run:

[Video: a sample Behat test run]

On top of the above, I discovered the usefulness of feature files. A feature file contains human readable steps to test something. As a developer, I thought that I did not need this, but then I discovered how easy it was for the whole team to read the feature file of a module to understand what is tested, and then dive into the step implementations of that feature file to extend them.

Here is a sample feature file:

[Screenshot: a sample Behat feature file]

Each of the above steps matches with a class method that implements its logic. For example, here is the step implementation of the step “When I create a Runsheet”:

[Screenshot: the step implementation for “When I create a Runsheet”]

Behat tests can access Drupal’s APIs so, if needed, you can prepare the context of a test programmatically. All that Behat needs in its "behat.yml" file is the URL of your site and Drupal’s root path:

[Screenshot: a minimal behat.yml configuration]

Now look at your project

Does your project or module have any tests? Does it need them? If so, which type of tests? I hope that now you have the answers to some of these questions thanks to this article. Go ahead and make your project more robust by writing tests.

If you are willing to improve how testing works in Drupal 8 core, there are a few interesting open issues in the core issue queue.

In the next article, I will share how we run Unit, Kernel, and Behat tests automatically when a developer creates a pull request, and how we report the results.

Acknowledgements

I want to thank the following folks for their help while writing this article:

  • Matt Oliveira, for proofreading.
  • Marcos Cano, for his tips and his time spent doing research on this topic.
  • Seth Brown, for helping me to become a better writer.
  • The Draco team at Turner, for giving me the chance to experiment and implement testing as part of our development workflow.

Clayton Dewey on Drutopia

In this episode of Hacking Culture, Matthew Tift talks with Clayton Dewey about Drutopia, an initiative to revolutionize the way we build online tools.

Drupalcon Baltimore Edition

Matt and Mike sit down with several Lullabots who are presenting at Drupalcon Baltimore. We talk about our sessions, sessions that we're excited to see, and speaking tips for first-time presenters.

Cross-Pollination between Drupal and WordPress

WordPress controls a whopping 27% of the CMS market share on the web. Although it grew out of a blogging platform, it can now handle advanced functionality similar to Drupal, and is a major (yet friendly) competitor to Drupal. Like Drupal, it’s open source and has an amazing community. Both communities learn from each other, but there is still much more to share between the two platforms.

Recently I had the opportunity to speak at WordCamp Miami on the topic of Drupal. WordCamp Miami is one of the larger WordCamps in the world, with a sold-out attendance of approximately 800 people.

What makes Drupal so great?

Drupal commands somewhere in the neighborhood of 2% of the CMS market share of the web. It makes complex data models easy, and much of this can be accomplished through the user interface. It has very robust APIs and enables modules to share one another’s APIs. Taken together, you can develop very complex functionality with little to no custom code.

So, what can WordPress take away from Drupal?

Developer Experience: More and better APIs included in WordPress Core

The WordPress plugin ecosystem could dramatically benefit from standardizing APIs in core.

  • Something analogous to Drupal’s Render API and Form API would make it possible for WordPress plugins to standardize and integrate their markup, which in turn would allow plugins to work together without stepping on each other’s toes.
  • WordPress could benefit from a way to create a custom post type in the core UI. Drupal has this functionality out of the box. WordPress has the functionality available, but only to the developer. This results in WordPress site builders searching for very specific plugins that create a specific post type, and hoping one of them does what they want.
  • WordPress already has plugins similar to Drupal’s Field API. Plugins such as Advanced Custom Fields and CMB2 go a long way toward allowing WordPress developers to easily create custom fields. Integrating something similar into WordPress core would allow plugin developers to count on a stable API and easily extend it.
  • An API for plugins to set dependencies on other plugins is something that Drupal has had since its beginning. It enables mini-ecosystems to develop that extend more complex modules. In Drupal, we see module ecosystems built around Views, Fields, Commerce, Organic Groups, and more. WordPress would benefit greatly from this.
  • A go-to solution for custom query/list building would be wonderful for WordPress. Drupal has Views, but WordPress does not, so site builders end up using plugins that create very specific queries with output according to a very specific need. When a user needs to make a list of “popular posts,” they end up looking through multiple plugins dedicated to this single task.

A potential issue with including new APIs in WordPress core is that it could possibly break WordPress’ commitment to backwards compatibility, and would also dramatically affect their plugin ecosystem (much of this functionality is for sale right now).

WordPress Security Improvements

WordPress has a much-maligned security reputation. Because it commands a significant portion of the web, it’s a large attack vector. WordPress sites are also frequently set up by non-technical users, who don’t have the experience to keep it (and all of its plugins) updated, and/or lock down the site properly.

That being said, WordPress has some low-hanging fruit that would go a long way to help the platform’s reputation.

  • Brute force password protection (flood control) would prevent bots from repeatedly connecting to wp-login.php. How often do you see attempted connections to wp-login.php in your server logs?
  • Raise the minimum supported PHP version from 5.2 (which does not receive security updates). Various WordPress plugins are already doing this, and there’s also talk about changing the ‘recommended’ version of PHP to 7.0.
  • An official public mailing list for all WordPress core and plugin vulnerabilities would be an easy way to alert developers to potential security issues. Note that there are third-party vendors that offer mailing lists like this.

Why is WordPress’ market share so large?

Easy: It can be set up and operated by non-developers—and there are a lot more non-developers than developers! Installing both Drupal and WordPress is dead simple, but once you’re up and running, WordPress becomes much easier.

Case in Point: Changing Your Site's Appearance

Changing what your site looks like is often the first thing that a new owner will want to do. With WordPress, you go to Appearance > Themes > Add New, and can easily browse themes from within your admin UI. To enable the theme, click Install, then click Activate.

[Screenshot: the WordPress theme browser]

With Drupal, you go to Appearance, but you only see core themes that are installed. If you happen to look at the top text, you read in small text that "alternative themes are available." Below that there is a button to “Install a New Theme.”

[Screenshot: the Drupal Appearance page]

Clicking the button takes you to a page where you can either paste in a URL to a tarball/zip, or upload a downloaded tarball/zip. You still have to know how to download the zip or tarball, and where to extract it, and then browse back to Appearance and enable the theme.

So it goes with Drupal. The same process applies to modules and more. Drupal makes these tasks much more difficult.

So, what can Drupal learn from WordPress?

To continue to grow, Drupal needs to enable non-developers. New non-developers can eventually turn into developers, and will become “new blood” in the community. Here’s how Drupal can do it:

  • A built-in theme and module browser would do wonders for enabling users to discover new functionality and ways to change their site’s appearance. A working attempt at this is the Project Browser module (available only for Drupal 7). The catch-22 is that you have to download it the old-fashioned way in order to use it.
  • The ability to download vetted install profiles during the Drupal installation process would be amazing. This would go a long way to enable the “casual explorers” and show them the power of Drupal. A discussion of this can be found here.
  • Automatic security updates are a feature that would be used by many smaller sites. Projects have been steered toward WordPress specifically because smaller clients don’t have the budget to pay developers to keep up with updates. This feature has been conceptually signed off on by Drupal’s core committers, but significant work has yet to be done.

Mitigating Security Risks

The downside of this functionality is that Drupal would need to have a writable file system, which, at its core, is less secure. Whether that balances out with automatic updates is debatable.

Automatic security updates and theme/module installation would not have to be enabled out of the box. The functionality could be provided in core modules that could be enabled only when needed.

What has Drupal already learned from WordPress?

Cross-pollination has already been happening for a while. Let’s take a look at what the Drupal community has already, or is in the process of, implementing:

  • Semantic versioning is one of the most important changes in Drupal 8. With semantic versioning, bug fixes and new features can be added at a regular cadence. Prior to this, Drupal developers had to wait a few years for the next major version. WordPress has been doing this for a long time.
  • A better authoring experience is something that Drupal has been working on for years (remember when there was no admin theme?). With Drupal 8, the default authoring experience is finally on par with WordPress and even surpasses it in many areas.
  • Media management is the ability to upload images and video, and easily reference them from multiple pieces of content. There’s currently a media initiative to finally put this functionality in core.
  • Easier major version upgrades is something that WordPress has been doing since its inception.

Drupal has traditionally required significant development work in between major versions. That, however, is changing. In a recent blog post, the lead of the Drupal project, Dries Buytaert, said,

Updating from Drupal 8's latest version to Drupal 9.0.0 should be as easy as updating between minor versions of Drupal 8.

This is a very big deal, as it drastically limits the technical debt of Drupal as new versions of Drupal appear.

Conclusion

Drupal and WordPress have extremely intelligent people contributing to their respective platforms. And, because of the GPL, both platforms have the opportunity to use vetted and proven approaches that are shortcuts to increased usability and usage. This, in turn, can lead to a more open (and usable) web.

Special thanks to Jonathan Daggerhart, John Tucker, Matthew Tift, and Juampy NR for reviewing and contributing to this article.


The Evolution of Design Tools

I often get asked why we use Sketch over Photoshop when it comes to designing for the web. The simple answer is that it’s the right tool for our team. However, the story behind it is much more complicated. The tools that designers use continue to change as the web evolves. The evolution of tools has accelerated, especially since the introduction of responsive design. As the web continues to change, so does the process for designers. I seem to stumble across a new tool or process idea every week! But, the tools themselves are only a small part of the design process. Below are a few ways in which the evolution of the web has impacted our design process and toolset.

Responsive Design

Before the first smartphone was released and Ethan Marcotte published his first article on responsive design, we designed on a fixed-width canvas. Our designs didn’t have to have a flexible width (though designers still argued about it) because most users browsed the web on their desktops or laptops. The computer screen was seen as a fixed box, with the majority of screens 800-1024 pixels wide. Designers in those days urged one another to design wider fixed layouts.

The release of the smartphone and tablet changed that. Now the screen could be any of numerous widths, heights, and resolutions.


As designers and developers dove into creating optimal responsive experiences for the web, the visual design began to evolve and simplify. We quickly found out that loading an experience heavy on bevel and emboss, drop shadows, textures, and numerous raster graphics created a crappy experience for the users on cellular internet connections because of the additional load time. Designs began evolving from skeuomorphic to flat, stripping away unnecessary design to speed up load times.

As the web landscape was changing, designers found themselves in need of more efficient tools to help envision and test a scalable design. Tools like Sketch and Figma included responsive presets, allowed for multiple artboards, and provided ways to quickly scale components up or down. The tools themselves became simpler, ditching unnecessary features and enabling designers to create and export assets based on real CSS values. There are still major gaps in the new tools that have been released, but overall, they’ve helped improve the process.

User-centric Design

The growth of and interest in user experience design has also had a dramatic impact on the way web designers approach their work. Designers are looking more towards users for feedback to help shape design solutions for a site or product. A user-research based approach to design, along with the popularity of design sprints, has designers needing to work more iteratively in order to test and revise designs.

Designers often prototype in order to test ideas on actual users. For designers whose strength is not in front-end development, this can be a time-consuming process. Additionally, having a development team to prototype ideas with and for you isn’t always feasible due to time and budget constraints. Here’s where prototyping tools such as Adobe Experience Design, Invision, Flinto, and Principle step in to fill that gap. They allow designers to quickly prototype an idea to test it with real users and get feedback from clients and the project team. Very little coding is needed to complete these sorts of prototypes, and some also integrate nicely with Sketch and Adobe Photoshop to save time when importing and exporting.

Our design team often uses a combination of Sketch and Invision to prototype and test simple ideas. When we need to produce more complicated prototypes of things like animations, our team has experimented with other tools like Principle and Flinto to help visualize our ideas in action. Many of the most common interactions, transitions, and animations can be prototyped relatively quickly for user testing and feedback without writing a single line of code.

It's worth noting that most design teams work differently, and while one may leverage tools like the ones previously mentioned for prototyping, others may prototype nearly everything "in the browser" using basic HTML, CSS and JavaScript or frameworks like Bootstrap and Foundation as starting points. Our team tries to always work in the most efficient way possible, especially when producing artifacts that are, in the end, disposable. There are things that we prototype with some basic HTML and CSS, but many things that we're able to test more quickly using tools that require no writing of code.

Collaboration

By asking people for their input early in the process, you help them feel invested in the outcome.

- Jake Knapp, Sprint: How to Solve Big Problems and Test New Ideas in Just 5 Days

Design and development teams have evolved to become more cross-functional and collaborative. Many designers and developers are also individually becoming more cross-functional: designers are learning the basics of HTML, CSS, and JavaScript to prototype ideas, while developers with an eye (and heart) for user experience are beginning to participate in the design process and adopt its methods.

Before responsive design and the popularity of agile, most teams tended to work in their own silos. Design worked in one space, development in another. Designs were often created and approved with little input from developers, and developers tended to build the site with little input from designers. This approach often led to miscommunication and misunderstanding.

Luckily, as web technology and processes began to accelerate, teams began to see the benefits of working more closely together. Today, designers, developers, clients, and basically anyone else involved in the project often brainstorm and participate in the entire process from start to finish. Teams are working together much earlier in the process to help solve problems and test solutions. However, teams today can also be distributed across the world, making collaboration more difficult.

Design tools have evolved to help communicate ideas to the client and team. Programs like Invision, Marvel, UXPin and others not only help designers prototype ideas, but also allow the team and clients to leave feedback by way of comments or notes. They can review and comment on a prototype from anywhere, using any device.

For design teams that are collaborating on a single project, several design programs have introduced shared libraries where global assets in a project can be kept up to date. Designers just need to install the library for that project and voilà! They’ll have all of the updated project assets to use in their design. Updating and sharing a component is as easy as adding it to the shared library where others can access it. This added feature really helps our team ensure designs are consistent by using the most updated versions of colors, type styles, elements and components.

Several design programs have also added features to help bridge the gap between design and front-end development. Some now allow for the export of styles as CSS or even Sass. Several programs like Zeplin, Invision (via the Craft Plugin and Inspect feature) and Avocode convert designs to CSS, allowing developers to get the styling information they need without opening a native design file. Assessing colors, typography, spacing, and even exporting raster and vector assets can all be done using one of the above programs. They also play well with Adobe Photoshop and Sketch files by directly syncing with those programs and eliminating the need to export designs and then individually upload them.

Tools Are Only Part of the Process

Design tools have come a long way since the beginning of responsive design. They’ve evolved to help teams solve new problems and enable teams to communicate more efficiently. But, as designers, we need to know that tools will only get you so far. As John Maeda said, “Skill in the digital age is confused with mastery of digital tools, masking the importance of understanding materials and mastering the elements of form.” Know the problems that need to be solved, the goals needed to be accomplished, and find the best solution that speaks to the user. Don't get too focused on the tools.

Finally, every team is different and so is their toolset. Just keep in mind that you should pick the toolset that fits your team’s process the best. What may work for one, may not work for another.

Introduction to the Factory Pattern

I have written in the past about other design patterns and why it is important to know they exist and to use them. In this article, I will tell you about another popular one: the factory class pattern.

Some of the biggest benefits of Object Oriented Programming are code reuse and organization using polymorphism. This powerful tool allows you to create an abstract Animal class and then override some of that behavior in your Parrot class. In most languages, you can create an instance of an object using the new keyword. It is common to write your code anticipating that you'll require an Animal at run-time, without the benefit of foresight as to whether it will be a Dog or a Frog. For example, imagine that your application allows you to select your favorite animal via a configuration file, or that you gather that information by prompting the user of the application. In that scenario, when you write the code you cannot anticipate which animal class will be instantiated.  Here's where the factory pattern comes in handy.
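To make that scenario concrete, here is a minimal sketch in JavaScript. Animal, Parrot, and Dog come straight from the example above; the speak() method and the environment-variable lookup are hypothetical details added only for illustration.

class Animal {
  speak() {
    throw new Error('Subclasses must implement speak().');
  }
}

class Parrot extends Animal {
  speak() { return 'Squawk!'; }
}

class Dog extends Animal {
  speak() { return 'Woof!'; }
}

// Imagine this value arrives at run time from a configuration file or a user
// prompt; when writing the code we cannot know which class we will need.
const favoriteAnimal = process.env.FAVORITE_ANIMAL || 'parrot';
const animal = favoriteAnimal === 'dog' ? new Dog() : new Parrot();
console.log(animal.speak());

Polymorphism lets the rest of the code treat animal as a generic Animal, but the decision of which class to new still has to live somewhere. Centralizing that decision is exactly what the factory pattern does.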

The Animal Factory

In the factory pattern, there is a method that will take those external factors as input and return an instantiated class. That means that you can defer the creation of the object to some other piece of code and assume you'll get the correct object instance from the factory object or method. Essentially, the factory couples logic from external factors with the list of eligible classes to instantiate.

In the factory class pattern it is common to end up with class names like UserInputAnimalFactory or ConfigAnimalFactory. However, in some situations you may need to decide what factory to use during runtime, so you need a factory of factories like `ConfigAnimalFactoriesFactory`. That may sound confusing, but it is not uncommon.
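As a rough sketch only (the classes required below are the hypothetical factories named above, and the useUserInput flag is made up for illustration), a factory of factories could look something like this:

// ConfigAnimalFactoriesFactory.js
'use strict';

const ConfigAnimalFactory = require('./ConfigAnimalFactory');
const UserInputAnimalFactory = require('./UserInputAnimalFactory');

class ConfigAnimalFactoriesFactory {

  /**
   * Decide at run time which animal factory should be used.
   *
   * @param {object} config
   *   Application configuration; assumed to contain a 'useUserInput' flag.
   *
   * @return {function}
   *   The factory class to use.
   */
  static getFactory(config) {
    return config.useUserInput ? UserInputAnimalFactory : ConfigAnimalFactory;
  }

}

module.exports = ConfigAnimalFactoriesFactory;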

I have created a nodejs package that will help to illustrate the factory pattern. It is called easy-factory, and you can download it from npm with npm install --save easy-factory. This package contains a factory base class that you can extend. In your extending class you only need to implement a method that contains the business logic necessary to decide which class should be used.

Let’s assume that your app offers the users a list of animal sounds. The user is also asked to select a size. Based on the sound name and the size, the app will print the name of the selected animal. The following code is what this simple app would look like.

// Import the animal factory. We'll get into it in the next code block.
const SoundAnimalFactory = require('./SoundAnimalFactory');
const soundNames = ['bark', 'meow', 'moo', 'oink', 'quack'];
// The following code assumes that the user chooses one of the above.
const selectedSoundName = scanUserAnimalPreference();
// Have the user select the animal size: 'XS', 'S', 'M', 'L', 'XL'.
const selectedSizeCode = scanUserSizePreference();
// Based on the sound name and the size get the animal object.
const factoryContext = {sound: selectedSoundName, size: selectedSizeCode};
const animal = SoundAnimalFactory.create(factoryContext, selectedSizeCode);
// Finally, print the name of the animal.
print(animal.getName());

There is no new keyword to create the animal object. We are deferring that job to SoundAnimalFactory.create(…). Also notice that we are passing two variables to the factory: factoryContext and selectedSizeCode. The first parameter is used to determine which class to use; the remaining parameters are passed to that class's constructor when it is called with the new keyword. Thus, factoryContext gives the factory enough information to create an instance of the needed object, while selectedSizeCode is passed to the constructor of the chosen class. Ultimately you will end up with something like new Cat('S').

The code for the factory will be aware of all the available animals and will inspect the `factoryContext` to decide which one to instantiate.

// SoundAnimalFactory.js
'use strict';

const FactoryBase = require('easy-factory');

class SoundAnimalFactory extends FactoryBase {

  /**
   * Decide which animal to instantiate based on the size and sound.
   *
   * @param {object} context
   *   Contains the keys: 'size' and 'sound'.
   *
   * @throws Error
   *   If no animal could be found.
   *
   * @return {function}
   *   The animal to instantiate.
   */
  static getClass(context) {
    if (typeof context.size === 'undefined' || typeof context.sound === 'undefined') {
      throw new Error('Unable to find animal.');
    }
    if (context.sound === 'bark') {
      return require('./dog');
    }
    else if (context.sound === 'meow') {
      if (context.size === 'L' || context.size === 'XL') {
        return require('./tiger');
      }
      return require('./cat');
    }
    // And so on.
  }

}

module.exports = SoundAnimalFactory;

Notice how our application code calls the method SoundAnimalFactory.create, but our factory only implements SoundAnimalFactory.getClass. That is because in this particular implementation of the factory pattern, the actual instantiation of the object takes place in the base class, FactoryBase. What happens behind the scenes is that the create method calls getClass to find out which class to instantiate, and then calls new on it with any additional parameters. Note that your approach may differ if you don't use the easy-factory nodejs package.
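To make that behind-the-scenes step concrete, here is a rough sketch of what a factory base class along these lines could look like. This is not the actual easy-factory source, just an illustration of the create/getClass split described above.

// A sketch of a factory base class, assuming the create/getClass behavior
// described in the text. Not the real easy-factory implementation.
class FactoryBase {

  static create(context, ...constructorArgs) {
    // In a static method, `this` is the class create() was called on, so a
    // subclass such as SoundAnimalFactory supplies its own getClass().
    const ClassToInstantiate = this.getClass(context);
    return new ClassToInstantiate(...constructorArgs);
  }

  static getClass(context) {
    throw new Error('Factories extending FactoryBase must implement getClass().');
  }

}

module.exports = FactoryBase;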

Real-World Example

I use this pattern in almost all of the projects I am involved in. For instance, in my current project we have different sources of content, since we are building a data pipeline. Each source contains information about different content entities in the form of JSON documents that are structured differently for each source. Our task is to convert incoming JSON documents into a common format. For that, we have denormalizer classes, which take a JSON document in a particular format and transform it into the common format. The problem is that for each combination of source and entity type, we need to use a different denormalizer object. We are using an implementation similar to the animal sound factory above. Instead of taking user input, in this case we inspect the structure of the incoming JSON document to identify the source and entity type. Once our factory delivers the denormalizer object, we only need to call doStuffWithDenormalizer(denormalizer, inputDocument); and we are done.

// The input document comes from an HTTP request.
// inputDocument contains a 'type' key that can be: season, series, episode, …
// It also contains the 'source' parameter to know where the item originated.
const inputDocument = JSON.parse(awsSqs.fetchMessage());
// Build an object that contains all the info to decide which denormalizer
// should be used.
const context = {
  type: inputDocument.type,
  source: inputDocument.source
};
// Find what denormalizer needs to be used for the input document that we
// happened to get from the AWS queue. The DenormalizationFactory knows which
// class to instantiate based on the `context` variable.
const denormalizer = DenormalizationFactory.create(context);
// We are done! Now that we have our denormalizer, we can do stuff with it.
doStuffWithDenormalizer(denormalizer, inputDocument);

You can see how there are two clear steps. First, build an object, context, that contains all the information needed to decide which denormalizer to instantiate. Second, use the factory to create the appropriate instance based on that context variable.

As you can see, this pattern is a simple and common way to get an instance of a class without knowing, when you write the code, which class needs to be instantiated. Simple as it is, it is important to recognize this approach as a design pattern so you can communicate more efficiently and precisely with your team.

Lullabot at DrupalCon Baltimore

We’re joining forces this year with our friends at Pantheon to bring you a party of prehistoric proportions! It will be a night of friendly fun, good conversation, and dinosaurs at the Maryland Science Center. DrupalCon is always a great place to meet new people, and if you’re an old timer, it’s a great place to see old friends and make new ones.

The Maryland Science Center is only a short 10-minute walk from the Convention Center. Stop by to enjoy the harbor views, a tasty beverage, or dessert, or just to say “hello”. We promise the dinosaurs will behave themselves!

Lullabot’s 9ᵀᴴ Annual DrupalCon Party
Wednesday, April 26th at the Maryland Science Center
601 Light St.
Baltimore, MD 21230
8pm - 11pm

There will be nineteen of us Lullabots attending, five of whom will be presenting sessions you won’t want to miss. And, be sure to swing by booth 102 in the exhibit hall to pick up some fun Lullabot swag and of course, our famous floppy disk party invites. See you in Baltimore!

Speaker Sessions

Wednesday, April 26

Mateu Aguiló Bosch
Advanced Web Services with JSON API
12pm - 1:00pm
308 - Pantheon

Matthew Tift
Drupal in the Public Sphere
2:15pm - 3:15pm
318 - New Target

Wes Ruvalcaba & David Burns
Virtual Reality on the Web - An Overview and "How to" Demo
3:45pm - 4:15pm
Community Stage - Exhibit Hall

Sally Young
Decoupled from the Inside Out
5pm - 6pm
318 - New Target
*Presenting with Preston So and Matthew Grill of Acquia

Thursday, April 27

Mateu Aguiló Bosch
API-First Initiative
10:45am - 11:45am
318 - New Target
*Presenting with Wim Leers of Acquia