Making Web Accessibility Great Again: Auditing the US Presidential Candidates Websites for Accessibility

Imagine that you arrive at a website, but you cannot see the screen. How do you know what’s there? How do you navigate? This is normal for many people, and the accessibility of your site will make or break their experience. Accessibility is about including everyone. People with physical and cognitive disabilities have specific challenges online—and making your site accessible removes those barriers and opens the door to more users.

Severely disabled Americans constitute a population of 38.3 million people, and make up a huge swath of voters (see the #CripTheVote movement on Twitter). Some notable U.S. presidential elections have been decided by much less, and because of this, we’re auditing the US presidential election candidates’ websites.

During this audit, we’ll see what the candidates’ websites are doing right and wrong, and where the low-hanging fruit lies. This article won’t be a full top-to-bottom audit, but we will show some of the important things to look for and explain why they’re important.

Our Methods

Automated Testing

The first step in our a11y audit is to do a quick automated check. If you’re new to accessibility, the WAVE tool by WebAIM is a great place to start. It’ll check for standard accessibility features and errors in alt attributes, contrast, document outline, and form labels. For the features or errors it finds, it provides an info icon that you can click to learn what the issue is, why it’s important, and how to do it right. WAVE is free, and it highlights both the negative (errors, alerts, contrast problems) and the positive (features, structural elements, ARIA attributes).

Keyboard Testing

As great as WAVE is, an automated tool is never as good as a live person. This is because some accessibility requirements need human logic to apply them properly. For the next part, we’re going to navigate around each website using only the keyboard.

This is done by using the Tab key to move to the next element, and Shift+Tab to move backwards. The spacebar (or Return) is used to click or submit a form element. If everything is done right, a person will be able to navigate through your website without falling into tab rabbit-holes (tabbit-holes?). We should be able to tab through the whole page in a logical order without getting stuck or finding things that we can’t access or interact with.

Beyond that, we need to be able to see where the focus lies as we tab across the page. Just as interactive elements give a visual cue on hover, we should get an indication when we land on an interactive element while tabbing, too. That state is referred to as ‘having focus’. You can extend your hover state to focus, or you can make a whole new interaction for focus. It’s up to you!

.link--cta-button:hover,
.link--cta-button:focus {
  /* The :focus pseudo-class for a11y */
  background: #2284c0;
}

Screen Reader Audit

Screen readers are used by visually impaired and blind people to navigate websites. For this purpose we’ll use VoiceOver, which is the default pre-installed screen reader for OS X. We’re looking for things that read oddly (like an acronym), things that don’t get read that should get read (and vice-versa), and making sure all of the information is available.

Let’s start with Donald Trump’s website

The first thing that we did while looking at Donald Trump’s website was audit it using the WAVE tool. Here’s what we found:

  • 8 Errors
  • 14 Alerts
  • 15 Features
  • 35 Structural Elements
  • 5 HTML5 and ARIA
  • 5 Contrast Errors (2 were false flags)
The Good

Bold Colors for a Bold Candidate

The color scheme is very accessible. On the front page, there were only two places where the contrast wasn’t sufficient. Outside of that, his color scheme provides text that stands out well from the background and is easy to read for low-vision users.

The Bad

Lacking Focus

Remember how we talked about focus states? This site has almost none. Tabbing through the page is confusing because there are no visual indications of where you are.

This is especially egregious because the browser automatically applies focus states to focusable elements for you. For there to be no focus states at all, the developer has to actively go out of their way to break them by applying outline: 0; to the element’s focus state. This is okay if you’re doing it to replace the focus state with something more decorative, but taking it off and not replacing it is a big accessibility no-no.
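For illustration, here’s a minimal sketch (with hypothetical selectors) of the anti-pattern next to an acceptable alternative:

/* Anti-pattern: kills the browser's default focus ring and replaces it with nothing */
.nav__link:focus {
  outline: 0;
}

/* Acceptable: the default ring is swapped for a visible custom indicator */
.nav__link--styled:focus {
  outline: 0;
  box-shadow: 0 0 0 2px #2284c0;
}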

Skipped Skip Link

When tabbing through The Donald’s website, the first thing we notice is the absence of a skip link. Without a skip link, a keyboard user is forced to tab through each link in the navigation when arriving at each page before they can access the rest of the content. This repetitive task can become aggravating quickly, especially on sites with a lot of navigation links.
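A skip link takes only a couple of lines of markup; a minimal sketch (class and id names are hypothetical) placed as the first focusable element on the page might look like:

<a class="skip-link" href="#main-content">Skip to main content</a>

<!-- ...the full navigation... -->

<main id="main-content">
  <!-- page content -->
</main>

Typically the link is visually hidden until it receives keyboard focus, so it doesn’t affect the design for mouse users.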

Unclear Link Text

Links should always have text that clearly explains where the user is going. The link’s text is what a person using a screen reader will hear, so text like ‘Read More’ or text and icons that require visual context to understand their destination aren’t ideal.

In this area, the text of the links out to the embedded Twitter posts reads ‘12h’ and ‘13h’. Without the visual context of a Twitter icon (that’s a background image, so there’s no alternative text to provide that), the user probably has no idea what ‘12h’ is referring to or where that link will lead.

The Ugly

Navigation Nightmares

The most important part of any website, in terms of access, is the navigation. An inaccessible navigation menu can block access to large portions of the website. Unfortunately, the navigation system of Trump’s website does just that, and prevents users with disabilities from directly accessing the sub-navigation items under the issues and media sections.

A more accessible way to do this is to use a click dropdown instead of a :hover. If that doesn’t work for the design, make sure that the :hover state of the menu applies to :focus as well, so that the menu will open the nested links when the parent menu item is tabbed to.
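Here’s a rough sketch of that idea with hypothetical class names; the :focus-within rule is kept separate so older browsers that don’t recognize it still get the first rule:

/* Open the submenu on hover, and when the parent link is tabbed to */
.menu__item:hover > .menu__submenu,
.menu__link:focus + .menu__submenu {
  display: block;
}

/* In newer browsers, keep it open while any nested link has focus */
.menu__item:focus-within > .menu__submenu {
  display: block;
}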

Disorganized Structure

Structural elements (h1, h2, h3, etc.) are very helpful when used properly. In this case, they’re definitely not. Heading levels aren’t sequential, and nested information isn’t always relevant to its parent.
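For comparison, a sequential outline steps down one level at a time, with each nested heading relevant to its parent (a hypothetical sketch, indented for illustration only):

<h1>Page title</h1>
  <h2>Issues</h2>
    <h3>Trade</h3>
    <h3>Immigration</h3>
  <h2>Media</h2>
    <h3>Press releases</h3>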

Audit of Hillary Clinton’s website

The Good

Overall, Clinton’s website is better than most when it comes to accessibility. It’s clear that her development team made it a purposeful consideration during the development process. While it’s not perfect, there was a lot of good done here. Let’s explore some examples of things done right.

Keyboard Accessibility

The keyboard accessibility on this site is very good. We found that we could access the elements and navigate to other pages easily without a mouse. It was easy to open and shut the drop-down ‘More’ area in the navigation, and access its nested links, which is a good example of how to implement what we were talking about when we covered the shortfalls of the navigation system on Trump’s website.

Skip Link

Hillary Clinton’s website includes a proper skip link, which allows users to skip the navigation and go directly to the content.

Great Focus States

The other thing we found when checking the keyboard accessibility was that everything has a focus state that makes it visually obvious where you are on the page. The light dotted border focus state is a bit subtle for low-vision users, but the fact that the focus state of the elements was styled independently from the hover state shows that the developer was aware of the need for focus indicators and made a conscious effort to implement them.

Translation

We usually think of accessibility in terms of people with disabilities because they often benefit from it the most, but accessibility is really just about including as many people as possible. A nice touch we found on Clinton’s site was a button at the top to translate the site into Spanish. With 41 million native Spanish speakers in the US, providing the option to experience the content in the user’s first language is a great accessibility move.

Video Captioning

Deaf people rely on captions to get the dialogue from videos, since it’s very difficult to lip-read in film. The videos on Hillary’s site are furnished with open captions, which means that they’re always on. Open captions are great for people with disabilities, but they’re also a smart move to capture your non-disabled audience as well. Often autoplay videos won’t play any sound unless they’re interacted with, but providing open captions on the video gives you another chance to capture the audience’s interest by displaying the words on the screen.

The Bad

No Transcripts for Video

While it was great that the videos were captioned, we couldn’t find a transcript provided. Many people erroneously believe that you only need one or the other, but captions and transcripts actually serve different purposes, so it’s ideal to provide both. Captions are great for the Deaf, who want to read the words in real-time with the video. Transcripts are more useful for the blind and the Deaf-blind, who benefit from a written summary of what’s visually happening onscreen in each scene before the written dialogue begins. A braille terminal, used by the Deaf-blind, can’t convert open captions inlaid into the video’s frames into braille for its users, so these users won’t benefit from that.

Low Contrast

Contrast is important for low-vision users. We know that subtlety is all the rage, but design choices like putting blue text on a blue background make it really difficult for some people to read. There are some great free tools like Das Plankton that will let you see if your contrast level is high enough to meet accessibility standards.

Schemes like this fail contrast testing on every level, so it’s best to avoid them. A better choice probably would have been white over a slightly darker blue.

The Ugly

The Horrible Modal

Despite the obvious hard work that went into making Hillary’s website accessible, much of the effort is lost due to a modal that appears when visiting the website for the first time (or in incognito mode). The problem is that the modal doesn’t receive focus when it pops up, and its close button has no focus indicator. While it technically can be closed via keyboard by navigating backwards (or tabbing through every single link on the page) once it pops up, it’s not obvious visually when that close button has focus, and navigating backwards isn’t exactly intuitive.

Conclusion

With one glaring exception, it’s obvious that lots of thought and work has been put into making Hillary Clinton’s website accessible to voters with disabilities. There is definitely room for improvement with small things like somewhat irrelevant alt attributes on photos, but on the whole the site is better on accessibility than the vast majority of the sites that we see.

Unfortunately, it is also obvious that accessibility is deeply neglected within Donald Trump’s website, which leaves a large swath of potential voters unable to browse to his stance on issues and other content. Hopefully, this will be attended to shortly.

Hopefully these auditing case studies lead you to think about your own website from the point of view of a person with a disability. There are plenty of challenges online for the disability community, but lots of those can be fixed with a few easy tweaks like the ones we covered here. We hope you'll use what you've learned to make your website accessible, too.

The Dark Art of Web Design

Matt & Mike sit down with Lullabot's entire design team and talk the ins, outs, processes and tools behind sites such as GRAMMY.com, MSNBC, This Old House, and more!

UX For Brains: Eight Ways Psychology Can Improve Your Designs

In my last article, I described the value of psychology for web and UX designers and provided examples of some things it can reveal about human bodies and behaviors. The title is a decent summary of the findings: “Let’s be honest, people suck.” In this article I'd like to present some practical ways to take that understanding of psychology and use it in your work.

How I Learned To Worry More And Love Limitations

In part one I described (maybe even ranted about) how people have unfairly high expectations, even though their own performance is limited: in their cognitive abilities, attention spans, vision, and reading, to name a few. But not all of these limitations are that problematic; they serve a purpose. Life is really hard, and it can be overwhelming to get through the day. Imagine being endlessly caught within analysis paralysis, or following an obsessive-compulsive loop. We are lucky to have shortcuts to keep us moving (even if those shortcuts are imperfect).

UX designers can make things easier if we work within these limitations. Designers have a responsibility to consider human complexities and build things that can work across the greater gamut. Allow me to repeat my plea from part one:

Stop designing experiences for us, for the “interactive 1%”.

I try to always remember that I’m coming from an industry that is much more familiar with web patterns than my typical user. The director of my college’s Graphic Design program would label anyone outside of a design domain a “pedestrian.” At the time, I thought it was pejorative, but now I see his point: imagine bringing a random person right off the street into your design process: they have no design background or project context. How successfully could they interact with your work?

By thinking about these less informed users, we can enhance usability for everyone. Take sidewalk curb cuts as an example: this real world solution for assisting the disabled also benefits a much larger group. I appreciate them on those days I’ve tricked myself into wearing heels. Ask yourself if your experiences are accessible enough to help all types of pedestrians get around (see what I did there?).

How To Help Humans Succeed

What follows are some guidelines to keep in mind to keep your design work easy to interact with.

1. Simplify everything. Every. Thing.

“People don't want to buy a quarter-inch drill. They want a quarter-inch hole!” - Theodore Levitt

Don't lose sight of the purpose of a design. Define the goals it serves, and break them down to their most basic form (a good way to do this is to ask “why?”, five times). Keep this handy, maybe on a post-it, to reference as you build your flows and pages.

Once you get into the nitty gritty, ask yourself how to reduce cognitive load. Don't use ten words when five will do. Replace the cheeky marketing jargon with clear labels and calls to action.

Be ruthless in removing design elements that are only decorative and not informative. Present a limited number of choices (this one can seem counter-intuitive, because users might request more options, but deep down, they don't want them and the analysis paralysis that too many choices lead to; check out the jam study, if you don’t believe me). One method Jason Fried describes that can be helpful as you try to simplify is to begin grouping things (tasks, needs, features, etc.) into what needs to be obvious, easy and possible.

2. Make a great first impression

The saying “never judge a book by its cover” exists for a reason: people are naturally judgemental, and form their initial impressions of a website in under a second. But designers can use these snap judgments to their advantage. The most cited factor used when evaluating whether a website is credible is the appeal of the visual design! The right layout, typography, and color can make a huge impact, and make it immediately.

3. Get users where they’re going

Users need to know where they are, and where they can get to. Create navigation that helps users peruse and get a sense of the kinds of content they could access, without forcing them to click to and view every page. And show them where they are. Breadcrumbs are a great example of how to help users understand their current context and how the site is structured around it.

This sounds obvious, but sometimes the easiest way to help a user find something is by putting it on the page. It requires a lot more mental effort for a user to read a link in the navigation, decide it’s relevant to their current task, bring the mouse over to it, click it, and wait for the page to load, compared to say, flicking their finger on the mouse, wherever it may be, to scroll down the page. Your clients will often focus their concerns on keeping the right things "above the fold," but be sure to encourage them to not be overly concerned with that. Remember, scrolling is as cheap as it gets.

Sometimes you need a user to find something very specific, for example, an error message within a form. In these cases, keep this feedback in close proximity to their most recent action, such as locking it up with the Submit button they just clicked. If you do have to place important information elsewhere, make sure it’s going to get noticed in their peripheral view by creating additional cues like animation or—dare I say it—sound.

4. Provide opportunities for focus

People have a hard enough time paying attention, so make things easy to find during quick, half-hearted skimming. Break things up into clear sections, with whitespace in between. Use labels, lists, and graphics to make hunting even easier. Test that your hierarchy is clear by seeing what stands out first, second, and third, making sure it aligns with the user's priorities for that page. Hopefully, you’ve already put the most important thing right at the top.

And take out any distracting cruft to reduce cognitive load. This might be in the form of too much content, or design elements that add clutter, or a decorative typeface that is hard to read. This goes back to keeping it simple, but again we should ask ourselves if our experience is getting in the way of the A-to-B journey most users are actually on.

5. Leverage existing patterns

It can feel icky as a designer to take the road most traveled, but we have to be very careful and intentional when creating new whiz-bang. Users are coming to your site with preconceived expectations (like where they will find the shopping cart), so you'd better have a really good reason for reinventing the wheel. Otherwise, put content and navigation where users expect to see it. Make links look like links, and buttons look like buttons. Don’t make static titles blue and suggest interaction.

If you do need to teach something new, you can still leverage existing patterns to relate to something a user is already comfortable with. My favorite example of this is the simplicity of dragging files into your computer’s trash can. And of course, once you have your patterns set, maintain consistency across the entire experience, as users will be jumping from across multiple pages and different devices.

6. Plan for mistakes

First of all, of course, you want to try to reduce mistakes in the first place. Make sure all your copy is super duper clear, like a button with a “Process Order” label, or text explaining when a person’s card is going to be charged. Consider what slips could occur; are your buttons large enough and far apart enough to prevent accidental taps on mobile? If someone came back to this tab in three weeks, would they have any idea what they were doing? Are you confirming when something is actually going right, with progress bars and confirmation messages, or are you leaving users to press an unresponsive button three and four times?

Hopefully, you’ve done all of that, but sadly, mistakes will still happen. Part two is building backup plans and working through those error flows to get users back on track. Try to clearly explain what went wrong, so they have better luck with their next attempt. For really complicated systems, build in undos instead of making users start over. And for really important, terrifying systems (like piloting aircraft), create layers and layers of design redundancy.

James Reason knew what was up when he created the Swiss Cheese Model to illustrate the importance of design redundancy.

7. Put users in a good mood

Amazingly, creating an experience users enjoy can help with reliability and usability. Users perceive enjoyable experiences as easy to use and efficient, as the state of flow that they create promotes flexible thinking and problem solving. And if a user doesn’t create a workaround in their cool state of mind, they are also more forgiving and more likely to excuse any hiccups they might encounter. There are lots of ways to create fun, and I don’t want to make it too formulaic, but consider starting points like inspiring designs and photography, humor, novelty, and creating flow.

8. Know your audience

These steps go a long way toward usability, but you have to validate your specific work with real users within your target audience. Whether it's two hours, two days or two weeks, spend some time talking to real people that will use what you’re making: first to understand your personas and the types of problems they experience, laying the groundwork for your design process; then touch base with them as designs progress, to see how they use things and complete (or struggle with) important tasks. The insights you'll gain are extremely valuable and the time spent doesn't have to be great.

Accept Human Behavior As It Is

So there you have it, a few gadgets for your toolbelt. I hope you find them useful and wish you luck with tackling these usability challenges. Had enough metaphors? No? You can equip yourself even further by combining these fundamentals with specific persona knowledge to understand your users’ problems more holistically before you get building. Good for one more? To truly solve users’ problems, we need to make ourselves familiar with the realities of this rocky usability terrain and adapt to it.

Around the Drupalverse with Enzo García

Matt & Mike talk with Eduardo "Enzo" García about his 18-country tour around the world! We talk about various far-flung Drupal communities, motivations, challenges, and more.

Scaling CSS Components with BEM, REMs, & EMs

Using relative units for measurements has a lot of advantages, but can cause a lot of heartache if inheritance goes wrong. But as I’ll demonstrate, BEM methodology and some clever standardization can reduce or eliminate the downsides.

A little vocab before we get too far into this:

em units

CSS length unit based on the font size of the current element. So if we set:

.widget {
  font-size: 20px;
}

For any style on that .widget element that uses em units, 1em = 20px
 

rem units

Similar to em units, except it’s based on the “root’s em”, more specifically the <html> tag’s font-size. So if we set:

html {
  font-size: 14px;
}

Then anywhere in the page 1rem = 14px, even if we use rem inside of our .widget element’s styles. Although rem can be changed, I recommend leaving it alone. The majority of users will have a default font-size of 16px, so 1rem will equal 16px.

Relative units

Any length that’s context based. This includes em, rem, vw, vh, and more. % isn’t technically a unit, but for the purposes of this article, you can include it in the mix.
 

em inheritance

This is em‘s blessing and curse. Font size inherits by default, so any child of .widget will also have a 20px font size unless we provide a style saying otherwise, and so the size of an em will also inherit to the child.

If we set font-size in em units, it can't be calculated from itself, so for the font-size style an em is based on the parent’s calculated font-size. Any other style’s em measurement will be based on the calculated font-size of the element itself.

When the page renders, all em units are calculated into a px value. Browser inspector tools have a "computed" tab for CSS that will give you the calculated px value for any em-based style.

Here’s an example that’s good for demonstration, but would create a ball of yarn if it was in your code.

If we have these three elements, each nested in the one before:

.widget {
  font-size: 20px;
  margin-bottom: 1.5em;  // 1.5em = 30px
}

.widget-child {
  font-size: 0.5em;      // 0.5em = 10px
  margin-bottom: 1.5em;  // 1.5em = 15px
}

.widget-grandchild {
  font-size: 1.4em;      // 1.4em = 14px
  margin-bottom: 1.5em;  // 1.5em = 21px
}

line 3

em will be calculated from the font-size of this element, .widget, which is 20px
1.5em = 20px * 1.5 = 30px 

line 7

Because this is a font-size style, it calculates em from the parent’s font-size
.widget  has a calculated font-size of 20px,  so 1em = 20px in this context.
0.5em = 20px * 0.5 = 10px 

line 8

em will be calculated from the font-size of this element, .widget-child, which is 10px
1.5em = 10px * 1.5 = 15px

line 12

Because this is a font-size style, it calculates em from the parent’s font-size. 
.widget-child  has a calculated font-size of 10px,  so 1em = 10px in this context.
1.4em = 10px * 1.4 = 14px

line 13

em will be calculated from the font-size of this element, .widget-grandchild, which is 14px. 
1.5em = 14px * 1.5 = 21px

This is why most developers stay away from ems: if done poorly, it can get really hard to tell how 1em will be calculated, especially because elements tend to have multiple selectors applying styles to them.
 

em contamination

Unintended em inheritance that makes you angry.


All that said, there are two reasons I love rem and em units.
 

1 ♡ Scalable Components

We’ve been creating fluid and responsive web sites for years now, but there are other ways our front-end code needs to be flexible.

Having implemented typography systems on large sites, I've found that specific ‘real world’ implementations provoke a lot of exceptions to the style guide rules.

While this is frustrating from a code standardization standpoint, design is about the relationship of elements, and it’s unlikely that every combination of components on every breakpoint will end up working as well as the designer wants. The designer might want to bump up the font size on an element in this template because its neighbor is getting too much visual dominance, a component may need to be scaled down on smaller screens for appropriate line lengths, or, on a special landing page, the designer may want to try something off script.

I’ve found this kind of issue to be much less painful when components have a base scale set on the component wrapper in rem units, and all other sizes in the component are set in em units. If a component, or part of a component, needs to be bumped up in a certain context, you can simply increase the font size of the component and everything inside will scale with it!

To show this in action, here are some article teasers to demonstrate em component scaling. Click on the CODEPEN image below and try modifying lines 1-15:

Note: For the rest of my examples I’m using a class naming convention based on BEM and Atomic Design, also from my previous article.

To prevent unwanted em contamination, we're going to set a rem font-size on every component. Doing so locks the scope of our em inheritance, and BEM gives us an obvious place to do that.

For example, using the teaser-list component from my previous article we want featured ‘teaser lists’ to be more dominant than a regular ‘teaser list’:

.o-teaser-list {
  font-size: 1rem;
}

.o-teaser-list--featured {
  font-size: 1.25rem;
}

.o-teaser-list__title {
  font-size: 1.25em; // In a normal context this will be 20px
                     // In a featured context this will be 25px
}

Meanwhile, in our teaser component:

.m-teaser {
  font-size: 1rem;
}

.m-teaser__title {
  font-size: 1.125em; // This will be 18px whether it's in an o-teaser-list or o-teaser-list--featured,
                      // because the component wrapper is using REM
}

Element scaling and adjustment becomes much easier, exceptions become less of a headache, and we accomplish these things without the risk of unintended em inheritance.

As a somewhat absurd example to demonstrate how powerful this can be, I’ve created a pixel perfect Lullabot logo out of a few <div> tags and CSS. A simple change to the font-size on the wrapper will scale the entire graphic (see lines 1–18  in the CSS pane):

One potential drawback to this approach: it does take a little more brain-power to write your component CSS. Creating scaling components means your calculator app will be opened a lot, and you’ll be doing a lot of this kind of computation:

$element-em = $intended-size-px / $parent-element-calculated-px

Sass (or similar preprocessor) functions can help you with that, and there are some tools that can auto-convert all px values to rem in your CSS for you since rem can be treated as a constant.
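For instance, here’s a minimal sketch of such a helper, with hypothetical names (note that newer Sass versions would use math.div for the division):

// Convert a px size to em, given the parent's calculated px font-size
@function em($target-px, $context-px: 16px) {
  @return ($target-px / $context-px) * 1em;
}

.m-teaser__title {
  font-size: em(18px); // (18px / 16px) * 1em = 1.125em
}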

2 ♡ Works with Accessibility Features

The most common accessibility issue is low vision, by a wide margin. For quite a while browser zoom has worked really well with px units. I hung my hat on that fact and moved on, using px everywhere in my CSS and assuming that low-vision users were taken care of by an awesome zoom implementation.

Unfortunately that’s not the case.

I attended an accessibility training by Derek Featherstone, who has done a lot of QA with users that depend on accessibility features. In his experience, low-vision users often have applications, extensions, or settings that increase the default font size instead of using the browser’s zoom.

A big reason for this might be that zoom is site specific, while font size is a global setting.

Here’s how a low-vision user’s experience will break down depending on how font-sizes and breakpoints are defined:

Scenario 1:

font-size: px
@media (px)

The text size will be unaffected. The low-vision user will be forced to try another accessibility feature so they can read the text. No good.

Scenario 2:

font-size: rem or em
@media (px)

The page can easily appear 'broken', since changes in font-size have no effect on the px-based breakpoints.
For example, a sidebar could have 6 words per line with default browser settings:

But for a user with a 200% font-size, the breakpoint will not change, so the sidebar will have 3 words per line, which is not how we wanted it to look and often produces broken-looking pages.

Scenario 3:

font-size: rem or em
@media (em)

Users with an increased default font-size will get an (almost*) identical experience to users that use zoom. An increase in default font-size will proportionately affect the breakpoints, so in my opinion, the EMs still have it.

For example, if we have breakpoints in em a user with a viewport width of 1000px that has their font-size at 200% will get the same breakpoint styles as a user with a 500px viewport and the default font-size.

* The exception being media elements will definitely be resized with zoom, but may not change in size with an increased default font-size, unless their width is set with relative units.

A quick note on em units in media queries:
Since media queries cannot be nested in a selector and don't have a 'parent element', em units will be calculated using the document's font-size.

Why not use rem you ask? It's buggy in some browsers some of the time, and even if it wasn't, it's one less character to type.
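As a quick sketch (the breakpoint and selector are hypothetical):

/* 48em = 768px at the default 16px font size;
   at a 200% default font size this breakpoint kicks in at 1536px */
@media (min-width: 48em) {
  .sidebar {
    width: 25%;
  }
}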

Conclusion

For sites that don’t have dedicated front-end people, or don’t have complex components, this might be more work than it’s worth. At a minimum, all of our font-size styles should be in rem units. If that sounds like a chore, PostCSS has a plugin that will convert all px units to REM (and it can be used downstream of Sass/Less/Stylus). That will cover the flexibility and sturdiness needed for accessibility tools.

For large projects with ornate components, specific responsive scaling needs, and where one bit gone wrong can make something go from beautiful to broken, this can save your bacon. It’ll take some extra work, but keeping the em scope locked at the component level will keep extra work at a minimum with some pretty slick benefits.

Prioritizing with Dual-Axis Card Sorts

At a recent client design workshop, the Lullabot team ran into a classic prioritization challenge: tension between the needs of a business and the desires of its customers. Our client was a well-known brand with strong print, broadcast, and digital presences—and passionate fans who'd been following them for decades. Our redesign efforts were focused on clearing away the accumulated cruft of legacy features and out-of-control promotional content, but some elements (even unpopular ones) were critical for the company's bottom line. Asking stakeholders to prioritize each site feature gave deeply inconsistent results. Visitors loved feature X, but feature Y generated more revenue. Which was more important?

Multi-Axis Delphi Sort to the Rescue, Yo

Inspired by Alberta Soranzo and Dave Cooksey's recent presentation on advanced taxonomy techniques, we tried something new. We set up a dual-axis card sort that captured value to the business and value to the user in a single exercise. Every feature, content type, and primary site section received a card and was placed on a whiteboard. The vertical position represented business value, the horizontal represented value to site visitors, and participants placed each card at the intersection.

In addition, we used the "Delphi Card Sorting" technique described in the same presentation. Instead of giving each participant a blank slate and a pile of cards, we started them out with the results of the previous participant's card sort. Each person was encouraged to make (and explain) any changes they felt were necessary, and we recorded the differences after each 15-minute session.

With axes balancing user interest and business value, the upper-right corner becomes an easy way to spot high-priority features.

The results were dramatic. The hands-on, spatial aspect of card sorting made it fast and easy for participants to pick up the basics, and mapping the two kinds of "value" to different axes made each stakeholder's perspectives much clearer. Using the Delphi sorting method, we quickly spotted which features everyone agreed on, and which required additional investigation. Within an hour, we'd gathered enough information to make some initial decisions.

The Takeaway

Both of the tools we used—Delphi sorting and multi-axis card sorting—are quick, easy, and surprisingly versatile additions to design workshops and brainstorming sessions. Multi-axis sorts can be used whenever two values are related but not in direct conflict; the time to produce a piece of content versus the traffic it generates is another great example. Delphi sorting, too, can be used to streamline the research process whenever a group is being asked to help categorize or prioritize a collection of items.

Based on the results we've seen in our past several client engagements, we'll definitely be using these techniques in the future.

BEM & Atomic Design: A CSS Architecture Worth Loving

Atomic Design is all the rage; I’ve recently had the pleasure of using BEM (or CEM in Drupal 8 parlance) and Pattern Lab on Carnegie Mellon’s HCII’s site. Working through that site I landed on a front-end architecture I’m very happy with, so much so, I thought it’d be worth sharing.

CSS Architecture Guidelines

Generally I’m looking for:

  • Low specificity: If specificity battles start between selectors, the code quality starts to nosedive. Having a low specificity will help maintain the integrity of a large project’s CSS for a lot longer.
  • Easy-to-understand naming: Ideally it won’t take mental effort to understand what a class does, or what it should apply to. I also want it to be easy to learn for people who are new to the code base.
  • Easy to pick up: New developers need to know how and where to write features, and how to work in the project. A good developer experience on a project can help prevent a mangled code base and decrease the stress level.
  • Flexible & sturdy: There are a lot of things we don’t control when we build a site. Our code needs to work with different viewports, user settings, when too much content gets added in a space, and plenty of other non-ideal circumstances.

As part of the project I got to understand the team that maintains the site, and what they were used to doing. They had a Sass tool chain with some custom libraries, so I knew it was safe to build something custom that relied on front-end developers to maintain it.

Other teams and other projects can warrant very different approaches from this one.

Low specificity is a by-product of BEM, but the other problems are a little trickier to solve. Thankfully the tools I went for when I started the project helped pave the way for a great way to work.

Class Naming and File Organization

At Lullabot we’ve been using BEM naming on any project we build the theme for. On most of our projects, our class names for a menu might break down like this:

  • menu - the name of the component
  • menu__link , menu__title , menu__list - Examples of elements inside of the menu
  • menu--secondary , menu__title--featured , menu__link--active - Examples of variations. A variation can be made on the component, or an element.

We try to make our file organization similarly easy to follow. We generally use Sass and create a ‘partial’ per component, with a SMACSS-ish folder naming convention. Our Sass folder structure might look something like this:

base/
├ _root.scss
├ _normalize.scss
└ _typography.scss
component/
├ _buttons.scss
├ _teaser.scss
├ _teaser-list.scss
└ _menu.scss
layout/
├ _blog.scss
├ _article.scss
└ _home.scss
style.scss

On this project, I went for something a little different.

As I wrapped my head around Pattern Lab’s default component organization, I really started to fall in love with it! Components are split up into three groups: atom, molecule, and organism. The only thing that defines the three levels is ‘size’, and if the component is a parent to other components. The ‘size’ comes down to how much HTML a component is made up of, for instance:

  • one to a few tags is likely to be an atom
  • three to ten lines of HTML is likely a molecule
  • any more is likely an organism

In the project I tried to make sure that child components were at least a level lower in particle size than their parent. Organisms can be parents of molecules and atoms, molecules can be parents of atoms, but atoms can't be parents of any component. Usually things worked out that way, but in a few instances I bumped a component up a particle size.

Pattern Lab also divides layouts into pages and templates. Templates are re-used on many web pages, pages are a specific implementation of a template. For instance, the section-overview template is the basic layout for all section pages, and we could make an about page, which builds on the section-overview styles.

Getting deeper into Pattern Lab, I loved how that structure informed the folder organization. To keep things consistent between the Pattern Lab template folders and my Sass folders I decided to use that naming convention with my folders.

Basically this split my components folder into three folders (atom, molecule and organism), and layouts into two (template, page). While I liked the consistency, I started having a hard time figuring out which folder I had put some components in, and I’m the guy who wrote it!

Fortunately this was early in the project, so I quickly reworked the class names with a prefix of the first letter of their “Pattern Lab type”.

The component classes now look like this:

  • a-button
    • a-button--primary
  • m-teaser
    • m-teaser__title
  • m-menu
    • m-menu__link
  • o-teaser-list
    • o-teaser-list__title
    • o-teaser-list__teaser

The SCSS folders then followed a similar pattern:

00_base/
├ _root.scss
├ _normalize.scss
└ _typography.scss
01_atom/
└ _a-buttons.scss
02_molecule/
├ _m-teaser.scss
├ _m-menu.scss
└ _m-card.scss
03_organism/
├ _o-teaser-list.scss
└ _o-card-list.scss
04_template/
├ _t-section.scss
└ _t-article.scss
05_page/
├ _p-blog.scss
└ _p-home.scss
style.scss

With those changes, a class name tells me where to find the Sass partial, the Pattern Lab template, and the component in the Pattern Lab library. Win!

Other class naming prefixes

Drupal 8 wisely used a convention from SMACSS that all classes added/touched/used by JavaScript behaviors should be prefixed with js-. In Drupal 7 I would often remove classes from templates and accidentally break behavior, but no more!

A few examples of JS classes:

js-dropdown__toggle js-navigation-primary js-is-expanded

Another convention we have used in our projects at Lullabot is to add a u- prefix for utility classes.

A few examples of potential utility classes:

u-clearfix u-element-invisible u-hide-text

This clearly communicates that the class has a specific purpose and should not be extended. No one should write a selector like .t-sidebar .u-clearfix, unless they want angry co-workers.

These little prefixes make utility and JavaScript classes easy to spot, in case they aren’t immediately obvious.

Intersection of Particles

In past BEM projects I’ve felt uneasy about the parent/child component relationship. There are sometimes points where I feel a parent/child relationship of two components is preferred, but that’s not what BEM is good for.

When I do create a parent/child relationship, often one component’s styles will need to be higher in the cascade than the other, which means more code documentation to make sure that that ‘gotcha’ is understood, and it introduces a little fragility.

This architecture handles that issue very well.

Let’s take a look at a ‘teaser list’: it contains a title for the listing and then a list of ‘teaser’ components.

In this example the wrapper of the ‘teaser’ component has two jobs:

  • define how the 'teaser’ is laid out in the ‘teaser list’ component
  • define any backgrounds, borders, padding, or other interior effects that the teaser has

With the naming convention we’ve gone with, that wrapper has two classes that cover each function:

  • o-teaser-list__teaser makes sure the component is flowing correctly inside of its parent
  • m-teaser class is responsible for the aesthetic and how the wrapper affects the contents

Using two classes will make sure the styles needed for the teaser’s layout in its parent won’t affect teasers that are in different contexts. The single-letter prefix makes it even easier to tell which class comes from the parent, and which defines the child component.
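In markup, that pairing might look something like this sketch (the elements and content are hypothetical):

<div class="o-teaser-list">
  <h2 class="o-teaser-list__title">Latest articles</h2>

  <!-- The wrapper carries the parent's layout class and the child component's class -->
  <div class="o-teaser-list__teaser m-teaser">
    <h3 class="m-teaser__title">Teaser headline</h3>
  </div>
</div>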

The partials are loaded in ‘particle size’, smallest to largest. I know if there’s any overlap in the styles for those two classes, the parent component’s class will win. No cascade guesswork, and no fragility introduced by the parent/child relationship.

As I mentioned before, I tried to make sure that parent components were always a larger ‘particle size’ than their children. There are a few instances where organisms were parents to other organisms, and, looking back, I wish I’d separated those organisms by adding a new particle size to the mix. Pattern Lab would have allowed me to add onto its naming conventions, or even completely replace them.

Conclusion

I’m not selling the silver bullet to solve CSS architecture, but I am really happy with how this turned out. This setup certainly isn’t for every project or person.

I tend to use a few preprocessors (mainly Node Sass and PostCSS) and find them helpful, but there’s no need for any fancy preprocessors or ‘buzz word tools’. The core of this is still something that can be written in vanilla CSS.

Leave a comment; I’d love to hear feedback on others’ experiences trying to create these kinds of systems. What worked, what didn’t work; or specific feedback on anything I brought up here!

The Accidental Project Manager: Controlling your project

Project managers are an interesting crew. We often work alone, supporting our project team, usually without much training. That means some assembly is required to become a competent PM.

We often start out in some other discipline (typically development or design) and evolve into project managers out of necessity. There’s a range of specific skills that we’re expected to have, but can’t easily acquire. Without formal training, the skills we need only come from painful lessons, learned once we’re already in the trenches.

That was certainly my story—I’m an accidental project manager. If that’s you, and you’re managing projects without all the tools you think you might need, this article is for you. Let’s look at how to successfully control your projects, so they don’t surprise you and catch fire when you least expect it.

What does project control look like?

Initially, controlling your project is about planning and knowing the work you need to accomplish. The trick is that project needs and requirements shift over time. You learn more about the project as you go, and it’s normal to find the project exceeds your mandate. That means after the initial planning phase, it's important to be conservative when considering new priorities.

Adding new work mid-project exposes you to several dangers, including:

  • accidentally overcommitting your team
  • inadvertently shifting your delivery dates
  • lowering the quality of the work you’ve already committed to deliver

It might be one big high-priority feature, or death by 1,000 feature requests, where small changes have a large impact. Either way, controlling how new requests are handled when you’re in the middle of a project is essential. Whether you’re working on a project inside your company, or as a vendor with one or more clients, you’re the one that keeps things on track.

Developers will push at you for more time to do something ‘the right way’. Designers will use their imaginations and invent features no one asked for. Business stakeholders will have ‘good ideas’. All of these war against the eventual completion of your project. Despite limited budgets and looming deadlines, it seems like everyone gets stuck in blue-sky thinking.

The thing is, they’re sort of doing what they’re supposed to do. But someone has to bottle all the creative energy and get everyone to finish the task at hand. Sometimes, the only one on the team that has the perspective to say “Good idea, but not right now!” is you.

So how do you control the scope of your project?

What can you control?

Much of what’s in this article is focused on vendor/client relationships, but it can be applied to internal PMs and their projects with a little creative imagination. Fundamentally, we’re all handling time, money, and the work that is produced by our team. Those three things are dependent on one another, in a relationship that’s often called the Iron Triangle.

The Iron Triangle is called that because the relationship between those three elements is fixed. Any change to one part (the amount of work you’ll produce) will have a corresponding change to the other two (the time or the cost of your project).

If you work inside an organization, you may feel like you have no way to say ‘no’ to your boss’s requests for expansion of your project scope. That may be true. Your best hope is to point out the Iron Triangle and express to them what their additional requests will mean to the time it takes to deliver your project, and possibly the opportunity cost of continuing to run the project while other priorities go unattended.

For those of us in a client services or vendor role, it’s easier to say no, if you’re paying attention.

Your statement of work

When it comes to scope control, a written statement of work is your best and only friend, especially in a fixed bid project.

A well-executed statement of work (SOW) will have quantified measurements that give you boundaries to perform your work. Here are the kinds of controls that you might expect to find in a well-written SOW:

Responsibilities and assumptions

Good SOWs may also contain language that describes what the client is responsible for. These could be anything, but typical items include:

  • dates of delivery for specifications, design comps, or other supporting documentation: "Design comps to be delivered on or before March 3rd"
  • division of labor: "Client will handle all deployment and hosting-related tasks"
  • descriptions of the general type and complexity of the solutions that your team will deliver

Based on those assumptions or responsibilities, you have freedom to:

  • alert the client that the project will need more time and budget if they are in danger of missing a deadline
  • discuss options with the client when their requests exceed the SOW's intent, e.g. if a feature was supposed to be off-the-shelf and suddenly requires a custom solution
  • gently refuse tasks that are supposed to be on the client's plate

Quantifiable constraints

Quantifiable means a specific number—of hours, of pages, of features to be built, etc. These may be listed in the SOW itself, or in an appendix, typically generated from an estimation spreadsheet.

There's a ton of different things you can constrain in an SOW. What you choose to constrain should be based on what works for your team. My own experiences tell me a list of features, with written assumptions about how complex the implementation will be, is the best way to lock down what’s to be built.

That way, new features which are clearly not in the SOW can be identified. In the same way features that turn into a black hole of complexity, or stretch beyond their specified implementation or level of complexity can also be controlled.

If your SOW has numbers in it that you can use to help control your project, you're in a great position to ensure a safe and positive outcome for everyone involved.

HEY! What about Agile?

If you’re a fan of Agile methodologies like Scrum, you’ve probably noticed that most of this article so far has been focused on waterfall or fixed-bid style projects. In an Agile project, you may not have a list of features or an SOW with quantifiable constraints. However, there is another kind of constraint you will have: sprint planning.

In an Agile/Scrum world, it’s typical to plan the next 2-4 weeks of work with your client. These planning exercises typically involve estimation and discussion about the work that your product owner prioritizes, to see how much the team can fit in over the next sprint.

It’s ALSO typical for clients to push for you to agree to more work than your team can really support. Scope control in this case means defending the slack in your developers’ schedules, so that they don’t face overcommitment and burnout.

In the same way, if your product owner tries to switch priorities on you in the middle of the sprint, your job is to protect the productivity of your team. If you can’t stop the last minute change, you can at least try to trade out an equivalent amount of work so your team isn’t entirely buried.

Manage to the SOW

To keep your project on track, you need to:

  • Know your quantifiable constraints (SOW, or current sprint)
  • Track your project according to those constraints
  • Talk about those constraints regularly with your team, and the client

It’s almost inevitable that your boundaries will be challenged during the project. All kinds of risks and realities will arise—unclear business requirements, changing priorities, or unexpected conditions mid-project may cause a client to request additional work.

Here’s possibly the most important point of this whole article:

When work is requested in excess of the quantities specified by the SOW, or is not covered in the SOW or related scoping appendices, it should not be undertaken without a change order.

As a new project manager, I fell down on this point over and over again. Don't be like me: Use your SOW to protect yourself. Not managing to your SOW is like intentionally cutting your brakes before going on a road trip. The worst part is that your team will take the hit for the promises you make—be sure to use your quantifiable constraints to protect them.

For Agile projects, as discussed earlier, mid-sprint interruptions violate the planning and productivity cycle. If prioritization of work on your project is that volatile, you might consider moving to a Kanban-based workflow, where the product owner can prioritize work as needed, and your dev team pulls that work into their process as they have capacity. That way, your team can be measured by their throughput, based on priorities the client set. That puts the responsibility on the client to make up their mind, so you can deliver something of value to them.

Quantifiable constraints and assumptions tell you when you're succeeding with your project. If you lack a clear definition of done, or don't have any tools to help you control your project, you can't prevent scope creep.

Where can I get an SOW like that?

Many projects have failed because the SOW didn't have clear controls in it. If you find yourself in that position, you need to scream loudly to your boss, salesperson, or whomever can fix the situation for you.

If you're stuck in a SOW that doesn't have appropriate controls, you may just have to live with it. But it doesn't have to stay that way. The right kind of attention to quantifiable constraints and spelling out project assumptions during the sales process can make all the difference when you go to execute the project.

But my projects are internal!

Internal projects may not have a written statement of work, with associated budgets and deadlines. The thing is, you’re still on the hook if your project runs late. You still need protection.

You might try and publish a scope document for your project, and use it to say “Your request wasn’t considered originally. This is going to take longer”. You can be as loose or as formal as needed with your documentation, depending on the stakeholders you’re working with.

Try to get that written scope early in the project planning process, and circulate it to everyone who might make your life difficult later on. Getting scope agreement from your stakeholders early on might give you the means to push back when you need it later.

How to handle requests for out-of-scope work

All right. You’re mid-project and your stakeholders say “Could you just add personalized cat pictures to every page of the site, based on the location of each site visitor? Thanks.”

This is where you apply Darren’s first rule of scope control:

“Don’t commit to anything during the same meeting in which it was requested.”

Now, your client believes this has business value. You don’t want to flat-out reject their idea, even if the SOW doesn’t allow it. Instead, use the ‘Yes, and…’ technique. Affirm the value of that request, and tack on your condition or concern at the end.

In many cases, you’re saying, “Yes, and I’ll need more money to build that feature”. But it comes out like this:

“Geolocated cat pictures? Yes, I can see where that would be very important to your site. I’ll need to check our SOW and our current timeline to see how that might fit into what we’re already doing.”

Then you hang up the phone and go check your SOW. Don’t forget that part.

Once you’ve deferred the request...

If you’re lucky, your SOW contains a clause specifically excluding geolocated cat pictures. Problem solved. Now you can go back to the client, point to the SOW, and let them know they’ll need a change order for you to start on that work.

If you’re not lucky, the SOW doesn’t help you control for this type of request. At that point, it’s time to fall back on the Iron Triangle, and look at how this request will impact work you’re already committed to.

You can use that analysis and estimation to drive the discussion. It’s especially important to compare this new request against other priorities that are already underway. If you see the new request having a negative impact, say something!

Paperwork!

After discussion and possible negotiation about the request, the client may decide they want to move forward. Then you write a change order, sometimes with the help of an account manager, your boss, or whomever else might need to be involved.

Internal project managers, this is a good time to update your scope document and communicate about the change and its impact on your timeline. Whatever the case, written documentation of agreements protects everyone.

The worst thing of all is if you allow specific kinds of work (custom code, multiple design revisions that blow out your schedule, etc.) that your SOW actually protected you from.

At the very least you should know those limits, and force a conversation that makes your stakeholder acknowledge they’re changing the rules. Accompanying documentation like change orders, or revisions to scope documents should be executed once agreement has been reached.

Whew!

To summarize, here are some things you should notice about smart project managers:

  • They already know what’s in their SOW.
  • They don’t commit right away when a new requirement surfaces.
  • They use the ‘Yes, and...' tactic to acknowledge the client's business requirement, while expressing the boundaries under which that work could be completed.
  • They estimate the impact of new requests against present work, deadlines, and budgets, and communicate that impact to the client.
  • They negotiate how (or if) to satisfy the request and get the paperwork done.
  • They document and communicate the change to everyone involved.

By contrast, don’t be accidental in your scope control.

  • Don’t file your SOW away and never read it.
  • Don’t agree to new work requests right away without consulting your SOW or your team.
  • Don’t dismiss new feature requests without affirming your stakeholders.
  • Don’t decide about requests for new work without considering existing commitments, deadlines and budgets.
  • Don’t allow verbal agreements. Get decisions written down where everyone can see, and communicate with all affected stakeholders.

The soft stuff

It’s all well and good to talk about managing to the SOW, but that can be hard to do. It’s scary. Sometimes, you don’t feel empowered to say no. Other times, you think you might need to have a hard conversation around scope or deadlines, and don’t even know exactly what’s worrying you.

I’ve got good news for you: when you feel that way, uncomfortable and uncertain, you’re right at the edge of really succeeding in project management. This is your moment to get it right.

If you’re worried about something on your project, don’t just tell yourself everything is going to be fine. It won’t be. Don’t just work harder. It won’t make things better. Not unless you talk about those worries out loud to people who can do something about them. Use your concerns to drive your project’s success—your discomfort may end up being your best friend.

When you’re worried:
  • Listen to your gut
  • Talk about your worries with people you trust who can help you see them more clearly
  • Once you’re clear, externalize your worries to your stakeholders in a proactive, professional manner, even if it’s scary.

You’ll be glad you did.

Replacing the Body Field in Drupal 8

The body field has been around since the beginning of Drupal time. Before you could create custom fields in core, and before custom entities were in core, there was a body field. As it exists now, the body field is a bit of a platypus. It's not exactly a text field like any other text field. It's two text fields in one (summary and body), with a lot of specialized behavior to allow you to show or hide it on the node form, and options to either create distinct summary text or deduce a summary by clipping off a certain number of characters from the beginning of the body.

The oddity of this field can create problems. The summary has no format of its own; it shares a format with the body. So you can't have a simple format for the summary and a more complex one for the body. The link to expose and hide the summary on the edit form is a little non-intuitive, especially since no other field behaves this way, so it's easy to miss the fact that there is a summary field there at all. If you are relying on the truncated text for the summary, there's no easy way to see in the node form what the summary will end up looking like. You have to preview the node to tell.

I wanted to move away from using the legacy body field in favor of separate body and summary fields that behave in a more normal way, where each is a distinct field, with its own format and no unexpected behavior. I like the benefits of having two fields, with the additional granularity that provides. This article describes how I made this switch on one of my legacy sites.

Making the Switch

The first step was to add the new fields to the content types where they will be used. I just did this in the UI by going to admin > structure > types. I created two fields, one called field_description for the full body text and one called field_summary for the summary. My plan was for the summary field to be a truncated, plain text excerpt of the body that I could use in metatags and in AMP metadata, as well as on teasers. I updated the Manage Display and Manage Form Display data on each content type to display my new fields instead of the old body field on the node form and in all my view modes.

Once the new fields were created I wanted to get my old body/summary data copied over to my new fields. To do this I needed an update hook. I used Drupal.org as a guide for creating an update hook in Drupal 8.

The instructions for update hooks recommend not using normal API calls, like $node->save(), inside update hooks, and instead updating the database directly with a SQL query. But that would require understanding all the tables that need to be updated. This is much more complicated in Drupal 8 than it was in Drupal 7. In Drupal 7 each field has exactly two tables, one for the active values of the field and one with revision values. In Drupal 8 there are numerous tables that might be used, depending on whether you are using revisions and/or translations. There could be up to four tables that need to be updated for each individual field that is altered. On top of that, if I had two fields in Drupal 7 that had the same name, they were always stored in the same tables, but in Drupal 8 if I have two fields with the same name they might be in different tables, with each field stored in up to four tables for each type of entity the field exists on.
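To make that concrete, here is roughly how the default table names work out for a node body field (an illustration of the naming patterns, not an exhaustive list; revision and translation settings change exactly which tables exist):

Drupal 7 (one pair of tables per field name, shared by every entity type):
field_data_body
field_revision_body

Drupal 8 (a pair of tables per field, per entity type):
node__body
node_revision__body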

To avoid any chance of missing or misunderstanding which tables to update, I went ahead and used the $node->save() method in the update hook to ensure every table gets the right changes. That method is time-consuming and could easily time out for mass updates, so it was critical to run the updates in small batches. I then tested it to be sure the batches were small enough not to create a problem when the update ran.

The update hook ended up looking like this:

<?php

use Drupal\Core\Database\Database;

/**
 * Update new summary and description fields from body values.
 */
function custom_update_8001(&$sandbox) {
  // The content types to update.
  $bundles = ['article', 'news', 'book'];
  // The new field for the summary. Must already exist on these content types.
  $summary_field = 'field_summary';
  // The new field for the body. Must already exist on these content types.
  $body_field = 'field_description';
  // The number of nodes to update at once.
  $range = 5;

  if (!isset($sandbox['progress'])) {
    // This must be the first run. Initialize the sandbox.
    $sandbox['progress'] = 0;
    $sandbox['current_pk'] = 0;
    $sandbox['max'] = Database::getConnection()
      ->query("SELECT COUNT(nid) FROM {node} WHERE type IN (:bundles[])", array(':bundles[]' => $bundles))
      ->fetchField();
  }

  // Update in chunks of $range.
  $storage = Drupal::entityManager()->getStorage('node');
  $records = Database::getConnection()->select('node', 'n')
    ->fields('n', array('nid'))
    ->condition('type', $bundles, 'IN')
    ->condition('nid', $sandbox['current_pk'], '>')
    ->range(0, $range)
    ->orderBy('nid', 'ASC')
    ->execute();
  foreach ($records as $record) {
    $node = $storage->load($record->nid);
    // Get the body values if there is still a body field.
    if (isset($node->body)) {
      $body = $node->get('body')->value;
      $summary = $node->get('body')->summary;
      $format = $node->get('body')->format;
      // Copy the values to the new fields, being careful not to wipe out
      // other values that might be there.
      $updated = FALSE;
      if (empty($node->{$summary_field}->getValue()) && !empty($summary)) {
        $node->{$summary_field}->setValue(['value' => $summary, 'format' => $format]);
        $updated = TRUE;
      }
      if (empty($node->{$body_field}->getValue()) && !empty($body)) {
        $node->{$body_field}->setValue(['value' => $body, 'format' => $format]);
        $updated = TRUE;
      }
      if ($updated) {
        // Clear the body values.
        $node->body->setValue([]);
      }
    }
    // Force a node save even if there are no changes to force the pre_save
    // hook to be executed.
    $node->save();
    $sandbox['progress']++;
    $sandbox['current_pk'] = $record->nid;
  }

  $sandbox['#finished'] = empty($sandbox['max']) ? 1 : ($sandbox['progress'] / $sandbox['max']);

  return t('All content of the types: @bundles were updated with the new description and summary fields.', array('@bundles' => implode(', ', $bundles)));
}

Creating the Summary

That update would copy the existing body data to the new fields, but many of the new summary fields would be empty. As distinct fields, they won't automatically pick up content from the body field, and will just not display at all. The update needs something more to get the summary fields populated. What I wanted was to end up with something that would work similarly to the old body field. If the summary is empty I want to populate it with a value derived from the body field. But when doing that I also want to truncate it to a reasonable length for a summary, and in my case I also wanted to be sure that I ended up with plain text, not markup, in that field.

I created a helper function in a custom module that would take text, like that which might be in the body field, and alter it appropriately to create the summaries I want. I have a lot of nodes with html data tables, and I needed to remove those tables before truncating the content to create a summary. My body fields also have a number of filters that need to do their replacements before I try creating a summary. I ended up with the following processing, which I put in a custom.module file:

<?php

use Drupal\Component\Render\PlainTextOutput;

/**
 * Clean up and trim text or markup to create a plain text summary of $limit size.
 *
 * @param string $value
 *   The text to use to create the summary.
 * @param string $limit
 *   The maximum characters for the summary, zero means unlimited.
 * @param string $input_format
 *   The format to use on filtered text to restore filter values before creating a summary.
 * @param string $output_format
 *   The format to use for the resulting summary.
 * @param boolean $add_ellipsis
 *   Whether or not to add an ellipsis to the summary.
 */
function custom_parse_summary($value, $limit = 150, $input_format = 'plain_text', $output_format = 'plain_text', $add_ellipsis = TRUE) {
  // Allow filters to replace values so we have all the original markup.
  $value = check_markup($value, $input_format);

  // Completely strip tables out of summaries, they won't truncate well.
  // Stripping markup, done next, would leave the table contents, which may
  // create odd results, so remove the tables entirely.
  $value = preg_replace('/<table(.*?)<\/table>/si', '', $value);

  // Strip out all markup.
  $value = PlainTextOutput::renderFromHtml(htmlspecialchars_decode($value));

  // Strip out carriage returns and extra spaces to pack as much info as
  // possible into the allotted space.
  $value = str_replace("\n", "", $value);
  $value = preg_replace('/\s+/', ' ', $value);
  $value = trim($value);

  // Trim the text to the $limit length.
  if (!empty($limit)) {
    $value = text_summary($value, $output_format, $limit);
  }

  // Add an ellipsis.
  if ($add_ellipsis && !empty($value)) {
    $value .= '...';
  }

  return $value;
}

Adding a Presave Hook

I could have used this helper function in my update hook to populate my summary fields, but I realized that I actually want automatic population of the summaries to be the default behavior. I don't want to have to copy, paste, and truncate content from the body to populate the summary field every time I edit a node, I'd like to just leave the summary field blank if I want a truncated version of the body in that field, and have it updated automatically when I save it.

To do that I used the pre_save hook. The pre_save hook will update the summary field whenever I save the node, and it will also update the summary field when the above update hook does $node->save(), making sure that my legacy summaries also get this treatment.

My pre_save hook, in the same custom.module file used above, ended up looking like the following:

<?php

use Drupal\Core\Entity\EntityInterface;

/**
 * Implements hook_entity_presave().
 *
 * Make sure the summary is populated.
 */
function custom_entity_presave(EntityInterface $entity) {
  $entity_type = 'node';
  $bundles = ['article', 'news', 'book'];
  // The new field for the summary. Must already exist on these content types.
  $summary_field = 'field_summary';
  // The new field for the body. Must already exist on these content types.
  $body_field = 'field_description';
  // The maximum length of any summary, set to zero for no limit.
  $summary_length = 300;

  // Everything is an entity in Drupal 8, and this hook is executed on all of
  // them! Make sure this only operates on nodes of a particular type.
  if ($entity->getEntityTypeId() != $entity_type || !in_array($entity->bundle(), $bundles)) {
    return;
  }

  // If we have a summary, run it through custom_parse_summary() to clean it up.
  $format = $entity->get($summary_field)->format;
  $summary = $entity->get($summary_field)->value;
  if (!empty($summary)) {
    $summary = custom_parse_summary($summary, $summary_length, $format, 'plain_text');
    $entity->{$summary_field}->setValue(['value' => $summary, 'format' => 'plain_text']);
  }

  // The summary might be empty or could have been emptied by the cleanup in
  // the previous step. If so, we need to pull it from the description.
  $format = $entity->get($body_field)->format;
  $description = $entity->get($body_field)->value;
  if (empty($summary) && !empty($description)) {
    $summary = custom_parse_summary($description, $summary_length, $format, 'plain_text');
    $entity->{$summary_field}->setValue(['value' => $summary, 'format' => 'plain_text']);
  }
}

With this final bit of code I’m ready to actually run my update. Now whenever a node is saved, including when I run the update to move all my legacy body data to the new fields, empty summary fields will automatically be populated with a plain text, trimmed, excerpt from the full text.

Going forward, when I edit a node, I can either type in a custom summary, or leave the summary field empty if I want to automatically extract its value from the body. The next time I edit the node the summary will already be populated from the previous save. I can leave that value, or alter it manually, and it won't be overridden by the pre_save process on the next save. Or I can wipe the field out if I want it populated automatically again when the node is re-saved.

JavaScript or Presave?

Instead of a pre_save hook I could have used JavaScript to automatically update the summary field in the node form as the node is being edited. I would only want that behavior if I'm not adding a custom summary, so the JavaScript would have to be smart enough to leave the summary field alone if I already have text in it or if I start typing in it, while still picking up every change I make in the description field if I don’t. And it would be difficult to use JavaScript to do filter replacements on the description text or have it strip HTML as I'm updating the body. Thinking through all the implications of trying to make a JavaScript solution work, I preferred the idea of doing this as a pre_save hook.

If I were using JavaScript to update my summaries, the JavaScript changes wouldn't be triggered by my update hook, and the update hook code above would have to be altered to do the summary cleanup as well.

Ta-dah

And that's it. I ran the update hook and then the final step was to remove my now-empty body field from the content types that I switched, which I did using the UI on the Content Types management page.

My site now has all its nodes updated to use my new fields, and summaries are getting updated automatically when I save nodes. And as a bonus this was a good exercise in seeing how to manipulate nodes and how to write update and pre_save hooks in Drupal 8.

Using the Template Method pattern in Drupal 8

Software design patterns are a very good way to standardize on known implementation strategies. By following design patterns you create expectations and get comfortable with best practices. Even if you read about a design pattern and realize you have been using it for a long time, learning its formal definition will help you avoid edge cases. Additionally, labeling the pattern makes communication clearer and more effective. If you told someone about a foldable computer that you can carry around, with an integrated trackpad and keyboard, you could have been more efficient by just calling it a laptop.

I have already talked about design patterns in general and the decorator pattern in particular, and today I will tell you about the Template Method pattern. These templates have nothing to do with Drupal’s templates in the theme system.

Imagine that we are implementing a social media platform, and we want to support posting messages to different networks. The algorithm has several common parts for posting, but the authentication and sending of actual data are specific to each social network. This is a very good candidate for the template pattern, so we decide to create an abstract base class, Network, and several specialized subclasses, Facebook, Twitter, …

In the Template Method pattern, the abstract class contains the logic for the algorithm. In this case we have several steps that are easily identifiable:

  1. Authentication. Before we can do any operation in the social network we need to identify the user making the post.
  2. Sending the data. After we have a successful authentication with the social network, we need to be able to send the array of values that the social network will turn into a post.
  3. Storing the proof of reception. When the social network responds to the publication request, we store the results in an entity.

The first two steps of the algorithm are very specific to each network. Facebook and Instagram may have a different authentication scheme. At the same time, Twitter and Google+ will probably have different requirements when sending data. Luckily, storing the proof of reception is going to be generic to all networks. In summary, we will have two abstract methods that will authenticate the request and send the data plus a method that will store the result of the request in an entity. More importantly, we will have the posting method that will do all the orchestration and call all these other methods.

One possible implementation of this (simplified for the sake of the example) could be:

<?php

namespace Drupal\template;

use Drupal\Component\Serialization\Json;

/**
 * Class Network.
 *
 * @package Drupal\template
 */
abstract class Network implements NetworkInterface {

  /**
   * The entity type manager.
   *
   * @var \Drupal\Core\Entity\EntityTypeManagerInterface
   */
  protected $entityTypeManager;

  /**
   * Publish the data to whatever network.
   *
   * @param PostInterface $post
   *   A made up post object.
   *
   * @return bool
   *   TRUE if the post was posted correctly.
   */
  public function post(PostInterface $post) {
    // Authenticate before posting. Every network uses a different
    // authentication method.
    $this->authenticate();
    // Send the post data and keep the receipt.
    $receipt = $this->sendData($post->getData());
    // Save the receipt in the database.
    $saved = $this->storeReceipt($receipt);
    return $saved == SAVED_NEW || $saved == SAVED_UPDATED;
  }

  /**
   * Authenticates on the request before sending the post.
   *
   * @throws NetworkException
   *   If the request cannot be authenticated.
   */
  abstract protected function authenticate();

  /**
   * Send the data to the social network.
   *
   * @param array $values
   *   The values for the publication in the network.
   *
   * @return array
   *   A receipt indicating the status of the publication in the social network.
   */
  abstract protected function sendData(array $values);

  /**
   * Store the receipt data from the publication call.
   *
   * @return int
   *   Either SAVED_NEW or SAVED_UPDATED (core constants), depending on the
   *   operation performed.
   *
   * @throws NetworkException
   *   If the data was not accepted.
   */
  protected function storeReceipt($receipt) {
    if ($receipt['status'] > 399) {
      // There was an error sending the data.
      throw new NetworkException(sprintf(
        '%s could not process the data. Receipt: %s',
        get_called_class(),
        Json::encode($receipt)
      ));
    }
    return $this->entityTypeManager->getStorage('network_receipts')
      ->create($receipt)
      ->save();
  }

}

The public post() method shows how you can structure your posting algorithm in a very readable way, while keeping the extensibility needed to accommodate the differences between networks. Each specialized class implements the steps (abstract methods) that make it different.

<?php

namespace Drupal\template;

/**
 * Class Facebook.
 *
 * @package Drupal\template
 */
class Facebook extends Network {

  /**
   * {@inheritdoc}
   */
  protected function authenticate() {
    // Do the actual work to do the authentication.
  }

  /**
   * {@inheritdoc}
   */
  protected function sendData(array $values) {
    // Do the actual work to send the data.
  }

}

After implementing the abstract methods, you are done. You have successfully implemented the template method pattern! Now you are ready to start posting to all the social networks.

// Build the message.
$message = 'I like the new article about design patterns in the Lullabot blog!';
$post = new Post($message);

// Instantiate the network objects and publish.
$network = new \Drupal\template\Facebook();
$network->post($post);

$network = new \Drupal\template\Twitter();
$network->post($post);

As you can see, this is a behavioral pattern that is very useful for dealing with specialization in a subclass of a generic algorithm.

To summarize, this pattern involves a parent class, the abstract class, and a subclass, called the specialized class. The abstract class implements an algorithm by calling both abstract and non-abstract methods.

  • The non-abstract methods are implemented in the abstract class, and the abstract methods are the specialized steps that are subsequently handled by the subclasses. The main reason why they are declared abstract in the parent class is because the subclass handles the specialization, and the generic parent class knows nothing about how. Another reason is because PHP won’t let you instantiate an abstract class (the parent) or a class with abstract methods (the specialized classes before implementing the methods), thus forcing you to provide an implementation for the missing steps in the algorithm.
  • The design pattern doesn’t define the visibility of these methods; you can declare them public or protected. If you declare them public, you can also surface them in an interface that the abstract base class implements.

In one typical variation of the template pattern, one or more of the abstract methods are not declared abstract. Instead they are implemented in the base class to provide a sensible default. This is done when there is a shared implementation among several of the specialized classes. This is called a hook method (note that this has nothing to do with Drupal's hooks).

Coming back to our example, we know that most of the networks use OAuth 2 as their authentication method. Therefore we can turn our abstract authenticate method into a default OAuth 2 implementation. All of the classes that use OAuth 2 will no longer need to worry about authentication, since that will be the default. The authenticate method will only be overridden in the specialized subclasses that differ from the common case. When we provide a default implementation for one of the (previously) abstract methods, we call that a hook method.
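A minimal sketch of that variation, building on the Network class above (the Oauth2Client helper and the credential handling are hypothetical placeholders, not part of the example module):

<?php

namespace Drupal\template;

/**
 * Variation of Network where authenticate() is a hook method.
 */
abstract class Network implements NetworkInterface {

  /**
   * Default OAuth 2 authentication, shared by most networks.
   *
   * No longer abstract: only subclasses whose network does not use
   * OAuth 2 need to override this.
   */
  protected function authenticate() {
    // Hypothetical helper that performs the OAuth 2 handshake.
    $this->token = Oauth2Client::getToken($this->credentials);
  }

  // post(), sendData(), and storeReceipt() remain as shown earlier.

}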

At this point you may be thinking that this is just OOP or basic subclassing. That’s because the template pattern is very common. Quoting Wikipedia:

The Template Method pattern occurs frequently, at least in its simplest case, where a method calls only one abstract method, with object oriented languages. If a software writer uses a polymorphic method at all, this design pattern may be a rather natural consequence. This is because a method calling an abstract or polymorphic function is simply the reason for being of the abstract or polymorphic method.

You will find yourself in many situations when writing Drupal 8 applications and modules where the Template Method pattern will be useful. The classic example would be annotated plugins, where you have a base class, and every plugin contains the bit of logic that is specific for it.

I like the Template Method pattern because it forces you to structure your algorithm in a very clean way. At the same time it allows you to compare the subclasses very easily, since the common algorithm is contained in the parent (and abstract) class. All in all it's a good way to have variability and keep common features clean and organized.

Navigation and Deep Linking with React Native

Mobile deep links are an essential part of a mobile application’s business strategy. They allow us to link to specific screens in a mobile application by using custom URL schemas. To implement deep links on the Android platform, we create intent filters to allow Android to route intents that have matching URLs to the application runtime. For iOS, we define the custom URL schema and listen for incoming URLs in the app delegate. It’s possible to create a custom module for React Native which wraps this native functionality, but fortunately React Native now ships with the APIs we need for deep linking. This article provides an overview of how to get deep links working in a React Native application for both Android and iOS platforms. We will also cover the basics of routing URLs and handling screen transitions.

The accompanying example application code can be found here. Note that at the time of writing this article, the current stable version of React Native is 0.26.

Setting up the navigation

In this example, we will use React Native’s Navigator component to manage the routes and transitions in the app. For non-trivial apps, you may want to look into the newer NavigationExperimental component instead, as it leverages Redux-style navigation logic.

First, make sure the Navigator component is imported:

import {
  Navigator, // Add Navigator here to import the Navigator component.
  StyleSheet,
  Platform,
  Text,
  View,
  ToolbarAndroid,
  SegmentedControlIOS,
  Linking
} from 'react-native';

Next, we need to define the route objects. In this example, there are two routes, “home” and “account”. You can also add custom properties to each route so that they are accessible later when we want to render the screen associated with the route.

const App = React.createClass({
  ...
  getInitialState() {
    return {
      routes: {
        home: {
          title: 'Home',
          component: HomeView
        },
        account: {
          title: 'My Account',
          component: AccountView
        }
      }
    };
  },

The Navigator component is then returned in the render function:

render() {
  return (
    <Navigator
      ref={component => this._navigator = component}
      navigationBar={this.getNav()}
      initialRoute={this.state.routes.home}
      renderScene={(route, navigator) =>
        <route.component {...route.props} navigator={navigator} />
      }
    />
  );
},

To retain a reference to the navigator component, a function can be passed to the ref prop. The function receives the navigator object as a parameter allowing us to set a reference to it for later use. This is important because we need to use the navigator object for screen transitions outside the scope of the navigator component. 

A navigation bar that persists across all the screens can be provided using the navigationBar prop. A function is used here to return either a ToolbarAndroid or SegmentedControlIOS React component depending on which platform the code is running on. There are two screens we can transition to using the navigation bar here.

getNav() {
  if (Platform.OS === 'ios') {
    return (
      <SegmentedControlIOS
        values={[this.state.routes.home.title, this.state.routes.account.title]}
        onValueChange={value => {
          const route = value === 'Home' ? this.state.routes.home : this.state.routes.account;
          this._navigator.replace(route);
        }}
      />
    );
  } else {
    return (
      <ToolbarAndroid
        style={styles.toolbar}
        actions={[
          { title: this.state.routes.home.title, show: 'always' },
          { title: this.state.routes.account.title, show: 'always' }
        ]}
        onActionSelected={index => {
          const route = index === 0 ? this.state.routes.home : this.state.routes.account;
          this._navigator.replace(route);
        }}
      />
    );
  }
}

The initialRoute prop lets the navigator component know which screen to start with when the component is rendered and the renderScene prop accepts a function for figuring out and rendering a scene for a given route.

With the Navigator component in place, we can transition forwards to a new screen by calling the navigator’s replace function and passing a route object as a parameter. Since we set a reference to the navigator object earlier, we can access it as follows:

this._navigator.replace(route);

We can also use this._navigator.push here if we want to retain a stack of views for back button functionality. Since we are using a toolbar/segmented controls and do not have back button functionality in this example, we can go ahead and use replace.
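If your app does retain a stack, the push-based variant would look something like this sketch, using the same navigator reference:

// push() keeps the current screen on the stack...
this._navigator.push(this.state.routes.account);

// ...so a back button handler can later return to it.
this._navigator.pop();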

Notice that the parameters passed to onActionSelected and onValueChange are 0 and 'Home' respectively. This is due to the differences between the ToolbarAndroid and SegmentedControlIOS components. The argument passed to the handler from the ToolbarAndroid component is an integer denoting the position of the button, while a string with the button value is passed for the SegmentedControlIOS component.

Now that we have a simple application that can transition between two screens, we can dive into how we can link directly to each screen using deep links.

Android

Defining a custom URL

In order to allow deep linking to the content in Android, we need to add intent filters to respond to action requests from other applications. Intent filters are specified in your Android manifest, located in your React Native project at /android/app/src/main/java/com/[your app]/AndroidManifest.xml. Here is the modified manifest with the intent filter added to the main activity:

<activity
  android:name=".MainActivity"
  android:label="@string/app_name"
  android:configChanges="keyboard|keyboardHidden|orientation|screenSize">
  <intent-filter>
    <action android:name="android.intent.action.MAIN" />
    <category android:name="android.intent.category.LAUNCHER" />
  </intent-filter>
  <intent-filter android:label="filter_react_native">
    <action android:name="android.intent.action.VIEW" />
    <category android:name="android.intent.category.DEFAULT" />
    <category android:name="android.intent.category.BROWSABLE" />
    <data android:scheme="deeplink" android:host="home" />
  </intent-filter>
</activity>

The <data> tag specifies the URL scheme that resolves to the activity in which the intent filter has been added. In this example the intent filter accepts URIs that begin with deeplink://home. More than one URI scheme can be specified using additional <data> tags, as shown below.
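For instance, to also match the app’s “account” route, the same intent filter could carry a second <data> tag (a sketch; use whatever hosts your app routes on):

<data android:scheme="deeplink" android:host="home" />
<data android:scheme="deeplink" android:host="account" />

That’s it for the native side of things; all that’s left to do is the implementation on the JavaScript side.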

Receiving the intent data in React

React Native’s Linking API allows us to access any incoming intents. In the componentDidMount() function, we retrieve the incoming intent data and figure out which screen to render. Building on the navigation example above, we can add the following componentDidMount() function:

componentDidMount() {
  Linking.getInitialURL().then(url => {
    if (url) {
      const route = url.replace(/.*?:\/\//g, "");
      this._navigator.replace(this.state.routes[route]);
    }
  });
}

The getInitialURL() function returns the URL that started the activity. Here we are using a string replace to get the part of the URL string after deeplink://. this._navigator.replace is called to transition to the requested screen.

Using the Android Debug Bridge (ADB), we can test the deep links via the command line.

# deeplink://home is the URL scheme; com.deeplinkexample is the app package name.
$ adb shell am start -W -a android.intent.action.VIEW -d deeplink://home com.deeplinkexample

Alternatively, open a web browser on the device and type in the URL scheme. The application should launch and open the requested screen.

iOS

Defining a custom URL

In iOS, we register the URL scheme in the Info.plist file, which can be found in the React Native project at /ios/[project name]/Info.plist. The URL scheme defined in the example is deeplink://. This allows the application to be launched externally using the URL.

<key>CFBundleURLTypes</key>
<array>
  <dict>
    <key>CFBundleTypeRole</key>
    <string>Editor</string>
    <key>CFBundleURLSchemes</key>
    <array>
      <string>deeplink</string>
    </array>
  </dict>
</array>

We also need to add a few extra lines of code to AppDelegate.m to listen for incoming links to the application when it has already been launched and is running.

#import "RCTLinkingManager.h" - (BOOL)application:(UIApplication *)application openURL:(NSURL *)url sourceApplication:(NSString *)sourceApplication annotation:(id)annotation { return [RCTLinkingManager application:application openURL:url sourceApplication:sourceApplication annotation:annotation]; }

If we try to compile the project in Xcode at this point, we will get a RCTLinkingManager not found error. In order to use the LinkingIOS library, we need to manually link to the library in Xcode. There are detailed instructions in the official documentation for how to do this. The library we need to link is named RCTLinking, and the header search path we need to add is:

$(SRCROOT)/../node_modules/react-native/Libraries

Receiving the URL in React

React Native’s Linking API provides an event listener for incoming links. Note that the event listeners are only available on iOS. We want to add the event listener when the component mounts and ensure that it is removed when the component is unmounted to avoid any memory leaks. When an incoming link is received, the handleDeepLink function is executed, which calls this._navigator.replace to transition to the requested screen.

componentDidMount() {
  Linking.addEventListener('url', this.handleDeepLink);
},

componentWillUnmount() {
  Linking.removeEventListener('url', this.handleDeepLink);
},

handleDeepLink(e) {
  const route = e.url.replace(/.*?:\/\//g, "");
  this._navigator.replace(this.state.routes[route]);
}

Test the deep link by visiting the URL in a web browser on the device.

Summary

In this article we covered how to enable deep links and screen transitions in a React Native application. Although the deep link implementation involves a small amount of work in native Android/iOS code, the available React Native APIs allow us to easily bridge the gap between native and JavaScript code, and leverage the JavaScript side to write the logic that figures out where the deep links should go. It’s worth noting that this article only serves as an example. For more complex real-world applications, the NavigationExperimental component should be considered as a more robust solution. We hope you have as much fun trying out mobile deep linking as we did!

Build native iOS and Android apps with React Native

In this article I am going to provide a brief overview of React Native and explain how you can start using it to build your native mobile apps today.

What’s all the fuss?

Having to write the same logic in different languages sucks. It’s expensive to hire separate iOS and Android developers. Did you know you could also be using your web developers to build your mobile apps?

There have been many attempts to utilize web development technologies to spit out an iOS or Android app with the most common being PhoneGap or Titanium. Nevertheless, there have always been issues with their approaches such as non-native UI, memory or performance limitations, and a lack of community support.

Enter React Native. For me, there are two key points as to why I think React Native is here to stay.

1. It brings React to mobile app development

At Lullabot, we think React is awesome. Developers enjoy building applications in React. Over recent years, there have been countless JavaScript frameworks and libraries, but React seems to have gotten the formula just right. For a more detailed comparison, see this article by our very own John Hannah.

The approach behind React is to “Learn once, write anywhere”, and React Native lets developers use React to write both iOS and Android apps. That doesn’t mean the same code will also run on the web, but being able to use the same developers across platforms is a huge win for boosting productivity and dropping costs.

2. It’s native

With React Native, there isn’t really a compromise to building your app in JavaScript. You get to use the language you know and love. Then, as if by magic, you get a fully native app.

React Native comes with a bunch of core APIs and components that we can use to build our native apps. What’s more, there is a lot of effort put into platform parity through single components instead of having separate iOS and Android components for functionality that is very similar. For an example, see Alert.

A source of confusion for beginners is whether they should still build a separate app for iOS and Android. That ultimately comes down to how different you want the apps to be. When I built my first app on React Native, Kaiwa, I only had a single codebase and almost all of the code was the same for both platforms. For the times when you need to target a specific platform, you can easily implement Platform Specific Code.
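As a quick illustration, React Native’s Platform module lets you branch on the running OS (a minimal sketch):

import { Platform, StyleSheet } from 'react-native';

const styles = StyleSheet.create({
  container: {
    // Leave room for the status bar on iOS only.
    paddingTop: Platform.OS === 'ios' ? 20 : 0,
  },
});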

Prerequisites

There are a few things you should do before starting with React Native.

JavaScript & ES6

You’ll want to know JavaScript, that’s for sure. If you’re developing in JavaScript any time from 2015 onwards, you’ll definitely want to start using ES6 (a.k.a. ECMAScript 2015). ES6 does a great job at helping you write cleaner and more concise code. Here is the cheat sheet that I use.
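To give a taste, here are a few of the ES6 features you’ll see constantly in React Native code (illustrative snippets, nothing app-specific):

// Arrow functions, destructuring, and template literals.
const add = (a, b) => a + b;
const { title } = { title: 'Home' };
const greeting = `Welcome to ${title}`;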

React

You’ll need to know how React works. When I first started, I thought that React Native would be a totally different ball game to React. Because it’s for mobile, not web, right? Wrong. It’s the same.

The official documentation does a good job of getting you started. A page that I have at hand at all times when working with React is Component Specs and Lifecycle. It covers one of the topics that will definitely feel foreign when you first start with React. But before you know it, you won’t be able to live without it.

Devices

If you’re developing for iOS and Android you’re ideally going to have both iOS and Android devices to test on. You can get away with just using simulators for the most part but there are definitely differences when using actual devices that you’re going to want to know about. For example, devices act differently depending on power settings and the iOS simulator doesn’t accept push notifications.

If you’re shipping an app, buy devices. They don’t have to be brand-spanking-new, any old used thing from eBay will probably be fine, however, you will need a device that can run at least iOS 7 or Android 4.1. It’s also worth noting that you cannot submit an app to the Apple App Store without an iOS device.

Software

To build for iOS, you’re going to need Xcode which requires a Mac. Xcode has a simulator built in for iOS development. For Android, I use Genymotion as an alternative to the stock Android emulator but you can also connect an Android device via USB or Wi-Fi and debug directly on that.

Getting Started

This guide will help Mac users get started with React Native. If you’re on Linux or Windows, don’t worry! The official documentation has you covered.

Install Homebrew if you don’t already have it. Then use it to install Node.

brew install node

Install the React Native CLI which allows you to play with your projects from the command line.

npm install -g react-native-cli

Optionally, install Watchman. It makes things faster so why not?

brew install watchman

For Android, you’ll need to install Android Studio which gives you the Android SDK and Emulator. For more, see these steps.

Set up your first project. I highly recommend that you do this as opposed to creating the project from scratch yourself. It creates all the files, folders and configurations you will need to compile a native app on either iOS or Android.

react-native init MyApp

From within the new directory for your app, run either of these commands to start testing your app on iOS or Android.

react-native run-ios
react-native run-android

For iOS, you should see the Xcode iPhone simulator launch automatically with your app. If you wish, you can open Xcode and launch the app using ⌘ + R instead. For Android, if you’ve got an emulator running or you’ve set up your device for USB debugging, then that will automatically launch with your app too. Job done!

Debugging

Being able to debug your app is crucial to efficient development. First, you’re going to want to know how to access the developer menu whilst your app is running.

For iOS, shake the device or hit ⌘ + D in the simulator.

For Android, shake the device or press the hardware menu button.

Web developers may be familiar with the concept of LiveReload. By enabling this from the developer menu, any changes to your JS will trigger an automatic reload on any active app. This feature really got my blood pumping the first time I started with React Native. In comparison to working natively, it definitely speeds up development time.

Want another slice of awesome for web developers? By selecting Debug JS Remotely from the developer menu, you can debug your JavaScript in Chrome DevTools.

Tips and tricks for building your first app

These are some of the things that I wish someone had told me before I started working with React Native. I hope they can help you!

Know where to find components

You’re building your app in React. That means you’re going to write React components. React Native provides a lot of components and APIs out of the box (see the official docs for more). I was able to write the majority of my first app with only core components. However, if you can’t find a component for your specific use-case, it’s quite possible someone else has already made it. Head to the Awesome React Native GitHub repo and your mind will be blown by how many contributions the community is making.

Flexbox

Do you know what Flexbox is? If not, here is an article to start learning about it right now. You’re going to be using Flexbox extensively for laying out elements in your UI. Once you get past the learning curve, Flexbox is great and you’ll want to start using it on all your web projects too.
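To give you a flavor, here is a minimal stylesheet sketch that lays out a row of items with flexbox:

import { StyleSheet } from 'react-native';

const styles = StyleSheet.create({
  row: {
    flexDirection: 'row',            // lay children out horizontally
    justifyContent: 'space-between', // distribute them along the main axis
    alignItems: 'center',            // center them on the cross axis
  },
  item: {
    flex: 1, // each child takes an equal share of the row
  },
});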

By default, React Native sets the deployment target device to iPhone within Xcode. You can change this to Universal so that your app will scale correctly on iPad too.

Test

Don’t assume the simulators are enough. Devices can act differently when it comes to things such as keyboards, web sockets, push notifications, performance etc. Also, don’t forget landscape! Lots of people test their app only in portrait and get a shock the first time they rotate.

Don’t stress about the release cycle

At the time of writing this article, a new version of React Native is released every 2 weeks. That’s a really fast release cycle. It’s inspiring to see how much momentum is in the community right now. However, my advice is to not worry if you’re finding it hard to keep up to date. Unless there is a specific feature or bug fix you need, it can be a lot of work upgrading and there are often regressions. You can keep an eye on release notes here.

Navigation

Navigation and routing play a major role in any mobile app. There has been some confusion around Navigator, NavigatorIOS and the recent NavigationExperimental components. Read this comparison for a clear overview. TL;DR Go with NavigationExperimental.

Data flow

Consider some sort of Flux architecture. Although not exactly Flux, Redux is probably the most popular pattern for React apps at present. It’s a great way to have data flow through your app. I found it invaluable for implementing things such as user login. If you’re new to React, I recommend that you read the examples over at Thinking In React before approaching data flow techniques for your app.

Summary

React Native is riding a wave at the moment. The recent announcement that Microsoft and Samsung are also backing React Native will only increase its popularity. Although there may be cases where it’s not the right fit, I do highly recommend considering it for your next mobile project.

Want to learn more? I’ll be presenting on this topic at ForwardJS in San Francisco, July 2016.

Drupalize.Me Sale: Save $10, Forever!

Looking for Drupal training? Our sister company, Drupalize.Me is running a big promotion this week.

Sign up for a monthly Personal Membership, and enter the code SAVE10 at checkout. You’ll save $10 immediately, and you’ll also save $10 each time your membership renews! The monthly savings won’t expire until you cancel or modify your membership.

But hurry—this offer expires on Friday, June 17.

Drupalize.Me is the #1 source for Drupal training tutorials. In the last few months Drupalize.Me has released tons of new Drupal 8 content, including a big Drupal 8 Theming Guide and Drupal 8 Migration Guide, with more on the way.


Adventures with eDrive: Accelerated SSD Encryption on Windows

As we enter the age of ISO 27001, data security becomes an increasingly important topic. Most of the time, we don’t think of website development as something that needs tight security on our local machines. Drupal websites tend to be public, have a small number of authenticated users, and, in the case of a data disclosure, sensitive data (like API and site keys) can be easily changed. However, think about all of the things you might have on a development computer. Email? Saved passwords that are obscured but not encrypted? Passwordless SSH keys? Login cookies? There are a ton of ways that a lost computer or disk drive can be used to compromise you and your clients.

If you’re a Mac user, FileVault 2 is enabled by default, so you’re likely already running with an encrypted hard drive. It’s easy to check and enable in System Preferences. Linux users usually have an encrypted disk option during install, as shown in the Ubuntu installer. Like both of these operating systems, Windows supports software-driven encryption with BitLocker.

I recently had to purchase a new SSD for my desktop computer, and I ended up with the Samsung 850 EVO. Most new Samsung drives support a new encryption technology called "eDrive".

But wait - don’t most SSDs already have encryption?

The answer is… complicated.

SSDs consist of individual cells, and each cell has a limited number of program/erase (PE) cycles. As cells reach their maximum number of PE cycles, they are replaced by spare cells. In a naive scenario, write activity can be concentrated on a small set of sectors on disk, which could lead to those extra cells being used up prematurely. Once all of the spare blocks are used, the drive is effectively dead (though you might be able to read data off of it). Drives can last longer if they spread writes across the entire disk automatically. In other words, you have data that must be scattered randomly across the disk as it’s saved, and then read back together as needed. Another word for that? Encryption! As the poster on Stack Overflow says, it truly is a ridiculous and awesome hack to use encryption this way.

What most SSDs do is have an encryption key which secures the data but is in no way accessible to an end user. Some SSDs might let you access this through the ATA password, but there are concerns about that level of security. In general, if you have possession of the drive, you can read the data. The one feature you do get "for free" with this security model is secure erase. You don’t need to overwrite data on a drive anymore to erase it. Instead, simply tell the drive to regenerate its internal encryption key (via the ATA secure erase command), and BAM! The data is effectively gone.

All this means is that if you’re using any sort of software-driven encryption (like OS X’s FileVault, Windows BitLocker, or dm-crypt on Linux), you’re effectively encrypting data twice. It works, but it’s going to be slower than just using the AES chipset your drive is already using.

eDrive is a Microsoft standard based on TCG Opal and IEEE 1667 that gives operating systems access to manage the encryption key on an SSD. This gives you all of the speed benefits of disk-hosted encryption, with the security of software-driven encryption.

Using eDrive on a Windows desktop has a pretty strict set of requirements. Laptops are much more likely to support everything automatically. Unfortunately, this article isn’t going to end in success (which I’ll get to later), but it turns out that removing eDrive is much more complicated than you’d expect. Much of this is documented in parts on various forums, but I’m hoping to collect everything here into a single resource.

The Setup
  • An SSD supporting eDrive and "ready" for eDrive
  • Windows 10, or Windows 8 Professional
  • A UEFI 2.3.1 or higher motherboard, without any CSMs (Compatibility Support Modules) enabled, supporting EFI_STORAGE_SECURITY_COMMAND_PROTOCOL
  • A UEFI installation of Windows
  • (optionally) a TPM to store encryption keys
  • No additional disk drivers like Intel’s Rapid Storage Tools for software RAID support
  • An additional USB key to run secure erases, or an alternate boot disk
  • If you need to disable eDrive entirely, an alternate Windows boot disk or computer

I’m running Windows 10 Professional. While Windows 10 Home supports BitLocker, it forces encryption keys to be stored with your Microsoft account in the cloud. Honestly for most individuals I think that’s better than no encryption, but I’d rather have solid backup strategies than give others access to my encryption keys.

Determining motherboard compatibility can be very difficult. I have a Gigabyte GA-Z68A-D3-B3, which was upgraded to support UEFI with a firmware update. However, there was no way for me to determine what version of UEFI it used, or whether EFI_STORAGE_SECURITY_COMMAND_PROTOCOL was supported. The best I can suggest at this point is to try it with a bare Windows installation, and if BitLocker doesn’t detect eDrive support, revert back to a standard configuration.

The Install

Samsung disk drives do not ship with eDrive enabled out of the box. That means you need to connect the drive and install Samsung’s Magician software to turn it on before you install Windows to the drive. You can do this from another Windows install, or install bare Windows on the drive knowing it will be erased. Install the Magician software, and set eDrive to "Ready to enable" under “Data Security”.

After eDrive is enabled, you must run a secure erase on the disk. Magician can create a USB or CD drive to boot with, or you can use any other computer. If you get warnings about the drive being "frozen", don’t ignore them! It’s OK to pull the power on the running drive. If you skip the secure erase step, eDrive will not be enabled properly.

Once the disk has been erased, remove the USB key and reboot with your Windows install disk. You must remove the secure erase USB key, or the Windows boot loader will fail (#facepalm). Make sure that you boot with UEFI and not BIOS if your system supports both booting methods. Install Windows like normal. When you get to the drive step, it shouldn’t show any partitions. If it does, you know secure erase didn’t work.

After Windows is installed, install Magician again, and look at the security settings. It should show eDrive as "Enabled". If not, something went wrong and you should secure erase and reinstall again. However, it’s important to note that “Enabled” here does not mean secure. Anyone with physical access to your drive can still read data on it unless you turn on BitLocker in the next step.

Turning on BitLocker

Open up the BitLocker control panel. If you get an error about TPM not being available, you can enable encryption without a TPM by following this How-To Geek article. As an aside, I wonder if there are any motherboards without a TPM that have the proper UEFI support for hardware BitLocker. If not, the presence of a TPM (and SecureBoot) might be an easy way to check compatibility without multiple Windows installs.

Work your way through the BitLocker wizard. The make or break moment is after storing your recovery key. If you’re shown the following screen, you know that your computer isn’t able to support eDrive.

You can still go ahead with software encryption, but you will lose access to certain ATA features like secure erase unless you disable eDrive. If you don’t see this screen, go ahead and turn on BitLocker. It will be enabled instantly, since all it has to do is encrypt the eDrive key with your passphrase or USB key instead of rewriting all data on disk.
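If you want to confirm which mode you ended up with, Windows’ built-in manage-bde tool reports the encryption method in use; with a working eDrive setup it should report hardware encryption rather than a software AES method. From an elevated command prompt:

C:\> manage-bde -status C: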

Turning off eDrive

Did you see that warning earlier about being unable to turn off eDrive? Samsung in particular hasn’t publicly released a tool to disable eDrive. To disable eDrive, you need physical access to the drive so you can use the PSID printed on the label. You are supposed to use a manufacturer-supplied tool where you enter this number, and it will disable eDrive and erase any data. I can’t see any reason to limit access to these tools, given that you need physical access to the disk. There’s also a Free Software implementation of these standards, so it’s not like the API is hidden. The Samsung PSID Revert tool is out there thanks to a Lenovo customer support leak (hah!), but I can’t link to it here. Samsung won’t provide the tool directly, and requires drives to be RMA’ed instead.

For this, I’m going to use open-source Self Encrypting Drive tools. I had to manually download the 2010 and 2015 VC++ redistributables for it to work. You can actually run it from within a running system, which leads to hilarious Windows-crashing results.

C:\Users\andre\msed> msed --scan
C:\Users\andre\msed> msed --yesIreallywanttoERASEALLmydatausingthePSID <YOURPSID> \\.\PhysicalDrive?

At this stage, your drive is in the "Ready" state and still has eDrive enabled. If you install Windows now, eDrive will be re-enabled automatically. Instead, use another Windows installation with Magician to disable eDrive. You can now install Windows as if you’ve never used eDrive in the first place.

Quick Benchmarks

After all this, I decided to run with software encryption anyways, just like I do on my MacBook with FileVault. On an i5-2500K, 8GB of RAM, with the aforementioned Samsung 850 EVO:

[Benchmark screenshots: before turning on BitLocker, after BitLocker, and after enabling RAPID in Magician]

RAPID is a Samsung-provided disk filter that aggressively caches disk accesses to RAM, at the cost of increased risk of data loss during a crash or power failure.

As you can see, enabling RAPID (6+ GB a second!) more than makes up for the slight IO performance hit with BitLocker. There’s a possible CPU performance impact using BitLocker as well, but in practice with Intel’s AES crypto extensions I haven’t seen much of an impact on CPU use.

A common question about BitLocker performance is whether there is any sort of impact on the TRIM command used to maintain SSD performance. Since BitLocker runs at the operating system level, as long as you are using NTFS, TRIM commands are properly passed through to the drive.

In Closing

I think it’s fair to say that if you want robust and fast SSD encryption on Windows, it’s easiest to buy a system pre-built with support for it. In a build-your-own scenario, you still need at least two Windows installations to configure eDrive. Luckily Windows 10 installs are pretty quick (10-20 minutes on my machine), but it’s still more effort than it should be. It’s a shame MacBooks don’t have support for any of this yet. Linux support is functional for basic use, with a new release coming out as I write. Otherwise, falling back to software encryption like regular BitLocker or FileVault 2 is certainly the best solution today.

Header photo is a Ford Anglia Race Car, photographed by Kieran White

The Accidental Project Manager: Risk Management

Meet Jill. Jill is a web developer. She likes her job. However, Jill can see some additional tasks on her project that need doing. They're things like planning, prioritization, and communication with her boss and with her client about when the project will be done.

So Jill takes some initiative and handles those tasks. She learns a little about spreadsheets along the way, and her boss notices she's pretty good with clients.

This happens a few times more, and suddenly Jill is asked to manage the next project she's assigned to. A little time goes by and she's got a different job title and different expectations to go along with it.

This is a pretty common story. As a designer or developer, and especially as a freelancer, you spot things that need doing. Before you know it, you're doing a different job, and you've never had an ounce of training. The best you can do is read a few blog posts about Scrum, and off you go!

Getting up to speed

As an accidental project manager (PM), you’ll have to exercise a different set of muscles to succeed in your new role. There are tricks and tactics that you’ll need that don’t always come naturally.

For example, most of my early PM failures came about because I thought and acted like a developer who was out to solve problems for the client. That’s a great perspective, but it has to be tempered with regard for the scope of the project. A PM needs to be able to step back, recognize an idea’s worth, and still say “We can’t do that” for reasons of time or budget.

It can take time to build those new ways of thinking. Even worse, accidental project managers don't benefit much from the training that's available. I’ve found little value in the certifications and large volumes of information that exist for PMs—the Project Management Body of Knowledge (PMBOK) and the related Project Management Professional (PMP) certification are a prime example, but there are other certifications as well.

It’s really a question of scale: As an accidental project manager, a certification like the PMP is a terrible fit for where you're at, because you manage small-to-medium digital projects. Unless you’re building a skyscraper, you don't need 47 different processes and 71 distinct documents to manage things—you just need the right tools, applied in the right way.

In order to help fill that knowledge gap for accidental project managers, I’ve decided to start a series. In the future, I plan to post about scope control, measuring and reporting progress, and other topics that are important to PMs, but for now we’re going to cover just one: Risk Management.

Defining Risk

If you were looking at the PMBOK, you'd see risk defined like this:

an uncertain event or condition that, if it occurs, has a positive or negative effect on one or more project objectives such as scope, schedule, cost, or quality.

I fell asleep just pasting that into this post... I can hear you snoring as well, so we'll need a better definition. How about this:

Risk is a way of talking about uncertainty in your project—things that could affect your project that you're not sure about.

I like that better – it's more human, at the very least.

How to manage risks
  • Identify uncertainties, especially at kickoff or in transitions between project phases
  • Assess those risks for possible impact, likelihood, and relative priority
  • Plan responses to the risks that you feel are likely or high-priority
  • Monitor and address risks as the project progresses

Essentially, you should be brainstorming with the following questions:

  • What am I worried about? (risk identification)
  • Why am I worried about that? (possible impact)
  • How worried am I that this could happen? (likelihood/priority)
  • What can I control about this? (mitigation strategies)
  • What should I do if this thing actually happens? (response planning)

But that doesn't feel good...

You've nailed it. All those questions are great to ask – and better yet, to talk through with your team and your client. The tricky part is that all of this requires honesty. You have to be honest with yourself about what could happen, and that may lead you to realize some difficult things, possibly about yourself, the team you're working with, or the client you're working for. I'm going to go out on a limb and say that most project problems are not technical.

Brainstorming and being honest with yourself might lead you to truths like "I don't trust my client to deal with me honestly" or "My development team is unreliable." That's hard stuff, and requires delicate handling.

In the end, the underlying reasons for risks are the reasons your project will fail, so you want to handle them as openly as possible. Letting that stuff go unattended is bad news. Get it out where you can do something about it.

Getting risk management into your routine

Risks can be identified throughout a project, but it’s often best to try and spot them right away during a kickoff. Any time you or your team is uncomfortable with something—a development task, a stakeholder, or whatever it may be—you should write it down. Even if you're not sure what to write down exactly, it's worthwhile to discuss with the team and see if you can spot what’s making you uncomfortable.

As the project progresses, you’ll want to communicate with the client each week about risks you’re seeing, make plans to address them, and highlight items that are becoming more likely from week to week. That might take the form of a brief meeting, or an email summary calling out important points in a shared document that the client has access to.

However you communicate risks to your client, you'll want to track them someplace. You can do it in a simple list of items to discuss together or with your client, or you can use a more formal document called a risk log.

The Risk Log

You can track risks anywhere you want, but if you're feeling formal, it's common to keep them in a log that you can update and refer to throughout the project.

One thing to know is that risks are often stated in an ‘event > cause > consequence’ format. The format is loosely structured like this:

“There is a risk that …{something will happen}… caused by …{some condition}… resulting in …{some outcome}.”

Then for each risk, the log lets you:

  • track scores for likelihood and probable impact (on a 1-5 scale)
  • identify warning signs or dates for when action is needed
  • add action plans for each risk
  • assign a responsible person

I've created a sample risk log for you to copy and modify as needed. If you don't think your project needs this kind of formality, you can keep a simple running list of items to raise and discuss with the team. Nevertheless, the slightly more formal version is useful for seeing what kinds of tracking and tactics are available for the risks your project is facing.
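
If you're more at home in code than in spreadsheets, the same structure is easy to sketch out. This is purely illustrative (the field names are mine, and the likelihood × impact score is a common convention rather than a standard):

# Illustrative sketch of a risk log entry using the 1-5 scales above.
from dataclasses import dataclass

@dataclass
class Risk:
    statement: str   # "There is a risk that ... caused by ... resulting in ..."
    likelihood: int  # 1 (unlikely) to 5 (almost certain)
    impact: int      # 1 (minor) to 5 (project-threatening)
    warning_signs: str = ""
    action_plan: str = ""
    owner: str = ""

    @property
    def score(self) -> int:
        # A common convention: rank risks by likelihood x impact.
        return self.likelihood * self.impact

risks = [
    Risk("There is a risk that launch slips, caused by an unfamiliar payment "
         "API, resulting in an extra sprint of integration work",
         likelihood=3, impact=4, owner="Jill"),
]
for risk in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"[{risk.score:>2}] {risk.statement} (owner: {risk.owner})")

Sorting by score gives you a quick "what should we talk about first" list for your weekly check-in.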

Really comprehensive risk logs can also contain things like financial cost tracking, but that's rarely been a useful measure in my experience. Even big web projects rarely need that kind of tracking.

Common Risk Areas

Physicist Niels Bohr said: "An expert is a man who has made all the mistakes which can be made, in a narrow field." As an accidental project manager, you may lack the rich history of failures that might help you spot an impending risk. To help you figure out what risks you should be managing, here are a bunch of places to look:

When planning your project, watch out when:
  • Tasks exist that rely on the completion of other work before they can begin
  • Tasks exist that none of the project team has ever done before
  • You are required to use unfamiliar technologies
  • Tasks involve third parties or external vendors
  • Multiple systems have to interact, as in the case of API integration or data migration
  • Key decision makers are not available
  • Decisions involve more than one department/team
  • Resources/staff exist that are outside your direct control
  • You have to plan a task based on assumption rather than fact
When interacting with people, watch out for:
  • People who are worried about losing their job
  • People who will require retraining
  • People who may be moved to a different department/team
  • People who are being forced to commit resources to the project unwillingly
  • People who fear loss of control over a function or resources
  • People forced to do their job in a different way than they're used to
  • People who are handling new or additional job functions (perhaps that's you?)
  • People who will have to use a new technology

Many of the people-oriented risks in the list above fall under the heading of ‘Change is hard’. This is especially true with folks on a client or external team where a new project introduces changes to the way they do business. If you're providing services to a client, you may bear some of the brunt of those change-related stressors.

Change-related risks aren’t limited to people outside your team; sometimes the developers, designers, or other folks working directly with you have the same concerns. Wherever you find it, risk management is often about taking good care of people in the middle of change.

Organizational risk

It’s also worth mentioning that risks can arise from organizational change outside your immediate control. Corporate restructuring and changes in leadership can impact your project, and politics between different business units are frequently a source of challenges.

There may not be a ton you can do about these kinds of risks, but identifying them and understanding how they affect your project can make a big difference in how you communicate about things.

So, accidental project manager, go forth and manage your risk! Take some time and think through what you’re uncomfortable or worried about. Be circumspect, ask yourself a lot of ‘why’ questions, and then communicate about it. You’ll be glad you did.
