
5 Things You Can Do to Save Your Sanity When Updating to Photoshop & Illustrator 2015

After updating to Adobe CC 2015, there were some small changes that drove me completely nuts, especially in Photoshop and Illustrator. I spent an ungodly amount of time combing through preferences and tool settings (I even watched a video to relearn how to use the crop tool in Photoshop), and I think I finally have Photoshop and Illustrator set to the point where I can actually use them without pulling out my hair. In this article I provide some tips that hopefully spare you some frustration when using Photoshop or Illustrator CC 2015 for the first time.

Disabling the animated zoom functionality in Illustrator

Adobe’s new animated zoom feature is turned on by default in Adobe CC 2015. Luckily, they provide a way to turn it off. If you’re like me and find the animated zoom annoying, choppy, or, worse, slowing down your computer, then I would recommend disabling it. You can find the option in Preferences under GPU Performance.

You can find GPU Performance under the Illustrator CC Preferences menu. Uncheck the Animated Zoom option.

Disabling the GPU performance in Illustrator

If you’re experiencing issues with graphics and typography displaying in Illustrator, it could be related to the GPU performance. When you launch Illustrator for the first time, Illustrator will recommend that you turn this on if you have a compatible graphics card. Not knowing what it was at the time or what it did, I chose to follow Adobe’s recommendation and turned this on. After all, Adobe told me that I did have a compatible graphics card. This is what I saw when I opened my first file after the update.

The above is not what I expected to happen. After restarting Illustrator a couple of times and then restarting my computer without success, I realized that it might be a preference setting in Illustrator that had gone rogue. I stumbled across the GPU Performance option and immediately realized that this was probably the culprit. I unchecked it and voilà, issue solved.

One thing to note is that when you disable the GPU performance, you also disable the animated zoom. Yes! I love killing two birds with one stone. My issue was pretty severe, but there are other smaller graphics problems that can appear due to GPU performance including random lines appearing and disappearing when scrolling down a page or while using the hand tool to “swim” through designs. My suggestion is that if you’re experiencing any weird graphic anomalies, try turning off the GPU performance to see if the problem is resolved. You can find the option under Preferences.

You can find GPU Performance under the Illustrator CC Preferences menu item. Uncheck the GPU Performance option to disable it. This will also disable the Animated Zoom option.

Enabling the resizing handles when clicking on an object in Illustrator

By default, Illustrator disables the resizing handles that appear when you click on an object on the artboard. I’d love to understand the data behind this decision, because most designers I know use the bounding box handles to resize objects more than the actual transform tool. Good thing you can re-enable these fairly quickly with a shortcut or a menu item. Just press command+shift+B to toggle the bounding box, or use the Show Bounding Box option under the View menu.

You can find the Show/Hide bounding box option under the View menu.

Enabling the classic crop setting when cropping in Adobe Photoshop

After upgrading Photoshop, I experienced some new issues with the crop tool, especially when trying to crop artboards. Moving and rotating the crop on artboards seemed unnatural, and I found that sometimes when resizing the crop, I would also unintentionally rotate the canvas. I even watched a video to relearn how to use the crop tool. In this video they revealed the classic crop option, which I’ve been using ever since. The classic crop option has been around since the release of Adobe CC but I had completely forgotten the option was available, probably because it’s hidden away behind a nondescript settings icon that can be difficult to find. If the default crop tool drives you crazy and makes you want to throw stuff around the room, then I highly recommend using the classic crop tool option. It’s much more familiar and predictable in how it works.

To enable classic crop mode, click on the crop tool, and then click on the gear icon that appears along the top of the screen. Add a checkmark next to the Use Classic Mode option. I also enabled the Enable Crop Shield option to help with accuracy when cropping.

When using the crop tool, you can enable Classic Mode by clicking on the gear icon that appears along the top and adding a checkmark next to Use Classic Mode. To increase accuracy when cropping, I would also suggest turning on Enable Crop Shield.

Utilize the option to export individual layers in Photoshop

The Save for Web option has been labeled as legacy and seems to be getting replaced by a general Export option. The shortcut command+option+shift+s still works for Save for Web, and there’s still a menu item for it, but I’ve embraced the new export option, mostly because you can export individual layers. The option to export individual layers instead of using the slice tool has improved my workflow, and I’ve found that in some cases I can export assets much more quickly without worrying about creating multiple layers of slices to extract specific graphics from a design.

To export individual layers, select the layers you want to export in the layer panel and right click to reveal the menu. Choose the Export As option. To export multiple layers at once, shift+select or command+select the layers you want to export and then right click to reveal the menu. You can also select Quick Export as PNG if you don’t want to customize any of the settings before exporting.

To export assets on single or multiple layers, right click the selected layers in the layers panel to reveal the Export As option.

There’s still more to learn

Learning the ins and outs of Illustrator and Photoshop CC 2015 is an ongoing process, and I’m sure I’ll continue to find better solutions for utilizing tools, settings, and preferences to their fullest. These were just a few that I’ve found help me the most. I suggest visiting Adobe’s Learn & Support site for more in-depth tutorials and to learn more about what’s new in Adobe CC 2015. If you have a suggestion on a tool setting, preference, or something that has helped streamline your process when working in Photoshop or Illustrator CC 2015, we’d love to hear it!

Processing forms in React

React is a JavaScript presentation library created by Facebook and used at Airbnb, Instagram, Netflix, and PayPal, among others. It is structured around components, where each component is a class that contains everything it needs for its rendering. React is also isomorphic, meaning that its code can be executed by both the server and the browser.

This article gives an overview of how our contact form works. In order to explain it, we have created a GitHub repository and a live example which renders the form and processes the submission. For clarity, we have skipped some aspects of real applications such as a routing system, client and server side form validation, and email submission.

React is not a framework. Therefore, we need a few extra technologies to power the form. Here is a description of each of them:

  • Express: a Node.js web application framework. It listens to requests for our application and returns a response.
  • JADE: a templating engine widely used within the Node.js community. It is used to render the main HTML of the application.
  • Grunt: a JavaScript task runner. We use it to run two tasks: transform React's JSX syntax into JavaScript through Babel and then package these files into a single JavaScript file through Browserify.

In the following sections we will open the form, fill it out, submit it and view the response. During this process, we will explain what happens in the browser and in the server on each interaction.

Bootstrapping and rendering the form

We start by opening the form located at https://react-form.herokuapp.com with a web browser. Here is the response:

And here is what happened in the web server in order to render the response:

  1. Our Express application received a request and found a match at the following rule:

// Returns the contact form.
app.get('/', function (req, res) {
  var ContactForm = React.renderToString(ContactFormFactory());
  res.render('index', { Content: ContactForm });
});

  2. The rule above rendered the ContactForm component into a variable and passed it to the index.jade template, which has the following contents:

html
  head
    title!= React form example | Lullabot
    script(src='/js/react.js')
    link(href='https://some-path/bootstrap.min.css', rel='stylesheet')
  body.container
    #container.container!= Content
    script(src='/build/bundle.js')

  3. Express injected the form into the Content variable of the above template and then returned a complete HTML page back to the browser. At this point, we saw the form in the web browser.

  4. The web browser completed receiving /build/bundle.js and executed the following code contained there:

var React = require('react');
var ContactForm = require('./contact-form.jsx');

React.render(<ContactForm />, document.getElementById('container'));

The above snippet loaded the ContactForm component and rendered it. We already did this server side, but we need to do it again client side in order to attach listeners to DOM events such as the form submission. This is why React is isomorphic: its code can be processed client side and server side, which solves the blank screen effect of Single Page Applications. In this example, at step 3 we received the whole form from the server instead of a blank screen. Isn’t React neat?

Filling out the form and submitting it

We will use this snippet to fill out the form. Here is a screenshot of us running it in Chrome's Developer Tools:

Next, we will submit the form by clicking on the button at the bottom of the page. This will call a method in the ContactForm React component, which listens to the form submission event through the following code:

<form action="" onSubmit={this.handleSubmit}>

If you worked on web development a few years ago, then the above syntax may seem familiar. The old way of attaching handlers to DOM events was via HTML event attributes. This was straightforward but it had disadvantages such as polluting the HTML with JavaScript. Later on, Unobtrusive JavaScript became the standard so websites would separate HTML from JavaScript by attaching event listeners through jQuery.bind(). However, on large web applications it became difficult to find which callbacks were listening to a particular piece of HTML. React joins the best of both strategies because a) it lets us write event handlers in HTML event attributes and b) when the JSX code is transformed to JavaScript, it is taken out of the HTML and moved to a single event listener. React's approach is clear for developers and efficient for the web browser.

Updating the component's status

When we click on the submit button, the form will first show a Sending message and then, once we receive a response from the web server, it will update the message accordingly. We achieve this in React by chaining statuses. The following method makes the first state change:

handleSubmit: function (event) {
  event.preventDefault();
  document.getElementById('heading').scrollIntoView();
  this.setState({
    type: 'info',
    message: 'Sending...'
  }, this.sendFormData);
},

In React, the method this.setState() updates a component’s state and renders it again with the new values. Therefore, the following code in the render() method of the ContactForm component will behave differently than the first time we called it at the beginning of the article:

render: function() {
  if (this.state.type && this.state.message) {
    var classString = 'alert alert-' + this.state.type;
    var status = <div id="status" className={classString} ref="status">
      {this.state.message}
    </div>;
  }
  return (
    <div>
      <h1 id="heading">React contact form example: Tell us about your project</h1>
      {status}

When we rendered the form on server side, we did not set any state properties so React did not show a status message at the top of the form. Now we have, so React prints the following message:

Sending the form data and rendering a response

As soon as React shows the above Sending message on screen, it will call the method that will send the form data to the server: this.sendFormData(). We defined this transition in the following code:

this.setState({ type: 'info', message: 'Sending...' }, this.sendFormData);

This is how you chain statuses in React. In order to show a Sending status and then submit the form data, we provide an object with the new state properties for the render() method, plus a callback to be executed once it finishes rendering. Here is a simplified version of the callback that sends the form data to the web server:

sendFormData: function () {
  // Fetch form values.
  var formData = {
    budget: React.findDOMNode(this.refs.budget).value,
    company: React.findDOMNode(this.refs.company).value,
    email: React.findDOMNode(this.refs.email).value
  };

  // Send the form data.
  var xmlhttp = new XMLHttpRequest();
  var _this = this;
  xmlhttp.onreadystatechange = function() {
    if (xmlhttp.readyState === 4) {
      var response = JSON.parse(xmlhttp.responseText);
      if (xmlhttp.status === 200 && response.status === 'OK') {
        _this.setState({
          type: 'success',
          message: 'We have received your message and will get in touch shortly. Thanks!'
        });
      } else {
        _this.setState({
          type: 'danger',
          message: 'Sorry, there has been an error. Please try again later or send us an email at info@example.com.'
        });
      }
    }
  };
  xmlhttp.open('POST', 'send', true);
  xmlhttp.setRequestHeader('Content-type', 'application/x-www-form-urlencoded');
  xmlhttp.send(this.requestBuildQueryString(formData));
},

The above code fetches the form values and then submits them. Depending on the response data, it shows a success or failure message by updating the component’s state through this.setState(). Here is what we see in the web browser:

You may be surprised that we did not use jQuery to make the request. We don't need it. The native XMLHttpRequest object is available in the set of browsers that we support at Lullabot.com and has everything that we need to make requests client side.

The following code in the Express application handles the form submission by returning a successful response:

// Processes the form submission.
app.post('/send', function (req, res) {
  return res.send({status: 'OK'});
});

In the real contact form at https://www.lullabot.com/contact we grab the form data and send an email. If you are curious about how this works, you can find an example snippet at the repository.

Conclusion

Clarity and simplicity are the adjectives that come to mind when we think of React. Our ContactForm component has a few custom methods and relies on some of React's API methods to render and process the form. The key to React is that every time that we want to provide feedback to the user, we set the state of the component with new values, which causes the component (and the components within) to render again.

React solves some of the challenges that we have experienced in front-end web applications when working with other technologies. The ability to render the result of the first request on the server is mind-blowing. We also like its declarative syntax, which makes it easy for new team members to understand a given component. Finally, its efficient event system saves us from attaching too many event listeners in a page.

Did this article spark your curiosity? Go and try the live example and fork the repository if you want to dive deeper into it. At Lullabot, we are very excited about React and are looking forward to your feedback.

A PHP Developer’s Guide to Caching Data in Drupal 7

If there’s one thing in programming that drives me up the wall, it’s patterns that I use once every few months, such that I almost remember what to do but inevitably forget some key detail. Lately, that has been when I’ve needed to cache data from remote web services. I end up searching for A Beginner's Guide to Caching Data in Drupal 7 and checking its examples against my code. That’s no fun at all.

After some searching for a different project, I found the Drupal Doctrine Cache project and thought "what if I could chain the static and Drupal cache calls automatically?" - and of course, it’s already done with Doctrine’s ChainCache class. ChainCache gives us a consistent API for all of the usual cache operations. The class takes an array of CacheProvider classes that can be used to cache data. When fetching data, it goes through them in order until it finds the object you’re looking for. As a developer, caching data in memory in a static cache is no different than caching it in MySQL, Redis, or anything else. On top of that, ChainCache handles saving and deleting entries through the entire chain automatically. If you update the database (and invalidate your cached data), you can clear the static and persistent caches with a simple $cache->delete(). In fact, as someone using the cache object directly, you might not even know that a static cache exists! For example, the ChainCache could be updated to also persist data in a local APC cache. Or, the persistent Drupal cache could be removed if it turned out not to improve performance. Calling code doesn't need to have any knowledge of these changes. All that matters is you can reliably save, fetch, and delete cached data with a consistent interface.
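
As a minimal sketch of that behavior (it chains two of Doctrine’s ArrayCache instances so it runs standalone; in a Drupal module the second link would be the DrupalDoctrineCache wrapper shown later in this article):

use Doctrine\Common\Cache\ArrayCache;
use Doctrine\Common\Cache\ChainCache;

// A static (in-memory) cache chained in front of a "persistent" one.
$cache = new ChainCache([new ArrayCache(), new ArrayCache()]);

// save() writes the entry to every link in the chain.
$cache->save('my_module_data', 'expensive result');

// fetch() returns the entry from the first link that has it.
$my_data = $cache->fetch('my_module_data');

// After updating the database, a single delete() invalidates the whole chain.
$cache->delete('my_module_data');
$cache->contains('my_module_data'); // FALSE in every link.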

What does all this mean? If you’re already using Composer in your Drupal projects, you can easily use these classes to simplify any of your caching code. If you’re not using Composer, this makes a great (and simple) example of how you can start to use modern PHP libraries in your existing Drupal 7 project. Let’s see how this works.

Adding Drupal Doctrine Cache with Composer

The first step is to set up your module so that it requires the Drupal Doctrine Cache library. For modules that get posted on drupal.org, I like to use Composer Manager since it will handle managing Composer libraries when different contributed modules are all using Composer on the same site. Here are the steps to set it up:

  1. Install Composer if you haven’t installed it yet.
  2. Create a Drupal module with an info file and a module file (I’ve put an example module in a sandbox).
  3. In the info file, depend on Composer Manager: dependencies[] = composer_manager
  4. Open up a terminal, and change to the module directory.
  5. Run composer init to create your initial composer.json file. For the package name, use drupal/my_module_name.
  6. When you get to the step to define dependencies (you can modify them later), add capgemini/drupal_doctrine_cache to require the library. You can add it later by editing composer.json or using composer require.
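
After step 6, the generated composer.json should look roughly like this (a sketch only; the version constraint is left open here, so pin it to a real release in practice):

{
  "name": "drupal/my_module_name",
  "require": {
    "capgemini/drupal_doctrine_cache": "*"
  }
}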

When you enable your module with Drush, Composer Manager will download the library automatically and put it in the vendor folder. For site implementations, it’s worth reading the Composer Manager documentation to learn how to configure folder paths and so on.

Using the CacheProvider for a Static and Persistent Cache

We’re now at the point where we can use all of the classes provided by the Drupal Doctrine Cache library and Doctrine Cache in our module. In a previous implementation, we might have had caching code like this:

function my_module_function() {
  $my_data = &drupal_static(__FUNCTION__);
  if (!isset($my_data)) {
    if ($cache = cache_get('my_module_data')) {
      $my_data = $cache->data;
    }
    else {
      // Do your expensive calculations here, and populate $my_data
      // with the correct stuff.
      cache_set('my_module_data', $my_data);
    }
  }
  return $my_data;
}

We can now replace this code with the ChainCache class. While this is nearly the same amount of code as the previous version, I find it much easier to read and understand. One less level of nested if statements makes the code easier to debug. Best of all, to a junior or non-Drupal PHP developer, this code doesn’t contain any "Drupal magic" like drupal_static().

function my_module_function() {
  // We want this to be static so the ArrayCache() isn’t recreated on each function
  // call. In OOP code, make this a static class variable.
  static $cache;
  if (!$cache) {
    $cache = new ChainCache([new ArrayCache(), new DrupalDoctrineCache()]);
  }

  if ($cache->contains('my_module_data')) {
    $my_data = $cache->fetch('my_module_data');
  }
  else {
    // Do your expensive calculations here.
    $my_data = 'a very hard string to generate.';
    $cache->save('my_module_data', $my_data);
  }

  return $my_data;
}

If the calling code needs to interact with the cache directly, it’s entirely reasonable to return the cache object itself, and document in the @return tag that a CacheProvider is returned. Your specific caching configuration is safe and abstracted, avoiding sporadic drupal_static_reset() calls in your code.
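
As a rough sketch of that idea (the function name is made up, and it assumes the same ArrayCache, DrupalDoctrineCache, and ChainCache classes used in the example above):

/**
 * Returns the cache used for this module's remote data.
 *
 * @return \Doctrine\Common\Cache\CacheProvider
 *   A chained cache; callers don't need to know which backends it wraps.
 */
function my_module_data_cache() {
  static $cache;
  if (!$cache) {
    $cache = new ChainCache([new ArrayCache(), new DrupalDoctrineCache()]);
  }
  return $cache;
}

// Calling code interacts with the CacheProvider interface only.
$my_data = my_module_data_cache()->fetch('my_module_data');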

Interfaces are the Future, and they’re Already Here

Even if you prefer the Drupal-specific version of this code, there’s something really interesting about how Doctrine, a small bit of glue code, and Drupal can now work together. The above is all possible because Doctrine ships a set of interfaces for caching, instead of just creating raw functions or concrete classes. Doctrine is helpful in providing many prebuilt cache implementations, but those can be swapped out for anything - including a thin wrapper around Drupal’s cache functions. You can even go the other way around, and tie Drupal’s cache system into something like Guzzle’s Cache Subscriber to cache HTTP requests in Drupal. By writing our code around interfaces instead of implementations, we let others extend our code in ways that are simply impossible with Drupal-7 style procedural programming. The rest of the PHP community is already operating this way, and Drupal 8 works this way as well.

Do you know about other PHP libraries that are great at working with Drupal 7’s core systems? Share them here by posting a comment below.

MozCon 2015 Recap, Part 2: Remarketing, Communities, and Analytics

In part 1 of the MozCon 2015 recap, I wrote about how Google is slowly killing SEO, and how that has led to a resurgence of on-site strategy and tactics that can help get you more traffic. SEO is dead. Long live SEO!

Talks touching on that topic were the most interesting and exciting of the conference. The other three topics that are worthy of note, however, offered practical advice that can be acted on immediately. Like before, I have given some highlights of the talks followed by my personal takeaways.

Remarketing

Duane Brown talked about delightful remarketing. Those aren’t two words that normally go together, but he made me a believer. This mainly consisted of some best practices to not annoy potential customers.

  • Set up a burn pixel, so you don’t keep marketing to someone who has already bought that product.
  • Look back window - how long will you continue to retarget them? He recommends no more than 3 days. That seems kind of short to me, but it’s something that warrants testing, as I’m sure it depends on target audience and product/service type.

He also mentioned Adwords Customizer, which was something new to me. You can set up a countdown, embedded right in your ad. This becomes more powerful when combined with remarketing.

Cara Harshman talked about online personalization and much of it relates to remarketing. How do you do it without being creepy?

She listed three parts of a framework to use as a guide when personalizing copy:

  1. Who to target
    • Contextual personalization - changing the text to match ad copy.
    • Demographic personalization - this is not just typical stuff like age and gender, but also where they are in the funnel. Enterprise or small business? Pre-sale or post sale? Adroll, for example, gives one a phone number and the other a link to a customer support site.
    • Behavioral personalization - what are they doing, or what are they more likely to do? Past purchasers are more likely to buy again, so perhaps show them higher margin items?
  2. What to show them - don’t get too detailed, or it just gets creepy.
  3. How to prioritize what to implement. Ask these questions:
    • What is the potential business impact?
    • What would be the technical effort to execute?
    • What are the requirements to sustain it?
Action Items for Remarketing
  1. Implement a burn pixel if you haven’t already.
  2. Limit how long you remarket to visitors. Begin some tests to see how long it typically takes customers to convert after the first visit. When there is a massive dropoff, that is probably your limit. Anything longer and you risk causing burnout and ill will toward your brand.
  3. Segment your audience, but don’t slice them too thin. Use the personalization framework to help start the conversation.
  4. Start looking for additional ways to personalize that could have a big impact. Start with simple text changes with an aim to eventually go bigger. What is Code?, for example, gives readers a certificate of completion at the end.
Building and Maintaining Communities

Rich Millington offered some advice on building (or reinvigorating) online communities, even for brands with boring and mundane products. Many go the route of the big launch, which leads to nothing but a quick plummet. It pays off to think smaller and grow more organically. Often, all you need is 150 active members to reach critical mass and become self-sustaining.

Finding Your First Community Members

Where to start? Start small. Like any network, you start with your friends and people you already know. To get them to join, however, you need credibility or a founder who has that credibility. If you don’t have that credibility, don’t waste your time trying to start a community. Build your credibility instead. Create content, host events, interview experts - the typical things you would expect to do if you want to be known as a thought leader.

There are four ways to ensure people want to become involved in your community:

  1. Solve a problem they already know exists - one company targeted toward teachers could never get them engaged. Turns out, teachers just didn’t have the time, so instead they created a community for teachers to swap time-saving tips, and activity exploded.
  2. Seize an opportunity they are aware of - if your product is boring, like washing machines, come up with a new angle. Housework tips and hacks.
  3. Explore a passion they are curious about.
  4. Increase their status among their friends - exclusive clubs are popular for a reason.

Don’t focus so much on aesthetics and polish. Some of the most active communities are ugly and super simple. Just look at Reddit and HackerNews.

Onboarding New Members

Most platforms for communities encourage lurking. How do you break out of that and get people to participate faster? Most communities introduce new members the worst way possible: asking them to introduce themselves in some long thread, and asking them to complete their profile. Boring.

Instead persuade them to share their experience, opinion, or problem. One community sends new members an intro email where the main thrust is this: “Hi, glad you are here. We’d love your opinion on this discussion here:” Every few days, they rotate out the name and link they recommend to keep up with popular and interesting discussions.

Keep these starter questions in mind when onboarding new members:

  1. What are they doing? (Yammer)
  2. What are they thinking? (Facebook)
  3. What have they recently learned?
  4. What do they need help with?
Keeping Members Active and Engaged

After getting new members involved with their first thread, however, you need to keep them engaged. The speed of the response to the first thread is important. If new members get a response in the first 15 minutes to their question or opinion, they have an 87% chance of posting again. That number goes down the longer it stretches out. This is one reason why Ubercart, an ecommerce platform for Drupal, got so popular. The people behind it were very active on their support forums and made sure to answer every question quickly, even if they didn’t know the answer right away.

Gamify participation and show progress. Provide the sense that they are accomplishing something. Communities also need a clear map that members can follow that leads to them getting more autonomy and responsibility. You need moderators you can trust, and you need to show how people can grow into that position.

Finally, encourage friendships. Introduce them to similar people. These relationships they build will be the main reason they stay for the long term.

And whatever you do, don’t use a Facebook page for your community.

Action Items for Building Communities
  1. Do you have proper credibility in your sphere? If not, start making a plan to build that credibility, or make a list of people who could be your founding members and provide that credibility.
  2. Craft a welcome page and email that makes new members feel important.
  3. Make sure you set up a system so you are alerted when new people post their first topic, and make sure those people get a response as soon as possible.
Analytics

Correcting for Dark Traffic

Marshall Simmonds discussed dark search and dark social, the traffic you get that has no referral strings or information. His people work with a vast network of publishers and can pull large amounts of traffic data: 160 sites in 68 categories, totaling 226 billion page views. What they found is that at least 18% of what gets labeled as “direct traffic” is not actually direct traffic. That’s just the bucket analytics providers throw stuff into when they can’t classify it any other way.

How do they know this? By looking closer at some of this so-called “direct traffic.” A lot of the pages had URLs three levels deep. Did someone really type that URL in by hand? They most likely were sent from somewhere. Either:

  • From a secure site
  • From inside an app
  • From incognito or private browsing
  • From somewhere with a new referrer string that isn’t recognized yet

He offered some simple heuristics to discover your dark traffic. First, finding direct visits that were actually sent from social networks.

  1. Aggregate everything classified as direct traffic
  2. Remove your homepage and major section fronts that might be bookmarked from the data set
  3. What you have left is probably dark social
  4. Filter for just new users
  5. Verify links against social campaigns
  6. What you have left is probably dark search

This is important for measuring ROI and having a true sense of where your traffic is coming from. You don’t want to be deceived into thinking something isn’t working.

Software updates also affect these numbers. When iOS/Android puts out an update, sometimes someone forgets to flip the referral string switch or something. Organic traffic drops. They did better with iOS 8, so they are learning. But whenever there are major software updates to an OS and browsers, monitor your analytics for major changes. That spike in direct traffic probably isn’t direct traffic.

Paid Search Informing Organic

Stephanie Wallace makes the argument that paid search and organic teams should not be so siloed. The separation happens because they often have some different goals and different skill set requirements, but there are some good benefits in bringing them together.

Organic search is more of a long-term strategy, and as a result, it's hard to react to results. If you make any changes, you have to wait. And wait. And wait some more. But this is where paid search can be a great partner and fill in these gaps. It specializes in faster results. Some ways you can leverage paid campaigns to help your organic efforts are:

  1. Testing article titles and descriptions. A paid campaign can give you quick results on what title gets the highest CTR. This can be useful for content you have invested heavily in, and want to be sure you don’t sabotage your changes with some weak metadata. Also a good way to test content ideas before you start spinning your wheels.
  2. Identify content gaps that convert. Get a list of your content that was part of a conversion funnel, part of the path that turned a visitor into a customer. Add a secondary dimension of Paid Search to discover which keywords led people through that content. Do you have anything ranking for those keywords? If not, you now have a targeted list to focus your efforts on, one that you know has a high chance of increasing revenue.
  3. Conversion optimization. This may seem obvious, but your paid search landing pages aren’t the only pages that need optimization. Use a paid campaign to build up some data quickly for some of your organic landing pages, and act accordingly.
Beyond the Pageview

Adrian Vender argues that pageviews are not enough. We’re not good at tracking what the user is doing or how they are interacting with the content. A pageview does not equal a “success.” But event tracking is difficult and requires lots of JavaScript. And then when you’re tracking all this data, it goes into a big black hole where reports are confusing and hard to digest. So how do we do better?

First, use Google Tag Manager (or Tealium) to manage all the javascript. You won’t have to depend on your IT department or a developer to place your new code on the right page.

Second, start tracking more events. Don’t just track the pageview, but also fire an event when someone gets done reading the page. Fire an event for important navigation elements. Track outbound URLs. Track video views and progress. For really important interactions, you can even define user segments and build reports for just those users who, for example, clicked on that call-to-action button in your sidebar. Wouldn’t it be great to quantify that 20% of the people who read one particular article to the very end signed up for your email list?

And finally, learn more about your analytics package. You’re going to need to stretch your muscles to deal with all of this new data. He offers a good list to get you started if you’re using Google Analytics.

Action Items for Analytics
  1. Quantify your real direct traffic. Try and measure your dark traffic to ensure your decisions are made with more accurate data, and put measures in place to watch for sudden spikes in “direct traffic” whenever there are major software updates.
  2. If you run paid campaigns, use the paid keyword data to identify valuable content gaps that you aren’t ranking for.
  3. For cornerstone content, consider using paid campaigns to test headlines and descriptions for maximum impact.
  4. Implement Google Tag Manager, or an equivalent. Start tracking events.
  5. Segment your visitors based on critical interactions.
  6. Learn more about Google Analytics event tracking and building reports. Create shortcuts and custom dashboards for quick reference.
Wrapping Up

The more things change, the more they stay the same. The importance of on-site SEO has come back in a big way, and will continue to grow. The influence of mobile cannot be overlooked. Google is disrupting itself, accepting large cuts in revenue (mobile ad clicks are worth far less than desktop clicks), to cater to mobile users. Smart marketers will pay attention.

That being said, Google has been wrong before, and they aren’t afraid to disseminate misinformation to accomplish an objective. For example, no one really saw a gain in traffic after switching to HTTPS, even though we were assured by Google that it would be a ranking signal. So be aware. Take things with a grain of salt. Do your own testing and research.

I’ll end with a final reading recommendation that will help you start to think more about the future. With Apple and Google set to release search APIs for their respective mobile operating systems, where apps can potentially serve their deep content to a device’s native search results, the target for marketers may be shifting faster than we think. So go read Emily Grossman’s article titled App Indexing & The New Frontier of SEO, and start preparing for new ways to reach people.

Pausing the Drupalize.Me Podcast

This week, instead of a regular podcast episode, we have a very short update about the podcast itself. We will not be publishing any more episodes for the rest of 2015. We’ve had a great run over the last three years, after reviving the old Lullabot podcast as the new Drupalize.Me podcast. We’re taking a pause now so that we can take a step back and rethink the podcast. We’d like to reboot it as a more focused and useful tool for our listeners. If you have ideas or thoughts about what would make an amazing podcast that we can provide, please do let us know! You can look for the new podcast to return to the scene in early 2016.

Why Web Designers Should Moodboard

Moodboarding is a quick, lean way to document inspiration as you research a design project, and a great way to revisit this research phase as you progress further and further into the execution. Traditionally speaking, moodboards have served as a tried and true design artifact, assembled like a collage, full of inspirational imagery for a variety of designers. But interactive designers without this traditional background might overlook moodboards as the handy deliverable that they really are.

Moodwhatting?

What am I talking about? The moodboard is not so much a digital client deliverable as it is an artifact of the design process. But I’ve found it to be such a useful internal tool that it is still a must-have in any process with market research.

A sample from a recent Lullabot moodboard; it incorporates inspiration across many disciplines, to help describe the tone we want to achieve with our brand.

In its most basic form, a moodboard is a layout you can assemble with any inspiration you wish to keep referencing as you work, which can be enhanced with hierarchy, notes, and other obsessive details. In a more modern design workflow, moodboards might fit in right between research (after you know the audience you're designing for) and style tiles (before you get designing).

A Disclaimer

I feel the need to add a little disclaimer here, about what our intentions are, or more specifically, aren’t. Our intention when collecting inspiration is never to blatantly copy others. This is why I love examples that are way beyond the realm of web design, because it’s so much easier to see the benefits and get away from this temptation. The goal with inspirational moodboards is to capture a particular brand essence (or positioning, or tone, or what have you), first to illustrate a concept you wish to achieve, and then to challenge you with the inspiring work that others have done.

Baby’s First Board

My first legitimate moodboard was during a college internship, after taking on a student project to redesign a particular rum brand. One of the first steps was creating a literal, well, board, by cutting and pasting magazine art onto a foam core board. It was pretty old school. This process was tedious, but still had its moments. Coworkers that would stop by the intern’s pen to drop off grunt work would find themselves lured in by the eye candy on the board, which set the stage for us to pitch our branding progress and get feedback. It also made refining my design direction easier. Keep in mind, that was quite some time ago, and I was not as experienced at communicating design intent back then. So instead, I had this visual artifact I could literally point at.

Online Moodboarding Today

Now, there are much better ways for UX designers to moodboard, and faster. The Lullabot designers utilize a Chrome extension, Creonomy Board, that screenshots sites with a variety of options, all of which help to speed up this process. When taking the screenshots, you can use keyboard shortcuts or the extension itself to select a particular image, crop to a selected area, save the visible part of the page, or capture the entire page. Or you can simply drag and drop already-saved images to populate your board.

Once you have your screen capture, you can create different boards for particular buckets, star your favorites and filter to see just those, and assign comments and tags. But my favorite feature is how Creonomy Board automatically saves the link back to where you took the screenshot. This is great when you want to dig a little deeper or refer back to the context of the inspiration for any reason, like to see an interaction on a live site.

Creonomy Board’s Chrome extension provides several options for quickly capturing website inspiration.

All that being said, using this extension is simply a personal preference. I’ve heard of designers having great success using other tools that automate and organize this type of collecting. Don’t forget about Pinterest, Pocket, and even Evernote as you define your own process around moodboarding.

See other tools the Lullabot design team uses in their UX process with 8 Tools for a Leaner Design Research Process.

It’s really not about the app you use, which is why I don’t want to get too far into the details of any particular product. We utilize Creonomy Board to help us speed up and even automate the process of cataloging inspiration, so that the team can focus on the exciting, creative aspect of the inspiration itself. Enough about how, let’s explore why, and get you thinking about all the ways you can use it while creating a website.

Explore Aesthetics

I think the ways to apply a moodboard to your process boil down to two simple approaches. The first is possibly the more traditional or expected, which is to gather inspiration and present a visual direction. It’s a great way to begin style exploration, to pull what others are doing and visualize brand values, before getting into your own sketches or style tiles.

It’s important to force yourself to get outside of the UI box here. It’s an easy habit to get into, to hunt for polished pixels on Dribbble, but we need to push ourselves further. You can find thoughtful typography looking at print design, and discover beautiful color palettes in fashion design, for example. Think about traditional art, fashion design, printed media like posters, books, and packaging, and anything else that might be atypical to a web designer’s feed. You can go one step further by creating and illustrating metaphors that describe your brand or website goals (e.g., "our site should have the sophistication of an Eames recliner, but still be fun and witty, like John Cleese").

The last time I was in New York, I found myself engrossed in the gorgeous window designs of the many fashion retail stores. In Soho, I stopped to take pictures of those that I found most interesting. Weeks later, I ended up pulling a color palette from a Lacoste window, and for a client in a completely different industry. You never know where your next idea might come from, so document anything you discover that gets your creative gears turning, lest you regret it later. I have a few moodboards that are random stuff like this (Random Street Inspiration, Random Site Inspiration, etc), for me to pull from later.

Inspiration can come from unexpected places. This fashion retail window display inspired a nautical branding project.

Set the Tone

If you want to get really fancy, you can refine this brain dump to more closely plan the next phase of styling. Often, after a discussion, things that don’t feel quite right will be removed, and other exploration might come up around something that was particularly interesting.

I think the biggest advantage of a moodboard is that it helps get everyone on the same page before going too far down any direction. You can quickly capture a few examples of the aesthetic and feel you hope to attain, share it with your internal design team or the entire project team, and eliminate–or at least, reduce–miscommunication before committing to a design direction.

Once the first pass of unconventional inspiration is complete, feel free to take another pass that is closer to home. Think about the sorts of things that would be important to include in a style tile, and pull examples that work well together. Things like typography, color, page layouts, photography, even unique interaction elements, are all fair game. If you do pull things for only one very specific reason (like a beautiful color palette, but the type is all wrong), consider how to label or tag your pulls to explain specifically what you wanted to capture.

Collect Research

OK, I’m going a little out of order here as far as a typical design process, but the second application for moodboards has only occurred to me fairly recently, as the Lullabot design team was working together in Creonomy Board.

Lullabot has a pretty fantastic research process–if I do say so myself, after working within processes that didn’t allow for the same time to prepare for the design phase. One big part of this process is conducting market research, to understand the space our client lives in and the competitors they face. This can be especially valuable to see how other companies are solving similar architecture problems, or to see the position their messaging takes, for example. But it’s easy to get lost in a sea of similar companies, and we can find ourselves asking “what was that one site about the product that does the thing?” just weeks later.

As I mentioned, taking a screenshot from a site using the Creonomy Board plugin (rather than simply dropping files in) lets you also capture the web url it came from. This offers a practical application for the moodboard: to quickly and visually catalog a whole mess of sites and their particular pages. It’s so handy to scroll down through the preview thumbnails later and immediately get a sense of the research, but still have the option to dig deeper into their original site.

Talk Amongst Yourselves

There! Now that you see how Lullabot uses moodboards within different parts of their process, I hope you feel empowered to go out and create your own. And I really hope these examples are helpful as you consider your own design process; I’m excited to hear from you. If moodboarding is already a part of your skillset, what tools do you use? How do you use them within your own process?

Write Unit Tests for Your Drupal 7 Code (part 2)

This article is a continuation of Write Unit Tests for Your Drupal 7 Code (part 1), where I wrote about how important it is to have unit tests in your codebase and how you can start writing OOP code in Drupal 7 that can be tested with PHPUnit.

In this article I show you how this can be applied to a real life example.

TL;DR

I encourage you to start testing your code. Here are the most important points of the article:

Dependency Injection and Service Container

Jeremy Miller defines dependency injection as:

[...] In a nutshell, dependency injection just means that a given class or system is no longer responsible for instantiating their own dependencies.

In our MyClass we avoided instantiating CacheController by passing it through the constructor. This is a basic form of dependency injection. According to Martin Fowler:

There are three main styles of dependency injection. The names I'm using for them are Constructor Injection, Setter Injection, and Interface Injection.

As long as you are injecting your dependencies, you will be able to swap those objects out with their test doubles in your unit tests.

An effective way to pass objects into other objects is by using dependency injection via a service container. The service container will be in charge of giving the receiving class all the needed objects. Then, the receiving object will only need to get the service container. In our System Under Test (SUT), the service container will yield the actual objects, while in the unit test domain it will deliver mocked objects. Using a service container can be a little bit confusing at first, or even daunting, but it makes your API more stable and robust.

Using the service container, our example is changed to:

class MyClass implements MyClassInterface {

  // ...

  public function __construct(ContainerInterface $service_container) {
    $this->cacheController = $service_container->get('cache_controller');
    $this->anotherService = $service_container->get('my_services.another_one');
  }

  // ...

  public function myMethod() {
    $cache = $this->cacheController->cacheGet('cache_key');
    // Here starts the logic we want to test.
    // ...
  }

  // ...

}

Note that if you need to use a new service called 'my_services.another_one', the constructor signature remains unchanged. The services need to be declared separately in the service providers.

Dependency injection and service encapsulation are not only useful for mocking purposes, but also help you to encapsulate your components –and services–. Borrowing, again, Jeremy Miller’s words:

Making sure that any new code that depends on undesirable legacy code uses Dependency Injection leaves an easier migration path to eliminate the legacy code later with all new code.

If you encapsulate your legacy dependencies you can ultimately write a new version and swap them out. Just like you do for your tests, but with the new implementation.
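
Here is a minimal sketch of that migration path (all of the names are hypothetical): new code depends only on an interface, the legacy code hides behind a thin wrapper, and a future implementation can replace the wrapper without touching the callers.

interface GeocoderInterface {

  /**
   * Returns latitude and longitude for a postal address.
   */
  public function geocode($address);

}

/**
 * Thin wrapper that encapsulates undesirable legacy code.
 */
class LegacyGeocoder implements GeocoderInterface {

  public function geocode($address) {
    // Delegates to a hypothetical old procedural function.
    return my_module_legacy_geocode($address);
  }

}

/**
 * New code receives the dependency through the constructor, so tests can
 * inject a mock and the legacy wrapper can later be swapped for a rewrite.
 */
class AddressFormatter {

  protected $geocoder;

  public function __construct(GeocoderInterface $geocoder) {
    $this->geocoder = $geocoder;
  }

}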

Just like with almost everything, there are several modules that will help you with these tasks:

  • Registry autoload will help you to structure your object oriented code by giving you autoloading if you follow the PSR-0 or PSR-4 standards.
  • Service container will provide you with a service container, with the added benefit that it is very similar to the one that Drupal 8 will ship with.
  • XAutoload will give you both autoloading and a dependency injection container.

With these strategies, you will write code that can have its dependencies mocked. In the previous article I showed how to use fake classes or dummies for that. Now I want to show you how you can simplify that by using Mockery.

Mock your objects

Mockery is geared towards providing even more flexibility when creating mocks. Mockery is not tied to any test framework, which makes it useful even if you decide to move away from PHPUnit.
In our previous example the test case would be:

// Called from the test case.
$fake_cache_response = (object) array('data' => 1234);

$cache_controller_fake = \Mockery::mock('CacheControllerInterface');
$cache_controller_fake->shouldReceive('cacheGet')->andReturn($fake_cache_response);

$object = new MyClass($cache_controller_fake);
$object->myMethod();

Here, I did not need to write a CacheControllerFake only for our test; I used Mockery instead.
PHPUnit comes with a great mock builder as well. Check its documentation to explore the possibilities. Sometimes you will want to use one or the other depending on how you want to mock your dependency, and the tools both frameworks offer. See the same example using PHPUnit instead of Mockery:

// Called from the test case.
$fake_cache_response = (object) array('data' => 1234);

$cache_controller_fake = $this
  ->getMockBuilder('CacheControllerInterface')
  ->getMock();
$cache_controller_fake->method('cacheGet')->willReturn($fake_cache_response);

$object = new MyClass($cache_controller_fake);
$object->myMethod();

Mocking your dependencies can be hard –but valuable– work. An alternative is to include the real dependency, if it does not break the test runner. The next section explains how to save some time using Drupal Unit Autoload.

Cutting corners

Sometimes writing tests for your code makes you realize that you need to use a class from another Drupal module. The first instinct would be, «no problem, let’s create a mock and inject it in place of the real object». That is a very good approach. However, it can be tedious to create and maintain all these mocks, especially for classes that don’t depend on a bootstrap. That code could just be required in your test case.

Since your unit test can be considered a standalone PHP script that executes some code –and makes some assertions– you could just use the require_once statement. This would include the code that contains the class definitions that your code needs. However, a better way of achieving this is by using Composer’s autoloader. An example composer.json in your tests directory would be:

{ "require-dev": { "phpunit/phpunit": "4.7.*", "mockery/mockery": "0.9.*" }, "autoload": { "psr-0": { "Drupal\\Component\\": "lib/" }, "psr-4": { "Drupal\\typed_entity\\": "../src/" } } }

With the previous example, your unit test script would know how to load any class in the Drupal\Component and Drupal\typed_entity namespaces. This will save you from writing test doubles for classes that you don’t have to mock.

At this point, you will be tempted to add classes from your module’s dependencies. The big problem is that every Drupal module can be installed in a different location, so a simple ../../contrib/modulename will not do. That would only work for your installation, but not for others. This is one of the reasons why I wrote the Drupal Unit Autoload with Christian Lopez (penyaskito). By adding Drupal Unit Autoload to your composer.json you can add references to Drupal core and other contributed modules. The following example speaks for itself:

{ "require-dev": { "phpunit/phpunit": "4.7.*", "mockery/mockery": "0.9.*", "mateu-aguilo-bosch/drupal-unit-autoload": "0.1.*" }, "autoload": { "psr-0": { "Drupal\\Component\\": "lib/", "Symfony\\": ["DRUPAL_CONTRIB<service_container>/lib"] }, "psr-4": { "Drupal\\typed_entity\\": "../src/", "Drupal\\service_container\\": ["DRUPAL_CONTRIB<service_container>/src"] }, "class-location": { "\\DrupalCacheInterface": "DRUPAL_ROOT/includes/cache.inc", "\\ServiceContainer": "DRUPAL_CONTRIB<service_container>/lib/ServiceContainer.php" } } }

We added mateu-aguilo-bosch/drupal-unit-autoload to the testing setup, so we can include Drupal aware autoloading options to our composer.json.

Striving for the green

One of the most interesting possibilities that PHPUnit offers is code coverage. Code coverage is a measure used to describe the degree to which the methods are tested. Having a high coverage reduces the number of bugs in your code greatly. Moreover, adding new code with test coverage will help you ensure that you are not introducing any bugs along with the new code.

PhpStorm coverage integration.

A test harness with coverage information is a valuable tool to include in your CI setup. One way to execute all your PHPUnit cases is by adding a phpunit.xml file describing where the tests are located and other integrations. Running the phpunit command in that folder will execute all the tests.

Another good third-party service is Coveralls. It will tell your CI tool how the coverage of your code will change with the pull request –or patch– in question; since Coveralls knows about most of the major CI tools, almost no configuration is needed. Coveralls also provides a web UI to see what parts of the code are covered and the ones that are not.

Coveralls.io dashboard.

Write tests until you get 100% test coverage, or a satisfactory number. The higher the number the higher the confidence that the code is bug free.

Read the next section to see all these tools in action in a contributed Drupal 7 module.

A real life example

I applied the tips of this article to the TypedEntity module. TypedEntity is a nice little module that helps you get your code organized around your Drupal entities and bundles, as first class PHP objects. This module will help you to change your mindset.
Make sure to check the contents of the tests/ directory. In there you will see real life examples of a composer.json and test cases. To run the tests follow these steps:

  1. Download the latest version of the module in your Drupal site.
  2. Navigate to the typed_entity module directory.
  3. Execute tests/run-tests.sh in your terminal. You will need to have composer installed to run them.

This module executes both PHPUnit and Simpletest tests on every commit using Travis CI configured through .travis.yml and phpunit.xml.

Another great example, this one with a very high test coverage, is the Amazon S3 module. A new release of this useful module was created recently by fellow Lullabot Andrew Berry, and he added unit tests!

Do you do things differently in your Drupal 7 modules? Leave your thoughts in the comments!

MozCon 2015 Recap, Part 1: The Resurgence of On-site SEO

This year, MozCon demonstrated that Google has effectively killed SEO, in the sense of getting more traffic to your site through natural search rankings.

Much of the conference fell back on simplistic bromides around branding, social media, mobile, and the like: general digital marketing advice that has been hashed and rehashed before, and in much greater depth. This content might be inspiring; it goes down like cotton candy and gives you a temporary sugar high. But for anyone with real experience in the digital marketing arena, it just didn’t offer much substance to sink your teeth into.

The talks also didn’t offer many practical steps to help smaller business owners compete on an increasingly uneven playing field. When a competitor can influence search rankings through a national TV campaign, what do you do? SEO has traditionally helped even the odds in this regard. There was some good, worthwhile content (which I’ll get to in a moment), but very little of it could be called “SEO” anymore, and the lack of such practical content was noticeable. Based on conversations with attendees who have been coming for years, this was a major shift from previous iterations of the conference. Of course, the changing landscape does provide some interesting opportunities for those who are creative and tenacious enough to take advantage. As Wil Reynolds noted, VRBO was beaten in 18 months by Airbnb. Airbnb ranks #9 for vacation rentals; VRBO still ranks #1 for the phrase, and it doesn’t matter.

But of course, killing SEO has been one of Google’s goals for a long time. Google wants you to buy ads. That’s how it gets most of its revenue. As long as there are known ways to raise your rankings in organic results, then your need for ad spend goes down.

There are still ways to increase your rankings and get more traffic naturally. On-site structure and best practices, link building, social signals, and user engagement still have their place. What’s missing is the ability to measure effectiveness. It has become a black box of mystery. Even if you follow Google’s own rules and recommendations 100%, to the letter, there is no guarantee that it will actually help you get more traffic. This adds uncertainty, which means paying more for organic rankings with less definable ROI, which in turn makes the grass over on the paid Google AdWords side look so much greener.

It starts with the dreaded “not provided” referrer in analytics. Over 70% of all searches are encrypted and classified as “not provided,” meaning that marketers are blindfolded most of the time when trying to discover how users are reaching their site (unless, of course, they pay for ads; then the keyword data shows up just fine).

Add to this the various search algorithm updates and the heavy-handed punishment Google lays down on some online publishers, and you have people scared for their lives, timid to try anything new that may cross the line. Those who have figured something out are not going to share it publicly anytime soon.

Even the best talks at MozCon, the ones with clear examples to back up their assertions, had a large amount of uncertainty baked in. It’s like playing a game that generates random dungeon layouts each time: everything is recognizable, the textures and colors familiar, but you still have to find your own way through the dark maze and hope you stumble upon the treasure. It has made it much harder for the little guy to compete with big brands. And in many cases, it has sites competing against the Google giant itself for real estate in the search results.

[image:{"fid":"2984","width":"full","border":true}]

There are no organic listings above the fold. Zero. This query shows nearly all ads.

So what can we do to increase the visibility of our web real estate? Four broad topics covered across various talks were worthwhile; I’ll share some highlights and my personal takeaways for each. Part one will cover the first key topic: the resurgence of on-site SEO.

Onsite SEO

Onsite SEO used to be the only SEO. Your content structure and keywords. What you said about yourself is how your rankings were determined. Google came along and said that what other people said about you is more important, and backlinks and their anchor text were a key factor. With the work of Google’s search spam team and other initiatives like machine learning, the landscape has changed and morphed, and it looks like we are coming full circle.

We are now seeing a renaissance in the importance of onsite optimization, for two main reasons.

Reason 1: Google’s Knowledge Graph

Pete Meyers talked about disappearing organic rankings, with ads, shopping results, local listings, videos, and other Google properties taking up more and more space. Google does not want you to leave their ecosystem. You can see this further when Google just provides a direct answer to your question, solving your need immediately.

[image:{"fid":"2985","width":"full","border":true}]

This is a great user experience; it saves the user clicks and time. Google’s Knowledge Graph powers these answers, and you see it in more detailed action if you type in a query that generates a knowledge card. Meyers used the Space Needle as an example, but for this article we’ll use the Empire State Building.

[image:{"fid":"2986","width":"full","border":true}]

The data is structured, and you can play around with it. The answer it spits out is listed as the “Height.” We can play some quick Jeopardy to see if we can get it to return other parts of that knowledge, and we find that it does this really well.

[image:{"fid":"2987","width":"full","border":true}]

The problem is that it is editorially controlled, so it doesn’t scale. What happens when there is nothing to return from the structured data? We get a rich snippet that looks like a direct answer, but the text is actually pulled from the first ranking result.

[image:{"fid":"2988","width":"full","border":true}]

But these rich snippets don’t always pull from the first organic ranking. Ranking does matter...but all you have to do is get on the first page of the search results. After that, Google determines from that limited list what content to feature. How does it determine which one to use? Based on the on-page semantics of the page. Your content can directly shape the results of the SERPs (search engine results page). And once your content is featured in the rich snippet, you effectively double your search real estate, and some have seen significant CTR increases for those queries.

Another example: in the image below, you’ll see CNN has claimed the rich snippet, even though they are the fourth result.

[image:{"fid":"2989","width":"full","border":true}]

What’s interesting is that mobile seems to be the driver of these rich snippets (and a lot of the other redesigns Google has been implementing). Do the same query on a smaller device, and the rich snippet fills up the space above the fold. There are also some hints that voice is driving these changes. When you ask a question, what does Google Now (or Siri) tell you? They need structured data to pull from, and in its absence, they need to fill in the gaps to help with scale.

Reason 2: Machine Learning and Semantics

Rand Fishkin’s talk on on-site SEO went through the evolution of Google’s search engine, which historically had a bias against machine learning and depended on a manually crafted algorithm. This has changed. Google now relies heavily on machine learning, where algorithms don’t just apply manually entered ranking factors; they come up with their own rules for what makes a good results page for a given query. The machine doesn’t have to be told what a cat is, for example. It can learn on its own, without human intervention.

This means that not even Google’s engineers know why a certain ranking decision has been made. They can get hints and clues, and the system can give them weights and data. But for the most part, the system is a black box. If Google doesn’t know, we certainly can’t know. We can continue to optimize for classic ranking inputs like backlinks, content, page load speed, etc. However, their effectiveness will continue to get harder and harder to measure. If we want to prepare better for the future, we’ll have to focus more on, and get better at, optimizing for user outputs. Did the searcher succeed in their task, and what signals that? CTR? Bounce rate? Return visits? Length and depth of visit?

Based on some experiments that have been run, the industry knows that Google takes into account relative CTR and engagement. If users land on your result and spend more time on your site versus other results for the same query, you have a vote in your favor. If users click the back button quickly after visiting your page, that doesn’t bode well for your rankings.

Rand offers five ways to start optimizing for this brave new world:

  1. Get above the average CTR for your ranking position. Optimize title, description, and URL for click-throughs instead of keywords. Entice the click, but make sure you deliver with the content behind the headline. This is an age where even the URL slug might need a copywriter’s touch. And since Google often tests new results, it could be worth publishing additional unique content around the same topic to get it just right.
  2. Have higher engagement than your neighbors for the same queries. Keep them on the page longer, make it easier for them to go deeper into your content and site, load your site fast, invest in fantastic UX, and don’t annoy your visitors. Offering some type of interaction is a good idea. Make it really hard for them to click that back button.
  3. Fill gaps in knowledge. Google can tell what phrases and topics are semantically related for a given space, and can use criteria to determine if a page will actually fulfill a user’s need. For example, if your page is about New York, but you never mention Brooklyn or Long Island, your page probably isn’t comprehensive. Likewise, articles on natural language processing are probably going to include phrases like “tokenization” and “parsing.”
  4. Earn more social shares, backlinks, and loyalty per visit. Google doesn’t just take shares, alone on an island, as a ranking factor. Raw shares and links don’t mean as much. If you earn them faster than your competition, however, that’s a stronger signal. Your visits-to-shares ratio needs to be better. You should also aim to get more returning visitors over time.
  5. Fulfill the searcher’s task, not just their query. If a large number of searchers keep ending up at a certain location, Google wants to get them to that destination faster, without making them follow a meandering path. If people searching for “best dog food” eventually stop their searching after they end up at Origen’s website, then Google might use that data to rank Origen higher for those queries. Based on Chrome’s privacy policy, Google is definitely getting and storing this data. Likewise, some queries are more transactional, and results can be modified accordingly. Conversion rate optimization efforts might therefore help with search visibility.

Action Items for Onsite SEO

  1. What questions are your potential customers asking, questions whose answers can’t be editorially curated? What questions actually motivate some of the keywords you rank for on the first page? What are some gaps in content that are hard to editorially curate? Craft sections of your content on your ranked page so it better answers those questions, and fills in these gaps.
  2. Build deep content. Add value and insight, stuff that can’t be replaced by a simple, one-line answer.
  3. Watch one of Google’s top engineers, Jeff Dean, talk about Deep Learning for building intelligent computer systems.
  4. Give users a reason and a way to come back to your site. Build loyalty. Email newsletters and social profiles can help. If you know a good content path that builds loyalty in potential customers, consider setting up remarketing campaigns that guide them through this funnel. If they drop off, draw them back in at the appropriate place. This can also be done with an email newsletter series.
  5. Look for pages that could benefit from some kind of interactivity, adding value to the goal of the page and helping the user fulfill their need. A good example of this is the NY Times page on how family income relates to chances to go to college. Look for future opportunities. Another example is Seer Interactive’s Pinterest guide, where the checklists contain items you can actually…check.
  6. Ask yourself: can I create content that is 10 times better than anything else listed for this query?
  7. Matthew Brown, in his talk on insane content, gave several examples of companies that broke the mold of what content can be, raising engagement and building loyalty by shattering expectations. When working on content ideas, be sure to make presentation part of the discussion. How could you add some personalization? How can you put the user more in charge of their reading experience? Some examples he showed:
    • Coder - Tinder but for code snippets. Swipe left if the code is bad, swipe right if the code is good.
    • Krugman Battles the Austerians.
    • What is Code? - this particular feature took on a second life when it was put on GitHub so changes could be recommended. A great way of extending the life of an investment. It also adds some personalization in that it shows you how many times you’ve visited and how long you’ve spent reading.
  8. Check some of your richer content from the past. Have you created any assets as part of your content that can live on their own? Maybe with just a little improvement?
  9. Define who a loyal user is. Is it someone who comes to the site twice a month? 5 times a month? Define other metrics of success for a piece of content. To measure it, you need to know what it is.
  10. Find content that has a high bounce rate and/or low time on page. Why are people leaving? Does it have a poor layout? Hard to read? Takes too long to load?
  11. Exchange your Google+ social buttons for WhatsApp buttons. This is based on a large pool of aggregate data that Marshall Simmonds and his team have analyzed. Simmonds said CNN saw big gains when they did this.

SEO is an exciting discipline, one that, by necessity, tries to be a synthesis of so many other disciplines. Where else are you going to be talking about content strategy, which then leads into the theories behind deep machine learning, and then on to fostering a better user experience?

In part 2 of the recap, we’ll go over remarketing, building online communities, and digging deeper into your analytics.

Why Responsive Design?

If you've built a web site in the past several years, you've probably heard of Responsive Design. Coined in 2010 by designer Ethan Marcotte, the term covers a set of techniques that allow one flexible web design to serve many different screen sizes.

In the years since it hit the scene, "responsive" has become the default choice for modern, mobile-friendly sites. With that visibility, though, comes misunderstanding and controversy! Some proponents treat responsive design as a silver bullet for every project's tough problems, while detractors complain it will limit their grand vision. Some clients believe they have to choose between a responsive site or a native mobile app, as if one ruled out the other. And in a few frustrating situations, we've seen totally separate mobile, tablet, and desktop mockups passed off as a "responsive" approach.

At Lullabot, we believe that responsive design is a critical part of future-friendly web publishing. Making the most of it requires understanding which problems it solves, which ones it doesn’t, and how it fits your project’s needs. In this article we'll take a look at how we build responsive sites, correct some common misconceptions, and explain the real benefits for your projects.

What is "Responsive," anyways?

Ethan Marcotte's seminal article and follow-up book define specific techniques that make a site responsive. Both make great reading for anyone who wants to dig into the details. At a high level, we can boil it down to a handful of principles.

Build a fluid, flexible layout.

Ever since the first Photoshop mockup was used to sell a site design, web developers have been creating "fixed width" layouts that target specific screen sizes. When space-constrained mobile browsers hit the market, they had to create custom mobile sites with stripped-down designs. Responsive design starts with a layout that stretches and squishes to fit any screen, often relying on percentages and ratios rather than fixed sizes and positions.

Media queries tweak elements when the layout breaks.

Stretchable, squeezable fluid layouts can't cover every extreme: a design that looks great on a 27" desktop monitor might look impossibly cluttered on a 4" phone. Designers identify situations where the fluid design breaks, and use conditional CSS rules to add special styling instructions for those "breakpoints." On a tall-but-skinny smartphone, for example, sidebars might be moved to the bottom of an article to leave more space for text. On a wide desktop screen, photo captions might be moved beside images to take advantage of the extra room.

Images and media must be flexible.

Giant fixed-width images, embedded media players, and clunky widgets can all break your smooth, fluid layouts. Styling for these elements should use percentages and ratios too, ensuring that they conform to the responsive layout they're placed in. Tools like FitVids.js and the new HTML picture element can make this part easier.

Respond to capabilities, not specific devices.

The most important principle is simple: responsive design plans for scenarios and capabilities, rather than specific hardware devices. Instead of "sniffing" for iPhones and generating custom HTML and CSS, a responsive site uses one standard set of HTML. Conditional CSS rules and client-side scripts account for the special cases like low-resolution portrait displays, mobile devices with GPS capabilities, and so on. That ensures the design will respond intelligently to any device that matches the right criteria—including newly released ones—and the long tail of older devices often ignored by the latest design trends.

Although device detection, tailored markup, and other techniques can be layered on top of a responsive design, a strong responsive foundation is the most reliable starting point.

[image:{"fid":"2982","width":"full","border":false}]

Complementary techniques

In addition to those core principles, our team has discovered that certain approaches blend well with responsive techniques. Not every responsive design project works this way, but the ones that do are much more likely to succeed.

Plan the content before the presentation.

This is good advice for any project, but it's especially important for responsive sites. Before we start wireframing, we want to understand the site's most important messages, know just what kinds of content will be used to communicate them, and decide how each element will be prioritized. We use content like building blocks to construct pages, rather than pouring it into the design like cake batter.

Design for mobile.

"Mobile-first" design doesn't mean that every site needs to look like a skinny river of text. Rather, it means keeping the most constrained scenario in mind as the design evolves. Planning around those limitations first forces important decisions about priorities and the relationships that connect each element in the design. It's much easier to start with a streamlined mobile layout, layering additional elements as screens get larger, than it is to squeeeeeeeze a busy desktop design onto a tiny touch screen.

Design components, not just pages.

Identifying the shared visual components that are used throughout the design, then planning how those components will behave in different contexts, can pay off handsomely. This modular approach can help separate the high-level work of building responsive layouts from the fiddly tweaking and testing needed for each reusable component. Techniques like Style Tiles can be used to explore the aesthetics while the components are being built.

Build working wireframes.

Napkins and PDFs are great for rough concept sketches, but once a responsive design gets serious, nothing beats HTML and CSS. Viewing the wireframes on several devices, rather than flipping through separate wireframes for each breakpoint, helps stakeholders envision how the design will work in the real world. It also helps keep a design team honest, ensuring that the ideas they come up with will really work in a browser.

Use server-side tweaks to lighten the load.

Although responsive design is defined by its use of browser technologies like CSS, server-side logic can still help. When a layout needs to be personalized for the logged-in user, for example, conditional logic on the server can still supply a different layout template. Client-side scripts can also request specific widgets or content from the server if they're needed, avoiding the cost of downloading every piece of content and media on devices that will only show a smaller subset. Designer Luke Wroblewski calls this technique RESS: REsponsive design and Server Side components.

Understanding the challenges

Like many other techniques, Responsive Design is an important tool in our toolbox, not a cure-all for every problem ailing the web. Before diving into your first responsive project, it's important to keep some caveats in mind.

Building a desktop-only site is still cheaper… in the short term.

Creating a design around a single screen size will always take less time than one that accounts for many sizes. In some cases, a skilled team can also produce a stripped down “mobile design” without breaking the bank. The economics of responsive design make the most sense when additional screen sizes (like tablets) enter the mix, and when your desktop and mobile sites must offer equivalent functionality.

It isn't as flexible as designing one-off sites for every device.

A design that accounts for multiple displays and resolutions can’t handle extreme variations in layout and content from one size to another. Some design choices, like a hard-and-fast reliance on mouseover effects or swipe gestures, won’t work on every device, either. Nevertheless, few organizations have the resources and coordination to build a truly custom experience for every display size and device type. In addition, Google’s own research demonstrates that most users want consistency across devices when they use a web site; the more your mobile and desktop sites differ, the more difficult it can be to switch from one to the other.

Speed and mobile-friendliness aren’t automatic.

Responsive design can bring lots of benefits for mobile users, but making sure the site loads quickly still takes work. There’s nothing preventing a bad responsive implementation from burying low-bandwidth mobile users under high-res images, auto-play videos, and third-party Javascript. Techniques like conditional loading of media assets, testing restricted-bandwidth scenarios, and setting a clear performance budget before you build, are all critical.

Integrating ads takes extra planning.

For sites that rely on ad revenue, responsive design can complicate matters. Although there’s nothing preventing you from scaling ads up and down, or shuffling them around on the page as the layout changes, their aesthetic impact can be tougher to predict without extensive testing. The density of advertisements users have grown to accept on some desktop sites can be overwhelming on a small screen, and advertisers who pay a premium for specific spots on the page may need hand-holding to understand the breakpoint-to-breakpoint changes in a responsive layout.

Don’t be distracted by the app argument.

Mobile developers have been locked in a “Web apps versus native apps” tug-of-war ever since the iOS App Store launched in 2008. Responsive design is often roped into that argument, even though it’s a separate issue. These days, even native mobile apps use responsive design principles as often as web sites: the average Android app needs to cope with hundreds of different display sizes and resolutions, after all. Whether your organization needs a mobile app shouldn’t change the way your web site handles device and screen-size proliferation.

Should your site be responsive?

Let’s keep this one simple: Yes. Responsive design is the most cost-effective way to deal with the widest possible array of devices, and the simplest way to future-proof your web site. On their Responsive Design Podcast, Karen McGrane and Ethan Marcotte have talked to dozens of organizations about the challenges and the payoffs: there’s no denying its effectiveness.

"I can tell you that it worked way more emphatically than I would have—even in my wildest dreams—have hoped," said Mark Jannot of the National Audobon Society on a recent episode. "The percentage of mobile [traffic] doubled, from fifteen percent to thirty percent, essentially immediately."

Of course, that doesn’t mean that you can’t or shouldn’t pair it with techniques like native mobile apps, device and feature detection to enable additional features, server-side tailoring of site markup, and so on. Responsive techniques allow you to build a solid foundation for your web presence, one that will handle a wide range of scenarios as gracefully as possible. When that baseline is established, you can focus your customization resources where they’ll do the most good, rather than scrambling to catch up whenever a new device comes into vogue.

Write Unit Tests for Your Drupal 7 Code (part 1)

Large software projects lead to many features being developed over time. Building new features on top of an existing system always risks regressions and bugs. One of the best ways to ensure that you catch those before your code hits production is by adding unit tests.

In this article series I will guide you through adding PHPUnit tests to your Drupal 7 modules. This article explores what unit tests are and why they are useful, and then looks at how to structure your Drupal 7 code in a way that is conducive to writing unit tests later on.

TL;DR

I encourage you to start testing your code. Here are the most important points of the article:

  • Start writing object-oriented code for Drupal 7 today. In your hooks, you can just create the appropriate object and call the method for that hook (see the sketch after this list).
  • Fake your methods to remove unmet dependencies.
  • Leverage PHPUnit to ease writing tests.
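
As a quick illustration of the first point, a hook implementation can stay paper-thin and delegate everything to an object. This is only a sketch; the handler class and its method name are hypothetical:

/**
 * Implements hook_node_view().
 *
 * The hook body only creates the appropriate object and calls the method
 * for this hook, so all of the interesting logic lives in a class that can
 * be unit tested in isolation.
 */
function mymodule_node_view($node, $view_mode, $langcode) {
  // MyModuleNodeViewHandler is a hypothetical handler class.
  $handler = new MyModuleNodeViewHandler();
  $handler->nodeView($node, $view_mode, $langcode);
}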

Unit tests to the rescue

In Drupal 7 core, Simpletest was added so everyone could write tests for their modules. While this has been a great step forward, executing those integration tests is very slow. This is a problem when you want to do Test Driven Development, or if you want to run those tests often as part of your workflow.

A part of Simpletest is a way to write unit tests instead of integration tests. You just need to create your test class by inheriting from DrupalUnitTestCase, but its drawback is that most of Drupal isn’t available. Most Drupal code needs a bootstrap, and it’s very difficult to test your code without a full (slow) Drupal installation being available. Since you don’t have a database available, you can’t call common functions like node_load() or variable_get(). In fact, you should think about your unit tests as standalone scripts that can execute chunks of code. You will see in the next section how PHPUnit can help you to create these testing scripts.

PHPUnit has you covered

In the greater PHP community, one of the leading unit test frameworks is PHPUnit, by Sebastian Bergmann and contributors. The framework is widely used in the community, and it has attracted many integrations and extra features compared to the aforementioned DrupalUnitTestCase.

Daniel Wehner comments on these integrations saying:

Since http://drupal.org/node/1567500 Drupal 8 started to use PHPUnit as it's unit test framework. One advantage of PHPUnit is that there are tools around which support it already.

Here is a screenshot of PhpStorm where you can see how you can execute your tests from the IDE:

[image:{"fid":"2960","width":"full","border":false}]

PHPUnit is the PHP version of the xUnit testing framework family. Therefore, by learning it, you will be able to leverage that knowledge in other languages. Besides, there’s a large documentation and support base for xUnit architectures.

The best part of PHPUnit is that it allows you to write easy-to-read test classes efficiently, and it comes with many best practices and helper tools, like the test harness XML utility. PHPUnit also includes handy tools to mock your objects and other developer experience improvements to help you write your tests in a clearer and more efficient way.

To use PHPUnit with Drupal 7 you need to write object-oriented code. The next section will show you an example of the dependency problem, and one way to solve it using OOP with the fake object strategy.

A change in your mindset

The hardest part of unit testing your Drupal code is changing your mindset. Many developers are getting ready to use object-oriented PHP for Drupal 8, but they keep writing procedural code in their current work. The fact that Drupal 7 core is not as object-oriented as it might have been does not imply that the custom code you write must also be procedural and untestable.

In order to start unit testing your code, you need to start coding using OOP principles. Only loosely coupled code can be easily tested. Usually this starts by having small classes with clearly defined responsibilities. This way, you can create more complex objects that interact with those small pieces. Done correctly, this allows you to write unit tests for the simple classes and have those simple classes mocked to test the complex ones.

Unit testing is all about testing small and isolated parts of the code. You shouldn’t need to interact with the rest of the codebase or any elements in your system such as the database. Instead, all the code dependencies should be resolved through the use of mock objects, fake classes, dummies, stubs or test doubles.

Mock objects avoid dependencies by getting called instead of the real domain objects. See Guzzle’s documentation for an example.

Gerard Meszaros writes about test doubles in these terms:

Sometimes it is just plain hard to test the system under test (SUT) because it depends on other components that cannot be used in the test environment. This could be because they aren't available, they will not return the results needed for the test or because executing them would have undesirable side effects. In other cases, our test strategy requires us to have more control or visibility of the internal behavior of the SUT.

When we are writing a test in which we cannot (or chose not to) use a real depended-on component (DOC), we can replace it with a Test Double. The Test Double doesn't have to behave exactly like the real DOC; it merely has to provide the same API as the real one so that the SUT thinks it is the real one!

In a typical Drupal 7 module, which is our system under test (SUT), many of the parts of the code that we want to test rely on external dependencies, our depended-on components (DOC). Good examples of those dependencies are Drupal core, other contributed modules, or remote web services. The fact that a method calls a Drupal function, such as cache_get(), makes it very difficult for the test runner to execute that code, since that function will not be defined during the test. Even if we manually included includes/cache.inc, the cache_get() function might require other include files or even an active database connection.

Consider the following custom class:

class MyClass implements MyClassInterface {
  // ...
  public function myMethod() {
    $cache = cache_get('cache_key');
    // Here starts the logic we want to test.
    // ...
  }
  // ...
}

When we call myMethod(), we will need to have the database ready, because it calls cache_get().

// Called from some hook.
$object = new MyClass();
$object->myMethod();

Therefore, myMethod(), or any code that uses it, is not unit testable. To fix this, we wrap cache_get() in a class. The big advantage of this is that we now have a CacheController object that deals with all of our cache needs by interacting with the Drupal API.

class CacheController implements CacheControllerInterface {

  /**
   * Wraps calls to cache_get.
   *
   * @param string $cid
   *   The cache ID of the data to retrieve.
   * @param string $bin
   *   The cache bin to store the data in.
   *
   * @return mixed
   *   The cache object or FALSE on failure.
   */
  public function cacheGet($cid, $bin = 'cache') {
    return cache_get($cid, $bin);
  }

}

And the custom class becomes:

class MyClass implements MyClassInterface {
  // ...
  public function __construct(CacheControllerInterface $cache_controller) {
    $this->cacheController = $cache_controller;
  }
  // ...
  public function myMethod() {
    $cache = $this->cacheController->cacheGet('cache_key');
    // Here starts the logic we want to test.
    // ...
  }
  // ...
}

The calling code stays essentially the same; the only difference is that it now passes a CacheController instance to the constructor.

Our test class will execute myMethod() with a fake cache controller that doesn’t need the bootstrap or the database.

// Called from the PHPUnit test case class.
$cache_controller_fake = new CacheControllerFake();
$object = new MyClass($cache_controller_fake);
$object->myMethod();

What our fake cache controller looks like:

class CacheControllerFake implements CacheControllerInterface {

  /**
   * Cache array that doesn't need the database.
   *
   * @var array
   */
  protected $cache = array();

  /**
   * Wraps calls to cache_get.
   *
   * @param string $cid
   *   The cache ID of the data to retrieve.
   * @param string $bin
   *   The cache bin to store the data in.
   *
   * @return mixed
   *   The cache object or FALSE on failure.
   */
  public function cacheGet($cid, $bin = 'cache') {
    return isset($this->cache[$bin][$cid]) ? $this->cache[$bin][$cid] : FALSE;
  }

}

The key is that the test will create a fake object for our CacheController and pass it to the SUT. Remember that you are not testing cache_get() but how the code that depends on it is working.

In this example, we have removed the dependency on includes/cache.inc and the existence of the database to test a method that calls to cache_get(). Using similar techniques you can test all your classes in your module.
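
To round out the example, here is a minimal sketch of what the PHPUnit test case could look like. The assertion is only illustrative; replace it with expectations that match what your myMethod() actually does:

class MyClassTest extends \PHPUnit_Framework_TestCase {

  /**
   * Tests myMethod() without a Drupal bootstrap or a database.
   */
  public function testMyMethod() {
    $cache_controller_fake = new CacheControllerFake();
    $object = new MyClass($cache_controller_fake);
    $result = $object->myMethod();
    // Illustrative assertion only: as written above, myMethod() has no
    // return value, so it returns NULL.
    $this->assertNull($result);
  }

}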

The next article of the series will get deeper into the matter by covering:

  • Mocking tools, such as PHPUnit mock objects and the Mockery project.
  • Dependency injection in Drupal 7 to pass your dependencies easily.
  • Drupal Unit Autoload to reduce the number of classes to mock.
  • A real life example that applies all these concepts.

Do you add unit tests to your Drupal 7 modules? Share your experience in the comments!

The New Lullabot.com

React.js, CouchDB, Node.js, de-coupling Drupal; if any of that sounds cool to you, then this is the podcast for you. Kyle Hofmeyer gathered together several Lullabots who helped create the new lullabot.com to learn what kind of wizardry was used to make this thing purr like a happy kitten. Jared Ponchot talks about the advantages this process provided for him and his design team. Sally Young talks about the guts of the site and the magic that went into making this de-coupled Drupal site a success. We are also joined by Kris Bulman, Wes Ruvalcaba, and Betty Tran as they share their experience building the site. From front-end advantages to lazyboyDB, this podcast has it all.

Announcing The New Lullabot.com

Mmmm… love that new website smell!

Some history

It's been nearly 10 years since we launched our first company website at lullabot.com. During that time, we've done five full redesigns of the site. The company has grown from two people to 62. We've expanded from a small Drupal consulting and education company to a full-service agency with a complete Design team, dedicated front-end developers, and of course, the expert Drupal back-end development which has always been our foundation.

As we've grown, our site design has reflected our focus and skills. The first site that Matt and I put together back in 2005 was intentionally sparse – not exactly beautiful, but functional and simple to maintain for just 2 or 3 people. As we hired talented designers and skilled front-end developers, site redesigns became more complex. In 2010, we split our Drupal education services into Drupalize.Me and the main focus of lullabot.com became our client services work, showcasing our design and development projects and sharing insights from our team.

Revving up the new Lullabot.com

The newest iteration of Lullabot.com is our most ambitious to date. As with most of our client engagements, the project started with research. Our Design team interviewed existing and potential clients, site visitors, and the Lullabot team to understand how people were using our site – what they wanted to get out of it, and why they visited. Our team distilled all they'd learned into goals and early wireframes for the site. They then worked with our Development staff to try to come up with the most flexible way of achieving these goals so that we could have full control of the site in ways that Drupal often doesn't afford. They wanted full <html> to </html> blue-sky design of any arbitrary page on the site without losing Drupal's amazing content management capabilities.

The technical team settled on a decoupled, isomorphic approach using Facebook's React, Node.js, CouchDB (a noSQL database) and Drupal as the backend CMS.

Content management is what Drupal does best, and this happens through a purpose-built subsite where the Lullabot team can log in to post articles and podcasts and manage their bios. Drupal pushes content into CouchDB, which exposes a REST API for React to consume. React is an isomorphic library (its code can run both on the server and in the client), which means that when a visitor first visits the site, they receive the HTML of the entire page. After that, the rest of the navigation happens client-side, updating just the parts of the page that differ from the current one. Furthermore, React is written to be completely backward compatible with older browsers.

Our clients are often in need of API-driven native mobile apps, television-based apps, and content ingestion on connected devices. We've implemented these things in less holistic ways with our clients in the past, but the new Lullabot.com gave us a chance to experiment with methodologies that weren't quite tried-and-tested enough to recommend to clients. Now that we've seen the type of flexibility they give us on lullabot.com, we'll be adding this approach to the array of architectural strategies we can consider for our clients in the future.

Look ma, no hands!

The results are amazing: high speed, high performance, and superlative flexibility. In layman's terms, this means our Design and Front-end people can go crazy, implementing blue-sky ideas without the usual Drupal markup constraints. The new site is fully responsive. Articles and portfolio work pages can have giant, dazzling, full browser-height background images or videos. Articles have big text that is easy to read at any scale, from large desktop monitors to the smallest phone screens. Furthermore, we did everything with an eye toward blazing-fast page loads. We omitted jQuery, trading convenience in the development process for speedier page loads. Then we looked at every HTTP request, every image, and every library to make sure our website was as snappy on an older smartphone as it was on the desktop. Best of all, we off-loaded much of the heavy lifting to the client side with React.

Design-wise, the new site is uncluttered, sparse, and relatively simple. But whether you're looking for our vast archive of articles or podcasts, information about what services Lullabot offers, who we've worked with and what we've done, or you're curious to know what it's like to work at Lullabot, it's all there.

Over the coming months, we will be writing a series of articles and doing a few podcasts talking about different aspects of the new site. Please subscribe to the Lullabot email newsletter below and you'll be the first to know when new articles are published.

SEO and Customer Data

In this podcast, hostess Amber Matz talks with Andrew Wilson, Senior Account Manager at Drupalize.Me about SEO, and gathering and analyzing customer data. Learn about how structured data and rich snippets can enhance search results, along with some important things to know about how Google works. We talk about some hard SEO lessons learned here at Drupalize.Me. We also discuss the aggregated data in Google Analytics versus user-specific data and tracking provided by services like Kissmetrics and Intercom.io, and how, as a business, you can learn more about your customers through gathering and analyzing site visitor data with an ultimate goal of serving your customers and clients (potential and present) more effectively.

Switching Drush Versions

Different versions of Drupal require different versions of Drush; here’s how to make the switch easily.

Stu Keroff on Asian Penguins

In this episode of Hacking Culture, Matthew Tift talks with Stu Keroff about Asian Penguins, a Linux user group for Asian middle-school students. The kids not only learn how to use Linux, but also maintain more than 30 Linux computers for their school and provide Linux computers to local families that cannot afford a computer. This episode is released under the Creative Commons Attribution-ShareAlike 4.0 International license. The theme music used in this episode comes from the Open Goldberg Variations. The musical interludes all come from ccMixter.org. “Reverie (small theme)" by _ghost (http://ccmixter.org/files/_ghost/25389) under CC BY license. "Heartbit" by Alex featuring Onlymeith (http://ccmixter.org/files/AlexBeroza/37758) under CC BY-NC license.
