Feed aggregator

Evolving Web: Creating Landing Pages with Drupal 8 and Paragraphs

Planet Drupal -

As Drupal themers and site builders, we often have to look for creative solutions to build landing pages. Landing pages are special pages often used for marketing campaigns, to attract particular audiences, or to aggregate content about a certain topic.

We want landing pages to be attractive and entice users to click, but we often also need them to be flexible enough to communicate different things. We want landing pages to look great the day we launch a website, and to stay flexible so that a site admin can change the content or add a new page and the site still looks great.


Jeff Geerling's Blog: How to attach a CSS or JS library to a View in Drupal 8

Planet Drupal -

File this one under the 'it's obvious, but only after you've done it' category—I needed to attach a CSS library to a view in Drupal 8 via a custom module so that, wherever the view displayed on the site, the custom CSS file from my module was attached. The process for CSS and JS libraries is pretty much identical, but here's how I added a CSS file as a library, and made sure it was attached to my view:

Add the CSS file as a library

In Drupal 8, drupal_add_css(), drupal_add_js(), and drupal_add_library() were removed (for various reasons), and now, to attach CSS or JS assets to views, nodes, etc., you need to use Drupal's #attached functionality to 'attach' assets (like CSS and JS) to rendered elements on the page.

In my custom module (custom.module), I added the CSS file css/custom_view.css:
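The excerpt cuts off before the code; a minimal sketch of the library definition it describes (the file and library names come from the paragraph above, the rest is assumed) could look like this:

```yaml
# custom.libraries.yml
# Defines a 'custom_view' library containing the module's CSS file.
custom_view:
  version: 1.x
  css:
    theme:
      css/custom_view.css: {}
```

One common way to then attach it to a view is from hook_views_pre_render(), adding 'custom/custom_view' to $view->element['#attached']['library'].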

OSTraining: There Will Never Be a Drupal 9

Planet Drupal -

Yes, that's a big statement in the title, so let me explain.

Lots of OSTraining customers are looking into Drupal 8, and they have questions about Drupal 8's future. If they invest in the platform today, how long will that investment last?

This is just my personal opinion, but I think an investment in Drupal 8 will last a long, long time.

Drupal 8 took five years. It was a mammoth undertaking, and no one in the Drupal community has the energy for a similar rewrite.

Frederic Marand: How to display time and memory use for Drush commands

Planet Drupal -

When you use Drush, especially in crontabs, you may sometimes be bitten by RAM or duration limits. Of course, running Drush with the "-d" option will provide this information, but it will only do so at the end of an annoyingly noisy output debugging the whole command run.

On the other hand, just running the Drush command within a time command won't provide fine memory reporting. Luckily, Drush implements hooks that make acquiring this information easy, so here is a small gist you can use as a standalone Drush plugin or add to a module of your own:


Issue 251

The Weekly Drop -

Issue 251 - August 4th, 2016

From Our Sponsor

Intranets the Drupal Way!

With its customization-friendly technical architecture and wide variety of collaboration and user management tools, Drupal has been proven effective for corporate Intranets of all sizes and industries. But how do you know if Drupal is the best Content Management System for your Intranet? Mediacurrent’s latest eBook will help to guide you through that decision including evaluating Drupal as a platform for your Intranet, how Drupal solves for common Intranet challenges, and top picks for Intranet-ready Drupal modules.

Articles

7 Facts That Reveal the State of Drupal

Some interesting info about Drupal 8 today.

A Survey! Is Drupal Hard? (Sponsored)

How does Drupal compare to other platforms you've used? Please take the survey!

Basethemes in Drupal 8

The one and only MortenDK explains the concept of base themes in Drupal 8.

Rethinking Content

"While mobile-first is a good design philosophy, actually thinking in terms of a mobile display vs a desktop is already a step behind. It’s time to rethink content."

Setting Higher Standards for Corporate Contributions to Drupal

Dries Buytaert clarifies comment about companies contributing to core.

Tutorials

Adding pURL Multidomain XMLSitemap

Creating Content Blocks Programmatically in Drupal 8

Drinking the Drupal-Composer Kool-Aid?

"Do you want to manage modules and dependencies the PHP way instead of the "Drupal" way? Don't know how to use composer with Drupal? Are you planning to ditch drush make approach and adopt a composer based workflow?"

Drupal 8 Namespaces - Class Aliasing

Hide the Page Title Depending on a Checkbox Field in a Particular Content Type

How to Print Variables Using Kint in Drupal 8

If you used devel in Drupal 7 you'll want to look at this post about Kint in Drupal 8.

How to Use Entity Print in Drupal 8

Need to Quickly Roll Back Your Drupal Database? Restore It with Just One Click

Waterwheel: The Drupal SDK for JavaScript Developers (Sponsored)

As Drupal begins to be more widely used as a back end for web services, and for application ecosystems by extension, developers of wildly diverse backgrounds are consuming and manipulating data from Drupal in unprecedented ways. This webinar will discuss Waterwheel, our client- and server-side JavaScript library, which helps developers talk with Drupal without having to learn about core REST’s authentication system, and how Waterwheel can benefit developers, workflows, architecture, and more.

Projects

Releases of Various Drupal 8 Media Modules

Releases

admin_toolbar 8.x-1.16
Bean 7.x-1.10
entity_reference_revisions 8.x-1.0
media_entity 8.x-1.3
media_entity_image 8.x-1.2
menu_block 8.x-1.2
Paragraphs 8.x-1.0
Services 7.x-3.16

Podcasts

Around the Drupalverse with Enzo García - Lullabot Podcast

DrupalEasy Podcast 183 - Higher Ed - Shawn DeArmond

News

Drupal Association News: Drupal Association's 12 Month Focus

Drupal Association director Megan Sanicki offers insight into the direction of the DA over the next 12 months.

Drupal Association News: Changes for the Drupal Association Engineering Team

Jobs

List Your Job on Drupal Jobs

Wanna get the word out about your great Drupal job? Get your job in front of hundreds of Drupal job seekers every day at Jobs.Drupal.Org.

Featured Jobs

Remote Drupal Developer

Toptal LLC Anywhere

Senior Drupal Developer

meltmedia Tempe/AZ/US

Web Developer

Southwest Florida Water Management District Brooksville/FL/US

lakshminp.com: Drupal composer workflow - part 2

Planet Drupal -

In the previous post, we saw how to add and manage modules and module dependencies in Drupal 8 using Composer.

In this post we shall see how to use an exclusively composer-based Drupal 8 workflow. Let's start with a vanilla Drupal install. The recommended way to go about it is to use the Drupal Composer project.

$ composer create-project drupal-composer/drupal-project:8.x-dev drupal-8.dev

If you are a careful observer (unlike me), you will notice that a downloaded Drupal 8 package ships with the vendor/ directory. In other words, we need not install the composer dependencies when we download it from d.o. On the other hand, if you "git cloned" Drupal 8, it won't contain the vendor/ directory, hence the extra step of running `composer install` in the root directory. The top level directory contains a composer.json, and the name of the package is drupal/drupal, which is more of a wrapper for the drupal/core package inside the core/ directory. The drupal/core package installs Drupal core and its dependencies. The drupal/drupal package helps you build a site around Drupal core and maintains dependencies related to your site, modules, etc.

The Drupal Composer project uses a slightly different project structure. It installs core and its dependencies similar to drupal/drupal, and it also installs the latest stable versions of Drush and Drupal Console.

$ composer create-project drupal-composer/drupal-project:8.x-dev d8dev --stability dev --no-interaction

New directory structure

Everything Drupal related goes in the web/ directory, including core, modules, profiles and themes. Contrast this with the usual structure where there is a set of top level directories named core, modules, profiles and themes.

Drush and Drupal Console (both latest stable versions) get installed inside the vendor/bin directory. The reason Drush and Drupal Console are packaged on a per-project basis is to avoid the dependency issues we might otherwise face if they were installed globally.
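The per-project binary idea can be sketched with a stub (the echo script stands in for the real drush binary, and the /tmp path is purely illustrative):

```shell
# Simulate a project that carries its own copy of drush in vendor/bin.
mkdir -p /tmp/d8dev/vendor/bin
printf '#!/bin/sh\necho "drush from vendor/bin"\n' > /tmp/d8dev/vendor/bin/drush
chmod +x /tmp/d8dev/vendor/bin/drush

# Prepending vendor/bin to PATH makes the project-local drush win over any global one.
cd /tmp/d8dev
PATH="$PWD/vendor/bin:$PATH" drush
```

Because each project's vendor/bin comes first on the PATH only while you work in that project, two projects can pin different Drush versions without conflict.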

How to install Drupal

Drupal can be installed using the typical site-install command provided by drush.

$ cd d8dev/web
$ ../vendor/bin/drush site-install --db-url=mysql://<db-user-name>:<db-password>@localhost/<db-name> -y

Downloading modules

Modules can be downloaded using composer. They get downloaded in the web/modules/contrib directory.

$ cd d8dev
$ composer require drupal/devel:8.1.x-dev

The following things happen when we download a module via composer.

  1. Composer updates the top level composer.json and adds drupal/devel:8.1.x-dev as a dependency.

     "require": {
         "composer/installers": "^1.0.20",
         "drupal-composer/drupal-scaffold": "^2.0.1",
         "cweagans/composer-patches": "~1.0",
         "drupal/core": "~8.0",
         "drush/drush": "~8.0",
         "drupal/console": "~1.0",
         "drupal/devel": "8.1.x-dev"
     },

  2. Composer dependencies (if any) for that module get downloaded in the top level vendor directory. These are specified in the composer.json file of that module. At the time of writing, the Devel module does not have any composer dependencies.

     "license": "GPL-2.0+",
     "minimum-stability": "dev",
     "require": { }
     }

Most modules in Drupal 8 were (and are) written without taking composer into consideration. We are used to running the drush dl command, which parses our request and downloads the appropriate version of the module from drupal.org servers. Downloading a module via composer, however, minimally requires the module to have a composer.json. So how does composer download all the Drupal contrib modules that don't have one? The answer lies in a not-so-secret sauce ingredient we added in our top level composer.json:

"repositories": [ { "type": "composer", "url": "https://packagist.drupal-composer.org" } ],

Composer downloads all packages from a central repository called Packagist, PHP's equivalent of npmjs. Drupal provides its own flavour of Packagist to serve the modules and themes hosted exclusively at Drupal.org. Drupal Packagist ensures that contrib maintainers need not add a composer.json to their projects.

Let's take another module which does not have a composer.json, like Flag (at the time of writing). Let's try to download Flag using composer.

$ composer require drupal/flag:8.4.x-dev
./composer.json has been updated
> DrupalProject\composer\ScriptHandler::checkComposerVersion
Loading composer repositories with package information
Updating dependencies (including require-dev)
  - Installing drupal/flag (dev-8.x-4.x 16657d8)
    Cloning 16657d8f84b9c87144615e4fbe551ad9a893ad75
Writing lock file
Generating autoload files
> DrupalProject\composer\ScriptHandler::createRequiredFiles

Neat. Drupal Packagist parses contrib modules and serves the one which matches the name and version we gave when we ran that "composer require" command.

Specifying package sources

There is one other step needed to complete your composer workflow: switching to the official Drupal.org composer repository. The current composer.json lists Drupal Packagist as the default repository.

"repositories": [ { "type": "composer", "url": "https://packagist.drupal-composer.org" } ],

Add the Drupal.org composer repo using the following command:

$ composer config repositories.drupal composer https://packages.drupal.org/8

Now, your repositories entry in composer.json should look like this:

"repositories": { "0": { "type": "composer", "url": "https://packagist.drupal-composer.org" }, "drupal": { "type": "composer", "url": "https://packages.drupal.org/8" } }

To ensure that composer indeed downloads from the new repo we specified above, let's remove the drupal packagist entry from composer.json.

$ composer config --unset repositories.0

The repositories config looks like this now:

"repositories": { "drupal": { "type": "composer", "url": "https://packages.drupal.org/8" } }

Now, let's download a module from the new repo.

$ composer require drupal/token -vvv

As a part of the verbose output, it prints the following:

... Loading composer repositories with package information Downloading https://packages.drupal.org/8/packages.json Writing /home/lakshmi/.composer/cache/repo/https---packages.drupal.org-8/packages.json into cache ...

which confirms that we downloaded from the official package repo.

Custom package sources

Sometimes, you might want to specify your own package source for a custom module you own, say, in Github. This follows the usual conventions for adding VCS package sources in Composer, but I'll show how to do it in Drupal context.

First, add your github URL as a VCS repository using the composer config command.

$ composer config repositories.restful vcs "https://github.com/RESTful-Drupal/restful"

Your composer.json will look like this after the above command is run successfully:

"repositories": { "drupal": { "type": "composer", "url": "https://packages.drupal.org/8" }, "restful": { "type": "vcs", "url": "https://github.com/RESTful-Drupal/restful" } }

If you want to download a package from your custom source, that source must take precedence over the official package repository, as order really matters to composer. I haven't found a way to do this via the CLI, but you can edit the composer.json file and swap the two package sources to look like this:

"repositories": { "restful": { "type": "vcs", "url": "https://github.com/RESTful-Drupal/restful" }, "drupal": { "type": "composer", "url": "https://packages.drupal.org/8" } }

Now, let's pick up restful 8.x-3.x. We can specify a GitHub branch by prefixing it with "dev-".

$ composer require "drupal/restful:dev-8.x-3.x-not-ready"

Once restful is downloaded, composer.json is updated accordingly.

"require": { "composer/installers": "^1.0.20", "drupal-composer/drupal-scaffold": "^2.0.1", "cweagans/composer-patches": "~1.0", "drupal/core": "~8.0", "drush/drush": "~8.0", "drupal/console": "~1.0", "drupal/devel": "8.1.x-dev", "drupal/flag": "8.4.x-dev", "drupal/mailchimp": "8.1.2", "drupal/token": "1.x-dev", "drupal/restful": "dev-8.x-3.x-not-ready" }, Updating drupal core

Drupal core can be updated by running:

$ composer update drupal/core
> DrupalProject\composer\ScriptHandler::checkComposerVersion
Loading composer repositories with package information
Updating dependencies (including require-dev)
  - Removing drupal/core (8.1.7)
  - Installing drupal/core (8.1.8)
    Downloading: 100%
Writing lock file
Generating autoload files
Downloading: 100%
...
> DrupalProject\composer\ScriptHandler::createRequiredFiles

As the output reads, we updated core from 8.1.7 to 8.1.8. We will revisit the "Writing lock file" part in a moment. After this step succeeds, we have to run drush updatedb to apply any database updates. The same applies when updating modules.

$ cd d8dev/web
$ ../vendor/bin/drush updatedb

Updating modules

One of the things I like most about the composer workflow is that I can update selected modules, or even a single module. This is not possible using drush. The command for updating a module, say devel, is:

$ composer update drupal/devel
> DrupalProject\composer\ScriptHandler::checkComposerVersion
Loading composer repositories with package information
Updating dependencies (including require-dev)
Nothing to install or update
Generating autoload files
> DrupalProject\composer\ScriptHandler::createRequiredFiles

Hmmm. Looks like devel is already the latest bleeding-edge version. Now, let's quickly review which composer-related artifacts we should check in to version control.

Should you check in the vendor/ directory?

Composer recommends that you shouldn't, but some environments don't support composer (e.g. Acquia Cloud), in which case you have to check in your vendor folder too.

Should you check in the composer.json file?

By now, you should know the answer to this question :)

Should you check in the composer.lock file?

Damn yes. composer.lock contains the exact versions of the installed dependencies. For example, if your project depends on Acme 1.* and you install 1.1.2, and your co-worker runs composer install a month later, it might install Acme 1.1.10, introducing version discrepancies into your project. To prevent this, composer install checks whether a lock file exists and installs only the specific versions recorded, or "locked" down, in the lock file. The only time the lock file changes is when you run composer update to move your project dependencies to their latest versions. When that happens, composer records the newly installed versions in the lock file.
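To see what "locked" means in practice, you can inspect composer.lock directly. This sketch fakes a stripped-down lock file (a real one is generated by composer and far larger) and pulls out the pinned version:

```shell
# A minimal stand-in for a real composer.lock (illustration only).
cat > /tmp/composer.lock <<'EOF'
{
    "packages": [
        {
            "name": "drupal/core",
            "version": "8.1.8"
        }
    ]
}
EOF

# Which exact drupal/core version will every `composer install` reproduce?
grep -A 2 '"name": "drupal/core"' /tmp/composer.lock \
  | sed -n 's/.*"version": "\([^"]*\)".*/\1/p'
```

Every `composer install` run against this lock file installs exactly that version, regardless of what newer releases exist.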

Drupal, Drupal 8, Drupal Planet

Metal Toad: Avoiding Drupal 7 #AJAX Pitfalls

Planet Drupal -

Avoiding Drupal 7 #AJAX Pitfalls
August 3rd, 2016, by Marcus Bernal

Rather than provide a basic how-to tutorial on Drupal's form API #AJAX functionality, I decided to address a few pitfalls that often frustrate developers, both junior and senior alike. To me, it seems that most of the problems arise from the approach rather than the direct implementation of the individual elements.

  • Try to find a reasonable argument for not using #ajax.
  • Do not do any processing in the callback function, it's too late, I'm sorry.
  • Force button names that are semantic and scalable.
  • Template buttons and remove unnecessary validation from #ajax actions.
  • Use '#theme_wrappers' => array('container') rather than '#prefix' and '#suffix'.
Is AJAX Even Needed?

Since #ajax hinders accessibility and adds that much more complexity, reconsider other approaches before continuing down this path. Drupal will automatically handle the "no js" accessibility issue by providing full page refreshes with unsubmitted forms, but issues will still exist for those using screen readers. Because the time to request and receive the new content is indeterminate, screen readers will fail to provide users with audible descriptions of the new content. Simply by choosing #ajax, you automatically exclude those needing visual assistance. So, if the goal is simply hiding or showing another field or set of fields, #states is a better fit. If the task is selecting something out of a large set, a multi-page approach or even an entity reference with an autocomplete field could suffice.
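For the hide/show case, a #states sketch needs no server round-trip at all; the field names here are hypothetical:

```php
// Show the "details" textfield only when the (hypothetical) checkbox is ticked.
// #states is evaluated client-side by core's JavaScript; no callback is needed.
$form['details'] = array(
  '#type' => 'textfield',
  '#title' => t('Details'),
  '#states' => array(
    'visible' => array(
      ':input[name="has_details"]' => array('checked' => TRUE),
    ),
  ),
);
```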

This example is a simplified version of a new field type used to select data from a Solr index of another site's products. The number of products was in the 200,000 range, and the details needed to decide on a selection were more than just the product names, so checkboxes/radios/a select box would be too unwieldy and an autocomplete could not provide enough information. Also, the desired UX was a modal rather than multiple pages.

Callback is a Lie

A misconception that many developers, including my past self, have is that the AJAX callback function is the place to perform the bulk of the logic. I have come to treat this function as one that just returns the portion of the form that I want. Any logic that changes the structure or data of a form should be handled in the form building function, because there it will be persistent: Drupal will store those changes but ignore any made within the AJAX callback. So the role of the callback function is simply a getter for a portion of the $form array. At first it may seem easier to hardcode the logic to return the sub-array, but I recommend a dynamic solution that relies on the trigger's nested position relative to the AJAX container.

function product_details_selector_field_widget_form(&$form, &$form_state, $field, $instance, $langcode, $items, $delta, $element) {
  ...
  // Add a property to nested buttons to declare the relative depth
  // of the trigger to the AJAX targeted container.
  $form['container']['modal']['next_page']['#nested_depth'] = 1;
  ...
}

Then, for the callback, some "blind" logic can easily find the portion of the form to render and return.

/**
 * AJAX callback to replace the container of the product_details_selector.
 */
function product_details_selector_ajax_return($form, $form_state) {
  // Trim the array of array parents for the trigger down to the container.
  $array_parents = $form_state['triggering_element']['#array_parents'];
  $pop_count = 1; // The trigger is always included, so always have to pop.
  if (isset($form_state['triggering_element']['#nested_depth'])) {
    $pop_count += $form_state['triggering_element']['#nested_depth'];
  }
  for ($i = 0; $i < $pop_count; $i++) {
    if (empty($array_parents)) {
      break; // Halt the loop whenever there are no more items to pop.
    }
    array_pop($array_parents);
  }
  // Return the nested array.
  return drupal_array_get_nested_value($form, $array_parents); // This function is so awesome.
}

With this approach, any future modifications to the $form array outside of the container are inconsequential to this widget. And if this widget's array is modified outside of the module, the modifier will just have to double check the #nested_depth values rather than completely overriding the callback function.

Name the Names

For clarity, from here on name will refer to what will be used for the HTML attributes id and name for containers (divs) and buttons, respectively.

Like with everything in programming, naming is the initial task that can make development, current and future, a simple walk through the business logic or a spaghetti mess of "oh yeahs". This is especially true for #ajax which requires the use of HTML ID attributes to place the new content as well as handling user actions (triggers). For most developers, this step is brushed over because the idea of their work being used in an unconventional or altered way is completely out of their purview. But a solid approach will reduce the frustration of future developers including yourself for this #ajax widget right now.

In this example, and in most cases, these triggers will be buttons, but Drupal 7 also allows other triggering elements, such as select boxes or radio buttons. This leaves a weird situation where those other triggers have semantic names, but buttons are all simply named 'op'. For a simple form this is no big deal, but for something complex, determining which action to take relies on comparing button values. That gets much harder when you have multiple fields of the same type, bring in translation, and/or the client decides to change the wording later in the project. So my suggestion is to override the button names and base the logic on them.

// drupal_html_class() converts _ to - as well as removing dangerous characters.
$trigger_prefix = drupal_html_class($field['field_name'] . '-' . $langcode . '-' . $delta);
// Short trigger names.
$show_trigger = $trigger_prefix . '-modal-open';
$next_trigger = $trigger_prefix . '-modal-next';
$prev_trigger = $trigger_prefix . '-modal-prev';
$search_trigger = $trigger_prefix . '-modal-search';
$add_trigger = $trigger_prefix . '-add';
$remove_trigger = $trigger_prefix . '-remove';
$cancel_trigger = $trigger_prefix . '-cancel';
// Div wrapper.
$ajax_container = $trigger_prefix . '-ajax-container';

The prefix in the example is built for a field form widget. It is unique to the field's name, language, and delta so that multiple instances can exist in the same form. But if your widget is not a field, it is still best to start with something that is dynamically unique. Then semantics are used to fill out the rest of the trigger names as well as the container's ID.

Button Structure

Ideally, every button within the #ajax widget should simply cause a rebuild of the same container, regardless of the changes triggered within the nested array. Since the callback is reduced to a simple getter for the container's render array, the majority of trigger properties can be templated. Now, all buttons that are built off of this template, barring intentional overrides, will prevent validation of elements outside of the widget, prevent submission, and have the same #ajax command to run.

$ajax_button_template = array(
  '#type' => 'button', // Not 'submit'.
  '#value' => t('Button Template'), // To be replaced.
  '#name' => 'button-name', // To be replaced.
  '#ajax' => array(
    'callback' => 'product_details_selector_ajax_return',
    'wrapper' => $ajax_container,
    'method' => 'replace',
    'effect' => 'fade',
  ),
  '#validate' => array(),
  '#submit' => array(),
  '#limit_validation_errors' => array(array()), // Prevent standard Drupal validation.
  '#access' => TRUE, // Display will be conditional based on the button and the state.
);

// Limit the validation errors down to the specific item's AJAX container.
// Once again, the field could be nested in multiple entity forms
// and the errors array must be exact. If the widget is not a field,
// then use the '#parents' key if available.
if (!empty($element['#field_parents'])) {
  foreach ($element['#field_parents'] as $field_parent) {
    $ajax_button_template['#limit_validation_errors'][0][] = $field_parent;
  }
}
$ajax_button_template['#limit_validation_errors'][0][] = $field['field_name'];
$ajax_button_template['#limit_validation_errors'][0][] = $langcode;
$ajax_button_template['#limit_validation_errors'][0][] = $delta;
$ajax_button_template['#limit_validation_errors'][0][] = 'container';

Limiting the validation errors will prevent other, unrelated fields from affecting the modal's functionality. Though, if certain fields are a requirement they can be specified here. This example will validate any defaults, such as required fields, that exist within the container.

$form['container']['modal']['page_list_next'] = array(
  '#value' => t('Next'),
  '#name' => $next_trigger,
  '#access' => FALSE,
  '#page' => 1, // For page navigation of Solr results.
) + $ajax_button_template; // Keys not defined in the first array will be set from the values in the second.

// Fade effect within the modal is disorienting.
$element['container']['modal']['search_button']['#ajax']['effect'] = 'none';
$element['container']['modal']['page_list_prev']['#ajax']['effect'] = 'none';
$element['container']['modal']['page_list_next']['#ajax']['effect'] = 'none';

The #page key is arbitrary and simply used to keep track of the page state without having to clutter up the $form_state, especially since the entire array of the triggering element is already stored in that variable. Other buttons within the widget do not need to track the page other than previous and next. Clicking the search button should result in the first page of a new search while cancel and selection buttons will close the modal anyway.

Smoking Gun

Determining the widget's state can now start easily with checks on the name and data of the trigger.

$trigger = FALSE;
if (!empty($form_state['triggering_element']['#name'])) {
  $trigger = $form_state['triggering_element'];
}
$trigger_name = $trigger ? $trigger['#name'] : FALSE;

$open_modal = FALSE;
if ($trigger_name && strpos($trigger_name, $trigger_prefix . '-modal') === 0) {
  $open_modal = TRUE;
}

...
// Hide or show modal.
$form['container']['modal']['#access'] = $open_modal;

...
// Obtain page number regardless of next or previous.
$search_page = 1;
if (isset($trigger['#page'])) {
  $search_page = $trigger['#page'];
}

...
// Show the next page button when there are more results than shown so far.
$next_offset = $search_page * $per_page;
if ($search_results['total'] > $next_offset) {
  $form['container']['modal']['next_page']['#access'] = TRUE; // Or '#disabled' if the action is too jerky.
}

Theming

Now, to where most developers start their problem solving: how to build the AJAX-able portion. Drupal requires an element with an ID attribute to target where the new HTML is inserted. Ideally, the target element and the AJAX content should be one and the same. There are a couple of ways of doing this; the most common one I see is far too static and therefore difficult to modify or extend.

// Div wrapper for AJAX replacing.
$element['container'] = array(
  '#prefix' => '<div id="' . $ajax_container . '">',
  '#suffix' => '</div>',
);

This does solve the problem for the time being. It renders any child elements properly while wrapping them with the appropriate HTML. But if another module, function, or developer wants to add other information, classes for instance, they would have to recreate the entire #prefix string. What I propose is to use the #theme_wrappers key instead.

// Div wrapper for AJAX replacing.
$element['container'] = array(
  '#theme_wrappers' => array('container'),
  '#attributes' => array(
    'id' => $ajax_container,
  ),
);

if (in_array($trigger_name, $list_triggers)) {
  $element['container']['#attributes']['class'][] = 'product-details-selector-active-modal';
}

// Div inner-wrapper for modal styling.
$element['container']['modal'] = array(
  '#theme_wrappers' => array('container'),
  '#attributes' => array(
    'class' => array('dialog-box'),
  ),
);

$element['container']['product_details'] = array(
  '#theme_wrappers' => array('container'),
  '#attributes' => array(
    'class' => array('product-details'),
  ),
  '#access' => TRUE,
);

I have found in the past that using #theme causes the form elements to be rendered "wrong," losing their names and their relationships with the data. The themes declared within #theme_wrappers render later in the pipeline, so form elements do not lose their identity and the div container can be built dynamically. That is, to add a class, one just needs to append another element to $element['container']['#attributes']['class'].


I do not propose the above as hard-set rules to follow, but as helpful ideas that let you focus on the important logic rather than basic functional logistics. View the form as transforming over time as the user navigates, with the AJAX functionality simply a way to refresh a portion of that form, and the complexity of building your form widget will reduce down to the business logic needed.

Chromatic: In Search of a Better Local Development Server

Planet Drupal -

The problem with development environments

If you're a developer and you're like me, you have probably tried out a lot of different solutions for running development web servers. A list of the tools I've used includes:

That's not even a complete list — I know I've also tried other solutions from the wider LAMP community, still others from the Drupal community, and I've rolled my own virtual-machine based servers too.

All of these tools have their advantages, but I was never wholly satisfied with any of them. Typically, I would encounter problems with stability when multiple sites on one server needed different configurations, or problems customizing the environment enough to make it useful for certain projects. Even the virtual-machine based solutions often suffered from the same kinds of problems — even when I experimented with version-controlling critical config files such as vhost configurations, php.ini and my.cnf files, and building servers with configuration management tools like Chef and Puppet.

Drupal VM

Eventually, I found Drupal VM, a very well-thought-out virtual machine-based development tool. It’s based on Vagrant and another configuration management tool, Ansible. This was immediately interesting to me, partly because Ansible is the tool we use internally to configure project servers, but also because the whole point of configuration management is to reliably produce identical configuration whenever the software runs. (Ansible also relies on YAML for configuration, so it fits right in with Drupal 8).

My VM wishlist

Since I've worked with various VM-based solutions before, I had some fairly specific requirements, some to do with how I work, some to do with how the Chromatic team works, and some to do with the kinds of clients I'm currently working with. So I wanted to see if I could configure Drupal VM to work within these parameters:

1. The VM must be independently version-controllable

Chromatic is a distributed team, and I don't think any two of us use identical toolchains. Because of that, we don't currently want to include any development environment code in our actual project repositories. But we do need to be able to control the VM configuration in git. By this I mean that we need to keep every setting on the virtual server outside of the server in version-controllable text files.

Version-controlling a development server in this way also implies that there will be little or no need to perform administrative tasks such as creating or editing virtual host files or php.ini files (in fact, configuration of the VM in git means that we must not edit config files in the VM since they would be overridden if we recreate or reprovision it).

Furthermore, it means that there's relatively little need to actually log into the VM, and that most of our work can be performed using our day-to-day tools (i.e. what we've configured on our own workstations, and not whatever tools exist on the VM).

2. The VM must be manageable as a git submodule

On a related note, I wanted to be able to add the VM to the development server repository and never touch its files—I'm interested in maintaining the configuration of the VM, but not so much the VM itself.

It may help to explain this in Drupal-ish terms; when I include a contrib module in a Drupal project, I expect to be able to interact with that module without needing to modify it. This allows the module to be updated independently of the main project. I wanted to be able to work with the VM in the same way.

3. The VM must be able to be recreated from scratch at any time

This is a big one for me. If I somehow mess up a dev server, I want to be able to check out the latest version of the server in git, boot it and go back to work immediately. Specifically, I want to be able to restore (all) the database(s) on the box more or less automatically when the box is recreated.

Similarly, I usually work at home on a desktop workstation. But when I need to travel or work outside the house, I need to be able to quickly set up the project(s) I'll be working on on my laptop.

Finally, I want the VM configuration to be easy to share with my colleagues (and sometimes with clients directly).

4. The VM must allow multiple sites per server

Some of the clients we work with have multiple relatively small, relatively similar sites. These sites sometimes require similar or identical changes. For these clients, I much prefer to have a single VM that I can spin up to work on one or several of their sites at once. This makes it easier to switch between projects, and saves a great deal of disk space (the great disadvantage to using virtual machines is the amount of disk space they use, so putting several sites on a single VM can save a lot of space).

And of course if we can have multiple sites per server, then we can also have a single site per server when that's appropriate.

5. The VM must allow interaction via the command line

I've written before about how I do most of my work in a terminal. When I need to interact with the VM, I want to stay in the terminal, and not have to find or launch a specific app to do it.

6. The VM must create drush aliases

The single most common type of terminal command for me to issue to a VM is drush @alias {something}. And when running the command on a separate server (the VM!), the command must be prefixed with an alias, so a VM that can create drush aliases (or help create them) is very, very useful (especially in the case where there are multiple sites on a single VM).

7. The VM must not be too opinionated about the stack

Given the variations in clients' production environments, I need to be able to use any current version of PHP, use Apache or Nginx, and vary the server OS itself.

My VM setup

Happily, it turns out that Drupal VM can not only satisfy all these requirements, but is either capable of all of them out of the box, or makes it very straightforward to incorporate the required functionality. Items 4, 5, 6, and 7, for example, are stock.

But before I get into the setup of items 1, 2, and 3, I should note that this is not the only way to do it.

Drupal VM is a) extensively documented, and b) flexible enough to accommodate several very different workflows and project structures other than the one I'm going to describe here. If my configuration doesn't work with your workflow, or my workflow won't work with your configuration, you can probably still use Drupal VM if you need or want a VM-based development solution.

For Drupal 8 especially, I would simply use Composer to install Drupal, and install Drupal VM as a dependency.

Note also that if you just need a quick Drupal box for more generic testing, you don't need to do any of this, you can just get started immediately.


We know that Drupal VM is based on Ansible and Vagrant, and that both of those tools rely on config files (YAML and ruby respectively). Furthermore, we know that Vagrant can keep folders on the host and guest systems in sync, so we also know that we'll be able to handle item 1 from my wishlist--that is, we can maintain separate repositories for the server and for projects.

This means we can have our development server as a standalone directory, and our project repositories in another. For example, we might set up the following directory structure where example.com contains the project repository, and devserver contains the Drupal VM configuration.

Servers
└── devserver/

Sites
└── example.com/

Configuration files

Thanks to some recent changes, Drupal VM can be configured with an external config.yml file, a local.config.yml file, and an external Vagrantfile.local file, using a delegating Vagrantfile.

The config.yml file is required in this setup, and can be used to override any or all of the default configuration in Drupal VM's own default.config.yml file.

The Vagrantfile.local file is optional, but useful in case you need to alter Drupal VM's default Vagrant configuration.

The delegating Vagrantfile is the key to tying together our main development server configuration and the Drupal VM submodule. It defines the directory where configuration files can be found, and loads the Drupal VM Vagrantfile.

This makes it possible to create the structure we need to satisfy item 2 from my wishlist--that is, we can add Drupal VM as a git submodule to the dev server configuration:

Server/
├── Configuration/
|   ├── config.yml
|   └── Vagrantfile.local
├── Drupal VM/
└── Vagrantfile

Recreating the VM

One motivation for all of this is to be able to recreate the entire development environment quickly. As mentioned above, this might be because the VM has become corrupt in some way, because I want to work on the site on a different computer, or because I want to share the site—server and all—with a colleague.

Mostly, this is simple. To the extent that the entire VM (along with the project running inside it!) is version-controlled, I can just ask my colleague to check out the relevant repositories and (at most!) override the vagrant_synced_folders option in a local.config.yml with their own path to the project directory.

In checking out the server repository (i.e. we are not sharing an actual virtual disk image), my colleague will get the entire VM configuration including:

  • Machine settings,
  • Server OS,
  • Databases,
  • Database users,
  • Apache or Nginx vhosts,
  • PHP version,
  • php.ini settings,
  • Whatever else we've configured, such as Solr, Xdebug, Varnish, etc.

So, with no custom work at all—even the delegating Vagrant file comes from the Drupal VM documentation—we have set up everything we need, with two exceptions:

  1. Entries for /etc/hosts file, and
  2. Databases!

For these two issues, we turn to the Vagrant plugin ecosystem.

/etc/hosts entries

The simplest way of resolving development addresses (such as example.dev) to the IP of the VM is to create entries in the host system's /etc/hosts file that map the VM's IP address to each development hostname.

Managing these entries, if you run many development servers, is tedious.

Fortunately, there's a plugin that manages these entries automatically, Vagrant Hostsupdater. Hostsupdater simply adds the relevant entries when the VM is created, and removes them again (configurably) when the VM is halted or destroyed.
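What Hostsupdater automates is simple bookkeeping around vagrant up and vagrant halt. Here is a minimal sketch of that bookkeeping (this is not the plugin's actual code; the marker comment, IP, and hostname are invented for illustration):

```javascript
// Sketch of the hosts-file bookkeeping Vagrant Hostsupdater automates:
// tag managed lines with a marker comment so that only those entries
// are ever added or removed. The IP/hostname below are examples.
const MARKER = '# managed-by-devserver';

// Append an "ip hostname" entry unless an identical managed one exists.
function addEntry(hostsText, ip, hostname) {
  const line = `${ip} ${hostname} ${MARKER}`;
  if (hostsText.split('\n').includes(line)) return hostsText;
  return hostsText.replace(/\n?$/, '\n') + line + '\n';
}

// Remove managed entries for the given hostname, leaving others alone.
function removeEntries(hostsText, hostname) {
  return hostsText
    .split('\n')
    .filter(l => !(l.endsWith(MARKER) && l.split(/\s+/).includes(hostname)))
    .join('\n');
}
```

Tagging managed lines with a marker is what makes removal safe: entries you added to /etc/hosts by hand are never touched.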


Importing the database into the VM is usually a one-time operation, but since I'm trying to set up an easy process for working with multiple sites on one server, I sometimes need to do this multiple times — especially if I've destroyed the actual VM in order to save disk space etc.

Similarly, exporting the database isn't an everyday action, but again I sometimes need to do this multiple times and it can be useful to have a selection of recent database dumps.

For these reasons, I partially automated the process with the help of a Vagrant plugin. "Vagrant Triggers" is a Vagrant plugin that allows code to be executed "…on the host or guest before and/or after Vagrant commands." I use this plugin to dump all non-system databases on the VM on vagrant halt, delete any dump files over a certain age, and to import any databases that can be found in the dump location on the first vagrant up.

Note that while I use these scripts for convenience, I don't rely on them to safeguard critical data.

With these files and a directory for database dumps to reside in, my basic server wrapper now looks like this:

Server/
├── Vagrantfile
├── config/
├── db_dump.sh
├── db_dumps/
├── db_import.sh
└── drupal-vm/

My workflow

New projects

All of the items on my wishlist were intended to help me achieve a specific workflow when I needed to add a new development server, or move it to a different machine:

  1. Clone the project repo.
  2. Clone the server repo.
  3. Change config.yml:
    • Create/modify one or more vhosts.
    • Create/modify one or more databases.
    • Change VM hostname.
    • Change VM machine name.
    • Change VM IP.
    • Create/modify one or more cron jobs.
  4. Add a database dump (if there is one) to the db_dumps directory.
  5. Run vagrant up.
Sharing projects

If I share a development server with a colleague, they have a similar workflow to get it running:

  1. Clone the project repo.
  2. Clone the server repo.
  3. Customize local.config.yml to override my settings:
    • Change VM hostname (in case of VM conflict).
    • Change VM machine name (in case of VM conflict).
    • Change VM IP (in case of VM conflict).
    • Change vagrant synced folders local path (if different from mine).
  4. Add a database dump to the dumps directory.
  5. Run vagrant up.
Winding down projects

When a project completes and either has no maintenance phase or I won't be involved in the ongoing maintenance, I like to remove the actual virtual disk that the VM is based on. This saves ≥10GB of hard drive space (!):

$ vagrant destroy

But since a) every aspect of the server configuration is contained in config.yml and Vagrantfile.local, and b) since we have a way of automatically importing a database dump, resurrecting the development server is as simple as pulling down a new database and re-provisioning the VM:

$ scp remotehost:/path/to/dump.sql.gz /path/to/Server/db_dumps/dump.sql.gz
$ vagrant up

Try it yourself

Since I wanted to reuse this structure for each new VM I need to spin up, I created a git repository containing the code. Download and test it--the README contains detailed setup instructions for getting the environment ready if you don't already use Vagrant.

Miloš Bovan: Final code polishing of Mailhandler

Planet Drupal -


This blog post summarizes week #11 of the Google Summer of Code 2016 project - Mailhandler. 

Time flies, and it is already the last phase of this year's Google Summer of Code. The project is not over yet, and I would like to update you on the progress I made last week. In the last blog post, I wrote about the problems I faced in week 10 and how we decided to do code refactoring instead of UI/UX work. The plan for last week was to update Mailhandler with the newly introduced changes in Inmail, as well as to work on new user-interface-related issues. Since this was the last week of issue work before writing the project documentation, I used the time to polish the code as much as possible.

As you may know, Inmail got new features on the default analyzer result. Since this change was suggested by Mailhandler, the idea was to remove Mailhandler specific analyzer result and use the default one instead. It allows the core module (and any other Inmail-based module) to use the standardized result across all enabled analyzers. The main benefit of this is to support better collaboration between analyzer plugins.
Even though the Mailhandler updates were not expected to take much time, it turned out to be the opposite. Fortunately, the long patch passed all the tests, and the work landed in the Use DefaultAnalyzerResult instead of Mailhandler specific one issue.
We needed not only to replace the Mailhandler-specific analyzer result, but also to adopt the user context and the context concept in general. Each of the five Mailhandler analyzers was updated to “share” its result. Also, non-standard features of each analyzer are available as contexts. Later on, in the handler processing phase, handler plugins can access those contexts and extract the needed information.

The second part of the available work time was spent on user interface issues, mostly improving Inmail. Mailhandler as a module is a set of Inmail plugins and configuration files, and in discussion with my mentors, we agreed that improving the user interface of Inmail is actually an improvement to Mailhandler too.
IMAP (Internet Message Access Protocol), a standard message protocol, is supported by Inmail. It is the main Inmail deliverer, and UI/UX improvements were really needed there. In order to use it, valid credentials are required. One common DX validation pattern is to validate those credentials via a separate “Test connection” button.

IMAP test connection button

In previous blog posts, I mentioned the power of the Monitoring module. It provides overall monitoring of a Drupal website via a nice UI. Since it is highly extensible, making Inmail support it would be a nice feature. Among the most important things to monitor was the quota of the IMAP plugin. This allows an administrator to see the "health state" of this plugin and to react in time. The relevant issue needs a few corrections, but it is close to being finished too.

Seeing that some of the issues mentioned above are still in “Needs review” or “Needs work” state, I will spend additional time this week to finish them. The plan for the following week is to finish the remaining issues we started and to focus on the module documentation. The module documentation consists of improving the plugin documentation (similarly to api.drupal.org), the Drupal.org project page, adding a GitHub README, installation manuals, code comments, and demo article updates; in short, everything related to describing the features of the module.




Milos Wed, 08/03/2016 - 17:54 Tags Google Summer of Code Drupal Open source Drupal Planet

Drop Guard: 1..2..3 - continuous security! A business guide for companies & individuals

Planet Drupal -

A lot of Drupal community members, who are interested in or already use Drop Guard, were waiting for this ultimate guide on continuous security in Drupal. Using Drop Guard in a daily routine improved update workflows and increased the efficiency of website support for all of our users. But there were still a lot of blind spots and unexplored capabilities, such as using Drop Guard as an "SLA catalyser". So we've put our heads together and figured out how to share this information with you in a professional and condensed way.

Drupal Drupal Planet Drupal Community Security Drupal shops Business

CloudFlare's JSON-powered Documentation Generator

Cloudflare Blog -

Everything that it's possible to do in the CloudFlare Dashboard is also possible through our RESTful API. We use the same API to power the dashboard itself.

In order to keep track of all our endpoints, we use a rich notation called JSON Hyper-Schema. These schemas are used to generate the complete HTML documentation that you can see at https://api.cloudflare.com. Today, we want to share a set of tools that we use in this process.

CC BY 2.0 image by Richard Martin

JSON Schema

JSON Schema is a powerful way to describe your JSON data format. It provides complete structural validation and can be used for things like validation of incoming requests. JSON Hyper-Schema further extends this format with links and gives you a way describe your API.

JSON Schema Example

{
  "type": "object",
  "properties": {
    "name": { "type": "string" },
    "age": { "type": "number" },
    "address": {
      "type": "object",
      "properties": {
        "street_address": { "type": "string" },
        "city": { "type": "string" },
        "state": { "type": "string" },
        "country": { "type": "string" }
      }
    }
  }
}

Matching JSON

{
  "name": "John Doe",
  "age": 45,
  "address": {
    "street_address": "12433 State St NW",
    "city": "Atlanta",
    "state": "Georgia",
    "country": "United States"
  }
}
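To make "structural validation" concrete, here is a toy validator covering only the subset of JSON Schema the example above uses (type and properties); a real project should use a complete validator library rather than this sketch:

```javascript
// Toy validator for the "type" + "properties" subset of JSON Schema
// used in the example above. Not a complete implementation.
const TYPE_CHECKS = {
  object: v => typeof v === 'object' && v !== null && !Array.isArray(v),
  string: v => typeof v === 'string',
  number: v => typeof v === 'number',
};

// Return true if `instance` matches the (sub)schema, false otherwise.
function validate(instance, schema) {
  const check = TYPE_CHECKS[schema.type];
  if (check && !check(instance)) return false;
  if (typeof instance === 'object' && instance !== null) {
    // Recurse into any declared properties that are present.
    for (const [key, sub] of Object.entries(schema.properties || {})) {
      if (key in instance && !validate(instance[key], sub)) return false;
    }
  }
  return true;
}
```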

JSON Schema supports all simple data types. It also defines some special meta properties including title, description, default, enum, id, $ref, $schema, allOf, anyOf, oneOf, and more. The most powerful construct is $ref. It provides similar functionality to hypertext links. You can reference external schemas (external reference) or a fragment inside the current schema (internal reference). This way you can easily compose and combine multiple schemas together without repeating yourself.

JSON Hyper-Schema introduces another property called links where you define your API links, methods, request and response formats, etc. The best way to learn more about JSON Schemas is to visit Understanding JSON Schema. You can also visit the official specification website or wiki. If you want to jump straight into examples, try this.

CC BY 2.0 image by Tony Walmsley

Generating Documentation: Tools

We already have an open source library that can generate complete HTML documentation from JSON Schema files and Handlebars.js templates. It's called JSON Schema Docs Generator (JSDC). However, it has some drawbacks that make it hard to use for other teams:

  • Complicated configuration
  • It's necessary to rebuild everything with every change (slow)
  • Templates cannot have their own dependencies
  • All additional scripting must be in a different place
  • It is hard to further customize it (splitting into sections, pages)

We wanted something more modular and extensible that addresses the above issues, while still getting ready-to-go output just with a few commands. So, we created a toolchain based on JSDC and modern JavaScript libraries. This article is not just a description for how to use these tools, but also an explanation of our design decisions. It is described in a bottom-up manner. You can skip to the bottom if you are not interested in the technical discussion and just want to get started using the tools.


JSON Schema files need to be preprocessed first. The first thing we have to do is resolve their references ($ref). This can be quite a complex task, since every schema can have multiple references, some of which are external (referencing even more schemas). Also, when we make a change, we only want to re-resolve the schemas affected by it. We decided to use Webpack for this task because a webpack loader has some great properties:

  • It's a simple function that transforms input into output
  • It can maintain and track additional file dependencies
  • It can cache the output
  • It can be chained
  • Webpack watches all changes in required modules and their dependencies

Our loader uses the 3rd party JSON Schema Ref Parser library. It does not adhere to the JSON Schema specification related to id properties and their ability to change reference scope since it is ambiguous. However, it does implement the JSON Pointer and JSON Reference specifications. What does this mean? You can still combine relative (or absolute) paths with JSON Pointers and use references like:

"$ref": "./product.json#/definitions/identifier"

but ids are simply ignored and the scope is always relative to the root. That makes reasoning about our schemas easier. That being said, a unique root id is still expected for other purposes.
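The fragment part of such a reference (#/definitions/identifier) is resolved with JSON Pointer semantics. A minimal sketch of that walk from the document root (not the library's actual implementation):

```javascript
// Minimal sketch of resolving an internal "#/a/b" fragment against the
// document root, following JSON Pointer (RFC 6901) semantics.
function resolvePointer(document, ref) {
  const fragment = ref.replace(/^#\/?/, '');
  if (fragment === '') return document;
  return fragment.split('/').reduce((node, token) => {
    // RFC 6901 escaping: "~1" encodes "/", "~0" encodes "~".
    const key = token.replace(/~1/g, '/').replace(/~0/g, '~');
    return Array.isArray(node) ? node[Number(key)] : node[key];
  }, document);
}
```

Numeric tokens index into arrays, everything else is an object key, which is all a resolver needs for the schema layouts shown here.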


Finally, we have resolved schemas. Unfortunately, their structure doesn't really match our final HTML documentation. It can be deeply nested, and we want to present our users with nice examples of API requests and responses. We need to do further transformations. We must remove some original properties and precompute new ones. The goal is to create a data structure that will better fit our UI components. Please check out the project page for more details.

You might be asking why we use another webpack loader and why this isn't part of our web application instead. The main reason is performance. We do not want to bog down browsers by doing these transformations repeatedly since JSON Schemas can be arbitrarily nested and very complex and the output can be precomputed.


With both of these webpack loaders, you can easily use your favorite JavaScript framework to build your own application. However, we want to make doc generation accessible even to people who don't have time to build their own app. So, we created a set of templates that match the output of json-schema-example-loader. These templates use the popular library React. Why React?

  • It can be used and rendered server-side
  • We can now bake additional features into components (e.g., show/hide...)
  • It is easily composable
  • We really really like it :)

doca-bootstrap-theme is a generic theme based on Twitter Bootstrap v3. We also have our private doca-cf-theme used by https://api.cloudflare.com. We encourage you to fork it and create your own awesome themes!

CC BY 2.0 image by Maia Coimbra


So, we have loaders and nice UI components. Now, it's time to put it all together. We have something that can do just that! We call it doca. doca is a command-line tool written in Node.js that scaffolds the whole application for you. It is actually pretty simple. It takes fine-tuned webpack/redux/babel based application, copies it into a destination of your choice, and does a few simple replacements.

Since all hard work is done by webpack loaders and all UI components live in a different theme package, the final app can be pretty minimal. It's not intended to be updated by the doca tool. You should only use doca once. Otherwise, it would just rewrite your application, which is not desirable if you made some custom modifications. For example, you might want to add React Router to create multi-page documentation.

doca contains webpack configs for development and production modes. You can build a completely static version with no JavaScript. It transforms the output of json-schema-example-loader into an immutable data structure (using Immutable.js). This brings some nice performance optimizations. This immutable structure is then passed to doca-bootstrap-theme (the default option). That's it.

This is a good compromise between ease of setup and future customization. Do you have a folder with JSON Schema files and want to quickly get index.html? Install doca and use a few commands. Do you need your own look? Fork and update doca-bootstrap-theme. Do you need to create more pages, sections, or use a different framework? Just modify the app that was scaffolded by doca.

One of the coolest features of webpack is hot module replacement. Once you save a file, you can immediately see the result in your browser. No waiting, refreshing, scrolling or lost state. It's mostly used in combination with React; however, we use it for JSON Schemas, too. Here's a demo:

It gets even better. It is easy to make a mistake in your schemas. No worries! You will be immediately prompted with a descriptive error message. Once it's fixed, you can continue with your work. No need to leave your editor. Refreshing is so yesterday!

Generating Documentation: Usage

The only prerequisite is to have Node.js v4+ on your system. Then, you can install doca with:

npm install doca -g

There are just two simple commands. The first one is doca init:

doca init [-i schema_folder] [-o project_folder] [-t theme_name]

It goes through the current dir (or schema_folder), looks for **/*.json files, and generates /documentation (or /project_folder). This command should be used only once when you need to bootstrap your project.

The second one is doca theme:

doca theme newTheme project

This gives a different theme (newTheme) to the project. It has two steps:

  • It calls npm install newTheme --save inside of project
  • It renames all doca-xxx-theme references to doca-newTheme-theme

This can make destructive changes in your project. Always use version control!

CC BY 2.0 image by Robert Couse-Baker

Getting started

The best way to start is to try our example. It includes two JSON Schemas.

git clone git@github.com:cloudflare/doca.git
cd doca/example
doca init
cd documentation
npm install
npm start
open http://localhost:8000

That's it! This results in a development environment where you can make quick changes in your schemas and see the effects immediately because of mighty hot reloading.

You can build a static production ready app with:

npm run build
open build/index.html

Or you can build it with no JavaScript using:

npm run build:nojs
open build/index.html

Do you need to add more schemas or change their order? Edit the file /schema.js.
Do you want to change the generic page title or make curl examples nicer? Edit the file /config.js.


We're open sourcing a set of libraries that can help you develop and ship rich RESTful API documentation. We are happy to receive any feedback and can't wait to see new themes created by the open source community. Please give us a star on GitHub. Also, if this work interests you, you should come join our team!

GVSO Blog: [GSoC 2016: Social API] Week 10: A Social Post implementer

Planet Drupal -


Week 10 is over, and we are only two weeks away from the Google Summer of Code final evaluation. During these ten weeks, we have been rebuilding the social networking ecosystem in Drupal. Thus, we created the Social API project and divided it into three components: Social Auth, Social Post and Social Widgets.

gvso Wed, 08/03/2016 - 01:42 Tags Drupal Drupal Planet GSoC 2016

Galaxy: GSoC’ 16: Port Search Configuration module; coding week #10

Planet Drupal -

Google Summer of Code 2016 is into its final lap. I have been porting the search configuration module to Drupal 8 as part of this program and I am into the last stage, fixing some of the issues reported and improving the module port I have been doing for the past two months.
Finally, I have set up my Drupal blog. I should have done it much earlier. A quick piece of advice for those who are interested in creating a blog powered by Drupal and running it online: I made use of OpenShift to host my blog. It gives you the freedom to run a maximum of three applications for free. Select your favorite Drupal theme and start blogging.
So, now let's come back to my project status. If you would like to have a glance at my past activities on this port, please refer to these posts.
Last week I was mainly concentrating on fixing some of the issues reported in the module port. It was really a wonderful learning experience: creating new issues, getting the patches reviewed, updating the patches if required, and finally the happiness of getting the patches committed into the Drupal core is a different feeling. Moreover, I could also get suggestions from other developers who are not directly part of my project, which I find to be the real blessing of being part of this wonderful community.

The module is now shaping up well and is moving ahead at the right pace. Last week, I faced some issues with Twig and resolved them. The module is currently available for testing. I could work on some key aspects of the module in the past week. I worked on the namespace issues: some of the functions were not working as intended due to wrong usage of PSR namespaces, and I could fix some of these issues. Basically, PSR namespaces help to reuse certain standard functions of the Drupal API framework. They are accessed using the 'use' keyword. We can name the class locations using the namespace property.
For instance, if I want to use the Html escape function for converting special characters to HTML format:

use Drupal\Component\Utility\Html;

Now:

$role_options = array_map('Html::escape', user_role_names());

Hope you got the idea. Here I could have written the entire route/path of the escape function, but through the usage of the namespace, I just need to define the route at the beginning, and later on it can be used for further implementations any number of times.
user_role_names() retrieves the names of the roles involved. This is an illustration of the usage of namespaces. This is really an area to explore more. Please do read more on this for better implementation of the Drupal concepts.

In the coming days, I would like to test the various units of the ported module, fix any issues, and bring up a secure, user-friendly search configuration module for Drupal.
Hope all the students are enjoying the process and exploring the Drupal concepts. Stay tuned for future updates on this porting process.

Tags: drupal-planet

Cocomore: „Memories“ and more: These new features make Snapchat even more attractive for businesses

Planet Drupal -

Until recently, one of the biggest contradictions in social media was: Snapchat and consistency. In early July, Snapchat put an end to this. The new feature "Memories" now allows users to save images. Alongside "Memories", Snapchat has also developed the platform in other areas. We show what opportunities these changes offer for businesses.

Janez Urevc: Release of various Drupal 8 media modules

Planet Drupal -


Today we released new versions of many Drupal 8 media modules. This release is especially important for the Entity browser and Entity embed modules, since we released the last planned alpha version of those modules. If no critical bugs are reported in the next two weeks, we'll release the first beta versions of those modules.

List of all released modules:

slashrsm Tue, 02.08.2016 - 22:36 Tags Drupal Media


Phponwebsites: Multiple URL alias for a node in pathauto - drupal 7

Planet Drupal -

   As we discussed in my previous post, clean URLs are one way to improve SEO. We have a module called Pathauto to clean up URLs in Drupal 7. It allows us to set aliases for content types, files, users & taxonomies. But we can set only one URL alias pattern for a content type in Drupal 7. You can set the URL alias for a content type at admin/config/search/path/patterns. It looks like the below image:

   Suppose you need two paths for a piece of content. For instance, the URL alias for an article needs to be both the node title and article/node-title. Is it possible to set multiple path aliases for a content type in Drupal 7? Yes, it is. We can set multiple URL aliases for a content type programmatically using the Pathauto module in Drupal 7. We need to insert our path alias into the "url_alias" table when inserting & updating a node, and remove the path alias when deleting a node.

Add URL alias programmatically when inserting and updating a node using the Pathauto module in Drupal 7:
    For instance, I've chosen the article content type. We need to insert & update a URL alias in the "url_alias" table using hook_node_insert() & hook_node_update() in Drupal 7.

/**
 * Implements hook_node_insert().
 */
function phponwebsites_node_insert($node) {
  if ($node->type == 'article') {
    // Save the extra node alias.
    _phponwebsites_insert_update_alias($node);
  }
}

/**
 * Implements hook_node_update().
 */
function phponwebsites_node_update($node) {
  if ($node->type == 'article') {
    // Update the extra node alias.
    _phponwebsites_insert_update_alias($node);
  }
}

/**
 * Inserts or updates the extra alias for an article node.
 */
function _phponwebsites_insert_update_alias($node) {
  module_load_include('inc', 'pathauto');
  $title = pathauto_cleanstring($node->title);

  $values['source'] = 'node/' . $node->nid . '/article';
  $values['alias'] = 'article/' . $title;

  $all_values = array($values);

  foreach ($all_values as $all) {
    db_merge('url_alias')
      ->fields(array(
        'source' => $all['source'],
        'alias' => $all['alias'],
        'language' => LANGUAGE_NONE,
      ))
      ->key(array('source' => $all['source']))
      ->execute();
  }
}
 pathauto_cleanstring() obeys the Pathauto module's rules, which are configured at admin/config/search/path/settings. To learn more about pathauto_cleanstring(), please visit http://www.drupalcontrib.org/api/drupal/contributions!pathauto!pathauto.inc/function/pathauto_cleanstring/7

After adding the above code to your custom module (and clearing the cache), create an article. Then check your URLs at admin/config/search/path in Pathauto's alias list. It looks like the image below:

Now you can access the article at both node-title and article/node-title.

Delete URL alias programmatically when deleting a node using the Pathauto module in Drupal 7:
     We've inserted two URL aliases for a node, so we need to delete them from the "url_alias" table when the node is deleted. We can trigger this using hook_node_delete() in Drupal 7. Consider the code below:

/**
 * Implements hook_node_delete().
 */
function phponwebsites_node_delete($node) {
  if ($node->type == 'article') {
    // Delete the extra alias for the article.
    module_load_include('inc', 'pathauto');
    $source[0] = 'node/' . $node->nid . '/article';

    foreach ($source as $s) {
      $path = path_load(array('source' => $s));
      if ($path) {
        path_delete($path['pid']);
      }
    }
  }
}
  path_load() returns the details of a URL alias, such as its source, alias, path id & language. To know more about path_load(), please visit https://api.drupal.org/api/drupal/includes!path.inc/function/path_load/7.x.

After adding the above code to your custom module (and clearing the cache), delete a node and check your URL aliases at admin/config/search/path. They should no longer be displayed there.

I hope you now know how to set multiple URL aliases for a content type.

Related articles:
Remove special characters from URL alias using pathauto module in Drupal 7
Add new menu item into already created menu in Drupal 7
Add class into menu item in Drupal 7
Create menu tab programmatically in Drupal 7
Add custom fields to search api index in Drupal 7
Clear views cache when insert, update and delete a node in Drupal 7
Create a page without header and footer in Drupal 7
Login using both email and username in Drupal 7

Introducing the p0f BPF compiler

Cloudflare Blog -

Two years ago we blogged about our love of BPF (Berkeley Packet Filter) bytecode.

CC BY 2.0 image by jim simonson

Then we published a set of utilities we are using to generate the BPF rules for our production iptables: the bpftools.

Today we are very happy to open source another component of the bpftools: our p0f BPF compiler!

Meet the p0f

p0f is a tool written by superhuman Michal Zalewski. The main purpose of p0f is to passively analyze and categorize arbitrary network traffic. You can feed p0f any packet and in return it will derive knowledge about the operating system that sent the packet.

One of the features that caught our attention was the concise yet explanatory signature format used to describe TCP SYN packets.

The p0f SYN signature is a simple string consisting of colon separated values. This string cleanly describes a SYN packet in a human-readable way. The format is pretty smart, skipping the varying TCP fields and keeping focus only on the essence of the SYN packet, extracting the interesting bits from it.

We use this on a daily basis to categorize the packets that we, at CloudFlare, see when we are the target of a SYN flood. To defeat SYN attacks we want to distinguish the packets that are part of an attack from legitimate traffic. One of the ways we do this uses p0f.

We want to rate limit attack packets, and in effect prioritize processing of other, hopefully legitimate, ones. The p0f SYN signatures give us a language to describe and distinguish different types of SYN packets.

For example, here is a typical p0f SYN signature of a Linux SYN packet (the same one we compile later in this post):

4:64:0:*:mss*10,6:mss,sok,ts,nop,ws:df,id+:0
while this is a Windows 7 one:


Not getting into details yet, but you can clearly see that there are differences between these operating systems. Over time we noticed that the attack packets are often different. Here are two examples of attack SYN packets:

4:255:0:0:*,0::ack+,uptr+:0
4:64:0:*:65535,*:mss,nop,ws,nop,nop,sok:df,id+:0

You can have a look at more signatures in p0f's README and signatures database.

It's not always possible to perfectly distinguish an attack from valid packets, but very often it is. This realization led us to develop an attack mitigation tool based on p0f SYN signatures. With this we can ask iptables to rate limit only the selected attack signatures.

But before we discuss the mitigations, let's explain the signature format.

CC BY-SA 3.0 image by Hyacinth at the English language Wikipedia


As mentioned, the p0f SYN signature is a colon-separated string with the following parts:

  • IP version: the first field carries the IP version. Allowed values are 4 and 6.
  • Initial TTL: assuming that realistically a packet will not jump through more than 35 hops, we can specify an initial TTL ittl (usual values are 255, 128, 64 and 32) and check whether the packet's TTL is in the range (ittl - 35, ittl].
  • IP options length: length of IP options. Although it's not that common to see options in the IP header (and so 0 is the typical value you would see in a signature), the standard defines a variable length field before the IP payload where options can be specified. A * value is allowed too, which means "not specified".
  • MSS: maximum segment size specified in the TCP options. Can be a constant or *.
  • Window Size: window size specified in the TCP header. It can be expressed as:
    • a constant c, like 8192
    • a multiple of the MSS, in the c*mss format
    • a multiple of a constant, in the %c format
    • any value, as *
  • Window Scale: window scale specified during the three way handshake. Can be a constant or *.
  • TCP options layout: list of TCP options in the order they are seen in a TCP packet.
  • Quirks: comma separated list of unusual (e.g. ACK number set in a non ACK packet) or incorrect (e.g. malformed TCP options) characteristics of a packet.
  • Payload class: TCP payload size. Can be 0 (no data), + (1 or more bytes of data) or *.
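The field layout above can be sketched as a small parser. This is our own illustrative Python, not code from p0f or bpftools; the field names and the ttl_matches() helper are assumptions made for the example.

```python
# Hedged sketch: split a p0f SYN signature into named fields.
# Illustrative only -- not part of p0f or bpftools; field names
# and the ttl_matches() helper are our own.

def parse_p0f_syn(sig):
    """Parse a colon-separated p0f SYN signature string."""
    ver, ittl, olen, mss, win, olayout, quirks, pclass = sig.split(":")
    # The window field holds "size,scale" as one colon-separated part.
    win_size, _, win_scale = win.partition(",")
    return {
        "ip_version": int(ver),
        "initial_ttl": int(ittl),
        "ip_opt_len": olen,            # usually '0'; '*' means "not specified"
        "mss": mss,                    # constant or '*'
        "win_size": win_size,          # e.g. '65535', 'mss*10', '%8192', '*'
        "win_scale": win_scale,        # constant or '*'
        "opt_layout": olayout.split(",") if olayout else [],
        "quirks": quirks.split(",") if quirks else [],
        "payload_class": pclass,       # '0', '+' or '*'
    }

def ttl_matches(packet_ttl, initial_ttl):
    """Accept TTLs in the range (initial_ttl - 35, initial_ttl]."""
    return initial_ttl - 35 < packet_ttl <= initial_ttl

# Example: a Linux-style signature used later in this post.
parsed = parse_p0f_syn("4:64:0:*:mss*10,6:mss,sok,ts,nop,ws:df,id+:0")
```

Real p0f signatures allow a few more wildcards (for example * as the initial TTL), which this sketch deliberately ignores.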
TCP Options format

The following common TCP options are recognised:

  • nop: no-operation
  • mss: maximum segment size
  • ws: window scaling
  • sok: selective ACK permitted
  • sack: selective ACK
  • ts: timestamp
  • eol+x: end of options followed by x bytes of padding

p0f describes a number of quirks:

  • df: don't fragment bit is set in the IP header
  • id+: df bit is set and IP identification field is non zero
  • id-: df bit is not set and IP identification is zero
  • ecn: explicit congestion flag is set
  • 0+: reserved ("must be zero") field in IP header is not actually zero
  • flow: flow label in IPv6 header is non-zero
  • seq-: sequence number is zero
  • ack+: ACK field is non-zero but ACK flag is not set
  • ack-: ACK field is zero but ACK flag is set
  • uptr+: URG field is non-zero but URG flag not set
  • urgf+: URG flag is set
  • pushf+: PUSH flag is set
  • ts1-: timestamp 1 is zero
  • ts2+: timestamp 2 is non-zero in a SYN packet
  • opt+: non-zero data in options segment
  • exws: excessive window scaling factor (window scale greater than 14)
  • linux: match a packet sent from the Linux network stack (IP.id field equal to TCP.ts1 xor TCP.seq_num). Note that this quirk is not part of the original p0f signature format; we decided to add it since we found it useful.
  • bad: malformed TCP options
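To make the linux quirk concrete, here is a hedged sketch of the relation it describes. This is our own illustration, not p0f/bpftools code; we assume the 16-bit IP.id field is compared against the low 16 bits of ts1 xor the sequence number.

```python
# Hedged sketch of the "linux" quirk described above (our own
# illustration, not p0f/bpftools code). Assumption: the 16-bit
# IP.id field equals the low 16 bits of TCP.ts1 xor TCP.seq_num.

def looks_like_linux(ip_id, tcp_ts1, tcp_seq):
    """Check the Linux network stack fingerprint relation."""
    return ip_id == (tcp_ts1 ^ tcp_seq) & 0xFFFF

# Made-up values that satisfy the relation (the high bits cancel out):
matched = looks_like_linux(0x1234, 0xDEAD1200, 0xDEAD0034)
```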
Mitigating attacks

Given a p0f SYN signature, we want to pass it to iptables for mitigation. It's not obvious how to do so, but fortunately we are experienced in BPF bytecode since we are already using it to block DNS DDoS attacks.

We decided to extend our BPF infrastructure to support p0f as well, by building a tool to compile a p0f SYN signature into a BPF bytecode blob, which got incorporated into the bpftools project.

This allows us to use a simple and human readable syntax for the mitigations - the p0f signature - and compile it to a very efficient BPF form that can be used by iptables.

With a p0f signature running as BPF in iptables we're able to distinguish attack packets at very high speed and react accordingly. We can either hard -j DROP them or rate limit them if we wish.

How to compile p0f to BPF

First you need to clone the cloudflare/bpftools GitHub repository:

$ git clone https://github.com/cloudflare/bpftools.git

Then compile it:

$ cd bpftools
$ make

With this you can run bpfgen p0f to generate a BPF filter that matches a p0f signature.

Here's an example where we take the p0f signature of a Linux TCP SYN packet (the one we introduced before), and by using bpftools we generate the BPF bytecode that will match this category of packets:

$ ./bpfgen p0f -- 4:64:0:*:mss*10,6:mss,sok,ts,nop,ws:df,id+:0
56,0 0 0 0,48 0 0 8,37 52 0 64,37 0 51 29,48 0 0 0,84 0 0 15,21 0 48 5,48 0 0 9,21 0 46 6,40 0 0 6,69 44 0 8191,177 0 0 0,72 0 0 14,2 0 0 8,72 0 0 22,36 0 0 10,7 0 0 0,96 0 0 8,29 0 36 0,177 0 0 0,80 0 0 39,21 0 33 6,80 0 0 12,116 0 0 4,21 0 30 10,80 0 0 20,21 0 28 2,80 0 0 24,21 0 26 4,80 0 0 26,21 0 24 8,80 0 0 36,21 0 22 1,80 0 0 37,21 0 20 3,48 0 0 6,69 0 18 64,69 17 0 128,40 0 0 2,2 0 0 1,48 0 0 0,84 0 0 15,36 0 0 4,7 0 0 0,96 0 0 1,28 0 0 0,2 0 0 5,177 0 0 0,80 0 0 12,116 0 0 4,36 0 0 4,7 0 0 0,96 0 0 5,29 0 1 0,6 0 0 65536,6 0 0 0,

If this looks magical, use the -s flag to see the explanation on what's going on:

$ ./bpfgen -s p0f -- 4:64:0:*:mss*10,6:mss,sok,ts,nop,ws:df,id+:0
; ip: ip version
; (ip[8] <= 64): ttl <= 64
; (ip[8] > 29): ttl > 29
; ((ip[0] & 0xf) == 5): IP options len == 0
; (tcp[14:2] == (tcp[22:2] * 10)): win size == mss * 10
; (tcp[39:1] == 6): win scale == 6
; ((tcp[12] >> 4) == 10): TCP data offset
; (tcp[20] == 2): olayout mss
; (tcp[24] == 4): olayout sok
; (tcp[26] == 8): olayout ts
; (tcp[36] == 1): olayout nop
; (tcp[37] == 3): olayout ws
; ((ip[6] & 0x40) != 0): df set
; ((ip[6] & 0x80) == 0): mbz zero
; ((ip[2:2] - ((ip[0] & 0xf) * 4) - ((tcp[12] >> 4) * 4)) == 0): payload len == 0
;
; ipver=4
; ip and (ip[8] <= 64) and (ip[8] > 29) and ((ip[0] & 0xf) == 5) and (tcp[14:2] == (tcp[22:2] * 10)) and (tcp[39:1] == 6) and ((tcp[12] >> 4) == 10) and (tcp[20] == 2) and (tcp[24] == 4) and (tcp[26] == 8) and (tcp[36] == 1) and (tcp[37] == 3) and ((ip[6] & 0x40) != 0) and ((ip[6] & 0x80) == 0) and ((ip[2:2] - ((ip[0] & 0xf) * 4) - ((tcp[12] >> 4) * 4)) == 0)
l000: ld #0x0
l001: ldb [8]
l002: jgt #0x40, l055, l003
l003: jgt #0x1d, l004, l055
l004: ldb [0]
l005: and #0xf
l006: jeq #0x5, l007, l055
l007: ldb [9]
l008: jeq #0x6, l009, l055
l009: ldh [6]
l010: jset #0x1fff, l055, l011
l011: ldxb 4*([0]&0xf)
l012: ldh [x + 14]
l013: st M[8]
l014: ldh [x + 22]
l015: mul #10
l016: tax
l017: ld M[8]
l018: jeq x, l019, l055
l019: ldxb 4*([0]&0xf)
l020: ldb [x + 39]
l021: jeq #0x6, l022, l055
l022: ldb [x + 12]
l023: rsh #4
l024: jeq #0xa, l025, l055
l025: ldb [x + 20]
l026: jeq #0x2, l027, l055
l027: ldb [x + 24]
l028: jeq #0x4, l029, l055
l029: ldb [x + 26]
l030: jeq #0x8, l031, l055
l031: ldb [x + 36]
l032: jeq #0x1, l033, l055
l033: ldb [x + 37]
l034: jeq #0x3, l035, l055
l035: ldb [6]
l036: jset #0x40, l037, l055
l037: jset #0x80, l055, l038
l038: ldh [2]
l039: st M[1]
l040: ldb [0]
l041: and #0xf
l042: mul #4
l043: tax
l044: ld M[1]
l045: sub x
l046: st M[5]
l047: ldxb 4*([0]&0xf)
l048: ldb [x + 12]
l049: rsh #4
l050: mul #4
l051: tax
l052: ld M[5]
l053: jeq x, l054, l055
l054: ret #65536
l055: ret #0

Example run

For example, suppose we want to block SYN packets generated by the hping3 tool.

First, we need to recognize its p0f SYN signature. We know that one off the top of our heads:

4:64:0:0:*,0::ack+:0
(Notice: unless you use the -L 0 option, hping3 will send SYN packets with the ACK number set. Interesting, isn't it?)

Now, we can use the bpftools to get BPF bytecode that will match the naughty packets:

$ ./bpfgen p0f -- 4:64:0:0:*,0::ack+:0
39,0 0 0 0,48 0 0 8,37 35 0 64,37 0 34 29,48 0 0 0,84 0 0 15,21 0 31 5,48 0 0 9,21 0 29 6,40 0 0 6,69 27 0 8191,177 0 0 0,80 0 0 12,116 0 0 4,21 0 23 5,48 0 0 6,69 21 0 128,80 0 0 13,69 19 0 16,64 0 0 8,21 17 0 0,40 0 0 2,2 0 0 3,48 0 0 0,84 0 0 15,36 0 0 4,7 0 0 0,96 0 0 3,28 0 0 0,2 0 0 7,177 0 0 0,80 0 0 12,116 0 0 4,36 0 0 4,7 0 0 0,96 0 0 7,29 0 1 0,6 0 0 65536,6 0 0 0,
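The blob above is the container format that iptables' bpf match consumes: an instruction count followed by one "code jt jf k" quadruple per instruction. Here is a small decoder sketch of just that container format (our own illustration, not part of bpftools; it does not interpret the BPF opcodes themselves):

```python
# Hedged sketch: decode the "count,code jt jf k,..." container
# format shown above. Illustrative only -- the opcodes are left
# as raw numbers.

def decode_bpf_blob(blob):
    """Split an iptables-style BPF bytecode string into instructions."""
    parts = [p for p in blob.split(",") if p.strip()]
    count = int(parts[0])
    insns = []
    for p in parts[1:]:
        code, jt, jf, k = (int(x) for x in p.split())
        insns.append({"code": code, "jt": jt, "jf": jf, "k": k})
    assert len(insns) == count, "count field must match instruction list"
    return insns

# A tiny hand-made blob: three instructions ending in "ret #65536".
insns = decode_bpf_blob("3,0 0 0 0,48 0 0 8,6 0 0 65536,")
```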

This bytecode can then be passed to iptables:

$ sudo iptables -A INPUT -p tcp --dport 80 -m bpf --bytecode "39,0 0 0 0,48 0 0 8,37 35 0 64,37 0 34 29,48 0 0 0,84 0 0 15,21 0 31 5,48 0 0 9,21 0 29 6,40 0 0 6,69 27 0 8191,177 0 0 0,80 0 0 12,116 0 0 4,21 0 23 5,48 0 0 6,69 21 0 128,80 0 0 13,69 19 0 16,64 0 0 8,21 17 0 0,40 0 0 2,2 0 0 3,48 0 0 0,84 0 0 15,36 0 0 4,7 0 0 0,96 0 0 3,28 0 0 0,2 0 0 7,177 0 0 0,80 0 0 12,116 0 0 4,36 0 0 4,7 0 0 0,96 0 0 7,29 0 1 0,6 0 0 65536,6 0 0 0," -j DROP

And here's how it would look in iptables:

$ sudo iptables -L INPUT -v
Chain INPUT (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
6 240 tcp -- * * tcp dpt:80 match bpf 0 0 0 0,48 0 0 8,37 35 0 64,37 0 34 29,48 0 0 0,84 0 0 15,21 0 31 5,48 0 0 9,21 0 29 6,40 0 0 6,69 27 0 8191,177 0 0 0,80 0 0 12,116 0 0 4,21 0 23 5,48 0 0 6,69 21 0 128,80 0 0 13,69 19 0 16,64 0 0 8,21 17 0 0,40 0 0 2,2 0 0 3,48 0 0 0,84 0 0 15,36 0 0 4,7 0 0 0,96 0 0 3,28 0 0 0,2 0 0 7,177 0 0 0,80 0 0 12,116 0 0 4,36 0 0 4,7 0 0 0,96 0 0 7,29 0 1 0,6 0 0 65536,6 0 0 0

Closing words

While defending from DDoS attacks is sometimes fun, most often it's a mundane repetitive job. We are constantly working on improving our automatic DDoS mitigation system, but we do not believe there is a strong reason to keep it all secret. We want to help others fighting attacks. Maybe if we all worked together one day we could solve the DDoS problem for all.

Releasing our code as open source is an important part of CloudFlare. This blog post and the p0f BPF compiler are part of our effort to open source our DDoS mitigations. We hope others affected by SYN floods will find it useful.

Do you enjoy playing with low level networking bits? Are you interested in dealing with some of the largest DDoS attacks ever seen? If so, you should definitely have a look at the open positions in our London, San Francisco, Singapore, Champaign (IL) and Austin (TX) offices!

Zivtech: You Don't Know Git!

Planet Drupal -

I’m going to put it out there. This blog is not for senior developers or git gurus; I know you know git. This post is for the noobs, the career-changers like me. If I could go back in time, after I had graduated from my three month web development bootcamp, I would tell myself, “You don’t know git!”

I can hear myself saying, “But I know the workflow. I know to pull from master before starting a new branch. I know to avoid merge conflicts. I git add -A. I know what’s up.”

No. No. No.

Fix Your Workflow

If there's one command you want to know, it's this: <code>git add -p</code>. This command changed my entire workflow and was tremendously helpful. In bootcamp, you learn the basics of git and move on. You generally learn <code>git add -A</code> or <code>git add .</code>, which stage all the changes you've made to the repository, or all the changes you've made from your current directory. This worked during bootcamp because the changes were small, and I was often just committing to my own repository. Once I switched over to Drupal and started working with Features, I realized that after I made updates, not all of the files showing up in code were things I had changed. How could that be?!

Work with Your Team

I was working on a project with other developers who were also working on the same feature. I had to learn -p so that I could be a responsible member of the team and only commit what I had changed. That's why it's so important to use this command: <code>git add -p</code>. If you're ever unsure about a command in git, just type: <code>git add --help</code>. The git manual will then show you all the options you can use, like this:

-p, --patch
    Interactively choose hunks of patch between the index and the work tree and add them to the index. This gives the user a chance to review the difference before adding modified contents to the index.
Essentially, it allows you to review each file to determine what changed, and if you want to stage it or not.
In the example above, I made changes to my .gitignore file. I deleted the line in red and added the line in green. Then git asks what you want to do with those changes. If you type '?' and press enter, it will explain what your options are.

Not only does it help by preventing you from staging code that isn’t yours, it’s also helpful as a new developer to see what changed. In Drupal, you can think that you’re making a small change in the UI, but then see a ton of altered files. Using -p has helped me figure out how Drupal works, and I’m a lot more confident about what I’m staging now.

Now go out there and <code>git add -p</code> all of your changes and be the granular committer I know you can be!

